Under gcc 7 with -std=c++11, compilation produces several warnings like
the following:
In file included from /home/jagerman/src/pybind11/tests/test_sequences_and_iterators.cpp:13:0:
/home/jagerman/src/pybind11/include/pybind11/operators.h: In function ‘pybind11::detail::op_<(pybind11::detail::op_id)0, (pybind11::detail::op_type)0, pybind11::detail::self_t, pybind11::detail::self_t> pybind11::detail::operator+(const pybind11::detail::self_t&, const pybind11::detail::self_t&)’:
/home/jagerman/src/pybind11/include/pybind11/operators.h:78:76: warning: inline declaration of ‘pybind11::detail::op_<(pybind11::detail::op_id)0, (pybind11::detail::op_type)0, pybind11::detail::self_t, pybind11::detail::self_t> pybind11::detail::operator+(const pybind11::detail::self_t&, const pybind11::detail::self_t&)’ follows declaration with attribute noinline [-Wattributes]
inline op_<op_##id, op_l, self_t, self_t> op(const self_t &, const self_t &) { \
^
/home/jagerman/src/pybind11/include/pybind11/operators.h:109:1: note: in expansion of macro ‘PYBIND11_BINARY_OPERATOR’
PYBIND11_BINARY_OPERATOR(add, radd, operator+, l + r)
^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/jagerman/src/pybind11/include/pybind11/cast.h:15:0,
from /home/jagerman/src/pybind11/include/pybind11/attr.h:13,
from /home/jagerman/src/pybind11/include/pybind11/pybind11.h:36,
from /home/jagerman/src/pybind11/tests/pybind11_tests.h:2,
from /home/jagerman/src/pybind11/tests/test_sequences_and_iterators.cpp:11:
/home/jagerman/src/pybind11/include/pybind11/descr.h:116:36: note: previous definition of ‘pybind11::detail::descr pybind11::detail::operator+(pybind11::detail::descr&&, pybind11::detail::descr&&)’ was here
PYBIND11_NOINLINE descr friend operator+(descr &&d1, descr &&d2) {
^~~~~~~~
This appears to happen because gcc considers implicit construction of
`descr` in some places via addition of two `descr`-compatible arguments
in the `descr.h` C++11 fallback code.
There's no particular reason that this operator needs to be a friend
function: this commit changes it to an rvalue-context member function
operator, which avoids the warning.
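Roughly, the change amounts to the following (a minimal sketch, not the
actual `descr.h` code):

    #include <utility>
    struct descr {
        // before: PYBIND11_NOINLINE descr friend operator+(descr &&, descr &&);
        // after: an rvalue-qualified member, which can no longer collide
        // with the self_t operator+ declared inline in operators.h:
        descr operator+(descr && /* d2 */) && {
            // ... append the right-hand side's description to *this ...
            return std::move(*this);
        }
    };

The `&&` after the parameter list restricts the operator to rvalue
`descr` objects, matching how the old friend function was used.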
This exposed a few underlying issues:
1. is_pod_struct was too strict to allow this. I've relaxed it to
require only trivially copyable and standard layout, rather than POD
(which additionally requires a trivial default constructor, which
std::complex violates); see the sketch after this list.
2. format_descriptor<std::complex<T>>::format() returned numpy format
strings instead of PEP3118 format strings, but register_dtype
feeds format codes of its fields to _dtype_from_pep3118. I've changed it
to return PEP3118 format codes. format_descriptor is a public type, so
this may be considered an incompatible change.
3. register_structured_dtype tried to be smart about whether to mark
fields as unaligned (with ^). However, it's examining the C++ alignment,
rather than what numpy (or possibly PEP3118) thinks the alignment should
be. For complex values those are different. I've made it mark all fields
as ^ unconditionally, which should always be safe even if they are
aligned, because we explicitly mark the padding.
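A sketch of the relaxed check from point 1 (illustrative name, not the
exact implementation):

    #include <complex>
    #include <type_traits>

    template <typename T>
    using is_relaxed_pod = std::integral_constant<bool,
        std::is_standard_layout<T>::value && std::is_trivially_copyable<T>::value>;

    // std::complex<double> satisfies the relaxed check, but fails
    // std::is_pod because its constructors are non-trivial:
    static_assert(is_relaxed_pod<std::complex<double>>::value, "");
    static_assert(!std::is_pod<std::complex<double>>::value, "");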
Resolves #800.
Both C++ arrays and std::array are supported, including mixtures like
std::array<int, 2>[4]. In a multi-dimensional array of char, the last
dimension is used to construct a numpy string type.
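For example (hypothetical struct; registration shown with the usual
PYBIND11_NUMPY_DTYPE macro):

    #include <array>
    #include <pybind11/numpy.h>

    struct Record {
        std::array<int, 2> pairs[4];  // the std::array<int, 2>[4] mixture
        char name[10];                // last dimension of the char array
                                      // becomes a numpy string type (S10)
    };
    // inside the module definition:
    // PYBIND11_NUMPY_DTYPE(Record, pairs, name);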
Under MSVC we were ignoring PYBIND11_CPP_STANDARD and simply not
passing any standard (which makes MSVC default to its C++14 mode).
MSVC 2015u3 added the `/std:c++14` and `/std:c++latest` flags; the
latter, under MSVC 2017, enables some C++17 features (such as
`std::optional` and `std::variant`), so it is something we need to
start supporting under MSVC.
This makes the PYBIND11_CPP_STANDARD cmake variable work under MSVC,
defaulting it to /std:c++14 (matching the default -std=c++14 for
non-MSVC).
It also adds a new appveyor test running under MSVC 2017 with
/std:c++latest, which runs (and passes) the
`std::optional`/`std::variant` tests.
Also updated the documentation to clarify the C++ standard flags and to
show MSVC flag examples.
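As a usage-level sketch (hypothetical function) of what the new
/std:c++latest job can exercise:

    #include <optional>
    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>
    namespace py = pybind11;

    void bind_examples(py::module &m) {
        // pybind11's std::optional caster is available once the compiler
        // provides the type:
        m.def("half_if_even", [](int x) -> std::optional<double> {
            if (x % 2 != 0)
                return {};  // becomes None on the Python side
            return x / 2.0;
        });
    }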
The PYBIND11_CPP14 macro started out as a guard for the compile-time
code path in `descr.h`, but has since come to mean other things. This
means that while the `descr.h` check has just checked the
`PYBIND11_CPP14` macro, various other places now check `PYBIND11_CPP14
|| _MSC_VER`. This reverses that by now setting the CPP14 macro when
MSVC is trying to support C++14, but disabling the `descr.h` C++14 code
(which still fails under MSVC 2017).
The CPP17 macro also gets enabled when MSVC 2017 is compiling with
/std:c++latest (the default is /std:c++14), which enables
`std::optional` and `std::variant` support under MSVC.
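The macro logic now looks roughly like this (simplified sketch of the
checks; the real ones live in common.h):

    #if defined(_MSC_VER) && defined(_MSVC_LANG)  // MSVC 2015u3+
    #  if _MSVC_LANG >= 201402L
    #    define PYBIND11_CPP14
    #    if _MSVC_LANG > 201402L && _MSC_VER >= 1910
    #      define PYBIND11_CPP17  // MSVC 2017 with /std:c++latest
    #    endif
    #  endif
    #elif __cplusplus >= 201402L
    #  define PYBIND11_CPP14
    #  if __cplusplus >= 201703L
    #    define PYBIND11_CPP17
    #  endif
    #endif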
GCC 7 generates (when compiling in C++11/14 mode) warnings such as:
mangled name for ‘pybind11::class_<type_, options>&
pybind11::class_<type_, options>::def(const char*, Func&&, const Extra&
...) [with Func = int (test_exc_sp::C::*)(int) noexcept; Extra = {};
type_ = test_exc_sp::C; options = {}]’ will change in C++17 because the
exception specification is part of a function type [-Wnoexcept-type]
There's nothing we can actually do in the code to avoid this, so just
disable the warning.
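Disabling it amounts to something like this (sketch):

    #if defined(__GNUC__) && __GNUC__ >= 7
    #  pragma GCC diagnostic ignored "-Wnoexcept-type"
    #endif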
GCC supports `deprecated(msg)` since v4.5 and VS supports the standard
[[deprecated(msg)]] since 2015 RTM.
The deprecated constructor change from `= default` to `{}` is
a workaround for a VS2015 bug.
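The attribute selection is roughly (illustrative sketch of the macro):

    #if defined(__GNUC__)
    #  define PYBIND11_DEPRECATED(reason) __attribute__((deprecated(reason)))
    #else
    #  define PYBIND11_DEPRECATED(reason) [[deprecated(reason)]]
    #endif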
We currently copy by creating an Eigen::Map into the input numpy
array, then assigning that to the target Eigen type, effectively having
Eigen do the copy. That doesn't work for negative strides, though:
Eigen doesn't allow them.
This commit makes numpy do the copying instead by allocating the Eigen
type, then having numpy copy from the input array into a numpy array
that references the Eigen object's data. This also saves a copy when
type conversion is required: numpy can do the conversion on-the-fly as
part of the copy.
Finally this commit also makes non-reference parameters respect the
convert flag, declining the load when called in a noconvert pass with a
convertible, but non-array input or an array with the wrong dtype.
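The copy now runs in the other direction, roughly like this (simplified
sketch with fixed sizes; the real caster code differs in the details):

    #include <Eigen/Dense>
    #include <pybind11/numpy.h>
    namespace py = pybind11;

    void load_sketch(const py::array &src) {
        constexpr int rows = 3, cols = 4;
        // allocate the target Eigen type first:
        Eigen::MatrixXd value(rows, cols);
        // wrap its buffer as a numpy array referencing (not owning) the
        // Eigen data; a non-null base handle suppresses the copy:
        py::array_t<double> ref(
            {rows, cols},                             // shape
            {sizeof(double), sizeof(double) * rows},  // column-major strides
            value.data(), py::none());
        // copying `src` into `ref` (via numpy's copy machinery) handles
        // negative strides and dtype conversion in a single pass
    }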
`EigenConformable::stride_compatible` returns false if the strides are
negative. In this case, do not use `EigenConformable::stride`, as it
is {0,0}. We cannot pass negative strides on to Eigen, which raises an
assertion if we do.
The `type_caster` specialization for regular, dense Eigen matrices now
does a second `array_t::ensure` to copy data in case of negative strides.
I'm not sure that this is the best way to implement this.
I have added "TODO" tags linking these changes to Eigen bug #747, which,
when fixed, will allow Eigen to accept negative strides.
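For reference, this is the sort of mapping Eigen currently rejects
(sketch; the commented line is what a reversed view would need):

    #include <Eigen/Dense>
    using VecMap = Eigen::Map<Eigen::VectorXd, 0,
                              Eigen::InnerStride<Eigen::Dynamic>>;

    void reversed_view_sketch() {
        double data[3] = {1., 2., 3.};
        (void) data;
        // VecMap v(data + 2, 3, Eigen::InnerStride<Eigen::Dynamic>(-1));
        // ^ an inner stride of -1 trips an Eigen assertion (Eigen bug
        //   #747), which is why the caster copies instead of mapping.
    }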
If a bound std::function is invoked with a bound method, the implicit
bound self is lost because we use `detail::get_function` to unbox the
function. This commit amends the code to use py::function and only
unboxes in the special is-really-a-C-function case. This keeps bound
methods bound rather than unbinding them by forcing extraction of the
C function.
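Sketch of the revised load path, inside the std::function<Return(Args...)>
caster (simplified fragment; `value` is the caster's std::function member):

    if (!py::isinstance<py::function>(src))
        return false;
    py::function func = py::reinterpret_borrow<py::function>(src);
    // only a genuine pybind11-generated C function would get unboxed to
    // the underlying C++ function object here; bound methods and other
    // callables stay as Python objects, so their `self` survives:
    value = [func](Args... args) -> Return {
        py::gil_scoped_acquire acq;
        return func(std::forward<Args>(args)...).template cast<Return>();
    };
    return true;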
Enumerations on Python 2.7 were not always implicitly converted to
integers (depending on the target size). This patch adds a __long__
conversion function (only enabled on 2.7) which fixes this issue.
The attached test case fails without this patch.
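The addition is tiny; inside pybind11's enum_ machinery it looks roughly
like this (sketch, where `Type` is the enum and `Scalar` its underlying
integer type):

    #if PY_MAJOR_VERSION < 3
        // Python 2.7 sometimes coerces via __long__ rather than __int__,
        // depending on the target size:
        def("__long__", [](Type value) { return (Scalar) value; });
    #endif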
This removes the convert-from-arithmetic-scalar constructor of
any_container as it can result in ambiguous calls, as in:
py::array_t<float>({ 1, 2 })
which could be interpreted as either of:
py::array_t<float>(py::array_t<float>(1, 2))
py::array_t<float>(py::detail::any_container({ 1, 2 }))
Removing the convert-from-arithmetic constructor reduces the number of
implicit conversions, avoiding the ambiguity for array and array_t.
This also re-adds the array/array_t constructors taking a scalar
argument for backwards compatibility.
The job is using the released clang and stable-branch libc++, which
wasn't the case when it was added. Leave the g++7/c++17 in
allow_failures for now as it's still a pre-release compiler (and pulled
from debian experimental).
Python 3's `PyInstanceMethod_Type` hides itself via its `tp_descr_get`,
which prevents aliasing methods via `cls.attr("m2") = cls.attr("m1")`:
instead the `tp_descr_get` returns a plain function, when called on a
class, or a `PyMethod`, when called on an instance. Override that
behaviour for pybind11 types with a special bypass for
`PyInstanceMethod_Type`.
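Usage-level illustration of the aliasing this enables (hypothetical
`Widget` class bound into a module `m`):

    struct Widget { int m1() const { return 1; } };

    py::class_<Widget> cls(m, "Widget");
    cls.def("m1", &Widget::m1);
    cls.attr("m2") = cls.attr("m1");  // m2 now behaves like m1: a plain
                                      // function on the class, a bound
                                      // method on instances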
The Unicode support added in 2.1 (PR #624) inadvertently broke accepting
`bytes` as std::string/char* arguments. This restores it with a
separate path that does a plain conversion (i.e. completely bypassing
all the encoding/decoding code), but only for single-byte string types.
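Usage sketch (hypothetical binding):

    m.def("count_bytes", [](const std::string &s) { return s.size(); });
    // From Python, both of these work again:
    //   count_bytes("text")       # str: decoded through the unicode path
    //   count_bytes(b"\xba\xd0")  # bytes: plain copy, no decoding (only
    //                             # for single-byte string types)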
The numpy API constants can index past the end of the API function
table if the loaded numpy version is too old, thus causing a segfault.
The current list of functions requires numpy >= 1.7.0, so this adds a
check and an exception if numpy is too old.
The feature-version API element this check relies on was added in numpy
1.4.0, so this could still segfault if loaded in 1.3.0 or earlier, but
given that 1.4.0 was released at the end of 2009, it seems reasonable
enough not to worry about that case. (1.7.0 was released in early 2013.)
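The guard amounts to something like this (sketch; the helper name is
hypothetical, standing in for however the feature version is read from
numpy's API table):

    unsigned int feature_version = get_numpy_feature_version();  // hypothetical
    if (feature_version < 0x7)  // 0x7 == the numpy 1.7 C-API feature version
        pybind11_fail("pybind11 requires numpy >= 1.7.0");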
This commit adds base class pointers of offset base classes (i.e. due
to multiple inheritance) to `registered_instances` so that if such a
pointer is returned we properly recognize it as an existing instance.
Without this, returning a base class pointer will cast to the existing
instance if the pointer happens to coincide with the instance pointer,
but constructs a new instance (quite possibly with a segfault, if
ownership is applied) for unequal base class pointers due to multiple
inheritance.
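The underlying pointer behaviour, for reference:

    struct A { int a; virtual ~A() = default; };
    struct B { int b; virtual ~B() = default; };
    struct C : A, B { };

    C c;
    B *bp = &c;
    // bp addresses the B subobject, which sits at a non-zero offset:
    //     static_cast<void *>(bp) != static_cast<void *>(&c)
    // so a lookup keyed only on &c would not find the instance via bp.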
When we are returned a base class pointer (either directly or via
shared_from_this()) we detect its runtime type (using `typeid`), then
end up essentially reinterpret_casting the pointer to the derived type.
This is invalid when the base class pointer was a non-first base, and we
end up with an invalid pointer. We could dynamic_cast to the
most-derived type, but if *that* type isn't pybind11-registered, the
resulting pointer given to the base `cast` implementation isn't necessarily valid
to be reinterpret_cast'ed back to the backup type.
This commit removes the "backup" type argument from the many-argument
`cast(...)` and instead makes the derived-or-pointer type decision and
type lookup in type_caster_base, which has to do the dynamic_cast
anyway to correctly get the derived pointer, and also has to do the
type lookup to ensure that we don't pass the wrong (derived) pointer
when the backup type (i.e. the type caster intrinsic type) pointer is
needed.
Since the lookup is needed before calling the base cast(), this also
changes the input type to a detail::type_info rather than doing a
(second) lookup in cast().
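The key tool here is that, for polymorphic types, `dynamic_cast` to
`void *` recovers the most-derived object's address (continuing the
A/B/C example above):

    void *most_derived = dynamic_cast<void *>(bp);
    // most_derived == &c, whereas reinterpret_cast<C *>(bp) yields a
    // garbage pointer because no offset adjustment is applied.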
This breaks up the instance management functions in class_support.h a
little bit so that other pybind11 code can use them. In particular:
- added make_new_instance() which does what pybind11_object_new does,
but also allows instance allocation without `value` allocation. This
lets `cast.h` use the same instance allocation rather than having its
own separate implementation.
- instance registration is now moved to a
`register_instance()`/`deregister_instance()` pair (rather than having
individual code add or remove things from `registered_instances`
directly).
- clear_instance() does everything `pybind11_object_dealloc()` needs
except for the deallocation; this is helpful for factory construction
which needs to be able to replace the internals of an instance without
deallocating it.
- clear_instance() now also calls `dealloc` when `holder_constructed`
is true, even if `value` is false. This can happen in factory
construction when the pointer is moved from one instance to another,
but the holder itself is only copied (i.e. for a shared_ptr holder).
I got some unexpected errors from code using `overload_cast` until I
realized that I'd configured the build with -std=c++11.
This commit adds a fake `overload_cast` class in C++11 mode that
triggers a static_assert failure indicating that C++14 is needed.
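The C++11 placeholder is essentially (sketch, close to the actual code):

    template <typename... Args> struct overload_cast {
        static_assert(detail::deferred_t<std::false_type, Args...>::value,
                      "pybind11::overload_cast<...> requires compiling in C++14 mode");
    };

The dependent condition keeps the static_assert from firing until the
template is actually used.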
We currently fail at runtime when trying to call a method that is
overloaded with both static and non-static methods. This is something
python won't allow: the attribute is either a plain function or an
instance method, and can't be both.
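The now-rejected pattern looks like this (hypothetical class):

    struct Mixed {
        int f(int v) { return v; }                    // instance method
        static int f(int v, int w) { return v + w; }  // static method
    };

    py::class_<Mixed>(m, "Mixed")
        .def("f", py::overload_cast<int>(&Mixed::f))
        .def_static("f", py::overload_cast<int, int>(&Mixed::f));
        // ^ one Python name can't be both an instance method and a
        //   static method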
Adding numpy to the pypy test exposed a segfault caused by the buffer
tests in test_stl_binders.py: the first such test was explicitly skipped
on pypy, but the second (test_vector_buffer_numpy), which also seems to
cause an occasional segfault, was just marked as requiring numpy.
Explicitly skip it on pypy as well (until a workaround or a pypy fix is
found).
Various bash variables that are only used in the travis-ci script and
don't need to propagate (e.g. to cmake) are being pointlessly exported;
this removes these `export`s.
This uses the trusty container rather than docker for the clang 4.0
build. It also caches the local libc++ installation so that it doesn't
need to be compiled every time, which should speed up the job
considerably.
This applies several changes to the non-docker travis-ci builds:
- Make all builds use trusty rather than precise. pybind can't really
build in precise anyway (we install essentially the entire toolchain
backported from trusty on every build), and so this saves needing to
install all the backported packages during the build setup.
- Updated the 3.5 build to 3.6 (via deadsnakes, which didn't backport
3.6 to ubuntu releases earlier than trusty).
- As a result of the switch to trusty, the BAREBONES build now picks up
the (default installed) python 3.5 installation.
- Invoke pip everywhere via $PYTHON -m pip rather than the pip
executable, which saves us having to figure out what the pip
executable is, and ensures that we are using the correct pip.
- Install packages with `pip --user` rather than in a virtualenv.
- Add the local user python package archive to the travis-ci cache
(rather than the pip cache). This saves needing to install packages
during installation (unless there are updates, in which case the
package and the cache are updated).
- Install numpy and scipy on the pypy build. This has to build from
source (and so blas and fortran need to be installed on the build),
but given the above caching, the build will only be slow for the first
build after a new numpy/scipy release. This testing is valuable:
numpy has various behaviour differences under pypy.
- Added set -e/+e around the before_install/install blocks so that a
failure here (e.g. a pip install failure or dependency download
failure) triggers a build failure.
- Update eigen version to latest (3.3.3), mainly to be consistent with
the appveyor build.
- The travis trusty environment has an upgraded cmake, so this
downgrades cmake (to the stock trusty version) on the first couple
jobs so that we're still including some cmake 2.8.12 testing.
Don't try to define these in the issues submodule, because that fails
if testing without issues compiled in (e.g. using
cmake -DPYBIND11_TEST_OVERRIDE=test_methods_and_attributes.cpp).
This further reduces the constructors required in buffer_info/numpy by
removing the need for the constructors that take a single size_t and
just forward it on via an initializer_list to the container-accepting
constructor.
Unfortunately, in `array` one of the constructors runs into an ambiguity
problem with the deprecated `array(handle, bool)` constructor (because
both the bool constructor and the any_container constructor involve an
implicit conversion, so neither has precedence), so a forwarding
constructor is kept there (until the deprecated constructor is
eventually removed).
This adds support for constructing `buffer_info` and `array`s using
arbitrary containers or iterator pairs instead of requiring a vector.
This is primarily needed by PR #782 (which makes strides signed to
properly support negative strides, and will likely also make shape and
itemsize signed to avoid mixed integer issues), but also needs to preserve
backwards compatibility with 2.1 and earlier which accepts the strides
parameter as a vector of size_t's.
Rather than adding nearly duplicate constructors for each stride-taking
constructor, it seems nicer to simply allow any type of container (or
iterator pairs). This works by replacing the existing vector arguments
with a new `detail::any_container` class that handles implicit
conversion of arbitrary containers into a vector of the desired type.
It can also be explicitly instantiated with a pair of iterators (e.g.
by passing {begin, end} instead of the container).
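Simplified sketch of the conversion class (the real one is
pybind11::detail::any_container; details differ):

    #include <initializer_list>
    #include <iterator>
    #include <vector>

    template <typename T> class any_container_sketch {
        std::vector<T> v;
    public:
        any_container_sketch() = default;
        // explicit iterator-pair construction, e.g. {vec.begin(), vec.end()}:
        template <typename It>
        any_container_sketch(It first, It last) : v(first, last) { }
        // implicit conversion from any iterable container:
        template <typename Container,
                  typename = decltype(std::begin(std::declval<const Container &>()))>
        any_container_sketch(const Container &c)
            : v(std::begin(c), std::end(c)) { }
        any_container_sketch(std::initializer_list<T> c) : v(c) { }
        std::vector<T> &operator*() { return v; }
    };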
Upcoming changes to buffer_info make it need some things declared in
common.h; it also feels a bit misplaced in common.h (which is arguably
too large already), so move it out. (Separating this and the subsequent
changes into separate commits to make the changes easier to distinguish
from the move.)
When attempting to get a raw array pointer we return nullptr if given a
nullptr, which triggers an error_already_set(), but without an
exception message set, this results in "Unknown internal error".
Callers that want explicit allowing of a nullptr here already handle it
(by clearing the exception after the call).
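Sketch of the fix (the message text is illustrative):

    if (obj == nullptr) {
        PyErr_SetString(PyExc_ValueError,
                        "cannot get an array data pointer from a null object");
        return nullptr;  // the caller's error_already_set now carries a
                         // real message instead of "Unknown internal error"
    }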
When processing many files that contain top-level items with the same
name (e.g. "operator<<"), the output was non-deterministic and depended
on the order in which the different Clang processes finished. This
commit adds sorting that also accounts for the filename to prevent
random changes from run to run.
Many of the Eigen type casters' name() methods weren't wrapping the
type description in a `type_descr` object, and thus weren't adding the
"{...}" annotation used to identify an argument; this broke the help
output by skipping Eigen arguments.
The test code I had added even had some (unnoticed) broken output (with
the "arg0: " showing up in the return value).
This commit also adds test code to ensure that named Eigen arguments
actually work properly, despite the invalid help output. (The added
tests pass even without the rest of this commit.)
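The fix, roughly (simplified; props::descriptor() builds the textual
type description in the Eigen casters):

    static PYBIND11_DESCR name() { return type_descr(props::descriptor()); }
    // previously: return props::descriptor();
    // ^ without the type_descr wrapper, no "{...}" annotation was emitted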