C++ exceptions are destructed in the context of the code that catches
them. At this point, the Python GIL may not be held, which could lead
to crashes with the previous implementation.
PyErr_Fetch and PyErr_Restore should always occur in pairs, which was
not the case for the previous implementation. To clear the exception,
the new approach calls PyErr_Restore followed by PyErr_Clear instead of
simply decreasing the reference counts of the exception objects.
Fixes #509.
The move policy was already set for rvalues in PR #473, but this only
applied to directly cast user-defined types. The problem is that STL
containers cast values indirectly and the rvalue information is lost.
Therefore the move policy was not set correctly. This commit fixes it.
This also makes an additional adjustment to remove the `copy` policy
exception: rvalues now always use the `move` policy. This is also safe
for copy-only rvalues because the `move` policy has an internal fallback
to copying.
Following commit 90d278, the object code generated by the Python
bindings of nanogui (github.com/wjakob/nanogui) went up by a whopping
12%. It turns out that that project has quite a few enums where we don't
really care about arithmetic operators.
This commit thus partially reverts the effects of #503 by introducing
an additional attribute py::arithmetic() that must be specified if the
arithmetic operators are desired.
* `array_t(const object &)` now throws on error
* `array_t::ensure()` is intended for casters -- the old constructor is
deprecated
* `array` and `array_t` get default constructors (empty array)
* `array` gets a converting constructor
* `py::isinstance<array_t<T>>()` checks the type (but not flags)
There is only one special thing which must remain: `array_t` gets
its own `type_caster` specialization which uses `ensure` instead
of a simple check.
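A rough sketch of how these pieces fit together (the helper function is invented; error handling kept minimal):
```c++
#include <stdexcept>
#include <pybind11/numpy.h>
namespace py = pybind11;

// Invented helper: return the first element of a float64 array.
double first_element(py::object obj) {
    if (!py::isinstance<py::array_t<double>>(obj))   // checks the type, not the flags
        throw std::runtime_error("expected a float64 array");
    py::array_t<double> arr(obj);                    // converting ctor; throws on error
    // (a type caster would use py::array_t<double>::ensure(obj) instead)
    auto buf = arr.request();
    if (buf.ndim != 1 || buf.size == 0)
        throw std::runtime_error("expected a non-empty 1-D array");
    return *static_cast<double *>(buf.ptr);
}
```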
The pytype converting constructors are convenient and safe for user
code, but for library internals the additional type checks and possible
conversions are sometimes not desired. `reinterpret_borrow<T>()` and
`reinterpret_steal<T>()` serve as the low-level unsafe counterparts
of `cast<T>()`.
This deprecates the `object(handle, bool)` constructor.
The `borrowed` parameter was renamed to `is_borrowed` to avoid shadowing
warnings on MSVC.
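For example (a minimal sketch; `PyList_New`/`PyList_GetItem` are plain CPython C-API calls that return new and borrowed references, respectively):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

void low_level_example() {
    // PyList_New returns a *new* reference: take ownership without any
    // type check or conversion.
    py::list lst = py::reinterpret_steal<py::list>(PyList_New(0));
    lst.append(42);

    // PyList_GetItem returns a *borrowed* reference: bump the refcount,
    // again without any type check or conversion.
    py::object item = py::reinterpret_borrow<py::object>(PyList_GetItem(lst.ptr(), 0));
}
```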
* Deprecate the `py::object::str()` member function since `py::str(obj)`
is now equivalent and preferred
* Make `py::repr()` a free function
* Make sure obj.cast<T>() works as expected when T is a Python type
`obj.cast<T>()` should be the same as `T(obj)`, i.e. it should convert
the given object to a different Python type. However, `obj.cast<T>()`
usually calls `type_caster::load()` which only checks the type without
doing any actual conversion. That causes a very unexpected `cast_error`.
This commit makes it so that `obj.cast<T>()` and `T(obj)` are the same
when T is a Python type.
* Simplify pytypes converting constructor implementation
It's not necessary to maintain a full set of converting constructors
and assignment operators in both const& and && flavors. A single converting const&
constructor will work and there is no impact on binary size. On the
other hand, the conversion functions can be significantly simplified.
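A short sketch of the resulting usage (assuming an existing `py::object obj` that is convertible to an integer):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

void convert(py::object obj) {
    py::str  s = py::str(obj);          // __str__: preferred over the deprecated obj.str()
    py::str  r = py::repr(obj);         // __repr__ via the new free function
    py::int_ i = obj.cast<py::int_>();  // now equivalent to py::int_(obj): performs the
                                        // conversion instead of raising cast_error
}
```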
Allows checking the Python types before creating an object instead of
after. For example:
```c++
auto l = list(ptr, true);
if (l.check())
    // ...
```
The above is replaced with:
```c++
if (isinstance<list>(ptr)) {
    auto l = reinterpret_borrow<list>(ptr);
    // ...
}
```
This deprecates `py::object::check()`. `py::isinstance()` covers the
same use case, but it can also check for user-defined types:
```c++
class Pet { ... };
py::class_<Pet>(...);
m.def("is_pet", [](py::object obj) {
    return py::isinstance<Pet>(obj); // works as expected
});
```
This commit includes the following changes:
* Don't provide make_copy_constructor for non-copyable container
make_copy_constructor currently fails for various stl containers (e.g.
std::vector, std::unordered_map, std::deque, etc.) when the container's
value type (e.g. the "T" or the std::pair<K,T> for a map) is
non-copyable. This adds an override that, for types that look like
containers, also requires that the value_type be copyable.
* stl_bind.h: make bind_{vector,map} work for non-copy-constructible types
Most stl_bind modifiers require copying, so if the type isn't copy
constructible, we provide a read-only interface instead.
In practice, this means that if the type is non-copyable, it will be,
for all intents and purposes, read-only from the Python side (whereas
previously such a container simply failed to compile).
It is still possible for the caller to provide an interface manually
(by defining methods on the returned class_ object), but this isn't
something stl_bind can handle because the C++ code to construct values
is going to be highly dependent on the container value_type.
* stl_bind: copy only for arithmetic value types
For non-primitive types, we may well be copying some complex type, when
returning by reference is more appropriate. This commit returns by
internal reference for all but basic arithmetic types.
* Return by reference whenever possible
We copy only when we definitely can't return a reference--e.g.
std::vector<bool>, where v[i] returns something that isn't a T&; for
everything else, we return by reference.
For the map case, we can always return by reference (at least for the
default stl map/unordered_map).
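A minimal sketch of what now compiles (the `Resource` type and names are invented; the resulting binding is, as described above, essentially read-only from Python):
```c++
#include <vector>
#include <pybind11/pybind11.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;

// Invented move-only value type.
struct Resource {
    explicit Resource(int id) : id(id) {}
    Resource(const Resource &) = delete;
    Resource(Resource &&) = default;
    int id;
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Resource>(m, "Resource")
        .def(py::init<int>())
        .def_readonly("id", &Resource::id);
    // Previously this failed to compile; it now produces a binding that is
    // essentially read-only from Python, with elements returned by reference.
    py::bind_vector<std::vector<Resource>>(m, "ResourceVector");
    return m.ptr();
}
```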
If we need to initialize a holder around an unowned instance, and the
holder type is non-copyable (i.e. a unique_ptr), we currently construct
the holder type around the value pointer, but then never actually
destruct the holder: the holder destructor is called only for the
instance that actually has `inst->owned = true` set.
There seems to be no point, however, in creating such a holder around an
unowned instance: we never actually intend to use anything that the
unique_ptr gives us and, in fact, do not want the unique_ptr (because
if it ever actually got destroyed, it would cause destruction of the
wrapped pointer, despite the fact that that wrapped pointer isn't
owned).
This commit changes the logic to only create a unique_ptr holder if we
actually own the instance, and to destruct via the constructed holder
whenever we have a constructed holder--which will now only be the case
for owned-unique-holder or shared-holder types.
Other changes include:
* Added test for non-movable holder constructor/destructor counts
The three alive assertions now pass; before #478 they failed with counts
of 2/2/1 respectively, because of the unique_ptr that we don't want and
don't destroy (because we don't *want* its destructor to run).
* Return cstats reference; fix ConstructStats doc
Small cleanup to the #478 test code, and fix to the ConstructStats
documentation (the static method definition should use `reference` not
`reference_internal`).
* Rename inst->constructed to inst->holder_constructed
This makes it clearer exactly what it's referring to.
There are now more places than just descr.h that make use of these.
The new macro isn't quite the same: the old one only tested for a
couple of features, while the new one checks the __cplusplus version
(but doesn't even try to enable C++14 for MSVC/ICC).
g++ 7 adds <optional>, but including it in C++14 mode isn't allowed
(just as including <experimental/optional> isn't allowed in C++11 mode).
(This wasn't triggered in g++-6 because it doesn't provide <optional>
yet.)
* Add type caster for std::experimental::optional
* Add tests for std::experimental::optional
* Support both <optional> / <experimental/optional>
* Mention std{::experimental,}::optional in the docs
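A small usage sketch (shown with `std::experimental::optional`; `std::optional` works the same way when `<optional>` is available; names are invented):
```c++
#include <experimental/optional>   // or <optional> in C++17 mode
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>          // provides the optional<> type caster
namespace py = pybind11;

// None maps to an empty optional; any other value converts as usual.
std::experimental::optional<int> maybe_half(std::experimental::optional<int> x) {
    if (!x) return {};             // returned to Python as None
    return *x / 2;
}

PYBIND11_PLUGIN(example) {
    py::module m("example");
    m.def("maybe_half", &maybe_half);
    return m.ptr();
}
```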
* Make reference(_internal) the default return value policy for properties
Before this, all `def_property*` functions used `automatic` as their
default return value policy. This commit makes it so that:
* Non-static properties use `reference_internal` by default, thus
matching `def_readonly` and `def_readwrite`.
* Static properties use `reference` by default, thus matching
`def_readonly_static` and `def_readwrite_static`.
In case `cpp_function` is passed to any `def_property*`, its policy will
be used instead of any defaults. User-defined arguments in `extras`
still have top priority and will override both the default policies and
the ones from `cpp_function`.
Resolves #436.
* Almost always use return_value_policy::move for rvalues
For functions which return rvalues or rvalue references, the only viable
return value policies are `copy` and `move`. `reference(_internal)` and
`take_ownership` would take the address of a temporary, which is always
an error.
This commit prevents possible user errors by overriding the bad rvalue
policies with `move`. Besides `move`, only `copy` is allowed, and only
if it's explicitly selected by the user.
This is also a necessary safety feature to support the new default
return value policies for properties: `reference(_internal)`.
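A rough sketch of what the new property defaults mean in practice (the `Person` class is invented for illustration):
```c++
#include <string>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Invented example type.
struct Person {
    std::string name;
    const std::string &get_name() const { return name; }
    void set_name(const std::string &n) { name = n; }
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Person>(m, "Person")
        .def(py::init<>())
        // The getter now defaults to return_value_policy::reference_internal,
        // matching def_readwrite("name", &Person::name); passing an explicit
        // policy (or a cpp_function) still overrides the default.
        .def_property("name", &Person::get_name, &Person::set_name);
    return m.ptr();
}
```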
The current integer caster was unnecessarily strict and rejected
various kinds of NumPy integer types when calling C++ functions
expecting normal integers. This relaxes the current behavior.
Currently pybind11 doesn't check when you define a new object (e.g. a
class, function, or exception) that overwrites an existing one. If the
thing being overwritten is a class, this leads to a segfault (because
pybind still thinks the type is defined, even though Python no longer
has the type). In other cases this is harmless (e.g. replacing a
function with an exception), but even in that case it's most likely a
bug.
This code doesn't prevent you from actively doing something harmful,
like deliberately overwriting a previous definition, but detects
overwriting with a run-time error if it occurs in the standard
class/function/exception/def registration interfaces.
All of the additions are in non-template code; the result is actually a
tiny decrease in .so size compared to master without the new test code
(977304 to 977272 bytes), and about 4K higher with the new tests.
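For illustration, the kind of name collision that is now reported (types and module name invented; the offending line is left commented out so the sketch still builds):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

struct Dog {};
struct Cat {};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Dog>(m, "Pet");
    // Registering a second object under the same name used to silently
    // replace "Pet" in the module (and could later segfault); it is now
    // reported with a run-time error:
    // py::class_<Cat>(m, "Pet");
    return m.ptr();
}
```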
type_caster_generic::cast(): the values of `wrapper->value` and
`wrapper->owned` are incorrect when a return value policy of 'copy' is
requested but there is no copy constructor (similarly for 'move'). In
particular, if the source object is a static instance, the destructor of
the 'object' 'inst' leads to class_::dealloc() which incorrectly
attempts to 'delete' the static instance.
This commit re-arranges the code to be clearer as to what the values of
'value' and 'owned' should be in the various cases. Behaviour is
different to previous code only in two situations:
* policy = copy but no copy-ctor: old code leaves 'value = src, owned =
  true', which leads to trouble. New code leaves 'value = nullptr, owned
  = false', which is correct.
* policy = move but no move- or copy-ctor: old code leaves 'value = src,
  owned = true', which leads to trouble. New code leaves 'value =
  nullptr, owned = false', which is correct.
With this there is no more need for manual user declarations like
`PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>)`. Existing ones
will still compile without error -- they will just be ignored silently.
Resolves #446.
This prevents unwanted conversions to bool or int such as:
```
py::object my_object;
std::cout << my_object << std::endl; // compiles and prints 0 or 1
int n = my_object; // compiles and is nonsense
```
With `explicit operator bool()` the above cases become compiler errors.
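What still compiles after the change, as a minimal sketch:
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

void check(py::object my_object) {
    if (my_object) { /* explicit boolean contexts are unaffected */ }
    bool is_valid = static_cast<bool>(my_object);   // explicit conversion still works
    (void) is_valid;
    // int n = my_object;         // now a compile-time error
    // std::cout << my_object;    // likewise rejected at compile time
}
```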
We have various classes with non-explicit single-argument constructors,
which makes them implicitly convertible from that argument. In a few
cases, this is desirable (e.g.
implicit conversion of std::string to py::str, or conversion of double
to py::float_); in many others, however, it is unintended (e.g. implicit
conversion of size_t to some pre-declared py::array_t<T> type).
This disables most of the unwanted implicit conversions by marking them
`explicit`, and comments the ones that are deliberately left implicit.
This convenience function ensures that a py::object is either a
py::array, or the implementation will try to convert it into one. Layout
requirements (such as c_style or f_style) can also be provided.
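A minimal sketch, using the `array_t<T, flags>::ensure()` spelling from current pybind11 (the helper function is invented):
```c++
#include <stdexcept>
#include <pybind11/numpy.h>
namespace py = pybind11;

// Invented helper: accept anything array-like and force a C-contiguous
// float64 array, converting if necessary.
double first(py::object obj) {
    auto arr = py::array_t<double, py::array::c_style | py::array::forcecast>::ensure(obj);
    if (!arr)
        throw std::runtime_error("could not convert argument to a float64 array");
    return arr.size() ? *arr.data() : 0.0;
}
```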
This patch adds an extra base handle parameter to most ``py::array`` and
``py::array_t<>`` constructors. If specified along with a pointer to
data, the base object will be registered within NumPy, which increases
the base's reference count. This feature is useful for creating shallow
copies of C++ or Python arrays while ensuring that the owner of the
underlying data can't be garbage collected while referenced by NumPy.
The commit also adds a simple test function involving a ``wrap()``
function that creates shallow copies of various N-D arrays.
Python 3.5 can often import pybind11 modules compiled for
Python 3.4 (i.e. all symbols can be resolved), but this leads to crashes
later on due to changes in various Python-internal data structures. This
commit adds an extra sanity check to prevent a successful import when
the Python versions don't match.
This fixes an issue that can arise when forwarding (*args, **kwargs)
captured from a pybind11-bound function call to another Python function.
When the initial function call includes no keyword arguments, the
py::kwargs field is set to nullptr and causes a crash later on.
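A sketch of the forwarding pattern this fixes (the function name is invented):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_PLUGIN(example) {
    py::module m("example");
    // Forward whatever we were called with to another Python callable.
    // Before this fix, calling forward_to(f, 1, 2) -- i.e. with no keyword
    // arguments at all -- left kwargs as a null handle and crashed on **kwargs.
    m.def("forward_to", [](py::function f, py::args args, py::kwargs kwargs) {
        return f(*args, **kwargs);
    });
    return m.ptr();
}
```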
PR #425 removed the bool conversion operator from attribute accessors.
Existing code likely relies on it, since before #425 added the `hasattr`
function it was the only way to check for the existence of an attribute:
if (obj.attr("foo")) { ... }
This commit adds it back in for attr and item accessors, but with a
deprecation warning to use `hasattr(obj, ...)` or `obj.contains(...)`
instead.
`auto var = l[0]` has a strange quirk: `var` is actually an accessor and
not an object, so any later assignment of `var = ...` would modify l[0]
instead of `var`. This is surprising compared to the non-auto assignment
`py::object var = l[0]; var = ...`.
By overloading `operator=` on lvalue/rvalue, the expected behavior is
restored even for `auto` variables.
This also adds the `hasattr` and `getattr` functions which are needed
with the new attribute behavior. The new functions behave exactly like
their Python counterparts.
Similarly `object` gets a `contains` method which calls `__contains__`,
i.e. it's the same as the `in` keyword in Python.
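A short sketch of the new helpers (the attribute and key names are invented):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

void inspect(py::object obj, py::dict d) {
    if (py::hasattr(obj, "name")) {                    // Python: hasattr(obj, "name")
        py::object name = obj.attr("name");
    }
    py::object color = py::getattr(obj, "color", py::str("red"));  // getattr with default
    if (d.contains("key")) {
        // equivalent to Python's:  "key" in d
    }
}
```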
The custom exception handling added in PR #273 is robust, but is overly
complex for declaring the most common simple C++ -> Python exception
mapping that needs only to copy `what()`. This adds a simpler
`py::register_exception<CppExp>(module, "PyExp");` function that greatly
simplifies the common basic case of translation of a simple CppException
into a simple PythonException, while not removing the more advanced
capabilities of defining custom exception handlers.
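A minimal sketch, with `CppExp`/`PyExp` as placeholder names as in the text:
```c++
#include <stdexcept>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Placeholder C++ exception type.
struct CppExp : std::runtime_error {
    using std::runtime_error::runtime_error;
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    // One line: creates example.PyExp and translates any thrown CppExp into
    // it, copying what() into the Python exception message.
    py::register_exception<CppExp>(m, "PyExp");
    m.def("fail", []() { throw CppExp("something went wrong"); });
    return m.ptr();
}
```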
This adds a static local variable (in dead code unless actually needed)
in the overload code that is used for storage if the overload is for
some convert-by-value type (such as numeric values or std::string).
This has limitations (as written up in the advanced doc), but is better
than simply not being able to overload reference or pointer methods.
This clears the Python error at the error_already_set throw site, thus
allowing Python calls to be made in destructors which are triggered by
the exception. This is preferable to the alternative, which would be
guarding every Python API call with an error_scope.
This effectively flips the behavior of error_already_set. Previously,
it was assumed that the error stays in Python, so handling the exception
in C++ would require explicitly calling PyErr_Clear(), but nothing was
needed to propagate the error to Python. With this change, handling the
error in C++ does not require a PyErr_Clear() call, but propagating the
error to Python requires an explicit error_already_set::restore().
The change does not break old code which explicitly calls PyErr_Clear()
for cleanup, which should be the majority of user code. The need for an
explicit restore() call does break old code, but this should be mostly
confined to the library and not user code.
This commit adds support for forcing alias type initialization by
defining constructors with `py::init_alias<arg1, arg2>()` instead of
`py::init<arg1, arg2>()`. Currently py::init<> only results in Alias
initialization if the type is extended in Python, or the given
arguments can't be used to construct the base type, but can be used to
construct the alias. py::init_alias<>, in contrast, always invokes the
constructor of the alias type.
It looks like this was already the intention of
`py::detail::init_alias`, which was forward-declared in
86d825f330, but was apparently never
finished: despite the existence of a .def method accepting it, the
`detail::init_alias` class isn't actually defined anywhere.
This commit completes the feature (or possibly repurposes it), allowing
declaration of classes that will always initialize the trampoline which
is (as I argued in #397) sometimes useful.
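A rough sketch of forcing alias construction (the `Animal`/`PyAnimal` types are invented for illustration):
```c++
#include <string>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Invented base class and trampoline.
struct Animal {
    Animal(int age) : age(age) {}
    virtual ~Animal() = default;
    virtual std::string name(int suffix) const { return "animal" + std::to_string(suffix); }
    int age;
};

struct PyAnimal : Animal {
    using Animal::Animal;
    std::string name(int suffix) const override {
        PYBIND11_OVERLOAD(std::string, Animal, name, suffix);
    }
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Animal, PyAnimal>(m, "Animal")
        // Always construct the PyAnimal alias, even when Animal is
        // instantiated directly from Python without subclassing it:
        .def(py::init_alias<int>());
    return m.ptr();
}
```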
Switch count_t to use constexpr_sum (under non-MSVC), and then make
all_of_t/any_of_t use it instead of doing the sum itself.
For MSVC, count_t is still done using template recursion, but
all_of_t/any_of_t can also make use of it.
Type aliases (trampolines) with members didn't work properly: space
was only allocated for sizeof(type), but if we want to be able to put a
type_alias instance there, we need sizeof(type_alias), and
sizeof(type_alias) > sizeof(type) whenever type_alias has members.
The previous commit to address #392 triggers a compiler warning about
returning a reference to a local variable, which is *not* a false alarm:
the following:
py::cast<int &>(o)
(which happens internally in an overload declaration) really is
returning a reference to a local, because the cast operator of the
type_caster for numeric types returns a reference to its own member.
This commit adds a static_assert to make that a compilation failure
rather than returning a reference into about-to-be-freed memory.
Incidentally, this is also a fix for #219, which is exactly the same
issue: we can't reference numeric primitives that are cast from
wrappers around python numeric types.
This allows a slightly cleaner base type specification of:
py::class_<Type, Base>("Type")
as an alternative to
py::class_<Type>("Type", py::base<Base>())
As with the other template parameters, the order relative to the holder
or trampoline types doesn't matter.
This also includes a compile-time assertion failure if attempting to
specify more than one base class (but is easily extendible to support
multiple inheritance, someday, by updating the class_selector::set_bases
function to set multiple bases).
The current pybind11::class_<Type, Holder, Trampoline> fixed template
ordering requires repeating the Holder argument with its default value
(std::unique_ptr<Type>), which is a little bit annoying: it needs to be
specified not because we want to override the default, but rather
because we need to specify the third argument.
This commit removes this limitation by making the class_ template take
the type name plus a parameter pack of options. It then extracts the
first valid holder type and the first subclass type for holder_type and
trampoline type_alias, respectively. (If unfound, both fall back to
their current defaults, `std::unique_ptr<type>` and `type`,
respectively). If any unmatched template arguments are provided, a
static assertion fails.
What this means is that you can specify or omit the arguments in any
order:
py::class_<A, PyA> c1(m, "A");
py::class_<B, PyB, std::shared_ptr<B>> c2(m, "B");
py::class_<C, std::shared_ptr<C>, PyB> c3(m, "C");
It also allows future class attributes (such as base types in the next
commit) to be passed as class template types rather than needing to use
a py::base<> wrapper.
With this change, arg_t no longer needs to be a template, but it must
remain one for backward compatibility. Thus, a non-template arg_v is
introduced, while a dummy template alias arg_t keeps old code from
breaking. The alias can be removed in the next major release.
The implementation of arg_v also needed to be placed a little earlier in
the headers because it's not a template any more and unpacking_collector
needs more than a forward declaration.
MSVC fails to compile if the constructor is defined out-of-line.
The error states that it cannot deduce the type of the default template
parameter which is used for SFINAE.
The variadic handle::operator() offers the same functionality as well
as mixed positional, keyword, * and ** arguments. The tests are also
superseded by the ones in `test_callbacks`.
A Python function can be called with the syntax:
```python
foo(a1, a2, *args, ka=1, kb=2, **kwargs)
```
This commit adds support for the equivalent syntax in C++:
```c++
foo(a1, a2, *args, "ka"_a=1, "kb"_a=2, **kwargs)
```
In addition, generalized unpacking is implemented, as per PEP 448,
which allows calls with multiple * and ** unpacking:
```python
bar(*args1, 99, *args2, 101, **kwargs1, kz=200, **kwargs2)
```
and
```c++
bar(*args1, 99, *args2, 101, **kwargs1, "kz"_a=200, **kwargs2)
```
Currently pybind11 only supports std::unique_ptr<T> holders by default
(other holders can, of course, be declared using the macro). PR #368
added a `py::nodelete` that is intended to be used as:
py::class_<Type, std::unique_ptr<Type, py::nodelete>> c("Type");
but this doesn't work out of the box. (You could add an explicit
holder type declaration, but this doesn't appear to have been the
intention of the commit).
This commit fixes it by generalizing the unique_ptr type_caster to take
both the type and deleter as template arguments, so that *any*
unique_ptr instances are now automatically handled by pybind. It also
adds a test to test_smart_ptr, testing both that py::nodelete (now)
works, and that the object is indeed not deleted as intended.
Problem
=======
The template trampoline pattern documented in PR #322 has a problem with
virtual method overloads in intermediate classes in the inheritance
chain between the trampoline class and the base class.
For example, consider the following inheritance structure, where `B` is
the actual class, `PyB<B>` is the trampoline class, and `PyA<B>` is an
intermediate class adding A's methods into the trampoline:
PyB<B> -> PyA<B> -> B -> A
Suppose PyA<B> has a method `some_method()` with a PYBIND11_OVERLOAD in
it to overload the virtual `A::some_method()`. If a Python class `C` is
defined that inherits from the pybind11-registered `B` and tries to
provide an overriding `some_method()`, the PYBIND11_OVERLOADs declared
in PyA<B> fail to find this overriding method, and thus never invoke it
(or, if pure virtual and not overridden in PyB<B>, raise an exception).
This happens because the base (internal) `PYBIND11_OVERLOAD_INT` macro
simply calls `get_overload(this, name)`; `get_overload()` then uses the
inferred type of `this` to do a type lookup in `registered_types_cpp`.
This is where it fails: `this` will be a `PyA<B> *`, but `PyA<B>` is
neither the base type (`B`) nor the trampoline type (`PyB<B>`). As a
result, the type lookup fails and the Python override is never found.
The fix
=======
The fix is relatively simple: we can cast `this` passed to
`get_overload()` to a `const B *`, which lets get_overload look up the
correct class. Since trampoline classes should be derived from `B`
classes anyway, this cast should be perfectly safe.
This does require adding the class name as an argument to the
PYBIND11_OVERLOAD_INT macro, but leaves the public macro signatures
unchanged.
- ICPC can't handle the NCVirt trampoline which returns a non-copyable
type, which is likely due to a constexpr/SFINAE issue. This disables
the test on that compiler so that at least the rest can be tested.
For example, keep_alive<0,1>() should work where the return value may sometimes be None. At present a "Could not allocate weak reference!" exception is thrown.
Update documentation to clarify behaviour of keep_alive when nurse is None or does not support weak references.
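For context, a sketch of the affected pattern (the `Parent`/`Child` types are invented):
```c++
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Invented types for illustration.
struct Child {};
struct Parent {
    Child child;
    Child *maybe_child(bool have) { return have ? &child : nullptr; }  // nullptr -> None
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Child>(m, "Child");
    py::class_<Parent>(m, "Parent")
        .def(py::init<>())
        // keep_alive<0, 1>: keep 'self' (argument 1) alive as long as the
        // return value (index 0) is alive. When the return value is None,
        // the nurse can't take a weak reference; this used to throw
        // "Could not allocate weak reference!" and is now handled gracefully.
        .def("maybe_child", &Parent::maybe_child,
             py::return_value_policy::reference, py::keep_alive<0, 1>());
    return m.ptr();
}
```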
This is required since format descriptors for string types that
were using PYBIND11_DESCR were causing problems on C++14 on Linux.
Although this is technically a breaking change, it shouldn't cause
problems since the only use of format strings is passing them to the
buffer_info constructor, which expects std::string.
Note: for non-structured types, the const char * value is still
accessible via ::value for compatibility purposes.
The format strings that are known at compile time are now accessible
via both ::value and ::format(), and format strings for everything
else are accessible via ::format(). This makes it backwards compatible.
This allows exposing a dict-like interface to Python code, allowing
iteration over keys via:
for k in custommapping:
...
while still allowing iteration over pairs, so that you can also
implement 'dict.items()' functionality which returns a pair iterator,
allowing:
for k, v in custommapping.items():
...
example-sequences-and-iterators is updated with a custom class providing
both types of iteration.
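A rough sketch of providing both iteration styles for a map-like class (the `StringMap` binding is invented; `make_iterator`/`make_key_iterator` are the pybind11 helpers):
```c++
#include <map>
#include <string>
#include <pybind11/pybind11.h>
namespace py = pybind11;

// Invented mapping type.
using StringMap = std::map<std::string, std::string>;

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<StringMap>(m, "StringMap")
        .def(py::init<>())
        // 'for k in custommapping:' iterates over the keys ...
        .def("__iter__", [](StringMap &map) {
            return py::make_key_iterator(map.begin(), map.end());
        }, py::keep_alive<0, 1>())   // keep the map alive while it is iterated
        // ... while 'for k, v in custommapping.items():' iterates over pairs.
        .def("items", [](StringMap &map) {
            return py::make_iterator(map.begin(), map.end());
        }, py::keep_alive<0, 1>());
    return m.ptr();
}
```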
PR #329 generates the following warning under MSVC:
...\cast.h(202): warning C4456: declaration of 'it' hides previous local declaration
This renames the second iterator to silence it.
reference_internal requires an `instance` field to track the returned
reference's parent, but that's just a duplication of what
keep_alive<0,1> does, so a keep_alive is now used instead, eliminating
the duplication.
The pointer to the first member of a class instance is the same as the
pointer to instance itself; pybind11 has some workarounds for this to
not track registered instances that have a registered parent with the
same address. This doesn't work everywhere, however: issue #328 is a
failure of this for a mutator operator which resolves its argument to
the parent rather than the child, as is needed in #328.
This commit resolves the issue (and restores tracking of same-address
instances) by changing registered_instances from an unordered_map to an
unordered_multimap that allows duplicate instances for the same pointer
to be recorded, then resolves these differences by checking the type of
each matched instance when looking up an instance. (An
unordered_multimap seems cleaner for this than an unordered_map<list> or
similar because, the vast majority of the time, the instance will be
unique).
Currently pybind11 always translates values returned by Python functions
invoked from C++ code by copying, even when moving is feasible--and,
more importantly, even when moving is required.
The first, and relatively minor, concern is that moving may be
considerably more efficient for some types. The second problem,
however, is more serious: there's currently no way Python code can
return a non-copyable type to C++ code.
I ran into this while trying to add a PYBIND11_OVERLOAD of a virtual
method that returns just such a type: it simply fails to compile because
this:
overload = ...
overload(args).template cast<ret_type>();
involves a copy: overload(args) returns an object instance, and the
invoked object::cast() loads the returned value, then returns a copy of
the loaded value.
We can, however, safely move that returned value *if* the object has the
only reference to it (i.e. if ref_count() == 1) and the object is
itself temporary (i.e. if it's an rvalue).
This commit does that by adding an rvalue-qualified object::cast()
method that allows the returned value to be move-constructed out of the
stored instance when feasible.
This basically comes down to three cases:
- For objects that are movable but not copyable, we always try the move,
with a runtime exception raised if this would involve moving a value
with multiple references.
- When the type is both movable and non-trivially copyable, the move
happens only if the invoked object has a ref_count of 1, otherwise the
object is copied. (Trivially copyable types are excluded from this
case because they are typically just collections of primitive types,
which can be copied just as easily as they can be moved.)
- Non-movable and trivially copy constructible objects are simply
copied.
This also adds examples to example-virtual-functions that show both a
non-copyable object and a movable/copyable object in action: the former
raises an exception if returned while holding a reference, the latter
invokes a move constructor if unreferenced, or a copy constructor if
referenced.
Basically this allows code such as:
class MyClass(Pybind11Class):
def somemethod(self, whatever):
mt = MovableType(whatever)
# ...
return mt
which allows the MovableType instance to be returned to the C++ code
via its move constructor.
Of course if you attempt to violate this by doing something like:
self.value = MovableType(whatever)
return self.value
you get an exception--but before this commit, the pybind11 side of that
code wouldn't compile at all.
Example signatures (old => new):
foo(int) => foo(arg0: int)
bar(Object, int) => bar(self: Object, arg0: int)
The change makes the signatures uniform for named and unnamed arguments
and it helps static analysis tools reconstruct function signatures from
docstrings.
This also tweaks the signature whitespace style to better conform to
PEP 8 for annotations and default arguments:
" : " => ": "
" = " => "="
Functions returning specialized Eigen matrices like Eigen::DiagonalMatrix and
Eigen::SelfAdjointView--which inherit from EigenBase but not
DenseBase--aren't currently supported; such classes are explicitly copyable
into a Matrix (by definition), and so we can support functions that
return them by copying the value into a Matrix and then casting that
resulting dense Matrix into a numpy.ndarray. This commit does exactly
that.
Some Eigen objects, such as those returned by matrix.diagonal() and
matrix.block() have non-standard stride values because they are
basically just maps onto the underlying matrix without copying it (for
example, the primary diagonal of a 3x3 matrix is a vector-like object
with .src equal to the full matrix data, but with stride 4). Returning
such an object from a pybind11 method breaks, however, because pybind11
assumes vectors have stride 1, and that matrices have strides equal to
the number of rows/columns or 1 (depending on whether the matrix is
stored column-major or row-major).
This commit fixes the issue by making pybind11 use Eigen's stride
methods when copying the data.
PR #309 broke scoped enums, which failed to compile because the added:
value == value2
comparison isn't valid for a scoped enum (they aren't implicitly
convertible to the underlying type). This commit fixes it by
explicitly converting the enum value to its underlying type before
doing the comparison.
It also adds a scoped enum example to the constants-and-functions
example that triggers the problem fixed in this commit.
Eigen::Ref is a common way to pass Eigen dense types without needing a
template, e.g. the single definition `void
func(Eigen::Ref<Eigen::MatrixXd> x)` can be called with any double
matrix-like object.
The current pybind11 eigen support fails with internal errors if
attempting to bind a function with an Eigen::Ref<...> argument because
Eigen::Ref<...> satisfies the "is_eigen_dense" requirement, but can't
compile if actually used: Eigen::Ref<...> itself is not default
constructible, and so the argument std::tuple containing an
Eigen::Ref<...> isn't constructible, which results in compilation
failure.
This commit adds support for Eigen::Ref<...> by giving it its own
type_caster implementation which consists of an internal type_caster of
the referenced type, load/cast methods that dispatch to the internal
type_caster, and a unique_ptr to an Eigen::Ref<> instance that gets
set during load().
There is, of course, no performance advantage for pybind11-using code in
taking Eigen::Ref<...>--we are still allocating a matrix of the derived
type when loading it--but this has the advantage of allowing pybind11 to
bind transparently to C++ methods taking Eigen::Refs.
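A small sketch of what this enables (note that, as described above, the argument is loaded by copying into a temporary matrix, so in-place modification is not meaningful here):
```c++
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
namespace py = pybind11;

PYBIND11_PLUGIN(example) {
    py::module m("example");
    // A single, non-template binding that accepts any double matrix-like
    // object (dense matrices, blocks, numpy arrays, ...).
    m.def("matrix_sum", [](Eigen::Ref<Eigen::MatrixXd> x) { return x.sum(); });
    return m.ptr();
}
```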
This changes the error message of a bad-arguments error to suppress the
self argument when the failure occurs in a constructor. It both changes
the "Invoked with: " output to omit the object instance and rewrites the
signature to look like a constructor (changing the first argument to the
object name, and removing the ' -> NoneType' return type).
GCC-6 adds a -Wplacement-new warning that warns for placement-new into a
space that is too small, which is sometimes being triggered here (e.g.
example5 always generates the warning under g++-6). It's a false
warning, however: the line immediately before just checked the size, and
so this line is never going to actually be reached in the cases where
the GCC warning is being triggered.
This localizes the warning disabling to just this one spot, as there are
other placement-new uses in pybind11 where this warning could flag
legitimate future problems.
This commit adds an additional _ template function for compile-time
selection between two description strings. This in turn allows the
elimination of needing two name() methods in type_caster<arithmetic
types> and type_caster<eigen types>, which allows them to start using
PYBIND11_TYPE_CASTER instead, simplifying their code by eliminating all
the code that they are duplicating from the macro.
In eigen.h, type_caster<Type>::load(): For the 'ndim == 1' case, use
the 'InnerStride' type because there is only an inner stride for a
vector. Choose between (n_elts x 1) or (1 x n_elts) according to
whether we're constructing a Vector or a RowVector.
Sergey Lyskov pointed out that the trampoline mechanism used to override
virtual methods from within Python caused unnecessary overheads when
instantiating the original (i.e. non-extended) class.
This commit removes this inefficiency, but some syntax changes were
needed to achieve this. Projects using this feature will need to make a
few changes:
In particular, the example below shows the old syntax to instantiate a
class with a trampoline:
class_<TrampolineClass>("MyClass")
.alias<MyClass>()
....
This is what should be used now:
class_<MyClass, std::unique_ptr<MyClass>, TrampolineClass>("MyClass")
....
Importantly, the trampoline class is now specified as the *third*
argument to the class_ template, and the alias<..>() call is gone. The
second argument with the unique pointer is simply the default holder
type used by pybind11.
args was derived from list, but cpp_function::dispatcher sends a tuple to it->impl (lines #346 and #392 in pybind11.h). As a result, args::size() and args::operator[] don't work at all. On my Mac, args::size() returns -1. Making args a subclass of tuple fixes it.
This somewhat heavyweight solution will avoid size_t/long long/long/int
mismatches on various platforms once and for all. The previous template
overloads could not handle, e.g., size_t on Darwin.
One gotcha: the 'format_descriptor<T>::value()' syntax changed to just
'format_descriptor<T>::value'
This fixes a build error compiling with `nvcc/7.5` + `gcc/4.9.2`
causing a
```
./include/pybind11/pybind11.h(952): here
./include/pybind11/pytypes.h: In member function ‘pybind11::str pybind11::handle::str() const’:
./include/pybind11/pytypes.h:269:8: error: expected primary-expression before ‘class’
return pybind11::str(str, false);
^
./include/pybind11/pytypes.h:269:8: error: expected ‘;’ before ‘class’
./include/pybind11/pytypes.h:269:8: error: expected primary-expression before ‘class’
```
- new pybind11::base<> attribute to indicate a subclass relationship
- unified infrastructure for parsing variadic arguments in class_ and cpp_function
- use 'handle' and 'object' more consistently everywhere
Previously, pybind11 required classes using std::shared_ptr<> to derive
from std::enable_shared_from_this<> (or compilation failures would ensue).
Everything now also works for classes that don't do this, assuming that
some basic rules are followed (e.g. never passing "raw" pointers of
instances managed by shared pointers). The safer
std::enable_shared_from_this<> approach continues to be supported.
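For example, something like this now works (a minimal sketch using present-day syntax; `Widget` is invented and deliberately does not derive from std::enable_shared_from_this):
```c++
#include <memory>
#include <pybind11/pybind11.h>
namespace py = pybind11;

struct Widget {            // note: no std::enable_shared_from_this<Widget>
    int value = 0;
};

PYBIND11_PLUGIN(example) {
    py::module m("example");
    py::class_<Widget, std::shared_ptr<Widget>>(m, "Widget")
        .def(py::init<>())
        .def_readwrite("value", &Widget::value);
    m.def("make_widget", []() { return std::make_shared<Widget>(); });
    return m.ptr();
}
```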
This modification taps into some newer C++14 features (if present) to
generate function signatures considerably more efficiently at compile
time rather than at run time.
With this change, pybind11 binaries are now *2.1 times* smaller compared
to the Boost.Python baseline in the benchmark. Compilation times get a
nice improvement as well.
Visual Studio 2015 unfortunately doesn't implement 'constexpr' well
enough yet to support this change and uses a runtime fallback.
The cpp_function class accepts a variadic argument, which was formerly
processed twice -- once at registration time, and once in the dispatch
lambda function. This is not only unnecessarily slow but also leads to
code bloat since it adds to the object code generated for every bound
function. This change removes the second pass at dispatch time.
One noteworthy change of this commit is that default arguments are now
constructed (and converted to Python objects) right at declaration time.
Consider the following example:
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = SomeType(123));
In this case, the change means that pybind11 must already be set up to
deal with values of the type 'SomeType', or an exception will be thrown.
Another change is that the "preview" of the default argument in the
function signature is generated using the __repr__ special method. If
it is not available in this type, the signature may not be very helpful,
i.e.:
| myFunction(...)
| Signature : (MyClass, arg : SomeType = <SomeType object at 0x101b7b080>) -> None
One workaround (other than defining SomeType.__repr__) is to specify the
human-readable preview of the default argument manually using the more
cumbersome arg_t notation:
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg_t<SomeType>("arg", SomeType(123), "SomeType(123)"));
Using the object class to hold the converted object automatically
deallocates it if an exception is thrown or the scope is left before the
complete Python object is constructed.
Additionally, the new method object::release() allows releasing
ownership of a Python object without decreasing its reference count.
The array(const buffer_info &info) constructor fails when given
complex types since their format string is 'Zd' or 'Zf' which has
a length of two and causes an error here:
if (info.format.size() != 1)
throw std::runtime_error("Unsupported buffer format!");
Fixed by allowing format sizes of one and two.