* Adds std::deque to the types supported by list_caster in stl.h.
* Adds a new test_deque test in test_stl.{py,cpp}.
* Updates the documentation to include std::deque as a default
supported type.
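A minimal sketch of the resulting behavior (module and function names are illustrative): std::deque now converts to and from Python sequences just like std::vector.

    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>
    #include <deque>

    namespace py = pybind11;

    PYBIND11_MODULE(example, m) {
        // A Python list/sequence converts to std::deque; the returned deque
        // comes back as a Python list.
        m.def("rotate", [](std::deque<int> d) {
            if (!d.empty()) { d.push_back(d.front()); d.pop_front(); }
            return d;
        });
    }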
* Check default holder
- Recognize "std::unique_ptr<T, D>" as a default holder even if "D" doesn't match between the base and derived holders
* Add test for unique_ptr<T, D> change
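A hedged sketch of the case this enables (Base, Derived and DerivedDeleter are illustrative names): a derived class may now use a unique_ptr holder with a custom deleter even though its base class uses the plain default holder.

    struct Base { virtual ~Base() = default; };
    struct Derived : Base { };
    struct DerivedDeleter { void operator()(Derived *p) const { delete p; } };

    py::class_<Base, std::unique_ptr<Base>>(m, "Base");
    // std::unique_ptr<Derived, DerivedDeleter> now also counts as a default
    // holder, so this no longer fails the base/derived holder consistency check.
    py::class_<Derived, Base, std::unique_ptr<Derived, DerivedDeleter>>(m, "Derived");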
Pybind11 provides a cast operator between opaque void* pointers on the
C++ side and capsules on the Python side. The py::cast<void *>
expression was not aware of this possibility and incorrectly triggered a
compile-time assertion ("Unable to cast type to reference: value is
local to type caster") that is now fixed.
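A minimal sketch of the round trip that now compiles (assuming a running interpreter):

    #include <pybind11/pybind11.h>

    namespace py = pybind11;

    void roundtrip_example() {
        static int value = 42;
        void *ptr = &value;                      // an opaque pointer handed to Python
        py::object capsule = py::cast(ptr);      // void* -> Python capsule
        void *back = py::cast<void *>(capsule);  // capsule -> void*; previously hit the
                                                 // compile-time assertion quoted above
        (void) back;
    }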
* Support C++17 aligned new statement
This patch makes pybind11 aware of nonstandard alignment requirements in
bound types and passes this information on to the C++17 aligned 'new'
operator. Pre-C++17, the behavior is unchanged.
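A hedged sketch of an over-aligned bound type (the type is illustrative); under C++17, instance storage is now obtained through the aligned 'new' operator.

    struct alignas(64) Vec4d {
        double data[4];
    };

    py::class_<Vec4d>(m, "Vec4d")
        .def(py::init<>());  // allocation now respects alignas(64) when compiling as C++17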
This PR brings the std::array<> caster in sync with the other STL type
casters: it now accepts an arbitrary sequence as input (rather than only a
list, which was too restrictive).
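A minimal sketch of the relaxed conversion (names are illustrative): the bound function below can now be called with any length-3 sequence, e.g. the tuple (1.0, 2.0, 3.0), not only a list.

    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>
    #include <array>

    namespace py = pybind11;

    PYBIND11_MODULE(example, m) {
        m.def("norm_squared", [](const std::array<double, 3> &a) {
            return a[0] * a[0] + a[1] * a[1] + a[2] * a[2];
        });
    }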
* Fix for Issue #1258
The list_caster::load method now checks for a Python string and prevents its automatic conversion to a list.
This should fix the issue "pybind11/stl.h converts string to vector<string> #1258" (https://github.com/pybind/pybind11/issues/1258)
* Added tests for fix of issue #1258
* Changelog: stl string auto-conversion
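A hedged sketch of the behavior change (names are illustrative), assuming a binding like the following:

    m.def("count", [](const std::vector<std::string> &items) { return items.size(); });

    // From Python:
    //   count(["ab", "cd"])  -> 2
    //   count("abcd")        -> TypeError (previously 4: one element per character)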
* Fix potential crash when calling an overloaded function
The crash would occur if:
- dispatcher() uses two-pass logic (because the target is overloaded and some arguments support conversions)
- the first pass (with conversions disabled) doesn't find any matching overload
- the second pass does find a matching overload, but its return value can't be converted to Python
The code for formatting the error message assumed `it` still pointed to the selected overload,
but during the second-pass loop `it` was nullptr. Fix by setting `it` correctly if a second-pass
call returns a nullptr `handle`. Add a new test that segfaults without this fix.
* Make overload iteration const-correct so we don't have to iterate again on second-pass error
* Change test_error_after_conversions dependencies to local classes/variables
This commit addresses an inefficiency in how enums are created in
pybind11. Most of the enum_<> implementation is completely generic --
however, being a template class, it ended up instantiating vast amounts
of essentially identical code in larger projects with many enums.
This commit introduces a generic non-templated helper class that is
compatible with any kind of enumeration. enum_ then becomes a thin
wrapper around this new class.
The new enum_<> API is designed to be 100% compatible with the old one.
object_api::operator[] has a powerful overload for py::handle that can
accept slices, tuples (for NumPy), etc.
Lists, sequences, and tuples provide their own specialized operator[],
which unfortunately disables this functionality. This is accidental, and
the purpose of this commit is to re-enable the more general behavior.
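A hedged sketch of what this re-enables (the contents are illustrative): a py::list can again be indexed with an arbitrary handle such as a slice, going through PyObject_GetItem.

    py::list numbers;
    for (int i = 0; i < 6; ++i)
        numbers.append(i);

    py::object every_other = numbers[py::slice(0, 6, 2)];  // [0, 2, 4]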
This commit is tangentially related to the previous one in that it makes
py::handle/py::object et al. behave more like their Python counterparts.
This commit revamps the object_api class so that it maps most C++
operators to their Python analogs. This makes it possible, e.g., to
perform arithmetic using a py::int_ or py::array.
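A minimal sketch of the kind of expression this enables (values are illustrative):

    py::int_ a(7), b(3);
    py::object sum     = a + b;            // 10
    py::object product = a * b;            // 21
    py::object diff    = a - py::int_(1);  // 6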
* Added a check for an already existing enum value name; added a test
* Added the enum value name to the exception message
* Moved the test for defining an enum with multiple identical names to test_enum.cpp/py
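A hedged sketch of what the new check catches (the enum is illustrative):

    enum class Flavor { vanilla = 0, chocolate = 1 };

    py::enum_<Flavor>(m, "Flavor")
        .value("vanilla", Flavor::vanilla)
        // Re-using an existing name now raises an error that names the
        // duplicate value instead of silently overwriting the earlier entry.
        .value("vanilla", Flavor::chocolate);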
This PR adds a new py::ellipsis() type which can be used in
conjunction with NumPy's generalized slicing support. For instance,
the following is now valid (where "a" is a NumPy array):
py::array b = a[py::make_tuple(0, py::ellipsis(), 0)];
* stl.h: propagate return value policies to type-specific casters
Return value policies for containers like those handled in 'stl.h' are
currently broken.
The problem is that detail::return_value_policy_override<C>::policy()
always returns 'move' when given a non-pointer/reference type, e.g.
'std::vector<...>'.
This is sensible behavior for custom types that are exposed via
'py::class_<>', but it does not make sense for types that are handled by
other type casters (STL containers, Eigen matrices, etc.).
This commit changes the behavior so that
detail::return_value_policy_override only becomes active when the type
caster derives from type_caster_generic.
Furthermore, the override logic is called recursively in STL type
casters to enable key/value-specific behavior.
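A hedged sketch of a binding that benefits (Widget and Registry are illustrative): the requested policy is no longer overridden with 'move' and is forwarded to the per-element casts.

    struct Widget { int id; };

    struct Registry {
        std::vector<Widget> widgets;
        const std::vector<Widget> &items() const { return widgets; }
    };

    py::class_<Widget>(m, "Widget")
        .def_readonly("id", &Widget::id);

    py::class_<Registry>(m, "Registry")
        // reference_internal now reaches the element caster, so the returned
        // list refers to the stored Widgets instead of moved/copied values.
        .def("items", &Registry::items, py::return_value_policy::reference_internal);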
* Switching deprecated Thread Local Storage (TLS) usage in Python 3.7 to Thread Specific Storage (TSS)
* Changing Python version from 3.6 to 3.7 for Travis CI, to match brew's version of Python 3
* Introducing PYBIND11_ macros to switch between the TLS and TSS APIs
The current code implicitly requires that integral types be castable to floating point. For strongly-typed integrals (e.g. as explained at http://www.ilikebigbits.com/blog/2014/5/6/type-safe-identifiers-in-c), this is not always the case.
This commit uses SFINAE to move the numeric conversions into separate `cast()` implementations to avoid the issue.
If an exception is thrown during module initialization, the
error_already_set destructor will try to call `get_internals()` *after*
setting Python's error indicator, resulting in a `SystemError: ...
returned with an error set`.
Fix that by temporarily stashing away the error indicator in the
destructor.
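A hedged sketch of a module whose initialization throws (module names are illustrative); before this fix, the resulting ImportError could be masked by a SystemError raised from error_already_set's destructor.

    PYBIND11_MODULE(example, m) {
        // If this import fails, py::error_already_set propagates out of the
        // module init; its destructor now stashes and restores the Python
        // error indicator instead of tripping over it.
        py::module::import("does_not_exist");
        m.def("noop", []() { });
    }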
When using pybind11 to bind enums on MSVC with warnings (/W4) enabled,
the following warning pollutes builds. This fix renames one of the
occurrences.
pybind11\include\pybind11\pybind11.h(1398): warning C4459: declaration of 'self' hides global declaration
pybind11\include\pybind11\operators.h(41): note: see declaration of 'pybind11::detail::self'
* Add basic support for tag-based static polymorphism
Sometimes it is possible to look at a C++ object and know what its dynamic type is,
even if it doesn't use C++ polymorphism, because instances of the object and its
subclasses conform to some other mechanism for being self-describing; for example,
perhaps there's an enumerated "tag" or "kind" member in the base class that's always
set to an indication of the correct type. This might be done for performance reasons,
or to permit most-derived types to be trivially copyable. One of the most widely-known
examples is in LLVM: https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html
This PR permits pybind11 to be informed of such conventions via a new specializable
detail::polymorphic_type_hook<> template, which generalizes the previous logic for
determining the runtime type of an object based on C++ RTTI. Implementors provide
a way to map from a base class object to a const std::type_info* for the dynamic
type; pybind11 then uses this to ensure that casting a Base* to Python creates a
Python object that knows it's wrapping the appropriate sort of Derived.
There are a number of restrictions with this tag-based static polymorphism support
compared to pybind11's existing support for built-in C++ polymorphism:
- there is no support for this-pointer adjustment, so only single inheritance is permitted
- there is no way to make C++ code call new Python-provided subclasses
- when binding C++ classes that redefine a method in a subclass, the .def() must be
repeated in the binding for Python to know about the update
But in many cases these are not much of an issue in practice; the impact on the
complexity of pybind11's innards is minimal and localized, and the support for
automatic downcasting improves usability a great deal.
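A hedged sketch of such a specialization (Pet, Dog and the 'kind' tag are illustrative); the hook maps a base pointer to the std::type_info of its dynamic type so pybind11 can downcast automatically.

    enum class PetKind { Cat, Dog };
    struct Pet { PetKind kind; };
    struct Dog : Pet { Dog() { kind = PetKind::Dog; } };

    namespace pybind11 { namespace detail {
    template <> struct polymorphic_type_hook<Pet> {
        static const void *get(const Pet *src, const std::type_info *&type) {
            if (src && src->kind == PetKind::Dog) {
                type = &typeid(Dog);
                return static_cast<const Dog *>(src);
            }
            type = nullptr;  // fall back to the pointer's static type
            return src;
        }
    };
    }} // namespace pybind11::detail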
The property returns the enum_ value as a string.
For example:
>>> import module
>>> module.enum.VALUE
enum.VALUE
>>> str(module.enum.VALUE)
'enum.VALUE'
>>> module.enum.VALUE.name
'VALUE'
This is the equivalent of the Boost.Python "name" property.
As reported in #1349, clang before 3.5 can segfault on a function-local
variable referenced inside a lambda. This moves the function-local
static into a separate function that the lambda can invoke to avoid the
issue.
Fixes #1349
This reimplements the version check to avoid sscanf (which has
reportedly started throwing warnings under MSVC, even when used
perfectly safely -- #1314). It also extracts the mostly duplicated
parts of PYBIND11_MODULE/PYBIND11_PLUGIN into separate macros.
- PYBIND11_MAKE_OPAQUE now takes ... rather than a single argument and
expands it with __VA_ARGS__; this lets templated, comma-containing
types get through correctly.
- Adds a new macro PYBIND11_TYPE() that lets you pass the type into a
macro as a single argument, such as:
PYBIND11_OVERLOAD(PYBIND11_TYPE(R<1,2>), PYBIND11_TYPE(C<3,4>), func)
Unfortunately this only works for one macro call: forwarding the argument
on to the next macro call (without the preprocessor breaking it up again)
also requires adding PYBIND11_TYPE(...) to the type macro arguments in the
PYBIND11_OVERLOAD_... macro chain.
- Updated the documentation with these two changes, and used them in a couple
of places in the test suite to verify that they work.
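A hedged sketch of the two changes together (the types and class names are illustrative):

    // Comma-containing types now pass straight through PYBIND11_MAKE_OPAQUE:
    PYBIND11_MAKE_OPAQUE(std::map<std::string, std::vector<int>>);

    struct Animal {
        virtual ~Animal() = default;
        virtual std::pair<int, int> move(int dx, int dy) = 0;
    };

    struct PyAnimal : Animal {
        std::pair<int, int> move(int dx, int dy) override {
            // PYBIND11_TYPE() keeps the comma-containing return type a single
            // macro argument.
            PYBIND11_OVERLOAD_PURE(PYBIND11_TYPE(std::pair<int, int>), Animal, move, dx, dy);
        }
    };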
This updates the `py::init` constructors to only use brace initialization
for aggregate initialization if there is no constructor with the given
arguments.
In particular, this fixes the regression in #1247 where, starting in 2.2, a
`std::initializer_list<T>` constructor would be invoked even when there was
a constructor taking exactly the desired type.
The added test case demonstrates: without this change, it fails to
compile because the `.def(py::init<std::vector<int>>())` constructor
tries to invoke the `T(std::initializer_list<std::vector<int>>)`
constructor rather than the `T(std::vector<int>)` constructor.
By only using `new T{...}`-style construction when a `T(...)` constructor
doesn't exist, we bypass this issue while still allowing `py::init<...>` to
be used for aggregate type initialization (since such types, by definition,
don't have a user-declared constructor).
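A hedged sketch of the kind of class affected (the class is illustrative):

    struct Example {
        Example(std::vector<int> values) : values(std::move(values)) { }
        Example(std::initializer_list<std::string> names) { (void) names; }
        std::vector<int> values;
    };

    py::class_<Example>(m, "Example")
        // Resolves to Example(std::vector<int>) via T(...) construction; brace
        // initialization is now reserved for types with no matching constructor,
        // i.e. aggregates.
        .def(py::init<std::vector<int>>());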
* Fix segfault when reloading interpreter with external modules
When embedding the interpreter and loading external modules in that
embedded interpreter, the external module correctly shares its
internals_ptr with the one in the embedded interpreter. When the
interpreter is shut down, however, only the `internals_ptr` local to
the embedded code is actually reset to nullptr: the pointer in the external
module remains set.
The result is that loading an external pybind11 module, letting the
interpreter go through a finalize/initialize, then attempting to use
something in the external module fails because this external module is
still trying to use the old (destroyed) internals. This causes
undefined behaviour (typically a segfault).
This commit fixes it by adding a level of indirection in the internals
path, converting the local internals variable to `internals **` instead
of `internals *`. With this change, we can detect a stale internals
pointer and reload the internals pointer (either from a capsule or by
creating a new internals instance).
(No issue number: this was reported on gitter by @henryiii and @aoloe).
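A hedged sketch of the failing sequence ("external_module" and "do_something" are illustrative names); with the extra indirection, the second import picks up valid internals instead of the destroyed ones.

    #include <pybind11/embed.h>
    namespace py = pybind11;

    int main() {
        {
            py::scoped_interpreter guard{};
            py::module ext = py::module::import("external_module");  // a separately built pybind11 module
            ext.attr("do_something")();
        }
        {
            py::scoped_interpreter guard{};
            // Before the fix, the external module still pointed at the internals
            // destroyed above, typically segfaulting here.
            py::module ext = py::module::import("external_module");
            ext.attr("do_something")();
        }
        return 0;
    }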
The anonymous struct nested in a union triggers a -Wnested-anon-type
warning ("anonymous types declared in an anonymous union are an
extension") under clang (#1204). This names the struct and defines it
outside the definition of `instance` to get around the warning (and makes
the code slightly simpler).
PEP8 indicates (correctly, IMO) that when an annotation is present, the
signature should include spaces around the equal sign, i.e.
def f(x: int = 1): ...
instead of
def f(x: int=1): ...
(in the latter case the equal sign appears to bind to the type, not to the
argument).
pybind11 signatures always include a type annotation, so we can always
add the spaces.