* Adds std::deque to the types supported by list_caster in stl.h.
* Adds a new test_deque test in test_stl.{py,cpp}.
* Updates the documentation to include std::deque as a default
supported type.
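As a rough illustration (the function and module names here are hypothetical), a std::deque can now cross the language boundary the same way a std::vector already could:

```c++
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // list_caster now also covers std::deque
#include <deque>

namespace py = pybind11;

// Accepts any Python sequence of ints; the result is returned as a list.
std::deque<int> rotate_left(std::deque<int> d) {
    if (!d.empty()) {
        d.push_back(d.front());
        d.pop_front();
    }
    return d;
}

PYBIND11_MODULE(example, m) {
    m.def("rotate_left", &rotate_left);
}
```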
* Fix for Issue #1258
list_caster::load method will now check for a Python string and prevent its automatic conversion to a list.
This should fix the issue "pybind11/stl.h converts string to vector<string> #1258" (https://github.com/pybind/pybind11/issues/1258)
* Added tests for fix of issue #1258
* Changelog: stl string auto-conversion
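A hedged sketch of the behaviour change (the bound function is hypothetical): previously a Python str would be silently split into one-character strings; with the new check it is rejected instead.

```c++
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <string>
#include <vector>

namespace py = pybind11;

PYBIND11_MODULE(example, m) {
    // join(["ab", "cd"]) -> "abcd"; join("abcd") now raises TypeError instead
    // of being treated as ["a", "b", "c", "d"].
    m.def("join", [](const std::vector<std::string> &parts) {
        std::string out;
        for (const auto &p : parts)
            out += p;
        return out;
    });
}
```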
This commit addresses an inefficiency in how enums are created in
pybind11. Most of the enum_<> implementation is completely generic --
however, being a template class, it ended up instantiating vast amounts
of essentially identical code in larger projects with many enums.
This commit introduces a generic non-templated helper class that is
compatible with any kind of enumeration. enum_ then becomes a thin
wrapper around this new class.
The new enum_<> API is designed to be 100% compatible with the old one.
pybind11 headers passed via the `pybind11_add_module` CMake
function can now be included as `SYSTEM` includes (`-isystem`).
This allows calling projects to enable stricter (or experimental)
warnings that would otherwise be triggered in headers that a
user of pybind11 cannot influence.
This PR adds a new py::ellipsis() method which can be used in
conjunction with NumPy's generalized slicing support. For instance,
the following is now valid (where "a" is a NumPy array):
py::array b = a[py::make_tuple(0, py::ellipsis(), 0)];
* Add basic support for tag-based static polymorphism
Sometimes it is possible to look at a C++ object and know what its dynamic type is,
even if it doesn't use C++ polymorphism, because instances of the object and its
subclasses conform to some other mechanism for being self-describing; for example,
perhaps there's an enumerated "tag" or "kind" member in the base class that's always
set to an indication of the correct type. This might be done for performance reasons,
or to permit most-derived types to be trivially copyable. One of the most widely-known
examples is in LLVM: https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html
This PR permits pybind11 to be informed of such conventions via a new specializable
detail::polymorphic_type_hook<> template, which generalizes the previous logic for
determining the runtime type of an object based on C++ RTTI. Implementors provide
a way to map from a base class object to a const std::type_info* for the dynamic
type; pybind11 then uses this to ensure that casting a Base* to Python creates a
Python object that knows it's wrapping the appropriate sort of Derived.
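A rough sketch of what such a specialization might look like, using a hypothetical tag-based Pet/Dog hierarchy (the type names are illustrative; the specialization point is the hook template described above):

```c++
#include <pybind11/pybind11.h>
#include <string>
#include <typeinfo>

enum class PetKind { Cat, Dog };

struct Pet {                      // non-polymorphic: no virtual functions
    PetKind kind;
    explicit Pet(PetKind k) : kind(k) {}
};

struct Dog : Pet {
    Dog() : Pet(PetKind::Dog) {}
    std::string bark() const { return "woof"; }
};

namespace pybind11 {
// Map a Pet* to the std::type_info of its dynamic type based on the tag.
template <> struct polymorphic_type_hook<Pet> {
    static const void *get(const Pet *src, const std::type_info *&type) {
        if (src && src->kind == PetKind::Dog) {
            type = &typeid(Dog);
            return static_cast<const Dog *>(src);
        }
        return src;   // src may be nullptr; leave `type` unset for a plain Pet
    }
};
} // namespace pybind11

// Elsewhere, inside the module definition:
//   py::class_<Pet>(m, "Pet");
//   py::class_<Dog, Pet>(m, "Dog").def("bark", &Dog::bark);
```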
There are a number of restrictions with this tag-based static polymorphism support
compared to pybind11's existing support for built-in C++ polymorphism:
- there is no support for this-pointer adjustment, so only single inheritance is permitted
- there is no way to make C++ code call new Python-provided subclasses
- when binding C++ classes that redefine a method in a subclass, the .def() must be
repeated in the binding for Python to know about the update
But these are not much of an issue in practice in many cases; the impact on the
complexity of pybind11's innards is minimal and localized, and the support for
automatic downcasting improves usability a great deal.
The `name` property returns the name of the enum_ value as a string.
For example:
>>> import module
>>> module.enum.VALUE
enum.VALUE
>>> str(module.enum.VALUE)
'enum.VALUE'
>>> module.enum.VALUE.name
'VALUE'
This is actually the equivalent of the Boost.Python "name" property.
- PYBIND11_MAKE_OPAQUE now takes ... rather than a single argument and
expands it with __VA_ARGS__; this lets templated, comma-containing
types get through correctly.
- Adds a new macro PYBIND11_TYPE() that lets you pass the type into a
macro as a single argument, such as:
PYBIND11_OVERLOAD(PYBIND11_TYPE(R<1,2>), PYBIND11_TYPE(C<3,4>), func)
Unfortunately this only works for one macro call: to forward the
argument on to the next macro call (without the preprocessor breaking it
up again) requires also adding the PYBIND11_TYPE(...) to type macro
arguments in the PYBIND11_OVERLOAD_... macro chain.
- updated the documentation with these two changes, and use them at a couple
places in the test suite to test that they work.
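For instance, with the now-variadic macro a comma-containing type no longer needs an intermediate typedef (a hypothetical use):

```c++
#include <pybind11/pybind11.h>
#include <map>
#include <string>
#include <vector>

// Previously the comma would split the type across macro arguments; with the
// __VA_ARGS__ expansion it passes through as a single type.
PYBIND11_MAKE_OPAQUE(std::map<std::string, std::vector<int>>);
```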
None of the three currently recommended approaches works on PyPy, due to
it not garbage collecting things when you want it to. Added a note with an
example showing how to get interpreter shutdown callbacks using the Python
atexit module.
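A sketch of the pattern the note describes, registering a shutdown callback through Python's atexit module (the cleanup body is a placeholder):

```c++
// Inside the PYBIND11_MODULE body, with `namespace py = pybind11;` in scope:
auto atexit = py::module::import("atexit");
atexit.attr("register")(py::cpp_function([]() {
    // Hypothetical cleanup; called during interpreter shutdown with the GIL held.
}));
```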
py::class_<T>'s `def_property` and `def_property_static` can now take a
`nullptr` as the getter to allow a write-only property to be established
(mirroring Python's `property()` built-in when `None` is given for the
getter).
This also updates properties to use the new nullptr constructor internally.
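A brief sketch with a hypothetical Widget class:

```c++
struct Widget {
    int value = 0;
    void set_value(int v) { value = v; }
};

// Inside the module definition: a nullptr getter makes "value" write-only,
// like property(None, setter) in Python.
py::class_<Widget>(m, "Widget")
    .def(py::init<>())
    .def_property("value", nullptr, &Widget::set_value);
```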
This also matches the Eigen example for the row-major case.
This also enhances one of the tests to trigger a failure (and fixes it in the PR). (This isn't really a flaw in pybind11 itself; rather, it fixes incorrect code in the tests and docs.)
MSVC does not allow `&typeid(T)` in constexpr contexts, but the string
part of the type signature can still be constexpr. In order to avoid
`typeid` as long as possible, `descr` is modified to collect type
information as template parameters instead of constexpr `typeid`.
The actual `std::type_info` pointers are only collected in the end,
as a `constexpr` (gcc/clang) or regular (MSVC) function call.
Not only does it significantly reduce binary size on MSVC, gcc/clang
benefit a little bit as well, since they can skip some intermediate
`std::type_info*` arrays.
* Expand documentation to include an explicit example of py::module::import
where one would expect it.
* Describe how to use unbound and bound methods of Python classes.
[skip ci]
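The added examples look roughly like the following (module and attribute names are just illustrative), with `namespace py = pybind11;` in scope:

```c++
// Importing a module and calling one of its functions:
py::object os = py::module::import("os");
py::object makedirs = os.attr("makedirs");
makedirs("/tmp/path/to/somewhere");

// Constructing a Python object and calling a bound method on the instance:
py::object Decimal = py::module::import("decimal").attr("Decimal");
py::object pi = Decimal("3.14159");
py::object exp_pi = pi.attr("exp")();

// Calling the unbound class method by passing the instance explicitly:
py::object exp_pi2 = Decimal.attr("exp")(pi);
```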
E.g. trying to convert a `list` to a `std::vector<int>` without
including <pybind11/stl.h> will now raise an error with a note that
suggests checking the headers.
The note is only appended if `std::` is found in the function
signature. This should only be the case when a header is missing.
E.g. when stl.h is included, the signature would contain `List[int]`
instead of `std::vector<int>` while using stl_bind.h would produce
something like `MyVector`. Similarly for `std::map`/`Dict`, `complex`,
`std::function`/`Callable`, etc.
There's a possibility for false positives, but it's pretty low.
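For example (a hypothetical binding), the following compiles without <pybind11/stl.h>, but calling it from Python with a list produces a TypeError whose signature still shows `std::vector<int>`, so the new hint about a possibly missing header is appended:

```c++
#include <pybind11/pybind11.h>   // note: <pybind11/stl.h> deliberately omitted
#include <vector>

namespace py = pybind11;

PYBIND11_MODULE(example, m) {
    m.def("sum_values", [](const std::vector<int> &v) {
        int total = 0;
        for (int x : v)
            total += x;
        return total;
    });
}
```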
To avoid an ODR violation in the test suite while testing
both `stl.h` and `std_bind.h` with `std::vector<bool>`,
the `py::bind_vector<std::vector<bool>>` test is moved to
the secondary module (which does not include `stl.h`).
PR #880 changed the implementation of keep_alive to avoid weak
references when the nurse is pybind11-registered, but the documentation
didn't get updated to match.
There are two separate additions:
1. `py::hash(obj)` is equivalent to the Python `hash(obj)`.
2. `.def(hash(py::self))` registers the hash function defined by
`std::hash<T>` as the Python hash function.
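A compact sketch showing both additions (the Word type and its std::hash specialization are hypothetical):

```c++
#include <pybind11/pybind11.h>
#include <pybind11/operators.h>   // for hash(py::self)
#include <functional>
#include <string>

namespace py = pybind11;

struct Word {
    std::string text;
    explicit Word(std::string t) : text(std::move(t)) {}
};

namespace std {
template <> struct hash<Word> {
    size_t operator()(const Word &w) const { return hash<string>()(w.text); }
};
} // namespace std

PYBIND11_MODULE(example, m) {
    py::class_<Word>(m, "Word")
        .def(py::init<std::string>())
        .def(hash(py::self));   // Python's hash() now calls std::hash<Word>

    // Free function equivalent of Python's built-in hash():
    m.def("hash_of", [](py::object o) { return py::hash(o); });
}
```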
Fixes one small variable name typo, and two instances where `py::arg().nocopy()` is used, where I think it should be `py::arg().noconvert()` instead. Probably `nocopy()` was the old/original name for it and then it was changed.
The main point of `py::module_local` is to make the C++ -> Python cast
unique so that returning/casting a C++ instance is well-defined.
Unfortunately it also makes loading unique, but this isn't particularly
desirable: when a module contains a `Type` instance there's no reason
it shouldn't be possible to pass that instance to a bound function
taking a `Type` parameter, even if that function is in another module.
This commit solves the issue by allowing foreign module (and global)
type loaders to have a chance to load the value if the local module loader
fails. The implementation here does this by storing a module-local
loading function in a capsule in the python type, which we can then call
if the local (and possibly global, if the local type is masking a global
type) version doesn't work.
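A hedged sketch of what this enables (type and module names are hypothetical, with `ma` and `mb` being the two module handles): both modules bind the same C++ type locally, yet instances can still flow between them.

```c++
struct Foo {
    int value;
    explicit Foo(int v) : value(v) {}
};

// module a:
py::class_<Foo>(ma, "Foo", py::module_local())
    .def(py::init<int>())
    .def_readonly("value", &Foo::value);

// module b:
py::class_<Foo>(mb, "Foo", py::module_local());
mb.def("frob", [](const Foo &f) { return f.value; });

// Python: b.frob(a.Foo(3)) == 3. b's own (local) loader does not recognize an
// a.Foo instance, so the module-local loader stored with a.Foo is tried next.
```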
This allows you to use:
cls.def(py::init(&factory_function));
where `factory_function` returns a pointer, holder, or value of the
class type (or a derived type). Various compile-time checks
(static_asserts) are performed to ensure the function is valid, and
various run-time type checks where necessary.
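A short sketch (the class and factories are hypothetical), placed inside the module definition:

```c++
struct Example {
    int value;
    explicit Example(int v) : value(v) {}
};

Example *make_example(int v) { return new Example(v); }   // pointer factory

py::class_<Example>(m, "Example")
    .def(py::init(&make_example))            // wraps the factory function
    .def(py::init([](int a, int b) {         // lambdas work as well
        return Example(a + b);
    }));
```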
Some other details of this feature:
- The `py::init` name doesn't conflict with the templated no-argument
`py::init<...>()`, but keeps the naming consistent: the existing
templated, no-argument one wraps constructors, the no-template,
function-argument one wraps factory functions.
- If returning a CppClass (whether by value or pointer) when a CppAlias
is required (i.e. python-side inheritance and a declared alias), a
dynamic_cast to the alias is attempted (for the pointer version); if
it fails, or if returned by value, an Alias(Class &&) constructor
is invoked. If this constructor doesn't exist, a runtime error occurs.
- for holder returns when an alias is required, we try a dynamic_cast of
the wrapped pointer to the alias to see if it is already an alias
instance; if it isn't, we raise an error.
- `py::init(class_factory, alias_factory)` is also available that takes
two factories: the first is called when an alias is not needed, the
second when it is.
- Reimplement factory instance clearing. The previous implementation
failed under python-side multiple inheritance: *each* inherited
type's factory init would clear the instance instead of only setting
its own type value. The new implementation here clears just the
relevant value pointer.
- dealloc is updated to explicitly set the leftover value pointer to
nullptr and the `holder_constructed` flag to false so that it can be
used to clear preallocated value without needing to rebuild the
instance internals data.
- Added various tests to test out new allocation/deallocation code.
- With preallocation now done lazily, init factory holders can
completely avoid the overhead of an extra
allocation/deallocation.
- Updated documentation to make factory constructors the default
advanced constructor style.
- If an `__init__` is called a second time, we have two choices: we can
throw away the first instance, replacing it with the second; or we can
ignore the second call. The latter is slightly easier, so do that.
* Doxygen needs `RECURSIVE = YES` in order to parse the `detail` subdir.
* The `-W` warnings-as-errors option for sphinx doesn't work with the
makefile build. Switched to calling sphinx directly.
* Fix "citation [cppimport] is not referenced" warning.
This updates the compilation to always apply hidden visibility to
resolve the issues with default visibility causing problems under debug
compilations. Moreover using the cmake property makes it easier for a
caller to override if absolutely needed for some reason.
For `pybind11_add_module` we use cmake to set the property; for the
targets, we append the compile option for non-MSVC compilers.
In C++11 mode, `boost::apply_visitor` requires an explicit `result_type`.
This also adds optional tests for `boost::variant` in C++11/14, if boost
is available. In C++17 mode, `std::variant` is tested instead.
boost::apply_visitor accepts its arguments by non-const lvalue
reference, which fails to bind to an rvalue reference. Change the
example to remove the argument forwarding.
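The corrected example then looks roughly like this; note that `call` deliberately passes its arguments on as named lvalues (no std::forward), so they bind to boost::apply_visitor's non-const lvalue references:

```c++
#include <pybind11/stl.h>
#include <boost/variant.hpp>

namespace pybind11 { namespace detail {

template <typename... Ts>
struct type_caster<boost::variant<Ts...>> : variant_caster<boost::variant<Ts...>> {};

// Use boost::apply_visitor instead of std::visit to walk the variant.
template <>
struct visit_helper<boost::variant> {
    template <typename... Args>
    static auto call(Args &&...args) -> decltype(boost::apply_visitor(args...)) {
        return boost::apply_visitor(args...);
    }
};

}} // namespace pybind11::detail
```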
Attempting to mix py::module_local and non-module_local classes results
in some unexpected/undesirable behaviour:
- if a class is registered non-local by some other module, a later
attempt to register it locally fails. It doesn't need to: it is
perfectly acceptable for the local registration to simply override
the external global registration.
- going the other way (i.e. module `A` registers a type `T` locally,
then `B` registers the same type `T` globally) causes a more serious
issue: `A.T`'s constructors no longer work because the `self` argument
gets converted to a `B.T`, which then fails to resolve.
Changing the cast precedence to prefer local over global fixes this and
makes it work more consistently, regardless of module load order.
This commit adds a `py::module_local` attribute that lets you confine a
registered type to the module (more technically, the shared object) in
which it is defined, by registering it with:
py::class_<C>(m, "C", py::module_local())
This will allow the same C++ class `C` to be registered in different
modules with independent sets of class definitions. On the Python side,
two such types will be completely distinct; on the C++ side, the C++
type resolves to a different Python type in each module.
This applies `py::module_local` automatically to `stl_bind.h` bindings
when the container value type looks like something global: i.e. when it
is a converting type (for example, when binding a `std::vector<int>`),
or when it is a registered type itself bound with `py::module_local`.
This should help resolve potential future conflicts (e.g. if two
completely unrelated modules both try to bind a `std::vector<int>`).
Users can override the automatic selection by adding a
`py::module_local()` or `py::module_local(false)`.
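For example, to opt back into a global binding for a container of a basic type (the Python-side name is just a chosen example):

```c++
// Force a global (non-module-local) binding despite the int value type:
py::bind_vector<std::vector<int>>(m, "IntVector", py::module_local(false));
```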
Note that this does mildly break backwards compatibility: bound stl
containers of basic types like `std::vector<int>` cannot be bound in one
module and returned in a different module. (This can be re-enabled with
`py::module_local(false)` as described above, but with the potential for
eventual load conflicts).
This commit allows multiple inheritance of pybind11 classes from
Python, e.g.
    class MyType(Base1, Base2):
        def __init__(self):
            Base1.__init__(self)
            Base2.__init__(self)
where Base1 and Base2 are pybind11-exported classes.
This requires collapsing the various builtin base objects
(pybind11_object_56, ...) introduced in 2.1 into a single
pybind11_object of a fixed size; this fixed size object allocates enough
space to contain either a simple object (one base class & small* holder
instance), or a pointer to a new allocation that can contain an
arbitrary number of base classes and holders, with holder size
unrestricted.
* "small" here means having a sizeof() of at most 2 pointers, which is
enough to fit unique_ptr (sizeof is 1 ptr) and shared_ptr (sizeof is 2
ptrs).
To minimize the performance impact, this repurposes
`internals::registered_types_py` to store a vector of pybind-registered
base types. For direct-use pybind types (e.g. the `PyA` for a C++ `A`)
this is simply storing the same thing as before, but now in a vector;
for Python-side inherited types, the map lets us avoid having to do a
base class traversal as long as we've seen the class before. The
change to vector is needed for multiple inheritance: Python types
inheriting from multiple registered bases have one entry per base.
This commit also adds `doc()` to `object_api` as a shortcut for the
`attr("__doc__")` accessor.
The module macro changes from:
```c++
PYBIND11_PLUGIN(example) {
pybind11::module m("example", "pybind11 example plugin");
m.def("add", [](int a, int b) { return a + b; });
return m.ptr();
}
```
to:
```c++
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin";
m.def("add", [](int a, int b) { return a + b; });
}
```
Using the old macro results in a deprecation warning. The warning
actually points to the `pybind11_init` function (since attributes
don't bind to macros), but the message should be quite clear:
"PYBIND11_PLUGIN is deprecated, use PYBIND11_MODULE".
This extends py::vectorize to automatically pass through
non-vectorizable arguments. This removes the need for the documented
"explicitly exclude an argument" workaround.
Vectorization now applies to arithmetic, std::complex, and POD types,
passed as plain value or by const lvalue reference (previously only
pass-by-value types were supported). Non-const lvalue references and
any other types are passed through as-is.
Functions with rvalue reference arguments (whether vectorizable or not)
are explicitly prohibited: an rvalue reference is inherently not
something that can be passed multiple times and is thus unsuitable to
being in a vectorized function.
The value returned by a vectorized function is also now more sensitive to its
inputs: previously it would return by value when all inputs were of size 1;
this now requires all inputs to have size 1 *and* 0 dimensions. Thus
if you pass in, for example, [[1]], you get back a 1x1, 2D array, while
previously you got back just the resulting single value.
Vectorization of member function specializations is now also supported
via `py::vectorize(&Class::method)`; this required passthrough support
for the initial object pointer on the wrapping function pointer.
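Roughly, with hypothetical functions and types:

```c++
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <cmath>

namespace py = pybind11;

struct Params { double gain = 2.0; };

// x and power are vectorized; the Params pointer is not vectorizable and is
// passed through unchanged to every element-wise call.
double scaled(double x, int power, const Params *p) {
    return p->gain * std::pow(x, power);
}

struct Calc {
    double offset = 1.0;
    double shift(double x) const { return x + offset; }
};

PYBIND11_MODULE(example, m) {
    py::class_<Params>(m, "Params").def(py::init<>());
    m.def("scaled", py::vectorize(scaled));

    py::class_<Calc>(m, "Calc")
        .def(py::init<>())
        .def("shift", py::vectorize(&Calc::shift));   // vectorized member function
}
```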
This attribute lets you disable (or explicitly enable) passing None to
an argument that otherwise would allow it by accepting
a value by raw pointer or shared_ptr.
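A small sketch, inside the module definition (the bound function and class are hypothetical):

```c++
struct Dog { std::string name = "Rex"; };

py::class_<Dog>(m, "Dog").def(py::init<>());

// With .none(false), passing None for `dog` raises a TypeError instead of
// handing the function a null pointer; .none(true) would explicitly allow it.
m.def("describe", [](const Dog *dog) {
    return dog ? "a dog named " + dog->name : std::string("no dog");
}, py::arg("dog").none(false));
```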
This exposed a few underlying issues:
1. is_pod_struct was too strict to allow this. I've relaxed it to
require only trivially copyable and standard layout, rather than POD
(which additionally requires a trivial constructor, which std::complex
violates).
2. format_descriptor<std::complex<T>>::format() returned numpy format
strings instead of PEP3118 format strings, but register_dtype
feeds format codes of its fields to _dtype_from_pep3118. I've changed it
to return PEP3118 format codes. format_descriptor is a public type, so
this may be considered an incompatible change.
3. register_structured_dtype tried to be smart about whether to mark
fields as unaligned (with ^). However, it's examining the C++ alignment,
rather than what numpy (or possibly PEP3118) thinks the alignment should
be. For complex values those are different. I've made it mark all fields
as ^ unconditionally, which should always be safe even if they are
aligned, because we explicitly mark the padding.