This adds bounds-unchecked access to arrays through an
`a.unchecked<Type, Dimensions>()` method. (For `array_t<T>`, the `Type`
template parameter is omitted.) The mutable version (which requires
that the array have the `writeable` flag set) is available as
`a.mutable_unchecked<...>()`.
Specifying the Dimensions as a template parameter allows the sizes and
strides to be stored in a `std::array`; storing them that way (as
opposed to keeping copies of the array's strides/shape pointers) allows
the compiler to make significant optimizations of the shape() method
that it can't make with a pointer. Testing with nested loops of the
form:
    for (size_t i0 = 0; i0 < r.shape(0); i0++)
        for (size_t i1 = 0; i1 < r.shape(1); i1++)
            ...
                r(i0, i1, ...) += 1;
over a 10 million element array gives around a 25% speedup (versus using
a pointer) for the 1D case, 33% for 2D, and runs more than twice as fast
with a 5D array.
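A minimal usage sketch (the function name is invented here; `m` is the
module being defined, with the usual `namespace py = pybind11` alias):

    // Sum a 2D float64 array without per-element bounds checks.
    m.def("sum2d", [](py::array_t<double> a) {
        auto r = a.unchecked<2>();          // throws if a.ndim() != 2
        double total = 0;
        for (size_t i = 0; i < r.shape(0); i++)
            for (size_t j = 0; j < r.shape(1); j++)
                total += r(i, j);           // no bounds checking on element access
        return total;
    });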
RTD updated their build environment, which broke the 1.8.14.dev build
of doxygen that we were using. The update also breaks the conda-forge
build of 1.8.13 (but that version has other issues).
Luckily, the RTD update did bring their doxygen version up to 1.8.11
which is enough to parse the C++11 code we need (ref qualifiers) and it
also avoids the segfault found in 1.8.13.
Since we're using the native doxygen, conda isn't required anymore and
we can simplify the RTD configuration.
[skip ci]
This commit largely rewrites the Eigen dense matrix support to avoid
copying in many cases: Eigen arguments can now reference numpy data, and
numpy objects can now reference Eigen data (given compatible types).
Eigen::Ref<...> arguments now also make use of the new `convert`
argument flag (added in PR #634) to avoid conversion, allowing
`py::arg().noconvert()` to be used when binding a function to prohibit
copying when the function is invoked. Respecting `convert` also means
that Eigen overloads which avoid copying will be preferred during
overload resolution over ones that require copying.
This commit also rewrites the Eigen documentation and test suite to
explain and test the new capabilities.
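For illustration, a binding along these lines (the function and
argument names are made up) now accepts a compatible numpy array by
reference and, with `noconvert()`, refuses to fall back to a converting
copy:

    // Modifies the caller's array in place; a wrong dtype or incompatible
    // layout raises a TypeError instead of being silently copied.
    m.def("scale", [](Eigen::Ref<Eigen::VectorXd> v, double factor) { v *= factor; },
          py::arg("v").noconvert(), py::arg("factor"));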
Now that only one shared metaclass is ever allocated, it's extremely
cheap to enable it for all pybind11 types.
* Deprecate the default py::metaclass() since it's not needed anymore.
* Allow users to specify a custom metaclass via py::metaclass(handle).
* Switch breathe to stable releases. It was previously pulling directly
from master because a required bugfix was not in a stable release yet.
* Force-update sphinx and the RTD theme. When using conda, readthedocs
pins sphinx==1.3.5 and sphinx_rtd_theme==0.1.7, which are a bit older
than the versions used in the regular (non-conda) RTD build. The newer
theme has nicer sidebar navigation (4-level depth vs. only 2 levels in
the older version). Note that the python==3.5 requirement must stay
because RTD still installs the older sphinx at one point, and that
version isn't available with Python 3.6.
[skip ci]
Fixes #667
The sphinx version is pinned by readthedocs, but sphinx 1.3.5 is not
available with conda python 3.6. The workaround is to pin the python
version to 3.5 (it doesn't really matter for the docs build).
* Propagate unicode conversion failure
If returning a std::string with invalid utf-8 data, we currently fail
with an uninformative TypeError instead of propagating the
UnicodeDecodeError that Python sets on failure.
* Add support for u16/u32 strings and literals
This adds support for char{16,32}_t character literals and the
associated std::u{16,32}string types (a brief usage sketch follows at
the end of this commit message). It also folds the character/string
conversion into a single type_caster template, since the type casters
for string and wstring were mostly the same anyway.
* Added too-long and too-big character conversion errors
With this commit, when casting to a single character, as opposed to a
C-style string, we make sure the input wasn't a multi-character string
or a single character with codepoint too large for the character type.
This also changes the character cast op to CharT instead of CharT& (we
need to be able to return a temporary decoded char value, but also
because there's little gained by bothering with an lvalue return here).
Finally it changes the char caster to 'has-a-string-caster' instead of
'is-a-string-caster' because, with the cast_op change above, there's
nothing at all gained from inheritance. This also lets us move the
`success` member out of the string caster (where it existed only for
the char caster) and into the char caster itself. (I also renamed it to
'none' and inverted its value to better reflect its purpose.) The
None -> nullptr loading also now takes place only under a
`convert = true` load pass.
Although it's unlikely that a function taking a char also has overloads
that can take a None, it seems marginally more correct to treat it as a
conversion.
This commit also simplifies the assumptions about character sizes and
backs them up with static_asserts.
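The usage sketch promised above (function names are illustrative; `m`
is the module being defined):

    // std::u16string now round-trips through a Python str; a char16_t argument
    // must be a one-character string whose code point fits in the type.
    m.def("snowman", []() { return std::u16string(u"\u2603"); });
    m.def("code_unit", [](char16_t c) { return static_cast<int>(c); });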
This changes the function dispatching code for overloaded functions into
a two-pass procedure where we first try all overloads with
`convert=false` for all arguments. If no function call succeeds in the
first pass, we then try a second pass where we allow arguments to have
`convert=true` (unless, of course, the argument was explicitly specified
with `py::arg().noconvert()`).
For non-overloaded methods, the two-pass procedure is skipped (we just
make a single conversion-allowed call). The second pass is also skipped
if it would be identical to the first (i.e. when all arguments are
`.noconvert()` arguments).
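As a concrete illustration of the effect (the overload bodies here are
made up):

    m.def("f", [](double) { return "double overload"; });
    m.def("f", [](int)    { return "int overload"; });

Calling f(5) from Python now selects the int overload: in the first
(no-convert) pass the double caster refuses the Python int, whereas
previously the double overload would have won simply because it was
registered first and int -> float conversion was always allowed.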
This adds support for controlling the `convert` flag of arguments
through the py::arg annotation. This then allows arguments to be
flagged as non-converting, which the type_caster is able to use to
request different behaviour.
Currently, AFAICS `convert` is only used for type converters of regular
pybind11-registered types; all of the other core type_casters ignore it.
We can, however, repurpose it to control internal conversion of
converters like Eigen and `array`: most usefully to give callers a way
to disable the conversion that would otherwise occur when an
`Eigen::Ref<const Eigen::Matrix>` argument is passed a numpy array that
requires conversion (either because it has an incompatible stride or the
wrong dtype).
Specifying a noconvert looks like one of these:
m.def("f1", &f, "a"_a.noconvert() = "default"); // Named, default, noconvert
m.def("f2", &f, "a"_a.noconvert()); // Named, no default, no converting
m.def("f3", &f, py::arg().noconvert()); // Unnamed, no default, no converting
(The last part, being able to declare a py::arg without a name, is new:
previously, py::arg() only accepted named keyword arguments.)
Such a non-converting argument is then passed `convert = false` by the
type caster when loading the argument. Whether this has an effect is up
to the type caster itself, but as mentioned above, this is extremely
helpful for the Eigen support: it gives a nicer way to specify a
"no-copy" mode than the custom wrapper in the current PR, and moreover
isn't an Eigen-specific hack.
* Minor doc syntax fix
The numpy documentation had a bad :file: reference (was using double
backticks instead of single backticks).
* Changed long-outdated "example" -> "tests" wording
The ConstructorStats internal docs still had "from example import", and
the main testing cpp file still used "example" in the module
description.
This commit rewrites the function dispatcher code to support mixing
regular arguments with py::args/py::kwargs arguments. It also
simplifies the argument loader noticeably as it no longer has to worry
about args/kwargs: all of that is now sorted out in the dispatcher,
which now simply appends a tuple/dict if the function takes
py::args/py::kwargs, then passes all the arguments in a vector.
When the argument loader hits a py::args or py::kwargs, it doesn't do
anything special: it just calls the appropriate type_caster, exactly as
it does for any other argument (thus removing the previous special
cases for args/kwargs).
Switching to passing arguments in a single std::vector instead of a pair
of tuples also makes things simpler, both in the dispatch and the
argument_loader: since this argument list is strictly pybind-internal
(i.e. it never goes to Python) we have no particular reason to use a
Python tuple here.
Some (intentional) restrictions:
- You may not bind a function that has args/kwargs somewhere other than
the end (this somewhat matches Python, and keeps the dispatch code a
little cleaner because it doesn't have to worry about where to inject
the args/kwargs in the argument list).
- If you specify an argument both positionally and via a keyword
argument, you get a TypeError alerting you to this (as you do in
Python).
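A rough example of the newly supported mixing (the function itself is
hypothetical):

    m.def("generic", [](int required, py::args args, py::kwargs kwargs) {
        return py::make_tuple(required, args.size(), kwargs.size());
    });
    // Python: generic(1, 2, 3, x=4) returns (1, 2, 1)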
* Abstract away some holder functionality (resolve #585)
Custom holder types which don't have `.get()` can select the correct
function to call by specializing `holder_traits`.
* Add support for move-only holders (fix #605)
* Clarify PYBIND11_NUMPY_DTYPE documentation
The current documentation and example read as though
PYBIND11_NUMPY_DTYPE is a declarative macro along the same lines as
PYBIND11_DECLARE_HOLDER_TYPE, but it isn't. This changes the
documentation and docs example to make it clear that you need to "call"
the macro (see the sketch at the end of this commit message).
* Add satisfies_{all,any,none}_of<T, Preds>
`satisfies_all_of<T, Pred1, Pred2, Pred3>` is a nice legibility-enhanced
shortcut for `is_all<Pred1<T>, Pred2<T>, Pred3<T>>`.
* Give better error message for non-POD dtype attempts
If you try to use a non-POD data type, you get difficult-to-interpret
compilation errors (about ::name() not being a member of an internal
pybind11 struct, among others), from which it isn't at all obvious what
the problem is.
This adds a static_assert for such cases.
It also changes the base case from an empty struct to the is_pod_struct
case by no longer using `enable_if<is_pod_struct>` but instead using a
static_assert: thus specializations avoid the base class, POD types
work, and non-POD types (and unimplemented POD types like std::array)
get a more informative static_assert failure.
* Prefix macros with PYBIND11_
numpy.h uses unprefixed macros, which seems undesirable. This prefixes
them with PYBIND11_ to match all the other macros in numpy.h (and
elsewhere).
* Add long double support
This adds long double and std::complex<long double> support for numpy
arrays.
This allows some simplification of the code used to generate format
descriptors: the new code uses fewer macros, expressing the different
cases as templated options instead. The template conditions also end up
simpler because we now support all basic C++ arithmetic types (and so
can use is_arithmetic instead of is_integral plus multiple different
specializations).
In addition to testing in the test script that this is indeed working,
it also adds various offset and size calculations there, which fixes
the test failures under x86 compilations.
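The sketch referred to above for the PYBIND11_NUMPY_DTYPE item (the
Record struct is invented for illustration):

    struct Record { int x; double y; };

    // The macro must be executed inside the module initialization -- it
    // registers the dtype at runtime rather than declaring anything at
    // file scope:
    PYBIND11_NUMPY_DTYPE(Record, x, y);
    m.def("records", []() { return py::array_t<Record>(8); });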
* Make 'any' the default markup role for Sphinx docs
* Automate generation of reference docs with doxygen and breathe
* Improve reference docs coverage
* Some clarifications to section on virtual fns
Primarily, I made it clear that PYBIND11_OVERLOAD_PURE_NAME is not merely "useful" but required in renaming situations. I also clarified that one should not bind to the trampoline helper class, which I found tempting since it seems more explicit.
* Remove :emphasize-lines: from cpp block, seems to suppress formatting
* docs: emphasize default policy, clarify keep_alive
Emphasize the default return value policy since this statement is hidden in a wall of text.
Add a hint that call policies are probably required for container objects.
This commit includes modifications that are needed to get pybind11 to work with PyPy. The full test suite compiles and runs except for a last few functions that are commented out (due to problems in PyPy that were reported on the PyPy bugtracker).
Two somewhat intrusive changes were needed to make it possible: two new tags ``py::buffer_protocol()`` and ``py::metaclass()`` must now be specified to the ``class_`` constructor if the class uses the buffer protocol and/or requires a metaclass (e.g. for static properties).
Note that this is only for the PyPy version based on Python 2.7 for now. When PyPy 3.x has caught up in terms of cpyext compliance, a PyPy 3.x patch will follow.
This adds automatic casting when assigning to python types like dict,
list, and attributes. Instead of:
dict["key"] = py::cast(val);
m.attr("foo") = py::cast(true);
list.append(py::cast(42));
you can now simply write:
dict["key"] = val;
m.attr("foo") = true;
list.append(42);
Casts needing extra parameters (e.g. a non-default return value policy)
still require an explicit py::cast() call. set::add() is also supported.
All usage is channeled through a SFINAE implementation which either just returns or casts.
Combined non-converting handle and autocasting template methods via a
helper method that either just returns (handle) or casts (C++ type).
Following commit 90d278, the object code generated by the python
bindings of nanogui (github.com/wjakob/nanogui) went up by a whopping
12%. It turns out that that project has quite a few enums where we don't
really care about arithmetic operators.
This commit thus partially reverts the effects of #503 by introducing
an additional attribute py::arithmetic() that must be specified if the
arithmetic operators are desired.
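For example (the enum itself is invented here):

    enum class Flags { Read = 1, Write = 2 };

    py::enum_<Flags>(m, "Flags", py::arithmetic())   // operators are now opt-in
        .value("Read", Flags::Read)
        .value("Write", Flags::Write);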
* Deprecate the `py::object::str()` member function since `py::str(obj)`
is now equivalent and preferred
* Make `py::repr()` a free function
* Make sure obj.cast<T>() works as expected when T is a Python type
`obj.cast<T>()` should be the same as `T(obj)`, i.e. it should convert
the given object to a different Python type. However, `obj.cast<T>()`
usually calls `type_caster::load()` which only checks the type without
doing any actual conversion. That causes a very unexpected `cast_error`.
This commit makes it so that `obj.cast<T>()` and `T(obj)` are the same
when T is a Python type.
* Simplify pytypes converting constructor implementation
It's not necessary to maintain a full set of converting constructors
and assignment operators + const& and &&. A single converting const&
constructor will work and there is no impact on binary size. On the
other hand, the conversion functions can be significantly simplified.
* Add type caster for std::experimental::optional
* Add tests for std::experimental::optional
* Support both <optional> / <experimental/optional>
* Mention std{::experimental,}::optional in the docs
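A quick sketch of what the caster enables (the bound function is
invented; an empty optional maps to None and vice versa):

    #include <pybind11/stl.h>             // provides the optional caster
    #include <experimental/optional>

    m.def("half_if_even", [](int n) -> std::experimental::optional<int> {
        if (n % 2 == 0) return n / 2;
        return {};                        // becomes None on the Python side
    });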
* Make reference(_internal) the default return value policy for properties
Before this, all `def_property*` functions used `automatic` as their
default return value policy. This commit makes it so that:
* Non-static properties use `reference_internal` by default, thus
matching `def_readonly` and `def_readwrite`.
* Static properties use `reference` by default, thus matching
`def_readonly_static` and `def_readwrite_static`.
In case `cpp_function` is passed to any `def_property*`, its policy will
be used instead of any defaults. User-defined arguments in `extras`
still have top priority and will override both the default policies and
the ones from `cpp_function`.
Resolves #436.
* Almost always use return_value_policy::move for rvalues
For functions which return rvalues or rvalue references, the only viable
return value policies are `copy` and `move`. `reference(_internal)` and
`take_ownership` would take the address of a temporary which is always
an error.
This commit prevents possible user errors by overriding the bad rvalue
policies with `move`. Besides `move`, only `copy` is allowed, and only
if it's explicitly selected by the user.
This is also a necessary safety feature to support the new default
return value policies for properties: `reference(_internal)`.
With this there is no more need for manual user declarations like
`PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>)`. Existing ones
will still compile without error -- they will just be ignored silently.
Resolves #446.
The custom exception handling added in PR #273 is robust, but is overly
complex for declaring the most common simple C++ -> Python exception
mapping that needs only to copy `what()`. This adds a simpler
`py::register_exception<CppExp>(module, "PyExp");` function that greatly
simplifies the common basic case of translation of a simple CppException
into a simple PythonException, while not removing the more advanced
capabilities of defining custom exception handlers.
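Roughly, the simple case now looks like this (the exception names are
placeholders):

    struct CppExp : std::runtime_error { using std::runtime_error::runtime_error; };

    py::register_exception<CppExp>(m, "PyExp");
    m.def("boom", []() { throw CppExp("something went wrong"); });
    // Calling boom() from Python now raises PyExp carrying the what() text.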
This adds a static local variable (in dead code unless actually needed)
in the overload code that is used for storage if the overload is for
some convert-by-value type (such as numeric values or std::string).
This has limitations (as written up in the advanced doc), but is better
than simply not being able to overload reference or pointer methods.
This commit adds support for forcing alias type initialization by
defining constructors with `py::init_alias<arg1, arg2>()` instead of
`py::init<arg1, arg2>()`. Currently py::init<> only results in Alias
initialization if the type is extended in python, or the given
arguments can't be used to construct the base type, but can be used to
construct the alias. py::init_alias<>, in contrast, always invokes the
constructor of the alias type.
It looks like this was already the intention of
`py::detail::init_alias`, which was forward-declared in
86d825f330, but was apparently never
finished: despite the existence of a .def method accepting it, the
`detail::init_alias` class isn't actually defined anywhere.
This commit completes the feature (or possibly repurposes it), allowing
declaration of classes that will always initialize the trampoline,
which (as I argued in #397) is sometimes useful.
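For illustration (Animal and its trampoline PyAnimal are hypothetical
names):

    // py::init<>() would construct a plain Animal when possible;
    // py::init_alias<>() always constructs the PyAnimal trampoline instead.
    py::class_<Animal, PyAnimal>(m, "Animal")
        .def(py::init_alias<>());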
This allows a slightly cleaner base type specification of:
py::class_<Type, Base>("Type")
as an alternative to
py::class_<Type>("Type", py::base<Base>())
As with the other template parameters, the order relative to the holder
or trampoline types doesn't matter.
This also includes a compile-time assertion failure if attempting to
specify more than one base class (but is easily extendible to support
multiple inheritance, someday, by updating the class_selector::set_bases
function to set multiple bases).
The current pybind11::class_<Type, Holder, Trampoline> fixed template
ordering results in a requirement to repeat the Holder with its default
value (std::unique_ptr<Type>) argument, which is a little bit annoying:
it needs to be specified not because we want to override the default,
but rather because we need to specify the third argument.
This commit removes this limitation by making the class_ template take
the type name plus a parameter pack of options. It then extracts the
first valid holder type and the first subclass type for holder_type and
trampoline type_alias, respectively. (If not found, they fall back to
their current defaults, `std::unique_ptr<type>` and `type`,
respectively.) If any unmatched template arguments are provided, a
static assertion fails.
What this means is that you can specify or omit the arguments in any
order:
py::class_<A, PyA> c1(m, "A");
py::class_<B, PyB, std::shared_ptr<B>> c2(m, "B");
py::class_<C, std::shared_ptr<C>, PyB> c3(m, "C");
It also allows future class attributes (such as base types in the next
commit) to be passed as class template types rather than needing to use
a py::base<> wrapper.
Test compilation instructions for Windows were changed to use the
`cmake --build` command line invocation which should be easier than
manually setting up using the CMake GUI and Visual Studio.
For example, keep_alive<0,1>() should work when the return value may sometimes be None; at present, a "Could not allocate weak reference!" exception is thrown.
Update documentation to clarify behaviour of keep_alive when nurse is None or does not support weak references.
The missing empty line after `.. code-block::` resulted in incorrectly
parsed restructuredtext (sphinx warnings) and the code blocks were not
generated in the html output.
The `exclude_patterns` change just silences the orphaned file warning.
[ci skip]
The format strings that are known at compile time are now accessible
via both ::value and ::format(), and format strings for everything
else are accessible via ::format(). This makes it backwards compatible.
This allows exposing a dict-like interface to python code, allowing
iteration over keys via:
    for k in custommapping:
        ...
while still allowing iteration over pairs, so that you can also
implement 'dict.items()' functionality which returns a pair iterator,
allowing:
    for k, v in custommapping.items():
        ...
example-sequences-and-iterators is updated with a custom class providing
both types of iteration.
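A sketch of the binding side, assuming a StringMap class with
std::map-style begin()/end() (names invented):

    py::class_<StringMap>(m, "StringMap")
        .def("__iter__", [](const StringMap &self) {
            return py::make_key_iterator(self.begin(), self.end());
        }, py::keep_alive<0, 1>())   // keep the map alive while it is iterated
        .def("items", [](const StringMap &self) {
            return py::make_iterator(self.begin(), self.end());
        }, py::keep_alive<0, 1>());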
reference_internal requires an `instance` field to track the returned
reference's parent, but that just duplicates what keep_alive<0,1> does,
so use a keep_alive instead to eliminate the duplication.
It was already pretty badly intrusive, but it also appears to make MSVC
segfault. Rather than investigating and fixing it, it's easier to just
remove it.
As discussed in #320.
This adds a documentation block that mentions that trampoline classes
must provide overrides for both the classes' own virtual methods *and*
any inherited virtual methods. It also provides a templated solution to
avoiding method duplication (sketched at the end of this commit
message).
The example includes a third method (only mentioned in the "see also"
section of the documentation addition), using multiple inheritance.
While this approach works, and avoids code generation in deep
hierarchies, it is intrusive in that it requires the wrapped classes to
use virtual inheritance, which itself is more intrusive if any of the
virtual base classes need anything other than default constructors. As
per the discussion in #320, it is kept as an example, but not suggested
in the documentation.
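The templated solution mentioned above, sketched with invented
Animal/Dog names (a rough outline of the documented pattern, not the
exact docs text):

    class Animal {
    public:
        virtual ~Animal() = default;
        virtual std::string go(int n_times) = 0;
    };

    template <class AnimalBase = Animal>
    class PyAnimal : public AnimalBase {
    public:
        using AnimalBase::AnimalBase;   // inherit constructors
        std::string go(int n_times) override {
            PYBIND11_OVERLOAD_PURE(std::string, AnimalBase, go, n_times);
        }
    };
    // A trampoline for a derived class Dog can inherit from PyAnimal<Dog> and
    // override go() with PYBIND11_OVERLOAD, so the forwarding boilerplate is
    // written once per method instead of once per class in the hierarchy.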
Functions returning specialized Eigen matrices like
Eigen::DiagonalMatrix and Eigen::SelfAdjointView, which inherit from
EigenBase but not DenseBase, aren't currently supported; such classes
are explicitly copyable into a Matrix (by definition), and so we can
support functions that return them by copying the value into a Matrix
and then casting that resulting dense Matrix into a numpy.ndarray. This
commit does exactly that.
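An example of the kind of function this enables (the binding itself is
invented):

    #include <pybind11/eigen.h>

    // Eigen::DiagonalMatrix derives from EigenBase but not DenseBase; it is
    // now copied into a dense Eigen::Matrix and returned as a numpy.ndarray.
    m.def("diag3", []() {
        return Eigen::DiagonalMatrix<double, 3>(1.0, 2.0, 3.0);
    });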
Sergey Lyskov pointed out that the trampoline mechanism used to override
virtual methods from within Python caused unnecessary overheads when
instantiating the original (i.e. non-extended) class.
This commit removes this inefficiency, but some syntax changes were
needed to achieve it. Projects using this feature will need to make a
few changes:
In particular, the example below shows the old syntax to instantiate a
class with a trampoline:
class_<TrampolineClass>("MyClass")
.alias<MyClass>()
....
This is what should be used now:
class_<MyClass, std::unique_ptr<MyClass>, TrampolineClass>("MyClass")
....
Importantly, the trampoline class is now specified as the *third*
argument to the class_ template, and the alias<..>() call is gone. The
second argument with the unique pointer is simply the default holder
type used by pybind11.
This somewhat heavyweight solution will avoid size_t/long long/long/int
mismatches on various platforms once and for all. The previous template
overloads could e.g. not handle size_t on Darwin.
One gotcha: the 'format_descriptor<T>::value()' syntax changed to just
'format_descriptor<T>::value'
- new pybind11::base<> attribute to indicate a subclass relationship
- unified infrastructure for parsing variadic arguments in class_ and cpp_function
- use 'handle' and 'object' more consistently everywhere
Previously, pybind11 required classes using std::shared_ptr<> to derive
from std::enable_shared_from_this<> (or compilation failures would ensue).
Everything now also works for classes that don't do this, assuming that
some basic rules are followed (e.g. never passing "raw" pointers to
instances managed by shared pointers). The safer
std::enable_shared_from_this<> approach continues to be supported.
This modification taps into some newer C++14 features (if present) to
generate function signatures considerably more efficiently at compile
time rather than at run time.
With this change, pybind11 binaries are now *2.1 times* smaller compared
to the Boost.Python baseline in the benchmark. Compilation times get a
nice improvement as well.
Visual Studio 2015 unfortunately doesn't implement 'constexpr' well
enough yet to support this change and uses a runtime fallback.
The cpp_function class accepts a variadic argument, which was formerly
processed twice -- once at registration time, and once in the dispatch
lambda function. This is not only unnecessarily slow but also leads to
code bloat since it adds to the object code generated for every bound
function. This change removes the second pass at dispatch time.
One noteworthy change of this commit is that default arguments are now
constructed (and converted to Python objects) right at declaration time.
Consider the following example:
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = SomeType(123));
In this case, the change means that pybind11 must already be set up to
deal with values of the type 'SomeType', or an exception will be thrown.
Another change is that the "preview" of the default argument in the
function signature is generated using the __repr__ special method. If
it is not available for the type in question, the signature may not be
very helpful, e.g.:
| myFunction(...)
| Signature : (MyClass, arg : SomeType = <SomeType object at 0x101b7b080>) -> None
One workaround (other than defining SomeType.__repr__) is to specify the
human-readable preview of the default argument manually using the more
cumbersome arg_t notation:
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg_t<SomeType>("arg", SomeType(123), "SomeType(123)"));
Using the object class to hold the converted object automatically
deallocates it if an exception is thrown or the scope is left before
the complete Python object is constructed.
Additionally, this adds an object::release() method that releases
ownership of the Python object without decreasing its reference count.