To fix a difficult-to-reproduce segfault on Python interpreter exit,
ensure that the tp_base field of a handful of new heap-types is
counted as a reference to that base type object.
This changes the pointer `cast()` in `PYBIND11_TYPE_CASTER` to recognize
the `take_ownership` policy: when casting a pointer with take-ownership,
`cast()` now invokes the chained `cast()` with a dereferenced rvalue
(rather than the previous code, which always called it with a const
lvalue reference), and deletes the pointer after the chained `cast()`
completes.
This makes code like:
m.def("f", []() { return new std::vector<int>(100, 1); },
py::return_value_policy::take_ownership);
do the expected thing by taking over ownership of the returned pointer
(which is deleted once the chained cast completes).
PR #936 broke the ability to return a pointer to an STL container (and,
likewise, to a tuple) because the deduced type it added matched a
non-const pointer argument: the pointer-accepting `cast` in
PYBIND11_TYPE_CASTER took a `const type *`, which is a worse match for a
non-const pointer than the universal reference template #936 added.
This changes the provided TYPE_CASTER cast(ptr) to take the pointer by
template argument (so that it accepts either a const or a non-const
pointer). It has two other effects: it slightly reduces .so size
(because many type casters never actually need the pointer cast at all),
and it allows type casters to provide their own untemplated pointer
`cast()` that will take precedence over the templated version provided
in the macro.
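Roughly, the pointer overload the macro now provides looks like this (a
simplified sketch, with the SFINAE guard abbreviated; `type` is the
caster's value type):

    template <typename T_, enable_if_t<std::is_same<type, remove_cv_t<T_>>::value, int> = 0>
    static handle cast(T_ *src, return_value_policy policy, handle parent) {
        if (!src) return none().release();
        if (policy == return_value_policy::take_ownership) {
            // chain to the rvalue cast(), then delete the now-owned pointer
            auto h = cast(std::move(*src), policy, parent);
            delete src;
            return h;
        }
        return cast(*src, policy, parent);
    }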
Currently select_cxx_standard(), which sets PYBIND11_CPP_STANDARD when
not externally set, is only called from pybind11_add_module(), but the
embed target setup (which runs unconditionally) makes use of
${PYBIND11_CPP_STANDARD}, which isn't set yet. This commit removes the
`select_cxx_standard` function completely and just always runs the
standard detection code.
This also tweaks the detection code to not bother checking for the
`-std=c++11` flag when the `-std=c++14` detection succeeds.
In a Debug build, MSVC doesn't apply copy/move elision as often,
triggering a test failure. This relaxes the test count requirements
to let the test suite pass.
The value and holder iterator code had a past-the-end iterator
dereference. While of course invalid, the dereference didn't actually
cause any problems (which is why it wasn't caught before) because the
dereferenced value is never actually used and `vector` implementations
appear to allow dereferencing the past-the-end iterator. Under an MSVC
debug build, however, it fails a debug assertion and aborts.
This amends the iterator to just store and use a pointer to the vector
(rather than adding a second past-the-end iterator member), checking the
type index against the type vector size.
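A minimal sketch of the pattern (hypothetical names, not the actual
pybind11 iterator):

    template <typename T>
    struct vec_iter {
        const std::vector<T> *vec;   // pointer to the container, not a stored end() iterator
        size_t index = 0;
        // compare the index against size() rather than dereferencing a
        // past-the-end iterator (which MSVC debug iterators reject)
        bool at_end() const { return index >= vec->size(); }
    };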
ICC was reporting that `try_direct_conversions()` cannot be `constexpr`
because `handle` is not a literal type. The fix removes `constexpr`
from the function since it isn't strictly needed.
This commit also suppresses new false positive warnings which mostly
appear in constexpr contexts (where the compiler knows conversions are
safe).
This updates the std::tuple, std::pair and `stl.h` type casters to
forward their contained value according to whether the container being
cast is an lvalue or rvalue reference. This fixes an issue where
subcaster casts were always called with a const lvalue, which meant
nested type casters didn't have the desired `cast()` overload invoked.
For example, this caused Eigen values in a tuple to end up with a
readonly flag (issue #935) and made it impossible to return a container
of move-only types (issue #853).
This fixes both issues by adding templated universal reference `cast()`
methods to the various container types that forward container elements
according to the container reference type.
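In simplified form, the container casters gain a universal reference
overload along these lines (a sketch of the approach rather than the
exact pybind11 code; `Ts...` are the element types):

    template <typename T>
    static handle cast(T &&src, return_value_policy policy, handle parent) {
        return cast_impl(std::forward<T>(src), policy, parent, make_index_sequence<sizeof...(Ts)>{});
    }

    template <typename T, size_t... Is>
    static handle cast_impl(T &&src, return_value_policy policy, handle parent, index_sequence<Is...>) {
        // std::get on the forwarded container yields rvalue elements when the
        // container itself is an rvalue, so each subcaster sees the correct
        // value category and can move rather than copy
        std::array<object, sizeof...(Ts)> entries {{ reinterpret_steal<object>(
            make_caster<Ts>::cast(std::get<Is>(std::forward<T>(src)), policy, parent))... }};
        for (const auto &entry : entries)
            if (!entry) return handle();   // a subcast failed
        tuple result(sizeof...(Ts));
        int counter = 0;
        for (auto &entry : entries)
            PyTuple_SET_ITEM(result.ptr(), counter++, entry.release().ptr());
        return result.release();
    }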
The std::pair caster can be written as a special case of the std::tuple
caster; this combines them via a base `tuple_caster` class (which is
essentially identical to the previous std::tuple caster).
This also removes the special empty tuple base case: returning an empty
tuple is relatively rare, and the base case still works perfectly well
even when the list of tuple types is empty.
When defining a method from a member function pointer (e.g. `.def("f",
&Derived::f)`) we run into a problem if `&Derived::f` is actually
implemented in some base class `Base` when `Base` isn't
pybind-registered.
This happens because the class type is deduced from the member function
pointer, which is then wrapped in a lambda whose first argument has this
deduced type. For a base class implementation, the deduced type is
`Base`, not `Derived`, and so we generate and register an overload that
takes a `Base *` as its first argument. Trying to call this fails if
`Base` isn't
registered (e.g. because it's an implementation detail class that isn't
intended to be exposed to Python) because the type caster for an
unregistered type always fails.
This commit adds a `method_adaptor` function that rebinds a member
function to a derived-type member function and otherwise (i.e. for
regular functions/lambdas) leaves the argument as-is. This is now used
for class definitions so that they are bound with the type being
registered rather than a potential base type.
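For illustration (hypothetical classes), the problematic situation looks
like:

    struct Base { int f() { return 42; } };   // implementation detail, never registered
    struct Derived : Base { };

    py::class_<Derived>(m, "Derived")
        .def("f", &Derived::f);   // &Derived::f is really `int (Base::*)()`; class_::def
                                  // now applies method_adaptor so the generated overload
                                  // takes a Derived rather than a Base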
A closely related fix in this commit is to similarly update the lambdas
used for `def_readwrite` (and related) to bind to the class type being
registered rather than the deduced type, so that registering a property
that resolves to a base class member similarly generates a usable
function.
Fixes #854, #910.
Co-Authored-By: Dean Moldovan <dean0x7d@gmail.com>
When casting to an unsigned type from a Python 2 `int`, we currently
cast using `(unsigned long long) PyLong_AsUnsignedLong(src.ptr())`.
If the Python cast fails, it returns `(unsigned long) -1`, but we then
cast this to `unsigned long long`, which yields 4294967295; since that
isn't equal to `(unsigned long long) -1`, we don't detect the failure.
This commit moves the unsigned casting into a `detail::as_unsigned`
function which, upon error, casts -1 to the final type, and otherwise
casts the return value to the final type to avoid the problematic double
cast when an error occurs.
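A simplified sketch of the idea (the real function also dispatches to
`PyLong_AsUnsignedLongLong` for types wider than `unsigned long`):

    template <typename Unsigned>
    Unsigned as_unsigned(PyObject *o) {
        unsigned long v = PyLong_AsUnsignedLong(o);
        // cast to the final type only after the error check, so the
        // (unsigned long) -1 error sentinel isn't widened into a "valid" value
        return (v == (unsigned long) -1 && PyErr_Occurred()) ? (Unsigned) -1 : (Unsigned) v;
    }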
The error most commonly shows up wherever `long` is 32 bits (e.g. under
both 32- and 64-bit Windows, and under 32-bit Linux) when passing a
negative value to a bound function taking an `unsigned long`.
Fixes #929.
The added tests also trigger a latent segfault under PyPy: when casting
to an integer smaller than `long` (e.g. casting to a `uint32_t` on a
64-bit `long` architecture) we check both for a Python error and also
that the resulting intermediate value will fit in the final type. If
there is no conversion error, but we get a value that would overflow, we
end up calling `PyErr_ExceptionMatches()` illegally: that call is only
allowed when there is a current exception. Under PyPy, this segfaults
the test suite. It doesn't appear to segfault under CPython, but the
documentation suggests that it *could* do so. The fix is to only check
for the exception match if we actually got an error.
gcc 7 is now in debian testing ("buster"), with a proper stable upstream
release; this updates the associated travis-ci build to use "buster" (rather
than "sid"), and removes the build from allow_failures.
This fixes #856. Instead of the weakref trick, the internals structure
holds an unordered_map from PyObject* to a vector of references. To
avoid the cost of the unordered_map lookup for objects that don't have
any keep_alive patients, a flag is added to each instance to indicate
whether there is anything to do.
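Sketched out, the bookkeeping is roughly as follows (field names may
differ from the actual code):

    struct internals {
        // ... existing members ...
        // maps a nurse object to the patients kept alive on its behalf
        std::unordered_map<PyObject *, std::vector<PyObject *>> patients;
    };

    struct instance {
        // ... existing members ...
        bool has_patients : 1;   // set when this instance has entries in the map
                                 // above, so the common case skips the lookup
    };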
Using `std::type_info::operator==` fails under libc++ because the .so
is loaded with RTLD_LOCAL. libc++ considers types in such .so files
distinct, and so comparing typeid() values directly isn't going to work.
This adds a custom hasher and equality class for the type lookup maps
when not under libstdc++, and adds a `detail::same_type` function to
perform the equality test. It also converts a few pointer arguments to
const lvalue references, particularly since doing the pointer
comparison wasn't technically valid to begin with (though in practice it
appeared to work everywhere).
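The equality test compares type names rather than `type_info` identity,
roughly:

    inline bool same_type(const std::type_info &lhs, const std::type_info &rhs) {
        // compare names (by pointer first, then by content); this works across
        // modules even when libc++ considers the type_info objects distinct
        return lhs.name() == rhs.name() || std::strcmp(lhs.name(), rhs.name()) == 0;
    }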
This fixes #912.
Fixes a race condition when multiple threads try to acquire the GIL
before `detail::internals` have been initialized. `gil_scoped_release`
is now tasked with initializing `internals` (guaranteed single-threaded)
to ensure the safety of subsequent `acquire` calls from multiple threads.
Fixes the issue as described in the comments of commit e27ea47. This
just adds `enable_if_t<std::is_move_constructible<T>::value>` to
`make_move_constructor`. The change fixes MSVC and is harmless with
other compilers.
CLion slows to a crawl when evaluating the intricate `PYBIND11_NUMPY_DTYPE`
macro. This commit replaces the macro cascade with a simple `(void)0`
to ease IDE evaluation.
Debian stretch was just released, so `debian:testing` and
`debian:stretch` are starting to diverge; this commit keeps the
travis-ci docker image on stretch for the gcc 6 and clang 3.9 builds.
Debian has also moved gcc 7 from experimental to unstable, so this
switches the gcc7 build to `sid`. Once it migrates to `testing` I'll
switch the gcc 7 build docker image to `testing` and take it out of
failure-allowed.
./tools/check-style.sh fails on stock OS X currently; this fixes it:
- use pipes directly rather than exec redirection (macOS's ancient
version of bash fails with the latter)
- macOS's ancient bash doesn't support '\e' escapes in `echo -e`;
replace with \033 instead
- BSD grep doesn't support GREP_COLORS, but does allow GREP_COLOR.
Adding both doesn't hurt GNU grep: GREP_COLOR is deprecated, and won't
be used when GREP_COLORS is set.
- BSD grep doesn't collapse multiple /'s in the listed filename, so
failures under `include/` would show up as
`include//pybind11/whatever.h`. This removes the / from the include
directory argument.
Minor other changes:
- The CRLF detection runs with -l, so GREP_COLORS wasn't doing
anything; removed it.
- The trailing whitespace test would trigger on CRLFs, but the CR would
result in messed up output. Changed the test to just match trailing
spaces and tabs, rather than all whitespace.
The clang 4.0/cpp17 build wasn't enabling -flto because the system
linker didn't like the output generated by clang for some reason. This
switches the build to use llvm's lld instead, which lets -flto work
again (and links considerably faster, too).
numpy 1.13.0 fails with pypy 5.7.1, so this upgrades to 5.8.0. I've
also uploaded pre-built .whl files to imaginary.ca (checked every 4
hours and rebuilt if needed), and list that as an extra pypi location
under the pypy pip install to avoid the long travis pypy build times for
a new release or branch.
This commit allows multiple inheritance of pybind11 classes from
Python, e.g.
    class MyType(Base1, Base2):
        def __init__(self):
            Base1.__init__(self)
            Base2.__init__(self)
where Base1 and Base2 are pybind11-exported classes.
This requires collapsing the various builtin base objects
(pybind11_object_56, ...) introduced in 2.1 into a single
pybind11_object of a fixed size; this fixed size object allocates enough
space to contain either a simple object (one base class & small* holder
instance), or a pointer to a new allocation that can contain an
arbitrary number of base classes and holders, with holder size
unrestricted.
* "small" here means having a sizeof() of at most 2 pointers, which is
enough to fit unique_ptr (sizeof is 1 ptr) and shared_ptr (sizeof is 2
ptrs).
To minimize the performance impact, this repurposes
`internals::registered_types_py` to store a vector of pybind-registered
base types. For direct-use pybind types (e.g. the `PyA` for a C++ `A`)
this is simply storing the same thing as before, but now in a vector;
for Python-side inherited types, the map lets us avoid having to do a
base class traversal as long as we've seen the class before. The
change to vector is needed for multiple inheritance: Python types
inheriting from multiple registered bases have one entry per base.
Fixes #896.
From Python docs: "Once an iterator’s `__next__()` method raises
`StopIteration`, it must continue to do so on subsequent calls.
Implementations that do not obey this property are deemed broken."
Passing UTF-8 encoded strings from Python to a C++ function taking a
std::string was broken. The previous version tried to call
`PyUnicode_FromObject` on this data, which failed to convert the string
to unicode with the default ascii codec; it also incurred an unnecessary
conversion to unicode for data that is immediately converted back to
UTF-8.
Fix by treating Python 2 strings the same as Python 3 bytes objects, and
just copying over the data when possible.
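The copy path is essentially (a sketch; `value` is the caster's stored
std::string, and the real code still handles actual unicode objects
separately):

    // On Python 2 a `str` is already a bytes object, so copy its buffer
    // directly instead of round-tripping through unicode
    char *buffer = nullptr;
    Py_ssize_t length = 0;
    if (PyBytes_AsStringAndSize(src.ptr(), &buffer, &length) == 0)
        value = std::string(buffer, (size_t) length);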
libc++ 3.8 (and possibly others, including the derived version on OS X)
doesn't define the macro, but does support std::experimental::optional.
This removes the extra macro check and just assumes that the header's
existence is enough, which is what we do for <optional> and <variant>.
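The detection now keys on the header alone, roughly:

    #if defined(__has_include)
    #  if __has_include(<experimental/optional>)
    #    define PYBIND11_HAS_EXP_OPTIONAL 1
    #  endif
    #endif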
Py_Finalize could potentially invoke code that calls `get_internals()`,
which could create a new internals object if one didn't exist.
`finalize_interpreter()` didn't catch this because it only used the
pre-finalize interpreter pointer status; if this happens, it results in
the internals pointer not being properly destroyed with the interpreter,
which leaks, and also causes a `get_internals()` under a future
interpreter to return an internals object that is wrong in various ways.
`accessor` currently relies on an implicit default copy constructor, but that is deprecated in C++11 when a copy assignment operator is present and can, in some cases, raise deprecation warnings (see #888). This commit explicitly specifies the default copy constructor and also adds a default move constructor.
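The gist of the change (sketch):

    accessor(const accessor &) = default;   // explicitly defaulted copy constructor, so the
                                            // deprecated implicit one is no longer relied upon
    accessor(accessor &&) = default;        // newly added defaulted move constructor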