This adds VS 2017 to the build matrix, plus various other small
appveyor build changes:
- conda version bumped from 3.5 to 3.6
- build newer versions/architectures/python first (i.e. VS2017/x64/3.6
is the first build, VS2015/x86/2.7 is the last)
- stop building after a job failure: a build failure in one job usually
occurs in all of them, so this just stops processing the remaining jobs
(freeing them up for other PRs) once an error is hit.
Annoyingly, appveyor doesn't allow excluding tests: i.e. the test matrix
is always dense (appveyor issue 386), so for now we'll just run
everything. (Once appveyor issue 386 is resolved, we can come back and
cut this down to 4-5 builds).
Allows use of vectors as python buffers, so, for example, they can be adopted without a copy by numpy.asarray.
Allows faster conversion of buffers to vectors by copying instead of individually casting the elements.
* Add value_type member alias to py::array_t (resolves #632)
* Use numpy scalar names in py::array_t function signatures (e.g. float32/64 instead of just float)
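For illustration, a minimal sketch of the new usage (assuming `m` is the module being defined and the stl_bind/numpy headers are included; the class name is arbitrary):
```c++
// Vectors bound with py::buffer_protocol() expose their storage, so
// numpy.asarray(v) can adopt the data without a copy.
py::bind_vector<std::vector<float>>(m, "VectorFloat", py::buffer_protocol());

// The new value_type alias mirrors the container interface:
static_assert(std::is_same<py::array_t<float>::value_type, float>::value,
              "array_t now exposes its element type");
```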
The `decltype(...)` in the template parameter that gives us SFINAE
matching for a lambda triggers an internal compiler error (ICE) in MSVC
2017; this works around it by changing the test to an explicit
not-a-function-or-pointer test, which seems to work everywhere.
RTD updated their build environment which broke the 1.8.14.dev build of
doxygen that we were using. The update also breaks the conda-forge build
of 1.8.13 (but that version has other issues).
Luckily, the RTD update did bring their doxygen version up to 1.8.11
which is enough to parse the C++11 code we need (ref qualifiers) and it
also avoids the segfault found in 1.8.13.
Since we're using the native doxygen, conda isn't required anymore and
we can simplify the RTD configuration.
[skip ci]
Some versions of Python 2.7 reportedly (#713) have issues with
PyUnicode_Decode being passed the encoding string, so just skip it
entirely by calling the PyUnicode_DecodeUTF* function directly. This
will also be slightly more efficient by avoiding having to check the
encoding string, and (for python 2) going through the unicode class's
decode (python 3 fast-tracks this for all utf-{8,16,32} encodings;
python 2 only fast-tracks the exact string "utf-8", which we weren't
passing anyway (we had "utf8")).
This doesn't work for PyPy, however: its `PyUnicode_DecodeUTF{8,16,32}`
appear rather broken: the UTF8 one segfaults, while the 16/32 require
recasting into a non-const `char *` (and might segfault; I didn't get
far enough to find out). Just avoid the whole thing by keeping the
encoding-passed-as-string version for PyPy, which seems to work
reliably.
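A rough sketch of the direct-dispatch idea used on CPython (illustrative only, not the caster's actual code; error handling omitted):
```c++
// Select the decoder by character width instead of passing an encoding name
// to PyUnicode_Decode. nullptr for `errors` means strict error handling;
// nullptr for `byteorder` selects native order.
PyObject *decode_utf(const char *buf, Py_ssize_t nbytes, int char_size) {
    if (char_size == 1)
        return PyUnicode_DecodeUTF8(buf, nbytes, nullptr);
    if (char_size == 2)
        return PyUnicode_DecodeUTF16(buf, nbytes, nullptr, nullptr);
    return PyUnicode_DecodeUTF32(buf, nbytes, nullptr, nullptr);
}
```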
The duration calculation was using %, but that's only supported on
duration objects when the arithmetic type supports %, and hence fails
for floats. Fixed by subtracting off the calculated values instead.
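For example, the % approach only compiles for integral representations, while subtraction works for both (a standalone illustration, not the caster code itself):
```c++
#include <chrono>
using namespace std::chrono;

void split(duration<double> d) {            // floating-point representation
    auto h = duration_cast<hours>(d);
    auto m = duration_cast<minutes>(d - h);
    auto s = duration_cast<seconds>(d - h - m);
    // auto r = d % hours(1);               // ill-formed: double has no operator%
    (void) h; (void) m; (void) s;
}
```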
* Add `pytest.ini` config file and set default options there instead of
in `CMakeLists.txt` (command line arguments).
* Change all output capture from `capfd` (filedescriptors) to `capsys`
(Python's `sys.stdout` and `sys.stderr`). This avoids capturing
low-level C errors, e.g. from the debug build of Python.
* Set pytest minimum version to 3.0 to make it easier to use new
features. Removed conditional use of `excinfo.match()`.
* Clean up some leftover function-level `@pytest.requires_numpy`.
Nightlies for pypy no longer run on Ubuntu 12.04; change the pypy build
distribution to the travis-ci trusty (i.e. 14.04) beta container.
The pypy build was also installing numpy and scipy for the *system*
python version, which was pointless; this also adds a guard to the
eigen/numpy/scipy install code with a !PYPY check.
When using pybind::options to disable function signatures, user-defined
docstrings only get appended if they exist, but newlines were getting
appended unconditionally, so the docstring could end up with blank lines
(depending on which overloads, in particular, provided docstrings).
This commit suppresses the empty lines by only adding newlines for
overloads when needed.
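A small example of the affected configuration (assuming `m` is the module; behaviour as described above):
```c++
py::options options;
options.disable_function_signatures();

// Only the second overload provides a docstring; the combined docstring no
// longer picks up blank lines for the first one.
m.def("add", [](int a, int b) { return a + b; });
m.def("add", [](double a, double b) { return a + b; },
      "Add two floating-point numbers.");
```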
This makes array_t respect overload resolution and noconvert by failing
to load when `convert = false` if the src isn't already an array of the
correct type.
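For example, a binding like the following (assuming `m` is the module) now only matches an existing float64 array instead of silently converting lists or differently-typed arrays:
```c++
m.def("sum_doubles", [](py::array_t<double> a) {
    double total = 0;
    for (py::ssize_t i = 0; i < a.size(); i++)
        total += a.data()[i];           // assumes a dense 1-D array for brevity
    return total;
}, py::arg("a").noconvert());
```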
Added in 6fb48490ef
The second constructor can't be doing anything--the signatures are
exactly the same, and so the first is always going to be the one
invoked by the dispatcher.
Commit 11a337f1 added major and minor python version checks to cast.h,
but did not use the macros defined via the Python.h inclusion. This may
have been due to an intention to use the variables defined by the cmake
module FindPythonInterp, but nothing in the pybind11 repo converts those
cmake variables into preprocessor defines.
* The definition of `PySequence_Fast` is more restrictive on PyPy, so
use the slow path instead.
* `PyDict_Next` has been fixed in PyPy -> remove workaround.
Before this, `py::iterator` didn't do any error handling, so code like:
```c++
for (auto item : py::int_(1)) {
// ...
}
```
would just silently skip the loop. The above now throws `TypeError` as
expected. This is a breaking behavior change, but any code which relied
on the silent skip was probably broken anyway.
Also, errors returned by `PyIter_Next()` are now properly handled.
test_eigen.py and test_numpy_*.py have the same
@pytest.requires_eigen_and_numpy or @pytest.requires_numpy on every
single test; this changes them to use pytest's global `pytestmark = ...`
instead to disable the entire module when numpy and/or eigen aren't
available.
This commit largely rewrites the Eigen dense matrix support to avoid
copying in many cases: Eigen arguments can now reference numpy data, and
numpy objects can now reference Eigen data (given compatible types).
Eigen::Ref<...> arguments now also make use of the new `convert`
flag (added in PR #634) to avoid conversion, allowing
`py::arg().noconvert()` to be used when binding a function to prohibit
copying when invoking the function. Respecting `convert` also means
Eigen overloads that avoid copying will be preferred during overload
resolution to ones that require copying.
This commit also rewrites the Eigen documentation and test suite to
explain and test the new capabilities.
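As a hedged sketch of the kind of binding this enables (assuming `m` is the module and `<pybind11/eigen.h>` is included):
```c++
// A mutable Eigen::Ref argument operates directly on the numpy array's
// memory; noconvert() rejects anything that would require a converting copy
// (here: only a writeable, column-major float64 array is accepted).
m.def("scale",
      [](Eigen::Ref<Eigen::MatrixXd> mat, double factor) { mat *= factor; },
      py::arg("mat").noconvert(), py::arg("factor"));
```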
Eigen::Ref objects, when returned, are almost always returned as
rvalues; what's important is the data they reference, not the outer
shell, and so we want to be able to use `::copy`,
`::reference_internal`, etc. to refer to the data the Eigen::Ref
references (in the following commits), rather than the Eigen::Ref
instance itself.
This moves the policy override into a struct so that code that wants to
avoid it (or wants to provide some other Return-type-conditional
override) can create a specialization of
return_value_policy_override<Return> in order to override the override.
This lets an Eigen::Ref-returning function be bound with `rvp::copy`,
for example, to specify that the data should be copied into a new numpy
array rather than referenced, or `rvp::reference_internal` to indicate
that it should be referenced, but a keep-alive used (actually, we used
the array's `base` rather than a py::keep_alive in such a case, but it
accomplishes the same thing).
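A sketch of the resulting usage (the `Holder` class is hypothetical; assumes `m` is the module):
```c++
struct Holder { Eigen::MatrixXd mat; };

py::class_<Holder>(m, "Holder")
    .def("col",
         [](Holder &h, int i) -> Eigen::Ref<Eigen::VectorXd> { return h.mat.col(i); },
         py::return_value_policy::reference_internal)  // array references h.mat;
                                                       // `h` is kept alive via the array's base
    .def("col_copy",
         [](Holder &h, int i) -> Eigen::Ref<Eigen::VectorXd> { return h.mat.col(i); },
         py::return_value_policy::copy);               // array owns an independent copy
```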
Numpy raises ValueError when attempting to modify a read-only array, while
py::array was raising a RuntimeError. This changes the exception to a
std::domain_error, which gets mapped to the expected ValueError in
python.
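For illustration, user code relying on this mapping might look like the following (the function is hypothetical; assumes `m` is the module):
```c++
m.def("fill", [](py::array a, double value) {
    if (!a.writeable())
        throw std::domain_error("array is read-only");  // surfaces as ValueError in Python
    // ... write `value` into the array ...
});
```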
numpy arrays aren't currently properly setting base: by setting `->base`
directly, the base doesn't follow what numpy expects and documents (that
is, following chained array bases to the root array).
This fixes the behaviour by using numpy's PyArray_SetBaseObject to set
the base instead, and then updates the tests to reflect the fixed
behaviour.
A few of pybind's numpy constants are using the numpy-deprecated names
(without "ARRAY_" in them); updated our names to be consistent with
current numpy code.
`is_template_base_of<T>` fails when `T` is `const` (because its
implementation relies on being able to convert a `T*` to a `Base<U>*`,
which won't work when `T` is const).
(This also agrees with std::is_base_of, which ignores cv qualification.)
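(For reference, a standalone check of the analogous standard-library behaviour:)
```c++
#include <type_traits>

struct B {};
struct D : B {};
static_assert(std::is_base_of<B, const D>::value,
              "is_base_of ignores cv-qualification on the derived type");
```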
Currently when we do a conversion between a numpy array and an Eigen
Vector, we allow the conversion only if the Eigen type is a
compile-time vector (i.e. at least one dimension is fixed at 1 at
compile time), or if the type is dynamic on *both* dimensions.
This means we can run into cases where a fully dynamic MatrixXd allows
things that conforming, compile-time sizes do not: for example,
`Matrix<double,4,Dynamic>` is currently not allowed, even when assigning
from a 4-element vector, but it *is* allowed for a
`Matrix<double,Dynamic,Dynamic>`.
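For example, a binding such as the following (assuming `m` is the module) now accepts a length-4 numpy vector, just as the fully dynamic equivalent already did:
```c++
m.def("ncols",
      [](const Eigen::Matrix<double, 4, Eigen::Dynamic> &mat) { return mat.cols(); });
```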
This commit also changes the current behaviour of using the matrix's
storage order to determine the structure when the Matrix is fully
dynamic (i.e. in both dimensions). Currently we produce an Eigen row
vector if the storage order is row-major, and a column vector otherwise:
this seems wrong (the storage order has nothing to do with the shape!).
While numpy doesn't distinguish between a row and a column vector, Eigen
does, and it makes more sense to consistently choose one than to produce
something with a different shape based on the intended storage layout.
With the previous commit, output can be very confusing because you only
see positional arguments in the "invoked with" line, but the failure can
come from kwargs as well (in particular, when a value is invalidly
specified both positionally and via kwargs). This commit adds the
kwargs to the output, and updates the associated tests to match.
* Make tests buildable independently
This makes "tests" buildable as a separate project that uses
find_package(pybind11 CONFIG) when invoked independently.
This also moves the WERROR option into tests/CMakeLists.txt, as that's
the only place it is used.
* Use Eigen 3.3.1's cmake target, if available
This changes the eigen finding code to attempt to use Eigen's
system-installed Eigen3Config first. In Eigen 3.3.1, it exports a cmake
Eigen3::Eigen target to get dependencies from (rather than setting the
include path directly).
If that fails, we fall back to trying to load in module mode (i.e.
allowing our tools/FindEigen3.cmake to be used). If we either fall back,
or the Eigen version is older than 3.3.1, we still set the include
directory manually; otherwise, for CONFIG mode with a new enough Eigen,
we get it via the target.
This is also needed to allow 'tests' to be built independently, when
find_package(Eigen3) is going to find Eigen via the system-installed
Eigen3Config.cmake.
* Add a install-then-build test, using clang on linux
This tests that `make install` to the actual system, followed by a build
of the tests (without the main pybind11 repository available) works as
expected.
To expand the testing variety a bit, this job also builds using
clang-3.9 instead of gcc.
* Don't try loading Eigen3Config in cmake < 3.0
It could fail with a FATAL_ERROR because the newer Eigen3Config.cmake
includes a cmake-3.0-required line.
If doing an independent, out-of-tree "tests" build, the regular
find_package(Eigen3) is likely to fail with the same error, but I think
we can just let that be: if you want a recent Eigen with proper cmake
loading support *and* want to do an independent tests build, you'll
need at least cmake 3.0.
* Make string conversion stricter
The string conversion logic added in PR #624 for all std::basic_strings
was derived from the old std::wstring logic, but that was underused and
turns out to have had a bug in accepting almost anything convertible to
unicode, while the previous std::string logic was much stricter. This
restores the previous std::string logic by only allowing actual unicode
or string types.
Fixes #685.
* Added missing 'requires numpy' decorator
(I forgot that the change to a global decorator here is in the
not-yet-merged Eigen PR)
Now that only one shared metaclass is ever allocated, it's extremely
cheap to enable it for all pybind11 types.
* Deprecate the default py::metaclass() since it's not needed anymore.
* Allow users to specify a custom metaclass via py::metaclass(handle).
In order to fully satisfy Python's inheritance type layout requirements,
all types should have a common 'solid' base. A solid base is one which
has the same instance size as the derived type (not counting the space
required for the optional `dict_ptr` and `weakrefs_ptr`). Thus, `object`
does not qualify as a solid base for pybind11 types and this can lead to
issues with multiple inheritance.
To get around this, new base types are created: one per unique instance
size. There are going to be very few of these bases, and they ensure
that Python's MRO checks will pass when multiple bases are involved.
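For instance, a multiple-inheritance hierarchy like the following (types are illustrative; assumes `m` is the module) relies on those checks passing:
```c++
struct A { };
struct B { };
struct C : A, B { };

py::class_<A>(m, "A");
py::class_<B>(m, "B");
py::class_<C, A, B>(m, "C");   // layout/MRO check needs a common solid base
```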
Instead of creating a new unique metaclass for each type, the builtin
`property` type is subclassed to support static properties. The new
setters and getters always receive the type rather than an instance as
their `self` argument. A metaclass is still required to support this behavior, but
it doesn't store any data anymore, so a new one doesn't need to be
created for each class. There is now only one common metaclass which
is shared by all pybind11 types.
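A short example of a static property under the new scheme (the class and value are illustrative; assumes `m` is the module):
```c++
struct Config { };

py::class_<Config>(m, "Config")
    .def_property_readonly_static("answer",
                                  [](py::object /* cls */) { return 42; });
// The getter receives the type object (not an instance) as its argument.
```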