.. _numpy:

NumPy
#####

Buffer protocol
===============

Python supports an extremely general and convenient approach for exchanging
data between plugin libraries. Types can expose a buffer view [#f2]_, which
provides fast direct access to the raw internal data representation. Suppose we
want to bind the following simplistic Matrix class:

.. code-block:: cpp

    class Matrix {
    public:
        Matrix(size_t rows, size_t cols) : m_rows(rows), m_cols(cols) {
            m_data = new float[rows*cols];
        }
        float *data() { return m_data; }
        size_t rows() const { return m_rows; }
        size_t cols() const { return m_cols; }
    private:
        size_t m_rows, m_cols;
        float *m_data;
    };

The following binding code exposes the ``Matrix`` contents as a buffer object,
making it possible to cast Matrices into NumPy arrays. It is even possible to
completely avoid copy operations with Python expressions like
``np.array(matrix_instance, copy = False)``.

.. code-block:: cpp

    py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
        .def_buffer([](Matrix &m) -> py::buffer_info {
            return py::buffer_info(
                m.data(),                               /* Pointer to buffer */
                sizeof(float),                          /* Size of one scalar */
                py::format_descriptor<float>::format(), /* Python struct-style format descriptor */
                2,                                      /* Number of dimensions */
                { m.rows(), m.cols() },                 /* Buffer dimensions */
                { sizeof(float) * m.cols(),             /* Strides (in bytes) for each index */
                  sizeof(float) }
            );
        });

Supporting the buffer protocol in a new type involves specifying the special
``py::buffer_protocol()`` tag in the ``py::class_`` constructor and calling the
``def_buffer()`` method with a lambda function that creates a
``py::buffer_info`` description record on demand for a given matrix
instance. The contents of ``py::buffer_info`` mirror the Python buffer protocol
specification.

.. code-block:: cpp

    struct buffer_info {
        void *ptr;                      /* Pointer to the underlying data */
        ssize_t itemsize;               /* Size of one scalar in bytes */
        std::string format;             /* Python struct-style format descriptor */
        ssize_t ndim;                   /* Number of dimensions */
        std::vector<ssize_t> shape;     /* Size of each dimension */
        std::vector<ssize_t> strides;   /* Step size (in bytes) for each dimension */
    };

To create a C++ function that can take a Python buffer object as an argument,
simply use the type ``py::buffer`` as one of its arguments. Buffers can exist
in a great variety of configurations, hence some safety checks are usually
necessary in the function body. Below, you can see a basic example of how to
define a custom constructor for the Eigen double precision matrix
(``Eigen::MatrixXd``) type, which supports initialization from compatible
buffer objects (e.g. a NumPy matrix).

.. code-block:: cpp

    /* Bind MatrixXd (or some other Eigen type) to Python */
    typedef Eigen::MatrixXd Matrix;

    typedef Matrix::Scalar Scalar;
    constexpr bool rowMajor = Matrix::Flags & Eigen::RowMajorBit;

    py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
        .def("__init__", [](Matrix &m, py::buffer b) {
            typedef Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic> Strides;

            /* Request a buffer descriptor from Python */
            py::buffer_info info = b.request();

            /* Some sanity checks ... */
            if (info.format != py::format_descriptor<Scalar>::format())
                throw std::runtime_error("Incompatible format: expected a double array!");

            if (info.ndim != 2)
                throw std::runtime_error("Incompatible buffer dimension!");

            auto strides = Strides(
                info.strides[rowMajor ? 0 : 1] / (py::ssize_t)sizeof(Scalar),
                info.strides[rowMajor ? 1 : 0] / (py::ssize_t)sizeof(Scalar));

            auto map = Eigen::Map<Matrix, 0, Strides>(
                static_cast<Scalar *>(info.ptr), info.shape[0], info.shape[1], strides);

            new (&m) Matrix(map);
        });

For reference, the ``def_buffer()`` call for this Eigen data type should look
as follows:

.. code-block:: cpp

    .def_buffer([](Matrix &m) -> py::buffer_info {
        return py::buffer_info(
            m.data(),                                /* Pointer to buffer */
            sizeof(Scalar),                          /* Size of one scalar */
            py::format_descriptor<Scalar>::format(), /* Python struct-style format descriptor */
            2,                                       /* Number of dimensions */
            { m.rows(), m.cols() },                  /* Buffer dimensions */
            { sizeof(Scalar) * (rowMajor ? m.cols() : 1),
              sizeof(Scalar) * (rowMajor ? 1 : m.rows()) }
                                                     /* Strides (in bytes) for each index */
        );
    })

For a much easier approach to binding Eigen types (although with some
limitations), refer to the section on :doc:`/advanced/cast/eigen`.
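
As a quick taste of that alternative, the following is a minimal sketch; it only
assumes that the ``pybind11/eigen.h`` type caster header is available. With it
included, functions can accept and return Eigen matrices directly and pybind11
handles the NumPy conversion behind the scenes:

.. code-block:: cpp

    #include <pybind11/eigen.h>

    /* With the Eigen type caster loaded, no manual buffer handling is needed:
       a NumPy array passed from Python is converted to an Eigen::MatrixXd and
       the returned matrix is converted back to a NumPy array. */
    m.def("scale", [](const Eigen::MatrixXd &mat, double factor) {
        return Eigen::MatrixXd(mat * factor);
    });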

.. seealso::

    The file :file:`tests/test_buffers.cpp` contains a complete example
    that demonstrates using the buffer protocol with pybind11 in more detail.

.. [#f2] http://docs.python.org/3/c-api/buffer.html

Arrays
======

By exchanging ``py::buffer`` with ``py::array`` in the above snippet, we can
restrict the function so that it only accepts NumPy arrays (rather than any
type of Python object satisfying the buffer protocol).
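
For illustration, here is a minimal sketch of a binding that accepts a
``py::array``, requests its buffer descriptor, and performs the same kind of
sanity checks as the constructor above (the ``sum_buffer`` name is made up for
this example):

.. code-block:: cpp

    /* Accepts any NumPy array; other buffer-like objects are rejected */
    m.def("sum_buffer", [](py::array a) {
        py::buffer_info info = a.request();
        if (info.format != py::format_descriptor<double>::format() || info.ndim != 1)
            throw std::runtime_error("Expected a one-dimensional double array");

        /* Strides are given in bytes, hence the char* arithmetic */
        const char *data = static_cast<const char *>(info.ptr);
        double sum = 0;
        for (py::ssize_t i = 0; i < info.shape[0]; i++)
            sum += *reinterpret_cast<const double *>(data + i * info.strides[0]);
        return sum;
    });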

In many situations, we want to define a function which only accepts a NumPy
array of a certain data type. This is possible via the ``py::array_t<T>``
template. For instance, the following function requires the argument to be a
NumPy array containing double precision values:

.. code-block:: cpp

    void f(py::array_t<double> array);

When it is invoked with a different type (e.g. an integer or a list of
integers), the binding code will attempt to cast the input into a NumPy array
of the requested type. Note that this feature requires the
:file:`pybind11/numpy.h` header to be included.
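
If this implicit conversion is undesirable for a particular binding, it can be
disabled per argument. A minimal sketch, reusing the ``f`` declared above (the
``f_strict`` name is made up for this example):

.. code-block:: cpp

    /* The default binding converts lists, integer arrays, etc. to a double array */
    m.def("f", &f);

    /* With .noconvert(), mismatched inputs raise a TypeError instead of being cast
       (the same mechanism appears again in the "Direct access" section below) */
    m.def("f_strict", &f, py::arg("array").noconvert());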

Data in NumPy arrays is not guaranteed to be packed in a dense manner;
furthermore, entries can be separated by arbitrary column and row strides.
Sometimes, it can be useful to require a function to only accept dense arrays
using either the C (row-major) or Fortran (column-major) ordering. This can be
accomplished via a second template argument with values ``py::array::c_style``
or ``py::array::f_style``.

.. code-block:: cpp

    void f(py::array_t<double, py::array::c_style | py::array::forcecast> array);

The ``py::array::forcecast`` argument is the default value of the second
template parameter, and it ensures that non-conforming arguments are converted
into an array satisfying the specified requirements instead of trying the next
function overload.
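
One practical benefit of requesting ``c_style`` storage is that the data is
then guaranteed to be dense and row-major, so it can be traversed with plain
pointer arithmetic. A small sketch (the ``sum_dense`` name is illustrative
only):

.. code-block:: cpp

    /* c_style | forcecast guarantees a dense, row-major double array,
       so the elements can be summed with a single linear pass */
    double sum_dense(py::array_t<double, py::array::c_style | py::array::forcecast> array) {
        const double *ptr = array.data();   // contiguous storage
        double sum = 0;
        for (py::ssize_t i = 0; i < array.size(); i++)
            sum += ptr[i];
        return sum;
    }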

Structured types
================

In order for ``py::array_t`` to work with structured (record) types, we first
need to register the memory layout of the type. This can be done via the
``PYBIND11_NUMPY_DTYPE`` macro, called in the plugin definition code, which
expects the type followed by field names:

.. code-block:: cpp

    struct A {
        int x;
        double y;
    };

    struct B {
        int z;
        A a;
    };

    // ...

    PYBIND11_PLUGIN(test) {
        // ...

        PYBIND11_NUMPY_DTYPE(A, x, y);
        PYBIND11_NUMPY_DTYPE(B, z, a);
        /* now both A and B can be used as template arguments to py::array_t */
    }
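
Once the dtypes are registered, arrays of these records can be passed to
functions like any other ``py::array_t``. A small sketch (the ``print_records``
name is made up for this example; the ``m.def`` call would sit in the same
plugin body as above, after the dtype registration):

.. code-block:: cpp

    m.def("print_records", [](py::array_t<A> arr) {
        if (arr.ndim() != 1)
            throw std::runtime_error("Expected a 1-D array of A records");
        for (py::ssize_t i = 0; i < arr.size(); i++) {
            const A &rec = *arr.data(i);   // pointer to the i-th record
            py::print(rec.x, rec.y);       // access the registered fields
        }
    });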

Vectorizing functions
=====================

Suppose we want to bind a function with the following signature to Python so
that it can process arbitrary NumPy array arguments (vectors, matrices, general
N-D arrays) in addition to its normal arguments:

.. code-block:: cpp

    double my_func(int x, float y, double z);

After including the ``pybind11/numpy.h`` header, this is extremely simple:

.. code-block:: cpp

    m.def("vectorized_func", py::vectorize(my_func));

Invoking the function as shown below causes four calls to be made to
``my_func``, one for each array element. The significant advantage of this
compared to solutions like ``numpy.vectorize()`` is that the loop over the
elements runs entirely on the C++ side and can be crunched down into a tight,
optimized loop by the compiler. The result is returned as a NumPy array of
type ``numpy.float64``.

.. code-block:: pycon

    >>> x = np.array([[1, 3], [5, 7]])
    >>> y = np.array([[2, 4], [6, 8]])
    >>> z = 3
    >>> result = vectorized_func(x, y, z)

The scalar argument ``z`` is transparently replicated 4 times. The input
arrays ``x`` and ``y`` are automatically converted into the right types (they
are of type ``numpy.int64`` but need to be ``numpy.int32`` and
``numpy.float32``, respectively).

Sometimes we might want to explicitly exclude an argument from the vectorization
because it makes little sense to wrap it in a NumPy array. For instance,
suppose the function signature was

.. code-block:: cpp

    double my_func(int x, float y, my_custom_type *z);

This can be done with a stateful lambda closure:

.. code-block:: cpp

    // Vectorize a lambda function with a capture object (e.g. to exclude some arguments from the vectorization)
    m.def("vectorized_func",
        [](py::array_t<int> x, py::array_t<float> y, my_custom_type *z) {
            auto stateful_closure = [z](int x, float y) { return my_func(x, y, z); };
            return py::vectorize(stateful_closure)(x, y);
        }
    );

In cases where the computation is too complicated to be reduced to
``vectorize``, it will be necessary to create and access the buffer contents
manually. The following snippet contains a complete example that shows how this
works (the code is somewhat contrived, since it could have been done more
simply using ``vectorize``).

.. code-block:: cpp

    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>

    namespace py = pybind11;

    py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
        auto buf1 = input1.request(), buf2 = input2.request();

        if (buf1.ndim != 1 || buf2.ndim != 1)
            throw std::runtime_error("Number of dimensions must be one");

        if (buf1.size != buf2.size)
            throw std::runtime_error("Input shapes must match");

        /* No pointer is passed, so NumPy will allocate the buffer */
        auto result = py::array_t<double>(buf1.size);

        auto buf3 = result.request();

        double *ptr1 = (double *) buf1.ptr,
               *ptr2 = (double *) buf2.ptr,
               *ptr3 = (double *) buf3.ptr;

        for (py::ssize_t idx = 0; idx < buf1.shape[0]; idx++)
            ptr3[idx] = ptr1[idx] + ptr2[idx];

        return result;
    }

    PYBIND11_PLUGIN(test) {
        py::module m("test");
        m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
        return m.ptr();
    }

.. seealso::

    The file :file:`tests/test_numpy_vectorize.cpp` contains a complete
    example that demonstrates using :func:`vectorize` in more detail.

Direct access
=============

For performance reasons, particularly when dealing with very large arrays, it
is often desirable to directly access array elements without internal checking
of dimensions and bounds on every access when indices are known to be already
valid. To avoid such checks, the ``array`` class and ``array_t<T>`` template
class offer an unchecked proxy object that can be used for this unchecked
access through the ``unchecked<N>`` and ``mutable_unchecked<N>`` methods,
where ``N`` gives the required dimensionality of the array:

.. code-block:: cpp

    m.def("sum_3d", [](py::array_t<double> x) {
        auto r = x.unchecked<3>(); // x must have ndim = 3; can be non-writeable
        double sum = 0;
        for (ssize_t i = 0; i < r.shape(0); i++)
            for (ssize_t j = 0; j < r.shape(1); j++)
                for (ssize_t k = 0; k < r.shape(2); k++)
                    sum += r(i, j, k);
        return sum;
    });
    m.def("increment_3d", [](py::array_t<double> x) {
        auto r = x.mutable_unchecked<3>(); // Will throw if ndim != 3 or flags.writeable is false
        for (ssize_t i = 0; i < r.shape(0); i++)
            for (ssize_t j = 0; j < r.shape(1); j++)
                for (ssize_t k = 0; k < r.shape(2); k++)
                    r(i, j, k) += 1.0;
    }, py::arg().noconvert());

To obtain the proxy from an ``array`` object, you must specify both the data
type and number of dimensions as template arguments, such as ``auto r =
myarray.mutable_unchecked<float, 2>()``.

If the number of dimensions is not known at compile time, you can omit the
dimensions template parameter (i.e. calling ``arr_t.unchecked()`` or
``arr.unchecked<T>()``). This will give you a proxy object that works in the
same way, but results in less optimizable code and thus a small efficiency
loss in tight loops.
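
For illustration, a minimal sketch of the runtime-dimension variant (the
``sum_2d_dynamic`` name is made up for this example):

.. code-block:: cpp

    m.def("sum_2d_dynamic", [](py::array_t<double> x) {
        auto r = x.unchecked();        // dimensions determined at runtime
        if (r.ndim() != 2)
            throw std::runtime_error("Expected a 2-D array");
        double sum = 0;
        for (ssize_t i = 0; i < r.shape(0); i++)
            for (ssize_t j = 0; j < r.shape(1); j++)
                sum += r(i, j);        // same indexing syntax as the fixed-dimension proxy
        return sum;
    });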

Note that the returned proxy object directly references the array's data, and
only reads its shape, strides, and writeable flag when constructed. You must
take care to ensure that the referenced array is not destroyed or reshaped for
the duration of the returned object, typically by limiting the scope of the
returned instance.

The returned proxy object supports some of the same methods as ``py::array`` so
that it can be used as a drop-in replacement for some existing, index-checked
uses of ``py::array``:

- ``r.ndim()`` returns the number of dimensions.

- ``r.data(1, 2, ...)`` and ``r.mutable_data(1, 2, ...)`` return a pointer to
  the ``const T`` or ``T`` data, respectively, at the given indices. The
  latter is only available to proxies obtained via ``a.mutable_unchecked()``.

- ``r.itemsize()`` returns the size of an item in bytes, i.e. ``sizeof(T)``.

- ``r.shape(n)`` returns the size of dimension ``n``.

- ``r.size()`` returns the total number of elements (i.e. the product of the shapes).

- ``r.nbytes()`` returns the number of bytes used by the referenced elements
  (i.e. ``itemsize()`` times ``size()``).
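
As a small illustration (``describe`` is a made-up name), these accessors can
be combined to inspect an array without any index checking:

.. code-block:: cpp

    m.def("describe", [](py::array_t<double> x) {
        auto r = x.unchecked<2>();
        py::print("ndim:", r.ndim(),
                  "rows:", r.shape(0),
                  "cols:", r.shape(1),
                  "elements:", r.size(),
                  "bytes:", r.nbytes());
    });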

.. seealso::

    The file :file:`tests/test_numpy_array.cpp` contains additional examples
    demonstrating the use of this feature.