Compare commits

...

25 Commits

Author SHA1 Message Date
Francesco Rizzi
73fd5a97e0
Merge 8917a1e9b3 into 83b92ceb35 2024-11-21 20:04:37 +08:00
Michael Šimáček
83b92ceb35
Try to fix reentrant write transient failures in tests (#5447)
* Disable print_destroyed in tests on GraalPy

* Reenable test_iostream on GraalPy
2024-11-18 14:39:59 -08:00
Ralf W. Grosse-Kunstleve
330aae51cf
Remove mingw-w64-i686-python-numpy from mingw32 build (it does not seem to exist anymore). (#5445)
Last successful: Sat, 16 Nov 2024 18:25:21 GMT

First failure: Sat, 16 Nov 2024 21:43:28 GMT

```
Installing additional packages through pacman...
  C:\Windows\system32\cmd.exe /D /S /C D:\a\_temp\setup-msys2\msys2.cmd -c "'pacman' '--noconfirm' '-S' '--needed' '--overwrite' '*' 'git' 'mingw-w64-i686-gcc' 'mingw-w64-i686-python-pip' 'mingw-w64-i686-python-numpy' 'mingw-w64-i686-cmake' 'mingw-w64-i686-make' 'mingw-w64-i686-python-pytest' 'mingw-w64-i686-boost' 'mingw-w64-i686-catch'"
  error: target not found: mingw-w64-i686-python-numpy
  Error: The process 'C:\Windows\system32\cmd.exe' failed with exit code 1
```
2024-11-17 11:46:55 -08:00
Maarten Baert
f41dae31a3
Add dtype::normalized_num and dtype::num_of (#5429)
* Add dtype::normalized_num and dtype::num_of

* Fix compiler warning and improve NumPy 1.x compatibility

* Fix clang-tidy warning

* Fix another clang-tidy warning

* Add extra comment
2024-11-17 07:56:02 -08:00
Maarten Baert
b9fb3168ab
Add support for array_t<handle> and array_t<object> (#5427)
* Add support for array_t<handle> and array_t<object>

* style: pre-commit fixes

* Remove loops that aren't strictly needed

* Fix compiler warning

* Disable GC-dependent checks when running on pypy or graalpy

* style: pre-commit fixes

* Remove PyValueHolder counter again

* Move tests to templates to avoid code duplication

* Rerun pre-commit

* Restore import that was erroneously removed by pre-commit

* Reduce code duplication with more template magic

* Bring back `.attr("value")` in `return_array_cpp_loop()`

This was meant to further stress-test correctness of refcount handling.

All modified test functions were manually leak-checked (`while True`, top command, Python 3.12.3, Ubuntu 24.04, gcc 13.2.0).

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ralf W. Grosse-Kunstleve <rgrossekunst@nvidia.com>
2024-11-16 13:45:59 -08:00
Michael Šimáček
08095d9c70
Run pytest in verbose mode (#5443) 2024-11-14 09:03:56 -08:00
dependabot[bot]
0ed20f26ac
chore(deps): bump actions/attest-build-provenance in the actions group (#5440)
Bumps the actions group with 1 update: [actions/attest-build-provenance](https://github.com/actions/attest-build-provenance).


Updates `actions/attest-build-provenance` from 1.4.3 to 1.4.4
- [Release notes](https://github.com/actions/attest-build-provenance/releases)
- [Changelog](https://github.com/actions/attest-build-provenance/blob/main/RELEASE.md)
- [Commits](1c608d11d6...ef244123eb)

---
updated-dependencies:
- dependency-name: actions/attest-build-provenance
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-11 16:55:21 -08:00
Xuehai Pan
7f94f24d64
feat(typing): allow annotating methods with pos_only when they only have the self argument (#5403)
* feat: allow annotating methods with `pos_only` when they only have the `self` argument

* chore(typing): make arguments for auto-generated dunder methods positional-only

* docs: add more comments to improve readability

* style: fix nit suggestions

* Add test_self_only_pos_only() in tests/test_methods_and_attributes

* test: add docstring tests for generated dunder methods

* test: remove failed tests

* fix(test): run `gc.collect()` three times for refcount tests

---------

Co-authored-by: Ralf W. Grosse-Kunstleve <rgrossekunst@nvidia.com>
2024-11-11 15:35:28 -08:00
gentlegiantJGC
6d98d4d8d4
Add type hints for args and kwargs (#5357)
* Allow subclasses of args and kwargs

The current implementation disallows subclasses of args and kwargs

* Added object type hint to args and kwargs

* Added type hinted args and kwargs classes

* Changed default type hint to typing.Any

* Removed args and kwargs type hint

* Updated tests

Modified the tests from #5381 to use the real Args and KWArgs classes

* Added comment
2024-11-11 14:51:01 -08:00
Ralf W. Grosse-Kunstleve
a90e2af88d
Factor out pybind11/conduit/pybind11_platform_abi_id.h (#5375)
* Factor out pybind11/compat/wrap_include_python_h.h

* Fixes to resolve tests_packaging failures.

* Factor out pybind11/compat/pybind11_platform_abi_id.h

* Add pybind11/compat/README.txt and a couple source code comments.

* Minor changes to comments.

* Factor out pybind11/compat/pybind11_conduit_v1.h

* Add long comment to pybind11/compat/pybind11_conduit_v1.h

* Add pybind11/compat/README.txt to wheels.

* Add `-fno-exceptions` to compiler options for exo_planet_c_api

* 1. Move `target_compile_options()` into loop over test targets, in case the `"exo_planet_c_api"` target does not exist.  2. Add `-fno-exceptions` option also for `NVHPC`.  3. Also check for `__cpp_exceptions` in exo_planet_c_api.cpp.

* 1. Fix accident (forgot to undo temporary change).  2. Special-case __EMSCRIPTEN__ in exo_planet_c_api.cpp

* Give up on compiling exo_planet_c_api.cpp with MSVC `/EHs-c-`:

There was one trouble maker (all other jobs worked):

Visual Studio 15 2017:

```
cl : Command line warning D9025: overriding '/EHc' with '/EHc-' [C:\projects\pybind11\tests\exo_planet_c_api.vcxproj]
...
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\xlocale(319): error C2220: warning treated as error - no 'object' file generated [C:\projects\pybind11\tests\exo_planet_c_api.vcxproj]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\xlocale(319): warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
```

* Move pybind11/compat to pybind11/conduit as suggested by @henryiii:

https://github.com/pybind/pybind11/pull/5375#pullrequestreview-2329006001
2024-11-10 12:17:35 -08:00
Isuru Fernando
ec9c26817f
Fix MSVC MT/MD incompatibility in PYBIND11_BUILD_ABI (#4953)
* Fix MSVC MT/MD incompatibility in PYBIND11_BUILD_ABI

* Update comment about which PR

* Use msvc major version

* Use _MSC_VER/100

* Fix figuring out MD vs MT

* Add some test runs

* Skip one test

* Fix preprocessor

* simplify code

* fix if

* support only msvc 19

* Fold in changes from experimental PR #5411. Polish error messages.

* Remove `&& defined(_DLL)` (TBD: is it needed? but what is correct?)

* Fix MT vs MD

* Add a couple comments, based on https://github.com/pybind/pybind11/pull/4953#issuecomment-2435138593 (posted by @isuruf).

* Replace misleading comment: NVHPC is NOT outdated.

* Update include/pybind11/detail/internals.h

Co-authored-by: Robert Maynard <robertjmaynard@gmail.com>

---------

Co-authored-by: Ralf W. Grosse-Kunstleve <rgrossekunst@nvidia.com>
Co-authored-by: Ralf W. Grosse-Kunstleve <rwgkio@gmail.com>
Co-authored-by: Robert Maynard <robertjmaynard@gmail.com>
2024-11-10 09:24:29 -08:00
Elliott Sales de Andrade
037310ea8a
Use std::unique_ptr in pybind11_getbuffer (#5435)
* Use std::unique_ptr in pybind11_getbuffer

* Move final assignment later
2024-11-07 21:58:24 -08:00
vfdev
ce2f005594
Fixed data race in all_type_info in free-threading mode (#5419)
* Fix data race all_type_info_populate in free-threading mode
Description:
- fixed data race all_type_info_populate in free-threading mode
- added test

For example, suppose two threads enter `all_type_info`.
Both call the `all_type_info_get_cache` function; one of them inserts the
tuple (type, empty_vector) into the map while the other waits.
The inserting thread receives (iter_to_key, True); the non-inserting thread,
after waiting, receives (iter_to_key, False).
The inserting thread then adds a weakref and calls into `all_type_info_populate`.
However, the non-inserting thread does not enter the `if (ins.second) {` clause and
returns `ins.first->second;`, which is still the empty vector.
Finally, the non-inserting thread fails the check in `allocate_layout`:
```c++
if (n_types == 0) {
    pybind11_fail(
        "instance allocation failed: new instance has no pybind11-registered base types");
}
```
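
The fix adopted in this comparison (visible in the pybind11.h hunk further down) is to populate the freshly inserted cache entry while the internals lock is still held. A minimal sketch of that locking shape, using stand-in names rather than the actual pybind11 code:

```c++
// Minimal sketch of the locking shape the fix uses (stand-in names, not the
// actual pybind11 code): populate the freshly inserted entry while the map
// lock is still held, so a thread that loses the insertion race can never
// observe an empty vector.
#include <map>
#include <mutex>
#include <vector>

std::mutex cache_mutex;                     // stands in for the with_internals() lock
std::map<int, std::vector<int>> cache;      // stands in for registered_types_py

const std::vector<int> &lookup(int key) {
    std::lock_guard<std::mutex> lock(cache_mutex);
    auto ins = cache.emplace(key, std::vector<int>{});
    if (ins.second) {
        // New entry: fill it before releasing the lock, mirroring the move of
        // all_type_info_populate() into all_type_info_get_cache().
        ins.first->second = {1, 2, 3};
    }
    return ins.first->second;
}
```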

* style: pre-commit fixes

* Addressed PR comments

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-07 09:32:09 -08:00
Tim Stumbaugh
f46f5be4fa
Fix incorrect link syntax in upgrade guide (#5434)
Looks like some markdown spilled into our restructured text
2024-11-06 11:21:33 -08:00
pre-commit-ci[bot]
5c07feef2f
chore(deps): update pre-commit hooks (#5432)
* chore(deps): update pre-commit hooks

updates:
- [github.com/pre-commit/mirrors-clang-format: v18.1.8 → v19.1.3](https://github.com/pre-commit/mirrors-clang-format/compare/v18.1.8...v19.1.3)
- [github.com/astral-sh/ruff-pre-commit: v0.6.3 → v0.7.2](https://github.com/astral-sh/ruff-pre-commit/compare/v0.6.3...v0.7.2)
- [github.com/pre-commit/mirrors-mypy: v1.11.2 → v1.13.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.11.2...v1.13.0)
- [github.com/pre-commit/pre-commit-hooks: v4.6.0 → v5.0.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.6.0...v5.0.0)
- [github.com/adamchainz/blacken-docs: 1.18.0 → 1.19.1](https://github.com/adamchainz/blacken-docs/compare/1.18.0...1.19.1)
- [github.com/mgedmin/check-manifest: 0.49 → 0.50](https://github.com/mgedmin/check-manifest/compare/0.49...0.50)
- [github.com/PyCQA/pylint: v3.2.7 → v3.3.1](https://github.com/PyCQA/pylint/compare/v3.2.7...v3.3.1)
- [github.com/python-jsonschema/check-jsonschema: 0.29.2 → 0.29.4](https://github.com/python-jsonschema/check-jsonschema/compare/0.29.2...0.29.4)

* style: pre-commit fixes

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-06 11:19:25 -08:00
Elliott Sales de Andrade
bc041de0db
Fix buffer protocol implementation (#5407)
* Fix buffer protocol implementation

According to the buffer protocol, `ndim` is a _required_ field [1], and
should always be set correctly. Additionally, `shape` should be set if
flags includes `PyBUF_ND` or higher [2]. The current implementation only
set those fields if flags was `PyBUF_STRIDES`.

[1] https://docs.python.org/3/c-api/buffer.html#request-independent-fields
[2] https://docs.python.org/3/c-api/buffer.html#shape-strides-suboffsets

* Apply suggestions from review

* Obey contiguity requests for buffer protocol

If a contiguous buffer is requested, and the underlying buffer isn't,
then that should raise. This matches NumPy behaviour if you do something
like:
```
struct.unpack_from('5d', np.arange(20.0)[::4])  # Raises for contiguity
```

Also, if a buffer is contiguous, then it can masquerade as a
less-complex buffer, either by dropping strides, or even pretending to
be 1D. This matches NumPy behaviour if you do something like:
```
a = np.full((3, 5), 30.0)
struct.unpack_from('15d', a)  # --> Produces 1D tuple from 2D buffer.
```

* Handle review comments

* Test buffer protocol against NumPy

* Also check PyBUF_FORMAT results
2024-11-05 10:14:24 -08:00
Michael Šimáček
75e48c5f95
Skip transient tests on GraalPy (#5422) 2024-10-25 08:28:15 -07:00
Francesco Rizzi
8917a1e9b3 add bounds check and test 2023-06-29 09:12:17 +02:00
Francesco Rizzi
d2ea386ef7 make changes as per review comments 2023-06-28 20:51:17 +02:00
Francesco Rizzi
136c664b5a reduce redundancy 2023-06-28 18:45:13 +02:00
Francesco Rizzi
17912ba667 move check 2023-06-28 18:20:52 +02:00
Francesco Rizzi
f52a31a004 fix preproc condition 2023-06-28 18:04:40 +02:00
Francesco Rizzi
87fd2c5121 array_t: tests, refine impl for indexing via operator() 2023-06-28 17:12:01 +02:00
Francesco Rizzi
cba2b32ead add tests 2023-06-28 15:25:55 +02:00
Francesco Rizzi
4d785be984 array_t: overload operator () for indexing 2023-06-28 15:19:04 +02:00
50 changed files with 1449 additions and 340 deletions

View File

@ -65,6 +65,24 @@ jobs:
# Inject a couple Windows 2019 runs
- runs-on: windows-2019
python: '3.9'
# Inject a few runs with different runtime libraries
- runs-on: windows-2022
python: '3.9'
args: >
-DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreaded
- runs-on: windows-2022
python: '3.10'
args: >
-DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreadedDLL
# This needs a python built with MTd
# - runs-on: windows-2022
# python: '3.11'
# args: >
# -DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreadedDebug
- runs-on: windows-2022
python: '3.12'
args: >
-DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreadedDebugDLL
# Extra ubuntu latest job
- runs-on: ubuntu-latest
python: '3.11'
@ -127,6 +145,7 @@ jobs:
-DPYBIND11_DISABLE_HANDLE_TYPE_NAME_DEFAULT_IMPLEMENTATION=ON
-DPYBIND11_SIMPLE_GIL_MANAGEMENT=ON
-DPYBIND11_NUMPY_1_ONLY=ON
-DPYBIND11_PYTEST_ARGS=-v
-DDOWNLOAD_CATCH=ON
-DDOWNLOAD_EIGEN=ON
-DCMAKE_CXX_STANDARD=11
@ -156,6 +175,7 @@ jobs:
-DPYBIND11_WERROR=ON
-DPYBIND11_SIMPLE_GIL_MANAGEMENT=OFF
-DPYBIND11_NUMPY_1_ONLY=ON
-DPYBIND11_PYTEST_ARGS=-v
-DDOWNLOAD_CATCH=ON
-DDOWNLOAD_EIGEN=ON
-DCMAKE_CXX_STANDARD=17
@ -175,6 +195,7 @@ jobs:
run: >
cmake -S . -B build3
-DPYBIND11_WERROR=ON
-DPYBIND11_PYTEST_ARGS=-v
-DDOWNLOAD_CATCH=ON
-DDOWNLOAD_EIGEN=ON
-DCMAKE_CXX_STANDARD=17
@ -998,7 +1019,6 @@ jobs:
git
mingw-w64-${{matrix.env}}-gcc
mingw-w64-${{matrix.env}}-python-pip
mingw-w64-${{matrix.env}}-python-numpy
mingw-w64-${{matrix.env}}-cmake
mingw-w64-${{matrix.env}}-make
mingw-w64-${{matrix.env}}-python-pytest
@ -1010,7 +1030,7 @@ jobs:
with:
msystem: ${{matrix.sys}}
install: >-
git
mingw-w64-${{matrix.env}}-python-numpy
mingw-w64-${{matrix.env}}-python-scipy
mingw-w64-${{matrix.env}}-eigen3

View File

@ -103,7 +103,7 @@ jobs:
- uses: actions/download-artifact@v4
- name: Generate artifact attestation for sdist and wheel
uses: actions/attest-build-provenance@1c608d11d69870c2092266b3f9a6f3abbf17002c # v1.4.3
uses: actions/attest-build-provenance@ef244123eb79f2f7a7e75d99086184180e6d0018 # v1.4.4
with:
subject-path: "*/pybind11*"

View File

@ -25,14 +25,14 @@ repos:
# Clang format the codebase automatically
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: "v18.1.8"
rev: "v19.1.3"
hooks:
- id: clang-format
types_or: [c++, c, cuda]
# Ruff, the Python auto-correcting linter/formatter written in Rust
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.3
rev: v0.7.2
hooks:
- id: ruff
args: ["--fix", "--show-fixes"]
@ -40,7 +40,7 @@ repos:
# Check static types with mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: "v1.11.2"
rev: "v1.13.0"
hooks:
- id: mypy
args: []
@ -62,7 +62,7 @@ repos:
# Standard hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: "v4.6.0"
rev: "v5.0.0"
hooks:
- id: check-added-large-files
- id: check-case-conflict
@ -79,7 +79,7 @@ repos:
# Also code format the docs
- repo: https://github.com/adamchainz/blacken-docs
rev: "1.18.0"
rev: "1.19.1"
hooks:
- id: blacken-docs
additional_dependencies:
@ -108,7 +108,7 @@ repos:
# Checks the manifest for missing files (native support)
- repo: https://github.com/mgedmin/check-manifest
rev: "0.49"
rev: "0.50"
hooks:
- id: check-manifest
# This is a slow hook, so only run this if --hook-stage manual is passed
@ -142,14 +142,14 @@ repos:
# PyLint has native support - not always usable, but works for us
- repo: https://github.com/PyCQA/pylint
rev: "v3.2.7"
rev: "v3.3.1"
hooks:
- id: pylint
files: ^pybind11
# Check schemas on some of our YAML files
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.29.2
rev: 0.29.4
hooks:
- id: check-readthedocs
- id: check-github-workflows

View File

@ -143,6 +143,9 @@ set(PYBIND11_HEADERS
include/pybind11/chrono.h
include/pybind11/common.h
include/pybind11/complex.h
include/pybind11/conduit/pybind11_conduit_v1.h
include/pybind11/conduit/pybind11_platform_abi_id.h
include/pybind11/conduit/wrap_include_python_h.h
include/pybind11/options.h
include/pybind11/eigen.h
include/pybind11/eigen/common.h

View File

@ -24,7 +24,8 @@ changes are that:
function is not available anymore.
Due to NumPy changes, you may experience difficulties updating to NumPy 2.
Please see the [NumPy 2 migration guide](https://numpy.org/devdocs/numpy_2_0_migration_guide.html) for details.
Please see the `NumPy 2 migration guide <https://numpy.org/devdocs/numpy_2_0_migration_guide.html>`_
for details.
For example, a more direct change could be that the default integer ``"int_"``
(and ``"uint"``) is now ``ssize_t`` and not ``long`` (affects 64bit windows).

View File

@ -1012,10 +1012,18 @@ template <>
struct handle_type_name<args> {
static constexpr auto name = const_name("*args");
};
template <typename T>
struct handle_type_name<Args<T>> {
static constexpr auto name = const_name("*args: ") + make_caster<T>::name;
};
template <>
struct handle_type_name<kwargs> {
static constexpr auto name = const_name("**kwargs");
};
template <typename T>
struct handle_type_name<KWArgs<T>> {
static constexpr auto name = const_name("**kwargs: ") + make_caster<T>::name;
};
template <>
struct handle_type_name<obj_attr_accessor> {
static constexpr auto name = const_name<obj_attr_accessor>();

View File

@ -0,0 +1,15 @@
NOTE
----
The C++ code here
** only depends on <Python.h> **
and nothing else.
DO NOT ADD CODE WITH OTHER EXTERNAL DEPENDENCIES TO THIS DIRECTORY.
Read on:
pybind11_conduit_v1.h — Type-safe interoperability between different
independent Python/C++ bindings systems.

View File

@ -0,0 +1,111 @@
// Copyright (c) 2024 The pybind Community.
/* The pybind11_conduit_v1 feature enables type-safe interoperability between
* different independent Python/C++ bindings systems,
* including pybind11 versions with different PYBIND11_INTERNALS_VERSION's.
The naming of the feature is a bit misleading:
* The feature is in no way tied to pybind11 internals.
* It just happens to originate from pybind11 and currently still lives there.
* The only external dependency is <Python.h>.
The implementation is a VERY light-weight dependency. It is designed to be
compatible with any ISO C++11 (or higher) compiler, and does NOT require
C++ Exception Handling to be enabled.
Please see https://github.com/pybind/pybind11/pull/5296 for more background.
The implementation involves a
def _pybind11_conduit_v1_(
self,
pybind11_platform_abi_id: bytes,
cpp_type_info_capsule: capsule,
pointer_kind: bytes) -> capsule
method that is meant to be added to Python objects wrapping C++ objects
(e.g. pybind11::class_-wrapped types).
The design of the _pybind11_conduit_v1_ feature provides two layers of
protection against C++ ABI mismatches:
* The first and most important layer is that the pybind11_platform_abi_id's
must match between extensions. This will never be perfect, but is the same
pragmatic approach used in pybind11 since 2017
(https://github.com/pybind/pybind11/commit/96997a4b9d4ec3d389a570604394af5d5eee2557,
PYBIND11_INTERNALS_ID).
* The second layer is that the typeid(std::type_info).name()'s must match
between extensions.
The implementation below (which is shorter than this comment!), serves as a
battle-tested specification. The main API is this one function:
auto *cpp_pointer = pybind11_conduit_v1::get_type_pointer_ephemeral<YourType>(py_obj);
It is meant to be a minimalistic reference implementation, intentionally
without comprehensive error reporting. It is expected that major bindings
systems will roll their own, compatible implementations, potentially with
system-specific error reporting. The essential specifications all bindings
systems need to agree on are merely:
* PYBIND11_PLATFORM_ABI_ID (const char* literal).
* The cpp_type_info capsule (see below: a void *ptr and a const char *name).
* The cpp_conduit capsule (see below: a void *ptr and a const char *name).
* "raw_pointer_ephemeral" means: the lifetime of the pointer is the lifetime
of the py_obj.
*/
// THIS MUST STAY AT THE TOP!
#include "pybind11_platform_abi_id.h"
#include <Python.h>
#include <typeinfo>
namespace pybind11_conduit_v1 {
inline void *get_raw_pointer_ephemeral(PyObject *py_obj, const std::type_info *cpp_type_info) {
PyObject *cpp_type_info_capsule
= PyCapsule_New(const_cast<void *>(static_cast<const void *>(cpp_type_info)),
typeid(std::type_info).name(),
nullptr);
if (cpp_type_info_capsule == nullptr) {
return nullptr;
}
PyObject *cpp_conduit = PyObject_CallMethod(py_obj,
"_pybind11_conduit_v1_",
"yOy",
PYBIND11_PLATFORM_ABI_ID,
cpp_type_info_capsule,
"raw_pointer_ephemeral");
Py_DECREF(cpp_type_info_capsule);
if (cpp_conduit == nullptr) {
return nullptr;
}
void *raw_ptr = PyCapsule_GetPointer(cpp_conduit, cpp_type_info->name());
Py_DECREF(cpp_conduit);
if (PyErr_Occurred()) {
return nullptr;
}
return raw_ptr;
}
template <typename T>
T *get_type_pointer_ephemeral(PyObject *py_obj) {
void *raw_ptr = get_raw_pointer_ephemeral(py_obj, &typeid(T));
if (raw_ptr == nullptr) {
return nullptr;
}
return static_cast<T *>(raw_ptr);
}
} // namespace pybind11_conduit_v1
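
Condensing the comment above, a hedged sketch of what a foreign (non-pybind11) extension function might look like when it borrows a C++ pointer from a pybind11-wrapped object; the `Widget` type and function name are placeholders:

```c++
// Hypothetical consumer in a separate, non-pybind11 extension: it depends
// only on <Python.h> plus the conduit header above. Type and function names
// are placeholders.
#include <pybind11/conduit/pybind11_conduit_v1.h>

struct Widget {
    int value;
};  // assumed to be wrapped by pybind11 in some other module

extern "C" PyObject *widget_value(PyObject * /*self*/, PyObject *py_widget) {
    // Typically returns nullptr with a Python error set on ABI or type mismatch.
    auto *w = pybind11_conduit_v1::get_type_pointer_ephemeral<Widget>(py_widget);
    if (w == nullptr) {
        return nullptr;
    }
    // The pointer is "ephemeral": valid only while py_widget stays alive.
    return PyLong_FromLong(w->value);
}
```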

View File

@ -0,0 +1,88 @@
#pragma once
// Copyright (c) 2024 The pybind Community.
// To maximize reusability:
// DO NOT ADD CODE THAT REQUIRES C++ EXCEPTION HANDLING.
#include "wrap_include_python_h.h"
// Implementation details. DO NOT USE ELSEWHERE. (Unfortunately we cannot #undef them.)
// This is duplicated here to maximize portability.
#define PYBIND11_PLATFORM_ABI_ID_STRINGIFY(x) #x
#define PYBIND11_PLATFORM_ABI_ID_TOSTRING(x) PYBIND11_PLATFORM_ABI_ID_STRINGIFY(x)
// On MSVC, debug and release builds are not ABI-compatible!
#if defined(_MSC_VER) && defined(_DEBUG)
# define PYBIND11_BUILD_TYPE "_debug"
#else
# define PYBIND11_BUILD_TYPE ""
#endif
// Let's assume that different compilers are ABI-incompatible.
// A user can manually set this string if they know their
// compiler is compatible.
#ifndef PYBIND11_COMPILER_TYPE
# if defined(_MSC_VER)
# define PYBIND11_COMPILER_TYPE "_msvc"
# elif defined(__INTEL_COMPILER)
# define PYBIND11_COMPILER_TYPE "_icc"
# elif defined(__clang__)
# define PYBIND11_COMPILER_TYPE "_clang"
# elif defined(__PGI)
# define PYBIND11_COMPILER_TYPE "_pgi"
# elif defined(__MINGW32__)
# define PYBIND11_COMPILER_TYPE "_mingw"
# elif defined(__CYGWIN__)
# define PYBIND11_COMPILER_TYPE "_gcc_cygwin"
# elif defined(__GNUC__)
# define PYBIND11_COMPILER_TYPE "_gcc"
# else
# define PYBIND11_COMPILER_TYPE "_unknown"
# endif
#endif
// Also standard libs
#ifndef PYBIND11_STDLIB
# if defined(_LIBCPP_VERSION)
# define PYBIND11_STDLIB "_libcpp"
# elif defined(__GLIBCXX__) || defined(__GLIBCPP__)
# define PYBIND11_STDLIB "_libstdcpp"
# else
# define PYBIND11_STDLIB ""
# endif
#endif
#ifndef PYBIND11_BUILD_ABI
# if defined(__GXX_ABI_VERSION) // Linux/OSX.
# define PYBIND11_BUILD_ABI "_cxxabi" PYBIND11_PLATFORM_ABI_ID_TOSTRING(__GXX_ABI_VERSION)
# elif defined(_MSC_VER) // See PR #4953.
# if defined(_MT) && defined(_DLL) // Corresponding to CL command line options /MD or /MDd.
# if (_MSC_VER) / 100 == 19
# define PYBIND11_BUILD_ABI "_md_mscver19"
# else
# error "Unknown major version for MSC_VER: PLEASE REVISE THIS CODE."
# endif
# elif defined(_MT) // Corresponding to CL command line options /MT or /MTd.
# define PYBIND11_BUILD_ABI "_mt_mscver" PYBIND11_PLATFORM_ABI_ID_TOSTRING(_MSC_VER)
# else
# if (_MSC_VER) / 100 == 19
# define PYBIND11_BUILD_ABI "_none_mscver19"
# else
# error "Unknown major version for MSC_VER: PLEASE REVISE THIS CODE."
# endif
# endif
# elif defined(__NVCOMPILER) // NVHPC (PGI-based).
# define PYBIND11_BUILD_ABI "" // TODO: What should be here, to prevent UB?
# else
# error "Unknown platform or compiler: PLEASE REVISE THIS CODE."
# endif
#endif
#ifndef PYBIND11_INTERNALS_KIND
# define PYBIND11_INTERNALS_KIND ""
#endif
#define PYBIND11_PLATFORM_ABI_ID \
PYBIND11_INTERNALS_KIND PYBIND11_COMPILER_TYPE PYBIND11_STDLIB PYBIND11_BUILD_ABI \
PYBIND11_BUILD_TYPE
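
For concreteness, a hedged illustration of what PYBIND11_PLATFORM_ABI_ID composes to under the branches above; the `__GXX_ABI_VERSION` and `_MSC_VER` digits are illustrative and vary by compiler release:

```c++
// Compile-time sanity sketch; the example expansions in the comments are
// illustrative, not authoritative.
#include <pybind11/conduit/pybind11_platform_abi_id.h>

// GCC + libstdc++ on Linux, release build:  "_gcc_libstdcpp_cxxabi1018"
// MSVC 19.x with /MD or /MDd runtime:       "_msvc_md_mscver19"
// MSVC 19.x with /MT or /MTd runtime:       "_msvc_mt_mscver1940"  (1940 as an example _MSC_VER)
// MSVC 19.x debug build (/MDd):             "_msvc_md_mscver19_debug"
static_assert(sizeof(PYBIND11_PLATFORM_ABI_ID) > 1,
              "PYBIND11_PLATFORM_ABI_ID is a non-empty string literal");
```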

View File

@ -0,0 +1,72 @@
#pragma once
// Copyright (c) 2024 The pybind Community.
// STRONG REQUIREMENT:
// This header is a wrapper around `#include <Python.h>`, therefore it
// MUST BE INCLUDED BEFORE ANY STANDARD HEADERS are included.
// See also:
// https://docs.python.org/3/c-api/intro.html#include-files
// Quoting from there:
// Note: Since Python may define some pre-processor definitions which affect
// the standard headers on some systems, you must include Python.h before
// any standard headers are included.
// To maximize reusability:
// DO NOT ADD CODE THAT REQUIRES C++ EXCEPTION HANDLING.
// Disable linking to pythonX_d.lib on Windows in debug mode.
#if defined(_MSC_VER) && defined(_DEBUG) && !defined(Py_DEBUG)
// Workaround for a VS 2022 issue.
// See https://github.com/pybind/pybind11/pull/3497 for full context.
// NOTE: This workaround knowingly violates the Python.h include order
// requirement (see above).
# include <yvals.h>
# if _MSVC_STL_VERSION >= 143
# include <crtdefs.h>
# endif
# define PYBIND11_DEBUG_MARKER
# undef _DEBUG
#endif
// Don't let Python.h #define (v)snprintf as macro because they are implemented
// properly in Visual Studio since 2015.
#if defined(_MSC_VER)
# define HAVE_SNPRINTF 1
#endif
#if defined(_MSC_VER)
# pragma warning(push)
# pragma warning(disable : 4505)
// C4505: 'PySlice_GetIndicesEx': unreferenced local function has been removed
#endif
#include <Python.h>
#include <frameobject.h>
#include <pythread.h>
#if defined(_MSC_VER)
# pragma warning(pop)
#endif
#if defined(PYBIND11_DEBUG_MARKER)
# define _DEBUG
# undef PYBIND11_DEBUG_MARKER
#endif
// Python #defines overrides on all sorts of core functions, which
// tends to wreak havok in C++ codebases that expect these to work
// like regular functions (potentially with several overloads).
#if defined(isalnum)
# undef isalnum
# undef isalpha
# undef islower
# undef isspace
# undef isupper
# undef tolower
# undef toupper
#endif
#if defined(copysign)
# undef copysign
#endif

View File

@ -583,9 +583,9 @@ extern "C" inline int pybind11_getbuffer(PyObject *obj, Py_buffer *view, int fla
return -1;
}
std::memset(view, 0, sizeof(Py_buffer));
buffer_info *info = nullptr;
std::unique_ptr<buffer_info> info = nullptr;
try {
info = tinfo->get_buffer(obj, tinfo->get_buffer_data);
info.reset(tinfo->get_buffer(obj, tinfo->get_buffer_data));
} catch (...) {
try_translate_exceptions();
raise_from(PyExc_BufferError, "Error getting buffer");
@ -596,29 +596,72 @@ extern "C" inline int pybind11_getbuffer(PyObject *obj, Py_buffer *view, int fla
}
if ((flags & PyBUF_WRITABLE) == PyBUF_WRITABLE && info->readonly) {
delete info;
// view->obj = nullptr; // Was just memset to 0, so not necessary
set_error(PyExc_BufferError, "Writable buffer requested for readonly storage");
return -1;
}
view->obj = obj;
view->ndim = 1;
view->internal = info;
view->buf = info->ptr;
// Fill in all the information, and then downgrade as requested by the caller, or raise an
// error if that's not possible.
view->itemsize = info->itemsize;
view->len = view->itemsize;
for (auto s : info->shape) {
view->len *= s;
}
view->ndim = static_cast<int>(info->ndim);
view->shape = info->shape.data();
view->strides = info->strides.data();
view->readonly = static_cast<int>(info->readonly);
if ((flags & PyBUF_FORMAT) == PyBUF_FORMAT) {
view->format = const_cast<char *>(info->format.c_str());
}
if ((flags & PyBUF_STRIDES) == PyBUF_STRIDES) {
view->ndim = (int) info->ndim;
view->strides = info->strides.data();
view->shape = info->shape.data();
// Note, all contiguity flags imply PyBUF_STRIDES and lower.
if ((flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) {
if (PyBuffer_IsContiguous(view, 'C') == 0) {
std::memset(view, 0, sizeof(Py_buffer));
set_error(PyExc_BufferError,
"C-contiguous buffer requested for discontiguous storage");
return -1;
}
} else if ((flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) {
if (PyBuffer_IsContiguous(view, 'F') == 0) {
std::memset(view, 0, sizeof(Py_buffer));
set_error(PyExc_BufferError,
"Fortran-contiguous buffer requested for discontiguous storage");
return -1;
}
} else if ((flags & PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS) {
if (PyBuffer_IsContiguous(view, 'A') == 0) {
std::memset(view, 0, sizeof(Py_buffer));
set_error(PyExc_BufferError, "Contiguous buffer requested for discontiguous storage");
return -1;
}
} else if ((flags & PyBUF_STRIDES) != PyBUF_STRIDES) {
// If no strides are requested, the buffer must be C-contiguous.
// https://docs.python.org/3/c-api/buffer.html#contiguity-requests
if (PyBuffer_IsContiguous(view, 'C') == 0) {
std::memset(view, 0, sizeof(Py_buffer));
set_error(PyExc_BufferError,
"C-contiguous buffer requested for discontiguous storage");
return -1;
}
view->strides = nullptr;
// Since this is a contiguous buffer, it can also pretend to be 1D.
if ((flags & PyBUF_ND) != PyBUF_ND) {
view->shape = nullptr;
view->ndim = 0;
}
}
// Set these after all checks so they don't leak out into the caller, and can be automatically
// cleaned up on error.
view->buf = info->ptr;
view->internal = info.release();
view->obj = obj;
Py_INCREF(view->obj);
return 0;
}
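
From the consumer side, the new contiguity handling can be exercised with the plain CPython buffer C-API; a minimal sketch, where the exporter is any py::buffer_protocol() object (e.g. the test matrices further down):

```c++
// Illustrative consumer: request a C-contiguous, formatted view. With the
// change above, a discontiguous exporter raises BufferError instead of
// silently handing back strided data, and a contiguous one may legitimately
// drop strides / pretend to be 1-D when fewer details are requested.
#include <Python.h>

int read_c_contiguous(PyObject *exporter) {
    Py_buffer view;
    if (PyObject_GetBuffer(exporter, &view, PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) != 0) {
        return -1;  // BufferError (or another exception) already set
    }
    // ... consume view.buf, view.len, view.itemsize, view.format here ...
    PyBuffer_Release(&view);
    return 0;
}
```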

View File

@ -9,6 +9,11 @@
#pragma once
#include <pybind11/conduit/wrap_include_python_h.h>
#if PY_VERSION_HEX < 0x03080000
# error "PYTHON < 3.8 IS UNSUPPORTED. pybind11 v2.13 was the last to support Python 3.7."
#endif
#define PYBIND11_VERSION_MAJOR 2
#define PYBIND11_VERSION_MINOR 14
#define PYBIND11_VERSION_PATCH 0.dev1
@ -204,31 +209,6 @@
# define PYBIND11_MAYBE_UNUSED __attribute__((__unused__))
#endif
/* Don't let Python.h #define (v)snprintf as macro because they are implemented
properly in Visual Studio since 2015. */
#if defined(_MSC_VER)
# define HAVE_SNPRINTF 1
#endif
/// Include Python header, disable linking to pythonX_d.lib on Windows in debug mode
#if defined(_MSC_VER)
PYBIND11_WARNING_PUSH
PYBIND11_WARNING_DISABLE_MSVC(4505)
// C4505: 'PySlice_GetIndicesEx': unreferenced local function has been removed (PyPy only)
# if defined(_DEBUG) && !defined(Py_DEBUG)
// Workaround for a VS 2022 issue.
// NOTE: This workaround knowingly violates the Python.h include order requirement:
// https://docs.python.org/3/c-api/intro.html#include-files
// See https://github.com/pybind/pybind11/pull/3497 for full context.
# include <yvals.h>
# if _MSVC_STL_VERSION >= 143
# include <crtdefs.h>
# endif
# define PYBIND11_DEBUG_MARKER
# undef _DEBUG
# endif
#endif
// https://en.cppreference.com/w/c/chrono/localtime
#if defined(__STDC_LIB_EXT1__) && !defined(__STDC_WANT_LIB_EXT1__)
# define __STDC_WANT_LIB_EXT1__
@ -263,30 +243,6 @@ PYBIND11_WARNING_DISABLE_MSVC(4505)
# endif
#endif
#include <Python.h>
#if PY_VERSION_HEX < 0x03080000
# error "PYTHON < 3.8 IS UNSUPPORTED. pybind11 v2.13 was the last to support Python 3.7."
#endif
#include <frameobject.h>
#include <pythread.h>
/* Python #defines overrides on all sorts of core functions, which
tends to weak havok in C++ codebases that expect these to work
like regular functions (potentially with several overloads) */
#if defined(isalnum)
# undef isalnum
# undef isalpha
# undef islower
# undef isspace
# undef isupper
# undef tolower
# undef toupper
#endif
#if defined(copysign)
# undef copysign
#endif
#if defined(PYBIND11_NUMPY_1_ONLY)
# define PYBIND11_INTERNAL_NUMPY_1_ONLY_DETECTED
#endif
@ -295,14 +251,6 @@ PYBIND11_WARNING_DISABLE_MSVC(4505)
# define PYBIND11_SIMPLE_GIL_MANAGEMENT
#endif
#if defined(_MSC_VER)
# if defined(PYBIND11_DEBUG_MARKER)
# define _DEBUG
# undef PYBIND11_DEBUG_MARKER
# endif
PYBIND11_WARNING_POP
#endif
#include <cstddef>
#include <cstring>
#include <exception>
@ -1145,14 +1093,14 @@ struct overload_cast_impl {
}
template <typename Return, typename Class>
constexpr auto operator()(Return (Class::*pmf)(Args...),
std::false_type = {}) const noexcept -> decltype(pmf) {
constexpr auto operator()(Return (Class::*pmf)(Args...), std::false_type = {}) const noexcept
-> decltype(pmf) {
return pmf;
}
template <typename Return, typename Class>
constexpr auto operator()(Return (Class::*pmf)(Args...) const,
std::true_type) const noexcept -> decltype(pmf) {
constexpr auto operator()(Return (Class::*pmf)(Args...) const, std::true_type) const noexcept
-> decltype(pmf) {
return pmf;
}
};

View File

@ -156,9 +156,8 @@ constexpr auto concat(const descr<N, Ts...> &d, const Args &...args) {
}
#else
template <size_t N, typename... Ts, typename... Args>
constexpr auto concat(const descr<N, Ts...> &d,
const Args &...args) -> decltype(std::declval<descr<N + 2, Ts...>>()
+ concat(args...)) {
constexpr auto concat(const descr<N, Ts...> &d, const Args &...args)
-> decltype(std::declval<descr<N + 2, Ts...>>() + concat(args...)) {
return d + const_name(", ") + concat(args...);
}
#endif

View File

@ -410,7 +410,7 @@ struct pickle_factory<Get, Set, RetState(Self), NewInstance(ArgState)> {
template <typename Class, typename... Extra>
void execute(Class &cl, const Extra &...extra) && {
cl.def("__getstate__", std::move(get));
cl.def("__getstate__", std::move(get), pos_only());
#if defined(PYBIND11_CPP14)
cl.def(

View File

@ -15,6 +15,7 @@
# include <pybind11/gil.h>
#endif
#include <pybind11/conduit/pybind11_platform_abi_id.h>
#include <pybind11/pytypes.h>
#include <exception>
@ -269,67 +270,6 @@ struct type_info {
bool module_local : 1;
};
/// On MSVC, debug and release builds are not ABI-compatible!
#if defined(_MSC_VER) && defined(_DEBUG)
# define PYBIND11_BUILD_TYPE "_debug"
#else
# define PYBIND11_BUILD_TYPE ""
#endif
/// Let's assume that different compilers are ABI-incompatible.
/// A user can manually set this string if they know their
/// compiler is compatible.
#ifndef PYBIND11_COMPILER_TYPE
# if defined(_MSC_VER)
# define PYBIND11_COMPILER_TYPE "_msvc"
# elif defined(__INTEL_COMPILER)
# define PYBIND11_COMPILER_TYPE "_icc"
# elif defined(__clang__)
# define PYBIND11_COMPILER_TYPE "_clang"
# elif defined(__PGI)
# define PYBIND11_COMPILER_TYPE "_pgi"
# elif defined(__MINGW32__)
# define PYBIND11_COMPILER_TYPE "_mingw"
# elif defined(__CYGWIN__)
# define PYBIND11_COMPILER_TYPE "_gcc_cygwin"
# elif defined(__GNUC__)
# define PYBIND11_COMPILER_TYPE "_gcc"
# else
# define PYBIND11_COMPILER_TYPE "_unknown"
# endif
#endif
/// Also standard libs
#ifndef PYBIND11_STDLIB
# if defined(_LIBCPP_VERSION)
# define PYBIND11_STDLIB "_libcpp"
# elif defined(__GLIBCXX__) || defined(__GLIBCPP__)
# define PYBIND11_STDLIB "_libstdcpp"
# else
# define PYBIND11_STDLIB ""
# endif
#endif
/// On Linux/OSX, changes in __GXX_ABI_VERSION__ indicate ABI incompatibility.
/// On MSVC, changes in _MSC_VER may indicate ABI incompatibility (#2898).
#ifndef PYBIND11_BUILD_ABI
# if defined(__GXX_ABI_VERSION)
# define PYBIND11_BUILD_ABI "_cxxabi" PYBIND11_TOSTRING(__GXX_ABI_VERSION)
# elif defined(_MSC_VER)
# define PYBIND11_BUILD_ABI "_mscver" PYBIND11_TOSTRING(_MSC_VER)
# else
# define PYBIND11_BUILD_ABI ""
# endif
#endif
#ifndef PYBIND11_INTERNALS_KIND
# define PYBIND11_INTERNALS_KIND ""
#endif
#define PYBIND11_PLATFORM_ABI_ID \
PYBIND11_INTERNALS_KIND PYBIND11_COMPILER_TYPE PYBIND11_STDLIB PYBIND11_BUILD_ABI \
PYBIND11_BUILD_TYPE
#define PYBIND11_INTERNALS_ID \
"__pybind11_internals_v" PYBIND11_TOSTRING(PYBIND11_INTERNALS_VERSION) \
PYBIND11_PLATFORM_ABI_ID "__"
@ -671,8 +611,8 @@ inline std::uint64_t mix64(std::uint64_t z) {
}
template <typename F>
inline auto with_instance_map(const void *ptr,
const F &cb) -> decltype(cb(std::declval<instance_map &>())) {
inline auto with_instance_map(const void *ptr, const F &cb)
-> decltype(cb(std::declval<instance_map &>())) {
auto &internals = get_internals();
#ifdef Py_GIL_DISABLED

View File

@ -117,7 +117,6 @@ PYBIND11_NOINLINE void all_type_info_populate(PyTypeObject *t, std::vector<type_
for (handle parent : reinterpret_borrow<tuple>(t->tp_bases)) {
check.push_back((PyTypeObject *) parent.ptr());
}
auto const &type_dict = get_internals().registered_types_py;
for (size_t i = 0; i < check.size(); i++) {
auto *type = check[i];
@ -176,13 +175,7 @@ PYBIND11_NOINLINE void all_type_info_populate(PyTypeObject *t, std::vector<type_
* The value is cached for the lifetime of the Python type.
*/
inline const std::vector<detail::type_info *> &all_type_info(PyTypeObject *type) {
auto ins = all_type_info_get_cache(type);
if (ins.second) {
// New cache entry: populate it
all_type_info_populate(type, ins.first->second);
}
return ins.first->second;
return all_type_info_get_cache(type).first->second;
}
/**
@ -1161,14 +1154,14 @@ protected:
does not have a private operator new implementation. A comma operator is used in the
decltype argument to apply SFINAE to the public copy/move constructors.*/
template <typename T, typename = enable_if_t<is_copy_constructible<T>::value>>
static auto make_copy_constructor(const T *) -> decltype(new T(std::declval<const T>()),
Constructor{}) {
static auto make_copy_constructor(const T *)
-> decltype(new T(std::declval<const T>()), Constructor{}) {
return [](const void *arg) -> void * { return new T(*reinterpret_cast<const T *>(arg)); };
}
template <typename T, typename = enable_if_t<is_move_constructible<T>::value>>
static auto make_move_constructor(const T *) -> decltype(new T(std::declval<T &&>()),
Constructor{}) {
static auto make_move_constructor(const T *)
-> decltype(new T(std::declval<T &&>()), Constructor{}) {
return [](const void *arg) -> void * {
return new T(std::move(*const_cast<T *>(reinterpret_cast<const T *>(arg))));
};

View File

@ -124,9 +124,9 @@ struct eigen_tensor_helper<
template <typename Type, bool ShowDetails, bool NeedsWriteable = false>
struct get_tensor_descriptor {
static constexpr auto details
= const_name<NeedsWriteable>(", flags.writeable", "")
+ const_name<static_cast<int>(Type::Layout) == static_cast<int>(Eigen::RowMajor)>(
", flags.c_contiguous", ", flags.f_contiguous");
= const_name<NeedsWriteable>(", flags.writeable", "") + const_name
< static_cast<int>(Type::Layout)
== static_cast<int>(Eigen::RowMajor) > (", flags.c_contiguous", ", flags.f_contiguous");
static constexpr auto value
= const_name("numpy.ndarray[") + npy_format_descriptor<typename Type::Scalar>::name
+ const_name("[") + eigen_tensor_helper<remove_cv_t<Type>>::dimensions_descriptor

View File

@ -212,6 +212,7 @@ constexpr int platform_lookup(int I, Ints... Is) {
}
struct npy_api {
// If you change this code, please review `normalized_dtype_num` below.
enum constants {
NPY_ARRAY_C_CONTIGUOUS_ = 0x0001,
NPY_ARRAY_F_CONTIGUOUS_ = 0x0002,
@ -384,6 +385,74 @@ private:
}
};
// This table normalizes typenums by mapping NPY_INT_, NPY_LONG, ... to NPY_INT32_, NPY_INT64, ...
// This is needed to correctly handle situations where multiple typenums map to the same type,
// e.g. NPY_LONG_ may be equivalent to NPY_INT_ or NPY_LONGLONG_ despite having a different
// typenum. The normalized typenum should always match the values used in npy_format_descriptor.
// If you change this code, please review `enum constants` above.
static constexpr int normalized_dtype_num[npy_api::NPY_VOID_ + 1] = {
// NPY_BOOL_ =>
npy_api::NPY_BOOL_,
// NPY_BYTE_ =>
npy_api::NPY_BYTE_,
// NPY_UBYTE_ =>
npy_api::NPY_UBYTE_,
// NPY_SHORT_ =>
npy_api::NPY_INT16_,
// NPY_USHORT_ =>
npy_api::NPY_UINT16_,
// NPY_INT_ =>
sizeof(int) == sizeof(std::int16_t) ? npy_api::NPY_INT16_
: sizeof(int) == sizeof(std::int32_t) ? npy_api::NPY_INT32_
: sizeof(int) == sizeof(std::int64_t) ? npy_api::NPY_INT64_
: npy_api::NPY_INT_,
// NPY_UINT_ =>
sizeof(unsigned int) == sizeof(std::uint16_t) ? npy_api::NPY_UINT16_
: sizeof(unsigned int) == sizeof(std::uint32_t) ? npy_api::NPY_UINT32_
: sizeof(unsigned int) == sizeof(std::uint64_t) ? npy_api::NPY_UINT64_
: npy_api::NPY_UINT_,
// NPY_LONG_ =>
sizeof(long) == sizeof(std::int16_t) ? npy_api::NPY_INT16_
: sizeof(long) == sizeof(std::int32_t) ? npy_api::NPY_INT32_
: sizeof(long) == sizeof(std::int64_t) ? npy_api::NPY_INT64_
: npy_api::NPY_LONG_,
// NPY_ULONG_ =>
sizeof(unsigned long) == sizeof(std::uint16_t) ? npy_api::NPY_UINT16_
: sizeof(unsigned long) == sizeof(std::uint32_t) ? npy_api::NPY_UINT32_
: sizeof(unsigned long) == sizeof(std::uint64_t) ? npy_api::NPY_UINT64_
: npy_api::NPY_ULONG_,
// NPY_LONGLONG_ =>
sizeof(long long) == sizeof(std::int16_t) ? npy_api::NPY_INT16_
: sizeof(long long) == sizeof(std::int32_t) ? npy_api::NPY_INT32_
: sizeof(long long) == sizeof(std::int64_t) ? npy_api::NPY_INT64_
: npy_api::NPY_LONGLONG_,
// NPY_ULONGLONG_ =>
sizeof(unsigned long long) == sizeof(std::uint16_t) ? npy_api::NPY_UINT16_
: sizeof(unsigned long long) == sizeof(std::uint32_t) ? npy_api::NPY_UINT32_
: sizeof(unsigned long long) == sizeof(std::uint64_t) ? npy_api::NPY_UINT64_
: npy_api::NPY_ULONGLONG_,
// NPY_FLOAT_ =>
npy_api::NPY_FLOAT_,
// NPY_DOUBLE_ =>
npy_api::NPY_DOUBLE_,
// NPY_LONGDOUBLE_ =>
npy_api::NPY_LONGDOUBLE_,
// NPY_CFLOAT_ =>
npy_api::NPY_CFLOAT_,
// NPY_CDOUBLE_ =>
npy_api::NPY_CDOUBLE_,
// NPY_CLONGDOUBLE_ =>
npy_api::NPY_CLONGDOUBLE_,
// NPY_OBJECT_ =>
npy_api::NPY_OBJECT_,
// NPY_STRING_ =>
npy_api::NPY_STRING_,
// NPY_UNICODE_ =>
npy_api::NPY_UNICODE_,
// NPY_VOID_ =>
npy_api::NPY_VOID_,
};
inline PyArray_Proxy *array_proxy(void *ptr) { return reinterpret_cast<PyArray_Proxy *>(ptr); }
inline const PyArray_Proxy *array_proxy(const void *ptr) {
@ -684,6 +753,13 @@ public:
return detail::npy_format_descriptor<typename std::remove_cv<T>::type>::dtype();
}
/// Return the type number associated with a C++ type.
/// This is the constexpr equivalent of `dtype::of<T>().num()`.
template <typename T>
static constexpr int num_of() {
return detail::npy_format_descriptor<typename std::remove_cv<T>::type>::value;
}
/// Size of the data type in bytes.
#ifdef PYBIND11_NUMPY_1_ONLY
ssize_t itemsize() const { return detail::array_descriptor_proxy(m_ptr)->elsize; }
@ -725,7 +801,9 @@ public:
return detail::array_descriptor_proxy(m_ptr)->type;
}
/// type number of dtype.
/// Type number of dtype. Note that different values may be returned for equivalent types,
/// e.g. even though ``long`` may be equivalent to ``int`` or ``long long``, they still have
/// different type numbers. Consider using `normalized_num` to avoid this.
int num() const {
// Note: The signature, `dtype::num` follows the naming of NumPy's public
// Python API (i.e., ``dtype.num``), rather than its internal
@ -733,6 +811,17 @@ public:
return detail::array_descriptor_proxy(m_ptr)->type_num;
}
/// Type number of dtype, normalized to match the return value of `num_of` for equivalent
/// types. This function can be used to write switch statements that correctly handle
/// equivalent types with different type numbers.
int normalized_num() const {
int value = num();
if (value >= 0 && value <= detail::npy_api::NPY_VOID_) {
return detail::normalized_dtype_num[value];
}
return value;
}
/// Single character for byteorder
char byteorder() const { return detail::array_descriptor_proxy(m_ptr)->byteorder; }
@ -1232,9 +1321,7 @@ public:
// Reference to element at a given index
template <typename... Ix>
const T &at(Ix... index) const {
if ((ssize_t) sizeof...(index) != ndim()) {
fail_dim_check(sizeof...(index), "index dimension mismatch");
}
check_rank_precondition(sizeof...(index));
return *(static_cast<const T *>(array::data())
+ byte_offset(ssize_t(index)...) / itemsize());
}
@ -1242,13 +1329,33 @@ public:
// Mutable reference to element at a given index
template <typename... Ix>
T &mutable_at(Ix... index) {
if ((ssize_t) sizeof...(index) != ndim()) {
fail_dim_check(sizeof...(index), "index dimension mismatch");
}
check_rank_precondition(sizeof...(index));
return *(static_cast<T *>(array::mutable_data())
+ byte_offset(ssize_t(index)...) / itemsize());
}
// const-reference to element at a given index without bounds checking
template <typename... Ix>
const T &operator()(Ix... index) const {
#if !defined(NDEBUG)
check_rank_precondition(sizeof...(index));
check_dimensions(index...);
#endif
return *(static_cast<const T *>(array::data())
+ detail::byte_offset_unsafe(strides(), ssize_t(index)...) / itemsize());
}
// mutable reference to element at a given index without bounds checking
template <typename... Ix>
T &operator()(Ix... index) {
#if !defined(NDEBUG)
check_rank_precondition(sizeof...(index));
check_dimensions(index...);
#endif
return *(static_cast<T *>(array::mutable_data())
+ detail::byte_offset_unsafe(strides(), ssize_t(index)...) / itemsize());
}
/**
* Returns a proxy object that provides access to the array's data without bounds or
* dimensionality checking. Will throw if the array is missing the `writeable` flag. Use with
@ -1305,6 +1412,13 @@ protected:
| ExtraFlags,
nullptr);
}
private:
void check_rank_precondition(ssize_t dim) const {
if (dim != ndim()) {
fail_dim_check(dim, "index dimension mismatch");
}
}
};
template <typename T>
@ -1428,7 +1542,11 @@ public:
};
template <typename T>
struct npy_format_descriptor<T, enable_if_t<is_same_ignoring_cvref<T, PyObject *>::value>> {
struct npy_format_descriptor<
T,
enable_if_t<is_same_ignoring_cvref<T, PyObject *>::value
|| ((std::is_same<T, handle>::value || std::is_same<T, object>::value)
&& sizeof(T) == sizeof(PyObject *))>> {
static constexpr auto name = const_name("object");
static constexpr int value = npy_api::NPY_OBJECT_;
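
For orientation, a hedged sketch of how the new `dtype::normalized_num()` / `dtype::num_of<T>()` pair added above can be combined in a dispatch; the function name and dispatch body are illustrative, and a 1-D array is assumed:

```c++
// Illustrative dispatch on a 1-D array's element type using the normalized
// type number, so that platform aliases such as NPY_LONG_ vs NPY_INT_ /
// NPY_LONGLONG_ all land on the same case.
#include <pybind11/numpy.h>

#include <cstdint>
#include <stdexcept>

namespace py = pybind11;

double first_element_as_double(const py::array &a) {
    switch (a.dtype().normalized_num()) {
        case py::dtype::num_of<std::int32_t>():
            return static_cast<double>(py::array_t<std::int32_t>(a).at(0));
        case py::dtype::num_of<std::int64_t>():
            return static_cast<double>(py::array_t<std::int64_t>(a).at(0));
        case py::dtype::num_of<double>():
            return py::array_t<double>(a).at(0);
        default:
            throw std::runtime_error("unsupported dtype");
    }
}
```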

View File

@ -301,9 +301,20 @@ protected:
constexpr bool has_kw_only_args = any_of<std::is_same<kw_only, Extra>...>::value,
has_pos_only_args = any_of<std::is_same<pos_only, Extra>...>::value,
has_arg_annotations = any_of<is_keyword<Extra>...>::value;
constexpr bool has_is_method = any_of<std::is_same<is_method, Extra>...>::value;
// The implicit `self` argument is not present and not counted in method definitions.
constexpr bool has_args = cast_in::args_pos >= 0;
constexpr bool is_method_with_self_arg_only = has_is_method && !has_args;
static_assert(has_arg_annotations || !has_kw_only_args,
"py::kw_only requires the use of argument annotations");
static_assert(has_arg_annotations || !has_pos_only_args,
static_assert(((/* Need `py::arg("arg_name")` annotation in function/method. */
has_arg_annotations)
|| (/* Allow methods with no arguments `def method(self, /): ...`.
* A method has at least one argument `self`. There can be no
* `py::arg` annotation. E.g. `class.def("method", py::pos_only())`.
*/
is_method_with_self_arg_only))
|| !has_pos_only_args,
"py::pos_only requires the use of argument annotations (for docstrings "
"and aligning the annotations to the argument)");
@ -2022,9 +2033,11 @@ struct enum_base {
.format(std::move(type_name), enum_name(arg), int_(arg));
},
name("__repr__"),
is_method(m_base));
is_method(m_base),
pos_only());
m_base.attr("name") = property(cpp_function(&enum_name, name("name"), is_method(m_base)));
m_base.attr("name")
= property(cpp_function(&enum_name, name("name"), is_method(m_base), pos_only()));
m_base.attr("__str__") = cpp_function(
[](handle arg) -> str {
@ -2032,7 +2045,8 @@ struct enum_base {
return pybind11::str("{}.{}").format(std::move(type_name), enum_name(arg));
},
name("__str__"),
is_method(m_base));
is_method(m_base),
pos_only());
if (options::show_enum_members_docstring()) {
m_base.attr("__doc__") = static_property(
@ -2087,7 +2101,8 @@ struct enum_base {
}, \
name(op), \
is_method(m_base), \
arg("other"))
arg("other"), \
pos_only())
#define PYBIND11_ENUM_OP_CONV(op, expr) \
m_base.attr(op) = cpp_function( \
@ -2097,7 +2112,8 @@ struct enum_base {
}, \
name(op), \
is_method(m_base), \
arg("other"))
arg("other"), \
pos_only())
#define PYBIND11_ENUM_OP_CONV_LHS(op, expr) \
m_base.attr(op) = cpp_function( \
@ -2107,7 +2123,8 @@ struct enum_base {
}, \
name(op), \
is_method(m_base), \
arg("other"))
arg("other"), \
pos_only())
if (is_convertible) {
PYBIND11_ENUM_OP_CONV_LHS("__eq__", !b.is_none() && a.equal(b));
@ -2127,7 +2144,8 @@ struct enum_base {
m_base.attr("__invert__")
= cpp_function([](const object &arg) { return ~(int_(arg)); },
name("__invert__"),
is_method(m_base));
is_method(m_base),
pos_only());
}
} else {
PYBIND11_ENUM_OP_STRICT("__eq__", int_(a).equal(int_(b)), return false);
@ -2147,11 +2165,15 @@ struct enum_base {
#undef PYBIND11_ENUM_OP_CONV
#undef PYBIND11_ENUM_OP_STRICT
m_base.attr("__getstate__") = cpp_function(
[](const object &arg) { return int_(arg); }, name("__getstate__"), is_method(m_base));
m_base.attr("__getstate__") = cpp_function([](const object &arg) { return int_(arg); },
name("__getstate__"),
is_method(m_base),
pos_only());
m_base.attr("__hash__") = cpp_function(
[](const object &arg) { return int_(arg); }, name("__hash__"), is_method(m_base));
m_base.attr("__hash__") = cpp_function([](const object &arg) { return int_(arg); },
name("__hash__"),
is_method(m_base),
pos_only());
}
PYBIND11_NOINLINE void value(char const *name_, object value, const char *doc = nullptr) {
@ -2243,9 +2265,9 @@ public:
m_base.init(is_arithmetic, is_convertible);
def(init([](Scalar i) { return static_cast<Type>(i); }), arg("value"));
def_property_readonly("value", [](Type value) { return (Scalar) value; });
def("__int__", [](Type value) { return (Scalar) value; });
def("__index__", [](Type value) { return (Scalar) value; });
def_property_readonly("value", [](Type value) { return (Scalar) value; }, pos_only());
def("__int__", [](Type value) { return (Scalar) value; }, pos_only());
def("__index__", [](Type value) { return (Scalar) value; }, pos_only());
attr("__setstate__") = cpp_function(
[](detail::value_and_holder &v_h, Scalar arg) {
detail::initimpl::setstate<Base>(
@ -2254,7 +2276,8 @@ public:
detail::is_new_style_constructor(),
pybind11::name("__setstate__"),
is_method(*this),
arg("state"));
arg("state"),
pos_only());
}
/// Export enumeration entries into the parent scope
@ -2326,13 +2349,20 @@ keep_alive_impl(size_t Nurse, size_t Patient, function_call &call, handle ret) {
inline std::pair<decltype(internals::registered_types_py)::iterator, bool>
all_type_info_get_cache(PyTypeObject *type) {
auto res = with_internals([type](internals &internals) {
return internals
auto ins = internals
.registered_types_py
#ifdef __cpp_lib_unordered_map_try_emplace
.try_emplace(type);
#else
.emplace(type, std::vector<detail::type_info *>());
#endif
if (ins.second) {
// For free-threading mode, this call must be under
// the with_internals() mutex lock, to avoid that other threads
// continue running with the empty ins.first->second.
all_type_info_populate(type, ins.first->second);
}
return ins;
});
if (res.second) {
// New cache entry created; set up a weak reference to automatically remove it if the type
@ -2433,7 +2463,8 @@ iterator make_iterator_impl(Iterator first, Sentinel last, Extra &&...extra) {
if (!detail::get_type_info(typeid(state), false)) {
class_<state>(handle(), "iterator", pybind11::module_local())
.def("__iter__", [](state &s) -> state & { return s; })
.def(
"__iter__", [](state &s) -> state & { return s; }, pos_only())
.def(
"__next__",
[](state &s) -> ValueType {
@ -2450,6 +2481,7 @@ iterator make_iterator_impl(Iterator first, Sentinel last, Extra &&...extra) {
// NOLINTNEXTLINE(readability-const-return-type) // PR #3263
},
std::forward<Extra>(extra)...,
pos_only(),
Policy);
}

View File

@ -2226,6 +2226,18 @@ class kwargs : public dict {
PYBIND11_OBJECT_DEFAULT(kwargs, dict, PyDict_Check)
};
// Subclasses of args and kwargs to support type hinting
// as defined in PEP 484. See #5357 for more info.
template <typename T>
class Args : public args {
using args::args;
};
template <typename T>
class KWArgs : public kwargs {
using kwargs::kwargs;
};
class anyset : public object {
public:
PYBIND11_OBJECT(anyset, object, PyAnySet_Check)
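
Together with the handle_type_name<Args<T>> / handle_type_name<KWArgs<T>> specializations in the cast.h hunk earlier, these subclasses let variadic bindings carry element type hints; a hedged usage sketch, with placeholder module and function names:

```c++
// Sketch: binding a variadic function with typed *args / **kwargs. At runtime
// Args<T>/KWArgs<T> behave like py::args / py::kwargs; the template parameter
// only feeds the generated signature, roughly "f(*args: int, **kwargs: str)".
// Names are illustrative.
#include <pybind11/pybind11.h>

namespace py = pybind11;

PYBIND11_MODULE(example_typed_varargs, m) {
    m.def("f", [](py::Args<py::int_> args, py::KWArgs<py::str> kwargs) {
        return py::len(args) + py::len(kwargs);
    });
}
```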

View File

@ -491,6 +491,12 @@ foreach(target ${test_targets})
if(SKBUILD)
install(TARGETS ${target} LIBRARY DESTINATION .)
endif()
if("${target}" STREQUAL "exo_planet_c_api")
if(CMAKE_CXX_COMPILER_ID MATCHES "(GNU|Intel|Clang|NVHPC)")
target_compile_options(${target} PRIVATE -fno-exceptions)
endif()
endif()
endforeach()
# Provide nice organisation in IDEs

View File

@ -198,10 +198,11 @@ def pytest_assertrepr_compare(op, left, right): # noqa: ARG001
def gc_collect():
"""Run the garbage collector twice (needed when running
"""Run the garbage collector three times (needed when running
reference counting tests with PyPy)"""
gc.collect()
gc.collect()
gc.collect()
def pytest_configure():

View File

@ -312,8 +312,16 @@ void print_created(T *inst, Values &&...values) {
}
template <class T, typename... Values>
void print_destroyed(T *inst, Values &&...values) { // Prints but doesn't store given values
/*
* On GraalPy, destructors can trigger anywhere and this can cause random
* failures in unrelated tests.
*/
#if !defined(GRAALVM_PYTHON)
print_constr_details(inst, "destroyed", values...);
track_destroyed(inst);
#else
py::detail::silence_unused_warnings(inst, values...);
#endif
}
template <class T, typename... Values>
void print_values(T *inst, Values &&...values) {

View File

@ -1,59 +1,33 @@
// Copyright (c) 2024 The pybind Community.
// In production situations it is totally fine to build with
// C++ Exception Handling enabled. However, here we want to ensure that
// C++ Exception Handling is not required.
#if defined(_MSC_VER) || defined(__EMSCRIPTEN__)
// Too much trouble making the required cmake changes (see PR #5375).
#else
# ifdef __cpp_exceptions
// https://isocpp.org/std/standing-documents/sd-6-sg10-feature-test-recommendations#__cpp_exceptions
# error This test is meant to be built with C++ Exception Handling disabled, but __cpp_exceptions is defined.
# endif
# ifdef __EXCEPTIONS
// https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html
# error This test is meant to be built with C++ Exception Handling disabled, but __EXCEPTIONS is defined.
# endif
#endif
// THIS MUST STAY AT THE TOP!
#include <pybind11/pybind11.h> // EXCLUSIVELY for PYBIND11_PLATFORM_ABI_ID
// Potential future direction to maximize reusability:
// (e.g. for use from SWIG, Cython, PyCLIF, nanobind):
// #include <pybind11/compat/platform_abi_id.h>
// This would only depend on:
// 1. A C++ compiler, WITHOUT requiring -fexceptions.
// 2. Python.h
#include <pybind11/conduit/pybind11_conduit_v1.h> // VERY light-weight dependency.
#include "test_cpp_conduit_traveler_types.h"
#include <Python.h>
#include <typeinfo>
namespace {
void *get_cpp_conduit_void_ptr(PyObject *py_obj, const std::type_info *cpp_type_info) {
PyObject *cpp_type_info_capsule
= PyCapsule_New(const_cast<void *>(static_cast<const void *>(cpp_type_info)),
typeid(std::type_info).name(),
nullptr);
if (cpp_type_info_capsule == nullptr) {
return nullptr;
}
PyObject *cpp_conduit = PyObject_CallMethod(py_obj,
"_pybind11_conduit_v1_",
"yOy",
PYBIND11_PLATFORM_ABI_ID,
cpp_type_info_capsule,
"raw_pointer_ephemeral");
Py_DECREF(cpp_type_info_capsule);
if (cpp_conduit == nullptr) {
return nullptr;
}
void *void_ptr = PyCapsule_GetPointer(cpp_conduit, cpp_type_info->name());
Py_DECREF(cpp_conduit);
if (PyErr_Occurred()) {
return nullptr;
}
return void_ptr;
}
template <typename T>
T *get_cpp_conduit_type_ptr(PyObject *py_obj) {
void *void_ptr = get_cpp_conduit_void_ptr(py_obj, &typeid(T));
if (void_ptr == nullptr) {
return nullptr;
}
return static_cast<T *>(void_ptr);
}
extern "C" PyObject *wrapGetLuggage(PyObject * /*self*/, PyObject *traveler) {
const auto *cpp_traveler
= get_cpp_conduit_type_ptr<pybind11_tests::test_cpp_conduit::Traveler>(traveler);
const auto *cpp_traveler = pybind11_conduit_v1::get_type_pointer_ephemeral<
pybind11_tests::test_cpp_conduit::Traveler>(traveler);
if (cpp_traveler == nullptr) {
return nullptr;
}
@ -61,9 +35,8 @@ extern "C" PyObject *wrapGetLuggage(PyObject * /*self*/, PyObject *traveler) {
}
extern "C" PyObject *wrapGetPoints(PyObject * /*self*/, PyObject *premium_traveler) {
const auto *cpp_premium_traveler
= get_cpp_conduit_type_ptr<pybind11_tests::test_cpp_conduit::PremiumTraveler>(
premium_traveler);
const auto *cpp_premium_traveler = pybind11_conduit_v1::get_type_pointer_ephemeral<
pybind11_tests::test_cpp_conduit::PremiumTraveler>(premium_traveler);
if (cpp_premium_traveler == nullptr) {
return nullptr;
}

View File

@ -50,6 +50,13 @@ main_headers = {
"include/pybind11/warnings.h",
}
conduit_headers = {
"include/pybind11/conduit/README.txt",
"include/pybind11/conduit/pybind11_conduit_v1.h",
"include/pybind11/conduit/pybind11_platform_abi_id.h",
"include/pybind11/conduit/wrap_include_python_h.h",
}
detail_headers = {
"include/pybind11/detail/class.h",
"include/pybind11/detail/common.h",
@ -97,7 +104,7 @@ py_files = {
"setup_helpers.py",
}
headers = main_headers | detail_headers | eigen_headers | stl_headers
headers = main_headers | conduit_headers | detail_headers | eigen_headers | stl_headers
src_files = headers | cmake_files | pkgconfig_files
all_files = src_files | py_files
@ -106,6 +113,7 @@ sdist_files = {
"pybind11",
"pybind11/include",
"pybind11/include/pybind11",
"pybind11/include/pybind11/conduit",
"pybind11/include/pybind11/detail",
"pybind11/include/pybind11/eigen",
"pybind11/include/pybind11/stl",

View File

@ -128,4 +128,9 @@ PYBIND11_MODULE(pybind11_tests, m, py::mod_gil_not_used()) {
for (const auto &initializer : initializers()) {
initializer(m);
}
py::class_<TestContext>(m, "TestContext")
.def(py::init<>(&TestContext::createNewContextForInit))
.def("__enter__", &TestContext::contextEnter)
.def("__exit__", &TestContext::contextExit);
}

View File

@ -96,3 +96,24 @@ void ignoreOldStyleInitWarnings(F &&body) {
)",
py::dict(py::arg("body") = py::cpp_function(body)));
}
// See PR #5419 for background.
class TestContext {
public:
TestContext() = delete;
TestContext(const TestContext &) = delete;
TestContext(TestContext &&) = delete;
static TestContext *createNewContextForInit() { return new TestContext("new-context"); }
pybind11::object contextEnter() {
py::object contextObj = py::cast(*this);
return contextObj;
}
void contextExit(const pybind11::object & /*excType*/,
const pybind11::object & /*excVal*/,
const pybind11::object & /*excTb*/) {}
private:
explicit TestContext(const std::string &context) : context(context) {}
std::string context;
};
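A small usage sketch for the binding registered in pybind11_tests above; entering the context from Python routes through createNewContextForInit and exercises the all_type_info lookup referenced in PR #5419 (see the multithreaded test later in this diff):

```python
# Sketch: TestContext() calls createNewContextForInit, and the with-statement
# drives contextEnter/contextExit; casting *this inside __enter__ forces a
# registered-type lookup each time the context is entered.
from pybind11_tests import TestContext

with TestContext():
    pass  # entering and exiting is the whole point of the exercise
```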

View File

@ -167,6 +167,125 @@ TEST_SUBMODULE(buffers, m) {
sizeof(float)});
});
// A matrix that uses Fortran storage order.
class FortranMatrix : public Matrix {
public:
FortranMatrix(py::ssize_t rows, py::ssize_t cols) : Matrix(cols, rows) {
print_created(this,
std::to_string(rows) + "x" + std::to_string(cols) + " Fortran matrix");
}
float operator()(py::ssize_t i, py::ssize_t j) const { return Matrix::operator()(j, i); }
float &operator()(py::ssize_t i, py::ssize_t j) { return Matrix::operator()(j, i); }
using Matrix::data;
py::ssize_t rows() const { return Matrix::cols(); }
py::ssize_t cols() const { return Matrix::rows(); }
};
py::class_<FortranMatrix, Matrix>(m, "FortranMatrix", py::buffer_protocol())
.def(py::init<py::ssize_t, py::ssize_t>())
.def("rows", &FortranMatrix::rows)
.def("cols", &FortranMatrix::cols)
/// Bare bones interface
.def("__getitem__",
[](const FortranMatrix &m, std::pair<py::ssize_t, py::ssize_t> i) {
if (i.first >= m.rows() || i.second >= m.cols()) {
throw py::index_error();
}
return m(i.first, i.second);
})
.def("__setitem__",
[](FortranMatrix &m, std::pair<py::ssize_t, py::ssize_t> i, float v) {
if (i.first >= m.rows() || i.second >= m.cols()) {
throw py::index_error();
}
m(i.first, i.second) = v;
})
/// Provide buffer access
.def_buffer([](FortranMatrix &m) -> py::buffer_info {
return py::buffer_info(m.data(), /* Pointer to buffer */
{m.rows(), m.cols()}, /* Buffer dimensions */
/* Strides (in bytes) for each index */
{sizeof(float), sizeof(float) * size_t(m.rows())});
});
// A matrix that uses a discontiguous underlying memory block.
class DiscontiguousMatrix : public Matrix {
public:
DiscontiguousMatrix(py::ssize_t rows,
py::ssize_t cols,
py::ssize_t row_factor,
py::ssize_t col_factor)
: Matrix(rows * row_factor, cols * col_factor), m_row_factor(row_factor),
m_col_factor(col_factor) {
print_created(this,
std::to_string(rows) + "(*" + std::to_string(row_factor) + ")x"
+ std::to_string(cols) + "(*" + std::to_string(col_factor)
+ ") matrix");
}
~DiscontiguousMatrix() {
print_destroyed(this,
std::to_string(rows() / m_row_factor) + "(*"
+ std::to_string(m_row_factor) + ")x"
+ std::to_string(cols() / m_col_factor) + "(*"
+ std::to_string(m_col_factor) + ") matrix");
}
float operator()(py::ssize_t i, py::ssize_t j) const {
return Matrix::operator()(i * m_row_factor, j * m_col_factor);
}
float &operator()(py::ssize_t i, py::ssize_t j) {
return Matrix::operator()(i * m_row_factor, j * m_col_factor);
}
using Matrix::data;
py::ssize_t rows() const { return Matrix::rows() / m_row_factor; }
py::ssize_t cols() const { return Matrix::cols() / m_col_factor; }
py::ssize_t row_factor() const { return m_row_factor; }
py::ssize_t col_factor() const { return m_col_factor; }
private:
py::ssize_t m_row_factor;
py::ssize_t m_col_factor;
};
py::class_<DiscontiguousMatrix, Matrix>(m, "DiscontiguousMatrix", py::buffer_protocol())
.def(py::init<py::ssize_t, py::ssize_t, py::ssize_t, py::ssize_t>())
.def("rows", &DiscontiguousMatrix::rows)
.def("cols", &DiscontiguousMatrix::cols)
/// Bare bones interface
.def("__getitem__",
[](const DiscontiguousMatrix &m, std::pair<py::ssize_t, py::ssize_t> i) {
if (i.first >= m.rows() || i.second >= m.cols()) {
throw py::index_error();
}
return m(i.first, i.second);
})
.def("__setitem__",
[](DiscontiguousMatrix &m, std::pair<py::ssize_t, py::ssize_t> i, float v) {
if (i.first >= m.rows() || i.second >= m.cols()) {
throw py::index_error();
}
m(i.first, i.second) = v;
})
/// Provide buffer access
.def_buffer([](DiscontiguousMatrix &m) -> py::buffer_info {
return py::buffer_info(m.data(), /* Pointer to buffer */
{m.rows(), m.cols()}, /* Buffer dimensions */
/* Strides (in bytes) for each index */
{size_t(m.col_factor()) * sizeof(float) * size_t(m.cols())
* size_t(m.row_factor()),
size_t(m.col_factor()) * sizeof(float)});
});
class BrokenMatrix : public Matrix {
public:
BrokenMatrix(py::ssize_t rows, py::ssize_t cols) : Matrix(rows, cols) {}
@ -268,4 +387,56 @@ TEST_SUBMODULE(buffers, m) {
});
m.def("get_buffer_info", [](const py::buffer &buffer) { return buffer.request(); });
// Expose Py_buffer for testing.
m.attr("PyBUF_FORMAT") = PyBUF_FORMAT;
m.attr("PyBUF_SIMPLE") = PyBUF_SIMPLE;
m.attr("PyBUF_ND") = PyBUF_ND;
m.attr("PyBUF_STRIDES") = PyBUF_STRIDES;
m.attr("PyBUF_INDIRECT") = PyBUF_INDIRECT;
m.attr("PyBUF_C_CONTIGUOUS") = PyBUF_C_CONTIGUOUS;
m.attr("PyBUF_F_CONTIGUOUS") = PyBUF_F_CONTIGUOUS;
m.attr("PyBUF_ANY_CONTIGUOUS") = PyBUF_ANY_CONTIGUOUS;
m.def("get_py_buffer", [](const py::object &object, int flags) {
Py_buffer buffer;
memset(&buffer, 0, sizeof(Py_buffer));
if (PyObject_GetBuffer(object.ptr(), &buffer, flags) == -1) {
throw py::error_already_set();
}
auto SimpleNamespace = py::module_::import("types").attr("SimpleNamespace");
py::object result = SimpleNamespace("len"_a = buffer.len,
"readonly"_a = buffer.readonly,
"itemsize"_a = buffer.itemsize,
"format"_a = buffer.format,
"ndim"_a = buffer.ndim,
"shape"_a = py::none(),
"strides"_a = py::none(),
"suboffsets"_a = py::none());
if (buffer.shape != nullptr) {
py::list l;
for (auto i = 0; i < buffer.ndim; i++) {
l.append(buffer.shape[i]);
}
py::setattr(result, "shape", l);
}
if (buffer.strides != nullptr) {
py::list l;
for (auto i = 0; i < buffer.ndim; i++) {
l.append(buffer.strides[i]);
}
py::setattr(result, "strides", l);
}
if (buffer.suboffsets != nullptr) {
py::list l;
for (auto i = 0; i < buffer.ndim; i++) {
l.append(buffer.suboffsets[i]);
}
py::setattr(result, "suboffsets", l);
}
PyBuffer_Release(&buffer);
return result;
});
}
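For comparison, the same Py_buffer fields that get_py_buffer reports can be observed from pure Python through memoryview; a standalone sketch using only NumPy:

```python
# Sketch: a C-contiguous float32 array exposes the ndim/shape/strides/format
# values that the PyBUF_STRIDES checks in the tests expect.
import numpy as np

mat = np.empty((5, 4), dtype=np.float32)
view = memoryview(mat)
assert view.ndim == 2
assert view.shape == (5, 4)
assert view.strides == (4 * view.itemsize, view.itemsize)
assert view.format == "f"
assert view.c_contiguous and not view.f_contiguous
```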

View File

@ -239,3 +239,163 @@ def test_buffer_exception():
memoryview(m.BrokenMatrix(1, 1))
assert isinstance(excinfo.value.__cause__, RuntimeError)
assert "for context" in str(excinfo.value.__cause__)
@pytest.mark.parametrize("type", ["pybind11", "numpy"])
def test_c_contiguous_to_pybuffer(type):
if type == "pybind11":
mat = m.Matrix(5, 4)
elif type == "numpy":
mat = np.empty((5, 4), dtype=np.float32)
else:
raise ValueError(f"Unknown parametrization {type}")
info = m.get_py_buffer(mat, m.PyBUF_SIMPLE)
assert info.format is None
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 0 # See discussion on PR #5407.
assert info.shape is None
assert info.strides is None
assert info.suboffsets is None
assert not info.readonly
info = m.get_py_buffer(mat, m.PyBUF_SIMPLE | m.PyBUF_FORMAT)
assert info.format == "f"
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 0 # See discussion on PR #5407.
assert info.shape is None
assert info.strides is None
assert info.suboffsets is None
assert not info.readonly
info = m.get_py_buffer(mat, m.PyBUF_ND)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides is None
assert info.suboffsets is None
assert not info.readonly
info = m.get_py_buffer(mat, m.PyBUF_STRIDES)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides == [4 * info.itemsize, info.itemsize]
assert info.suboffsets is None
assert not info.readonly
info = m.get_py_buffer(mat, m.PyBUF_INDIRECT)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides == [4 * info.itemsize, info.itemsize]
assert info.suboffsets is None # Should be filled in here, but we don't use it.
assert not info.readonly
@pytest.mark.parametrize("type", ["pybind11", "numpy"])
def test_fortran_contiguous_to_pybuffer(type):
if type == "pybind11":
mat = m.FortranMatrix(5, 4)
elif type == "numpy":
mat = np.empty((5, 4), dtype=np.float32, order="F")
else:
raise ValueError(f"Unknown parametrization {type}")
# A Fortran-shaped buffer can only be accessed at PyBUF_STRIDES level or higher.
info = m.get_py_buffer(mat, m.PyBUF_STRIDES)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides == [info.itemsize, 5 * info.itemsize]
assert info.suboffsets is None
assert not info.readonly
info = m.get_py_buffer(mat, m.PyBUF_INDIRECT)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides == [info.itemsize, 5 * info.itemsize]
assert info.suboffsets is None # Should be filled in here, but we don't use it.
assert not info.readonly
@pytest.mark.parametrize("type", ["pybind11", "numpy"])
def test_discontiguous_to_pybuffer(type):
if type == "pybind11":
mat = m.DiscontiguousMatrix(5, 4, 2, 3)
elif type == "numpy":
mat = np.empty((5 * 2, 4 * 3), dtype=np.float32)[::2, ::3]
else:
raise ValueError(f"Unknown parametrization {type}")
info = m.get_py_buffer(mat, m.PyBUF_STRIDES)
assert info.itemsize == ctypes.sizeof(ctypes.c_float)
assert info.len == 5 * 4 * info.itemsize
assert info.ndim == 2
assert info.shape == [5, 4]
assert info.strides == [2 * 4 * 3 * info.itemsize, 3 * info.itemsize]
assert info.suboffsets is None
assert not info.readonly
@pytest.mark.parametrize("type", ["pybind11", "numpy"])
def test_to_pybuffer_contiguity(type):
def check_strides(mat):
# The full block is memset to 0, so fill it with non-zero in real spots.
expected = np.arange(1, 5 * 4 + 1).reshape((5, 4))
for i in range(5):
for j in range(4):
mat[i, j] = expected[i, j]
# If all strides are correct, the exposed buffer should match the input.
np.testing.assert_array_equal(np.array(mat), expected)
if type == "pybind11":
cmat = m.Matrix(5, 4) # C contiguous.
fmat = m.FortranMatrix(5, 4) # Fortran contiguous.
dmat = m.DiscontiguousMatrix(5, 4, 2, 3) # Not contiguous.
expected_exception = BufferError
elif type == "numpy":
cmat = np.empty((5, 4), dtype=np.float32) # C contiguous.
fmat = np.empty((5, 4), dtype=np.float32, order="F") # Fortran contiguous.
dmat = np.empty((5 * 2, 4 * 3), dtype=np.float32)[::2, ::3] # Not contiguous.
# NumPy incorrectly raises ValueError; when the minimum NumPy requirement is
# above the version that fixes https://github.com/numpy/numpy/issues/3634 then
# BufferError can be used everywhere.
expected_exception = (BufferError, ValueError)
else:
raise ValueError(f"Unknown parametrization {type}")
check_strides(cmat)
# Should work in C-contiguous mode, but not Fortran order.
m.get_py_buffer(cmat, m.PyBUF_C_CONTIGUOUS)
m.get_py_buffer(cmat, m.PyBUF_ANY_CONTIGUOUS)
with pytest.raises(expected_exception):
m.get_py_buffer(cmat, m.PyBUF_F_CONTIGUOUS)
check_strides(fmat)
# These flags imply C-contiguity, so won't work.
with pytest.raises(expected_exception):
m.get_py_buffer(fmat, m.PyBUF_SIMPLE)
with pytest.raises(expected_exception):
m.get_py_buffer(fmat, m.PyBUF_ND)
# Should work in Fortran-contiguous mode, but not C order.
with pytest.raises(expected_exception):
m.get_py_buffer(fmat, m.PyBUF_C_CONTIGUOUS)
m.get_py_buffer(fmat, m.PyBUF_ANY_CONTIGUOUS)
m.get_py_buffer(fmat, m.PyBUF_F_CONTIGUOUS)
check_strides(dmat)
# Should never work.
with pytest.raises(expected_exception):
m.get_py_buffer(dmat, m.PyBUF_SIMPLE)
with pytest.raises(expected_exception):
m.get_py_buffer(dmat, m.PyBUF_ND)
with pytest.raises(expected_exception):
m.get_py_buffer(dmat, m.PyBUF_C_CONTIGUOUS)
with pytest.raises(expected_exception):
m.get_py_buffer(dmat, m.PyBUF_ANY_CONTIGUOUS)
with pytest.raises(expected_exception):
m.get_py_buffer(dmat, m.PyBUF_F_CONTIGUOUS)
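The contiguity outcomes asserted above match what NumPy itself reports for the three layouts; a quick standalone cross-check:

```python
# Sketch: C-contiguous, Fortran-contiguous, and strided (non-contiguous) views.
import numpy as np

cmat = np.empty((5, 4), dtype=np.float32)
fmat = np.empty((5, 4), dtype=np.float32, order="F")
dmat = np.empty((5 * 2, 4 * 3), dtype=np.float32)[::2, ::3]
assert cmat.flags["C_CONTIGUOUS"] and not cmat.flags["F_CONTIGUOUS"]
assert fmat.flags["F_CONTIGUOUS"] and not fmat.flags["C_CONTIGUOUS"]
assert not dmat.flags["C_CONTIGUOUS"] and not dmat.flags["F_CONTIGUOUS"]
```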

View File

@ -190,6 +190,7 @@ def test_alive_gc_multi_derived(capture):
)
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_return_none(capture):
n_inst = ConstructorStats.detail_reg_inst()
with capture:
@ -217,6 +218,7 @@ def test_return_none(capture):
assert capture == "Releasing parent."
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_keep_alive_constructor(capture):
n_inst = ConstructorStats.detail_reg_inst()

View File

@ -1,5 +1,6 @@
from __future__ import annotations
import sys
from unittest import mock
import pytest
@ -27,6 +28,9 @@ def test_instance(msg):
instance = m.NoConstructor.new_instance()
if env.GRAALPY:
pytest.skip("ConstructorStats is incompatible with GraalPy.")
cstats = ConstructorStats.get(m.NoConstructor)
assert cstats.alive() == 1
del instance
@ -35,6 +39,10 @@ def test_instance(msg):
def test_instance_new():
instance = m.NoConstructorNew() # .__new__(m.NoConstructor.__class__)
if env.GRAALPY:
pytest.skip("ConstructorStats is incompatible with GraalPy.")
cstats = ConstructorStats.get(m.NoConstructorNew)
assert cstats.alive() == 1
del instance
@ -501,3 +509,31 @@ def test_pr4220_tripped_over_this():
m.Empty0().get_msg()
== "This is really only meant to exercise successful compilation."
)
@pytest.mark.skipif(sys.platform.startswith("emscripten"), reason="Requires threads")
def test_all_type_info_multithreaded():
# See PR #5419 for background.
import threading
from pybind11_tests import TestContext
class Context(TestContext):
pass
num_runs = 10
num_threads = 4
barrier = threading.Barrier(num_threads)
def func():
barrier.wait()
with Context():
pass
for _ in range(num_runs):
threads = [threading.Thread(target=func) for _ in range(num_threads)]
for thread in threads:
thread.start()
for thread in threads:
thread.join()

View File

@ -2,6 +2,7 @@ from __future__ import annotations
import pytest
import env # noqa: F401
from pybind11_tests import copy_move_policies as m
@ -17,6 +18,7 @@ def test_lacking_move_ctor():
assert "is neither movable nor copyable!" in str(excinfo.value)
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_move_and_copy_casts():
"""Cast some values in C++ via custom type casters and count the number of moves/copies."""
@ -44,6 +46,7 @@ def test_move_and_copy_casts():
assert c_m.alive() + c_mc.alive() + c_c.alive() == 0
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_move_and_copy_loads():
"""Call some functions that load arguments via custom type casters and count the number of
moves/copies."""
@ -77,6 +80,7 @@ def test_move_and_copy_loads():
@pytest.mark.skipif(not m.has_optional, reason="no <optional>")
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_move_and_copy_load_optional():
"""Tests move/copy loads of std::optional arguments"""

View File

@ -2,6 +2,7 @@ from __future__ import annotations
import pytest
import env # noqa: F401
from pybind11_tests import custom_type_casters as m
@ -94,6 +95,7 @@ def test_noconvert_args(msg):
)
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def test_custom_caster_destruction():
"""Tests that returning a pointer to a type that gets converted with a custom type caster gets
destroyed when the function has py::return_value_policy::take_ownership policy applied.

View File

@ -395,6 +395,7 @@ def test_eigen_return_references():
np.testing.assert_array_equal(a_copy5, c5want)
@pytest.mark.skipif("env.GRAALPY", reason="Cannot reliably trigger GC")
def assert_keeps_alive(cl, method, *args):
cstats = ConstructorStats.get(cl)
start_with = cstats.alive()

View File

@ -1,6 +1,8 @@
# ruff: noqa: SIM201 SIM300 SIM202
from __future__ import annotations
import re
import pytest
import env # noqa: F401
@ -271,3 +273,61 @@ def test_docstring_signatures():
def test_str_signature():
for enum_type in [m.ScopedEnum, m.UnscopedEnum]:
assert enum_type.__str__.__doc__.startswith("__str__")
def test_generated_dunder_methods_pos_only():
for enum_type in [m.ScopedEnum, m.UnscopedEnum]:
for binary_op in [
"__eq__",
"__ne__",
"__ge__",
"__gt__",
"__lt__",
"__le__",
"__and__",
"__rand__",
# "__or__", # fail with some compilers (__doc__ = "Return self|value.")
# "__ror__", # fail with some compilers (__doc__ = "Return value|self.")
"__xor__",
"__rxor__",
]:
method = getattr(enum_type, binary_op, None)
if method is not None:
assert (
re.match(
rf"^{binary_op}\(self: [\w\.]+, other: [\w\.]+, /\)",
method.__doc__,
)
is not None
)
for unary_op in [
"__int__",
"__index__",
"__hash__",
"__str__",
"__repr__",
]:
method = getattr(enum_type, unary_op, None)
if method is not None:
assert (
re.match(
rf"^{unary_op}\(self: [\w\.]+, /\)",
method.__doc__,
)
is not None
)
assert (
re.match(
r"^__getstate__\(self: [\w\.]+, /\)",
enum_type.__getstate__.__doc__,
)
is not None
)
assert (
re.match(
r"^__setstate__\(self: [\w\.]+, state: [\w\.]+, /\)",
enum_type.__setstate__.__doc__,
)
is not None
)
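The trailing "/" that these regexes look for is Python's positional-only marker; a plain-Python illustration of the calling convention it documents:

```python
# Sketch: with a positional-only parameter list, passing "other" by keyword is
# rejected -- the same guarantee the generated "(self: ..., other: ..., /)"
# signatures advertise.
def eq(self, other, /):
    return self is other

eq(1, 2)           # accepted
# eq(1, other=2)   # TypeError: positional-only argument passed as keyword
```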

View File

@ -6,14 +6,8 @@ from io import StringIO
import pytest
import env # noqa: F401
from pybind11_tests import iostream as m
pytestmark = pytest.mark.skipif(
"env.GRAALPY",
reason="Delayed prints from finalizers from other tests can end up in the output",
)
def test_captured(capsys):
msg = "I've been redirected to Python, I hope!"

View File

@ -14,26 +14,6 @@
#include <utility>
// Classes needed for subclass test.
class ArgsSubclass : public py::args {
using py::args::args;
};
class KWArgsSubclass : public py::kwargs {
using py::kwargs::kwargs;
};
namespace pybind11 {
namespace detail {
template <>
struct handle_type_name<ArgsSubclass> {
static constexpr auto name = const_name("*Args");
};
template <>
struct handle_type_name<KWArgsSubclass> {
static constexpr auto name = const_name("**KWArgs");
};
} // namespace detail
} // namespace pybind11
TEST_SUBMODULE(kwargs_and_defaults, m) {
auto kw_func
= [](int x, int y) { return "x=" + std::to_string(x) + ", y=" + std::to_string(y); };
@ -345,7 +325,7 @@ TEST_SUBMODULE(kwargs_and_defaults, m) {
// Test support for args and kwargs subclasses
m.def("args_kwargs_subclass_function",
- [](const ArgsSubclass &args, const KWArgsSubclass &kwargs) {
+ [](const py::Args<std::string> &args, const py::KWArgs<std::string> &kwargs) {
return py::make_tuple(args, kwargs);
});
}

View File

@ -20,7 +20,7 @@ def test_function_signatures(doc):
)
assert (
doc(m.args_kwargs_subclass_function)
- == "args_kwargs_subclass_function(*Args, **KWArgs) -> tuple"
+ == "args_kwargs_subclass_function(*args: str, **kwargs: str) -> tuple"
)
assert (
doc(m.KWClass.foo0)

View File

@ -294,7 +294,7 @@ TEST_SUBMODULE(methods_and_attributes, m) {
static_cast<py::str (ExampleMandA::*)(int, int)>(
&ExampleMandA::overloaded));
})
- .def("__str__", &ExampleMandA::toString)
+ .def("__str__", &ExampleMandA::toString, py::pos_only())
.def_readwrite("value", &ExampleMandA::value);
// test_copy_method

View File

@ -19,6 +19,13 @@ NO_DELETER_MSG = (
)
def test_self_only_pos_only():
assert (
m.ExampleMandA.__str__.__doc__
== "__str__(self: pybind11_tests.methods_and_attributes.ExampleMandA, /) -> str\n"
)
def test_methods_and_attributes():
instance1 = m.ExampleMandA()
instance2 = m.ExampleMandA(32)

View File

@ -131,6 +131,15 @@ arr_t &mutate_at_t(arr_t &a, Ix... idx) {
a.mutable_at(idx...)++;
return a;
}
template <typename... Ix>
arr_t &subscript_via_call_operator_t(arr_t &a, Ix... idx) {
a(idx...)++;
return a;
}
template <typename... Ix>
py::ssize_t const_subscript_via_call_operator_t(const arr_t &a, Ix... idx) {
return a(idx...);
}
#define def_index_fn(name, type) \
sm.def(#name, [](type a) { return name(a); }); \
@ -156,6 +165,55 @@ py::handle auxiliaries(T &&r, T2 &&r2) {
return l.release();
}
template <typename PyObjectType>
PyObjectType convert_to_pyobjecttype(py::object obj);
template <>
PyObject *convert_to_pyobjecttype<PyObject *>(py::object obj) {
return obj.release().ptr();
}
template <>
py::handle convert_to_pyobjecttype<py::handle>(py::object obj) {
return obj.release();
}
template <>
py::object convert_to_pyobjecttype<py::object>(py::object obj) {
return obj;
}
template <typename PyObjectType>
std::string pass_array_return_sum_str_values(const py::array_t<PyObjectType> &objs) {
std::string sum_str_values;
for (const auto &obj : objs) {
sum_str_values += py::str(obj.attr("value"));
}
return sum_str_values;
}
template <typename PyObjectType>
py::list pass_array_return_as_list(const py::array_t<PyObjectType> &objs) {
return objs;
}
template <typename PyObjectType>
py::array_t<PyObjectType> return_array_cpp_loop(const py::list &objs) {
py::size_t arr_size = py::len(objs);
py::array_t<PyObjectType> arr_from_list(static_cast<py::ssize_t>(arr_size));
PyObjectType *data = arr_from_list.mutable_data();
for (py::size_t i = 0; i < arr_size; i++) {
assert(!data[i]);
data[i] = convert_to_pyobjecttype<PyObjectType>(objs[i].attr("value"));
}
return arr_from_list;
}
template <typename PyObjectType>
py::array_t<PyObjectType> return_array_from_list(const py::list &objs) {
return objs;
}
// note: declaration at local scope would create a dangling reference!
static int data_i = 42;
@ -197,6 +255,13 @@ TEST_SUBMODULE(numpy_array, sm) {
sm.def("nbytes", [](const arr &a) { return a.nbytes(); });
sm.def("owndata", [](const arr &a) { return a.owndata(); });
sm.attr("defined_NDEBUG") =
#ifdef NDEBUG
true;
#else
false;
#endif
// test_index_offset
def_index_fn(index_at, const arr &);
def_index_fn(index_at_t, const arr_t &);
@ -210,6 +275,8 @@ TEST_SUBMODULE(numpy_array, sm) {
def_index_fn(mutate_data_t, arr_t &);
def_index_fn(at_t, const arr_t &);
def_index_fn(mutate_at_t, arr_t &);
def_index_fn(subscript_via_call_operator_t, arr_t &);
def_index_fn(const_subscript_via_call_operator_t, const arr_t &);
// test_make_c_f_array
sm.def("make_f_array", [] { return py::array_t<float>({2, 2}, {4, 8}); });
@ -520,28 +587,21 @@ TEST_SUBMODULE(numpy_array, sm) {
sm.def("round_trip_float", [](double d) { return d; });
sm.def("pass_array_pyobject_ptr_return_sum_str_values",
- [](const py::array_t<PyObject *> &objs) {
- std::string sum_str_values;
- for (const auto &obj : objs) {
- sum_str_values += py::str(obj.attr("value"));
- }
- return sum_str_values;
- });
+ pass_array_return_sum_str_values<PyObject *>);
+ sm.def("pass_array_handle_return_sum_str_values",
+ pass_array_return_sum_str_values<py::handle>);
+ sm.def("pass_array_object_return_sum_str_values",
+ pass_array_return_sum_str_values<py::object>);
- sm.def("pass_array_pyobject_ptr_return_as_list",
- [](const py::array_t<PyObject *> &objs) -> py::list { return objs; });
+ sm.def("pass_array_pyobject_ptr_return_as_list", pass_array_return_as_list<PyObject *>);
+ sm.def("pass_array_handle_return_as_list", pass_array_return_as_list<py::handle>);
+ sm.def("pass_array_object_return_as_list", pass_array_return_as_list<py::object>);
- sm.def("return_array_pyobject_ptr_cpp_loop", [](const py::list &objs) {
- py::size_t arr_size = py::len(objs);
- py::array_t<PyObject *> arr_from_list(static_cast<py::ssize_t>(arr_size));
- PyObject **data = arr_from_list.mutable_data();
- for (py::size_t i = 0; i < arr_size; i++) {
- assert(data[i] == nullptr);
- data[i] = py::cast<PyObject *>(objs[i].attr("value"));
- }
- return arr_from_list;
- });
+ sm.def("return_array_pyobject_ptr_cpp_loop", return_array_cpp_loop<PyObject *>);
+ sm.def("return_array_handle_cpp_loop", return_array_cpp_loop<py::handle>);
+ sm.def("return_array_object_cpp_loop", return_array_cpp_loop<py::object>);
- sm.def("return_array_pyobject_ptr_from_list",
- [](const py::list &objs) -> py::array_t<PyObject *> { return objs; });
+ sm.def("return_array_pyobject_ptr_from_list", return_array_from_list<PyObject *>);
+ sm.def("return_array_handle_from_list", return_array_from_list<py::handle>);
+ sm.def("return_array_object_from_list", return_array_from_list<py::object>);
}
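A short round-trip sketch for the templated helpers bound above (the import path follows the test suite's usual `from pybind11_tests import numpy_array as m` pattern):

```python
# Sketch: a dtype=object array passes through array_t<py::object> and comes
# back as a plain Python list with the same elements.
import numpy as np
from pybind11_tests import numpy_array as m  # standard test import path

values = np.array([1, "two", 3.0], dtype=object)
assert m.pass_array_object_return_as_list(values) == [1, "two", 3.0]
```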

View File

@ -111,20 +111,32 @@ def test_data(arr, args, ret):
assert all(m.data(arr, *args)[(1 if byteorder == "little" else 0) :: 2] == 0)
@pytest.mark.parametrize(
"func",
[
m.at_t,
m.mutate_at_t,
m.const_subscript_via_call_operator_t,
m.subscript_via_call_operator_t,
][: 2 if m.defined_NDEBUG else 99],
)
@pytest.mark.parametrize("dim", [0, 1, 3])
- def test_at_fail(arr, dim):
- for func in m.at_t, m.mutate_at_t:
+ def test_elem_reference(arr, func, dim):
with pytest.raises(IndexError) as excinfo:
func(arr, *([0] * dim))
assert str(excinfo.value) == f"index dimension mismatch: {dim} (ndim = 2)"
- def test_at(arr):
- assert m.at_t(arr, 0, 2) == 3
- assert m.at_t(arr, 1, 0) == 4
+ @pytest.mark.parametrize("func", [m.at_t, m.const_subscript_via_call_operator_t])
+ def test_const_elem_reference(arr, func):
+ assert func(arr, 0, 2) == 3
+ assert func(arr, 1, 0) == 4
- assert all(m.mutate_at_t(arr, 0, 2).ravel() == [1, 2, 4, 4, 5, 6])
- assert all(m.mutate_at_t(arr, 1, 0).ravel() == [1, 2, 4, 5, 5, 6])
+ @pytest.mark.parametrize("func", [m.mutate_at_t, m.subscript_via_call_operator_t])
+ def test_mutable_elem_reference(arr, func):
+ assert all(func(arr, 0, 2).ravel() == [1, 2, 4, 4, 5, 6])
+ assert all(func(arr, 1, 0).ravel() == [1, 2, 4, 5, 5, 6])
def test_mutate_readonly(arr):
@ -153,8 +165,9 @@ def test_mutate_data(arr):
assert all(m.mutate_data_t(arr, 1, 2).ravel() == [6, 19, 27, 68, 84, 197])
- def test_bounds_check(arr):
- for func in (
+ @pytest.mark.parametrize(
+ "func",
+ [
m.index_at,
m.index_at_t,
m.data,
@ -163,7 +176,11 @@ def test_bounds_check(arr):
m.mutate_data_t,
m.at_t,
m.mutate_at_t,
- ):
+ m.const_subscript_via_call_operator_t,
+ m.subscript_via_call_operator_t,
+ ][: 8 if m.defined_NDEBUG else 99],
+ )
+ def test_bounds_check(arr, func):
with pytest.raises(IndexError) as excinfo:
func(arr, 2, 0)
assert str(excinfo.value) == "index 2 is out of bounds for axis 0 with size 2"
@ -629,45 +646,61 @@ def UnwrapPyValueHolder(vhs):
return [vh.value for vh in vhs]
- def test_pass_array_pyobject_ptr_return_sum_str_values_ndarray():
+ PASS_ARRAY_PYOBJECT_RETURN_SUM_STR_VALUES_FUNCTIONS = [
+ m.pass_array_pyobject_ptr_return_sum_str_values,
+ m.pass_array_handle_return_sum_str_values,
+ m.pass_array_object_return_sum_str_values,
+ ]
+ @pytest.mark.parametrize(
+ "pass_array", PASS_ARRAY_PYOBJECT_RETURN_SUM_STR_VALUES_FUNCTIONS
+ )
+ def test_pass_array_object_return_sum_str_values_ndarray(pass_array):
# Intentionally all temporaries, do not change.
assert (
- m.pass_array_pyobject_ptr_return_sum_str_values(
- np.array(WrapWithPyValueHolder(-3, "four", 5.0), dtype=object)
- )
+ pass_array(np.array(WrapWithPyValueHolder(-3, "four", 5.0), dtype=object))
== "-3four5.0"
)
- def test_pass_array_pyobject_ptr_return_sum_str_values_list():
+ @pytest.mark.parametrize(
+ "pass_array", PASS_ARRAY_PYOBJECT_RETURN_SUM_STR_VALUES_FUNCTIONS
+ )
+ def test_pass_array_object_return_sum_str_values_list(pass_array):
# Intentionally all temporaries, do not change.
- assert (
- m.pass_array_pyobject_ptr_return_sum_str_values(
- WrapWithPyValueHolder(2, "three", -4.0)
- )
- == "2three-4.0"
- )
+ assert pass_array(WrapWithPyValueHolder(2, "three", -4.0)) == "2three-4.0"
- def test_pass_array_pyobject_ptr_return_as_list():
+ @pytest.mark.parametrize(
+ "pass_array",
+ [
+ m.pass_array_pyobject_ptr_return_as_list,
+ m.pass_array_handle_return_as_list,
+ m.pass_array_object_return_as_list,
+ ],
+ )
+ def test_pass_array_object_return_as_list(pass_array):
# Intentionally all temporaries, do not change.
assert UnwrapPyValueHolder(
- m.pass_array_pyobject_ptr_return_as_list(
- np.array(WrapWithPyValueHolder(-1, "two", 3.0), dtype=object)
- )
+ pass_array(np.array(WrapWithPyValueHolder(-1, "two", 3.0), dtype=object))
) == [-1, "two", 3.0]
@pytest.mark.parametrize(
- ("return_array_pyobject_ptr", "unwrap"),
+ ("return_array", "unwrap"),
[
(m.return_array_pyobject_ptr_cpp_loop, list),
(m.return_array_handle_cpp_loop, list),
(m.return_array_object_cpp_loop, list),
(m.return_array_pyobject_ptr_from_list, UnwrapPyValueHolder),
(m.return_array_handle_from_list, UnwrapPyValueHolder),
(m.return_array_object_from_list, UnwrapPyValueHolder),
],
)
- def test_return_array_pyobject_ptr_cpp_loop(return_array_pyobject_ptr, unwrap):
+ def test_return_array_object_cpp_loop(return_array, unwrap):
# Intentionally all temporaries, do not change.
- arr_from_list = return_array_pyobject_ptr(WrapWithPyValueHolder(6, "seven", -8.0))
+ arr_from_list = return_array(WrapWithPyValueHolder(6, "seven", -8.0))
assert isinstance(arr_from_list, np.ndarray)
assert arr_from_list.dtype == np.dtype("O")
assert unwrap(arr_from_list) == [6, "seven", -8.0]

View File

@ -11,6 +11,9 @@
#include "pybind11_tests.h"
#include <cstdint>
#include <stdexcept>
#ifdef __GNUC__
# define PYBIND11_PACKED(cls) cls __attribute__((__packed__))
#else
@ -297,6 +300,15 @@ py::list test_dtype_ctors() {
return list;
}
template <typename T>
py::array_t<T> dispatch_array_increment(py::array_t<T> arr) {
py::array_t<T> res(arr.shape(0));
for (py::ssize_t i = 0; i < arr.shape(0); ++i) {
res.mutable_at(i) = T(arr.at(i) + 1);
}
return res;
}
struct A {};
struct B {};
@ -496,6 +508,98 @@ TEST_SUBMODULE(numpy_dtypes, m) {
}
return list;
});
m.def("test_dtype_num_of", []() -> py::list {
py::list res;
#define TEST_DTYPE(T) res.append(py::make_tuple(py::dtype::of<T>().num(), py::dtype::num_of<T>()));
TEST_DTYPE(bool)
TEST_DTYPE(char)
TEST_DTYPE(unsigned char)
TEST_DTYPE(short)
TEST_DTYPE(unsigned short)
TEST_DTYPE(int)
TEST_DTYPE(unsigned int)
TEST_DTYPE(long)
TEST_DTYPE(unsigned long)
TEST_DTYPE(long long)
TEST_DTYPE(unsigned long long)
TEST_DTYPE(float)
TEST_DTYPE(double)
TEST_DTYPE(long double)
TEST_DTYPE(std::complex<float>)
TEST_DTYPE(std::complex<double>)
TEST_DTYPE(std::complex<long double>)
TEST_DTYPE(int8_t)
TEST_DTYPE(uint8_t)
TEST_DTYPE(int16_t)
TEST_DTYPE(uint16_t)
TEST_DTYPE(int32_t)
TEST_DTYPE(uint32_t)
TEST_DTYPE(int64_t)
TEST_DTYPE(uint64_t)
#undef TEST_DTYPE
return res;
});
m.def("test_dtype_normalized_num", []() -> py::list {
py::list res;
#define TEST_DTYPE(NT, T) \
res.append(py::make_tuple(py::dtype(py::detail::npy_api::NT).normalized_num(), \
py::dtype::num_of<T>()));
TEST_DTYPE(NPY_BOOL_, bool)
TEST_DTYPE(NPY_BYTE_, char);
TEST_DTYPE(NPY_UBYTE_, unsigned char);
TEST_DTYPE(NPY_SHORT_, short);
TEST_DTYPE(NPY_USHORT_, unsigned short);
TEST_DTYPE(NPY_INT_, int);
TEST_DTYPE(NPY_UINT_, unsigned int);
TEST_DTYPE(NPY_LONG_, long);
TEST_DTYPE(NPY_ULONG_, unsigned long);
TEST_DTYPE(NPY_LONGLONG_, long long);
TEST_DTYPE(NPY_ULONGLONG_, unsigned long long);
TEST_DTYPE(NPY_FLOAT_, float);
TEST_DTYPE(NPY_DOUBLE_, double);
TEST_DTYPE(NPY_LONGDOUBLE_, long double);
TEST_DTYPE(NPY_CFLOAT_, std::complex<float>);
TEST_DTYPE(NPY_CDOUBLE_, std::complex<double>);
TEST_DTYPE(NPY_CLONGDOUBLE_, std::complex<long double>);
TEST_DTYPE(NPY_INT8_, int8_t);
TEST_DTYPE(NPY_UINT8_, uint8_t);
TEST_DTYPE(NPY_INT16_, int16_t);
TEST_DTYPE(NPY_UINT16_, uint16_t);
TEST_DTYPE(NPY_INT32_, int32_t);
TEST_DTYPE(NPY_UINT32_, uint32_t);
TEST_DTYPE(NPY_INT64_, int64_t);
TEST_DTYPE(NPY_UINT64_, uint64_t);
#undef TEST_DTYPE
return res;
});
m.def("test_dtype_switch", [](const py::array &arr) -> py::array {
switch (arr.dtype().normalized_num()) {
case py::dtype::num_of<int8_t>():
return dispatch_array_increment<int8_t>(arr);
case py::dtype::num_of<uint8_t>():
return dispatch_array_increment<uint8_t>(arr);
case py::dtype::num_of<int16_t>():
return dispatch_array_increment<int16_t>(arr);
case py::dtype::num_of<uint16_t>():
return dispatch_array_increment<uint16_t>(arr);
case py::dtype::num_of<int32_t>():
return dispatch_array_increment<int32_t>(arr);
case py::dtype::num_of<uint32_t>():
return dispatch_array_increment<uint32_t>(arr);
case py::dtype::num_of<int64_t>():
return dispatch_array_increment<int64_t>(arr);
case py::dtype::num_of<uint64_t>():
return dispatch_array_increment<uint64_t>(arr);
case py::dtype::num_of<float>():
return dispatch_array_increment<float>(arr);
case py::dtype::num_of<double>():
return dispatch_array_increment<double>(arr);
case py::dtype::num_of<long double>():
return dispatch_array_increment<long double>(arr);
default:
throw std::runtime_error("Unsupported dtype");
}
});
m.def("test_dtype_methods", []() {
py::list list;
auto dt1 = py::dtype::of<int32_t>();
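A minimal usage sketch for the test_dtype_switch dispatcher defined above (standard test import path assumed):

```python
# Sketch: normalized_num() folds platform aliases (long, intc, int64, ...) onto
# one canonical dtype number, so the switch increments any supported array
# while preserving its dtype.
import numpy as np
from pybind11_tests import numpy_dtypes as m  # standard test import path

arr = np.arange(4, dtype=np.uint16)
out = m.test_dtype_switch(arr)
assert out.dtype == arr.dtype
assert (out == arr + 1).all()
```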

View File

@ -188,6 +188,28 @@ def test_dtype(simple_dtype):
chr(np.dtype(ch).flags) for ch in expected_chars
]
for a, b in m.test_dtype_num_of():
assert a == b
for a, b in m.test_dtype_normalized_num():
assert a == b
arr = np.array([4, 84, 21, 36])
# Note: "ulong" does not work in NumPy 1.x, so we use "L"
assert (m.test_dtype_switch(arr.astype("byte")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("ubyte")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("short")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("ushort")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("intc")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("uintc")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("long")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("L")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("longlong")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("ulonglong")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("single")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("double")) == arr + 1).all()
assert (m.test_dtype_switch(arr.astype("longdouble")) == arr + 1).all()
def test_recarray(simple_dtype, packed_dtype):
elements = [(False, 0, 0.0, -0.0), (True, 1, 1.5, -2.5), (False, 2, 3.0, -5.0)]

View File

@ -2,6 +2,7 @@ from __future__ import annotations
import pytest
import env
from pybind11_tests import ConstructorStats, UserType
from pybind11_tests import opaque_types as m
@ -30,6 +31,8 @@ def test_pointers(msg):
living_before = ConstructorStats.get(UserType).alive()
assert m.get_void_ptr_value(m.return_void_ptr()) == 0x1234
assert m.get_void_ptr_value(UserType()) # Should also work for other C++ types
if not env.GRAALPY:
assert ConstructorStats.get(UserType).alive() == living_before
with pytest.raises(TypeError) as excinfo:

View File

@ -2,7 +2,7 @@ from __future__ import annotations
import pytest
- import env  # noqa: F401
+ import env
from pybind11_tests import ConstructorStats
from pybind11_tests import operators as m
@ -51,6 +51,9 @@ def test_operator_overloading():
v2 /= v1
assert str(v2) == "[2.000000, 8.000000]"
if env.GRAALPY:
pytest.skip("ConstructorStats is incompatible with GraalPy.")
cstats = ConstructorStats.get(m.Vector2)
assert cstats.alive() == 3
del v1

View File

@ -93,3 +93,20 @@ def test_roundtrip_simple_cpp_derived():
# Issue #3062: pickleable base C++ classes can incur object slicing
# if derived typeid is not registered with pybind11
assert not m.check_dynamic_cast_SimpleCppDerived(p2)
def test_new_style_pickle_getstate_pos_only():
assert (
re.match(
r"^__getstate__\(self: [\w\.]+, /\)", m.PickleableNew.__getstate__.__doc__
)
is not None
)
if hasattr(m, "PickleableWithDictNew"):
assert (
re.match(
r"^__getstate__\(self: [\w\.]+, /\)",
m.PickleableWithDictNew.__getstate__.__doc__,
)
is not None
)

View File

@ -1,5 +1,7 @@
from __future__ import annotations
import re
import pytest
from pytest import approx # noqa: PT013
@ -253,16 +255,12 @@ def test_python_iterator_in_cpp():
def test_iterator_passthrough():
"""#181: iterator passthrough did not compile"""
- from pybind11_tests.sequences_and_iterators import iterator_passthrough
values = [3, 5, 7, 9, 11, 13, 15]
- assert list(iterator_passthrough(iter(values))) == values
+ assert list(m.iterator_passthrough(iter(values))) == values
def test_iterator_rvp():
"""#388: Can't make iterators via make_iterator() with different r/v policies"""
- import pybind11_tests.sequences_and_iterators as m
assert list(m.make_iterator_1()) == [1, 2, 3]
assert list(m.make_iterator_2()) == [1, 2, 3]
assert not isinstance(m.make_iterator_1(), type(m.make_iterator_2()))
@ -274,3 +272,25 @@ def test_carray_iterator():
arr_h = m.CArrayHolder(*args_gt)
args = list(arr_h)
assert args_gt == args
def test_generated_dunder_methods_pos_only():
string_map = m.StringMap({"hi": "bye", "black": "white"})
for it in (
m.make_iterator_1(),
m.make_iterator_2(),
m.iterator_passthrough(iter([3, 5, 7])),
iter(m.Sequence(5)),
iter(string_map),
string_map.items(),
string_map.values(),
iter(m.CArrayHolder(*[float(i) for i in range(3)])),
):
assert (
re.match(r"^__iter__\(self: [\w\.]+, /\)", type(it).__iter__.__doc__)
is not None
)
assert (
re.match(r"^__next__\(self: [\w\.]+, /\)", type(it).__next__.__doc__)
is not None
)

View File

@ -26,12 +26,14 @@ class InstallHeadersNested(install_headers):
main_headers = glob.glob("pybind11/include/pybind11/*.h")
conduit_headers = sum([glob.glob(f"pybind11/include/pybind11/conduit/*.{ext}")
for ext in ("h", "txt")], [])
detail_headers = glob.glob("pybind11/include/pybind11/detail/*.h")
eigen_headers = glob.glob("pybind11/include/pybind11/eigen/*.h")
stl_headers = glob.glob("pybind11/include/pybind11/stl/*.h")
cmake_files = glob.glob("pybind11/share/cmake/pybind11/*.cmake")
pkgconfig_files = glob.glob("pybind11/share/pkgconfig/*.pc")
- headers = main_headers + detail_headers + stl_headers + eigen_headers
+ headers = main_headers + conduit_headers + detail_headers + eigen_headers + stl_headers
cmdclass = {"install_headers": InstallHeadersNested}
$extra_cmd
@ -55,6 +57,7 @@ setup(
(base + "share/cmake/pybind11", cmake_files),
(base + "share/pkgconfig", pkgconfig_files),
(base + "include/pybind11", main_headers),
(base + "include/pybind11/conduit", conduit_headers),
(base + "include/pybind11/detail", detail_headers),
(base + "include/pybind11/eigen", eigen_headers),
(base + "include/pybind11/stl", stl_headers),

View File

@ -14,6 +14,7 @@ setup(
packages=[
"pybind11",
"pybind11.include.pybind11",
"pybind11.include.pybind11.conduit",
"pybind11.include.pybind11.detail",
"pybind11.include.pybind11.eigen",
"pybind11.include.pybind11.stl",
@ -23,6 +24,7 @@ setup(
package_data={
"pybind11": ["py.typed"],
"pybind11.include.pybind11": ["*.h"],
"pybind11.include.pybind11.conduit": ["*.h", "*.txt"],
"pybind11.include.pybind11.detail": ["*.h"],
"pybind11.include.pybind11.eigen": ["*.h"],
"pybind11.include.pybind11.stl": ["*.h"],