author     Pauli Virtanen <pav@iki.fi>    2009-03-21 21:19:53 +0000
committer  Pauli Virtanen <pav@iki.fi>    2009-03-21 21:19:53 +0000
commit     bab64b897064cfdf8cf86fcc62b44e21df1153ee (patch)
tree       6e1cee5b837bbccdfb2c78f12f3f6205ed40953a /doc/source/user
parent     b2634ff922176acd12ddd3725434d3dfaaf25422 (diff)
download   numpy-bab64b897064cfdf8cf86fcc62b44e21df1153ee.tar.gz
docs: strip trailing whitespace from RST files
Diffstat (limited to 'doc/source/user')
-rw-r--r--  doc/source/user/basics.indexing.rst        |   4
-rw-r--r--  doc/source/user/basics.types.rst           |   2
-rw-r--r--  doc/source/user/c-info.beyond-basics.rst   | 162
-rw-r--r--  doc/source/user/c-info.how-to-extend.rst   | 104
-rw-r--r--  doc/source/user/c-info.python-as-glue.rst  | 324
-rw-r--r--  doc/source/user/index.rst                  |   2
6 files changed, 299 insertions, 299 deletions
diff --git a/doc/source/user/basics.indexing.rst b/doc/source/user/basics.indexing.rst
index 7427874a5..f218fd060 100644
--- a/doc/source/user/basics.indexing.rst
+++ b/doc/source/user/basics.indexing.rst
@@ -6,9 +6,9 @@ Indexing
.. seealso:: :ref:`Indexing routines <routines.indexing>`
-.. note::
+.. note::
- XXX: Combine ``numpy.doc.indexing`` with material
+ XXX: Combine ``numpy.doc.indexing`` with material
section 2.2 Basic indexing?
Or incorporate the material directly here?
diff --git a/doc/source/user/basics.types.rst b/doc/source/user/basics.types.rst
index 1a95dc6b4..4982045a2 100644
--- a/doc/source/user/basics.types.rst
+++ b/doc/source/user/basics.types.rst
@@ -4,7 +4,7 @@ Data types
.. seealso:: :ref:`Data type objects <arrays.dtypes>`
-.. note::
+.. note::
XXX: Combine ``numpy.doc.indexing`` with material from
"Guide to Numpy" (section 2.1 Data-Type descriptors)?
diff --git a/doc/source/user/c-info.beyond-basics.rst b/doc/source/user/c-info.beyond-basics.rst
index 905ab67eb..491c2c9ae 100644
--- a/doc/source/user/c-info.beyond-basics.rst
+++ b/doc/source/user/c-info.beyond-basics.rst
@@ -3,12 +3,12 @@ Beyond the Basics
*****************
| The voyage of discovery is not in seeking new landscapes but in having
-| new eyes.
-| --- *Marcel Proust*
+| new eyes.
+| --- *Marcel Proust*
| Discovery is seeing what everyone else has seen and thinking what no
-| one else has thought.
-| --- *Albert Szent-Gyorgi*
+| one else has thought.
+| --- *Albert Szent-Gyorgi*
Iterating over elements in the array
@@ -27,7 +27,7 @@ using, then you can always write nested for loops to accomplish the
iteration. If, however, you want to write code that works with any
number of dimensions, then you can make use of the array iterator. An
array iterator object is returned when accessing the .flat attribute
-of an array.
+of an array.
.. index::
single: array iterator
@@ -42,7 +42,7 @@ size of the array, ``iter->index``, which contains the current 1-d
index into the array, and ``iter->dataptr`` which is a pointer to the
data for the current element of the array. Sometimes it is also
useful to access ``iter->ao`` which is a pointer to the underlying
-ndarray object.
+ndarray object.
After processing data at the current element of the array, the next
element of the array can be obtained using the macro
@@ -54,7 +54,7 @@ array of npy_intp data-type with space to handle at least the number
of dimensions in the underlying array. Occasionally it is useful to
use :cfunc:`PyArray_ITER_GOTO1D` ( ``iter``, ``index`` ) which will jump
to the 1-d index given by the value of ``index``. The most common
-usage, however, is given in the following example.
+usage, however, is given in the following example.
.. code-block:: c
@@ -71,7 +71,7 @@ usage, however, is given in the following example.
You can also use :cfunc:`PyArrayIter_Check` ( ``obj`` ) to ensure you have
an iterator object and :cfunc:`PyArray_ITER_RESET` ( ``iter`` ) to reset an
-iterator object back to the beginning of the array.
+iterator object back to the beginning of the array.
It should be emphasized at this point that you may not need the array
iterator if your array is already contiguous (using an array iterator
@@ -82,7 +82,7 @@ many places in the NumPy source code itself. If you already know your
array is contiguous (Fortran or C), then simply adding the element-
size to a running pointer variable will step you through the array
very efficiently. In other words, code like this will probably be
-faster for you in the contiguous case (assuming doubles).
+faster for you in the contiguous case (assuming doubles).
.. code-block:: c
@@ -110,7 +110,7 @@ to a small(er) fraction of the total time. Even if the interior of the
loop is performed without a function call it can be advantageous to
perform the inner loop over the dimension with the highest number of
elements to take advantage of speed enhancements available on micro-
-processors that use pipelining to enhance fundmental operations.
+processors that use pipelining to enhance fundmental operations.
The :cfunc:`PyArray_IterAllButAxis` ( ``array``, ``&dim`` ) constructs an
iterator object that is modified so that it will not iterate over the
@@ -123,7 +123,7 @@ PyArrayIterObject \*. All that's been done is to modify the strides
and dimensions of the returned iterator to simulate iterating over
array[...,0,...] where 0 is placed on the
:math:`\textrm{dim}^{\textrm{th}}` dimension. If dim is negative, then
-the dimension with the largest axis is found and used.
+the dimension with the largest axis is found and used.
Iterating over multiple arrays
@@ -135,7 +135,7 @@ behavior. If all you want to do is iterate over arrays with the same
shape, then simply creating several iterator objects is the standard
procedure. For example, the following code iterates over two arrays
assumed to be the same shape and size (actually obj1 just has to have
-at least as many total elements as does obj2):
+at least as many total elements as does obj2):
.. code-block:: c
@@ -175,7 +175,7 @@ multiterator ``obj`` as either a :ctype:`PyArrayMultiObject *` or a
:ctype:`PyObject *`). The data from input number ``i`` is available using
:cfunc:`PyArray_MultiIter_DATA` ( ``obj``, ``i`` ) and the total (broadcasted)
size as :cfunc:`PyArray_MultiIter_SIZE` ( ``obj``). An example of using this
-feature follows.
+feature follows.
.. code-block:: c
@@ -194,14 +194,14 @@ iteration does not take place over the largest dimension (it makes
that dimension of size 1). The code being looped over that makes use
of the pointers will very-likely also need the strides data for each
of the iterators. This information is stored in
-multi->iters[i]->strides.
+multi->iters[i]->strides.
.. index::
single: array iterator
There are several examples of using the multi-iterator in the NumPy
source code as it makes N-dimensional broadcasting-code very simple to
-write. Browse the source for more examples.
+write. Browse the source for more examples.
.. _`sec:Creating-a-new`:
@@ -216,7 +216,7 @@ ufuncs. It provides a great many examples of how to create a universal
function. Creating your own ufunc that will make use of the ufunc
machinery is not difficult either. Suppose you have a function that
you want to operate element-by-element over its inputs. By creating a
-new ufunc you will obtain a function that handles
+new ufunc you will obtain a function that handles
- broadcasting
@@ -231,7 +231,7 @@ a 1-d loop for each data-type you want to support. Each 1-d loop must
have a specific signature, and only ufuncs for fixed-size data-types
can be used. The function call used to create a new ufunc to work on
built-in data-types is given below. A different mechanism is used to
-register ufuncs for user-defined data-types.
+register ufuncs for user-defined data-types.
.. cfunction:: PyObject *PyUFunc_FromFuncAndData( PyUFuncGenericFunction* func, void** data, char* types, int ntypes, int nin, int nout, int identity, char* name, char* doc, int check_return)
@@ -240,34 +240,34 @@ register ufuncs for user-defined data-types.
A pointer to an array of 1-d functions to use. This array must be at
least ntypes long. Each entry in the array must be a ``PyUFuncGenericFunction`` function. This function has the following signature. An example of a
valid 1d loop function is also given.
-
+
.. cfunction:: void loop1d(char** args, npy_intp* dimensions, npy_intp* steps, void* data)
-
+
*args*
An array of pointers to the actual data for the input and output
arrays. The input arguments are given first followed by the output
arguments.
-
+
*dimensions*
A pointer to the size of the dimension over which this function is
looping.
-
+
*steps*
A pointer to the number of bytes to jump to get to the
next element in this dimension for each of the input and
output arguments.
-
+
*data*
Arbitrary data (extra arguments, function names, *etc.* )
that can be stored with the ufunc and will be passed in
when it is called.
-
+
.. code-block:: c
-
+
static void
double_add(char *args, npy_intp *dimensions, npy_intp *steps, void *extra)
{
@@ -281,7 +281,7 @@ register ufuncs for user-defined data-types.
i1 += is1; i2 += is2; op += os;
}
}
-
+
*data*
An array of data. There should be ntypes entries (or NULL) --- one for
@@ -289,7 +289,7 @@ register ufuncs for user-defined data-types.
in to the 1-d loop. One common use of this data variable is to pass in
an actual function to call to compute the result when a generic 1-d
loop (e.g. :cfunc:`PyUFunc_d_d`) is being used.
-
+
*types*
An array of type-number signatures (type ``char`` ). This
@@ -300,46 +300,46 @@ register ufuncs for user-defined data-types.
(length-2 func and data arrays) that takes 2 inputs and
returns 1 output that is always a complex double, then the
types array would be
-
-
+
+
The bit-width names can also be used (e.g. :cdata:`NPY_INT32`,
:cdata:`NPY_COMPLEX128` ) if desired.
-
+
*ntypes*
The number of data-types supported. This is equal to the number of 1-d
loops provided.
-
+
*nin*
The number of input arguments.
-
+
*nout*
The number of output arguments.
-
+
*identity*
Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, :cdata:`PyUFunc_None`.
This specifies what should be returned when an empty array is
passed to the reduce method of the ufunc.
-
+
*name*
A ``NULL`` -terminated string providing the name of this ufunc
(should be the Python name it will be called).
-
+
*doc*
A documentation string for this ufunc (will be used in generating the
response to ``{ufunc_name}.__doc__``). Do not include the function
signature or the name as this is generated automatically.
-
+
*check_return*
Not presently used, but this integer value does get set in the
structure-member of similar name.
-
+
.. index::
pair: ufunc; adding new
@@ -347,13 +347,13 @@ register ufuncs for user-defined data-types.
placed in a (module) dictionary under the same name as was used in the
name argument to the ufunc-creation routine. The following example is
adapted from the umath module
-
+
.. code-block:: c
static PyUFuncGenericFunction atan2_functions[]=\
{PyUFunc_ff_f, PyUFunc_dd_d,
PyUFunc_gg_g, PyUFunc_OO_O_method};
- static void* atan2_data[]=\
+ static void* atan2_data[]=\
{(void *)atan2f,(void *) atan2,
(void *)atan2l,(void *)"arctan2"};
static char atan2_signatures[]=\
@@ -361,7 +361,7 @@ register ufuncs for user-defined data-types.
NPY_DOUBLE, NPY_DOUBLE,
NPY_DOUBLE, NPY_LONGDOUBLE,
NPY_LONGDOUBLE, NPY_LONGDOUBLE
- NPY_OBJECT, NPY_OBJECT,
+ NPY_OBJECT, NPY_OBJECT,
NPY_OBJECT};
...
/* in the module initialization code */
@@ -369,9 +369,9 @@ register ufuncs for user-defined data-types.
...
dict = PyModule_GetDict(module);
...
- f = PyUFunc_FromFuncAndData(atan2_functions,
- atan2_data, atan2_signatures, 4, 2, 1,
- PyUFunc_None, "arctan2",
+ f = PyUFunc_FromFuncAndData(atan2_functions,
+ atan2_data, atan2_signatures, 4, 2, 1,
+ PyUFunc_None, "arctan2",
"a safe and correct arctan(x1/x2)", 0);
PyDict_SetItemString(dict, "arctan2", f);
Py_DECREF(f);
@@ -396,7 +396,7 @@ if you can't do what you want to do using the OBJECT or VOID
data-types that are already available. As an example of what I
consider a useful application of the ability to add data-types is the
possibility of adding a data-type of arbitrary precision floats to
-NumPy.
+NumPy.
.. index::
pair: dtype; adding new
@@ -421,7 +421,7 @@ type. For example, a suitable structure for the new Python type is:
typedef struct {
PyObject_HEAD;
- some_data_type obval;
+ some_data_type obval;
/* the name can be whatever you want */
} PySomeDataTypeObject;
@@ -432,7 +432,7 @@ required functions in the ".f" member must be defined: nonzero,
copyswap, copyswapn, setitem, getitem, and cast. The more functions in
the ".f" member you define, however, the more useful the new data-type
will be. It is very important to intialize unused functions to NULL.
-This can be achieved using :cfunc:`PyArray_InitArrFuncs` (f).
+This can be achieved using :cfunc:`PyArray_InitArrFuncs` (f).
Once a new :ctype:`PyArray_Descr` structure is created and filled with the
needed information and useful functions you call
@@ -442,7 +442,7 @@ specifies your data-type. This type number should be stored and made
available by your module so that other modules can use it to recognize
your data-type (the other mechanism for finding a user-defined
data-type number is to search based on the name of the type-object
-associated with the data-type using :cfunc:`PyArray_TypeNumFromName` ).
+associated with the data-type using :cfunc:`PyArray_TypeNumFromName` ).
Registering a casting function
@@ -454,7 +454,7 @@ possible, you must register a casting function with the data-type you
want to be able to cast from. This requires writing low-level casting
functions for each conversion you want to support and then registering
these functions with the data-type descriptor. A low-level casting
-function has the signature.
+function has the signature.
.. cfunction:: void castfunc( void* from, void* to, npy_intp n, void* fromarr, void* toarr)
@@ -501,7 +501,7 @@ function :cfunc:`PyArray_RegisterCanCast` (from_descr, totype_number,
scalarkind) should be used to specify that the data-type object
from_descr can be cast to the data-type with type number
totype_number. If you are not trying to alter scalar coercion rules,
-then use :cdata:`PyArray_NOSCALAR` for the scalarkind argument.
+then use :cdata:`PyArray_NOSCALAR` for the scalarkind argument.
If you want to allow your new data-type to also be able to share in
the scalar coercion rules, then you need to specify the scalarkind
@@ -511,7 +511,7 @@ available to that function). Then, you can register data-types that
can be cast to separately for each scalar kind that may be returned
from your user-defined data-type. If you don't register scalar
coercion handling, then all of your user-defined data-types will be
-seen as :cdata:`PyArray_NOSCALAR`.
+seen as :cdata:`PyArray_NOSCALAR`.
Registering a ufunc loop
@@ -521,30 +521,30 @@ You may also want to register low-level ufunc loops for your data-type
so that an ndarray of your data-type can have math applied to it
seamlessly. Registering a new loop with exactly the same arg_types
signature, silently replaces any previously registered loops for that
-data-type.
+data-type.
Before you can register a 1-d loop for a ufunc, the ufunc must be
previously created. Then you call :cfunc:`PyUFunc_RegisterLoopForType`
(...) with the information needed for the loop. The return value of
this function is ``0`` if the process was successful and ``-1`` with
-an error condition set if it was not successful.
+an error condition set if it was not successful.
.. cfunction:: int PyUFunc_RegisterLoopForType( PyUFuncObject* ufunc, int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
*ufunc*
The ufunc to attach this loop to.
-
+
*usertype*
The user-defined type this loop should be indexed under. This number
must be a user-defined type or an error occurs.
-
+
*function*
The ufunc inner 1-d loop. This function must have the signature as
explained in Section `3 <#sec-creating-a-new>`__ .
-
+
*arg_types*
(optional) If given, this should contain an array of integers of at
@@ -553,15 +553,15 @@ an error condition set if it was not successful.
the memory for this argument should be deleted after calling this
function. If this is NULL, then it will be assumed that all data-types
are of type usertype.
-
+
*data*
(optional) Specify any optional data needed by the function which will
- be passed when the function is called.
-
+ be passed when the function is called.
+
.. index::
pair: dtype; adding new
-
+
Subtyping the ndarray in C
==========================
@@ -577,7 +577,7 @@ type, sub-typing from multiple parent types is also possible. Multiple
inheritence in C is generally less useful than it is in Python because
a restriction on Python sub-types is that they have a binary
compatible memory layout. Perhaps for this reason, it is somewhat
-easier to sub-type from a single parent type.
+easier to sub-type from a single parent type.
.. index::
pair: ndarray; subtyping
@@ -592,7 +592,7 @@ the parent structure ( *i.e.* it will cast a given pointer to a
pointer to the parent structure and then dereference one of it's
members). If the memory layouts are not compatible, then this attempt
will cause unpredictable behavior (eventually leading to a memory
-violation and program crash).
+violation and program crash).
One of the elements in :cmacro:`PyObject_HEAD` is a pointer to a
type-object structure. A new Python type is created by creating a new
@@ -605,7 +605,7 @@ while a :ctype:`PyArrayObject *` variable is a pointer to a particular instance
of an ndarray (one of the members of the ndarray structure is, in
turn, a pointer to the type- object table :cdata:`&PyArray_Type`). Finally
:cfunc:`PyType_Ready` (<pointer_to_type_object>) must be called for
-every new Python type.
+every new Python type.
Creating sub-types
@@ -615,22 +615,22 @@ To create a sub-type, a similar proceedure must be followed except
only behaviors that are different require new entries in the type-
object structure. All other entires can be NULL and will be filled in
by :cfunc:`PyType_Ready` with appropriate functions from the parent
-type(s). In particular, to create a sub-type in C follow these steps:
+type(s). In particular, to create a sub-type in C follow these steps:
1. If needed create a new C-structure to handle each instance of your
type. A typical C-structure would be:
-
+
.. code-block:: c
-
+
typedef _new_struct {
PyArrayObject base;
/* new things here */
} NewArrayObject;
-
+
Notice that the full PyArrayObject is used as the first entry in order
to ensure that the binary layout of instances of the new type is
- identical to the PyArrayObject.
-
+ identical to the PyArrayObject.
+
2. Fill in a new Python type-object structure with pointers to new
functions that will over-ride the default behavior while leaving any
function that should remain the same unfilled (or NULL). The tp_name
@@ -650,14 +650,14 @@ type(s). In particular, to create a sub-type in C follow these steps:
module dictionary so it can be accessed from Python.
More information on creating sub-types in C can be learned by reading
-PEP 253 (available at http://www.python.org/dev/peps/pep-0253).
+PEP 253 (available at http://www.python.org/dev/peps/pep-0253).
Specific features of ndarray sub-typing
---------------------------------------
Some special methods and attributes are used by arrays in order to
-facilitate the interoperation of sub-types with the base ndarray type.
+facilitate the interoperation of sub-types with the base ndarray type.
.. note:: XXX: some of the documentation below needs to be moved to the
reference guide.
@@ -667,7 +667,7 @@ The __array_finalize\__ method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. attribute:: ndarray.__array_finalize__
-
+
Several array-creation functions of the ndarray allow
specification of a particular sub-type to be created. This allows
sub-types to be handled seamlessly in many routines. When a
@@ -678,25 +678,25 @@ The __array_finalize\__ method
attribute is looked-up in the object dictionary. If it is present
and not None, then it can be either a CObject containing a pointer
to a :cfunc:`PyArray_FinalizeFunc` or it can be a method taking a
- single argument (which could be None).
-
+ single argument (which could be None).
+
If the :obj:`__array_finalize__` attribute is a CObject, then the pointer
must be a pointer to a function with the signature:
-
+
.. code-block:: c
-
+
(int) (PyArrayObject *, PyObject *)
-
+
The first argument is the newly created sub-type. The second argument
(if not NULL) is the "parent" array (if the array was created using
slicing or some other operation where a clearly-distinguishable parent
is present). This routine can do anything it wants to. It should
- return a -1 on error and 0 otherwise.
-
+ return a -1 on error and 0 otherwise.
+
If the :obj:`__array_finalize__` attribute is not None nor a CObject,
then it must be a Python method that takes the parent array as an
argument (which could be None if there is no parent), and returns
- nothing. Errors in this method will be caught and handled.
+ nothing. Errors in this method will be caught and handled.
The __array_priority\__ attribute
@@ -715,7 +715,7 @@ The __array_priority\__ attribute
ndarray type and 1.0 for a sub-type. This attribute can also be
defined by objects that are not sub-types of the ndarray and can be
used to determine which :obj:`__array_wrap__` method should be called for
- the return output.
+ the return output.
The __array_wrap\__ method
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -728,7 +728,7 @@ The __array_wrap\__ method
ufuncs (and other NumPy functions) to allow other objects to pass
through. For Python >2.4, it can also be used to write a decorator
that converts a function that works only with ndarrays to one that
- works with any type with :obj:`__array__` and :obj:`__array_wrap__` methods.
-
+ works with any type with :obj:`__array__` and :obj:`__array_wrap__` methods.
+
.. index::
pair: ndarray; subtyping
diff --git a/doc/source/user/c-info.how-to-extend.rst b/doc/source/user/c-info.how-to-extend.rst
index 56f3c99f1..b2921239e 100644
--- a/doc/source/user/c-info.how-to-extend.rst
+++ b/doc/source/user/c-info.how-to-extend.rst
@@ -3,11 +3,11 @@ How to extend NumPy
*******************
| That which is static and repetitive is boring. That which is dynamic
-| and random is confusing. In between lies art.
-| --- *John A. Locke*
+| and random is confusing. In between lies art.
+| --- *John A. Locke*
-| Science is a differential equation. Religion is a boundary condition.
-| --- *Alan Turing*
+| Science is a differential equation. Religion is a boundary condition.
+| --- *Alan Turing*
.. _`sec:Writing-an-extension`:
@@ -25,7 +25,7 @@ that numpy includes f2py so that an easy-to-use mechanisms for linking
available. You are encouraged to use and improve this mechanism. The
purpose of this section is not to document this tool but to document
the more basic steps to writing an extension module that this tool
-depends on.
+depends on.
.. index::
single: extension module
@@ -36,7 +36,7 @@ into Python as if it were a standard python file. It will contain
objects and methods that have been defined and compiled in C code. The
basic steps for doing this in Python are well-documented and you can
find more information in the documentation for Python itself available
-online at `www.python.org <http://www.python.org>`_ .
+online at `www.python.org <http://www.python.org>`_ .
In addition to the Python C-API, there is a full and rich C-API for
NumPy allowing sophisticated manipulations on a C-level. However, for
@@ -45,7 +45,7 @@ you need to do is extract a pointer to memory along with some shape
information to pass to another calculation routine, then you will use
very different calls, then if you are trying to create a new array-
like type or add a new data type for ndarrays. This chapter documents
-the API calls and macros that are most commonly used.
+the API calls and macros that are most commonly used.
Required subroutine
@@ -63,7 +63,7 @@ to place these commands will show itself as an ugly segmentation fault
actually possible to have multiple init{name} functions in a single
file in which case multiple modules will be defined by that file.
However, there are some tricks to get that to work correctly and it is
-not covered here.
+not covered here.
A minimal ``init{name}`` method looks like:
@@ -71,7 +71,7 @@ A minimal ``init{name}`` method looks like:
PyMODINIT_FUNC
init{name}(void)
- {
+ {
(void)Py_InitModule({name}, mymethods);
import_array();
}
@@ -88,7 +88,7 @@ whatever you like to the module manually. An easier way to add objects
to the module is to use one of three additional Python C-API calls
that do not require a separate extraction of the module dictionary.
These are documented in the Python documentation, but repeated here
-for convenience:
+for convenience:
.. cfunction:: int PyModule_AddObject(PyObject* module, char* name, PyObject* value)
@@ -132,12 +132,12 @@ this function, and 4) The docstring for the function. Any number of
functions may be defined for a single module by adding more entries to
this table. The last entry must be all NULL as shown to act as a
sentinel. Python looks for this entry to know that all of the
-functions for the module have been defined.
+functions for the module have been defined.
The last thing that must be done to finish the extension module is to
actually write the code that performs the desired functions. There are
two kinds of functions: those that don't accept keyword arguments, and
-those that do.
+those that do.
Functions without keyword arguments
@@ -172,7 +172,7 @@ that may be of use. In particular, the :cfunc:`PyArray_DescrConverter`
function is very useful to support arbitrary data-type specification.
This function transforms any valid data-type Python object into a
:ctype:`PyArray_Descr *` object. Remember to pass in the address of the
-C-variables that should be filled in.
+C-variables that should be filled in.
There are lots of examples of how to use :cfunc:`PyArg_ParseTuple`
throughout the NumPy source code. The standard usage is like this:
@@ -196,7 +196,7 @@ was successful but the integer conversion failed, then you would need
to release the reference count to the data-type object before
returning. A typical way to do this is to set *dtype* to ``NULL``
before calling :cfunc:`PyArg_ParseTuple` and then use :cfunc:`Py_XDECREF`
-on *dtype* before returning.
+on *dtype* before returning.
After the input arguments are processed, the code that actually does
the work is written (likely calling other functions as needed). The
@@ -216,7 +216,7 @@ corresponding :ctype:`PyObject *` C-variable. You should use 'N' if you ave
already created a reference for the object and just want to give that
reference to the tuple. You should use 'O' if you only have a borrowed
reference to an object and need to create one to provide for the
-tuple.
+tuple.
Functions with keyword arguments
@@ -243,11 +243,11 @@ char \*kwlist[], addresses...). The kwlist parameter to this function
is a ``NULL`` -terminated array of strings providing the expected
keyword arguments. There should be one string for each entry in the
format_string. Using this function will raise a TypeError if invalid
-keyword arguments are passed in.
+keyword arguments are passed in.
For more help on this function please see section 1.8 (Keyword
Paramters for Extension Functions) of the Extending and Embedding
-tutorial in the Python documentation.
+tutorial in the Python documentation.
Reference counting
@@ -269,7 +269,7 @@ being not using DECREF on objects before exiting early from a routine
due to some error. In second place, is the common error of not owning
the reference on an object that is passed to a function or macro that
is going to steal the reference ( *e.g.* :cfunc:`PyTuple_SET_ITEM`, and
-most functions that take :ctype:`PyArray_Descr` objects).
+most functions that take :ctype:`PyArray_Descr` objects).
.. index::
single: reference counting
@@ -304,7 +304,7 @@ variable is deleted and the reference count decremented by one, there
will still be that extra reference count, and the array will never be
deallocated. You will have a reference-counting induced memory leak.
Using the 'N' character will avoid this situation as it will return to
-the caller an object (inside the tuple) with a single reference count.
+the caller an object (inside the tuple) with a single reference count.
.. index::
single: reference counting
@@ -318,7 +318,7 @@ Dealing with array objects
Most extension modules for NumPy will need to access the memory for an
ndarray object (or one of it's sub-classes). The easiest way to do
this doesn't require you to know much about the internals of NumPy.
-The method is to
+The method is to
1. Ensure you are dealing with a well-behaved array (aligned, in machine
byte-order and single-segment) of the correct type and number of
@@ -326,12 +326,12 @@ The method is to
1. By converting it from some Python object using
:cfunc:`PyArray_FromAny` or a macro built on it.
-
+
2. By constructing a new ndarray of your desired shape and type
using :cfunc:`PyArray_NewFromDescr` or a simpler macro or function
based on it.
-
-
+
+
2. Get the shape of the array and a pointer to its actual data.
3. Pass the data and shape information on to a subroutine or other
@@ -343,7 +343,7 @@ The method is to
you can relax your requirements so as not to force a single-segment
array and the data-copying that might result.
-Each of these sub-topics is covered in the following sub-sections.
+Each of these sub-topics is covered in the following sub-sections.
Converting an arbitrary sequence object
@@ -389,35 +389,35 @@ writeable). The syntax is
requirements flag. A copy is made only if necessary. If you
want to guarantee a copy, then pass in :cdata:`NPY_ENSURECOPY`
to the requirements flag.
-
+
*typenum*
One of the enumerated types or :cdata:`NPY_NOTYPE` if the data-type
should be determined from the object itself. The C-based names
can be used:
-
+
:cdata:`NPY_BOOL`, :cdata:`NPY_BYTE`, :cdata:`NPY_UBYTE`,
:cdata:`NPY_SHORT`, :cdata:`NPY_USHORT`, :cdata:`NPY_INT`,
:cdata:`NPY_UINT`, :cdata:`NPY_LONG`, :cdata:`NPY_ULONG`,
:cdata:`NPY_LONGLONG`, :cdata:`NPY_ULONGLONG`, :cdata:`NPY_DOUBLE`,
:cdata:`NPY_LONGDOUBLE`, :cdata:`NPY_CFLOAT`, :cdata:`NPY_CDOUBLE`,
- :cdata:`NPY_CLONGDOUBLE`, :cdata:`NPY_OBJECT`.
-
+ :cdata:`NPY_CLONGDOUBLE`, :cdata:`NPY_OBJECT`.
+
Alternatively, the bit-width names can be used as supported on the
platform. For example:
-
+
:cdata:`NPY_INT8`, :cdata:`NPY_INT16`, :cdata:`NPY_INT32`,
:cdata:`NPY_INT64`, :cdata:`NPY_UINT8`,
:cdata:`NPY_UINT16`, :cdata:`NPY_UINT32`,
:cdata:`NPY_UINT64`, :cdata:`NPY_FLOAT32`,
:cdata:`NPY_FLOAT64`, :cdata:`NPY_COMPLEX64`,
:cdata:`NPY_COMPLEX128`.
-
+
The object will be converted to the desired type only if it
can be done without losing precision. Otherwise ``NULL`` will
be returned and an error raised. Use :cdata:`NPY_FORCECAST` in the
requirements flag to override this behavior.
-
+
*requirements*
The memory model for an ndarray admits arbitrary strides in
@@ -431,7 +431,7 @@ writeable). The syntax is
the array data. Both of these problems can be solved by
converting the Python object into an array that is more
"well-behaved" for your specific usage.
-
+
The requirements flag allows specification of what kind of array is
acceptable. If the object passed in does not satisfy this requirements
then a copy is made so that thre returned object will satisfy the
@@ -440,7 +440,7 @@ writeable). The syntax is
returned array object. All of the flags are explained in the detailed
API chapter. The flags most commonly needed are :cdata:`NPY_IN_ARRAY`,
:cdata:`NPY_OUT_ARRAY`, and :cdata:`NPY_INOUT_ARRAY`:
-
+
.. cvar:: NPY_IN_ARRAY
Equivalent to :cdata:`NPY_CONTIGUOUS` \|
@@ -448,7 +448,7 @@ writeable). The syntax is
for arrays that must be in C-contiguous order and aligned.
These kinds of arrays are usually input arrays for some
algorithm.
-
+
.. cvar:: NPY_OUT_ARRAY
Equivalent to :cdata:`NPY_CONTIGUOUS` \|
@@ -458,7 +458,7 @@ writeable). The syntax is
as well. Such an array is usually returned as output
(although normally such output arrays are created from
scratch).
-
+
.. cvar:: NPY_INOUT_ARRAY
Equivalent to :cdata:`NPY_CONTIGUOUS` \|
@@ -476,19 +476,19 @@ writeable). The syntax is
with the :cdata:`NPY_UPDATEIFCOPY` flag set. This will
delete the array without causing the contents to be copied
back into the original array.
-
-
+
+
Other useful flags that can be OR'd as additional requirements are:
-
+
.. cvar:: NPY_FORCECAST
Cast to the desired type, even if it can't be done without losing
information.
-
+
.. cvar:: NPY_ENSURECOPY
Make sure the resulting array is a copy of the original.
-
+
.. cvar:: NPY_ENSUREARRAY
Make sure the resulting object is an actual ndarray and not a sub-
@@ -514,7 +514,7 @@ to get an ndarray object of whatever data-type is needed. The most
general function for doing this is :cfunc:`PyArray_NewFromDescr`. All array
creation functions go through this heavily re-used code. Because of
its flexibility, it can be somewhat confusing to use. As a result,
-simpler forms exist that are easier to use.
+simpler forms exist that are easier to use.
.. cfunction:: PyObject *PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
@@ -570,7 +570,7 @@ For arrays less than 4-dimensions there are :cfunc:`PyArray_GETPTR{k}`
using the array strides easier. The arguments .... represent {k} non-
negative integer indices into the array. For example, suppose ``E`` is
a 3-dimensional ndarray. A (void*) pointer to the element ``E[i,j,k]``
-is obtained as :cfunc:`PyArray_GETPTR3` (E, i, j, k).
+is obtained as :cfunc:`PyArray_GETPTR3` (E, i, j, k).
As explained previously, C-style contiguous arrays and Fortran-style
contiguous arrays have particular striding patterns. Two array flags
@@ -597,7 +597,7 @@ Example
The following example shows how you might write a wrapper that accepts
two input arguments (that will be converted to an array) and an output
argument (that must be an array). The function returns None and
-updates the output array.
+updates the output array.
.. code-block:: c
@@ -606,33 +606,33 @@ updates the output array.
{
PyObject *arg1=NULL, *arg2=NULL, *out=NULL;
PyObject *arr1=NULL, *arr2=NULL, *oarr=NULL;
-
+
if (!PyArg_ParseTuple(args, "OOO!", &arg1, &arg2,
&PyArray_Type, &out)) return NULL;
-
+
arr1 = PyArray_FROM_OTF(arg1, NPY_DOUBLE, NPY_IN_ARRAY);
if (arr1 == NULL) return NULL;
- arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_IN_ARRAY);
+ arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_IN_ARRAY);
if (arr2 == NULL) goto fail;
oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_INOUT_ARRAY);
if (oarr == NULL) goto fail;
-
+
/* code that makes use of arguments */
- /* You will probably need at least
+ /* You will probably need at least
nd = PyArray_NDIM(<..>) -- number of dimensions
- dims = PyArray_DIMS(<..>) -- npy_intp array of length nd
+ dims = PyArray_DIMS(<..>) -- npy_intp array of length nd
showing length in each dim.
dptr = (double *)PyArray_DATA(<..>) -- pointer to data.
-
+
If an error occurs goto fail.
*/
-
+
Py_DECREF(arr1);
Py_DECREF(arr2);
Py_DECREF(oarr);
Py_INCREF(Py_None);
return Py_None;
-
+
fail:
Py_XDECREF(arr1);
Py_XDECREF(arr2);
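Since the body of the C wrapper above is left as an exercise, a pure-Python reference of what it should compute is handy for testing the finished extension module. This is only a sketch; the name ``add_into`` is illustrative and not part of NumPy:

```python
def add_into(a, b, out):
    """Elementwise a + b written into a preallocated output sequence,
    mirroring what the C loop would do through the dptr pointers."""
    if not (len(a) == len(b) == len(out)):
        raise ValueError("shape mismatch")
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    # The C wrapper returns Py_None after updating the output array.

out = [0.0] * 3
add_into([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], out)
print(out)  # [5.0, 7.0, 9.0]
```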
diff --git a/doc/source/user/c-info.python-as-glue.rst b/doc/source/user/c-info.python-as-glue.rst
index 0e0c73cd8..4fb337821 100644
--- a/doc/source/user/c-info.python-as-glue.rst
+++ b/doc/source/user/c-info.python-as-glue.rst
@@ -3,12 +3,12 @@ Using Python as glue
********************
| There is no conversation more boring than the one where everybody
-| agrees.
-| --- *Michel de Montaigne*
+| agrees.
+| --- *Michel de Montaigne*
| Duct tape is like the force. It has a light side, and a dark side, and
-| it holds the universe together.
-| --- *Carl Zwanzig*
+| it holds the universe together.
+| --- *Carl Zwanzig*
Many people like to say that Python is a fantastic glue language.
Hopefully, this Chapter will convince you that this is true. The first
@@ -19,21 +19,21 @@ Perl, in addition, the ability to easily extend Python made it
relatively easy to create new classes and types specifically adapted
to the problems being solved. From the interactions of these early
contributors, Numeric emerged as an array-like object that could be
-used to pass data between these applications.
+used to pass data between these applications.
As Numeric has matured and developed into NumPy, people have been able
to write more code directly in NumPy. Often this code is fast-enough
for production use, but there are still times that there is a need to
access compiled code. Either to get that last bit of efficiency out of
the algorithm or to make it easier to access widely-available codes
-written in C/C++ or Fortran.
+written in C/C++ or Fortran.
This chapter will review many of the tools that are available for the
purpose of accessing code written in other compiled languages. There
are many resources available for learning to call other compiled
libraries from Python and the purpose of this Chapter is not to make
you an expert. The main goal is to make you aware of some of the
-possibilities so that you will know what to "Google" in order to learn more.
+possibilities so that you will know what to "Google" in order to learn more.
The http://www.scipy.org website also contains a great deal of useful
information about many of these tools. For example, there is a nice
@@ -42,7 +42,7 @@ http://www.scipy.org/PerformancePython. This link provides several
ways to solve the same problem showing how to use and connect with
compiled code to get the best performance. In the process you can get
a taste for several of the approaches that will be discussed in this
-chapter.
+chapter.
Calling other compiled libraries from Python
@@ -60,21 +60,21 @@ critical portions of your code). Therefore one of the most common
needs is to call out from Python code to a fast, machine-code routine
(e.g. compiled using C/C++ or Fortran). The fact that this is
relatively easy to do is a big reason why Python is such an excellent
-high-level language for scientific and engineering programming.
+high-level language for scientific and engineering programming.
There are two basic approaches to calling compiled code: writing an
extension module that is then imported to Python using the import
command, or calling a shared-library subroutine directly from Python
using the ctypes module (included in the standard distribution with
Python 2.5). The first method is the most common (but with the
-inclusion of ctypes into Python 2.5 this status may change).
+inclusion of ctypes into Python 2.5 this status may change).
.. warning::
Calling C-code from Python can result in Python crashes if you are not
careful. None of the approaches in this chapter are immune. You have
to know something about the way data is handled by both NumPy and by
- the third-party library being used.
+ the third-party library being used.
Hand-generated wrappers
@@ -89,7 +89,7 @@ between Python objects and C data-types. For standard C data-types
there is probably already a built-in converter. For others you may
need to write your own converter and use the "O&" format string which
allows you to specify a function that will be used to perform the
-conversion from the Python object to whatever C-structures are needed.
+conversion from the Python object to whatever C-structures are needed.
Once the conversions to the appropriate C-structures and C data-types
have been performed, the next step in the wrapper is to call the
using your compiler and platform. This can vary somewhat between platforms and
compilers (which is another reason f2py makes life much simpler for
interfacing Fortran code) but generally involves underscore mangling
of the name and the fact that all variables are passed by reference
-(i.e. all arguments are pointers).
+(i.e. all arguments are pointers).
The advantage of the hand-generated wrapper is that you have complete
control over how the C-library gets used and called which can lead to
@@ -113,7 +113,7 @@ regimented, code-generation procedures have been developed to make
this process easier. One of these code-generation techniques is
distributed with NumPy and allows easy integration with Fortran and
(simple) C code. This package, f2py, will be covered briefly in the
-next section.
+next section.
f2py
@@ -124,7 +124,7 @@ interfaces to routines in Fortran 77/90/95 code. It has the ability to
parse Fortran 77/90/95 code and automatically generate Python
signatures for the subroutines it encounters, or you can guide how the
subroutine interfaces with Python by constructing an interface-
-definition-file (or modifying the f2py-produced one).
+definition-file (or modifying the f2py-produced one).
.. index::
single: f2py
@@ -148,7 +148,7 @@ example. Here is one of the subroutines contained in a file named
DO 20 J = 1, N
C(J) = A(J)+B(J)
20 CONTINUE
- END
+ END
This routine simply adds the elements in two contiguous arrays and
places the result in a third. The memory for all three arrays must be
@@ -160,7 +160,7 @@ routine can be automatically generated by f2py::
You should be able to run this command assuming your search-path is
set-up properly. This command will produce an extension module named
addmodule.c in the current directory. This extension module can now be
-compiled and used from Python just like any other extension module.
+compiled and used from Python just like any other extension module.
Creating a compiled extension module
@@ -181,13 +181,13 @@ information about how the module method may be called:
>>> import add
>>> print add.zadd.__doc__
- zadd - Function signature:
+ zadd - Function signature:
zadd(a,b,c,n)
- Required arguments:
+ Required arguments:
a : input rank-1 array('D') with bounds (*)
b : input rank-1 array('D') with bounds (*)
c : input rank-1 array('D') with bounds (*)
- n : input int
+ n : input int
Improving the basic interface
@@ -200,13 +200,13 @@ attempt to convert all arguments to their required types (and shapes)
and issue an error if unsuccessful. However, because it knows nothing
about the semantics of the arguments (such that C is an output and n
should really match the array sizes), it is possible to abuse this
-function in ways that can cause Python to crash. For example:
+function in ways that can cause Python to crash. For example:
>>> add.zadd([1,2,3],[1,2],[3,4],1000)
will cause a program crash on most systems. Under the covers, the
lists are being converted to proper arrays but then the underlying add
-loop is told to cycle way beyond the borders of the allocated memory.
+loop is told to cycle way beyond the borders of the allocated memory.
In order to improve the interface, directives should be provided. This
is accomplished by constructing an interface definition file. It is
@@ -221,11 +221,11 @@ section of this file corresponding to zadd is:
.. code-block:: none
- subroutine zadd(a,b,c,n) ! in :add:add.f
- double complex dimension(*) :: a
- double complex dimension(*) :: b
- double complex dimension(*) :: c
- integer :: n
+ subroutine zadd(a,b,c,n) ! in :add:add.f
+ double complex dimension(*) :: a
+ double complex dimension(*) :: b
+ double complex dimension(*) :: c
+ integer :: n
end subroutine zadd
By placing intent directives and checking code, the interface can be
@@ -234,11 +234,11 @@ to use and more robust.
.. code-block:: none
- subroutine zadd(a,b,c,n) ! in :add:add.f
- double complex dimension(n) :: a
- double complex dimension(n) :: b
- double complex intent(out),dimension(n) :: c
- integer intent(hide),depend(a) :: n=len(a)
+ subroutine zadd(a,b,c,n) ! in :add:add.f
+ double complex dimension(n) :: a
+ double complex dimension(n) :: b
+ double complex intent(out),dimension(n) :: c
+ integer intent(hide),depend(a) :: n=len(a)
end subroutine zadd
The intent directive, intent(out) is used to tell f2py that ``c`` is
@@ -248,25 +248,25 @@ to not allow the user to specify the variable, ``n``, but instead to
get it from the size of ``a``. The depend( ``a`` ) directive is
necessary to tell f2py that the value of n depends on the input a (so
that it won't try to create the variable n until the variable a is
-created).
+created).
The new interface has docstring:
>>> print add.zadd.__doc__
- zadd - Function signature:
- c = zadd(a,b)
- Required arguments:
- a : input rank-1 array('D') with bounds (n)
- b : input rank-1 array('D') with bounds (n)
- Return objects:
- c : rank-1 array('D') with bounds (n)
+ zadd - Function signature:
+ c = zadd(a,b)
+ Required arguments:
+ a : input rank-1 array('D') with bounds (n)
+ b : input rank-1 array('D') with bounds (n)
+ Return objects:
+ c : rank-1 array('D') with bounds (n)
-Now, the function can be called in a much more robust way:
+Now, the function can be called in a much more robust way:
>>> add.zadd([1,2,3],[4,5,6])
array([ 5.+0.j, 7.+0.j, 9.+0.j])
-Notice the automatic conversion to the correct format that occurred.
+Notice the automatic conversion to the correct format that occurred.
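A pure-Python sketch of what the improved ``zadd`` interface computes can serve as a reference when testing the f2py-built module (the function here is illustrative, not the compiled one; note how the length check plays the role of f2py's bounds checking):

```python
def zadd(a, b):
    # f2py hides n and derives it from len(a); a length mismatch is rejected
    # instead of overrunning memory as the unchecked interface could.
    if len(a) != len(b):
        raise ValueError("bounds mismatch")
    return [complex(x) + complex(y) for x, y in zip(a, b)]

print(zadd([1, 2, 3], [4, 5, 6]))  # [(5+0j), (7+0j), (9+0j)]
```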
Inserting directives in Fortran source
@@ -305,7 +305,7 @@ contained A(N) instead of A(\*) and so forth with B and C, then I
could obtain (nearly) the same interface simply by placing the
INTENT(OUT) :: C comment line in the source code. The only difference
is that N would be an optional input that would default to the length
-of A.
+of A.
A filtering example
@@ -315,7 +315,7 @@ For comparison with the other methods to be discussed. Here is another
example of a function that filters a two-dimensional array of double
precision floating-point numbers using a fixed averaging filter. The
advantage of using Fortran to index into multi-dimensional arrays
-should be clear from this example.
+should be clear from this example.
.. code-block:: none
@@ -329,7 +329,7 @@ should be clear from this example.
CF2PY INTENT(HIDE) :: M
DO 20 I = 2,M-1
DO 40 J=2,N-1
- B(I,J) = A(I,J) +
+ B(I,J) = A(I,J) +
$ (A(I-1,J)+A(I+1,J) +
$ A(I,J-1)+A(I,J+1) )*0.5D0 +
$ (A(I-1,J-1) + A(I-1,J+1) +
@@ -345,7 +345,7 @@ filter using::
This will produce an extension module named filter.so in the current
directory with a method named dfilter2d that returns a filtered
-version of the input.
+version of the input.
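For reference, the averaging filter can be sketched in pure Python. This is an assumption-laden reading of the (partially shown) Fortran source: the corner neighbors are assumed to carry a 0.25 weight, and the border rows and columns, which the Fortran loop never writes, are copied from the input here for definiteness:

```python
def dfilter2d(a):
    # a: rectangular list of lists of floats; returns the filtered copy.
    m, n = len(a), len(a[0])
    b = [row[:] for row in a]  # borders copied unchanged (an assumption)
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            b[i][j] = (a[i][j]
                       + (a[i-1][j] + a[i+1][j]
                          + a[i][j-1] + a[i][j+1]) * 0.5
                       + (a[i-1][j-1] + a[i-1][j+1]
                          + a[i+1][j-1] + a[i+1][j+1]) * 0.25)
    return b

print(dfilter2d([[1.0] * 3 for _ in range(3)])[1][1])  # 4.0
```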
Calling f2py from Python
@@ -367,7 +367,7 @@ executed using Python code is:
The source string can be any valid Fortran code. If you want to save
the extension-module source code then a suitable file-name can be
-provided by the source_fn keyword to the compile function.
+provided by the source_fn keyword to the compile function.
Automatic extension module generation
@@ -387,7 +387,7 @@ so that it would be loaded as f2py_examples.add) is:
config = Configuration('f2py_examples',parent_package, top_path)
config.add_extension('add', sources=['add.pyf','add.f'])
return config
-
+
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())
@@ -401,7 +401,7 @@ packages directory for the version of Python you are using. For the
resulting package to work, you need to create a file named __init__.py
(in the same directory as add.pyf). Notice the extension module is
defined entirely in terms of the "add.pyf" and "add.f" files. The
-conversion of the .pyf file to a .c file is handled by numpy.distutils.
+conversion of the .pyf file to a .c file is handled by numpy.distutils.
Conclusion
@@ -413,7 +413,7 @@ for f2py found in the numpy/f2py/docs directory where-ever NumPy is
installed on your system (usually under site-packages). There is also
more information on using f2py (including how to use it to wrap C
codes) at http://www.scipy.org/Cookbook under the "Using NumPy with
-Other Languages" heading.
+Other Languages" heading.
The f2py method of linking compiled code is currently the most
sophisticated and integrated approach. It allows clean separation of
@@ -427,7 +427,7 @@ is still the easiest way to write fast and clear code for scientific
computing. It handles complex numbers, and multi-dimensional indexing
in the most straightforward way. Be aware, however, that some Fortran
compilers will not be able to optimize code as well as good hand-
-written C-code.
+written C-code.
.. index::
single: f2py
@@ -443,7 +443,7 @@ temporary variables, to directly "inline" C/C++ code into Python, or
to create a fully-named extension module. You must either install
scipy or get the weave package separately and install it using the
standard python setup.py install. You must also have a C/C++-compiler
-installed and usable by Python distutils in order to use weave.
+installed and usable by Python distutils in order to use weave.
.. index::
single: weave
@@ -451,7 +451,7 @@ installed and useable by Python distutils in order to use weave.
Somewhat dated, but still useful documentation for weave can be found
at the link http://www.scipy.org/Weave. There are also many examples found
in the examples directory which is installed under the weave directory
-in the place where weave is installed on your system.
+in the place where weave is installed on your system.
Speed up code involving arrays (also see scipy.numexpr)
@@ -470,7 +470,7 @@ quickly than the equivalent NumPy expression. This is especially true
if your array sizes are large and the expression would require NumPy
to create several temporaries. Only expressions involving basic
arithmetic operations and basic array slicing can be converted to
-Blitz C++ code.
+Blitz C++ code.
For example, consider the expression::
@@ -489,12 +489,12 @@ execution time is only about 0.20 seconds (about 0.14 seconds spent in
weave and the rest in allocating space for d). Thus, we've sped up the
code by a factor of 2 using only a simple command (weave.blitz). Your
mileage may vary, but factors of 2-8 speed-ups are possible with this
-very simple technique.
+very simple technique.
If you are interested in using weave in this way, then you should also
look at scipy.numexpr which is another similar way to speed up
expressions by eliminating the need for temporary variables. Using
-numexpr does not require a C/C++ compiler.
+numexpr does not require a C/C++ compiler.
Inline C-code
@@ -514,24 +514,24 @@ following example shows how to use weave on basic Python objects:
.. code-block:: python
- code = r"""
- int i;
- py::tuple results(2);
- for (i=0; i<a.length(); i++) {
+ code = r"""
+ int i;
+ py::tuple results(2);
+ for (i=0; i<a.length(); i++) {
a[i] = i;
- }
- results[0] = 3.0;
- results[1] = 4.0;
+ }
+ results[0] = 3.0;
+ results[1] = 4.0;
return_val = results;
- """
- a = [None]*10
+ """
+ a = [None]*10
res = weave.inline(code,['a'])
The C++ code shown in the code string uses the name 'a' to refer to
the Python list that is passed in. Because the Python List is a
mutable type, the elements of the list itself are modified by the C++
code. A set of C++ classes is used to access Python objects using
-simple syntax.
+simple syntax.
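A pure-Python equivalent of the inlined C++ above makes the behavior explicit: the list is mutated in place, and a 2-tuple (weave's ``py::tuple``) becomes the return value. The helper name is illustrative only:

```python
def fill_and_pair(a):
    # Mutate the passed-in list in place, as the C++ loop does via a[i] = i.
    for i in range(len(a)):
        a[i] = i
    # results[0] = 3.0; results[1] = 4.0; return_val = results;
    return (3.0, 4.0)

a = [None] * 10
res = fill_and_pair(a)
print(a, res)
```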
The main advantage of using C-code, however, is to speed up processing
on an array of data. Accessing a NumPy array in C++ code using weave,
@@ -540,16 +540,16 @@ arrays to C++ code. The default converter creates 5 variables for the
C-code for every NumPy array passed in to weave.inline. The following
table shows these variables which can all be used in the C++ code. The
table assumes that ``myvar`` is the name of the array in Python with
-data-type {dtype} (i.e. float64, float32, int8, etc.)
+data-type {dtype} (i.e. float64, float32, int8, etc.)
=========== ============== =========================================
-Variable Type Contents
+Variable Type Contents
=========== ============== =========================================
-myvar {dtype}* Pointer to the first element of the array
-Nmyvar npy_intp* A pointer to the dimensions array
-Smyvar npy_intp* A pointer to the strides array
-Dmyvar int The number of dimensions
-myvar_array PyArrayObject* The entire structure for the array
+myvar {dtype}* Pointer to the first element of the array
+Nmyvar npy_intp* A pointer to the dimensions array
+Smyvar npy_intp* A pointer to the strides array
+Dmyvar int The number of dimensions
+myvar_array PyArrayObject* The entire structure for the array
=========== ============== =========================================
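The table can be made concrete with a small sketch. For a hypothetical C-contiguous float64 array of shape (3, 4), the values the five variables would expose to the C++ code are (this is an illustration of the conventions, not weave itself):

```python
shape = (3, 4)           # what Nmyvar points at (the dimensions array)
itemsize = 8             # bytes per float64 element
ndim = len(shape)        # Dmyvar
# C-contiguous strides in bytes, innermost axis last -- what Smyvar points at
strides = (shape[1] * itemsize, itemsize)
print(ndim, shape, strides)  # 2 (3, 4) (32, 8)
```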
The in-lined code can contain references to any of these variables as
@@ -561,7 +561,7 @@ checking (be-sure to use the correct macro and ensure the array is
aligned and in correct byte-swap order in order to get useful
results). The following code shows how you might use these variables
and macros to code a loop in C that computes a simple 2-d weighted
-averaging filter.
+averaging filter.
.. code-block:: c++
@@ -582,7 +582,7 @@ The above code doesn't have any error checking and so could fail with
a Python crash if, ``a`` had the wrong number of dimensions, or ``b``
did not have the same shape as ``a``. However, it could be placed
inside a standard Python function with the necessary error checking to
-produce a robust but fast subroutine.
+produce a robust but fast subroutine.
One final note about weave.inline: if you have additional code you
want to include in the final extension module such as supporting
@@ -592,7 +592,7 @@ support_code=support)``. If you need the extension module to link
against an additional library then you can also pass in
distutils-style keyword arguments such as library_dirs, libraries,
and/or runtime_library_dirs which point to the appropriate libraries
-and directories.
+and directories.
Simplify creation of an extension module
----------------------------------------
@@ -604,9 +604,9 @@ codes to execute in C, it would be better to make them all separate
functions in a single extension module with multiple functions. You
can also use the tools weave provides to produce this larger extension
module. In fact, the weave.inline function just uses these more
-general tools to do its work.
+general tools to do its work.
-The approach is to:
+The approach is to:
1. construct a extension module object using
ext_tools.ext_module(``module_name``);
@@ -626,7 +626,7 @@ The approach is to:
Several examples are available in the examples directory where weave
is installed on your system. Look particularly at ramp2.py,
-increment_example.py and fibonacci.py
+increment_example.py and fibonacci.py
Conclusion
@@ -643,7 +643,7 @@ normal way *(* using a setup.py file). While you can use weave to
build larger extension modules with many methods, creating methods
with a variable number of arguments is not possible. Thus, for a more
sophisticated module, you will still probably want a Python-layer that
-calls the weave-produced extension.
+calls the weave-produced extension.
.. index::
single: weave
@@ -661,7 +661,7 @@ to interface to a large library of code. However, if you are writing
an extension module that will include quite a bit of your own
algorithmic code, as well, then Pyrex is a good match. A big weakness
perhaps is the inability to easily and quickly access the elements of
-a multidimensional array.
+a multidimensional array.
.. index::
single: pyrex
@@ -678,12 +678,12 @@ write in a setup.py file:
from Pyrex.Distutils import build_ext
from distutils.extension import Extension
from distutils.core import setup
-
+
import numpy
pyx_ext = Extension('mine', ['mine.pyx'],
include_dirs=[numpy.get_include()])
-
- setup(name='mine', description='Nothing',
+
+ setup(name='mine', description='Nothing',
ext_modules=[pyx_ext],
cmdclass = {'build_ext':build_ext})
@@ -694,7 +694,7 @@ also include support for automatically producing the extension-module
and linking it from a ``.pyx`` file. It works so that if the user does
not have Pyrex installed, then it looks for a file with the same
file-name but a ``.c`` extension which it then uses instead of trying
-to produce the ``.c`` file again.
+to produce the ``.c`` file again.
Pyrex does not natively understand NumPy arrays. However, it is not
difficult to include information that lets Pyrex deal with them
@@ -709,7 +709,7 @@ located in the .../site-packages/numpy/doc/pyrex directory where you
have Python installed. There is also an example in that directory of
using Pyrex to construct a simple extension module. It shows that
Pyrex looks a lot like Python but also contains some new syntax that
-is necessary in order to get C-like speed.
+is necessary in order to get C-like speed.
If you just use Pyrex to compile a standard Python module, then you
will get a C-extension module that runs either as fast or, possibly,
@@ -724,7 +724,7 @@ use a special construct to create for loops:
Let's look at two examples we've seen before to see how they might be
implemented using Pyrex. These examples were compiled into extension
-modules using Pyrex-0.9.3.1.
+modules using Pyrex-0.9.3.1.
Pyrex-add
@@ -739,16 +739,16 @@ functions we previously implemented using f2py:
from c_numpy cimport import_array, ndarray, npy_intp, npy_cdouble, \
npy_cfloat, NPY_DOUBLE, NPY_CDOUBLE, NPY_FLOAT, \
NPY_CFLOAT
-
+
#We need to initialize NumPy
import_array()
-
+
def zadd(object ao, object bo):
cdef ndarray c, a, b
cdef npy_intp i
- a = c_numpy.PyArray_ContiguousFromAny(ao,
+ a = c_numpy.PyArray_ContiguousFromAny(ao,
NPY_CDOUBLE, 1, 1)
- b = c_numpy.PyArray_ContiguousFromAny(bo,
+ b = c_numpy.PyArray_ContiguousFromAny(bo,
NPY_CDOUBLE, 1, 1)
c = c_numpy.PyArray_SimpleNew(a.nd, a.dimensions,
a.descr.type_num)
@@ -778,7 +778,7 @@ Python objects, Pyrex inserts the checks for NULL into the C-code for
you and returns with failure if need be. There is also a way to get
Pyrex to automatically check for exceptions when you call functions
that don't return Python objects. See the documentation of Pyrex for
-details.
+details.
Pyrex-filter
@@ -787,15 +787,15 @@ Pyrex-filter
The two-dimensional example we created using weave is a bit uglier to
implement in Pyrex because two-dimensional indexing using Pyrex is not
as simple. But, it is straightforward (and possibly faster because of
-pre-computed indices). Here is the Pyrex-file I named image.pyx.
+pre-computed indices). Here is the Pyrex-file I named image.pyx.
.. code-block:: none
cimport c_numpy
- from c_numpy cimport import_array, ndarray, npy_intp,\
+ from c_numpy cimport import_array, ndarray, npy_intp,\
NPY_DOUBLE, NPY_CDOUBLE, \
NPY_FLOAT, NPY_CFLOAT, NPY_ALIGNED \
-
+
#We need to initialize NumPy
import_array()
def filter(object ao):
@@ -803,7 +803,7 @@ pre-computed indices). Here is the Pyrex-file I named image.pyx.
cdef npy_intp i, j, M, N, oS
cdef npy_intp r,rm1,rp1,c,cm1,cp1
cdef double value
- # Require an ALIGNED array
+ # Require an ALIGNED array
# (but not necessarily contiguous)
# We will use strides to access the elements.
a = c_numpy.PyArray_FROMANY(ao, NPY_DOUBLE, \
@@ -829,7 +829,7 @@ pre-computed indices). Here is the Pyrex-file I named image.pyx.
(<double *>(a.data+rp1+c))[0] + \
(<double *>(a.data+r+cm1))[0] + \
(<double *>(a.data+r+cp1))[0])*0.5 + \
- ((<double *>(a.data+rm1+cm1))[0] + \
+ ((<double *>(a.data+rm1+cm1))[0] + \
(<double *>(a.data+rp1+cm1))[0] + \
(<double *>(a.data+rp1+cp1))[0] + \
(<double *>(a.data+rm1+cp1))[0])*0.25
@@ -849,7 +849,7 @@ particularly easy to understand what is happening. A 2-d image, ``in``
Conclusion
----------
-There are several disadvantages of using Pyrex:
+There are several disadvantages of using Pyrex:
1. The syntax for Pyrex can get a bit bulky, and it can be confusing at
first to understand what kind of objects you are getting and how to
@@ -859,13 +859,13 @@ There are several disadvantages of using Pyrex:
mismatches can result in failures such as
1. Pyrex failing to generate the extension module source code,
-
+
2. Compiler failure while generating the extension module binary due to
incorrect C syntax,
-
+
3. Python failure when trying to use the module.
-
-
+
+
3. It is easy to lose a clean separation between Python and C which makes
re-using your C-code for other non-Python-related projects more
difficult.
@@ -886,7 +886,7 @@ be over-looked. It is especially useful for people that can't or won't
write C-code or Fortran code. But, if you are already able to write
simple subroutines in C or Fortran, then I would use one of the other
approaches such as f2py (for Fortran), ctypes (for C shared-
-libraries), or weave (for inline C-code).
+libraries), or weave (for inline C-code).
.. index::
single: pyrex
@@ -910,7 +910,7 @@ location. The responsibility is then on you that the subroutine will
not access memory outside the actual array area. But, if you don't
mind living a little dangerously ctypes can be an effective tool for
quickly taking advantage of a large shared library (or writing
-extended functionality in your own shared library).
+extended functionality in your own shared library).
.. index::
single: ctypes
extension-module interface. However, this overhead should be negligible
if the C-routine being called is doing any significant amount of work.
If you are a great Python programmer with weak C-skills, ctypes is an
easy way to write a useful interface to a (shared) library of compiled
-code.
+code.
-To use c-types you must
+To use c-types you must
1. Have a shared library.
@@ -945,7 +945,7 @@ Having a shared library
There are several requirements for a shared library that can be used
with c-types that are platform specific. This guide assumes you have
some familiarity with making a shared library on your system (or
-simply have a shared library available to you). Items to remember are:
+simply have a shared library available to you). Items to remember are:
- A shared library must be compiled in a special way ( *e.g.* using
the -shared flag with gcc).
@@ -953,25 +953,25 @@ simply have a shared library available to you). Items to remember are:
- On some platforms (*e.g.* Windows) , a shared library requires a
.def file that specifies the functions to be exported. For example a
mylib.def file might contain.
-
+
::
-
+
LIBRARY mylib.dll
EXPORTS
cool_function1
cool_function2
-
+
Alternatively, you may be able to use the storage-class specifier
__declspec(dllexport) in the C-definition of the function to avoid the
- need for this .def file.
-
+ need for this .def file.
+
There is no standard way in Python distutils to create a standard
shared library (an extension module is a "special" shared library
Python understands) in a cross-platform manner. Thus, a big
disadvantage of ctypes at the time of writing this book is that it is
difficult to distribute in a cross-platform manner a Python extension
that uses c-types and includes your own code which should be compiled
-as a shared library on the user's system.
+as a shared library on the user's system.
Loading the shared library
@@ -994,7 +994,7 @@ foolproof. Complicating matters, different platforms have different
default extensions used by shared libraries (e.g. .dll -- Windows, .so
-- Linux, .dylib -- Mac OS X). This must also be taken into account if
you are using c-types to wrap code that needs to work on several
-platforms.
+platforms.
NumPy provides a convenience function called
:func:`ctypeslib.load_library` (name, path). This function takes the name
@@ -1005,13 +1005,13 @@ cannot be found or raises an ImportError if the ctypes module is not
available. (Windows users: the ctypes library object loaded using
:func:`load_library` is always loaded assuming cdecl calling convention.
See the ctypes documentation under ctypes.windll and/or ctypes.oledll
-for ways to load libraries under other calling conventions).
+for ways to load libraries under other calling conventions).
The functions in the shared library are available as attributes of the
ctypes library object (returned from :func:`ctypeslib.load_library`) or
as items using ``lib['func_name']`` syntax. The latter method for
retrieving a function name is particularly useful if the function name
-contains characters that are not allowable in Python variable names.
+contains characters that are not allowable in Python variable names.
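Both access styles can be tried with the standard library alone, using the C runtime as a stand-in shared library (the library name differs per platform, which is exactly the portability issue described above):

```python
import ctypes
import ctypes.util

# Locate and load the C runtime; fall back to the running process's
# symbols on POSIX systems where find_library comes up empty.
name = ctypes.util.find_library("c")
libc = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# Functions are attributes of the library object...
print(libc.abs(-5))       # 5
# ...or items, which helps when a symbol is not a valid Python identifier:
print(libc["abs"](-7))    # 7
```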
Converting arguments
@@ -1022,7 +1022,7 @@ converted as needed to equivalent c-types arguments The None object is
also converted automatically to a NULL pointer. All other Python
objects must be converted to ctypes-specific types. There are two ways
around this restriction that allow c-types to integrate with other
-objects.
+objects.
1. Don't set the argtypes attribute of the function object and define an
:obj:`_as_parameter_` method for the object you want to pass in. The
@@ -1042,7 +1042,7 @@ associated. As a result, one can pass this ctypes attribute object
directly to a function expecting a pointer to the data in your
ndarray. The caller must be sure that the ndarray object is of the
correct type, shape, and has the correct flags set or risk nasty
-crashes if the data-pointer to inappropriate arrays are passsed in.
+crashes if data-pointers to inappropriate arrays are passed in.
To implement the second method, NumPy provides the class-factory
function :func:`ndpointer` in the :mod:`ctypeslib` module. This
@@ -1057,7 +1057,7 @@ number-of-dimensions, the shape, and/or the state of the flags on any
array passed. The return value of the from_param method is the ctypes
attribute of the array which (because it contains the _as_parameter\_
attribute pointing to the array data area) can be used by ctypes
-directly.
+directly.
The ctypes attribute of an ndarray is also endowed with additional
attributes that may be convenient when passing additional information
@@ -1075,7 +1075,7 @@ the shape/strides arrays using an underlying base type of your choice.
For convenience, the **ctypeslib** module also contains **c_intp** as
a ctypes integer data-type whose size is the same as the size of
``c_void_p`` on the platform (its value is None if ctypes is not
-installed).
+installed).
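A short sketch of what the ``ctypes`` attribute of an ndarray exposes: the raw data address as a Python int, plus the shape and strides as ctypes arrays of ``c_intp``. The stride values shown assume a C-contiguous float64 array, as constructed here:

```python
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

# address of the first element of the data buffer, as a Python int
print(a.ctypes.data)

# shape and strides, exposed as ctypes arrays of platform-sized integers
print(list(a.ctypes.shape))     # [2, 3]
print(list(a.ctypes.strides))   # [24, 8] -- byte strides of contiguous float64
```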
Calling the function
@@ -1105,7 +1105,7 @@ the function in order to have ctypes check the types of the input
arguments when the function is called. Use the :func:`ndpointer` factory
function to generate a ready-made class for data-type, shape, and
flags checking on your new function. The :func:`ndpointer` function has the
-signature
+signature
.. function:: ndpointer(dtype=None, ndim=None, shape=None, flags=None)
@@ -1127,7 +1127,7 @@ area of an ndarray. You may still want to wrap the function in an
additional Python wrapper to make it user-friendly (hiding some
obvious arguments and making some arguments output arguments). In this
process, the **requires** function in NumPy may be useful to return the right kind of array from
-a given input.
+a given input.
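The argument checking performed by an :func:`ndpointer` class can be exercised on its own, with no shared library involved, because the checks happen in its ``from_param`` method. A minimal sketch:

```python
import numpy as np
from numpy.ctypeslib import ndpointer

# a ready-made checking class: 1-d, float64, aligned and contiguous
dptr = ndpointer(dtype=np.float64, ndim=1, flags='aligned, contiguous')

good = np.zeros(4)
dptr.from_param(good)            # accepted: returns the array's ctypes object

try:
    dptr.from_param(np.zeros(4, dtype=np.int32))   # wrong dtype
except TypeError as exc:
    print("rejected:", exc)
```

When such a class is listed in a function's ``argtypes``, ctypes calls ``from_param`` on every invocation, so ill-matched arrays are rejected before the C code ever runs.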
Complete example
@@ -1149,8 +1149,8 @@ dfilter2d. The zadd function is:
while (n--) {
c->real = a->real + b->real;
c->imag = a->imag + b->imag;
- a++; b++; c++;
- }
+ a++; b++; c++;
+ }
}
with similar code for cadd, dadd, and sadd that handles complex float,
@@ -1163,16 +1163,16 @@ double, and float data-types, respectively:
while (n--) {
c->real = a->real + b->real;
c->imag = a->imag + b->imag;
- a++; b++; c++;
- }
+ a++; b++; c++;
+ }
}
- void dadd(double *a, double *b, double *c, long n)
+ void dadd(double *a, double *b, double *c, long n)
{
while (n--) {
*c++ = *a++ + *b++;
- }
+ }
}
- void sadd(float *a, float *b, float *c, long n)
+ void sadd(float *a, float *b, float *c, long n)
{
while (n--) {
*c++ = *a++ + *b++;
@@ -1183,17 +1183,17 @@ The code.c file also contains the function dfilter2d:
.. code-block:: c
- /* Assumes b is contiguous and
+ /* Assumes b is contiguous and
a has strides that are multiples of sizeof(double)
*/
- void
+ void
dfilter2d(double *a, double *b, int *astrides, int *dims)
{
int i, j, M, N, S0, S1;
int r, c, rm1, rp1, cp1, cm1;
-
+
M = dims[0]; N = dims[1];
- S0 = astrides[0]/sizeof(double);
+ S0 = astrides[0]/sizeof(double);
S1=astrides[1]/sizeof(double);
for (i=1; i<M-1; i++) {
r = i*S0; rp1 = r+S0; rm1 = r-S0;
@@ -1220,7 +1220,7 @@ Linux system this is accomplished using::
This creates a shared library named code.so in the current directory.
On Windows don't forget to either add __declspec(dllexport) in front
of void on the line preceding each function definition, or write a
-code.def file that lists the names of the functions to be exported.
+code.def file that lists the names of the functions to be exported.
A suitable Python interface to this shared library should be
constructed. To do this create a file named interface.py with the
@@ -1229,10 +1229,10 @@ following lines at the top:
.. code-block:: python
__all__ = ['add', 'filter2d']
-
+
import numpy as N
import os
-
+
_path = os.path.dirname('__file__')
lib = N.ctypeslib.load_library('code', _path)
_typedict = {'zadd' : complex, 'sadd' : N.single,
@@ -1241,11 +1241,11 @@ following lines at the top:
val = getattr(lib, name)
val.restype = None
_type = _typedict[name]
- val.argtypes = [N.ctypeslib.ndpointer(_type,
+ val.argtypes = [N.ctypeslib.ndpointer(_type,
flags='aligned, contiguous'),
- N.ctypeslib.ndpointer(_type,
+ N.ctypeslib.ndpointer(_type,
flags='aligned, contiguous'),
- N.ctypeslib.ndpointer(_type,
+ N.ctypeslib.ndpointer(_type,
flags='aligned, contiguous,'\
'writeable'),
N.ctypeslib.c_intp]
@@ -1255,7 +1255,7 @@ same path as this file. It then adds a return type of void to the
functions contained in the library. It also adds argument checking to
the functions in the library so that ndarrays can be passed as the
first three arguments along with an integer (large enough to hold a
-pointer on the platform) as the fourth argument.
+pointer on the platform) as the fourth argument.
Setting up the filtering function is similar and allows the filtering
function to be called with ndarray arguments as the first two
@@ -1269,9 +1269,9 @@ strides and shape of an ndarray) as the last two arguments.:
flags='aligned'),
N.ctypeslib.ndpointer(float, ndim=2,
flags='aligned, contiguous,'\
- 'writeable'),
- ctypes.POINTER(N.ctypeslib.c_intp),
- ctypes.POINTER(N.ctypeslib.c_intp)]
+ 'writeable'),
+ ctypes.POINTER(N.ctypeslib.c_intp),
+ ctypes.POINTER(N.ctypeslib.c_intp)]
Next, define a simple selection function that chooses which addition
function to call in the shared library based on the data-type:
@@ -1322,23 +1322,23 @@ Conclusion
single: ctypes
Using ctypes is a powerful way to connect Python with arbitrary
-C-code. It's advantages for extending Python include
+C-code. Its advantages for extending Python include
- clean separation of C-code from Python code
- no need to learn a new syntax except Python and C
-
+
- allows re-use of C-code
-
+
- functionality in shared libraries written for other purposes can be
  obtained with a simple Python wrapper and a search for the library.
-
-
+
+
- easy integration with NumPy through the ctypes attribute
- full argument checking with the ndpointer class factory
-It's disadvantages include
+Its disadvantages include
- It is difficult to distribute an extension module made using ctypes
because of a lack of support for building shared libraries in
@@ -1356,7 +1356,7 @@ package creation. However, ctypes is a close second and will probably
be growing in popularity now that it is part of the Python
distribution. This should bring more features to ctypes that should
eliminate the difficulty in extending Python and distributing the
-extension using ctypes.
+extension using ctypes.
Additional tools you may find useful
@@ -1373,7 +1373,7 @@ provided here would be quickly dated. Do not assume that just because
it is included in this list, I don't think the package deserves your
attention. I'm including information about these packages because many
people have found them useful and I'd like to give you as many options
-as possible for tackling the problem of easily integrating your code.
+as possible for tackling the problem of easily integrating your code.
SWIG
@@ -1399,7 +1399,7 @@ methods that have emerged that are more targeted to Python. SWIG can
actually target extensions for several languages, but the typemaps
usually have to be language-specific. Nonetheless, with modifications
to the Python-specific typemaps, SWIG can be used to interface a
-library with other languages such as Perl, Tcl, and Ruby.
+library with other languages such as Perl, Tcl, and Ruby.
My experience with SWIG has been generally positive in that it is
relatively easy to use and quite powerful. I used to use it quite
@@ -1409,7 +1409,7 @@ must be done using the concept of typemaps which are not Python
specific and are written in a C-like syntax. Therefore, I tend to
prefer other gluing strategies and would only attempt to use SWIG to
wrap a very-large C/C++ library. Nonetheless, there are others who use
-SWIG quite happily.
+SWIG quite happily.
SIP
@@ -1426,7 +1426,7 @@ but the interface file looks a lot like a C/C++ header file. While SIP
is not a full C++ parser, it understands quite a bit of C++ syntax as
well as its own special directives that allow modification of how the
Python binding is accomplished. It also allows the user to define
-mappings between Python types and C/C++ structrues and classes.
+mappings between Python types and C/C++ structures and classes.
Boost Python
@@ -1445,7 +1445,7 @@ have not used Boost.Python because I am not a big user of C++ and
using Boost to wrap simple C-subroutines is usually overkill. Its
primary purpose is to make C++ classes available in Python. So, if you
have a set of C++ classes that need to be integrated cleanly into
-Python, consider learning about and using Boost.Python.
+Python, consider learning about and using Boost.Python.
Instant
@@ -1469,18 +1469,18 @@ arrays (adapted from the test2 included in the Instant distribution):
PyObject* add(PyObject* a_, PyObject* b_){
/*
various checks
- */
+ */
PyArrayObject* a=(PyArrayObject*) a_;
PyArrayObject* b=(PyArrayObject*) b_;
int n = a->dimensions[0];
int dims[1];
- dims[0] = n;
+ dims[0] = n;
PyArrayObject* ret;
- ret = (PyArrayObject*) PyArray_FromDims(1, dims, NPY_DOUBLE);
+ ret = (PyArrayObject*) PyArray_FromDims(1, dims, NPY_DOUBLE);
int i;
char *aj=a->data;
char *bj=b->data;
- double *retj = (double *)ret->data;
+ double *retj = (double *)ret->data;
for (i=0; i < n; i++) {
*retj++ = *((double *)aj) + *((double *)bj);
aj += a->strides[0];
@@ -1500,7 +1500,7 @@ arrays (adapted from the test2 included in the Instant distribution):
d = test2b_ext.add(a,b)
Except perhaps for the dependence on SWIG, Instant is a
-straightforward utility for writing extension modules.
+straightforward utility for writing extension modules.
PyInline
@@ -1509,7 +1509,7 @@ PyInline
This is a much older module that allows automatic building of
extension modules so that C-code can be included with Python code.
Its latest release (version 0.03) was in 2001, and it appears that it
-is not being updated.
+is not being updated.
PyFort
@@ -1520,4 +1520,4 @@ into Python with support for Numeric arrays. It was written by Paul
Dubois, a distinguished computer scientist and the very first
maintainer of Numeric (now retired). It is worth mentioning in the
hopes that somebody will update PyFort to work with NumPy arrays as
-well which now support either Fortran or C-style contiguous arrays.
+well, which now support either Fortran or C-style contiguous arrays.
diff --git a/doc/source/user/index.rst b/doc/source/user/index.rst
index 750062d50..8d8812c80 100644
--- a/doc/source/user/index.rst
+++ b/doc/source/user/index.rst
@@ -18,7 +18,7 @@ and classes, see :ref:`reference`.
.. toctree::
:maxdepth: 2
-
+
howtofind
basics
performance