author     Charles Harris <charlesr.harris@gmail.com>   2013-08-18 11:16:06 -0600
committer  Charles Harris <charlesr.harris@gmail.com>   2013-08-18 11:20:45 -0600
commit     8ddb0ce0acafe75d78df528b4d2540dfbf4b364d (patch)
tree       156b23f48f14c7c1df699874007c521b5482d1a4 /doc
parent     13b0b272f764c14bc4ac34f5b19fd030d9c611a4 (diff)
download   numpy-8ddb0ce0acafe75d78df528b4d2540dfbf4b364d.tar.gz
STY: Giant whitespace cleanup.
Now is as good a time as any with open PR's at a low.
Diffstat (limited to 'doc')
40 files changed, 138 insertions, 164 deletions
diff --git a/doc/CAPI.rst.txt b/doc/CAPI.rst.txt index 41241ce5a..4bd51baca 100644 --- a/doc/CAPI.rst.txt +++ b/doc/CAPI.rst.txt @@ -15,7 +15,7 @@ of the API) that will need to be changed: * If you used any of the function pointers in the ``PyArray_Descr`` structure you will have to modify your usage of those. First, - the pointers are all under the member named ``f``. So ``descr->cast`` + the pointers are all under the member named ``f``. So ``descr->cast`` is now ``descr->f->cast``. In addition, the casting functions have eliminated the strides argument (use ``PyArray_CastTo`` if you need strided casting). All functions have @@ -238,7 +238,7 @@ segfaults may result. There are 6 (binary) flags that describe the memory area used by the data buffer. These constants are defined in ``arrayobject.h`` and determine the bit-position of the flag. Python exposes a nice attribute- -based interface as well as a dictionary-like interface for getting +based interface as well as a dictionary-like interface for getting (and, if appropriate, setting) these flags. Memory areas of all kinds can be pointed to by an ndarray, necessitating @@ -254,7 +254,7 @@ PyArray_FromAny function. ``NPY_FORTRAN`` True if the array is (Fortran-style) contiguous in memory. -Notice that contiguous 1-d arrays are always both ``NPY_FORTRAN`` contiguous +Notice that contiguous 1-d arrays are always both ``NPY_FORTRAN`` contiguous and C contiguous. Both of these flags can be checked and are convenience flags only as whether or not an array is ``NPY_CONTIGUOUS`` or ``NPY_FORTRAN`` can be determined by the ``strides``, ``dimensions``, and ``itemsize`` diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt index 363112ea9..01bc9cc43 100644 --- a/doc/DISTUTILS.rst.txt +++ b/doc/DISTUTILS.rst.txt @@ -29,8 +29,8 @@ Requirements for SciPy packages SciPy consists of Python packages, called SciPy packages, that are available to Python users via the ``scipy`` namespace. Each SciPy package -may contain other SciPy packages. And so on. Therefore, the SciPy -directory tree is a tree of packages with arbitrary depth and width. +may contain other SciPy packages. And so on. Therefore, the SciPy +directory tree is a tree of packages with arbitrary depth and width. Any SciPy package may depend on NumPy packages but the dependence on other SciPy packages should be kept minimal or zero. @@ -46,12 +46,12 @@ Their contents are described below. The ``setup.py`` file ''''''''''''''''''''' -In order to add a Python package to SciPy, its build script (``setup.py``) -must meet certain requirements. The most important requirement is that the -package define a ``configuration(parent_package='',top_path=None)`` function -which returns a dictionary suitable for passing to -``numpy.distutils.core.setup(..)``. To simplify the construction of -this dictionary, ``numpy.distutils.misc_util`` provides the +In order to add a Python package to SciPy, its build script (``setup.py``) +must meet certain requirements. The most important requirement is that the +package define a ``configuration(parent_package='',top_path=None)`` function +which returns a dictionary suitable for passing to +``numpy.distutils.core.setup(..)``. To simplify the construction of +this dictionary, ``numpy.distutils.misc_util`` provides the ``Configuration`` class, described below. 
SciPy pure Python package example @@ -72,13 +72,13 @@ Below is an example of a minimal ``setup.py`` file for a pure SciPy package:: The arguments of the ``configuration`` function specifiy the name of parent SciPy package (``parent_package``) and the directory location -of the main ``setup.py`` script (``top_path``). These arguments, +of the main ``setup.py`` script (``top_path``). These arguments, along with the name of the current package, should be passed to the ``Configuration`` constructor. The ``Configuration`` constructor has a fourth optional argument, ``package_path``, that can be used when package files are located in -a different location than the directory of the ``setup.py`` file. +a different location than the directory of the ``setup.py`` file. Remaining ``Configuration`` arguments are all keyword arguments that will be used to initialize attributes of ``Configuration`` @@ -159,12 +159,12 @@ in writing setup scripts: sun.dat bar/ car.dat - can.dat + can.dat Path to data files can be a function taking no arguments and returning path(s) to data files -- this is a useful when data files are generated while building the package. (XXX: explain the step - when this function are called exactly) + when this function are called exactly) + ``config.add_data_dir(data_path)`` --- add directory ``data_path`` recursively to ``data_files``. The whole directory tree starting at @@ -174,14 +174,14 @@ in writing setup scripts: directory and the second element specifies the path to data directory. By default, data directory are copied under package installation directory under the basename of ``data_path``. For example, - + :: config.add_data_dir('fun') # fun/ contains foo.dat bar/car.dat config.add_data_dir(('sun','fun')) config.add_data_dir(('gun','/full/path/to/fun')) - will install data files to the following locations + will install data files to the following locations :: @@ -204,7 +204,7 @@ in writing setup scripts: modules of the current package. + ``config.add_headers(*files)`` --- prepend ``files`` to ``headers`` - list. By default, headers will be installed under + list. By default, headers will be installed under ``<prefix>/include/pythonX.X/<config.name.replace('.','/')>/`` directory. If ``files`` item is a tuple then it's first argument specifies the installation suffix relative to @@ -216,7 +216,7 @@ in writing setup scripts: list. Scripts will be installed under ``<prefix>/bin/`` directory. + ``config.add_extension(name,sources,*kw)`` --- create and add an - ``Extension`` instance to ``ext_modules`` list. The first argument + ``Extension`` instance to ``ext_modules`` list. The first argument ``name`` defines the name of the extension module that will be installed under ``config.name`` package. The second argument is a list of sources. ``add_extension`` method takes also keyword @@ -269,10 +269,10 @@ in writing setup scripts: more information on arguments. + ``config.have_f77c()`` --- return True if Fortran 77 compiler is - available (read: a simple Fortran 77 code compiled succesfully). + available (read: a simple Fortran 77 code compiled succesfully). + ``config.have_f90c()`` --- return True if Fortran 90 compiler is - available (read: a simple Fortran 90 code compiled succesfully). + available (read: a simple Fortran 90 code compiled succesfully). + ``config.get_version()`` --- return version string of the current package, ``None`` if version information could not be detected. 
This methods @@ -405,7 +405,7 @@ The header of a typical SciPy ``__init__.py`` is:: """ Package docstring, typically with a brief description and function listing. """ - + # py3k related imports from __future__ import division, print_function, absolute_import @@ -414,7 +414,7 @@ The header of a typical SciPy ``__init__.py`` is:: ... __all__ = [s for s in dir() if not s.startswith('_')] - + from numpy.testing import Tester test = Tester().test bench = Tester().bench @@ -441,7 +441,7 @@ will compile the ``library`` sources without optimization flags. It's recommended to specify only those config_fc options in such a way that are compiler independent. -Getting extra Fortran 77 compiler options from source +Getting extra Fortran 77 compiler options from source ----------------------------------------------------- Some old Fortran codes need special compiler options in order to @@ -452,7 +452,7 @@ pattern:: CF77FLAGS(<fcompiler type>) = <fcompiler f77flags> in the first 20 lines of the source and use the ``f77flags`` for -specified type of the fcompiler (the first character ``C`` is optional). +specified type of the fcompiler (the first character ``C`` is optional). TODO: This feature can be easily extended for Fortran 90 codes as well. Let us know if you would need such a feature. diff --git a/doc/HOWTO_DOCUMENT.rst.txt b/doc/HOWTO_DOCUMENT.rst.txt index 13e9b2607..8e841755a 100644 --- a/doc/HOWTO_DOCUMENT.rst.txt +++ b/doc/HOWTO_DOCUMENT.rst.txt @@ -142,7 +142,7 @@ The sections of the docstring are: 2. **Deprecation warning** A section (use if applicable) to warn users that the object is deprecated. - Section contents should include: + Section contents should include: * In what Numpy version the object was deprecated, and when it will be removed. @@ -150,7 +150,7 @@ The sections of the docstring are: * Reason for deprecation if this is useful information (e.g., object is superseded, duplicates functionality found elsewhere, etc.). - * New recommended way of obtaining the same functionality. + * New recommended way of obtaining the same functionality. This section should use the note Sphinx directive instead of an underlined section header. @@ -182,7 +182,7 @@ The sections of the docstring are: x : type Description of parameter `x`. - Enclose variables in single backticks. The colon must be preceded + Enclose variables in single backticks. The colon must be preceded by a space, or omitted if the type is absent. For the parameter types, be as precise as possible. Below are a @@ -195,7 +195,7 @@ The sections of the docstring are: filename : str copy : bool dtype : data-type - iterable : iterable object + iterable : iterable object shape : int or tuple of int files : list of str @@ -370,7 +370,7 @@ The sections of the docstring are: Referencing sources of a temporary nature, like web pages, is discouraged. References are meant to augment the docstring, but should not be required to understand it. References are numbered, starting - from one, in the order in which they are cited. + from one, in the order in which they are cited. 11. **Examples** @@ -397,7 +397,7 @@ The sections of the docstring are: >>> import numpy.random >>> np.random.rand(2) - array([ 0.35773152, 0.38568979]) #random + array([ 0.35773152, 0.38568979]) #random You can run examples as doctests using:: @@ -427,7 +427,7 @@ The sections of the docstring are: *matplotlib* for plotting, but should import it explicitly, e.g., ``import matplotlib.pyplot as plt``. 
- + Documenting classes ------------------- @@ -498,7 +498,7 @@ Document these as you would any other function. Do not include ``self`` in the list of parameters. If a method has an equivalent function (which is the case for many ndarray methods for example), the function docstring should contain the detailed documentation, and the method docstring -should refer to it. Only put brief summary and **See Also** sections in the +should refer to it. Only put brief summary and **See Also** sections in the method docstring. @@ -514,7 +514,7 @@ instances a useful docstring, we do the following: * Multiple instances: If multiple instances are exposed, docstrings for each instance are written and assigned to the instances' ``__doc__`` attributes at run time. The class is documented as usual, and - the exposed instances can be mentioned in the **Notes** and **See Also** + the exposed instances can be mentioned in the **Notes** and **See Also** sections. @@ -553,16 +553,16 @@ hard to get a good overview of all functionality provided by looking at the source file(s) or the ``__all__`` dict. Note that license and author info, while often included in source files, do not -belong in docstrings. +belong in docstrings. Other points to keep in mind ---------------------------- -* Equations : as discussed in the **Notes** section above, LaTeX formatting - should be kept to a minimum. Often it's possible to show equations as - Python code or pseudo-code instead, which is much more readable in a - terminal. For inline display use double backticks (like ``y = np.sin(x)``). - For display with blank lines above and below, use a double colon and indent +* Equations : as discussed in the **Notes** section above, LaTeX formatting + should be kept to a minimum. Often it's possible to show equations as + Python code or pseudo-code instead, which is much more readable in a + terminal. For inline display use double backticks (like ``y = np.sin(x)``). + For display with blank lines above and below, use a double colon and indent the code, like:: end of previous sentence:: @@ -597,7 +597,7 @@ output. New paragraphs are marked with a blank line. Use *italics*, **bold**, and ``monospace`` if needed in any explanations (but not for variable names and doctest code or multi-line code). -Variable, module, function, and class names should be written between +Variable, module, function, and class names should be written between single back-ticks (```numpy```). A more extensive example of reST markup can be found in `this example diff --git a/doc/Py3K.rst.txt b/doc/Py3K.rst.txt index ad76fe240..e06461794 100644 --- a/doc/Py3K.rst.txt +++ b/doc/Py3K.rst.txt @@ -483,7 +483,7 @@ So what is done in ``PyArray_FromAny`` currently is that: 3118 buffers, so that:: array([some_3118_object]) - + will treat the object similarly as it would handle an `ndarray`. However, again, bytes (and unicode) have priority and will not be @@ -491,13 +491,13 @@ So what is done in ``PyArray_FromAny`` currently is that: This amounts to possible semantic changes: -- ``array(buffer)`` will no longer create an object array +- ``array(buffer)`` will no longer create an object array ``array([buffer], dtype='O')``, but will instead expand to a view on the buffer. .. todo:: - Take a second look at places that used PyBuffer_FromMemory and + Take a second look at places that used PyBuffer_FromMemory and PyBuffer_FromReadWriteMemory -- what can be done with these? .. 
todo:: @@ -633,7 +633,7 @@ Currently, the following is done: 1) Numpy's integer types no longer inherit from Python integer. 2) int is taken dtype-equivalent to NPY_LONG -3) ints are converted to NPY_LONG +3) ints are converted to NPY_LONG PyInt methods are currently replaced by PyLong, via macros in npy_3kcompat.h. diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt index 2b66b5caa..bfea0e117 100644 --- a/doc/TESTS.rst.txt +++ b/doc/TESTS.rst.txt @@ -206,7 +206,7 @@ but ``test_evens`` is a generator that returns a series of tests, using A problem with generator tests can be that if a test is failing, it's hard to see for which parameters. To avoid this problem, ensure that: - - No computation related to the features tested is done in the + - No computation related to the features tested is done in the ``test_*`` generator function, but delegated to a corresponding ``check_*`` function (can be inside the generator, to share namespace). - The generators are used *solely* for loops over parameters. @@ -236,7 +236,7 @@ for numpy.lib:: The doctests are run as if they are in a fresh Python instance which has executed ``import numpy as np``. Tests that are part of a SciPy subpackage will have that subpackage already imported. E.g. for a test -in ``scipy/linalg/tests/``, the namespace will be created such that +in ``scipy/linalg/tests/``, the namespace will be created such that ``from scipy import linalg`` has already executed. diff --git a/doc/cython/README.txt b/doc/cython/README.txt index ff0abb0fe..f527e358b 100644 --- a/doc/cython/README.txt +++ b/doc/cython/README.txt @@ -17,4 +17,4 @@ To run it locally, simply type:: make help which shows you the currently available targets (these are just handy -shorthands for common commands).
\ No newline at end of file +shorthands for common commands). diff --git a/doc/neps/datetime-proposal.rst b/doc/neps/datetime-proposal.rst index f72bab3ae..3bd15db34 100644 --- a/doc/neps/datetime-proposal.rst +++ b/doc/neps/datetime-proposal.rst @@ -6,7 +6,7 @@ :Contact: oliphant@enthought.com :Date: 2009-06-09 -Revised only slightly from the third proposal by +Revised only slightly from the third proposal by :Author: Francesc Alted i Abad :Contact: faltet@pytables.com @@ -115,10 +115,10 @@ lower units respectively). Finally, a date-time data-type can be created with support for tracking sequential events within a basic unit: [D]//100, [Y]//4 (notice the required brackets). These ``modulo`` event units provide the following -interpretation to the date-time integer: +interpretation to the date-time integer: - * the divisor is the number of events in each period - * the (integer) quotient is the integer number representing the base units + * the divisor is the number of events in each period + * the (integer) quotient is the integer number representing the base units * the remainder is the particular event in the period. Modulo event-units can be combined with any derived units, but brackets @@ -140,7 +140,7 @@ i.e. you cannot specify 'M8[us]//5' as 'M8//5' or as '//5' ``datetime64`` ============== - + This dtype represents a time that is absolute (i.e. not relative). It is implemented internally as an ``int64`` type. The integer represents units from the internal POSIX epoch (see [3]_). Like POSIX, the @@ -565,27 +565,27 @@ Necessary changes to NumPy ========================== In order to facilitate the addition of the date-time data-types a few changes -to NumPy were made: +to NumPy were made: Addition of metadata to dtypes ------------------------------ All data-types now have a metadata dictionary. It can be set using the -metadata keyword during construction of the object. +metadata keyword during construction of the object. -Date-time data-types will place the word "__frequency__" in the meta-data +Date-time data-types will place the word "__frequency__" in the meta-data dictionary containing a 4-tuple with the following parameters. -(basic unit string (str), - number of multiples (int), - number of sub-divisions (int), - number of events (int)). +(basic unit string (str), + number of multiples (int), + number of sub-divisions (int), + number of events (int)). -Simple time units like 'D' for days will thus be specified by ('D', 1, 1, 1) in -the "__frequency__" key of the metadata. More complicated time units (like '[2W/5]//50') will be indicated by ('D', 2, 5, 50). +Simple time units like 'D' for days will thus be specified by ('D', 1, 1, 1) in +the "__frequency__" key of the metadata. More complicated time units (like '[2W/5]//50') will be indicated by ('D', 2, 5, 50). -The "__frequency__" key is reserved for metadata and cannot be set with a -dtype constructor. +The "__frequency__" key is reserved for metadata and cannot be set with a +dtype constructor. Ufunc interface extension @@ -595,18 +595,18 @@ ufuncs that have datetime and timedelta arguments can use the Python API during ufunc calls (to raise errors). There is a new ufunc C-API call to set the data for a particular -function pointer (for a particular set of data-types) to be the list of arrays -passed in to the ufunc. +function pointer (for a particular set of data-types) to be the list of arrays +passed in to the ufunc. 
Array Intervace Extensions -------------------------- The array interface is extended to both handle datetime and timedelta -typestr (including extended notation). +typestr (including extended notation). In addition, the typestr element of the __array_interface__ can be a tuple -as long as the version string is 4. The tuple is -('typestr', metadata dictionary). +as long as the version string is 4. The tuple is +('typestr', metadata dictionary). This extension to the typestr concept extends to the descr portion of the __array_interface__. Thus, the second element in the tuple of a @@ -626,13 +626,13 @@ Multiple of basic units are simple to handle. Divisors of basic units are harder to handle arbitrarily, but it is common to mentally think of a month as 1/12 of a year, or a day as 1/7 of a week. Therefore, the ability to specify a unit in terms of a fraction of a "larger" unit was -implemented. +implemented. The event notion (//50) was added to solve a use-case of a commercial sponsor of this NEP. The idea is to allow timestamp to carry both event number and timestamp information. The remainder carries the event number information, while the quotient carries the timestamp -information. +information. Why the ``origin`` metadata disappeared @@ -672,4 +672,3 @@ allowed. .. coding: utf-8 .. fill-column: 72 .. End: - diff --git a/doc/neps/datetime-proposal3.rst b/doc/neps/datetime-proposal3.rst index 6874aac13..ae98b8f03 100644 --- a/doc/neps/datetime-proposal3.rst +++ b/doc/neps/datetime-proposal3.rst @@ -572,4 +572,3 @@ for this proposal purposes. .. coding: utf-8 .. fill-column: 72 .. End: - diff --git a/doc/neps/deferred-ufunc-evaluation.rst b/doc/neps/deferred-ufunc-evaluation.rst index 95633cab5..634a1f238 100644 --- a/doc/neps/deferred-ufunc-evaluation.rst +++ b/doc/neps/deferred-ufunc-evaluation.rst @@ -97,7 +97,7 @@ Here's how it might be used in NumPy.:: There may be some surprising behavior, though.:: with np.deferredstate(True): - + d = a + b + c # d is deferred @@ -159,7 +159,7 @@ The API would be expanded with a number of functions. ``int PyArray_CalculateAllDeferred()`` This function forces all currently deferred calculations to occur. - + For example, if the error state is set to ignore all, and np.seterr({all='raise'}), this would change what happens to already deferred expressions. Thus, all the existing @@ -185,7 +185,7 @@ The API would be expanded with a number of functions. as an operand. The Python API would be expanded as follows. - + ``numpy.setdeferred(state)`` Enables or disables deferred evaluation. True means to always @@ -266,7 +266,7 @@ Other Implementation Details ============================ When a deferred array is created, it gets references to all the -operands of the UFunc, along with the UFunc itself. The +operands of the UFunc, along with the UFunc itself. The 'DeferredUsageCount' is incremented for each operand, and later gets decremented when the deferred expression is calculated or the deferred array is destroyed. diff --git a/doc/neps/generalized-ufuncs.rst b/doc/neps/generalized-ufuncs.rst index d9f3818b9..98e436990 100644 --- a/doc/neps/generalized-ufuncs.rst +++ b/doc/neps/generalized-ufuncs.rst @@ -91,14 +91,14 @@ dimensions. The signature is represented by a string of the following format: * Core dimensions of each input or output array are represented by a - list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar + list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar input/output is denoted by ``()``. 
Instead of ``i_1``, ``i_2``, etc, one can use any valid Python variable name. * Dimension lists for different arguments are separated by ``","``. Input/output arguments are separated by ``"->"``. * If one uses the same dimension name in multiple locations, this enforces the same size (or broadcastable size) of the corresponding - dimensions. + dimensions. The formal syntax of signatures is as follows:: diff --git a/doc/neps/groupby_additions.rst b/doc/neps/groupby_additions.rst index 28e1c29ac..a86bdd642 100644 --- a/doc/neps/groupby_additions.rst +++ b/doc/neps/groupby_additions.rst @@ -24,7 +24,7 @@ Suppose you have a NumPy structured array containing information about the number of purchases at several stores over multiple days. To be clear, the structured array data-type is: -dt = [('year', i2), ('month', i1), ('day', i1), ('time', float), +dt = [('year', i2), ('month', i1), ('day', i1), ('time', float), ('store', i4), ('SKU', 'S6'), ('number', i4)] Suppose there is a 1-d NumPy array of this data-type and you would like @@ -44,7 +44,7 @@ Ufunc methods proposed It is proposed to add two new reduce-style methods to the ufuncs: reduceby and reducein. The reducein method is intended to be a simpler to use version of reduceat, while the reduceby method is intended to -provide group-by capability on reductions. +provide group-by capability on reductions. reducein:: @@ -54,24 +54,24 @@ reducein:: The reduction occurs along the provided axis, using the provided data-type to calculate intermediate results, storing the result into - the array out (if provided). + the array out (if provided). The indices array provides the start and end indices for the reduction. If the length of the indices array is odd, then the final index provides the beginning point for the final reduction and the ending point is the end of arr. - This generalizes along the given axis, the behavior: + This generalizes along the given axis, the behavior: - [<ufunc>.reduce(arr[indices[2*i]:indices[2*i+1]]) + [<ufunc>.reduce(arr[indices[2*i]:indices[2*i+1]]) for i in range(len(indices)/2)] - This assumes indices is of even length + This assumes indices is of even length - Example: + Example: >>> a = [0,1,2,4,5,6,9,10] - >>> add.reducein(a,[0,3,2,5,-2]) - [3, 11, 19] + >>> add.reducein(a,[0,3,2,5,-2]) + [3, 11, 19] Notice that sum(a[0:3]) = 3; sum(a[2:5]) = 11; and sum(a[-2:]) = 19 @@ -79,7 +79,7 @@ reduceby:: <ufunc>.reduceby(arr, by, dtype=None, out=None) - Perform a reduction in arr over unique non-negative integers in by. + Perform a reduction in arr over unique non-negative integers in by. Let N=arr.ndim and M=by.ndim. Then, by.shape[:N] == arr.shape. @@ -109,4 +109,3 @@ edges:: .. coding: utf-8 .. fill-column: 72 .. End: - diff --git a/doc/neps/missing-data.rst b/doc/neps/missing-data.rst index 338a8da96..0d03d7774 100644 --- a/doc/neps/missing-data.rst +++ b/doc/neps/missing-data.rst @@ -105,12 +105,12 @@ an array of all missing values must produce the same result as the mean of a zero-sized array without missing value support. This kind of data can arise when conforming sparsely sampled data -into a regular sampling pattern, and is a useful interpretation to +into a regular sampling pattern, and is a useful interpretation to use when attempting to get best-guess answers for many statistical queries. In R, many functions take a parameter "na.rm=T" which means to treat the data as if the NA values are not part of the data set. This proposal -defines a standard parameter "skipna=True" for this same purpose. 
+defines a standard parameter "skipna=True" for this same purpose. ******************************************** Implementation Techniques For Missing Values @@ -1174,7 +1174,7 @@ the discussion are:: Olivier Delalleau Alan G Isaac E. Antero Tammi - Jason Grout + Jason Grout Dag Sverre Seljebotn Joe Harrington Gary Strangman diff --git a/doc/neps/new-iterator-ufunc.rst b/doc/neps/new-iterator-ufunc.rst index 6c4bb6488..b253e874b 100644 --- a/doc/neps/new-iterator-ufunc.rst +++ b/doc/neps/new-iterator-ufunc.rst @@ -375,7 +375,7 @@ In general, it should be possible to emulate the current behavior where it is desired, but I believe the default should be to produce and manipulate memory layouts which will give the best performance. -To support the new cache-friendly behavior, we introduce a new +To support the new cache-friendly behavior, we introduce a new option ‘K’ (for “keep”) for any ``order=`` parameter. The proposed ‘order=’ flags become as follows: @@ -691,7 +691,7 @@ Construction and Destruction If copying is allowed, it will make a temporary copy if the data is castable. If ``UPDATEIFCOPY`` is enabled, it will also copy the data back with another cast upon iterator destruction. - + If ``a_ndim`` is greater than zero, ``axes`` must also be provided. In this case, ``axes`` is an ``a_ndim``-sized array of ``op``'s axes. A value of -1 in ``axes`` means ``newaxis``. Within the ``axes`` @@ -748,7 +748,7 @@ Construction and Destruction for each ``op[i]``. The parameter ``oa_ndim``, when non-zero, specifies the number of - dimensions that will be iterated with customized broadcasting. + dimensions that will be iterated with customized broadcasting. If it is provided, ``op_axes`` must also be provided. These two parameters let you control in detail how the axes of the operand arrays get matched together and iterated. @@ -778,7 +778,7 @@ Construction and Destruction iterator, are: ``NPY_ITER_C_INDEX``, ``NPY_ITER_F_INDEX`` - + Causes the iterator to track an index matching C or Fortran order. These options are mutually exclusive. @@ -813,7 +813,7 @@ Construction and Destruction data type, calculated based on the ufunc type promotion rules. The flags for each operand must be set so that the appropriate casting is permitted, and copying or buffering must be enabled. - + If the common data type is known ahead of time, don't use this flag. Instead, set the requested dtype for all the operands. @@ -936,7 +936,7 @@ Construction and Destruction is flagged for writing and is copied, causes the data in a copy to be copied back to ``op[i]`` when the iterator is destroyed. - + If the operand is flagged as write-only and a copy is needed, an uninitialized temporary array will be created and then copied to back to ``op[i]`` on destruction, instead of doing @@ -988,7 +988,7 @@ Construction and Destruction For use with ``NPY_ITER_ALLOCATE``, this flag disables allocating an array subtype for the output, forcing it to be a straight ndarray. - + TODO: Maybe it would be better to introduce a function ``NpyIter_GetWrappedOutput`` and remove this flag? @@ -1009,7 +1009,7 @@ Construction and Destruction Makes a copy of the given iterator. This function is provided primarily to enable multi-threaded iteration of the data. - + *TODO*: Move this to a section about multithreaded iteration. The recommended approach to multithreaded iteration is to @@ -1052,7 +1052,7 @@ Construction and Destruction for any operand that later has ``NpyIter_UpdateIter`` called on it. 
The flags that may be passed in ``op_flags`` are - ``NPY_ITER_COPY``, ``NPY_ITER_UPDATEIFCOPY``, + ``NPY_ITER_COPY``, ``NPY_ITER_UPDATEIFCOPY``, ``NPY_ITER_NBO``, ``NPY_ITER_ALIGNED``, ``NPY_ITER_CONTIG``. ``int NpyIter_RemoveAxis(NpyIter *iter, npy_intp axis)`` @@ -1242,7 +1242,7 @@ Construction and Destruction When using ranged iteration to multithread a reduction, there are two possible ways to do the reduction: - + If there is a big reduction to a small output, make a temporary array initialized to the reduction unit for each thread, then have each thread reduce into its temporary. When that is complete, @@ -1341,14 +1341,14 @@ Construction and Destruction handle their processing manually. By calling this function before removing the axes, you can get the strides for the manual processing. - + Returns ``NULL`` on error. ``int NpyIter_GetShape(NpyIter *iter, npy_intp *outshape)`` Returns the broadcast shape of the iterator in ``outshape``. This can only be called on an iterator which supports coordinates. - + Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. ``PyArray_Descr **NpyIter_GetDescrArray(NpyIter *iter)`` @@ -1658,7 +1658,7 @@ First, here is the definition of the ``luf`` function.:: def luf(lamdaexpr, *args, **kwargs): """Lambda UFunc - + e.g. c = luf(lambda i,j:i+j, a, b, order='K', casting='safe', buffersize=8192) @@ -1721,7 +1721,7 @@ Python iterator protocol.:: it = np.nditer([x,y,out], [], [['readonly'],['readonly'],['writeonly','allocate']]) - + for (a, b, c) in it: addop(a, b, c) @@ -1734,7 +1734,7 @@ Here is the same function, but following the C-style pattern.:: it = np.nditer([x,y,out], [], [['readonly'],['readonly'],['writeonly','allocate']]) - + while not it.finished: addop(it[0], it[1], it[2]) it.iternext() @@ -1772,7 +1772,7 @@ of the iterator, designed to help speed up the inner loops, is the flag it = np.nditer([x,y,out], ['no_inner_iteration'], [['readonly'],['readonly'],['writeonly','allocate']]) - + for (a, b, c) in it: addop(a, b, c) @@ -1809,7 +1809,7 @@ modify ``iter_add`` once again.:: def iter_add_itview(x, y, out=None): it = np.nditer([x,y,out], [], [['readonly'],['readonly'],['writeonly','allocate']]) - + (a, b, c) = it.itviews np.add(a, b, c) @@ -1900,7 +1900,7 @@ easy it was to add an optional output parameter.:: ....: it[3] += it[0] ....: it.iternext() ....: return it.operands[3] - + In [6]: timeit composite_over_it(image1, image2) 1 loops, best of 3: 197 ms per loop @@ -1978,4 +1978,3 @@ a dual core machine.:: In [31]: ne.set_num_threads(1) In [32]: timeit composite_over_ne_it(image1,image2) 10 loops, best of 3: 91.1 ms per loop - diff --git a/doc/neps/structured_array_extensions.txt b/doc/neps/structured_array_extensions.txt index 7020c772c..716b98a76 100644 --- a/doc/neps/structured_array_extensions.txt +++ b/doc/neps/structured_array_extensions.txt @@ -6,4 +6,3 @@ 2. Allow structured arrays to be sliced by their column (i.e. one additional indexing option for structured arrays) so that a[:4, 'foo':'bar'] would be allowed. 
- diff --git a/doc/neps/warnfix.txt b/doc/neps/warnfix.txt index 03b809e3d..b6f307bcb 100644 --- a/doc/neps/warnfix.txt +++ b/doc/neps/warnfix.txt @@ -65,7 +65,7 @@ When applied to a variable, one would get: int foo(int * NPY_UNUSED(dummy)) -expanded to +expanded to int foo(int * __NPY_UNUSED_TAGGEDdummy __COMP_NPY_UNUSED) diff --git a/doc/newdtype_example/floatint/__init__.py b/doc/newdtype_example/floatint/__init__.py index ebede2753..1d0f69b67 100644 --- a/doc/newdtype_example/floatint/__init__.py +++ b/doc/newdtype_example/floatint/__init__.py @@ -1,3 +1 @@ from __future__ import division, absolute_import, print_function - - diff --git a/doc/postprocess.py b/doc/postprocess.py index 3955ad6c5..2e50c115e 100755 --- a/doc/postprocess.py +++ b/doc/postprocess.py @@ -44,7 +44,7 @@ def process_html(fn, lines): def process_tex(lines): """ Remove unnecessary section titles from the LaTeX file. - + """ new_lines = [] for line in lines: diff --git a/doc/records.rst.txt b/doc/records.rst.txt index 6c4824d41..a608880d7 100644 --- a/doc/records.rst.txt +++ b/doc/records.rst.txt @@ -84,4 +84,3 @@ reference. There is a new function and a new method of array objects both labelled dtypescr which can be used to try out the ``PyArray_DescrConverter``. - diff --git a/doc/source/conf.py b/doc/source/conf.py index 233f2e409..13341b56a 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -84,7 +84,7 @@ themedir = os.path.join(os.pardir, 'scipy-sphinx-theme', '_theme') if not os.path.isdir(themedir): raise RuntimeError("Get the scipy-sphinx-theme first, " "via git submodule init && git submodule update") - + html_theme = 'scipy' html_theme_path = [themedir] @@ -109,7 +109,7 @@ else: html_additional_pages = { 'index': 'indexcontent.html', -} +} html_title = "%s v%s Manual" % (project, version) html_static_path = ['_static'] diff --git a/doc/source/dev/gitwash/configure_git.rst b/doc/source/dev/gitwash/configure_git.rst index 7e8cf8cbd..c62f33671 100644 --- a/doc/source/dev/gitwash/configure_git.rst +++ b/doc/source/dev/gitwash/configure_git.rst @@ -16,7 +16,7 @@ Here is an example ``.gitconfig`` file:: [user] name = Your Name email = you@yourdomain.example.com - + [alias] ci = commit -a co = checkout @@ -24,7 +24,7 @@ Here is an example ``.gitconfig`` file:: stat = status -a br = branch wdiff = diff --color-words - + [core] editor = vim @@ -33,7 +33,7 @@ Here is an example ``.gitconfig`` file:: You can edit this file directly or you can use the ``git config --global`` command:: - + git config --global user.name "Your Name" git config --global user.email you@yourdomain.example.com git config --global alias.ci "commit -a" diff --git a/doc/source/dev/gitwash/development_workflow.rst b/doc/source/dev/gitwash/development_workflow.rst index 8606b9018..523e04c0a 100644 --- a/doc/source/dev/gitwash/development_workflow.rst +++ b/doc/source/dev/gitwash/development_workflow.rst @@ -16,7 +16,7 @@ Basic workflow In short: -1. Update your ``master`` branch if it's not up to date. +1. Update your ``master`` branch if it's not up to date. Then start a new *feature branch* for each set of edits that you do. See :ref:`below <making-a-new-feature-branch>`. @@ -100,7 +100,7 @@ In git >= 1.7 you can ensure that the link is correctly set by using the ``--set-upstream`` option:: git push --set-upstream origin my-new-feature - + From now on git_ will know that ``my-new-feature`` is related to the ``my-new-feature`` branch in your own github_ repo. @@ -144,7 +144,7 @@ In more detail #. 
Check what the actual changes are with ``git diff`` (`git diff`_). #. Add any new files to version control ``git add new_file_name`` (see - `git add`_). + `git add`_). #. To commit all modified files into the local copy of your repo,, do ``git commit -am 'A commit message'``. Note the ``-am`` options to ``commit``. The ``m`` flag just signals that you're going to type a @@ -155,7 +155,7 @@ In more detail `tangled working copy problem`_. The section on :ref:`commit messages <writing-the-commit-message>` below might also be useful. #. To push the changes up to your forked repo on github_, do a ``git - push`` (see `git push`). + push`` (see `git push`). .. _writing-the-commit-message: diff --git a/doc/source/dev/gitwash/git_development.rst b/doc/source/dev/gitwash/git_development.rst index fb997abec..ee7787fec 100644 --- a/doc/source/dev/gitwash/git_development.rst +++ b/doc/source/dev/gitwash/git_development.rst @@ -12,4 +12,3 @@ Contents: development_setup configure_git development_workflow - diff --git a/doc/source/dev/gitwash/git_resources.rst b/doc/source/dev/gitwash/git_resources.rst index ae350806e..5f0c1d020 100644 --- a/doc/source/dev/gitwash/git_resources.rst +++ b/doc/source/dev/gitwash/git_resources.rst @@ -9,9 +9,9 @@ Tutorials and summaries * `github help`_ has an excellent series of how-to guides. * `learn.github`_ has an excellent series of tutorials -* The `pro git book`_ is a good in-depth book on git. +* The `pro git book`_ is a good in-depth book on git. * A `git cheat sheet`_ is a page giving summaries of common commands. -* The `git user manual`_ +* The `git user manual`_ * The `git tutorial`_ * The `git community book`_ * `git ready`_ - a nice series of tutorials diff --git a/doc/source/dev/gitwash/index.rst b/doc/source/dev/gitwash/index.rst index f3038721e..9d733dd1c 100644 --- a/doc/source/dev/gitwash/index.rst +++ b/doc/source/dev/gitwash/index.rst @@ -12,5 +12,3 @@ Contents: following_latest git_development git_resources - - diff --git a/doc/source/reference/arrays.datetime.rst b/doc/source/reference/arrays.datetime.rst index 82316144a..0e8050b01 100644 --- a/doc/source/reference/arrays.datetime.rst +++ b/doc/source/reference/arrays.datetime.rst @@ -322,7 +322,7 @@ dates, use :func:`busday_count`: >>> np.busday_count(np.datetime64('2011-07-18'), np.datetime64('2011-07-11')) -5 -If you have an array of datetime64 day values, and you want a count of +If you have an array of datetime64 day values, and you want a count of how many of them are valid dates, you can do this: .. admonition:: Example diff --git a/doc/source/reference/c-api.config.rst b/doc/source/reference/c-api.config.rst index 0989c53d7..0073b37a4 100644 --- a/doc/source/reference/c-api.config.rst +++ b/doc/source/reference/c-api.config.rst @@ -101,4 +101,3 @@ Platform information Returns the endianness of the current platform. One of :cdata:`NPY_CPU_BIG`, :cdata:`NPY_CPU_LITTLE`, or :cdata:`NPY_CPU_UNKNOWN_ENDIAN`. - diff --git a/doc/source/reference/c-api.coremath.rst b/doc/source/reference/c-api.coremath.rst index dba092a20..2d5aedc73 100644 --- a/doc/source/reference/c-api.coremath.rst +++ b/doc/source/reference/c-api.coremath.rst @@ -347,7 +347,7 @@ __ http://www.openexr.com/about.html Returns the value of x with the sign bit copied from y. Works for any value, including Inf and NaN. - + .. cfunction:: npy_half npy_half_spacing(npy_half h) This is the same for half-precision float as npy_spacing and npy_spacingf @@ -376,5 +376,4 @@ __ http://www.openexr.com/about.html .. 
cfunction:: npy_uint64 npy_halfbits_to_doublebits(npy_uint16 h) Low-level function which converts a 16-bit half-precision float - into a 64-bit double-precision float, stored as a uint64. - + into a 64-bit double-precision float, stored as a uint64. diff --git a/doc/source/reference/c-api.deprecations.rst b/doc/source/reference/c-api.deprecations.rst index a7960648a..a382017a2 100644 --- a/doc/source/reference/c-api.deprecations.rst +++ b/doc/source/reference/c-api.deprecations.rst @@ -14,14 +14,14 @@ Numarray. The core API originated with Numeric in 1995 and there are patterns such as the heavy use of macros written to mimic Python's C-API as well as account for compiler technology of the late 90's. There is also only a small group of volunteers who have had very little -time to spend on improving this API. +time to spend on improving this API. There is an ongoing effort to improve the API. It is important in this effort to ensure that code that compiles for NumPy 1.X continues to compile for NumPy 1.X. At the same time, certain API's will be marked as deprecated so that future-looking code can avoid these API's and -follow better practices. +follow better practices. Another important role played by deprecation markings in the C API is to move towards hiding internal details of the NumPy implementation. For those @@ -44,7 +44,7 @@ versions of NumPy should not have major C-API changes, however, that prevent code that worked on a previous minor release. For example, we will do our best to ensure that code that compiled and worked on NumPy 1.4 should continue to work on NumPy 1.7 (but perhaps with compiler -warnings). +warnings). To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to the target API version of NumPy before #including any NumPy headers. diff --git a/doc/source/reference/c-api.iterator.rst b/doc/source/reference/c-api.iterator.rst index 1e3565bc1..153792ac8 100644 --- a/doc/source/reference/c-api.iterator.rst +++ b/doc/source/reference/c-api.iterator.rst @@ -57,7 +57,7 @@ Here is a conversion table for which functions to use with the new iterator: :cfunc:`PyArray_ITER_GOTO1D` :cfunc:`NpyIter_GotoIndex` or :cfunc:`NpyIter_GotoIterIndex` :cfunc:`PyArray_ITER_NOTDONE` Return value of ``iternext`` function pointer -*Multi-iterator Functions* +*Multi-iterator Functions* :cfunc:`PyArray_MultiIterNew` :cfunc:`NpyIter_MultiNew` :cfunc:`PyArray_MultiIter_RESET` :cfunc:`NpyIter_Reset` :cfunc:`PyArray_MultiIter_NEXT` Function pointer from :cfunc:`NpyIter_GetIterNext` @@ -69,7 +69,7 @@ Here is a conversion table for which functions to use with the new iterator: :cfunc:`PyArray_MultiIter_NOTDONE` Return value of ``iternext`` function pointer :cfunc:`PyArray_Broadcast` Handled by :cfunc:`NpyIter_MultiNew` :cfunc:`PyArray_RemoveSmallest` Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP` -*Other Functions* +*Other Functions* :cfunc:`PyArray_ConvertToCommonType` Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE` ===================================== ============================================= @@ -649,7 +649,7 @@ Construction and Destruction -1 which means ``newaxis``. Within each ``op_axes[j]`` array, axes may not be repeated. The following example is how normal broadcasting applies to a 3-D array, a 2-D array, a 1-D array and a scalar. - + **Note**: Before NumPy 1.8 ``oa_ndim == 0` was used for signalling that that ``op_axes`` and ``itershape`` are unused. This is deprecated and should be replaced with -1. 
Better backward compatibility may be @@ -1229,7 +1229,7 @@ Functions For Iteration Gets the array of data pointers directly into the arrays (never into the buffers), corresponding to iteration index 0. - + These pointers are different from the pointers accepted by ``NpyIter_ResetBasePointers``, because the direction along some axes may have been reversed. diff --git a/doc/source/reference/routines.char.rst b/doc/source/reference/routines.char.rst index 41af947c8..7413e3615 100644 --- a/doc/source/reference/routines.char.rst +++ b/doc/source/reference/routines.char.rst @@ -84,4 +84,3 @@ Convenience class :toctree: generated/ chararray - diff --git a/doc/source/reference/routines.datetime.rst b/doc/source/reference/routines.datetime.rst index aab6f1694..875ad1124 100644 --- a/doc/source/reference/routines.datetime.rst +++ b/doc/source/reference/routines.datetime.rst @@ -17,4 +17,3 @@ Business Day Functions is_busday busday_offset busday_count - diff --git a/doc/source/reference/routines.dual.rst b/doc/source/reference/routines.dual.rst index 456fc5c02..4ed7098d6 100644 --- a/doc/source/reference/routines.dual.rst +++ b/doc/source/reference/routines.dual.rst @@ -45,4 +45,3 @@ Other .. autosummary:: i0 - diff --git a/doc/source/reference/routines.emath.rst b/doc/source/reference/routines.emath.rst index 9f6c2aaa7..c0c5b61fc 100644 --- a/doc/source/reference/routines.emath.rst +++ b/doc/source/reference/routines.emath.rst @@ -7,4 +7,3 @@ Mathematical functions with automatic domain (:mod:`numpy.emath`) available after :mod:`numpy` is imported. .. automodule:: numpy.lib.scimath - diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst index 736755338..6eda37578 100644 --- a/doc/source/reference/routines.ma.rst +++ b/doc/source/reference/routines.ma.rst @@ -400,5 +400,3 @@ Miscellanea ma.ediff1d ma.indices ma.where - - diff --git a/doc/source/reference/routines.numarray.rst b/doc/source/reference/routines.numarray.rst index 36e5aa764..dab63fbdf 100644 --- a/doc/source/reference/routines.numarray.rst +++ b/doc/source/reference/routines.numarray.rst @@ -3,4 +3,3 @@ Numarray compatibility (:mod:`numpy.numarray`) ********************************************** .. automodule:: numpy.numarray - diff --git a/doc/source/reference/routines.oldnumeric.rst b/doc/source/reference/routines.oldnumeric.rst index d7f15bcfd..9cab2e9d9 100644 --- a/doc/source/reference/routines.oldnumeric.rst +++ b/doc/source/reference/routines.oldnumeric.rst @@ -5,4 +5,3 @@ Old Numeric compatibility (:mod:`numpy.oldnumeric`) .. currentmodule:: numpy .. automodule:: numpy.oldnumeric - diff --git a/doc/source/reference/swig.interface-file.rst b/doc/source/reference/swig.interface-file.rst index 8ef6c80ab..d835a4c4f 100644 --- a/doc/source/reference/swig.interface-file.rst +++ b/doc/source/reference/swig.interface-file.rst @@ -444,7 +444,7 @@ arrays with views into memory that is managed. 
See the discussion `here * ``(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4)`` * ``(DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`` * ``(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4)`` - + Output Arrays ````````````` diff --git a/doc/source/reference/swig.testing.rst b/doc/source/reference/swig.testing.rst index 8b19e4b28..decc681c5 100644 --- a/doc/source/reference/swig.testing.rst +++ b/doc/source/reference/swig.testing.rst @@ -15,8 +15,8 @@ this results in 1,427 individual unit tests that are performed when To facilitate this many similar unit tests, some high-level programming techniques are employed, including C and `SWIG`_ macros, -as well as Python inheritance. The purpose of this document is to describe -the testing infrastructure employed to verify that the ``numpy.i`` +as well as Python inheritance. The purpose of this document is to describe +the testing infrastructure employed to verify that the ``numpy.i`` typemaps are working as expected. Testing Organization @@ -57,9 +57,9 @@ Two-dimensional arrays are tested in exactly the same manner. The above description applies, but with ``Matrix`` substituted for ``Vector``. For three-dimensional tests, substitute ``Tensor`` for ``Vector``. For four-dimensional tests, substitute ``SuperTensor`` -for ``Vector``. +for ``Vector``. For the descriptions that follow, we will reference the -``Vector`` tests, but the same information applies to ``Matrix``, +``Vector`` tests, but the same information applies to ``Matrix``, ``Tensor`` and ``SuperTensor`` tests. The command ``make test`` will ensure that all of the test software is @@ -108,7 +108,7 @@ Testing SWIG Interface Files ``Vector.i`` is a `SWIG`_ interface file that defines python module ``Vector``. It follows the conventions for using ``numpy.i`` as described in this chapter. It defines a `SWIG`_ macro -``%apply_numpy_typemaps`` that has a single argument ``TYPE``. +``%apply_numpy_typemaps`` that has a single argument ``TYPE``. It uses the `SWIG`_ directive ``%apply`` to apply the provided typemaps to the argument signatures found in ``Vector.h``. This macro is then implemented for all of the data types supported by diff --git a/doc/source/user/basics.io.rst b/doc/source/user/basics.io.rst index 73947a2ef..54a65662b 100644 --- a/doc/source/user/basics.io.rst +++ b/doc/source/user/basics.io.rst @@ -5,4 +5,4 @@ I/O with Numpy .. toctree:: :maxdepth: 2 - basics.io.genfromtxt
\ No newline at end of file + basics.io.genfromtxt diff --git a/doc/ufuncs.rst.txt b/doc/ufuncs.rst.txt index fa107cc21..d628b3f95 100644 --- a/doc/ufuncs.rst.txt +++ b/doc/ufuncs.rst.txt @@ -96,8 +96,3 @@ If there are object arrays involved then loop->obj gets set to 1. Then there ar - The buffer[i] memory receives the PyObject input after the cast. This is a new reference which will be "stolen" as it is copied over into memory. The only problem is that what is presently in memory must be DECREF'd first. - - - - -
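The final hunk above touches notes in doc/ufuncs.rst.txt about object-array buffering in ufunc inner loops. As a rough illustration of the reference-stealing pattern those notes describe — a minimal hypothetical sketch, not NumPy's actual buffer-loop code; the names ``copy_object_slot``, ``dest``, and ``buffer_slot`` are made up for this example — the copy step looks roughly like this:

    #include <Python.h>

    /* Sketch: move an owned PyObject reference from a cast buffer slot into
     * the output memory.  Whatever the output slot currently holds must be
     * released first, because the incoming reference is "stolen" rather than
     * duplicated. */
    static void
    copy_object_slot(PyObject **dest, PyObject **buffer_slot)
    {
        Py_XDECREF(*dest);      /* DECREF what is presently in memory       */
        *dest = *buffer_slot;   /* steal the buffer's new reference         */
        *buffer_slot = NULL;    /* the buffer no longer owns the reference  */
    }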