| Commit message | Author | Age | Files | Lines |
| |
At the moment, we guess whether we have the MSVC compiler by looking
at what Python itself was originally compiled with. That works only if
we are using the same compiler, which is not the case when we compile
with, e.g., mingw-w64 under a python.org Python.
Unfortunately, at the time we specify the build flags, we don't yet
know which compiler we are using.
Allow build flags passed to clib to be callables that return lists of
strings, instead of plain strings; the callables can do work such as
inspecting the compiler at build time.
Use this to check for MSVC at build time when specifying the
`/GL-` flag.
See gh-9977 for a related discussion about these flags.
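A minimal sketch of how this can be used inside a configuration
function (assuming the callable receives the build command object and
that `extra_compiler_args` in `build_info` accepts callables; the
library and file names are illustrative):

    def gl_if_msvc(build_cmd):
        # Called at build time, once the actual compiler is known.
        if build_cmd.compiler.compiler_type == 'msvc':
            return ['/GL-']
        return []

    config.add_installed_library(
        'npymath',
        sources=['npy_math.c'],
        install_dir='lib',
        build_info={'extra_compiler_args': [gl_if_msvc]},
    )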
|
| |
This is just a technical prototype to measure and discuss the impact and
implications of moving to C++ for kernel code generation.
|
| |
The bug can occur only if the `build` option was passed
before the `bdist_wheel` option.
You may still notice duplicate printing of the compiler
optimization report in the build log; this is normal and is
due to setuptools calling the `build` command multiple times.
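For reference, the affected and unaffected invocations:

    python setup.py build bdist_wheel   # could trigger the bug
    python setup.py bdist_wheel         # unaffected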
|
| |
The error appears when the `build` option is passed
before `bdist_wheel`.
|
| |
Same usage as the C dispatch-able sources, except that the file extension
should be `.dispatch.cpp` or `.dispatch.cxx` rather than `.dispatch.c`.
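A hypothetical configuration snippet; the extension and file names are
illustrative:

    # A C++ dispatch-able source is listed like any other source file;
    # the build hook recognizes the .dispatch.cpp suffix.
    config.add_extension('_simd', sources=['_simd.dispatch.cpp'])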
|
| |
The new path becomes `build/src.*/numpy/distutils/include/npy_cpu_dispatch_config.h`
instead of `numpy/core/src/common/_cpu_dispatch.h`.
The new path allows other projects to re-use the CPU dispatcher
once we decide to expose the following headers:
- `numpy/core/src/common/npy_cpu_dispatch.h`
- `numpy/core/src/common/npy_cpu_features.h`
|
| |
This patch also cleans up `CCompilerOpt` calls in build_ext and build_clib.
|
| |
- Put `CCompilerOpt` into action by adding two command-line
arguments that are passed directly to `CCompilerOpt`'s
parameters, explained as follows:
* `--cpu-baseline`: minimal set of required optimizations.
The default is 'min', which provides the minimum CPU features
that can safely run on a wide range of user platforms.
* `--cpu-dispatch`: dispatched set of additional optimizations.
The default is 'max-xop-fma4', which enables all CPU features
except the AMD legacy ones.
The new arguments can be reached from `build`, `build_clib` and
`build_ext`; if `build_clib` or `build_ext` is not specified
by the user, the arguments of `build` are used, which also
hold the default values (see the usage sketch after this list).
- Activate the new compiler dispatcher that comes with `CCompilerOpt`
by adding a hook inside `build_clib` and `build_ext`
that works as a filter: it takes any C source file ending with
`.dispatch.c`, passes it directly to `CCompilerOpt`, and
then links the returned objects into the final C lib.
- Add a third command-line argument, `--disable-optimization`, which
explicitly disables the whole new infrastructure and
adds a new compiler definition called `NPY_DISABLE_OPTIMIZATION`.
When `--disable-optimization` is enabled, the dispatch-able sources
that end with `.dispatch.c` are treated as normal
C sources. Because of this, any C header generated by
`CCompilerOpt` must be guarded with `NPY_DISABLE_OPTIMIZATION`,
otherwise the build will definitely break.
- Add a new auto-generated C header located at `core/include/numpy/_cpu_dispatch.h`.
It contains all the definitions and headers of the CPU features
enabled according to the configuration specified in `--cpu-baseline`
and `--cpu-dispatch`.
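A usage sketch of the new arguments (the feature names passed here are
examples only):

    python setup.py build --cpu-baseline="sse2 sse3" --cpu-dispatch="avx2 avx512f"
    # or turn the whole infrastructure off:
    python setup.py build --disable-optimization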
|
| |
* Clean up unused imports (F401), mostly of standard Python modules
or some internal but unlikely-referenced modules
* Where internal imports are potentially used, mark them with noqa
* Avoid redefinition of imports (F811)
|
| |
As numpy is Python 3 only, these import statements are now unnecessary
and don't alter runtime behavior.
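The removed boilerplate is the familiar header line:

    from __future__ import division, absolute_import, print_function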
|
| |
Add the concept of unlinkable Fortran object files at the level of
build_clib/build_ext.
Make build_clib generate fake static libs when unlinkable object files are
present, postponing the actual linkage to build_ext.
This enables MSVC+gfortran DLL chaining to involve only those DLLs that
are actually necessary for each .pyd file, rather than linking everything
into every file. Linking everything everywhere has issues due to
potential symbol clashes and the fact that library build order is
unspecified.
Record shared_libs on disk instead of in system_info. This is necessary
for partial builds -- it is not guaranteed the compiler is actually called
for all of the DLL files.
Remove magic from openblas msvc detection. That this worked previously
relied on the side effect that the generated openblas DLL would be added
to shared_libs, and then being linked to all generated outputs.
|
| |
This allows mingw's gfortran to work with MSVC. DLLs are
autogenerated using heuristics that should work in most
cases. In addition, a libopenblas DLL is compiled from
the static lib for use with MSVC.
All generated DLLs have randomized names so that no clashes
will occur.
|
|\
| |
| | |
BUG: Fix handling of dependencies between libraries
|
| | |
This fixes the handling of dependencies between libraries when
compiling with numpy.distutils. For example, something like this will now
work as a configuration function:
from numpy.distutils.misc_util import Configuration

def configuration():
    config = Configuration()
    config.add_library('mylib1', sources=['mylib1.f'])
    config.add_library('mylib2', sources=['mylib2.f'], libraries=['mylib1'])
    config.add_extension('pymodule', sources=['pymodule.c'],
                         libraries=['mylib2'])
    return config
Arbitrary handling of dependencies between libraries is still not
supported, but this should make some basic cases work properly.
|
|/
| |
Python 3.5 uses --parallel instead of --jobs
|
| |
Allow extensions using numpy.distutils to compile in parallel.
By passing `--jobs=n` or `-j n` to `setup.py build`, the compilation of
extensions is now performed in `n` parallel processes.
Additionally, the environment variable NPY_NUM_BUILD_JOBS is used as
the default value; if it is unset, the default is serial compilation.
The parallelization is limited to the files within an extension, so
only numpy's multiarraymodule really profits, but it's still a nice
improvement when you have 2-4 cores.
Unfortunately, Cython will not profit at all, as it tends to build one
module per file.
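Example invocations, using the flags described above:

    python setup.py build -j 4                    # four parallel processes
    NPY_NUM_BUILD_JOBS=4 python setup.py build    # same, via the environment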
|
| |
Run the 2to3 ws_comma fixer on *.py files. Some lines are now too long
and will need to be broken at some point; then again, some lines were
already too long and needed breaking anyway. Now seems as good a time
as any to do this, with open PRs at a minimum.
|
| |
Now is as good a time as any, with open PRs at a low.
|
| |
Add `print_function` to all `from __future__ import ...` statements
and use the Python 3 print function syntax everywhere.
Closes #3078.
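The change in each file looks like this (the printed text is
illustrative):

    from __future__ import print_function
    name = "build_clib"   # illustrative value
    # Python 3 function syntax, now valid on Python 2 as well:
    print("skipping", name)
    # instead of the old statement form: print "skipping", name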
|
| |
The import `absolute_import` is added to the `from __future__ import`
statement, and the 2to3 `import` fixer is run to make the imports
compatible. There are several things that need to be dealt with to make
this work.
1) Files meant to be run as scripts run in a different environment than
files imported as part of a package, and so changes to those files need
to be skipped. The affected script files are:
* all setup.py files
* numpy/core/code_generators/generate_umath.py
* numpy/core/code_generators/generate_numpy_api.py
* numpy/core/code_generators/generate_ufunc_api.py
2) Some imported modules are not available as they are created during
the build process and consequently 2to3 is unable to handle them
correctly. Files that import those modules need a bit of extra work.
The affected files are:
* core/__init__.py,
* core/numeric.py,
* core/_internal.py,
* core/arrayprint.py,
* core/fromnumeric.py,
* numpy/__init__.py,
* lib/npyio.py,
* lib/function_base.py,
* fft/fftpack.py,
* random/__init__.py
Closes #3172
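A minimal illustration of the rewrite inside a package module (the
module name here is only an example):

    # Inside a package module (e.g. numpy/core/__init__.py):
    from __future__ import absolute_import
    # Before: implicit relative import, Python 2 only
    #   import umath
    # After: explicit relative import, valid on Python 2 and 3
    from . import umath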
|
| |
This should be harmless, as we are already division clean. However,
placement of this import takes some care. In the future a script
can be used to append new features without worry, at least until
such time as the statement exceeds a single line. Having that ability
will make it easier to deal with absolute imports and printing updates.
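For reference, the semantics the import enables:

    from __future__ import division
    assert 1 / 2 == 0.5    # true division, the Python 3 behavior
    assert 1 // 2 == 0     # floor division remains available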
|
| |
Introduce new options extra_f77_compiler_args and extra_f90_compiler_args to
Configuration.add_extension, Configuration.add_library, and Extension. These
options allow specifying extra compile options for compiling Fortran sources
within a setup.py file.
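A hypothetical setup.py fragment using the new options (the flag values
are examples only):

    config.add_library('flib', sources=['flib.f'],
                       extra_f77_compiler_args=['-fno-second-underscore'])
    config.add_extension('fmod', sources=['fmod.f90'],
                         extra_f90_compiler_args=['-ffree-form'])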
|
| |
Revert "Introduce new options extra_f77_compiler_args and extra_f90_compiler_args to Configuration.add_extension. Configuration.add_library, and Extension. These options allow specifying extra compile options for compiling Fortran sources within a setup.py file."
This reverts commit 43862759384a86cb4a95e8adb4d39fa1522acb28.
|
| |
fcompiler value reported on the mailing list.
|
| |
Introduce new options extra_f77_compiler_args and extra_f90_compiler_args to Configuration.add_extension, Configuration.add_library, and Extension. These options allow specifying extra compile options for compiling Fortran sources within a setup.py file.
|
| |
C libraries in an in-place build.
|
| |
returns None
|
| |
--fcompiler can be specified only once on the command line
|
| |
Improved failure handling.
|
| |
- Add better support for C++ in numpy.distutils. Instead of munging the
C compiler command, build_clib and build_ext call the new
Compiler.cxx_compiler() method to get a version of the compiler suitable for
C++ (this also takes care of the special needs of AIX; see the sketch
after this list).
- If config_fc is specified in the Extension definition, merge that info
instead of replacing it (otherwise, the name of the Fortran compiler is
overwritten). This is done at the key level (e.g., compiler options are
replaced instead of appended).
- Clean up compiler.py a bit.
- Clean up linking in build_ext.
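Conceptually, the linking step becomes (a sketch; only cxx_compiler() is
named by this change, the rest is the standard distutils CCompiler API):

    # Get a compiler configured for C++ instead of munging the C command:
    cxx = self.compiler.cxx_compiler()
    cxx.link_shared_object(objects, ext_filename,
                           libraries=libraries,
                           library_dirs=library_dirs)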
|
| |
default f77 or f90 compiler.
|