Commit aff5cebf authored by Martin Reinecke's avatar Martin Reinecke

Merge branch 'cleanup2' into 'NIFTy_4'


See merge request ift/NIFTy!233
parents 1cf013f7 bf16eb35
Pipeline #26293 passed with stages
in 5 minutes and 28 seconds
@@ -18,7 +18,6 @@ before_script:
stage: test
- nosetests -q
- nosetests3 -q
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 4 nosetests -q 2>/dev/null
- OMP_NUM_THREADS=1 mpiexec --allow-run-as-root -n 4 nosetests3 -q 2>/dev/null
find . -name "*.pyc" -delete
find . -name "__pycache__" -exec rm -rf {} \; 2> /dev/null
rm -rf build dist oprofile_data
rm -rf *.egg-info .eggs
rm -f log.log*
rm -f *.pdf *.png
rm -rf nifty2go
find . -type d -empty -delete
rm -rf docs/build docs/source/mod
Significant differences between NIFTy3 and nifty4
1) Field domains in nifty4 are stored in DomainTuple objects, which makes
comparisons between domains and computation of axis indices etc. much simpler
and more efficient.
No impact on the user ... these objects are generated whenever needed and
have all necessary functions to make them look like tuples of spaces.
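The idea can be illustrated with a minimal sketch (assumed names and structure; not the actual nifty4 implementation): a cached, immutable tuple of spaces that precomputes which axes of the full array each sub-domain covers, so that domain comparisons reduce to an identity check.

```python
# Illustrative DomainTuple sketch: here a "space" is just a shape tuple.
class DomainTuple:
    _cache = {}  # identical tuples are shared, so equality is an 'is' check

    def __new__(cls, domains):
        domains = tuple(domains)
        if domains in cls._cache:
            return cls._cache[domains]
        obj = super().__new__(cls)
        obj._domains = domains
        # precompute, for every sub-domain, the axes it covers in the full array
        axes, start = [], 0
        for dom in domains:
            ndim = len(dom)
            axes.append(tuple(range(start, start + ndim)))
            start += ndim
        obj._axes = tuple(axes)
        cls._cache[domains] = obj
        return obj

    @property
    def axes(self):
        return self._axes

    def __len__(self):
        return len(self._domains)

    def __getitem__(self, i):
        return self._domains[i]
```

Because construction goes through the cache, two fields living on the same domain share one DomainTuple object, and axis indices never need to be recomputed.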
2) In nifty4 an operator's domain and target refer to the _full_ domains of
the input and output fields read/written by times(), adjoint_times() etc.
In NIFTy nightly, domain and target only refer to the (sub-)domain on
which the operator actually acts. This leads to complications like the need
for the "default_spaces" argument in the operator constructor and the
"spaces" keywords in all operator calls.
Advantages of the nifty4 approach:
- less error-prone and easier to understand; less code overall
- operators have more knowledge and may tune themselves better
- difficulties with the design of ComposedOperator (issue 152) resolve
themselves automatically
Disadvantages:
- operators cannot be used as flexibly as before; in a few circumstances
it will be necessary to create more operator objects than with the current
approach. However, I have not found any such situation in the current code
base, so it appears to be rare.
3) nifty4 uses one of two different "data_object" modules for array
storage instead of D2O.
A "data_object" module consists of a class called "data_object" which
provides a subset of the numpy.ndarray interface, plus a few additional
functions for manipulating these data objects.
If no MPI support is found on the system, or if a computation is run on a
single task, nifty4 automatically loads a minimalistic "data_object"
module where the data_object class is simply identical to numpy.ndarray.
The support functions are mostly trivial as well.
If MPI is required, another module is loaded, which supports parallel
array operations; this module is in a working state, but not polished and
tuned yet.
4) Spaces no longer have a weight() method; it has been replaced by
scalar_dvol() and dvol() methods, which return the scalar volume element,
if available, and a numpy array with all volume elements for the respective
space, respectively. By using scalar_dvol() whenever possible, one can save
quite some time when computing Field.weight().
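A sketch of the split (illustrative class and function names, not nifty4's code): a regular grid has one constant cell volume, so weighting can multiply by a scalar instead of building a full volume array.

```python
import numpy as np

class RGSpaceSketch:
    """Regular grid with constant cell volume."""
    def __init__(self, shape, distances):
        self.shape = tuple(shape)
        self.distances = tuple(distances)

    def scalar_dvol(self):
        # constant cell volume: product of the grid spacings
        vol = 1.0
        for d in self.distances:
            vol *= d
        return vol

    def dvol(self):
        # generic fallback: one volume element per pixel
        return np.full(self.shape, self.scalar_dvol())

def weight(field, space):
    # prefer the cheap scalar path whenever it is available;
    # spaces without a constant volume would return None here
    sv = space.scalar_dvol()
    if sv is not None:
        return field * sv
    return field * space.dvol()
```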
5) renamings:
get_distance_array -> get_k_length_array
get_unique_distances -> get_unique_k_lengths
(to be specific about which "distances" are meant and to make clearer why
these only exist for harmonic spaces)
kindex -> k_lengths
(because this is not an index)
6) In nifty4, PowerSpace is not a harmonic space.
7) In nifty4, parallel probing should work (needs systematic testing)
9) Many default arguments have been removed in nifty4, wherever there is no
sensible default (in my opinion). My personal impression is that this has
actually made the demos more readable, but I'm sure not everyone will agree.
10) Plotting has been replaced by something very minimalistic but usable.
Currently supported output formats are PDF and PNG.
11) Co-domains are now obtained directly from the corresponding Space
objects via the method "get_default_codomain()". This is implemented for
RGSpace, LMSpace, HPSpace and GLSpace.
12) Instead of inheriting from "InvertibleOperatorMixin", support for numerical
inversion is now added via the "InversionEnabler" class, which takes the
original operator as a constructor argument.
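The wrapper idea can be sketched like this (assumed interface, not nifty4's actual class): instead of mixing inversion into every operator via inheritance, wrap an existing operator and supply inverse_times() through an iterative solver.

```python
import numpy as np

class InversionEnablerSketch:
    """Wrap an operator's times() and add a numerical inverse_times()."""
    def __init__(self, op_times, size, niter=200, tol=1e-10):
        self._times = op_times  # the wrapped operator's application
        self._size = size
        self._niter = niter
        self._tol = tol

    def times(self, x):
        return self._times(x)

    def inverse_times(self, y):
        # plain conjugate gradient; assumes the operator is SPD
        x = np.zeros(self._size)
        r = y - self._times(x)
        p = r.copy()
        rs = r @ r
        for _ in range(self._niter):
            Ap = self._times(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < self._tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
```

The design advantage is that any operator gains inversion support without its class hierarchy knowing about solvers.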
13) External dependencies are only loaded when they are really needed: e.g.
pyHealpix is only imported within the spherical harmonic transform functions,
and pyfftw is only loaded within the RG transforms.
So there are no mandatory dependencies besides numpy (well, pyfftw is
more or less always needed).
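The deferred-import pattern looks like this (a generic sketch, not nifty4's transform code): the heavy dependency is imported inside the function that needs it, so merely importing the package does not require it to be installed.

```python
def fft(arr):
    # deferred import: pyfftw is used only if it is actually installed
    try:
        import pyfftw.interfaces.numpy_fft as fft_backend
    except ImportError:
        import numpy.fft as fft_backend  # numpy stays the only hard dependency
    return fft_backend.fft(arr)
```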
14) A new approach is used for FFTs along axes that are distributed among
MPI tasks. As a consequence, nifty4 works well with the standard version
of pyfftw and does not need the MPI-enabled fork.
15) Arithmetic functions working on Fields have been moved from to
16) Operators can be combined via "*", "+" and "-", resulting in new
combined operators.
17) Every operator has the properties ".adjoint" and ".inverse", which return
its adjoint and inverse, respectively.
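The operator algebra of points 16 and 17 can be sketched with a toy matrix-backed operator (illustrative interface; nifty4's operators are more general and matrix-free):

```python
import numpy as np

class LinOp:
    """Toy linear operator backed by an explicit matrix."""
    def __init__(self, mat):
        self._mat = np.asarray(mat, dtype=float)

    def times(self, x):
        return self._mat @ x

    def __mul__(self, other):   # composition: (A * B)(x) == A(B(x))
        return LinOp(self._mat @ other._mat)

    def __add__(self, other):
        return LinOp(self._mat + other._mat)

    def __sub__(self, other):
        return LinOp(self._mat - other._mat)

    @property
    def adjoint(self):
        return LinOp(self._mat.T.conj())

    @property
    def inverse(self):
        return LinOp(np.linalg.inv(self._mat))
```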
18) Handling of volume factors has been changed completely.
# -*- coding: utf-8 -*-
"""
better apidoc

Parses a directory tree looking for Python modules and packages and creates
ReST files appropriately to create code documentation with Sphinx. It also
creates a modules index (named modules.<suffix>).

This is derived from the "sphinx-apidoc" script, which is:
Copyright 2007-2016 by the Sphinx team

It extends "sphinx-apidoc" by the --template / -t option, which allows
rendering the output ReST files based on arbitrary Jinja templates.

:copyright: Copyright 2017 by Michael Goerz
:license: BSD, see LICENSE for details.
"""
from __future__ import print_function
import os
import sys
import importlib
import optparse
from os import path
from six import binary_type
from fnmatch import fnmatch
from jinja2 import FileSystemLoader, TemplateNotFound
from jinja2.sandbox import SandboxedEnvironment
from sphinx.util.osutil import FileAvoidWrite, walk
#from sphinx import __display_version__
from sphinx.quickstart import EXTENSIONS
from sphinx.ext.autosummary import get_documenter
from sphinx.util.inspect import safe_getattr
# Add documenters to AutoDirective registry
from sphinx.ext.autodoc import add_documenter, \
    ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter, \
    FunctionDocumenter, MethodDocumenter, AttributeDocumenter, \
    InstanceAttributeDocumenter
__version__ = '0.1.2'
__display_version__ = __version__
if False:
    # For type annotation
    from typing import Any, List, Tuple  # NOQA
# automodule options
if 'SPHINX_APIDOC_OPTIONS' in os.environ:
    OPTIONS = os.environ['SPHINX_APIDOC_OPTIONS'].split(',')
else:
    OPTIONS = [
        'members',
        'undoc-members',
        # 'inherited-members', # disabled because there's a bug in sphinx
        'show-inheritance',
    ]

INITPY = '__init__.py'
PY_SUFFIXES = set(['.py', '.pyx'])
def _warn(msg):
    # type: (unicode) -> None
    print('WARNING: ' + msg, file=sys.stderr)
def makename(package, module):
    # type: (unicode, unicode) -> unicode
    """Join package and module with a dot."""
    # Both package and module can be None/empty.
    if package:
        name = package
        if module:
            name += '.' + module
    else:
        name = module
    return name
def write_file(name, text, opts):
    # type: (unicode, unicode, Any) -> None
    """Write the output file for module/package <name>."""
    fname = path.join(opts.destdir, '%s.%s' % (name, opts.suffix))
    if opts.dryrun:
        print('Would create file %s.' % fname)
        return
    if not opts.force and path.isfile(fname):
        print('File %s already exists, skipping.' % fname)
    else:
        print('Creating file %s.' % fname)
        with FileAvoidWrite(fname) as f:
            f.write(text)
def format_heading(level, text):
    # type: (int, unicode) -> unicode
    """Create a heading of <level> [1, 2 or 3 supported]."""
    underlining = ['=', '-', '~', ][level - 1] * len(text)
    return '%s\n%s\n\n' % (text, underlining)
def format_directive(module, package=None):
    # type: (unicode, unicode) -> unicode
    """Create the automodule directive and add the options."""
    directive = '.. automodule:: %s\n' % makename(package, module)
    for option in OPTIONS:
        directive += '    :%s:\n' % option
    return directive
def create_module_file(package, module, opts):
    # type: (unicode, unicode, Any) -> None
    """Generate RST for a top-level module (i.e., not part of a package)"""
    if not opts.noheadings:
        text = format_heading(1, '%s module' % module)
    else:
        text = ''
    # text += format_heading(2, ':mod:`%s` Module' % module)
    text += format_directive(module, package)
    if opts.templates:
        template_loader = FileSystemLoader(opts.templates)
        template_env = SandboxedEnvironment(loader=template_loader)
        try:
            mod_ns = _get_mod_ns(
                name=module, fullname=module,
                includeprivate=opts.includeprivate)
            template = template_env.get_template('module.rst')
            text = template.render(**mod_ns)
        except ImportError as e:
            _warn('failed to import %r: %s' % (module, e))
    write_file(makename(package, module), text, opts)
def _get_members(
        mod, typ=None, include_imported=False, as_refs=False, in__all__=False):
    """Get (filtered) public/total members of the module or package `mod`.

    Args:
        mod: object resulting from importing a module or package
        typ: filter on members. If None, include all members. If one of
            'function', 'class', 'exception', 'data', only include members of
            the matching type
        include_imported: If True, also include members that are imports
        as_refs: If True, return ReST-formatted reference strings for all
            members, instead of just their names. In combination with
            `include_imported` or `in__all__`, these link to the original
            location where the member is defined
        in__all__: If True, return only members that are in ``mod.__all__``

    Returns:
        lists `public` and `items`. The lists contain the public and private +
        public members, as strings.

    Note:
        For data members, there is no way to tell whether they were imported or
        defined locally (without parsing the source code). A module may define
        one or both attributes

        __local_data__: list of names of data objects defined locally
        __imported_data__: dict of names to ReST-formatted references of where
            a data object originates

        If either one of these attributes is present, the member will be
        classified accordingly. Otherwise, it will be classified as local if it
        appeared in the __all__ list, or as imported otherwise.
    """
    roles = {'function': 'func', 'module': 'mod', 'class': 'class',
             'exception': 'exc', 'data': 'data'}
    # not included, because they cannot occur at module level:
    # 'method': 'meth', 'attribute': 'attr', 'instanceattribute': 'attr'

    def check_typ(typ, mod, member):
        """Check if mod.member is of the desired typ"""
        documenter = get_documenter(member, mod)
        if typ is None:
            return True
        if typ == getattr(documenter, 'objtype', None):
            return True
        if hasattr(documenter, 'directivetype'):
            return roles[typ] == getattr(documenter, 'directivetype')
        return False
    def is_local(mod, member, name):
        """Check whether mod.member is defined locally in module mod"""
        if hasattr(member, '__module__'):
            return getattr(member, '__module__') == mod.__name__
        # we take missing __module__ to mean the member is a data object
        if hasattr(mod, '__local_data__'):
            return name in getattr(mod, '__local_data__')
        if hasattr(mod, '__imported_data__'):
            return name not in getattr(mod, '__imported_data__')
        return name in getattr(mod, '__all__', [])
    if typ is not None and typ not in roles:
        raise ValueError("typ must be None or one of %s"
                         % str(list(roles.keys())))
    items = []
    public = []
    all_list = getattr(mod, '__all__', [])
    for name in dir(mod):
        try:
            member = safe_getattr(mod, name)
        except AttributeError:
            continue
        if check_typ(typ, mod, member):
            if in__all__ and name not in all_list:
                continue
            if include_imported or is_local(mod, member, name):
                if as_refs:
                    documenter = get_documenter(member, mod)
                    role = roles.get(documenter.objtype, 'obj')
                    ref = _get_member_ref_str(
                        name, obj=member, role=role,
                        known_refs=getattr(mod, '__imported_data__', None))
                    items.append(ref)
                    if not name.startswith('_'):
                        public.append(ref)
                else:
                    items.append(name)
                    if not name.startswith('_'):
                        public.append(name)
    return public, items
def _get_member_ref_str(name, obj, role='obj', known_refs=None):
    """Generate a ReST-formatted reference link to the given `obj` of type
    `role`, using `name` as the link text"""
    if known_refs is not None:
        if name in known_refs:
            return known_refs[name]
    if hasattr(obj, '__name__'):
        try:
            ref = obj.__module__ + '.' + obj.__name__
        except AttributeError:
            ref = obj.__name__
        except TypeError:  # e.g. obj.__name__ is None
            ref = name
    else:
        ref = name
    return ":%s:`%s <%s>`" % (role, name, ref)
def _get_mod_ns(name, fullname, includeprivate):
    """Return the template context of module identified by `fullname` as a
    dict."""
    ns = {  # template variables
        'name': name, 'fullname': fullname, 'members': [], 'functions': [],
        'classes': [], 'exceptions': [], 'subpackages': [], 'submodules': [],
        'all_refs': [], 'members_imports': [], 'members_imports_refs': [],
        'data': [], 'doc': None}
    p = 0
    if includeprivate:
        p = 1
    mod = importlib.import_module(fullname)
    ns['members'] = _get_members(mod)[p]
    ns['functions'] = _get_members(mod, typ='function')[p]
    ns['classes'] = _get_members(mod, typ='class')[p]
    ns['exceptions'] = _get_members(mod, typ='exception')[p]
    ns['all_refs'] = _get_members(
        mod, include_imported=True, in__all__=True, as_refs=True)[p]
    ns['members_imports'] = _get_members(mod, include_imported=True)[p]
    ns['members_imports_refs'] = _get_members(
        mod, include_imported=True, as_refs=True)[p]
    ns['data'] = _get_members(mod, typ='data')[p]
    ns['doc'] = mod.__doc__
    return ns
def create_package_file(root, master_package, subroot, py_files, opts, subs,
                        is_namespace):
    # type: (unicode, unicode, unicode, List[unicode], Any, List[unicode], bool) -> None
    """Build the text of the file and write the file."""
    use_templates = False
    if opts.templates:
        use_templates = True
        template_loader = FileSystemLoader(opts.templates)
        template_env = SandboxedEnvironment(loader=template_loader)
    fullname = makename(master_package, subroot)
    text = format_heading(
        1, ('%s package' if not is_namespace else "%s namespace") % fullname)
    if opts.modulefirst and not is_namespace:
        text += format_directive(subroot, master_package)
        text += '\n'
    # build a list of directories that are subpackages (contain an INITPY file)
    subs = [sub for sub in subs if path.isfile(path.join(root, sub, INITPY))]
    # if there are some package directories, add a TOC for these subpackages
    if subs:
        text += format_heading(2, 'Subpackages')
        text += '.. toctree::\n\n'
        for sub in subs:
            text += '    %s.%s\n' % (makename(master_package, subroot), sub)
        text += '\n'
    submods = [path.splitext(sub)[0] for sub in py_files
               if not shall_skip(path.join(root, sub), opts) and
               sub != INITPY]
    if submods:
        text += format_heading(2, 'Submodules')
        if opts.separatemodules:
            text += '.. toctree::\n\n'
            for submod in submods:
                modfile = makename(master_package, makename(subroot, submod))
                text += '   %s\n' % modfile
                # generate separate file for this module
                if not opts.noheadings:
                    filetext = format_heading(1, '%s module' % modfile)
                else:
                    filetext = ''
                filetext += format_directive(makename(subroot, submod),
                                             master_package)
                if use_templates:
                    try:
                        mod_ns = _get_mod_ns(
                            name=submod, fullname=modfile,
                            includeprivate=opts.includeprivate)
                        template = template_env.get_template('module.rst')
                        filetext = template.render(**mod_ns)
                    except ImportError as e:
                        _warn('failed to import %r: %s' % (modfile, e))
                write_file(modfile, filetext, opts)
        else:
            for submod in submods:
                modfile = makename(master_package, makename(subroot, submod))
                if not opts.noheadings:
                    text += format_heading(2, '%s module' % modfile)
                text += format_directive(makename(subroot, submod),
                                         master_package)
                text += '\n'
        text += '\n'
    if use_templates:
        try:
            package_ns = _get_mod_ns(name=subroot, fullname=fullname,
                                     includeprivate=opts.includeprivate)
            package_ns['subpackages'] = subs
            package_ns['submodules'] = submods
            template = template_env.get_template('package.rst')
            text = template.render(**package_ns)
        except ImportError as e:
            _warn('failed to import %r: %s' % (fullname, e))
    if not opts.modulefirst and not is_namespace:
        text += format_heading(2, 'Module contents')
        text += format_directive(subroot, master_package)
    write_file(makename(master_package, subroot), text, opts)
def create_modules_toc_file(modules, opts, name='modules'):
    # type: (List[unicode], Any, unicode) -> None
    """Create the module's index."""
    text = format_heading(1, '%s' % opts.header)
    text += '.. toctree::\n'
    text += '   :maxdepth: %s\n\n' % opts.maxdepth
    prev_module = ''  # type: unicode
    for module in modules:
        # look if the module is a subpackage and, if yes, ignore it
        if module.startswith(prev_module + '.'):
            continue
        prev_module = module
        text += '   %s\n' % module
    write_file(name, text, opts)
def shall_skip(module, opts):
    # type: (unicode, Any) -> bool
    """Check if we want to skip this module."""
    # skip if the file doesn't exist and not using implicit namespaces
    if not opts.implicit_namespaces and not path.exists(module):
        return True
    # skip it if there is nothing (or just \n or \r\n) in the file
    if path.exists(module) and path.getsize(module) <= 2:
        return True
    # skip if it has a "private" name and this is selected
    filename = path.basename(module)
    if filename != '' and filename.startswith('_') and \
            not opts.includeprivate:
        return True
    return False
def recurse_tree(rootpath, excludes, opts):
    # type: (unicode, List[unicode], Any) -> List[unicode]
    """
    Look for every file in the directory tree and create the corresponding
    ReST files.
    """
    # check if the base directory is a package and get its name
    if INITPY in os.listdir(rootpath):
        root_package = rootpath.split(path.sep)[-1]
    else:
        # otherwise, the base is a directory with packages
        root_package = None
    toplevels = []
    followlinks = getattr(opts, 'followlinks', False)
    includeprivate = getattr(opts, 'includeprivate', False)
    implicit_namespaces = getattr(opts, 'implicit_namespaces', False)
    for root, subs, files in walk(rootpath, followlinks=followlinks):
        # document only Python module files (that aren't excluded)
        py_files = sorted(f for f in files
                          if path.splitext(f)[1] in PY_SUFFIXES and
                          not is_excluded(path.join(root, f), excludes))
        is_pkg = INITPY in py_files
        is_namespace = INITPY not in py_files and implicit_namespaces
        if is_pkg:
            py_files.remove(INITPY)
            py_files.insert(0, INITPY)
        elif root != rootpath:
            # only accept non-package at toplevel unless using implicit namespaces
            if not implicit_namespaces:
                del subs[:]
                continue
        # remove hidden ('.') and private ('_') directories, as well as
        # excluded dirs
        if includeprivate:
            exclude_prefixes = ('.',)  # type: Tuple[unicode, ...]
        else:
            exclude_prefixes = ('.', '_')
        subs[:] = sorted(sub for sub in subs
                         if not sub.startswith(exclude_prefixes) and
                         not is_excluded(path.join(root, sub), excludes))
        if is_pkg or is_namespace:
            # we are in a package with something to document
            if subs or len(py_files) > 1 or \
                    not shall_skip(path.join(root, INITPY), opts):
                subpackage = root[len(rootpath):].lstrip(path.sep).\
                    replace(path.sep, '.')
                # if this is not a namespace or
                # a namespace and there is something there to document
                if not is_namespace or len(py_files) > 0:
                    create_package_file(root, root_package, subpackage,
                                        py_files, opts, subs, is_namespace)
                    toplevels.append(makename(root_package, subpackage))
        else:
            # if we are at the root level, we don't require it to be a package
            assert root == rootpath and root_package is None
            if opts.templates:
                sys.path.insert(0, rootpath)
            for py_file in py_files:
                if not shall_skip(path.join(rootpath, py_file), opts):
                    module = path.splitext(py_file)[0]
                    create_module_file(root_package, module, opts)
            if opts.templates:
                sys.path.pop(0)
    return toplevels
def normalize_excludes(rootpath, excludes):
    # type: (unicode, List[unicode]) -> List[unicode]
    """Normalize the excluded directory list."""
    return [path.abspath(exclude) for exclude in excludes]


def is_excluded(root, excludes):
    # type: (unicode, List[unicode]) -> bool
    """Check if the directory is in the exclude list.

    Note: by having trailing slashes, we avoid common prefix issues, like
    e.g. an exclude "foo" also accidentally excluding "foobar".
    """
    for exclude in excludes:
        if fnmatch(root, exclude):
            return True
    return False