CI configuration:

```yaml
stages:
  - test

test:
  stage: test
  script:
    - cd .. && rm -rf nomad-lab-base
    - git clone --recursive
    - cd nomad-lab-base
    - git submodule foreach git checkout master
    - git submodule foreach git pull
    - sbt bigdft/test
    - export PYTHONEXE=/labEnv/bin/python
    - sbt bigdft/test
  only:
    - master
    - test
    - spec2
```
This is the NOMAD parser for [BigDFT](http://bigdft.org/). It will read BigDFT input and
output files and provide all information in NOMAD's unified Metainfo based Archive format.
# Example

```python
from bigdftparser import BigDFTParser

# 1. Initialize a parser with a set of default units.
default_units = ["eV"]
parser = BigDFTParser(default_units=default_units)

# 2. Parse a file.
path = "path/to/main.file"
results = parser.parse(path)

# 3. Query the results using the ids created specifically for NOMAD.
scf_energies = results["energy_total_scf_iteration"]
```

## Preparing code input and output file for uploading to NOMAD

NOMAD accepts `.zip` and `.tar.gz` archives as uploads. Each upload can contain arbitrary
files and directories. NOMAD will automatically try to choose the right parser for your files.
For each parser (i.e. for each supported code) there is one type of file that the respective
parser can recognize. We call these files `mainfiles`, as they typically are the main
output file of a code. For each `mainfile` that NOMAD discovers, it will create an entry
in the database that users can search, view, and download. NOMAD will associate all files
in the same directory as files that also belong to that entry. Parsers
might also read information from these auxiliary files. This way you can add more files
to an entry, even if the respective parser/code might not directly support it.

For BigDFT, please provide at least the files from this table if applicable to your
calculations (remember that you can provide more files if you want):

To create an upload with all calculations in a directory structure:

```
zip -r <upload-file>.zip <directory>/*
```
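The `zip` command above can also be scripted. Here is a minimal Python sketch using only the standard library; the directory name and file contents are illustrative stand-ins for a real BigDFT calculation directory:

```python
import pathlib
import tempfile
import zipfile

# Create a throwaway calculation directory with a placeholder mainfile
# (a stand-in for your real BigDFT output directory).
root = pathlib.Path(tempfile.mkdtemp())
calc_dir = root / "calc-1"
calc_dir.mkdir()
(calc_dir / "log.yaml").write_text("BigDFT output placeholder")

# Zip the directory recursively with relative paths preserved,
# mirroring `zip -r <upload-file>.zip <directory>/*`.
upload = root / "upload.zip"
with zipfile.ZipFile(upload, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in calc_dir.rglob("*"):
        zf.write(path, path.relative_to(root))
```

The relative arcnames matter: keeping the directory structure inside the archive lets NOMAD associate auxiliary files in the same directory with the discovered `mainfile`.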
Go to the [NOMAD upload page](https://nomad-lab.eu) to upload files
or find instructions about how to upload files from the command line.

# Installation

The code is Python 2 and Python 3 compatible. First download and install
the nomadcore package:
```sh
git clone
cd python-common
pip install -r requirements.txt
pip install -e .
```

Then download the metainfo definitions to the same folder where the
'python-common' repository was cloned:

```sh
git clone
```

Finally download and install the parser:

```sh
git clone
cd parser-big-dft
pip install -e .
```

## Using the parser

You can use NOMAD's parsers and normalizers locally on your computer. You need to install
NOMAD's pypi package:

```
pip install nomad-lab
```

To parse code input/output from the command line, you can use NOMAD's command line
interface (CLI) and print the processing results output to stdout:

```
nomad parse --show-archive <path-to-file>
```

To parse a file in Python, you can program something like this:

```python
import sys
from nomad.cli.parse import parse, normalize_all

# match and run the parser
archive = parse(sys.argv[1])
# run all normalizers
normalize_all(archive)

# get the 'main section' section_run as a metainfo object
section_run = archive.section_run[0]

# get the same data as JSON serializable Python dict
python_dict = section_run.m_to_dict()
```
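Since `m_to_dict()` returns a JSON-serializable dict, the result can be dumped with the standard library. The dict below is a hypothetical stand-in for real parser output:

```python
import json

# Stand-in for `section_run.m_to_dict()`; real output would contain
# the quantities parsed from the BigDFT files.
python_dict = {
    "program_name": "BigDFT",
    "energy_total_scf_iteration": [-1.0, -1.2],
}

# Serialize to a JSON string, e.g. for storing or inspecting results.
serialized = json.dumps(python_dict, indent=2)

# The serialization round-trips back to the same dict.
assert json.loads(serialized) == python_dict
```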
# Notes
The parser is based on BigDFT 1.8.
## Developing the parser
Also install NOMAD's pypi package:

```
pip install nomad-lab
```

Clone the parser project and install it in development mode:

```
git clone parser-bigdft
pip install -e parser-bigdft
```

Running the parser now will use the parser's Python code from the cloned project.
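To confirm that an editable install really resolves to the clone, you can check where Python finds the package. The package name `bigdftparser` is an assumption here, and the stdlib `json` module is used as a runnable stand-in:

```python
import importlib.util
from typing import Optional


def install_location(package: str) -> Optional[str]:
    """Return the file a package resolves to, or None if it is not importable."""
    spec = importlib.util.find_spec(package)
    return spec.origin if spec is not None else None


# With the parser installed in development mode, you would check e.g.
# install_location("bigdftparser") and expect a path inside the cloned
# parser-bigdft directory rather than site-packages.
print(install_location("json"))
print(install_location("no_such_package"))  # None when not installed
```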
Parser metadata:

```yaml
codeLabel: BigDFT
codeLabelStyle: 'Capitals: B,D,F,T'
parserDirName: dependencies/parsers/bigdft/
parserSpecific: ''
tableOfFiles: ''
```
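The metadata above is flat `key: value` YAML. If you need to read such a file without a YAML dependency, a minimal parser is enough; this is only a sketch for this flat layout, not how NOMAD loads metadata:

```python
def parse_flat_metadata(text: str) -> dict:
    """Parse flat `key: value` lines, splitting on the first colon only."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        data[key.strip()] = value.strip().strip("'\"")
    return data


metadata = parse_flat_metadata("""\
codeLabel: BigDFT
codeLabelStyle: 'Capitals: B,D,F,T'
parserDirName: dependencies/parsers/bigdft/
parserSpecific: ''
tableOfFiles: ''
""")
print(metadata["codeLabel"])  # BigDFT
```

Splitting on the first colon only is what keeps a value like `'Capitals: B,D,F,T'` intact.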