Improved GUI search tests
Initial set of search GUI tests to implement
All of the major search functionality in the GUI is missing proper GUI tests. We should start with initial component-level tests for the following components:

- `search/input/InputField.js`: The workhorse for most search filters.
- `search/input/InputSlider.js`: Search component for min/max filtering.
- `search/input/InputList.js`: Used to display statistics above the search results for most quantities.
- `search/input/InputPeriodicTable.js`: Used to add search filters by clicking the periodic table.
Steps towards better testing of API-GUI communication
In order to perform the search tests, the components need to communicate with the API. We are currently missing an overarching strategy for handling cases where the GUI components rely on data provided by the API. We are currently using manually crafted files or variables to mock API results. These mocked responses need manual edits whenever the API response structure changes, and they cannot capture any problems with the API/GUI communication interface. In order to improve our GUI testing and to help in migrating towards real integration tests, I'm proposing a new API mock strategy that works roughly like this:
- During tests, the components may freely perform arbitrary API calls using the exact same mechanisms as used in a production system (i.e. using the `useAPI` hook).
- Introduce a Mock Service Worker (MSW) that can be used to capture real API traffic from a running API. This allows us to capture a stream of API calls in the exact order they are requested within tests, together with the responses returned by an actual API. These captured streams can then be stored in git and used in most test scenarios. A re-recording is only necessary if the API response structure changes or the underlying data needs to be changed.
- Add MSW handlers that can read captured API streams from files.
- Create tools for creating initial backend states that can be used in the GUI tests+integration tests. This means e.g. booting up an API that can serve search traffic based on a set of data that captures all the needs of a certain set of tests. Each meaningful user workflow may get its own "state", that can be booted up (e.g. using the CLI) and used in integration tests or when preparing API snapshots for the GUI unit tests.
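To make the replay idea concrete, here is a minimal sketch (the helper name and stream shape are assumptions for illustration, not the actual MSW setup): a captured stream is simply an ordered list of request/response pairs, and the mock hands the recorded responses back in capture order, failing loudly when the test's calls diverge from the recording.

```javascript
// Minimal sketch of snapshot replay (hypothetical helper, not the real MSW
// setup): serve recorded responses back in the order they were captured.
function createReplayMock(capturedStream) {
  let index = 0
  return {
    // Resolve the next recorded call; fail loudly if the test makes more
    // API calls than were captured, or calls them in a different order.
    handle(request) {
      if (index >= capturedStream.length) {
        throw new Error(`Unexpected API call: ${request.url}`)
      }
      const recorded = capturedStream[index++]
      if (recorded.request.url !== request.url) {
        throw new Error(
          `Call order mismatch: expected ${recorded.request.url}, got ${request.url}`
        )
      }
      return recorded.response
    }
  }
}

// Example captured stream, as it might be stored in a JSON snapshot file.
const stream = [
  {request: {url: '/api/v1/entries/query'}, response: {data: ['C', 'H']}}
]
const mock = createReplayMock(stream)
console.log(mock.handle({url: '/api/v1/entries/query'}))  // logs the recorded response
```

The strict ordering check is what makes divergence between GUI and API visible: if a component starts issuing different or additional calls, the replay fails instead of silently returning stale mocks.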
In order to implement this strategy, I have done the following initial steps:
- Try adding a Mock Service Worker that can:
  - Forward and capture API traffic
  - Use a JSON-based snapshot file to mock the API responses.
- Add helper functions for establishing an API state that can be re-used for several tests, e.g. in the `beforeEach`/`afterEach` blocks provided by jest.
Improvements based on the meeting on 22.2.
- Use a pytest-like configuration scheme: one can provide relevant test utilities at any level of the hierarchy by providing a `conftest.js` file.
- Add python scripts for preparing test states. These can be stored in `nomad/tests/states`.
- Improve the snapshot file format: calls should be identified by creating a hash over the relevant request variables, and the request itself should also be stored.
- Add coverage reporting + badge to the test suite.
- Add the possibility to control reading/writing of snapshot files.
- See if there exists a set of testing-library-compatible tools for MUI (none that I could find...)
How to write tests with the new setup:
Here is a simple test example that demonstrates how to use the new test features in practice:
JavaScript test file:

```javascript
import React from 'react'
import { waitFor } from '@testing-library/dom'
import { startAPI, closeAPI, screen } from '../../conftest'
import { renderSearchEntry, expectInputHeader } from '../conftest'

test('periodic table shows the elements retrieved through the API', async () => {
  startAPI('tests.states.search.search', 'tests/data/search/terms_aggregation_elements')
  renderSearchEntry(...)
  expect(...)
  closeAPI()
})
```
The first parameter of `startAPI` defines the search state that should be used. These states are defined as functions under `nomad-FAIR/tests/states`. A simple example of a function that prepares a state could look like this:
```python
from nomad import infrastructure
from nomad.utils import create_uuid
from nomad.utils.exampledata import ExampleData


def search():
    infrastructure.setup()
    main_author = infrastructure.keycloak.get_user(username="test")
    data = ExampleData(main_author=main_author)
    upload_id = create_uuid()
    data.create_upload(upload_id=upload_id, published=True, embargo_length=0)
    data.create_entry(
        upload_id=upload_id,
        entry_id=create_uuid(),
        mainfile="test_content/test_entry/mainfile.json",
        results={
            "material": {"elements": ["C", "H"]},
            "method": {},
            "properties": {}
        }
    )
    data.save()
```
When running in the `test-integration` or `test-record` mode (see next chapter), this function will be executed in order to prepare the application backend. The second parameter of `startAPI` identifies the file path for a snapshot: this file will contain the API traffic that has been recorded when running in the `test-record` mode. This snapshot file will be used to mock the API traffic when running the tests with `yarn test`, as done e.g. in the CI pipeline.
`closeAPI` will handle cleaning the test state between successive `startAPI` calls. It will completely wipe out MongoDB, Elasticsearch and the upload files. The tests use a custom `nomad-test.yaml` file that specifies a separate database config in order to prevent interacting with any other instances of NOMAD.
How to run tests with the new setup:
- When you want to run your tests against the current snapshot files, run the tests as usual: `yarn test`
- When you want to run your tests against a live API, but not record any snapshots: `yarn test-integration`
- When you want to run your tests against a live API, and record snapshots: `yarn test-record`
Note: Before running against a live API (`yarn test-integration` and `yarn test-record`), you need to boot up the infrastructure and ensure that the nomad package is available with the correct test configuration:

- Have the docker infrastructure running: `docker-compose up`
- Have the nomad app+worker running with the config found in `nomad-FAIR/nomad-test.yaml`. This can be achieved e.g. with `export NOMAD_CONFIG=nomad-test.yaml; nomad admin run appworker`
- Activate the correct python virtual environment before running the tests with yarn (yarn will run the python functions that prepare the state).
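The three run modes could be dispatched inside the test helpers with something like the following sketch (the environment variable name and helper are assumptions, not the actual implementation):

```javascript
// Hypothetical dispatch on the test mode (env variable name is assumed):
// - 'mock'        -> replay responses from snapshot files (yarn test)
// - 'integration' -> talk to a live API, record nothing (yarn test-integration)
// - 'record'      -> talk to a live API and write new snapshots (yarn test-record)
function resolveAPIMode(env = process.env) {
  const mode = env.TEST_MODE || 'mock'
  if (!['mock', 'integration', 'record'].includes(mode)) {
    throw new Error(`Unknown test mode: ${mode}`)
  }
  return {
    useLiveAPI: mode !== 'mock',
    writeSnapshot: mode === 'record'
  }
}
```

Defaulting to the mock mode keeps the plain `yarn test` path (and the CI pipeline) independent of any running backend.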