Writing and Running Tests#

Ideally, three types of tests should be written for new functionality: unit, regression, and integration, each of which is covered later in this section. Each test type should cover as much of the written code as possible. New tests should be decorated with one of the following markers so that code maintainers can track test coverage across the test types: @pytest.mark.unit, @pytest.mark.regression, or @pytest.mark.integration. In practice this looks like:

@pytest.mark.unit
def test_simple_numeric_conversion():
    ... # test contents

For further information on how to write or run a test, please see the pytest documentation, which outlines many useful features for both writing and running tests.
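Custom markers like these must be registered with pytest to avoid unknown-marker warnings. Assuming the project configures pytest through pyproject.toml (an assumption for illustration; the descriptions below are paraphrased, not the project's actual text), the registration looks like:

```toml
[tool.pytest.ini_options]
markers = [
    "unit: isolated tests of individual functions and classes",
    "regression: tests that pin down numerical results",
    "integration: tests that combine multiple components",
]
```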

When running tests (or building the docs), OpenMDAO produces a significant number of output files and folders, which can be cleaned up using openmdao clean. This command prompts you to confirm each folder, so if you don't need to review the OpenMDAO output files, they can be universally wiped without prompts using the -f flag. Use --help for further usage instructions.
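For example, using the openmdao CLI described above:

```shell
openmdao clean --help  # show all cleaning options
openmdao clean -f      # wipe OpenMDAO output folders without confirmation prompts
```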

Chunking Lengthy or Complex Tests#

At times tests can become lengthy because a single test function may require many checks. In these cases it is helpful to chunk the checks into subtests, which allows the whole test to run and report all failures at the end rather than stopping at the first unsuccessful check. This is especially helpful when testing complex logic, where the other subtests can provide insight into the nature of a failure. For this we use the subtests fixture from the pytest-subtests plugin. The example below shows the most basic usage to get started.

# h2integrate/core/test/test_utilities.py
@pytest.mark.unit
def test_BaseConfig(subtests):
    """Tests the BaseConfig class."""

    with subtests.test("Check basic passing inputs"):
        demo = BaseDemoModel({"x": 1})
        assert demo.config.x == 1
        assert demo.config.y == "y"

    with subtests.test("Check allowed inputs overload"):
        demo = BaseDemoModel({"x": 1, "z": 2})
        assert demo.config.x == 1
        assert demo.config.y == "y"

    ... # rest of the test

Unit Tests#

Unit tests should test the correctness of the code in isolation from the rest of the system. At a minimum, this involves testing data handling, utility methods, setup, and error handling. Care should also be taken to test around the edges of the program to ensure the code cannot silently produce erroneous results or fail unexpectedly.
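As a minimal sketch of that principle, the converter below is hypothetical (not part of H2Integrate); the point is that a unit test should exercise boundary inputs and failure modes, not just the happy path:

```python
import pytest


def to_kilowatts(watts):
    """Hypothetical helper used only for illustration."""
    if watts < 0:
        raise ValueError("power must be non-negative")
    return watts / 1e3


@pytest.mark.unit
def test_to_kilowatts_edges():
    # happy path
    assert to_kilowatts(2.0e3) == 2.0
    # boundary: zero is valid input, not an error
    assert to_kilowatts(0) == 0.0
    # failure mode: negative power should raise rather than silently return
    with pytest.raises(ValueError):
        to_kilowatts(-1.0)
```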

Run pytest -m unit to run only the unit test suite, or pytest -m "not unit" to skip the unit tests.

An example unit test is shown below; it validates only the location of the output directory and subdirectory, not the contents of the files within them.

@pytest.mark.unit
def test_check_resource_dir_no_dir(subtests):
    output_dir = check_resource_dir()
    with subtests.test("No resource_dir, no resource_subdir"):
        assert output_dir == RESOURCE_DEFAULT_DIR

    output_dir = check_resource_dir(resource_subdir="wind")
    with subtests.test("No resource_dir, with resource_subdir"):
        expected_output_dir = RESOURCE_DEFAULT_DIR / "wind"
        assert output_dir == expected_output_dir

Regression Tests#

In an analysis-focused code base, regression tests should check the results of running the code to ensure that changes do not alter expected results. Where possible, a regression test should not encapsulate more than the system under test.

Run pytest -m regression to run only the regression test suite, or pytest -m "not regression" to skip the regression tests.

An example regression test is shown below, where the model's outputs are checked for stability within a tolerance.

@pytest.mark.regression
@pytest.mark.parametrize("save", [False])
def test_doc_standard_outputs(driver_config, plant_config, tech_config, subtests):
    doc_model = DOCPerformanceModel(
        driver_config=driver_config, plant_config=plant_config, tech_config=tech_config
    )
    prob = om.Problem(model=om.Group())
    prob.model.add_subsystem("comp", doc_model, promotes=["*"])
    prob.setup()
    rng = np.random.default_rng(seed=42)
    base_power = np.linspace(3.0e8, 2.0e8, 8760)  # 300 MW down to 200 MW over 8760 hours
    noise = rng.normal(loc=0, scale=0.5e8, size=8760)  # noise with a 50 MW standard deviation
    power_profile = base_power + noise
    prob.set_val("comp.electricity_in", power_profile, units="W")

    # Run the model
    prob.run_model()

    with subtests.test("co2 captured mtpy == annual co2 produced"):
        assert (
            pytest.approx(prob.get_val("comp.co2_capture_mtpy", units="t/yr")[0], rel=1e-6)
            == prob.get_val("comp.annual_co2_produced", units="t/yr")[0]
        )

    annual_co2_from_cf_calc = (
        prob.get_val("comp.capacity_factor", units="unitless")
        * prob.get_val("comp.rated_co2_production", units="t/h")
        * 8760
    )

    with subtests.test("CF calculated properly"):
        assert (
            pytest.approx(annual_co2_from_cf_calc[0], rel=1e-6)
            == prob.get_val("comp.co2_capture_mtpy", units="t/yr")[0]
        )
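The capacity-factor subtest above encodes a simple identity: annual production equals capacity factor times rated hourly production times 8760 hours. With illustrative stand-in numbers (not real model outputs), the check reduces to:

```python
import math

# Illustrative values only; in the real test these come from prob.get_val(...).
capacity_factor = 0.85        # unitless
rated_co2_production = 12.0   # t/h
hours_per_year = 8760

annual_co2_from_cf = capacity_factor * rated_co2_production * hours_per_year  # t/yr

# The regression subtest then compares the model-reported annual value
# against this reconstruction within a relative tolerance.
model_reported = 89352.0  # t/yr, stand-in for comp.co2_capture_mtpy
assert math.isclose(annual_co2_from_cf, model_reported, rel_tol=1e-6)
```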


Integration Tests#

Integration tests should test the integration of the new code into other systems in the code base. For a new model or technology, this will run it in conjunction with other technologies or models and, similar to regression tests, will test the results of that code, ensuring the combination of multiple components does not cause unexpected changes in any of the involved components.

An example would be to test that a model configuration that contributed to a publication always produces the same results, ensuring the legitimacy of those results and the underlying modeled systems.

Run pytest -m integration to run only the integration test suite, or pytest -m "not integration" to skip the integration tests.

For examples of integration tests, please see the examples/test/test_all_examples.py module, where multiple components are combined to test the model outputs for stability when tools are used in conjunction with each other.

Test coverage#

To measure the code coverage of the testing suite, we use the pytest-cov package. To produce a coverage report in the terminal after the tests complete, simply run pytest as you normally would with --cov=h2integrate appended to the command.

Two additional helpful options are --cov-report=html, which produces a detailed HTML report in htmlcov/ that can be viewed in the browser (open /path/to/H2INTEGRATE/htmlcov/index.html), and --no-cov-on-fail, which skips the coverage report if a test fails. More options exist, and the highlighted ones can be modified, for example to create a coverage report for a specific folder or file (e.g., --cov=h2integrate/core/utilities), which is especially helpful when developing tests for a new module.
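Putting those options together, typical invocations look like the following (both assume they are run from the repository root):

```shell
# terminal summary plus a browsable HTML report, skipped if any test fails
pytest --cov=h2integrate --cov-report=html --no-cov-on-fail

# coverage scoped to a single module while developing its tests
pytest --cov=h2integrate/core/utilities
```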

Shared Fixtures#

In each test directory (or even subdirectory), a variety of common fixtures are provided in a conftest.py. You may also notice that some top-level configuration and fixtures are imported in subsequent conftest.py files. These fixtures and common setups enable streamlined setup and teardown for individual tests.

In general, it is highly encouraged to define general fixtures that can be reused many times, parameterizing them rather than writing single-use fixtures. In the example below (taken from h2integrate/resource/test/conftest.py), the timezone argument can be parameterized by individual tests. In this particular example only two variations are actually used; however, writing the fixture once provides a consistent, simplified setup and easier-to-maintain tests.

@pytest.fixture
def plant_simulation(timezone):
    plant = {
        "plant_life": 30,
        "simulation": {
            "dt": 3600,
            "n_timesteps": 8760,
            "start_time": "01/01/1900 00:30:00",
            "timezone": timezone,
        },
    }
    return plant
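The mechanism that makes this work is pytest's fixture override: parametrizing an argument with the same name as a fixture replaces that fixture's value, and the new value flows through any fixture that depends on it. A stripped-down sketch (the fixtures here are simplified stand-ins, not the project's real conftest.py):

```python
import pytest


@pytest.fixture
def timezone():
    return 0  # default value; individual tests override it via parametrize


@pytest.fixture
def plant_simulation(timezone):
    # simplified stand-in for the real plant_simulation fixture
    return {"simulation": {"timezone": timezone}}


# Parametrizing "timezone" overrides the fixture of the same name,
# so each value flows through plant_simulation automatically.
@pytest.mark.parametrize("timezone", [-6, 0])
def test_timezone_flows_through(plant_simulation, timezone):
    assert plant_simulation["simulation"]["timezone"] == timezone
```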


The plant_simulation fixture (this variation is for h2integrate/resource) can now be used in the solar resource tests, as in the example below from h2integrate/resource/solar/test/test_resource_models.py, where test_nrel_solar_resource_file_downloads uses a combination of parameterizations, including the plant_simulation fixture.

# fmt: off
@pytest.mark.unit
@pytest.mark.parametrize(
    "model,which,lat,lon,resource_year,model_name,timezone",
    [
        ("GOESAggregatedSolarAPI", "solar", 34.22, -102.75, 2012, "goes_aggregated_v4", 0),
        ("Himawari7SolarAPI", "solar", -27.3649, 152.67935, 2013, "himawari7_v3", 0),
        ("Himawari8SolarAPI", "solar", 3.25735, 101.656312, 2020, "himawari8_v3", 0),
        ("HimawariTMYSolarAPI", "solar", -27.3649, 152.67935, "tmy-2020", "himawari_tmy_v3", 0),
        ("MeteosatPrimeMeridianSolarAPI", "solar", 41.9077, 12.4368, 2008, "nsrdb_msg_v4", 0),
        ("MeteosatPrimeMeridianTMYSolarAPI", "solar", -27.3649, 152.67935, "tmy-2022", "himawari_tmy_v3", 0),  # noqa: E501
    ],
    ids=[
        "GOESAggregatedSolarAPI",
        "Himawari7SolarAPI",
        "Himawari8SolarAPI",
        "HimawariTMYSolarAPI",
        "MeteosatPrimeMeridianSolarAPI",
        "MeteosatPrimeMeridianTMYSolarAPI",
    ]
)
# fmt: on
def test_nrel_solar_resource_file_downloads(
    subtests,
    plant_simulation,
    site_config,
    model,
    which,
    lat,
    lon,
    resource_year,
    model_name,
):
    file_resource_year = None
    if model == "MeteosatPrimeMeridianTMYSolarAPI" and resource_year == "tmy-2022":
        file_resource_year = "tmy-2020"
    plant_config = {
        "site": site_config,
        "plant": plant_simulation,
    }

Using temporary directories to avoid saving output data#

For tests that utilize caching (similar to HOPP) or produce non-OpenMDAO outputs (i.e., plots, data, etc.), the temp_dir fixture should be utilized for two reasons.

  1. The temp_dir fixture successfully removes the temporarily created files after running a module, including if a test fails.

  2. It avoids locally saving and manually removing example data or tested output files.
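As a minimal sketch of the pattern, the test below uses pytest's built-in tmp_path fixture as a stand-in for the project's temp_dir; the directory is created fresh for each test and pruned automatically by pytest, even when the test fails:

```python
import pytest


@pytest.mark.unit
def test_writes_into_temporary_directory(tmp_path):
    # tmp_path behaves like temp_dir here: nothing written inside it
    # persists in the repository after the test run.
    out_file = tmp_path / "results.csv"
    out_file.write_text("t,co2\n0,1.0\n")
    assert out_file.read_text().startswith("t,co2")
```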

temp_dir can be incorporated into anything that accepts fixtures (i.e., other fixtures and tests). In the first example, we pass temp_dir to the driver configuration fixture so that the outputs are not stored locally where they would need to be manually cleaned, and the common setup can be recycled for all applicable tests.

@pytest.fixture(scope="module")
def driver_config(temp_dir):  # noqa: F811
    driver_config = {
        "general": {
            "folder_output": str(temp_dir),
        },
    }
    return driver_config

In the second example, we pass the fixture to another test to show that we can still access the output data and work with it.

@pytest.mark.unit
def test_unsupported_simulation_parameters(temp_dir):
    orig_plant_config = EXAMPLE_DIR / "01_onshore_steel_mn" / "plant_config.yaml"
    temp_plant_config_ntimesteps = temp_dir / "temp_plant_config_ntimesteps.yaml"
    temp_plant_config_dt = temp_dir / "temp_plant_config_dt.yaml"

    shutil.copy(orig_plant_config, temp_plant_config_ntimesteps)
    shutil.copy(orig_plant_config, temp_plant_config_dt)

    # Load the plant_config YAML content
    plant_config_data_ntimesteps = load_plant_yaml(temp_plant_config_ntimesteps)
    plant_config_data_dt = load_plant_yaml(temp_plant_config_dt)

The other feature for working more extensively with examples is the temp_copy_of_example fixture located in examples/test/test_all_examples.py. This fixture creates a temporary copy of an example so that it can be run exactly as it is included in the examples directory. The example below demonstrates how to use the fixture while still having access to all of the example's outputs during the test.

@pytest.mark.integration
@pytest.mark.parametrize("example_folder,resource_example_folder", [("01_onshore_steel_mn", None)])
def test_steel_example(subtests, temp_copy_of_example):
    example_folder = temp_copy_of_example

    # Create a H2Integrate model
    model = H2IntegrateModel(example_folder / "01_onshore_steel_mn.yaml")