
Refactor everest2ropt control parsing #9797

Closed
verveerpj wants to merge 0 commits into main from refactor-control-parsing

Conversation

verveerpj
Contributor

Issue
The parsing code in everest2ropt for the controls is extremely convoluted, due to the nested structure of the controls section in the configuration. This PR simplifies the parsing code by introducing an intermediate class that stores the controls and their properties in a linear fashion. The relevant code in everest2ropt is rewritten accordingly, becoming much easier to understand.

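To make the intent concrete, here is a minimal sketch of such a flattened intermediate representation (hypothetical names and config fields; the actual class introduced by this PR may differ):

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical sketch only: one flat record per variable, so everest2ropt
# can iterate linearly instead of walking the nested controls section.
@dataclass
class FlatControl:
    name: tuple[str, ...]  # ("group", "variable") or ("group", "variable", "index")
    initial_guess: float
    lower_bound: float | None
    upper_bound: float | None

def flatten_controls(controls: list[dict[str, Any]]) -> list[FlatControl]:
    """Expand the nested controls section into one flat entry per variable."""
    flat: list[FlatControl] = []
    for group in controls:
        for var in group["variables"]:
            # A variable may expand into several entries via an index list;
            # index-less variables produce a single entry.
            for idx in var.get("index", [None]):
                name = (group["name"], var["name"])
                if idx is not None:
                    name += (str(idx),)
                flat.append(
                    FlatControl(
                        name=name,
                        initial_guess=var["initial_guess"],
                        # Variable-level bounds fall back to group-level ones.
                        lower_bound=var.get("min", group.get("min")),
                        upper_bound=var.get("max", group.get("max")),
                    )
                )
    return flat
```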
  • PR title captures the intent of the changes, and is fitting for release notes.
  • Added appropriate release note label
  • Commit history is consistent and clean, in line with the contribution guidelines.
  • Make sure unit tests pass locally after every commit (git rebase -i main --exec 'pytest tests/ert/unit_tests -n auto -m "not integration_test"')

When applicable

  • When there are user facing changes: Updated documentation
  • New behavior or changes to existing untested code: Ensured that unit tests are added (See Ground Rules).
  • Large PR: Prepare changes in small commits for more convenient review
  • Bug fix: Add regression test for the bug
  • Bug fix: Create Backport PR to latest release

@verveerpj verveerpj self-assigned this Jan 17, 2025
@verveerpj verveerpj marked this pull request as draft January 17, 2025 16:56
@verveerpj verveerpj added the everest and release-notes:refactor ("PR changes code without changing ANY (!) behavior.") labels Jan 17, 2025

codspeed-hq bot commented Jan 17, 2025

CodSpeed Performance Report

Merging #9797 will not alter performance

Comparing refactor-control-parsing (3632cb5) with main (5ece8ad)

Summary

✅ 24 untouched benchmarks

@verveerpj verveerpj force-pushed the refactor-control-parsing branch 7 times, most recently from 6bc3ce1 to 2ee607c on January 20, 2025 08:31

codecov-commenter commented Jan 20, 2025

❌ 11 Tests Failed:

Tests completed  Failed  Passed  Skipped
2849             11      2838    117
Top 3 failed tests by shortest run time:
tests/everest/test_samplers.py::test_sampler_uniform
Stack Traces | 0.193s run time
self = <ropt_dakota.dakota.DakotaOptimizer object at 0x7f9e65d7ede0>
initial_values = array([0.25, 0.25, 0.25])

    def _start_direct_interface(self, initial_values: NDArray[np.float64]) -> None:
        driver = _DakotaDriver(
            self._config.optimizer,
            self._optimizer_callback,
            self._constraint_indices,
            self._get_inputs(initial_values),
        )
        try:
>           driver.run_dakota(
                infile="dakota_Input.in",
                stdout="Report.txt",
                stderr="dakota_errors.txt",
            )

.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:354: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:437: in run_dakota
    run_dakota(infile, stdout, stderr, restart, throw_on_error)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

infile = 'dakota_Input.in', stdout = 'Report.txt', stderr = 'dakota_errors.txt'
restart = 0, throw_on_error = True

    def run_dakota(infile, stdout=None, stderr=None, restart=0, throw_on_error=True):
        """
        Run DAKOTA with the configuration file as provided as first argument 'infile'.
    
        `stdout` and `stderr` can be used to direct their respective DAKOTA
        stream to a filename.
    
        Set dakota in restart mode if restart is equal to 1
    
        :param infile: The name of the configuration file
        :type infile: str
        :param stdout: The stream where to redirect standard output
        :param stderr: The stream where to redirect standard error
        :param restart: The flag that tells dakota whether to restart or not.
        If set to 1 than dakota will be started in restart mode. Dakota will
        expect in this case the restart file dakota.rst to be present in the working directory
        :type restart: int
        :param throw_on_error: Dakota throws on error instead of aborting
        """
    
        # Checking for a Python exception via sys.exc_info() doesn't work, for
        # some reason it always returns (None, None, None).  So instead we pass
        # an object down and if an exception is thrown, the C++ level will fill
        # it with the exception information so we can re-raise it.
        err = 0
        exc = _ExcInfo()
        err = carolina.run_dakota(infile,
                                  stdout,
                                  stderr,
                                  exc,
                                  restart,
                                  throw_on_error)
    
        # Check for errors. We'll get here if Dakota::abort_mode has been set to
        # throw an exception rather than shut down the process.
        if err:
            if exc.type is None:
>               raise RuntimeError('DAKOTA analysis failed')
E               RuntimeError: DAKOTA analysis failed

.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12/site-packages/dakota.py:224: RuntimeError

The above exception was the direct cause of the following exception:

copy_math_func_test_data_to_tmp = None
evaluator_server_config_generator = <function evaluator_server_config_generator.<locals>.create_evaluator_server_config at 0x7f9e659c59e0>

    def test_sampler_uniform(
        copy_math_func_test_data_to_tmp, evaluator_server_config_generator
    ):
        config = EverestConfig.load_file(CONFIG_FILE_ADVANCED)
        config.controls[0].sampler = SamplerConfig(**{"method": "uniform"})
    
        run_model = EverestRunModel.create(config)
        evaluator_server_config = evaluator_server_config_generator(run_model)
>       run_model.run_experiment(evaluator_server_config)

.../tests/everest/test_samplers.py:18: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ert/run_models/everest_run_model.py:220: in run_experiment
    optimizer_exit_code = optimizer.run().exit_code
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/plan/_basic_optimizer.py:229: in run
    plan.run()
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/plan/_plan.py:114: in run
    self.run_steps(self._steps)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/plan/_plan.py:196: in run_steps
    task.run()
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../plugins/plan/_optimizer.py:208: in run
    exit_code = ensemble_optimizer.start(variables)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/optimization/_optimizer.py:145: in start
    self._optimizer.start(variables)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:76: in start
    self._start(initial_values)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:341: in _start
    self._start_direct_interface(initial_values)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:361: in _start_direct_interface
    raise driver.exception from err
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12/site-packages/dakota.py:288: in dakota_callback
    return driver.dakota_callback(**kwargs)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:392: in dakota_callback
    function_result, gradient_result = _compute_response(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12...................../site-packages/ropt_dakota/dakota.py:478: in _compute_response
    functions, gradients = optimizer_callback(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/optimization/_optimizer.py:186: in _optimizer_callback
    results = self._run_evaluations(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/optimization/_optimizer.py:259: in _run_evaluations
    results = self._function_evaluator.calculate(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/ensemble_evaluator/_ensemble_evaluator.py:157: in calculate
    return self._calculate_both(variables, self._config.variables.indices)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/ensemble_evaluator/_ensemble_evaluator.py:355: in _calculate_both
    f_eval_results, g_eval_results = _get_function_and_gradient_results(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ropt/ensemble_evaluator/_evaluator_results.py:203: in _get_function_and_gradient_results
    evaluator_result = evaluator(
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ert/run_models/everest_run_model.py:379: in _forward_model_evaluator
    self._setup_sim(sim_id, controls, ensemble)
.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ert/run_models/everest_run_model.py:515: in _setup_sim
    _check_suffix(ext_config, var_name, var_setting)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

ext_config = ExtParamConfig(name='point', forward_init=False, update=False, input_keys={'x': ['0', '1', '2']}, output_file='point.json', forward_init_file='')
key = 'x', assignment = {0: 0.25, 1: 0.25, 2: 0.25}

    def _check_suffix(
        ext_config: ExtParamConfig,
        key: str,
        assignment: dict[str, Any] | tuple[str, str] | str | int,
    ) -> None:
        if key not in ext_config:
            raise KeyError(f"No such key: {key}")
        if isinstance(assignment, dict):  # handle suffixes
            suffixes = ext_config[key]
            if len(assignment) != len(suffixes):
                missingsuffixes = set(suffixes).difference(set(assignment.keys()))
                raise KeyError(
                    f"Key {key} is missing values for "
                    f"these suffixes: {missingsuffixes}"
                )
            for suffix in assignment:
                if suffix not in suffixes:
>                   raise KeyError(
                        f"Key {key} has suffixes {suffixes}. "
                        f"Can't find the requested suffix {suffix}"
                    )
E                   KeyError: "Key x has suffixes ['0', '1', '2']. Can't find the requested suffix 0"

.../hostedtoolcache/Python/3.12.8...................................................................../x64/lib/python3.12.../ert/run_models/everest_run_model.py:490: KeyError
tests/everest/test_api_snapshots.py::test_api_summary_snapshot[config_advanced.yml]
Stack Traces | 0.216s run time
(identical `RuntimeError: DAKOTA analysis failed` trace as in the first failure above)

The above exception was the direct cause of the following exception:

test_data_case = 'math_func/config_advanced.yml'

    def run_config(test_data_case: str):
        if cache.get(f"cached_example:{test_data_case}", None) is None:
            my_tmpdir = Path(tempfile.mkdtemp())
            config_path = (
                Path(__file__) / f"../../../test-data/everest/{test_data_case}"
            ).resolve()
            config_file = config_path.name
    
            shutil.copytree(config_path.parent, my_tmpdir / "everest")
            config = EverestConfig.load_file(my_tmpdir / "everest" / config_file)
            run_model = EverestRunModel.create(config)
            evaluator_server_config = evaluator_server_config_generator(run_model)
            try:
>               run_model.run_experiment(evaluator_server_config)

.../tests/everest/conftest.py:176: 
(identical chain of frames and `_check_suffix` KeyError as in the first failure above: "Key x has suffixes ['0', '1', '2']. Can't find the requested suffix 0")

The above exception was the direct cause of the following exception:

config_file = 'config_advanced.yml'
snapshot = <pytest_snapshot.plugin.Snapshot object at 0x7f9e681ee600>
cached_example = <function cached_example.<locals>.run_config at 0x7f9e67e2db20>

    @pytest.mark.parametrize(
        "config_file",
        ["config_advanced.yml", "config_minimal.yml", "config_multiobj.yml"],
    )
    def test_api_summary_snapshot(config_file, snapshot, cached_example):
>       config_path, config_file, _ = cached_example(f"math_func/{config_file}")

.../tests/everest/test_api_snapshots.py:85: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_data_case = 'math_func/config_advanced.yml'

    def run_config(test_data_case: str):
        if cache.get(f"cached_example:{test_data_case}", None) is None:
            my_tmpdir = Path(tempfile.mkdtemp())
            config_path = (
                Path(__file__) / f"../../../test-data/everest/{test_data_case}"
            ).resolve()
            config_file = config_path.name
    
            shutil.copytree(config_path.parent, my_tmpdir / "everest")
            config = EverestConfig.load_file(my_tmpdir / "everest" / config_file)
            run_model = EverestRunModel.create(config)
            evaluator_server_config = evaluator_server_config_generator(run_model)
            try:
                run_model.run_experiment(evaluator_server_config)
            except Exception as e:
>               raise Exception(f"Failed running {config_path} with error: {e}") from e
E               Exception: Failed running .../everest/math_func/config_advanced.yml with error: "Key x has suffixes ['0', '1', '2']. Can't find the requested suffix 0"

.../tests/everest/conftest.py:178: Exception
tests/everest/test_samplers.py::test_sampler_mixed
Stack Traces | 0.308s run time
(identical `RuntimeError: DAKOTA analysis failed` trace as in the first failure above)

The above exception was the direct cause of the following exception:

copy_math_func_test_data_to_tmp = None
evaluator_server_config_generator = <function evaluator_server_config_generator.<locals>.create_evaluator_server_config at 0x7f9e659ae3e0>

    def test_sampler_mixed(
        copy_math_func_test_data_to_tmp, evaluator_server_config_generator
    ):
        config = EverestConfig.load_file(CONFIG_FILE_ADVANCED)
        config.controls[0].variables[0].sampler = SamplerConfig(**{"method": "uniform"})
        config.controls[0].variables[1].sampler = SamplerConfig(**{"method": "norm"})
        config.controls[0].variables[2].sampler = SamplerConfig(**{"method": "uniform"})
    
        run_model = EverestRunModel.create(config)
        evaluator_server_config = evaluator_server_config_generator(run_model)
>       run_model.run_experiment(evaluator_server_config)

.../tests/everest/test_samplers.py:51: 
(identical chain of frames and `_check_suffix` KeyError as in the first failure above)
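All three failures end in the same KeyError, and the trace locals suggest why: the registered suffixes are the strings '0', '1', '2', while the assignment dict produced by the refactored parsing uses integer keys. A minimal illustration (a plausible reading of the traces above, not code from this PR):

```python
# Reproduces the membership check that fails in _check_suffix above.
suffixes = ["0", "1", "2"]                 # suffixes stored as strings
assignment = {0: 0.25, 1: 0.25, 2: 0.25}   # integer keys from the parser

for suffix in assignment:
    if suffix not in suffixes:  # 0 (int) never equals "0" (str)
        raise KeyError(
            f"Key x has suffixes {suffixes}. "
            f"Can't find the requested suffix {suffix}"
        )
```

Coercing the keys to strings, e.g. `{str(k): v for k, v in assignment.items()}`, would make the two sides consistent.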


@verveerpj verveerpj force-pushed the refactor-control-parsing branch 3 times, most recently from 5598aef to 3632cb5 on January 20, 2025 10:41
@verveerpj verveerpj closed this Jan 20, 2025
@verveerpj verveerpj force-pushed the refactor-control-parsing branch from 3632cb5 to 941a21b on January 20, 2025 10:43
Labels
everest, release-notes:refactor ("PR changes code without changing ANY (!) behavior.")
Projects
Status: Done

2 participants