I downloaded the diff_sorp data with the command

python3 download_direct.py --root_folder ~/Documents/sandbox --pde_name diff_sorp

which produced a file called 1D_diff-sorp_NA_NA.h5. The root_folder above was clearly wrong, so I manually moved the downloaded file into pdebench/data.

I then ran

CUDA_VISIBLE_DEVICES='0' python3 train_models_forward.py +args=config_Adv.yaml ++args.filename='1D_diff-sorp_NA_NA.h5' ++args.model_name='FNO'

This raised an AssertionError stating that HDF5 data was assumed; the assertion comes from the utils.py file linked below.

I then renamed the file's .h5 extension to .hdf5, and changed h5 to hdf5 in the command above accordingly. That produced the much longer error message attached at the bottom.

First, I wanted to clarify an ambiguity: the file I downloaded has the extension "h5", yet the FNO model and the example code in run_forward_1D.sh appear to assume the extension "hdf5". What was the intended approach here?

Secondly, do you know what could be causing the error below?
Using backend: tensorflow.compat.v1
WARNING:tensorflow:From /home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/tensorflow/python/compat/v2_compat.py:111: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/tensorflow/python/compat/v2_compat.py:111: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/deepxde/nn/initializers.py:118: The name tf.keras.initializers.he_normal is deprecated. Please use tf.compat.v1.keras.initializers.he_normal instead.
WARNING:tensorflow:From /home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/deepxde/nn/initializers.py:118: The name tf.keras.initializers.he_normal is deprecated. Please use tf.compat.v1.keras.initializers.he_normal instead.
/home/ton070/Documents/PDEBench/pdebench/models/train_models_forward.py:164: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_path="config", config_name="config")
/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
FNO
Epochs = 500, learning rate = 0.001, scheduler step = 100, scheduler gamma = 0.5
FNODatasetSingle
Error executing job with overrides: ['+args=config_Adv.yaml', '++args.filename=1D_diff-sorp_NA_NA.hdf5', '++args.model_name=FNO']
An error occurred during Hydra's exception formatting:
AssertionError()
Traceback (most recent call last):
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 254, in run_and_report
assert mdl is not None
AssertionError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ton070/Documents/PDEBench/pdebench/models/train_models_forward.py", line 249, in <module>
main()
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/main.py", line 90, in decorated_main
_run_hydra(
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 389, in _run_hydra
_run_app(
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 452, in _run_app
run_and_report(
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 296, in run_and_report
raise ex
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 213, in run_and_report
return func()
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/utils.py", line 453, in <lambda>
lambda: hydra.run(
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 132, in run
_ = ret.return_value
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "/home/ton070/Documents/PDEBench/pdebench/models/train_models_forward.py", line 168, in main
run_training_FNO(
File "/home/ton070/Documents/PDEBench/pdebench/models/fno/train.py", line 67, in run_training
train_data = FNODatasetSingle(flnm,
File "/home/ton070/Documents/PDEBench/pdebench/models/fno/utils.py", line 190, in __init__
_data = np.array(f['density'], dtype=np.float32) # batch, time, x,...
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/ton070/miniconda3/envs/pdebench/lib/python3.9/site-packages/h5py/_hl/group.py", line 328, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'density' doesn't exist)"
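For reference, the final KeyError means the file simply has no top-level 'density' dataset, so the question is what keys the file actually contains. The snippet below shows how I listed them with h5py; it builds a tiny stand-in file so it runs anywhere (the per-sample "0000/data" layout in the stand-in is only my guess at what the diff-sorp file might look like, not documented PDEBench structure; with the real data you would open 1D_diff-sorp_NA_NA.hdf5 instead):

```python
import h5py
import numpy as np

# Stand-in file so this snippet is self-contained; the group layout is a
# guess prompted by the KeyError above, not the official PDEBench layout.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("0000/data", data=np.zeros((101, 1024), dtype=np.float32))

names = []
with h5py.File("demo.h5", "r") as f:
    f.visit(names.append)  # collect every group/dataset path in the file

print(names)  # for this stand-in file: ['0000', '0000/data'] -- no 'density'
```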
The code I've been using/reading came from:
https://github.com/pdebench/PDEBench/tree/main/pdebench/data_download
https://github.com/pdebench/PDEBench/blob/main/pdebench/models/run_forward_1D.sh
https://github.com/pdebench/PDEBench/blob/main/pdebench/models/fno/utils.py
(OS: Ubuntu 22.04)
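As a side note, in case the file really does use per-sample numeric groups rather than a top-level 'density' dataset, something like the following could read it. This is purely a sketch of mine: load_samples is a hypothetical name, and the "NNNN/data" group layout is an assumption, not confirmed PDEBench structure.

```python
import h5py
import numpy as np

def load_samples(path):
    """Stack per-sample groups ('0000', '0001', ...) into one array.

    Assumes each numeric top-level group holds a 'data' dataset -- an
    assumption based on the KeyError above, not on documented layout.
    """
    with h5py.File(path, "r") as f:
        keys = sorted(k for k in f.keys() if k.isdigit())
        return np.stack(
            [np.asarray(f[k]["data"], dtype=np.float32) for k in keys]
        )
```

If the layout guess is right, the result would have shape (n_samples, time, x).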