Tutorial notebooks from simex_notebooks #163
Running the tutorial notebook: the photon source file is not available, but a test file can be downloaded from https://zenodo.org/record/888853/files/s2e_tutorial_example_source_input.h5. The notebook then runs fine until the photon-matter interaction step:

```
photon_matter_interactor.backengine(); photon_matter_interactor.saveH5()
```
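For reference, the test file above can be fetched programmatically; a minimal sketch (the helper name and default local filename are my own, not part of SimEx):

```python
import os
import urllib.request

# URL from the comment above; the Zenodo record hosts the example source file.
ZENODO_URL = ("https://zenodo.org/record/888853/files/"
              "s2e_tutorial_example_source_input.h5")

def fetch_example_source(url=ZENODO_URL, dest=None):
    """Download the example FEL source file if it is not already present."""
    dest = dest or os.path.basename(url)  # default: keep the remote name
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)  # requires network access
    return dest
```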
Previous module: simex_notebooks

```
NOT: data
NOT: history
info
misc
params
version
['arrEhor', 'arrEver']
[('arrEhor', <HDF5 dataset "arrEhor": shape (42, 84, 258, 2), type "<f4">),
 ('arrEver', <HDF5 dataset "arrEver": shape (42, 84, 258, 2), type "<f4">)]
```
```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-27-37c843a80bc5> in <module>()
----> 1 photon_matter_interactor.backengine(); photon_matter_interactor.saveH5()

/data/netapp/s2e/simex/lib/python3.4/SimEx/Calculators/XMDYNDemoPhotonMatterInteractor.py in backengine(self)
    187         pmi_demo.f_init_random()
    188         pmi_demo.f_save_info()
--> 189         pmi_demo.f_load_pulse( pmi_demo.g_s2e['prop_out'] )
    190
    191         # Get file extension.

/data/netapp/s2e/simex/lib/python3.4/SimEx/Calculators/XMDYNDemoPhotonMatterInteractor.py in f_load_pulse(self, a_prop_out)
    501         sel_x = self.g_s2e['pulse']['nx'] // 2 ;
    502         sel_y = self.g_s2e['pulse']['ny'] // 2 ;
--> 503         sel_pixV = self.g_s2e['pulse']['arrEver'] [sel_x,sel_y,:,:]
    504         sel_pixH = self.g_s2e['pulse']['arrEhor'] [sel_x,sel_y,:,:]
    505         dt = ( self.g_s2e['pulse']['sliceMax'] - self.g_s2e['pulse']['sliceMin'] ) / ( self.g_s2e['pulse']['nSlices'] * 1.0 )

IndexError: index 42 is out of bounds for axis 0 with size 42
```

It seems that something is wrong with the slicing order. ping @CFGrote
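The out-of-bounds index is consistent with the grid metadata (`nx`, `ny`) being read in the opposite order to the dataset layout; a minimal numpy sketch of the failure mode, using the shapes from the listing above (variable names mirror the traceback, not the actual SimEx internals):

```python
import numpy as np

# Dataset shape from the listing above: (42, 84, 258, 2).
arrEver = np.zeros((42, 84, 258, 2), dtype=np.float32)

# If the grid metadata is read transposed, nx = 84 and the midpoint
# index 84 // 2 = 42 falls exactly one past the end of axis 0 (size 42).
nx, ny = 84, 42
try:
    arrEver[nx // 2, ny // 2, :, :]
except IndexError as err:
    print(err)  # index 42 is out of bounds for axis 0 with size 42

# Taking nx, ny from the array itself avoids the mismatch:
nx, ny = arrEver.shape[:2]
sel_pixV = arrEver[nx // 2, ny // 2, :, :]  # shape (258, 2)
```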
After #166 is merged, we can upload a new example notebook that uses the new example FEL source file.
Multiple errors on running the notebook. If run manually line by line, it initially looks OK, at least up to a certain line. The next line yields an error; the same error appears again later, and another error occurs when trying to plot. The propagator apparently does not create its output file. As an alternative, I loaded it from the tests directory and tried to visualise it. So it looks like there is a major problem with dependencies, with paths to libraries, or with the installation itself.
Your prop_out_*.h5 file has suspiciously few transverse grid points. How did you generate it? Most probably the propagation failed and all entries are 0 or NaN, hence min() and max() fail as well.
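A quick way to check for that failure mode before plotting is to scan the field datasets for non-finite values or an all-zero pulse; a hedged sketch (the `data/arrEhor` and `data/arrEver` paths follow the wavefront listing earlier in the thread and may need adjusting to your file's actual layout):

```python
import numpy as np
import h5py

def field_looks_sane(path, datasets=("data/arrEhor", "data/arrEver")):
    """Return False if any field dataset contains non-finite values
    or is identically zero (which would make min()/max() useless)."""
    with h5py.File(path, "r") as f:
        for name in datasets:
            arr = f[name][()]
            if not np.isfinite(arr).all():
                return False
            if np.abs(arr).max() == 0.0:
                return False
    return True
```

If this returns False for your prop_out file, the propagation itself, not the plotting, is the place to look.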
I installed git-lfs and loaded the .h5 file by git checkout (apparently no more errors while testing). Yet the problems remain. Maybe it's better to solve them one by one. I follow the example from the notebook (https://github.com/eucall-software/simex_notebooks/blob/master/start_to_end_demo.ipynb), omitting `# Cleanup previous run`. Why does the notebook break on the line `source_analysis.plotTotalPower(spectrum=True)`?

```
In [1]: # Import all SimEx modules
        import SimEx
        from SimEx import *
        source_analysis = XFELPhotonAnalysis(input_path="FELsource_out_0000001.h5")
Start initialization.
Loading wavefront from FELsource_out_0000001.h5. ... done.
Getting intensities. ... done.
Data dimensions = (104, 104, 651)
Masking NANs. ... done.

In [2]: source_analysis.plotTotalPower()
Plotting total power.
Pulse energy 0.00017 J
# image here is ok just as in the example!

In [3]: source_analysis.plotTotalPower(spectrum=True)
```

Here, a message appears and the notebook is down: `The kernel appears to have died. It will restart automatically`. Without jupyter, in plain bash, it's a `segmentation fault (core dumped)` error. The same happens for `prop_analysis.plotTotalPower(spectrum=True)`.
One more thought: are you using anaconda? If yes, this will very likely be the cause of your issue with wpg/srw. Either use native python3 or follow samoylv/WPG#117 (comment).

Regards, Carsten
[UPD] Thank you, Carsten, I'll try: I have no option for python3 on the cluster except anaconda.
The error in the notebook for
I ran the example several times, both on the cluster and on a laptop (docker container), and noticed that the impulse `/Tests/python/unittest/TestFiles/FELsource_out/FELsource_out_0000001.h5` differs from the one presented here. For instance, `Data dimensions = (21, 21, 550)` is much smaller (as was the case for `prop_out_0000001.h5` mentioned above). Could it be the cause of the error on `source_analysis.plotIntensityMap()`? Apparently the `.h5` files in `/Tests/python/unittest/TestFiles/` are somewhat deprecated.

[screenshots: https://user-images.githubusercontent.com/15617645/57291791-6c07e800-70c0-11e9-9cac-937c5b6bcb05.png and https://user-images.githubusercontent.com/15617645/57291815-7b873100-70c0-11e9-889e-8cde567a3117.png]
Hi Roman, thanks for reporting. I'll look into this later today.

Regards, Carsten
Hi Roman, best regards, Carsten
Hi Carsten,

Yeah, it would be great, as I encounter errors in multiple examples from the wiki, e.g. the very first example from a year-old workshop (https://github.com/eucall-software/simex_platform/wiki/Tutorial-on-nano-crystal-diffraction):

[screenshot: https://user-images.githubusercontent.com/15617645/57520127-3449ac00-731d-11e9-8f87-b92d1883236e.png]
Yep, I'm on it.

Carsten
Hi Roman, can you try again?
Hi, Carsten.
Hi, Carsten.

Apparently there's a major problem with the installation: none of the reported problems disappeared. The line from the notebook `In [9]: source_analysis.plotIntensityMap()` still doesn't work; an error occurs which ends with `RuntimeError: adjustable='datalim' is not allowed when both axes are shared` (I mentioned this in the very first comment). The further error is as previously: `propagator.backengine()` returns `255` and no `prop_out.h5` file is created:

```
In [11]: propagation_parameters = WavePropagatorParameters(beamline=exfel_spb_kb_beamline)
In [12]: propagator = XFELPhotonPropagator(parameters=propagation_parameters,
                                           input_path='5keV_9fs_2015_slice12_fromYoon2016.h5',
                                           output_path='prop_out.h5')
In [13]: propagator.backengine()
Out[13]: 255
```
Roman,

The first one is an issue in XFELPhotonAnalysis, due to a change in matplotlib. I fixed it in the develop branch but need to push and run through CI before pushing it to master. I'll let you know. In the meantime, you could just comment out the two lines where "datalim" appears.

The second: could you tell me as much as possible about your environment, both hardware (in particular RAM) and software (python version, C/C++ compiler, Fortran compiler, versions of the fftw, numpy, and blas libraries)? We have to narrow it down this way.

Regards, Carsten
Hi Carsten. Here is a description of the environment I use on the CIŚ cluster:

```
System: Scientific Linux CERN SLC release 6.8 (Carbon)
Memory (free).......: 46.18GB / 47.10GB (98 %)
```

```
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    1
Core(s) per socket:    12
Socket(s):             2
NUMA node(s):          4
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 9
Model name:            AMD Opteron(tm) Processor 6174
Stepping:              1
CPU MHz:               2199.930
BogoMIPS:              4400.08
Virtualization:        AMD-V
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
L3 cache:              5118K
NUMA node0 CPU(s):     0,2,4,6,8,10
NUMA node1 CPU(s):     12,14,16,18,20,22
NUMA node2 CPU(s):     13,15,17,19,21,23
NUMA node3 CPU(s):     1,3,5,7,9,11
```

Python:

```
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0] on linux
```

Since I use anaconda, I removed mkl, as you advised (unit tests would fail otherwise):

```
$ conda install nomkl numpy scipy scikit-learn numexpr
$ conda remove mkl mkl-service
```

gcc (I might have used a higher version for the compilation):

```
$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/mnt/opt/tools/slc6/gcc/4.6.3/libexec/gcc/x86_64-unknown-linux-gnu/4.6.3/lto-wrapper
Target: x86_64-unknown-linux-gnu
Configured with: ../gcc-4.6.3/configure --enable-languages=c,c++,fortran --prefix=/mnt/opt/tools/slc6/gcc/4.6.3
Thread model: posix
gcc version 4.6.3 (GCC)
```

Here is the list of other modules loaded on the cluster:

```
hdf5/1.8.17.patch1  blas/3.5.0-x86_64-gcc46  bzip2/1.0.6  cuda/4.2.9
lapack/3.5.0-x86_64-gcc46  boost/1.64.0-x86_64-gcc71  binutils/2.22
fftw/3.3.4-x86_64-gcc482  intel-composer/2016.2.181  # for mkl, but I turned it off
```

Full list of conda packages:

```
$ conda list
# packages in environment at /mnt/home/rshopa/miniconda3/envs/xfel:
# Name Version Build Channel
alabaster 0.7.12 py36_0 asn1crypto 0.24.0 py36_0 attrs 19.1.0 py36_1
babel 2.6.0 py36_0 backcall 0.1.0 py36_0 biopython 1.72 py36h04863e7_0
blas 2.4 openblas conda-forge bleach 3.1.0 py36_0 breathe 4.12.0 py_0
conda-forge ca-certificates 2019.5.15 0 certifi 2019.3.9 py36_0 cffi
1.12.2 py36h2e261b9_1 chardet 3.0.4 py36_1 cryptography 2.6.1
py36h1ba5d50_0 cycler 0.10.0 py36_0 cython 0.29.6 py36he6710b0_0 dbus
1.13.6 h746ee38_0 decorator 4.4.0 py_0 defusedxml 0.5.0 py36_1 dill
0.2.9 py36_0 docutils 0.14 py36_0 entrypoints 0.3 py36_0 expat 2.2.6
he6710b0_0 fabio 0.8.0 py36h3010b51_1000 conda-forge fontconfig 2.13.0
h9420a91_0 freetype 2.9.1 h8a8886c_1 glib 2.56.2 hd408876_0 gmp 6.1.2
h6c8ec71_1 gst-plugins-base 1.14.0 hbbd80ab_1 gstreamer 1.14.0
hb453b48_1 h5py 2.9.0 py36h7918eee_0 hdf5 1.10.4 hb1b8bf9_0 icu 58.2
h9c2bf20_1 idna 2.8 py36_0 imagesize 1.1.0 py36_0 intel-openmp 2019.1
144 ipykernel 5.1.0 py36h39e3cac_0 ipython 7.3.0 py36h39e3cac_0
ipython_genutils 0.2.0 py36_0 ipywidgets 7.4.2 py36_0 jedi 0.13.3 py36_0
jinja2 2.10 py36_0 joblib 0.13.2 py36_0 jpeg 9b h024ee3a_2 jsonschema
3.0.1 py36_0 jupyter 1.0.0 py36_7 jupyter_client 5.2.4 py36_0
jupyter_console 6.0.0 py36_0 jupyter_core 4.4.0 py36_0 kiwisolver 1.0.1
py36hf484d3e_0 libblas 3.8.0 4_openblas conda-forge libcblas 3.8.0
4_openblas conda-forge libedit 3.1.20181209 hc058e9b_0 libffi 3.2.1
hd88cf55_4 libgcc-ng 8.2.0 hdf63c60_1 libgfortran-ng 7.3.0 hdf63c60_0
liblapack 3.8.0 4_openblas conda-forge liblapacke 3.8.0 4_openblas
conda-forge libopenblas 0.3.3 h5a2b251_3 libpng 1.6.36 hbc83047_0
libsodium 1.0.16 h1bed415_0 libstdcxx-ng 8.2.0 hdf63c60_1 libtiff 4.0.10
h2733197_2 libuuid 1.0.3 h1bed415_2 libxcb 1.13 h1bed415_1 libxml2 2.9.9
he19cac6_0 libxslt 1.1.33 h7d1a2b0_0 llvmlite 0.28.0 py36hd408876_0 lxml
4.3.2 py36hefd8a0e_0 mako 1.0.7 py36_0 markupsafe 1.1.1 py36h7b6447c_0
matplotlib 3.0.3 py36h5429711_0 mistune 0.8.4 py36h7b6447c_0 mpi4py
2.0.0 py36_2 mpich2 1.4.1p1 0 nbconvert 5.4.1 py36_3 nbformat 4.4.0
py36_0 ncurses 6.1 he6710b0_1 nomkl 3.0 0 notebook 5.7.8 py36_0 numba
0.43.0 py36h962f231_0 numexpr 2.6.9 py36h2ffa06c_0 numpy 1.16.4
py36h99e49ec_0 numpy-base 1.16.4 py36h2f8d375_0 nvidia-ml-py3 7.352.0
py_0 fastai olefile 0.46 py36_0 openblas 0.3.5 h9ac9557_1001 conda-forge
openssl 1.1.1c h7b6447c_1 packaging 19.0 py36_0 pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1 parso 0.3.4 py36_0 pcre 8.43 he6710b0_0
periodictable 1.5.0 py_1 conda-forge pexpect 4.6.0 py36_0 pickleshare
0.7.5 py36_0 pillow 5.4.1 py36h34e0f95_0 pint 0.9 py36_2 conda-forge pip
19.0.3 py36_0 prometheus_client 0.6.0 py36_0 prompt_toolkit 2.0.9 py36_0
ptyprocess 0.6.0 py36_0 py3nvml 0.2.3 <pip> pycparser 2.19 py36_0 pyfai
0.17.0 py36hf2d7682_1001 conda-forge pyfftw 0.11.1 py36h3010b51_1001
conda-forge pygments 2.3.1 py36_0 pyopenssl 19.0.0 py36_0 pyparsing
2.3.1 py36_0 pyqt 5.9.2 py36h05f1152_2 pyrsistent 0.14.11 py36h7b6447c_0
pysocks 1.6.8 py36_0 python 3.6.8 h0371630_0 python-dateutil 2.8.0
py36_0 pytz 2018.9 py36_0 pyzmq 18.0.0 py36he6710b0_0 qt 5.9.7
h5867ecd_1 qtconsole 4.4.3 py36_0 readline 7.0 h7b6447c_5 requests
2.21.0 py36_0 scikit-learn 0.21.2 py36h22eb022_0 scipy 1.2.1
py36he2b7bc3_0 send2trash 1.5.0 py36_0 setuptools 40.8.0 py36_0 silx
0.10.0 py36h637b7d7_0 conda-forge sip 4.19.8 py36hf484d3e_0 six 1.12.0
py36_0 snowballstemmer 1.2.1 py36_0 sphinx 1.8.5 py36_0 sphinxcontrib
1.0 py36_1 sphinxcontrib-websupport 1.1.0 py36_1 sqlalchemy 1.3.1
py36h7b6447c_0 sqlite 3.27.2 h7b6447c_0 terminado 0.8.1 py36_1 testpath
0.4.2 py36_0 tk 8.6.8 hbc83047_0 tornado 6.0.1 py36h7b6447c_0 traitlets
4.3.2 py36_0 urllib3 1.24.1 py36_0 wcwidth 0.1.7 py36_0 webencodings
0.5.1 py36_1 wheel 0.33.1 py36_0 widgetsnbextension 3.4.2 py36_0
xmltodict 0.12.0 <pip> xz 5.2.4 h14c3975_4 zeromq 4.3.1 he6710b0_3 zlib
1.2.11 h7b6447c_3 zstd 1.3.7 h0b5b093_0
```
My suspicion is that your RAM is insufficient. I typically run these simulations on a 500GB shared-memory node. Right now, I'm running the notebook under limited memory conditions. Can you trace your memory consumption over time while the notebook executes?

Regards, Carsten
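One stdlib-only way to do that tracing (a sketch; `log_peak_rss` is my own helper, and `ru_maxrss` is reported in kilobytes on Linux):

```python
import resource
import time

def log_peak_rss(tag=""):
    """Print and return the process's peak resident set size so far."""
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # kB on Linux
    print(f"{time.strftime('%H:%M:%S')} {tag}: peak RSS {peak_kb / 1e6:.2f} GB")
    return peak_kb

log_peak_rss("before propagation")
# propagator.backengine()        # the SimEx call under investigation
# log_peak_rss("after propagation")
```

Sprinkling such calls between notebook cells shows which step blows past the available RAM.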
My run consumes approx. 34 GB RAM (virtual memory). Do you have anything else running on the system? E.g. on my node, there are root processes summing up to ~30GB.

Regards, Carsten
And on another note, I just pushed v0.4.1, which fixes the matplotlib issue you reported.

Regards, Carsten
Hi Carsten, thank you for the hints. Probably I have to discuss the latest issue with our admins. It still seems odd to me that memory issues cause an error at the `propagator.backengine()` stage.
The propagation is certainly among the most RAM-consuming parts. Consider the data volume for this raster in t, x, and y, where each voxel has to store two complex fields in double precision.

Regards, Carsten
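Back-of-the-envelope arithmetic (my own, using the (104, 104, 651) grid reported earlier in the thread): the output raster alone is modest, so the tens of GB presumably come from the finer intermediate grids the propagation works on:

```python
# Two complex fields (horizontal + vertical polarisation) per voxel,
# double-precision complex = 16 bytes per value.
nx, ny, nt = 104, 104, 651
size_gb = nx * ny * nt * 2 * 16 / 1e9
print(f"{size_gb:.2f} GB")  # ~0.23 GB for the output raster alone
```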
On 7/8/19, Roman Shopa wrote:

> Hi Carsten,
> The problem with the `.backengine()` functions was related to Open MPI; it disappeared after I loaded the module. The example appears to run OK (I haven't updated to v0.4.1 to test the issue with matplotlib), but with a little difference in the images. Since I have yet to dive into the physics, I don't know whether that is OK. Here are some screenshots:
> [screenshots: https://user-images.githubusercontent.com/15617645/60810024-6b590400-a18c-11e9-9b77-33df68fdc9b0.png and https://user-images.githubusercontent.com/15617645/60809988-51b7bc80-a18c-11e9-9259-dc20af9dec2b.png]
> Thank you again for the help and for the enormous patience! I hope that the examples from the wiki will work as well.

Looks great to me!

Regards, Carsten
Run and report errors: