MMI_splitter_3D #13
I'm also encountering this problem. It does not appear to be a memory-capacity issue, since I tried it on a system with a few hundred GB of RAM and still had no luck. It affects any examples using the fdtd module. @anstmichaels Any thoughts on what might be causing this?
Oddly, I have never run into this issue, and I have run the FDTD solver pretty extensively on CentOS 7, Ubuntu 18.04, and 20.04. If anyone else encounters this issue, please pull master, which has @CharlesDove's fixes, and give it a try.
I have encountered the following errors and don't know how to solve them. I would ask the author to take time out of his busy schedule to help. Best wishes.
(base) m3enjoy@m3enjoy-virtual-machine:~/emopt/examples/MMI_splitter_3D$ python mmi_1x2_splitter_3D_fdtd.py
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 59.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.