@@ -42,6 +42,8 @@ In this example, we are using 1 node, which contains 2 sockets and 64 cores per
4242 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
4343 export OMP_PROC_BIND=TRUE
4444
45+ # Optional: log the libraries dynamically linked into the NEST process
46+ python -c "import nest, subprocess as s, os; s.check_call(['/usr/bin/pldd', str(os.getpid())])" 2>&1 | tee -a "pldd-nest.out"
4547
4648 # On some systems, MPI is run by SLURM
4749 srun --exclusive python3 my_nest_simulation.py
@@ -174,6 +176,21 @@ will prevent the threads from moving around.
174176
175177 |
176178
179+ ::
180+
181+ python -c "import nest, subprocess as s, os; s.check_call(['/usr/bin/pldd', str(os.getpid())])" 2>&1 | tee -a "pldd-nest.out"
182+
183+ Prints the linked libraries into a file named ``pldd-nest.out``.
184+ Because ``nest`` is imported first, its shared libraries are loaded into the Python process that ``pldd`` inspects,
185+ so you can check whether the dynamically linked libraries you expect are indeed used, for example, whether ``jemalloc``
186+ is used for network construction in highly parallel simulations.
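
For instance, to confirm that ``jemalloc`` appears among the linked libraries, you can
search the output file; a minimal sketch, assuming the file name used above:

::

   grep jemalloc pldd-nest.out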
187+
188+ .. note::
189+
190+    The above command uses ``pldd``, which is commonly available in Linux distributions. However, you might need to change
191+    the path, which you can find with the command ``which pldd``.
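
If ``pldd`` lives elsewhere on your system, one option is to let Python resolve the path
instead of hard-coding it; this is only a sketch and assumes Python 3.3+, where
``shutil.which`` is available:

::

   python -c "import nest, subprocess as s, os, shutil; s.check_call([shutil.which('pldd'), str(os.getpid())])" 2>&1 | tee -a "pldd-nest.out"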
192+
193+ |
177194
178195 You can then tell the job script to schedule your simulation.
179196 Setting the ``exclusive`` option prevents other processes or jobs from doing work on the same node.
@@ -222,11 +239,3 @@ It should match the number of ``cpus-per-task``.
222239 .. seealso::
223240
224241    :ref:`parallel_computing`
225-
226-
227-
228-
229-
230-
231-
232-