Running Vorpal From the Command Line
The following sections describe how to run Vorpal from the command line.
Setting Up Vorpal Command Line Environment
Vorpal needs several environment variables set before it can be run from the command line. XSim provides scripts to set up the environment on each operating system.
The following instructions use the variable SCRIPT_DIR, which is the directory where XSim is installed. For example, this would be something like:
SCRIPT_DIR=C:\Program Files\Tech-X\XSim1.0 (Windows)
SCRIPT_DIR=/usr/bin/XSim1.0 (Linux)
SCRIPT_DIR=/Applications/XSim1.0/XSimComposer.app/Contents/Resources (Mac)
On Windows
Open a Command Prompt (run cmd.exe) and execute the following line:
C:\> %SCRIPT_DIR%\setupCmdEnv.bat
On Linux or Mac
In a bash shell, source the XSimComposer.sh script as follows:
$ source $SCRIPT_DIR/XSimComposer.sh
This is a bash shell script, so you must be running the bash shell to execute the above command. If you are normally a csh/tcsh user, you will need to start a bash shell to execute the above command and subsequently run XSimComposer.
On most operating systems, you can add the above command to the .bashrc file in your home directory, which saves you from having to run it each time you log in. Changes to your .bashrc do not take effect until the next time you log in, so after modifying the file, execute the following command once in your current shell; you will not need to do so in the future:
$ source ~/.bashrc
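As a sketch of the .bashrc approach, the following appends the setup line only if it is not already present, so repeated runs do not duplicate it. The BASHRC variable and the literal SCRIPT_DIR placeholder are illustrative; substitute your actual installation path.

```shell
# Append the XSim setup line to the bashrc file, but only if it is not
# already there (idempotent). BASHRC defaults to ~/.bashrc; the literal
# $SCRIPT_DIR in LINE is a placeholder for your real installation path.
BASHRC="${BASHRC:-$HOME/.bashrc}"
LINE='source $SCRIPT_DIR/XSimComposer.sh'
grep -qxF "$LINE" "$BASHRC" 2>/dev/null || echo "$LINE" >> "$BASHRC"
```

Running this snippet a second time leaves the file unchanged, since the grep check finds the existing line.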
Serial Computation
The Vorpal executable for use in serial computation is named vorpalser. Except as noted, the explanations and tutorials within the User Guide and “Example” Manuals demonstrate Vorpal usage for serial computations. Here is an example of a Vorpal command line invocation using an input file named myfile.pre, specifying 1000 time steps and outputting (dumping) the result data every 500 steps. By default, the output files for this example would be named using the format myfile.out.
vorpalser -i myfile.pre -n 1000 -d 500
The Vorpal computation engine for serial computations also creates a single text file named myfile_comms_0.txt unless this has been suppressed by command line or input file options.
Parallel Computation
The Vorpal executable for use in parallel computation is named vorpal. This section explains its use.
Vorpal for parallel computations requires the Message Passing Interface (MPI). On Mac and Windows, you must use the MPI bundled with XSim: on Windows this is MS-MPI (from Microsoft), and on Mac it is OpenMPI. On Linux, the MPI library provided with XSim is MPICH. If there is a reason why you must use a system MPI, please contact Tech-X support, who will quote you for a custom installation.
For administrator information about MPI for use with Vorpal, see Running Vorpal from a Queueing System.
Running Vorpal with mpiexec
To run Vorpal in parallel via the command line, you must first add <VORPAL_BIN_DIR> to your PATH, as noted in the “Vorpal Command Line Options” section in the Reference Manual.
To run Vorpal in parallel, execute the following command:
mpiexec -n <#> vorpal -i filename.pre
in which <#> is the number of processors, vorpal is the executable program for parallel computations, and filename.pre is the name of the input file (which must be in the current directory, or must be specified by a full path).
Following mpiexec, but before vorpal, you can specify a variety of mpiexec options. In particular, for the OpenMPI implementation of MPI (supplied with XSim on macOS), you may need to add the arguments -x PYTHONPATH -x LD_LIBRARY_PATH to ensure that all processes use the correct values for these environment variables. For the MPICH implementation of MPI (supplied with XSim on Linux), these arguments are not needed, as MPICH by default exports all environment variables to all processes. For more information about mpiexec, including the complete list of options, run mpiexec -h.
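As a concrete sketch of an OpenMPI invocation with these arguments, the following assembles (and only prints, rather than executes) the command line; the process count and input file name are illustrative placeholders:

```shell
# Build an OpenMPI invocation that forwards PYTHONPATH and LD_LIBRARY_PATH
# to all processes. NPROC and myfile.pre are illustrative placeholders;
# the command is echoed rather than executed here.
NPROC=4
CMD="mpiexec -x PYTHONPATH -x LD_LIBRARY_PATH -n $NPROC vorpal -i myfile.pre"
echo "$CMD"
```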
Following vorpal, you can specify a variety of Vorpal options. Some of the more common options are shown below (illustrated here with the serial executable, vorpalser):
vorpalser -i esPtclInCell.pre -o newesPtclInCell
vorpalser -i esPtclInCellSteps.pre -r 50
For a complete list of options, see the “Vorpal Command Line Options” section in the Reference Manual.
If a parameter is both set within the input file and specified on the command line, the command line parameter value takes precedence. The command line override enables you to configure an input file with default values while exploring alternative parameter settings from the command line. From the command line, you can quickly change simulation run lengths, dimensionality, output timing, etc.
Vorpal automatically adjusts its decomposition to match the number of processors it is given, unless a manual decomposition is provided for the correct number of cores in the input .pre file.
In contrast to Vorpal for serial computation, which creates a single text output file, Vorpal for parallel computation creates multiple text output files. Each individual processor from the parallel run sends comments to a different output file. A parallel computation output file’s name includes a label that identifies the number of the processor that generated that file, for example:
esPtclInCell_comms_0_1.txt
esPtclInCell_comms_0_2.txt
in which the final _1 and _2 before the file name suffix indicate the number of the processor.
By default, Vorpal writes one HDF5 file for each field or particle species, even for a parallel run. However, one can modify this behavior, as noted above. Having one file per field or particle species per processor can sometimes get around parallel I/O problems. When that is necessary, one can construct a single file for a field or particle species using the utilities mergeH5Flds and mergeH5Ptcls, which come with Vorpal.
Running Vorpal with mpiexec Using a Hostfile
If you need to run an MPI job but do not have access to a queuing system, you must set up a hostfile. In this case, you must know the names of the cluster nodes on which the job will run. Create a text file with your text editor of choice (this is your hostfile) and place it in your home directory. The hostfile simply contains each node name, one per line, repeated as many times as there are threads on that node. For example, for a two-node cluster with four threads each, the hostfile will contain:
node1
node1
node1
node1
node2
node2
node2
node2
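A hostfile like the one above can also be generated with a short loop; the node names (node1, node2) and the four-slot count are illustrative and should be replaced with your cluster's actual hostnames and thread counts:

```shell
# Generate a hostfile for two nodes with four threads each.
# node1 and node2 are placeholders for your cluster's hostnames.
for node in node1 node2; do
    for slot in 1 2 3 4; do
        echo "$node"
    done
done > hostfile
```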
To run a job, one must then source the XSimComposer shell script using the command:
source <XSim_SCRIPT_DIR>/XSimComposer.sh
Note
This action changes your environment in your current shell, and so may make other programs fail. Do this in a separate shell from any shell in which you intend to run standard programs, like vi or emacs.
You are now ready to run in MPI using the mpiexec command with the above hostfile (signified as <hostfile> below). For MPICH (which is provided for Linux), the command is
mpiexec -f <hostfile> -n <#> <other mpiexec options> vorpal \
-i simulationname.pre <other vorpal options>
The equivalent command for OpenMPI (which is provided for macOS) is
mpiexec --hostfile <hostfile> -n <#> <other mpiexec options> vorpal \
-i simulationname.pre <other vorpal options>
The number of processors, <#>, must be consistent with the computational resources listed in the hostfile.
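As a sanity check before launching, you can compare the requested process count against the number of slots listed in the hostfile. This is only a sketch: NPROC and the sample hostfile written below are illustrative, and in practice you would check your real hostfile.

```shell
# Sketch: verify the requested MPI process count fits the hostfile's slots.
# NPROC and the sample hostfile contents are illustrative placeholders.
HOSTFILE=hostfile.sample
printf 'node1\nnode1\nnode2\nnode2\n' > "$HOSTFILE"   # example: 4 slots
NPROC=8
SLOTS=$(wc -l < "$HOSTFILE")
if [ "$NPROC" -gt "$SLOTS" ]; then
    echo "Requested $NPROC processes but $HOSTFILE lists only $SLOTS slots" >&2
fi
```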