4.6.1. Cluster Installation

These steps are to be performed on the head node of your Linux cluster, typically by your system administrator.

  1. Analyst is tested on CentOS 6.9; any Linux distribution compatible with this release should be adequate.

  2. Your cluster must have OpenMPI v.2.1.3 or compatible installed. Information on downloading OpenMPI is available at the OpenMPI website.
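
    You can verify the installed version with OpenMPI's mpiexec command (this assumes the OpenMPI bin directory, e.g. /opt/openmpi213/bin, is on your PATH):

      mpiexec --version    # should report v.2.1.3 or a compatible version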

  3. Your cluster must have a job scheduler installed. Analyst supports the following schedulers:

    • Adaptive Computing "Torque" v.4.2.6 or compatible.

    • IBM "Platform LSF" v10.1.0.0 or compatible.

  4. Every node of your cluster must support a locale for the Windows-1252 character set. You can confirm the existence of this locale on a node with the Linux command locale -a; check the output for the string en_US.cp1252.

    If it is not present, your best resource for creating and installing the locale is your cluster administrator. The following procedure is offered for reference if no overriding site policy is already in place:

    • Confirm that the correct character set description file is available with the Linux command locale -m; check the output for the string CP1252.

    • If it is present, create the locale from the character set with the Linux command localedef -f CP1252 -i en_US en_US.cp1252. You must have root privileges for this to succeed.

    • Confirm the existence of the new locale with the Linux command locale -a.

    Note that if you update your Linux kernel you may have to re-install the locale.
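
    As a convenience, the full check-and-create sequence from the bullets above can be run as follows on each node:

      locale -m | grep CP1252                          # is the CP1252 character set description file available?
      sudo localedef -f CP1252 -i en_US en_US.cp1252   # create the locale (requires root privileges)
      locale -a | grep en_US.cp1252                    # confirm that the new locale now exists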

  5. Unzip and un-tar all of the Analyst solver files in a common location on the head node of your cluster, such that the other cluster nodes can access the files. An NFS-shared directory can be used for this purpose. The environment variable ANALYST_PATH (defined in step 9 below) should point to the parent directory that has this common location as one of its subdirectories.
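
    For example, assuming the ANALYST_PATH value /usr/grsim/install from step 9 and a hypothetical archive name:

      # Hypothetical archive name and directory names; adjust for your site.
      mkdir -p /usr/grsim/install/analyst
      tar -xzf analyst_solvers.tar.gz -C /usr/grsim/install/analyst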

  6. If you intend to run simulations using the MICHELLE charged particle beam optics solver, unzip and un-tar the MICHELLE files into another subdirectory under your ANALYST_PATH. This directory is then a sibling of the directory in which your Analyst solver files are located. Note that to run MICHELLE simulations you also need to acquire a MICHELLE license file from Leidos; place a copy of this license file in this same subdirectory.
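
    The resulting layout might look like the following (the subdirectory names are illustrative):

      /usr/grsim/install/       <- ANALYST_PATH
          analyst/              <- Analyst solver files
          michelle/             <- MICHELLE files and MICHELLE license file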

  7. Only non-root users can run jobs. For each non-root user on the cluster, Analyst requires a /home/[user]/.bash_profile file which sources a /home/[user]/.bashrc file.
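
    A minimal /home/[user]/.bash_profile that satisfies this requirement:

      # Source the user's .bashrc, where the Analyst environment variables are defined.
      if [ -f ~/.bashrc ]; then
          . ~/.bashrc
      fi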

  8. Analyst is tested on the Linux bash shell, but also runs under "C-shell"-like shells such as csh and tcsh. No additional configuration files are required for such shells, provided the bash files noted above are present.

  9. For each non-root user on the cluster, the user's /home/[user]/.bashrc configuration file should define the following environment variables. Note that all paths are absolute paths (the ~ in ANALYST_SIM_FOLDER expands to the user's absolute home directory):

    • export ANALYST_PATH=/usr/grsim/install

    • export ANALYST_SIM_FOLDER=~/analyst_sim_folder

    • export ANALYST_MPI_PATH_V14=/opt/openmpi213/bin

  10. Because ANALYST_MPI_PATH_V[major version] was first introduced in v14, you must also define the following environment variables to support v13 and older versions:

    • export PATH=/opt/openmpi181/bin:$PATH

    • export LD_LIBRARY_PATH=/opt/openmpi181/lib:./:$LD_LIBRARY_PATH

  11. Explanation of these environment variables follows:

    • The environment variable ANALYST_PATH is the parent of the directory in which the Analyst solver is located on your cluster. If you will be running MICHELLE simulations, it is also the parent of the directory in which your MICHELLE files are located.

    • The environment variable ANALYST_SIM_FOLDER is the parent of the directories where Analyst runs your simulations. Typically this can be your home directory or a subdirectory thereof.

    • The environment variable ANALYST_MPI_PATH_V[major version] (e.g. ANALYST_MPI_PATH_V14) points to the location of your OpenMPI binaries (e.g. mpiexec).
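
    Putting steps 9 and 10 together, a complete ~/.bashrc fragment might look like the following (all paths are site-specific examples):

      # Analyst cluster settings (example paths; adjust for your installation)
      export ANALYST_PATH=/usr/grsim/install
      export ANALYST_SIM_FOLDER=~/analyst_sim_folder
      export ANALYST_MPI_PATH_V14=/opt/openmpi213/bin
      # Required only to support Analyst v13 and older:
      export PATH=/opt/openmpi181/bin:$PATH
      export LD_LIBRARY_PATH=/opt/openmpi181/lib:./:$LD_LIBRARY_PATH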

  12. For discussion of configuration settings specific to the Torque job scheduler, see “Torque Requirements”.

  13. For discussion of configuration settings specific to the LSF job scheduler, see “LSF Requirements”.
