The group works with a variety of computational tools.

Pronghorn

Pronghorn is UNR’s high-performance computing cluster. It has a wiki that can be found here. UNR RC also has a Slack workspace at unrrc.slack.com.

Getting a Pronghorn account

The first step is to place a request with research computing using this form.

Indicate that you are part of the chemistry rental group and that you wish to use Gaussian. It might take a few days to get your account.

Once you have your account, log in to Pronghorn from a terminal window using your NetID and password, e.g.:

ssh king@pronghorn.rc.unr.edu

(note the .rc in the hostname!)

Running Gaussian jobs on Pronghorn

Set up your input file and transfer it to Pronghorn, into whatever directory you are using. I set up the file on my local machine using Avogadro and a text editor and then transferred it to Pronghorn using scp; everyone has their own method.
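
For example, an scp transfer from a local machine might look like the following sketch (the jobs/ directory on Pronghorn is an assumption; substitute your own NetID and working directory):

scp PtCl4.gau king@pronghorn.rc.unr.edu:jobs/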

Ensure that you are in the directory from which you are running your job.

$ pwd

Job submission uses the Slurm scheduler, which runs batch scripts. Start with my script template (rc-g16-0.sl), which can be copied from my Pronghorn directory:

cp /data/gpfs/home/king/example/* .

It is easiest to have this script in the submission directory.
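
For orientation, here is a minimal sketch of what a Gaussian 16 Slurm script along these lines might contain; the core count, walltime, and module name are assumptions, so defer to the copied rc-g16-0.sl template:

#!/bin/bash
#SBATCH --output=rc-g16-0.%j.out   # %j expands to the Slurm job number
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16         # assumed core count; match %NProcShared in the input
#SBATCH --time=1-00:00:00          # assumed walltime

# Load the Gaussian environment; the exact module name on Pronghorn may differ
module load gaussian

# The input file is the first argument to sbatch (e.g. sbatch rc-g16-0.sl PtCl4.gau);
# Gaussian's output is captured in the Slurm output file named above.
g16 < "$1"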

Next, submit your job using the queuing system (Slurm). My Gaussian input file is PtCl4.gau (it should have copied over as well). Try running the test job:

sbatch rc-g16-0.sl PtCl4.gau

Queue Status on Pronghorn

To check the status of the queue, run:

$ squeue -t running

For a single user:

$ squeue --user=username

The output file will be named rc-g16-0.#######.out, where the hash marks are replaced by the job number. I recommend renaming this file.
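
For example (the job number here is hypothetical):

mv rc-g16-0.1234567.out PtCl4.out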

I am still not sure if I am setting the number of processors in the best way, but it seems to be quite fast.
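
For reference, the number of processors Gaussian actually uses is controlled by the %NProcShared Link 0 line at the top of the input file, and it should match the core count requested in the Slurm script. A hypothetical input header (the memory value is illustrative only):

%NProcShared=16
%Mem=16GB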

Molecular Mechanics

LAMMPS is preferred for our molecular dynamics simulations because it was designed with materials systems in mind. It is installed on staudinger and is available on Pronghorn. However, generating input files is not trivial. Some useful tools and resources are linked below.

On Pronghorn, LAMMPS runs in Singularity containers. Here are some links from a Slack conversation about the topic.

#singularity https://hub.docker.com/r/lammps/lammps

John Anderson 12:09 PM singularity build lammps_stable_29Oct2020_centos7_openmpi_py3.sif docker://lammps/lammps:stable_29Oct2020_centos7_openmpi_py3

Ben King 1:10 PM example singularity lammps job: srun singularity exec ~/apps/lammps-pronghorn/lammps11Aug17-intel_cpu_intelmpi.simg lmp_intel_cpu_intelmpi < in.friction

John Anderson 1:13 PM https://sylabs.io/guides/3.6/user-guide/cli/singularity_exec.html

John Anderson 5:55 PM set the channel topic: http://lammps.sandia.gov/doc/Manual.html | https://hub.docker.com/r/lammps/lammps

John Anderson 1:53 PM https://github.com/intel/HPC-containers-from-Intel/tree/master/definitionFiles/lammps

John Anderson 6:25 PM https://ngc.nvidia.com/catalog/containers/hpc:lammps
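
Putting the pieces above together, a minimal sketch of a Slurm batch script for a containerized LAMMPS run might look like this; the container path, executable, and input file name come from the example above, while the task count, walltime, and output name are assumptions:

#!/bin/bash
#SBATCH --ntasks=16                # assumed MPI task count
#SBATCH --time=1-00:00:00          # assumed walltime
#SBATCH --output=lammps.%j.out

# Run LAMMPS inside the Singularity container
srun singularity exec ~/apps/lammps-pronghorn/lammps11Aug17-intel_cpu_intelmpi.simg lmp_intel_cpu_intelmpi < in.friction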



Last modified: April 24, 2022