
Guide to Ansys Mechanical (MAPDL) Simulation in the CAD Compute Cluster

Running Ansys Mechanical APDL 2020 R2 using an LSF Job Scheduler (command line; without GUI)

More information about our Load Sharing Facility (LSF) job scheduler is given here.

Our CAD Compute Cluster consists of four login nodes named uwlogin*, eight simulation nodes named uwmhpc*, and two management nodes named uwlsfm*. To display these names, use the LSF command lshosts.

Figure 1. The nodes in the CAD Compute Cluster
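For example, from any login node (output omitted here; it lists the host names shown in Figure 1):

~$ lshosts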

When you run a simulation on our Cluster, you submit your simulation request to an LSF queue. The job scheduler then attempts to assign the requested hardware (CPU cores and RAM) to your simulation run. Use the bqueues command to discover the available queue names.

Figure 2. The queues in the CAD Compute Cluster
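For example, to list all queues and then show the details of one of them (the adept queue appears in the appendix example):

~$ bqueues

~$ bqueues -l adept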

Summary: Mechanical and Mechanical APDL simulations can be run in three different ways in a Linux multi-core environment. One option is to create a script containing all the commands to be given to the software, and then send that script to the job scheduler.

Here are the steps we have tested in our Cluster to demonstrate this approach:

  1. Test file: td-32 example (ACL ligament simulation). MPI software tested: Intel MPI with Ansys 2020 R2.
  2. Use the "module load ansys/20.2.cadconnect" command in a terminal window. Type "module list" to confirm that the Ansys module has been loaded into your environment (a sample session follows this list).
  3. Use mpitest202, with and without the "-mpi ibmmpi" option, to verify that the message-passing interface (MPI) software is working.
  4. Use the setsshkey utility under /home/scripts to generate an SSH key. For simplicity, do not enter a passphrase and leave the key in the default location; this is usually your home directory on a Cluster login node.
  5. Create a shell file containing the LSF batch submission (bsub) options and the Ansys batch commands. See the appendix for an example ("my_ansys_shell.sh").
  6. Upload your user files to a working directory. We provide instructions for using MobaXterm to perform a secure file transfer (SFTP) upload.
  7. Upload your shell file to the same directory. Ensure that the shell file does not contain extraneous Windows editor characters (e.g., carriage returns).
  8. Use the command bsub < my_ansys_shell.sh at a prompt to submit the simulation.
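For reference, a login-node session covering steps 2, 3, and 8 might look like the following (output is omitted; the module name, MPI test utility, and shell file name are the ones used in this guide):

~$ module load ansys/20.2.cadconnect

~$ module list

~$ mpitest202

~$ mpitest202 -mpi ibmmpi

~$ bsub < my_ansys_shell.sh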

Use Shared-Memory Parallel (SMP) processing with the -smp option in the ansys202 command. An SMP run is confined to a single node (e.g., uwmhpc03 or uwmhpc05). The maximum number of cores on any Cluster node is 32.
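For example, an SMP request in the bsub shell file might combine the core count with LSF's span[hosts=1] resource string to keep all slots on one host; the input and output file names below are placeholders:

#BSUB -n 16
#BSUB -R "span[hosts=1]"
ansys202 -np 16 -ppf anshpc -smp -i <input_file> -o <output_file>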

Use Distributed-Memory Parallel (DMP) processing with the -dis option in the ansys202 command. A DMP run can span more than one node, using the InfiniBand interconnect in the Cluster's backplane. We restrict Cluster users to a maximum of 32 cores to prevent abuse of privileges.
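To let a distributed run span two nodes, one option (a sketch only, using LSF's span[ptile=...] resource syntax and placeholder file names) is to fix the number of slots taken per host:

#BSUB -n 32
#BSUB -R "span[ptile=16]"
ansys202 -np 32 -ppf anshpc -dis -i <input_file> -o <output_file>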

MPI Type: Intel MPI is the default choice and is installed with the main Ansys program automatically.
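No extra option is therefore needed for Intel MPI. If you want to try the IBM MPI mentioned in step 3 instead, the same -mpi setting can in principle be passed to ansys202; treat the line below as an untested sketch with placeholder file names:

ansys202 -np 32 -ppf anshpc -dis -mpi ibmmpi -i <input_file> -o <output_file>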

Use commands such as bjobs and bhosts while a simulation is running to monitor its progress. For example:

~$ bjobs -l 

… or

~$ bhosts 
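Two related LSF commands are also handy while a job is active: bpeek shows the output a running job has produced so far, and bkill cancels a job. Both take the numeric job ID reported by bjobs (12345 below is a placeholder):

~$ bpeek 12345

~$ bkill 12345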

Appendix

#!/bin/sh
## embedded options to bsub -- start with #BSUB

# -- Name of the job --
#BSUB -J ansys_mapdl_example

# -- specify queue --
#BSUB -q adept

# -- specify wall clock time to run as hh:mm --
## This will close a simulation after the set time.
##BSUB -W 04:00

# -- specify the number of processors --
## can be a min, max number of slots (cores)
#BSUB -n 32

# -- specify the number of nodes --
##BSUB -R "span[hosts=x]"

# -- user e-mail address --
##BSUB -u user_name@school.ca

## -- mail notification --
## -- at start --
##BSUB -B
## -- at completion --
##BSUB -N

# -- Specify the output and error files. %J is the job ID --
# -- -o and -e mean append, -oo and -eo mean overwrite --
#BSUB -oo acl_%J.out
#BSUB -eo acl_%J.err

##
## -- example of ansys command line call --
##
## -dis Distributed Memory Ansys; max allowed core count by CMC is 32
## -smp Shared Memory Ansys
## replace <input_file> and <output_file> with your MAPDL input and output file names
ansys202 -np 32 -ppf anshpc -smp -i <input_file> -o <output_file>

## or for a distributed simulation:
##
## ansys202 -np 32 -ppf anshpc -dis -i <input_file> -o <output_file>
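With the shell file above in your working directory, submission and a quick status check look like this; the run's standard output and error then appear as acl_<jobID>.out and acl_<jobID>.err in the same directory:

~$ bsub < my_ansys_shell.sh

~$ bjobs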
