Guide to Ansys Mechanical (MAPDL) Simulation in the CAD Compute Cluster

Running Ansys Mechanical APDL 2020 R2 using an LSF Job Scheduler (command line; without GUI)

Figure 1. The nodes in the CAD Compute Cluster

Figure 2. The queues in the CAD Compute Cluster

Summary: Mechanical and Mechanical APDL simulations can be run in three different ways in a Linux multi-core environment. One way is through the Workbench interface. A second way is to use the MAPDL Launcher program with the run set to batch mode, though this bypasses the job scheduler. The third option is to create a script containing all the commands to be given to MAPDL and submit that script to the job scheduler.

Here are the steps we have tested on our nodes to demonstrate the third option.

  1. Test file: td-32 example (ACL ligament simulation). MPI software tested: Intel MPI with Ansys 2020 R2.
  2. Enter "module load ansys/20.2.cadconnect" in a terminal window. Type "module list" to confirm that the module has been loaded.
  3. Use mpitest202 with and without the "-mpi ibmmpi" setting to verify that the message-passing interface (MPI) software is working.
  4. Use the setsshkey utility under /home/scripts to generate an SSH key pair. For simplicity, do not enter a passphrase and leave the key in the default location, which is usually your home directory on a cluster login node.
  5. Create a shell file containing the LSF batch submission (bsub) options and the Ansys batch command. See the appendix for an example ("my_ansys_shell.sh").
  6. Upload your user files to your working directory. We provide instructions for using MobaXterm to perform a secure file transfer (SFTP) upload.
  7. Upload your shell file to the same directory. Ensure that the shell file does not contain extraneous Windows editor characters (e.g., carriage returns).
  8. Use the command bsub < my_ansys_shell.sh at a prompt to run the simulation (see the sample terminal session below).
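
Putting steps 2, 4, and 8 together, a terminal session on a login node might look like the sketch below. The module name and the setsshkey location are taken from the steps above; the working directory is a placeholder, and the exact prompts from setsshkey may differ on your system.

~> module load ansys/20.2.cadconnect
~> module list                          # confirm the ansys module is loaded
~> /home/scripts/setsshkey              # press Enter at the prompts to accept the defaults (no passphrase)
~> cd /path/to/your/working/directory   # directory holding your uploaded input files and shell file
~> bsub < my_ansys_shell.sh             # submit the shell file to the LSF scheduler
~> bjobs                                # confirm the job is pending or running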

Use shared-memory parallel (SMP) processing with the -smp option in the ansys202 command. An SMP run is confined to a single node (e.g., uwmhpc03 or uwmhpc05). The maximum number of cores on any cluster node is 32.
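
For example, the ansys202 line in your shell file for an SMP run on eight cores could look like the following sketch (the -b batch flag is assumed, and the input/output file names are placeholders to complete the example):

ansys202 -b -smp -np 8 -i input_file.dat -o output_file.out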

Use distributed-memory parallel (DMP) processing with the -dis option in the ansys202 command. A DMP run can span more than one node, communicating over the InfiniBand interconnect in the cluster's backplane.
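
For example, the ansys202 line for a distributed run over 64 cores (two 32-core nodes) could look like this sketch (file names are placeholders); the #BSUB -n value in your shell file should request the same number of slots:

ansys202 -b -dis -np 64 -i input_file.dat -o output_file.out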

MPI type: Intel MPI is the default and is installed automatically with the main Ansys program.
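
If you need to request a particular MPI layer explicitly, MAPDL accepts an -mpi option; the sketch below assumes the Intel MPI keyword intelmpi, which is the usual value on Linux for 2020 R2 (file names are again placeholders):

ansys202 -b -dis -np 32 -mpi intelmpi -i input_file.dat -o output_file.out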

Use commands such as bjobs and bhosts to monitor progress during simulation runs. For example,

~> bjobs -l

… or

~> bhosts

Appendix

#!/bin/sh

# embedded options to bsub – start with #BSUB

# — Name of the job —

#BSUB -J ansys_mapdl_example

# — specify queue —

#BSUB -q adept

# — specify wall clock time to run as hh:mm —

## This will close a simulation after the set time.

##BSUB -W 04:00

# — specify the number of processors —

## can be a min, max number of slots (cores)

#BSUB -n 32

# — specify the number of nodes —

##BSUB -R "span[hosts=x]"

# — user e-mail address —

##BSUB -u user_name@school.ca

# — mail notification —

# — at start —

##BSUB -B

# — at completion —

##BSUB -N

# — Specify the output and error files. %J is the job ID —

# — -o and -e mean append, -oo and -eo mean overwrite —

#BSUB -oo acl_%J.out

#BSUB -eo acl_%J.err

## — example of ansys command line call —

## -dis Distributed Memory Ansys; min core number is 2, max core number is 8192

## -smp Shared Memory Ansys

## -b runs MAPDL in batch mode; replace input_file.dat and output_file.out with your own file names
ansys202 -b -smp -np 32 -ppf anshpc -i input_file.dat -o output_file.out

## or for a distributed simulation:

##

## ansys202 -b -dis -np 36 -ppf anshpc -i input_file.dat -o output_file.out
