
Guide to ANSYS Mechanical (MAPDL) Simulation in the CAD Compute Cluster

Running ANSYS Mechanical APDL 2019 R2 on an LSF 10.1 server (command line, without the GUI)

 

Figure 1. The nodes in the CAD Compute Cluster.

 

 

Figure 2. The queues in the CAD Compute Cluster.

 

Summary: Mechanical and Mechanical APDL simulations can be run in three different ways in a Linux multi-core environment. One way is through the Workbench interface. A second way is to use the MAPDL Launcher program and set the run for batch mode, though this bypasses the job scheduler. The third option is to create a script containing all the commands to be given to MAPDL, and then submit the script to the job scheduler.

Here are the steps we have tested on our node to demonstrate the third option.

  1. Test file: td-32 example (ACL ligament simulation). MPI software tested: Intel MPI with ANSYS 2019 R2.
  2. Run "module load ansys/19.4" in a terminal window to load the ANSYS environment (see the terminal sketch after this list).
  3. Run mpitest194, with and without the "-mpi ibmmpi" option, to verify that the message-passing interface (MPI) software is working.
  4. Use the setsshkey utility under /home/scripts to generate an SSH key pair. For simplicity, do not enter a passphrase and leave the key in the default location; this is usually your home directory on the LSF node.
  5. Create a shell script containing the LSF batch submission (bsub) directives and the ANSYS batch commands. See the appendix for an example ("my_ansys_shell.sh").
  6. Upload your user files to the working directory. Use a secure file transfer (SFTP) client such as FileZilla for this step.
  7. Upload the shell script to the same directory.
  8. Run bsub < my_ansys_shell.sh at a prompt to submit the simulation.
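
As a quick illustration of steps 2 through 4, the terminal session below shows the commands in order (a minimal sketch; the exact prompts and output of mpitest194 and setsshkey will vary by node):

module load ansys/19.4      # step 2: load the ANSYS 2019 R2 environment
mpitest194                  # step 3: verify MPI with the default Intel MPI
mpitest194 -mpi ibmmpi      # step 3: repeat the check with IBM MPI
/home/scripts/setsshkey     # step 4: generate the SSH key pair; press Enter
                            #         at the prompts to skip the passphrase
                            #         and accept the default location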

 

Use Shared-Memory Parallel (SMP) computing with the -smp option in the ansys194 command. The job will be confined to a single node (e.g., uwmhpc03 or uwmhpc05).

Use Distributed-Memory Parallel (DMP) computing with the -dis option in the ansys194 command. The job can run across more than one node using the InfiniBand interconnect in the cluster.
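
As a sketch of how the two modes pair with LSF resource requests (the slot counts are illustrative, and the span strings are standard LSF syntax rather than values taken from the appendix script):

# SMP: request 32 slots confined to a single host
#BSUB -n 32
#BSUB -R "span[hosts=1]"
ansys194 -np 32 -smp -i (input_file_name) -o (output_file_name)

# DMP: request 32 slots spread across hosts, 16 per host
#BSUB -n 32
#BSUB -R "span[ptile=16]"
ansys194 -np 32 -dis -i (input_file_name) -o (output_file_name)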

 

MPI type: Intel MPI is the default and is installed automatically with the main ANSYS program.
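
If a different MPI library must be used, the -mpi option tested with mpitest194 in step 3 can also be passed to the solver itself. For example, a distributed run under IBM MPI would look like this (a sketch assembled from the options already shown in this guide):

ansys194 -np 32 -ppf aa_r_hpc -dis -mpi ibmmpi -i (input_file_name) -o (output_file_name)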

Use commands such as bjobs or bhosts during simulation runs to monitor job progress.
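
Typical LSF monitoring commands are shown below (the job ID 12345 is only a placeholder; bpeek and the -l option are standard LSF tools, though they were not part of the test above):

bjobs              # list your pending and running jobs
bjobs -l 12345     # detailed status for one job
bpeek 12345        # view the stdout produced so far by a running job
bhosts             # show the state and load of each cluster host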

 

Appendix

#!/bin/sh

# embedded options to bsub -- start with #BSUB

# -- Name of the job --

#BSUB -J ansys_mapdl_example

# -- specify queue --

#BSUB -q adept

# -- specify wall clock time to run as hh:mm --

##BSUB -W 04:00

# -- specify the number of processors --

#BSUB -n 32

# -- specify the number of nodes --

##BSUB -R "span[hosts=x]"

# -- user e-mail address --

##BSUB -u greig@cmc.ca

# -- mail notification --

# -- at start --

##BSUB -B

# -- at completion --

##BSUB -N

# -- Specify the output and error files. %J is the job ID --

# -- -o and -e mean append, -oo and -eo mean overwrite --

#BSUB -oo acl_%J.out

#BSUB -eo acl_%J.err

 


 

## -- example of an ansys command line call --

## -dis Distributed Memory ANSYS; min core number is 2, max core number is 8192

## -smp Shared Memory ANSYS

 

## note: the -np value should match the #BSUB -n value above
ansys194 -np 32 -ppf aa_r_hpc -smp -i (input_file_name) -o (output_file_name)

## or for a distributed simulation:

##

## ansys194 -np 32 -ppf aa_r_hpc -dis -i (input_file_name) -o (output_file_name)
