
Guide to ANSYS Electronics Desktop (EDT) Simulation in the CAD Compute Cluster

Uploading an ANSYS EDT Simulation to the CAD Compute Cluster

Note: This document has been updated to demonstrate the use of ANSYS 2020 R1 on the cluster (October 22, 2020).

The main webpage describing the CAD Compute Cluster is available on CMC Microsystems’ website. Step-by-step instructions for logging into and using the CAD Compute Cluster can be found in this instruction document.

Special pages such as this one explain how to configure CAD tools for HPC/cluster simulations. Here we describe one way to configure an HFSS project. The instructions you see below have been developed for and run successfully in the CAD Compute Cluster. However, project settings must be tailored by each individual for each simulation run, as all projects will be different.

CentOS 7 LINUX is the underlying operating system in the CAD Compute Cluster. We advise those considering use of this cluster to learn a few basic LINUX commands: to change directories, to list the files in a directory, to find the present working directory, and so on. The CentOS help pages are a useful resource. It is also helpful to be able to use a LINUX command-line text editor for making quick changes to scripts in your working directory. Uploading a Windows-edited file to a LINUX environment can sometimes cause problems, as certain Windows insertions such as carriage returns can be misread by LINUX.
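For example, the commands below cover those basics. The script name my_ansys_hfss.sh is only a placeholder, and the dos2unix utility is just one way to strip Windows carriage returns (it may or may not be installed, so an equivalent sed command is also shown):

     > pwd                                    ## print the present working directory
     > ls -l                                  ## list the files in the current directory
     > cd workdir                             ## change into a directory named workdir
     > dos2unix my_ansys_hfss.sh              ## remove Windows carriage returns, if dos2unix is installed
     > sed -i 's/\r$//' my_ansys_hfss.sh      ## equivalent fix using sed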

Note that screen captures and settings in this document are taken from ANSYS EDT 2019 R2 (19.4). To navigate in the CAD Compute Cluster and find out which CAD tools reside there, use the Load Sharing Facility (LSF) module commands listed in Appendix A. Ensure that the projects you upload to the cluster are compatible with the installed CAD tool version(s), or contact CMC Microsystems’ staff to ask for advice.

The basic method for starting a cluster simulation from your local desktop is outlined in Figure 1. For uploading ANSYS *.aedt project files and downloading simulation results, we suggest the use of a file transfer (FTP/SFTP) client such as FileZilla. This program is freely distributed and may be found here: https://filezilla-project.org/. General instructions for making transfers are available in this document. It is also possible to make transfers with Secure Copy Protocol (SCP) from inside one of our virtual environments; our step-by-step instructions show how this is done, and a sample transfer is shown below.
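As an illustration, a single SCP transfer might look like the following. The host name uwhpc.cmc.ca is taken from the machine settings later in this document; the user name, working directory, and job ID in the output file name are placeholders that must match your own account and job:

     > scp bp_filter.aedt your_username@uwhpc.cmc.ca:~/workdir/          ## upload a project file to the cluster
     > scp your_username@uwhpc.cmc.ca:~/workdir/BPFilter_12345.out .     ## download a job output file to the local machine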

Figure 1. The basic method for starting a cluster simulation.

Note that we enforce a queue system to submit simulation jobs to the cluster. The purpose of queuing is twofold: it prevents some ANSYS modules (EDT, MAPDL) from occupying an entire node while using only part of its hardware, as they are currently designed to do; and it permits software from different vendors to run simulations on nodes while sharing node hardware. An LSF Job Scheduler command starting with the letter ‘b’ means that the instruction is used for batch (command line) jobs. More useful LSF commands are listed here.
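For reference, here are a few of those ‘b’ commands. The queue name adept is taken from the example script in Appendix A, and the job ID is a placeholder:

     > bqueues                        ## list the queues configured in the cluster
     > bsub -q adept < my_script.sh   ## submit a script to a named queue
     > bjobs                          ## list your pending and running jobs
     > bkill 12345                    ## cancel a job by its job ID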

Figure 2. A list of queues configured in the CAD Compute Cluster.

Setting up ANSYS EDT on an LSF Server for Batch Mode/Command Line Use

These instructions describe the steps to be taken after you have uploaded a project for simulation. The CAD Compute Cluster nodes can be displayed by typing lshosts at a command prompt in the environment as shown in Figure 3. You will see four login nodes, two management nodes and eight compute nodes in the list. 
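If you want to inspect the cluster yourself, the following standard LSF commands are useful:

     > lshosts                        ## list the hosts and their static resources (cores, memory)
     > bhosts                         ## show batch host status and available job slots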

Figure 3. CAD Compute Cluster environment for testing ANSYS simulations

For this example, we use a prepared test file (a bandpass filter, with filename bp_filter.aedt) supplied by ANSYS support staff. It comes from one of ANSYS’s standard tutorials and can be extended in simulation frequency so that multi-core capability can be exercised. You will find it in the list of examples supplied with your local ANSYS EDT installation. Specific settings are shown in Appendix A at the end of these instructions.

  • Extend the frequency simulation range to involve more cores and RAM. Set the number of cores greater than four (4) to engage hfsshpc licences. Note that you will have to tell ANSYS EDT to use the pool licences for multi-core simulations through a registry.txt file referenced in the -batchoptions segment of the script. See Appendix A below for an example of its inclusion.

The utility setsshkey sets up passwordless SSH between the uwmhpc nodes (SSH security), so that distributed simulations are not blocked by password prompts. Be sure to run this utility before starting your first simulation. Do not use a passphrase, and leave the result in the default directory. We discuss the importance of this SSH key in this document.
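For context, what setsshkey accomplishes is roughly the standard OpenSSH sequence sketched below. This is a sketch for understanding only; the exact behaviour of the utility may differ, so run setsshkey itself rather than typing these commands:

     > ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          ## create a key pair with no passphrase in the default location
     > cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   ## authorize the key for logins between nodes sharing the home directory
     > chmod 600 ~/.ssh/authorized_keys                  ## keep permissions strict enough for SSH to accept the file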

For ANSYS 2019:

At a prompt > in a terminal window, type:

     > module load ansys/19.4

For ANSYS 2020:

   > module load ansys/20.1.cadconnect

Use the following command to ensure you have loaded the software correctly:

     > module list

For ANSYS 2019, you should see a reply from the module system that looks like the one in Figure 4. A module in the cluster environment corresponds to a CAD software program.

Figure 4. Check to see if your ANSYS 2019 software has loaded correctly.

When you loaded ANSYS, you also loaded Message Passing Interface (MPI) software, as both the Intel and IBM MPI programs are bundled with ANSYS EDT. This is another layer of software in the CAD Compute Cluster that plays a part in simulation runs. Use one of the following commands to ensure that the MPI software is ready to use:

> mpitest     (for the Intel version)

> mpitest -ibmmpi    (for the IBM version)

At this point, you should have your *.aedt project file, your submission script in a *.sh file, and any other data that your simulation might need for execution in your Cluster working directory (workdir). If you are not already in it, change your present working directory to your workdir. At the prompt > in a terminal window, enter the command:

> bsub < (script_name).sh

… to start your simulation run. The command bsub means “batch submission”. For example, if your script was called my_ansys_hfss.sh, you would enter:

> bsub < my_ansys_hfss.sh

Use the command bjobs to follow the progress of your submission. Use the option bjobs -l (small letter L) for a detailed job description, including a running tally of CPU time. A few related monitoring commands are shown below.
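For example (the job ID 12345 is a placeholder; bjobs, bpeek, and bkill are standard LSF commands):

     > bjobs                          ## list your jobs and their status
     > bjobs -l 12345                 ## detailed description of one job, including CPU time
     > bpeek 12345                    ## view the standard output of a running job
     > bkill 12345                    ## cancel a job that is no longer needed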

General information about writing job scripts for LSF:
https://www.ibm.com/support/knowledgecenter/en/SSWRJV_10.1.0/lsf_admin/job_scripts_writing.html

Appendix A: Creating an HPC simulation: ANSYS EDT Settings and Building a LINUX Script on the Cluster

This appendix describes the recommended settings to declare when creating a batch command script.

1) On your computer, launch Electronics Desktop. For these instructions, we use a test file (e.g. bp_filter.aedt) supplied by ANSYS staff. It is from a readily-accessible tutorial and can be extended in simulation frequency so that multi-core capability can be exercised. Find and load this example.

To fully test the Cluster with this example, extend the frequency simulation range to involve more cores and RAM. Setting your number of cores greater than four (4) will engage hfsshpc licence features.

2) Open Tools | Options | HPC and Analysis Options… | Configurations tab… | Add… Use this GUI to specify your simulation details. The following are examples, not recommendations.

Analysis Configuration: Machines Tab

Press the Add Machine to List button

In the window Enter Machines for Distributed Analysis, put in your configuration.

Name:   uwhpc.cmc.ca

Cores: 32

GPUs: 0

RAM limit: 90%

Enabled: checkmark

Figure 5. The Analysis Configuration window.

 

Click OK to save and close. Make the selection Active; this will happen automatically if it is the only selection.

Figure 6. The HPC and Analysis Option window showing the Configurations tab.

Click OK again to save and close.

3) Open Tools | Options | HPC and Analysis Options | Options tab. Use this GUI to specify the type of simulation to be run by the LSF job scheduler.

Figure 7. The HPC and Analysis Options window showing the Options tab.

      HPC License: Pool

      Options for Design Type: HFSS

Under Distributed Memory

     MPI vendor: Intel     

     Remote Spawn Command: SSH     

     (The Remote Spawn Command relies on the SSH key created earlier with the setsshkey utility.)

Under HPC Licensing

     Use legacy Electronics HPC licence: True

     Enable GPU: False

     Enable GPU for SBR+ Solve: False

Select OK to save these changes and close this window.

4) Open Tools | Job Management: Select Scheduler…

       Select scheduler: LSF

Choose OK to close the window. You may see an error indicating that your environment is not set correctly. Click Yes to continue.

5) Under Tools | Job Management: Submit Job…

     Scheduler Options tab: Select Fix job name as necessary

The choices on this page allow you to identify your jobs.

Figure 8. The Submit Job GUI in ANSYS EDT.

Compute Resources tab:

Use Automatic Settings/Num variations to distribute:

     Method: Specify Number of Cores and (Optional) RAM

     Total number of cores: 32

     RAM limit: 90%

Note also that choosing a specific number of nodes reserves those nodes exclusively for your simulation. Please do not select this option when specifying hardware.

Analysis Specification tab: The Product path is the location of the ansysedt executable on the Cluster.

   Product path: /CMC/tools/ansys/ansys2020.r1/AnsysEM20.1/Linux64/ansysedt.exe

   Project path: /home//file_name.aedt

6) Save your project for transfer to the Cluster. Close your project. Reserve an SSH terminal in our vcad.cmc.ca menu. Transfer your *.aedt file to the Cluster using a method outlined in the Cluster documents in our Community pages.

The remainder of these instructions are handled in your home account on the Cluster.

7) To create a *.sh script to be run by the LSF Job Scheduler, use the structure outlined in the example at the end of these instructions as a template.

Load and check ANSYS and run an MPI test per the earlier steps.

At the prompt in a LINUX terminal window, enter the command:

     > bsub < [script_name].sh

For example, if your script was called my_ansys_hfss.sh, you would enter

     > bsub < my_ansys_hfss.sh

Here is an example of a script. A line starting with a single # followed by BSUB is read as an option by the LSF Job Scheduler; a line starting with ## is treated as a comment by both LSF and ANSYS.

-*-*-*-*-*-

#!/bin/sh

## embedded options to bsub – start with #BSUB

# — Name of the job —

#BSUB -J ansys_HFSS_example

# — specify queue —

#BSUB -q adept

# — specify wall clock time to run as hh:mm —

##BSUB -W 04:00

# — specify the number of processors —

#BSUB -n 32

# — specify the number of nodes —

##BSUB -R "span[hosts=]"

# — user e-mail address —

##BSUB -u guest@school.ca   ## or other address

# — mail notification —

# — at start —

##BSUB -B

# — at completion —

##BSUB -N

# — Specify the output and error files. %J is the job ID —

# — -o and -e mean append, -oo and -eo mean overwrite —

#BSUB -oo BPFilter_%J.out

#BSUB -eo BPFilter_%J.err


# — example of an ansys command line call —

/CMC/tools/ansys/ansys.2020r1/AnsysEM20.1/Linux64/ansysedt -distributed -machinelist numcores=32 -auto -monitor -ng -batchoptions registry.txt  -batchsolve /eng/home/guest/ansoft/bp_filter.aedt

-*-*-*-*-*- 

The registry.txt file included above contains a direct reference to the hfsshpc feature ‘pool’. Here is a copy of the file contents:

-*-*-*-*-*-*- 

$begin 'Config'

'HPCLicenseType'='pool'

'HFSS/UseLegacyElectronicsHPC'=1

'TempDirectory'='/scratch/@cmc'

## — include other desired registry entries here —

$end 'Config'

-*-*-*-*-*-*-

Use of this registry.txt file is described in the ANSYS 2020 R1 or R2 help files.
