Parallel Programming - Introduction to MPI with C/C++

Published: 2 Mar 2022 by HPC Team Freiburg

The University of Stuttgart is pleased to announce its web seminar for bwHPC users.

Title: Introduction to MPI with C/C++

Date: 15.03.2022

Topic: Parallel Programming: Introduction to MPI with C/C++ on bwUniCluster 2.0 (Web Seminar)

Lecturer: Bärbel Große-Wöhrmann (HLRS)

Web: https://training.bwhpc.de/goto.php?target=crs_682&client_id=BWHPC

Abstract

The Message Passing Interface (MPI) is a widely used parallel programming paradigm for distributed-memory systems. In this course we introduce basic MPI routines and run simple examples on bwUniCluster 2.0. The course is aimed at MPI beginners; no prior knowledge of MPI is required.
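To give a flavour of the basic MPI routines covered, a minimal MPI "hello world" program in C might look roughly like the following sketch (an illustration only, not course material):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);                /* start the MPI environment    */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ID of this process           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes    */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down MPI before exiting */
        return 0;
    }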

Agenda

  • Two-sided point-to-point communication
  • Collective communications (both are sketched in the short example after this list)
  • Exercises and live demo on bwUniCluster 2.0
  • Groups and communicators
  • Cartesian topologies
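The first two agenda items can be illustrated with a rough sketch (again an illustration only, not course material; it assumes the program is started with at least two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Two-sided point-to-point: rank 0 sends one integer to rank 1. */
        int value = 0;
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        /* Collective communication: rank 0 broadcasts its value to every rank. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("Rank %d now holds %d\n", rank, value);

        MPI_Finalize();
        return 0;
    }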

After this course, participants will…

  • have a detailed understanding of basic MPI routines,
  • know how to compile and run MPI-parallel C/C++ programs on bwUniCluster 2.0 (a batch-script sketch follows after this list).
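To give an idea of how this looks in practice, the sketch below compiles with the mpicc compiler wrapper and runs the program from a Slurm batch script. The module name, partition and resource values are assumptions; the settings that actually apply on bwUniCluster 2.0 are documented in the bwHPC wiki.

    #!/bin/bash
    # job.sh - minimal Slurm batch script for an MPI job (illustrative sketch;
    # module name, partition and resource values are placeholders).
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --time=00:10:00
    #SBATCH --partition=dev_single

    module load mpi/openmpi            # assumed module name
    mpicc -O2 -o hello hello.c         # compile with the MPI compiler wrapper
    mpirun ./hello                     # launch one process per Slurm task

The script would then be submitted with sbatch job.sh.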

Prerequisites:

  • the bwUniCluster entitlement https://wiki.bwhpc.de/e/BwUniCluster_2.0_User_Access
  • some experience working on the cluster, e.g. creating simple batch scripts with an editor such as nano, vi or emacs.

Date & Location

Online via Webex, hosted by HLRS, University of Stuttgart, Germany

  • Online Web-Seminar: Tuesday 15.03.2022, 09:00 – 12:00 CET

The Webex invitation will be sent directly by e-mail to the registered participants.

Registration and further information

Further upcoming courses in 2022 that may be of interest to you:

Our course dates do not fit into your busy schedule? Please let us know; we will offer additional course dates if there are enough prospective attendees.

