François Gelis (IPhT)
Grégoire Misguich (IPhT)
Modern computers have a growing number of processors, or 'cores'. Their number has grown quickly over the years, from a few in a simple laptop to several tens of thousands in big servers. But to take full advantage of this computing power, one needs codes or software able to distribute a given task over several processors working in parallel. These lectures present an introduction to parallel programming in the context of scientific calculations.
After an introduction to hardware aspects ('shared' versus 'distributed' memory, communication between processors, vectorization, etc.), we will discuss a few solutions based on "already-parallel" software, from linear algebra libraries to high-level computer algebra systems. We will then present two widely used frameworks for code parallelization, OpenMP (Open MultiProcessing) and MPI (Message Passing Interface), sketched below with minimal examples. The lectures are based on simple and concrete examples. They are intended for people with some basic programming knowledge (for instance in C/C++, Python or Fortran), but no prior experience with parallelization.
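As a foretaste of the OpenMP part, here is a minimal sketch in C (our own illustration, not code taken from the lectures) of how a single compiler directive can parallelize a loop on a shared-memory machine. It sums the series 1/i^2 with the partial sums distributed over the available cores; compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int N = 1000000;
        double sum = 0.0;

        /* Split the loop iterations among all available threads;
           the 'reduction' clause combines the per-thread partial
           sums safely, avoiding a race condition on 'sum'. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= N; i++)
            sum += 1.0 / ((double)i * (double)i);

        printf("sum = %.12f (using up to %d threads)\n",
               sum, omp_get_max_threads());
        return 0;
    }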
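For comparison, a minimal MPI sketch of the same computation, again only an illustration assuming a standard MPI installation: here the processes do not share memory, so each one sums its own slice of the indices and a collective MPI_Reduce gathers the partial results on rank 0. Compile with mpicc and run with, e.g., mpirun -np 4.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        /* Each process sums over the indices rank+1, rank+1+size, ... */
        const int N = 1000000;
        double local = 0.0;
        for (int i = rank + 1; i <= N; i += size)
            local += 1.0 / ((double)i * (double)i);

        /* Combine the partial sums on rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.12f (from %d processes)\n", total, size);

        MPI_Finalize();
        return 0;
    }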