Most of my work involves computer simulations of fluid dynamics. During my PhD I wrote a grid-based code for hydrodynamics (HD) and magnetohydrodynamics (MHD) that included a ray-tracing module for calculating the attenuation of radiation from point sources of ionising photons. The algorithms are described in Mackey & Lim (2010), Mackey & Lim (2011), and with more recent updates in Mackey (2012). I am slowly putting together a release version of this code, to be called pion (because its main reason for existence is to model photoionisation).
I wrote some documentation for the code in 2010 (now partly outdated), including results from a suite of standard test problems used to validate the code. Follow the link to read more. In the meantime, the versions of the code used for any of my papers are available on request, although with very little documentation or support to make them useful. If you are interested in using the code for an astrophysics project, let me know and I will probably help.
With Dominique Meyer and Norbert Langer at AIfA we are also using the PLUTO MHD code for modelling stellar winds. This is a flexible adaptive mesh-refinement code for astrophysical HD/MHD, and for our purposes it is used in 2D cylindrical coordinates with a thermal conduction solver.
The code pion is described in the paragraphs below. Results and movies from simulations can be found in my research pages, and in the links to my publications.
![Snapshot of a pion simulation of the bow shock around Betelgeuse]()
Above is a snapshot of a simulation using pion, taken from Mackey, Mohamed, Neilson, Langer, & Meyer, 2012, ApJ Letters, 751, L10. We modelled the bow shock produced by the runaway star Betelgeuse assuming it has recently evolved from a blue supergiant to its current state as a red supergiant. This can partly explain some of the more puzzling aspects of Betelgeuse's circumstellar medium. Click on the image to open a movie of the simulation's full evolution. Read more here.
Introduction to the pion code
pion is written in C++, and is designed to be as object-oriented (i.e. modular) as possible. This means that, at least in principle, parts of the code can be taken out and plugged into other codes, and parts of other codes can be merged into pion. In practice, this is only really possible for things like chemistry networks and heating/cooling processes, because most other modules depend on the underlying data structures in the computational grid. The main advantage of object-oriented code, however, is that if you edit one part of the code you are very unlikely to break another part. It helps with maintenance, debugging, and extending the code's capabilities.
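As an illustration of what such a plug-in interface can look like (a hedged sketch with invented class and function names, not pion's actual classes), a chemistry/heating-cooling module might hook into the rest of the code through a virtual base class like this:

```cpp
// Hypothetical sketch: a virtual base class defines the interface that every
// microphysics module must provide, so a new chemistry network only has to
// implement these functions and the rest of the code is unaffected.
#include <vector>

class MicrophysicsBase {
public:
  virtual ~MicrophysicsBase() {}
  // Advance the chemical/thermal state of one cell by a time dt.
  virtual int update_state(std::vector<double> &primitive, double dt) = 0;
  // Return a stability-limited timestep for the cell's current state.
  virtual double timescale(const std::vector<double> &primitive) const = 0;
};

// A concrete module (e.g. simple hydrogen ionisation plus cooling) just
// inherits and fills in the two functions; callers only ever hold a
// pointer to MicrophysicsBase.
class SimpleHydrogenChem : public MicrophysicsBase {
public:
  int update_state(std::vector<double> &p, double dt) {
    // ... integrate ionisation/recombination and heating/cooling here ...
    (void)p; (void)dt;
    return 0;
  }
  double timescale(const std::vector<double> &p) const {
    (void)p;
    return 1.0e10; // placeholder value for the sketch
  }
};
```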
The main code modules are:
- Systems of equations: the equations of inviscid HD, ideal MHD, and ideal MHD with the mixed-GLM divergence-cleaning method (Dedner et al., 2002).
- Coordinate systems: Cartesian coordinates in 1D, 2D, and 3D; cylindrical coordinates in 2D (R,z); and spherical coordinates in 1D (r).
- Hydro/MHD solvers: Roe-type solvers are implemented for HD and MHD, plus a flux-vector-splitting solver and a (slow) exact Riemann solver for HD.
- Parallel code communication: a COMMS class using either text-file communicators or the Message Passing Interface (MPI).
- Computational grid: the grid is a multiply-linked list of finite-volume cells (or zones); a minimal sketch of such a cell structure is shown after this list. Most commonly-used boundary conditions are implemented. When run in parallel each process has a subdomain of the full grid, and inter-process communication is used to share boundary data.
- Microphysics: chemistry and heating/cooling processes. A number of different classes have been written for different situations.
- Raytracing, on serial and parallel grids, from point sources or from sources at infinity. This uses a short-characteristics raytracer, which is somewhat diffusive but very fast and scales well.
- Data input and output (I/O), including ASCII, FITS, and Silo formats.
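As mentioned in the grid item above, the grid is held as a multiply-linked list of finite-volume cells. Below is a minimal, hypothetical sketch (illustrative names only, not pion's actual data structures) of what such a cell and its traversal might look like in C++:

```cpp
// Each cell stores its state vector and pointers to its neighbours in every
// direction, so the grid can be traversed without any index arithmetic.
enum Direction { XN = 0, XP, YN, YP, ZN, ZP, NDIRS };

struct Cell {
  double state[8];     // conserved variables (e.g. density, momentum, energy, B)
  double pos[3];       // cell-centre position
  Cell  *ngb[NDIRS];   // pointers to neighbouring (or boundary/ghost) cells
  Cell  *next;         // next cell in the 1D traversal list
};

// Typical sweep: walk the list, using neighbour pointers for the stencil.
inline void example_sweep(Cell *first) {
  for (Cell *c = first; c != 0; c = c->next) {
    Cell *left  = c->ngb[XN];
    Cell *right = c->ngb[XP];
    // ... compute fluxes between c and its neighbours here ...
    (void)left; (void)right;
  }
}
```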
Problems are set up with a parameter file and an initial-condition generator. This writes an output file, which the main code executable then reads. The output files are also restart files, and contain all of the simulation parameters in the header. Many simulation parameters can also be overridden with command-line arguments. Almost all features of the code can be used without recompilation, e.g. HD, MHD, photoionisation, different coordinate systems, and different dimensionality of the problem are all run-time parameters. This is achieved with inherited classes and interfaces defined by virtual base classes. Some features can be excluded with compile-time flags to make the executable smaller, but I haven't found a noticeable speedup from using these flags.
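To illustrate how run-time selection through virtual base classes can work (a hedged sketch with invented names, not pion's actual class hierarchy), the system of equations could be chosen from a string read from the parameter file or the command line:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Abstract interface for any system of equations the code can integrate.
class EqnSolverBase {
public:
  virtual ~EqnSolverBase() {}
  virtual void advance_timestep(double dt) = 0;
};

class HDSolver : public EqnSolverBase {
public:
  void advance_timestep(double dt) { /* Euler update */ (void)dt; }
};

class MHDSolver : public EqnSolverBase {
public:
  void advance_timestep(double dt) { /* ideal-MHD update */ (void)dt; }
};

// Factory: pick the concrete solver at run time from a parameter string.
std::unique_ptr<EqnSolverBase> make_solver(const std::string &eqn_type) {
  if (eqn_type == "hd")  return std::unique_ptr<EqnSolverBase>(new HDSolver());
  if (eqn_type == "mhd") return std::unique_ptr<EqnSolverBase>(new MHDSolver());
  throw std::runtime_error("unknown equation type: " + eqn_type);
}
```

The rest of the code then only holds a pointer to the base class, so adding a new system of equations means writing one new class and adding one line to the factory function.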
Features included in pion
- Solving Euler or ideal MHD equations on serial and parallel grids, with any of the coordinate systems described above. First- or second-order accurate (in time and space) integration schemes are supported.
- Calculation of column densities of neutral/ionised/all gas from radiation sources (on or off the grid domain, in serial and in parallel).
- Passive tracers advected with the flow, e.g. for chemical species.
- Dedner et al. (2002) divergence cleaning for multi-dimensional MHD simulations (with some improvements); the standard mixed-GLM system is written out after this list.
- The code has been used in parallel with at least 1024 MPI processes on two different supercomputers (Stokes at ICHEC and JUROPA at Jülich). Results of scaling tests are presented in Mackey (2012, A&A, 539, A147).
- There are a few microphysics integrators available, based on the descriptions in the references above. A standard base class for microphysics defines the interface to the rest of the code, so it is relatively simple to add complex chemical networks and heating/cooling functions. Dealing with radiation fields and extinction is more complicated and only partially supported, so adding in extinction-dependent photo-rates is not trivial. An adaptive integrator using the CVODE solver (part of the SUNDIALS library) is available.
- The code should run on Unix/Linux systems with standard GNU or Intel compilers, and also on OS X (10.6 and later) if Xcode is installed. The makefile may need modification for specialised systems.
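For reference, and only as a sketch of the standard method (pion includes some improvements on top of this, as noted above), the mixed-GLM approach of Dedner et al. (2002) couples a scalar field $\psi$ to the induction equation:

$$
\frac{\partial \mathbf{B}}{\partial t}
  + \nabla\cdot\left(\mathbf{v}\,\mathbf{B}^{T} - \mathbf{B}\,\mathbf{v}^{T}\right)
  + \nabla\psi = 0,
\qquad
\frac{\partial \psi}{\partial t} + c_{h}^{2}\,\nabla\cdot\mathbf{B}
  = -\frac{c_{h}^{2}}{c_{p}^{2}}\,\psi ,
$$

where $c_h$ is the hyperbolic cleaning speed at which divergence errors are advected off the grid, and $c_p$ sets the rate at which they are damped.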
Features NOT included in pion
- Gravity: neither self-gravity nor an external potential field is included. I plan to add these eventually, at least for a periodic simulation domain.
- Multi-threaded execution (with e.g. OpenMP). Planned for the near future.
- Photoionisation from more than one radiation source is not supported. The raytracer can have as many sources as required, but the microphysics integrators assume there is only one ionising source.
- The grid resolution is fixed, and all cells must be cubic/square and the same size. Support for logarithmic grids in spherical coordinates is planned. Static nested grids are also planned, and eventually an adaptive mesh-refinement grid is planned.
- I have never tried to compile the code on a Windows-based system, and I don't have the expertise to attempt it.
- The code has not been compiled or run on Blue Gene or GPU-based systems. In principle the Blue Gene compilation should work, but external libraries are required (CVODE, FITS, Silo) and this complicates the process. I just haven't spent enough time figuring out how to make it work.
Parallelisation and scaling
The parallelisation is (so far) with MPI, where each process is allocated a subdomain of the computational grid and exchanges boundary data with its neighbours so that everything fits together. There is no OpenMP multithreading as yet, but this is planned.
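As a concrete (and hedged) illustration of what this boundary exchange involves, the sketch below shows a non-blocking ghost-cell swap along one grid direction; the function name and packing scheme are invented for illustration and are not pion's actual COMMS interface:

```cpp
#include <mpi.h>
#include <vector>

// Swap one layer of boundary data with the left/right neighbours along one
// direction.  The send buffers would be packed from the outermost grid cells,
// and the receive buffers unpacked into the local ghost cells.
void exchange_boundaries(std::vector<double> &send_left,
                         std::vector<double> &send_right,
                         std::vector<double> &recv_left,
                         std::vector<double> &recv_right,
                         int left_rank, int right_rank, MPI_Comm comm)
{
  MPI_Request req[4];
  // Post receives first, then sends, then wait for all four to complete.
  MPI_Irecv(recv_left.data(),  (int)recv_left.size(),  MPI_DOUBLE, left_rank,  0, comm, &req[0]);
  MPI_Irecv(recv_right.data(), (int)recv_right.size(), MPI_DOUBLE, right_rank, 1, comm, &req[1]);
  MPI_Isend(send_left.data(),  (int)send_left.size(),  MPI_DOUBLE, left_rank,  1, comm, &req[2]);
  MPI_Isend(send_right.data(), (int)send_right.size(), MPI_DOUBLE, right_rank, 0, comm, &req[3]);
  MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}
```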
The key design consideration was that the parallel version of the code should produce identical results to the serial version (to machine precision); this is now the case for all problems tested. An exception is problems with chemistry integrated using CVODE: this integrates the equations to a given relative-error tolerance, so machine-precision differences can change the number of iterations used and hence the solution.
Scaling is very good up to 1024 cores for large problems. A good rule of thumb is that each MPI process should have a subdomain of at least 32x32 cells in 2D, or 32x32x32 in 3D. For smaller subdomains the number of boundary cells becomes comparable to the number of grid cells, and the ratio of computation to communication becomes unfavourable.

Plans for future development
- New grid implementation to make boundary communication simpler.
- Separating time integration from spatial flux-calculations in code structure.
- Allowing MPI processes to control more than one sub-domain, and adding multithreading.
- Adding treatment of recombination/scattered radiation to the raytracer.
- 2D spherical coordinates.
- Sensible treatment of thermal conduction.
- Nested grids and adaptive mesh-refinement.