Most of my work involves computer simulations of fluid dynamics. During my PhD I wrote a grid-based code for hydrodynamics (HD) and magnetohydrodynamics (MHD) that included a ray-tracing module for calculating the attenuation of radiation from point sources of ionising photons. The algorithms are described in Mackey & Lim (2010), Mackey & Lim (2011), and with more recent updates in Mackey (2012). I am slowly putting together a release version of this code, to be called pion (because its main reason for existence is to model photoionisation).

I wrote some documentation for the code in 2010 (now partly outdated), including results from a suite of standard test problems used to validate the code. Follow the link to read more. In the meantime, the versions of the code used for any of my papers are available on request, although with very little documentation to help make them useful. If you are interested in using the code for an astrophysics project, let me know and I will probably help.

With Dominique Meyer and Norbert Langer at AIfA, we are also using the PLUTO MHD code to model stellar winds. PLUTO is a flexible adaptive-mesh-refinement code for astrophysical HD/MHD; for our purposes we run it in 2D cylindrical coordinates with a thermal-conduction solver.

The code pion is described in the paragraphs below. Results and movies from simulations can be found in my research pages, and in the links to my publications.

Simulation Snapshot

Above is a snapshot of a simulation using pion, taken from Mackey, Mohamed, Neilson, Langer, & Meyer, 2012, ApJ Letters, 751, L10. We modelled the bow shock produced by the runaway star Betelgeuse assuming it has recently evolved from a blue supergiant to its current state as a red supergiant. This can partly explain some of the more puzzling aspects of Betelgeuse's circumstellar medium. Click on the image to open a movie of the simulation's full evolution. Read more here.


Introduction to the pion code

pion is written in C++, and is designed to be as object-oriented (i.e. modular) as possible. This means that, at least in principle, parts of the code can be taken out and plugged into other codes, and parts of other codes can be merged into pion. In practice, this is only really possible for things like chemistry networks and heating/cooling processes, because most other modules depend on the underlying data structures in the computational grid. The main advantage of object-oriented code, however, is that if you edit one part of the code you are very unlikely to break another part. It helps with maintenance, debugging, and extending the code's capabilities.
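To give a flavour of this, here is a minimal sketch of what such a module interface might look like. This is not actual pion code; the class and function names are made up purely for illustration:

    // Hypothetical abstract interface for a chemistry/heating/cooling module.
    // The rest of the code only ever calls these virtual functions, so a
    // different implementation can be swapped in without touching the solver.
    class MicrophysicsBase {
    public:
      virtual ~MicrophysicsBase() {}
      // Integrate the chemical/thermal state of one cell forward by dt (seconds),
      // updating the vector of primitive variables p in place.
      virtual int update_cell(double *p, int nvar, double dt) = 0;
      // Return the net heating/cooling rate (erg cm^-3 s^-1) for the cell state p.
      virtual double cooling_rate(const double *p, int nvar) const = 0;
    };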

The main code modules are:

Problems are set up with a parameter file and an initial-conditions generator. This writes an output file, which the main code executable reads. The output files are also restart files, and contain all of the simulation parameters in the header. Many simulation parameters can also be overridden with command-line arguments. Almost all features of the code can be used without recompilation: HD, MHD, photoionisation, different coordinate systems, different dimensionality of the problem, etc. are all run-time options. This is achieved with inherited classes and interfaces defined by virtual base classes. Some features can be excluded with compile-time flags to make the executable smaller, but I haven't found a noticeable speedup from using these flags.
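As a rough illustration of how the run-time selection works, here is a minimal sketch of a virtual base class plus a factory function that picks an implementation from a string read from the parameter file or the command line. Again, this is not pion's actual code, and all the names are made up:

    #include <iostream>
    #include <memory>
    #include <stdexcept>
    #include <string>

    // Common interface for all equation solvers.
    class SolverBase {
    public:
      virtual ~SolverBase() {}
      virtual void advance(double dt) = 0;  // advance the solution by one timestep
    };

    class HDSolver : public SolverBase {
    public:
      void advance(double dt) override { std::cout << "HD step, dt=" << dt << "\n"; }
    };

    class MHDSolver : public SolverBase {
    public:
      void advance(double dt) override { std::cout << "MHD step, dt=" << dt << "\n"; }
    };

    // Factory: the string would come from the parameter file header or a
    // command-line override, so no recompilation is needed to switch solver.
    std::unique_ptr<SolverBase> make_solver(const std::string &eqns) {
      if (eqns == "hd")  return std::unique_ptr<SolverBase>(new HDSolver());
      if (eqns == "mhd") return std::unique_ptr<SolverBase>(new MHDSolver());
      throw std::runtime_error("unknown solver type: " + eqns);
    }

    int main() {
      auto solver = make_solver("mhd");  // selected at run time
      solver->advance(0.01);
      return 0;
    }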

Features included in pion

Features NOT included in pion

Parallelisation and scaling

The parallelisation is (so far) with MPI: each process is allocated a subdomain of the computational grid and communicates boundary data with its neighbours to fit everything together. There is no OpenMP multithreading yet, but this is planned.
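For illustration only (this is not code from pion, and the decomposition here is one-dimensional to keep it short), the basic pattern is a Cartesian domain decomposition in which each process exchanges its boundary (ghost) cells with its neighbours every timestep:

    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int nproc, rank;
      MPI_Comm_size(MPI_COMM_WORLD, &nproc);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Arrange the processes in a (here 1D) Cartesian grid of subdomains.
      int dims[1] = {nproc}, periodic[1] = {0};
      MPI_Comm cart;
      MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periodic, 0, &cart);
      int left, right;
      MPI_Cart_shift(cart, 0, 1, &left, &right);

      // Local subdomain: NX interior cells plus one ghost cell at each end.
      const int NX = 32;
      std::vector<double> u(NX + 2, static_cast<double>(rank));

      // Exchange ghost cells with neighbours; edge processes get MPI_PROC_NULL,
      // which makes the corresponding send/receive a no-op.
      MPI_Sendrecv(&u[NX],   1, MPI_DOUBLE, right, 0,
                   &u[0],    1, MPI_DOUBLE, left,  0, cart, MPI_STATUS_IGNORE);
      MPI_Sendrecv(&u[1],    1, MPI_DOUBLE, left,  1,
                   &u[NX+1], 1, MPI_DOUBLE, right, 1, cart, MPI_STATUS_IGNORE);

      MPI_Finalize();
      return 0;
    }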
The key design consideration was that the parallel version of the code should produce results identical to the serial version (to machine precision); this is now the case for all problems tested. The exception is problems with chemistry integrated using CVODE: the equations are integrated to a given relative-error tolerance, so machine-precision differences can change the number of iterations taken and hence the solution.
Scaling is very good up to 1024 cores for large problems. A good rule of thumb is that each MPI process should have a subdomain of at least 32x32 cells in 2D, or 32x32x32 in 3D. For smaller subdomains the number of boundary cells becomes comparable to the number of grid cells, and the ratio of computation to communication becomes unfavourable.
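To make the rule of thumb concrete, a quick estimate of the ratio of ghost cells to interior cells (assuming, purely for illustration, a ghost-cell layer two cells deep on every face of a cubic subdomain) shows how quickly the communication overhead grows as the subdomain shrinks:

    #include <cstdio>
    #include <initializer_list>

    int main() {
      const int ng = 2;  // assumed ghost-cell depth on each face
      for (int n : {64, 32, 16, 8}) {
        double interior = static_cast<double>(n) * n * n;
        double total    = static_cast<double>(n + 2*ng) * (n + 2*ng) * (n + 2*ng);
        std::printf("N=%2d: ghost/interior = %.2f\n", n, (total - interior) / interior);
      }
      return 0;
    }
    // Prints roughly: N=64: 0.20, N=32: 0.42, N=16: 0.95, N=8: 2.38

So at 32x32x32 per process nearly half as many cells again are duplicated ghost cells, and below that the duplication (and hence the communication) quickly dominates.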

Plans for future development



