I started my work on gravitational lensing with my Master's thesis (Diplomarbeit), which comprises
a self-contained derivation of the basic lens theory. The main approach in my
thesis is to write all the observables (image separations, time delays,
magnification matrices) as integrals of the surface-mass density multiplied by suitable weight functions.
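As an illustration of this weight-function picture (written here in standard notation, not the specific formulation of the thesis), the scaled deflection angle is exactly such an integral:

```latex
\[
\vec{\alpha}(\vec{\theta}) = \frac{1}{\pi}\int \mathrm{d}^2\theta'\,
\kappa(\vec{\theta}\,')\,\frac{\vec{\theta}-\vec{\theta}\,'}{|\vec{\theta}-\vec{\theta}\,'|^{2}} ,
\qquad
\kappa = \frac{\Sigma}{\Sigma_{\mathrm{cr}}} ,\qquad
\Sigma_{\mathrm{cr}} = \frac{c^{2}}{4\pi G}\,\frac{D_{\mathrm{s}}}{D_{\mathrm{d}} D_{\mathrm{ds}}} .
\]
```

Image separations, time delays and magnification matrices then follow from integrals of the same type with their own weight functions.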
Other “highlights” include explicit definitions of the
‘thickness’ of lenses (which is not the thickness of the mass
distribution) and a discussion of the idea of using individual lenses to
constrain cosmological parameters even without time-delays. This idea is based
on measuring the Dds/Ds ratio, either by using
lensed sources at different redshifts or by combining lensing information with
measurements of velocity dispersions. This concept was presented in some
detail at the Golden Lenses
conference at Jodrell Bank in 1997.
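To make the second route concrete (for the simplest possible case of a singular isothermal sphere, which is an assumption made only for this illustration), the Einstein radius depends on the distances solely through the ratio Dds/Ds:

```latex
\[
\theta_{\mathrm{E}} = 4\pi\,\frac{\sigma_v^{2}}{c^{2}}\,\frac{D_{\mathrm{ds}}}{D_{\mathrm{s}}} ,
\]
```

so that a measured image separation combined with a measured velocity dispersion σ_v yields Dds/Ds directly.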
With the term ‘classical’ I refer to lens modelling with lensed
point-like or very compact sources. Observables in this case are image
positions and flux density ratios or relative magnification matrices.
These are used as constraints for parametric mass models of the lenses.
The advantage of classical modelling is the relative simplicity, which is
entirely due to the simplicity of the source.
In lens modelling, the true structure of the source is never known, so that it
always has to be an (implicit or explicit) part of the model.
With point-sources, the source model usually only consists of source
position(s) and maybe true flux densities. For slightly resolved but compact
sources, shape parameters can be added. The important point is that the source
can be described by a very small number of parameters, which can be fitted
simultaneously with the mass model parameters.
In order to fit a lens model to observed image positions, one has to find the observables that would correspond to a given lens+source model. In lensing this involves the highly non-trivial inversion of the lens equation. With a given lens model, it is usually easy to find a source position given an image position. The inverse process, finding all image positions for a given source position, on the other hand, is a real nightmare. Instead of comparing the predictions from the model with the measurements in the image plane, one can approximate the comparison by projecting it into the source plane. The idea is to back-project (easy!) the observed image positions into the source plane. For a correct model, the source positions of all images should coincide. In reality, one will have deviations, which can be minimised to find the best lens model. The deviations can be approximately projected back to the image plane by using the magnification matrices, but the accuracy of this approach is not always sufficient. Intermediate approaches can also be used, in which the comparison is made in the image plane, but the inversion of the lens equation is still avoided.
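As a rough sketch of such a source-plane fit (a toy example only: the singular-isothermal-sphere-plus-shear model, the image positions and the parameter names are all invented for illustration), one can back-project the images for a trial model and minimise the scatter of the inferred source positions:

```python
import numpy as np
from scipy.optimize import minimize

def deflection(theta, theta_E, gamma1, gamma2):
    """Deflection of a singular isothermal sphere plus external shear
    (toy model, for illustration only)."""
    r = np.hypot(theta[..., 0], theta[..., 1])
    alpha_sis = theta_E * theta / r[..., None]          # SIS: deflection along the radial unit vector
    shear = np.stack([gamma1 * theta[..., 0] + gamma2 * theta[..., 1],
                      gamma2 * theta[..., 0] - gamma1 * theta[..., 1]], axis=-1)
    return alpha_sis + shear

def source_scatter(params, images):
    """Back-project all observed image positions with a trial lens model and
    return the scatter of the resulting source positions (zero for a perfect model)."""
    theta_E, gamma1, gamma2 = params
    beta = images - deflection(images, theta_E, gamma1, gamma2)
    return float(np.sum((beta - beta.mean(axis=0)) ** 2))

# four hypothetical image positions of a quad (arbitrary units)
images = np.array([[1.10, 0.10], [-0.90, 0.30], [0.20, 1.00], [-0.10, -1.05]])

fit = minimize(source_scatter, x0=[1.0, 0.0, 0.0], args=(images,), method='Nelder-Mead')
print("best-fitting theta_E, gamma1, gamma2:", fit.x)
```

A real implementation would at least weight the deviations with the magnification matrices to approximate image-plane errors, and would include flux ratios as additional constraints, as described above.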
I developed my own software to perform different kinds of classical lens modelling using very general lens models. The goodness of fit can be measured in the source plane or in the image plane, with several different algorithms. Results of my work in this field have been published for several lens systems:
First models were presented in the context of the detection of the lens galaxy
and, with some estimates of their accuracy, in the interpretation of the
first time-delay estimate.
For the latter publication I also analysed the light-curves to determine the
time delay. The models were then used to find the most probable redshift of
the lens, which was not known at that time, so that a determination of the
Hubble constant was not possible.
For this lens I produced the first models, which were presented in an invited talk and in a paper about the lens.
RXJ0911+0551 is the rare case of a quad in which the radial distances of the images from the lens centre are not very similar. Generally, four images are produced by a small perturbation of an Einstein ring, which makes the radii very similar. Having different distances is advantageous, because it provides constraints on the radial mass profile.
I was co-author of a paper that
claimed a detection of the lensing galaxy in this system.
After subtraction of the QSO images, our data showed strong residuals
almost exactly halfway between them.
New data from the CASTLE survey provide a very different position, quite close to the fainter of the two QSO images. Our publication may nevertheless be interesting for lens modellers, because it is to my knowledge the first time that the possibility of up to eight (or nine, if the model is not singular) images of one source with simple elliptical models plus external shear was mentioned.
This BL Lac is not lensed itself, but there is evidence that its host galaxy
is in turn acting as a lens that distorts background galaxies into striking arcs.
If the interpretation is correct, the mass of the galaxy
is extremely high. We have a paper about the
determination of the
redshift of this object which includes a short section on lensing.
The interpretation as lensed arcs is disputed by others, but so far none of the alternatives is really proven.
This is still my favourite lens system. I started with classical modelling of
the system, only to learn that the constraints provided by the two bright
images are not sufficient to determine the position of the lensing galaxy with
any accuracy. The galaxy is not detected at radio wavelengths. Optical
observations, on the other hand, are difficult because of the very small image
separation in the system.
Later we succeeded in measuring the optical position, using 36 orbits with HST to produce what was then the deepest optical image ever taken. The analysis was difficult and is presented in a publication.
The difficulties with classical modelling of this system were the main motivation to start my work on LensClean.
Global VLBI observations of this system were able to detect and resolve both images of the jet. These provided additional constraints for classical models and were used to study scatter broadening in the lensing galaxy. This brings us to the subject of using lenses as a tool to study propagation effects. More VLBI observations were conducted in this context.
More information about my work on this system can be found below.
In order to understand which parameters of a mass model are well constrained
by the observables, and which ones are affected by degeneracies, it is
necessary to understand the properties of the lens mapping both in general and
for the particular class of models that is used.
A well-known degeneracy is the so-called mass-sheet degeneracy. If the density of a mass model is scaled by 1-k while at the same time a homogeneous mass sheet with density k is added, the observables will not change, provided the size of the source is also scaled by 1-k. The additional constant mass density amplifies the effect of the scaled mass distribution, so that the total effect is the same. The only observables which are affected are the time-delays between the images. This means that the mass-sheet degeneracy is a serious problem for the determination of the Hubble constant.
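Written out (standard notation, using the same k as above), the transformation reads:

```latex
\[
\kappa'(\vec{\theta}) = (1-k)\,\kappa(\vec{\theta}) + k , \qquad
\vec{\beta}' = (1-k)\,\vec{\beta} , \qquad
\Delta t' = (1-k)\,\Delta t ,
\]
```

so that image positions and flux ratios are unchanged, while the predicted time-delays, and with them the inferred Hubble constant, scale with 1-k.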
Another important degeneracy is the one of the radial mass profile. If the
images are located at similar distances from the lens centre, it is often
possible to fit the observables with models of very different mass
profiles. This again affects the time-delays, which become smaller for
shallower profiles and larger for steeper ones.
The supervisor of my PhD work, Sjur Refsdal, showed that this degeneracy is
basically the same as the mass-sheet degeneracy. Scaling the mass profile by a
factor 1-k and adding a constant density makes the profile shallower while
keeping the image positions constant, provided the source is also scaled.
We presented this idea in a conference poster in 1999.
Later I extended this work to include the effects of ellipticity and external
shear for quadruply lensed systems. In my publication about the
subject I studied a very general family of lens models in which the
potential follows a power-law r^b in the radial direction
and can have an arbitrary azimuthal shape. I found that when the external
shear is kept fixed, the time-delays (or the Hubble constant, if the
time-delays are measured) scale with (2-b)/b. If, on the other hand,
the shear is fitted but the ellipticity kept constant, the scaling is weaker
and goes like 2-b, a fact that had already been observed in the past
for more specialised models.
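Schematically (same exponent b as above; this merely restates the scalings from the paper):

```latex
\[
\psi(r,\varphi) = r^{\,b}\,F(\varphi) , \qquad
\Delta t \propto \frac{2-b}{b} \ \ \text{(shear fixed)} , \qquad
\Delta t \propto 2-b \ \ \text{(shear fitted, ellipticity fixed)} .
\]
```

Both scalings coincide for the isothermal case b = 1.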
In the same paper I introduced the concept of a “critical
shear”. A shear of this value (and direction) has the effect that (when
the ellipticity is fitted accordingly) all time-delays exactly vanish. This is
still true if the shear is then varied orthogonally relative to this critical
shear. There is a nice geometric property: the critical shear is defined by the
ellipticity of the ‘roundest’ ellipse going through all four images.
This has a direct and significant consequence for the determination of the Hubble
constant from time-delays. Systems which are very ‘round’ have a
small critical shear, which means that unknown contributions to the real shear
have a large effect on the time-delays and the Hubble constant. More
asymmetric systems are much more robust in this respect.
The only way to break degeneracies in lens modelling is to include additional
information. There are very good reasons to use exclusively information from
lensing itself, because in this way we can avoid depending on complicated
additional astrophysics and on untested or unjustified assumptions. In lensing,
it is clear that each additional lensed source component contributes its own
set of constraints for the lens models. As long as the components are all
compact, classical lens modelling can be used to
exploit the additional information. Much more general is the use of extended sources, for which the number of subcomponents and thus of constraints can be very large. The disadvantage is that modelling such systems is a very complex task. The basic difficulty is that the true (unlensed) source structure is not known a priori but must be fitted simultaneously with the lens. In the case of radio observations, it is not a good idea to first create maps of the lensed source and then use these to model the lens, because the artifacts created by the deconvolution will affect the lens modelling results. A better approach is to combine the two parts and try to solve the complete inverse problem. This has been tried before with the development of LensClean. I had data available for the lens system B0218+357, where the original LensClean algorithm proved to be insufficient to determine the position of the lens galaxy with good accuracy.

The idea of LensClean is very simple. In the standard Clean algorithm, a radio source is decomposed into a collection of point-like components, which can be placed arbitrarily. In the lensed situation, it has to be taken into account that only certain combinations of multiply lensed components, with their corresponding magnification ratios, are allowed. LensClean builds a model by decomposing the source plane into point-like components, so that a consistent model of the source is found for a given lens model. In an outer loop, the lens model can then in turn be varied to produce a simultaneous fit of lens and source.
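The inner/outer-loop structure can be illustrated with a deliberately crude one-dimensional toy version (entirely a sketch: a point-mass-like toy lens, a Gaussian dirty beam and invented grid, noise and gain values, not the real algorithm working on interferometric data):

```python
import numpy as np

# toy 1D lens: beta = theta - theta_E**2 / theta (point-mass lens restricted to one axis)
def images_and_mags(beta, theta_E):
    """All image positions and (absolute) magnifications for the 1D toy lens."""
    disc = np.sqrt(beta**2 + 4 * theta_E**2)
    thetas = np.array([(beta + disc) / 2, (beta - disc) / 2])
    mags = 1.0 / np.abs(1 + theta_E**2 / thetas**2)       # |d(beta)/d(theta)|**-1
    return thetas, mags

def dirty_beam(theta, sigma=0.05):
    return np.exp(-0.5 * (theta / sigma)**2)

def response(theta_grid, beta, theta_E):
    """Dirty-map response of a single point-like source-plane component."""
    thetas, mags = images_and_mags(beta, theta_E)
    return sum(m * dirty_beam(theta_grid - t) for t, m in zip(thetas, mags))

# simulated "observation" for a true lens and a single true source component
theta_grid = np.linspace(-3, 3, 601)
true_theta_E, true_beta, true_flux = 1.0, 0.3, 1.0
rng = np.random.default_rng(1)
dirty_map = (true_flux * response(theta_grid, true_beta, true_theta_E)
             + 0.01 * rng.standard_normal(theta_grid.size))

# inner loop: LensClean-style decomposition for one trial lens model
def lensclean_residual(trial_theta_E, n_iter=50, gain=0.5):
    residual = dirty_map.copy()
    source_grid = np.linspace(-1, 1, 101)
    responses = np.array([response(theta_grid, b, trial_theta_E) for b in source_grid])
    norms = np.sum(responses**2, axis=1)
    for _ in range(n_iter):
        fluxes = responses @ residual / norms       # best flux of one new component per position
        improvement = fluxes**2 * norms             # decrease of the squared residual
        k = np.argmax(improvement)
        residual -= gain * fluxes[k] * responses[k]
    return np.sum(residual**2)

# outer loop: vary the lens model and keep the best fit
trials = np.linspace(0.8, 1.2, 21)
chi2 = [lensclean_residual(t) for t in trials]
print("best trial theta_E:", trials[int(np.argmin(chi2))])
```

The inner loop only ever subtracts consistent image combinations of a single source-plane component; the outer loop keeps the lens model with the smallest final residual. Refinements such as the bias correction described below are not included in this sketch.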
In order to extract all available information from the radio data, I developed a new version of LensClean and applied it to the case of B0218+357. One of the new concepts introduced is the correction for bias effects in LensClean. The standard algorithm preferentially cleans regions of higher image multiplicity, because the residuals decrease faster there. This is corrected in my unbiased LensClean, which helped a lot to obtain good results.
LensClean relies on a good method to invert the lens equation, which means
finding all image positions for a given source position and lens model. This
has to be done so many times that a reliable (failure rate below one in
100 million) and fast method is essential. For this purpose I developed
‘LenTil’, a tiling algorithm that can find all images
even for complicated models extremely reliably. The basic idea is simple, but
the implementation became pretty complicated, as explained in my PhD thesis.
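To illustrate what a tiling approach to the inversion looks like (a minimal sketch only, with a toy point-mass lens, no adaptive refinement and invented grid parameters; the actual LenTil is far more careful than this):

```python
import numpy as np

def deflection(x, y, theta_E=1.0):
    """Toy point-mass lens; any differentiable lens model could be used here."""
    r2 = x**2 + y**2
    return theta_E**2 * x / r2, theta_E**2 * y / r2

def lens_map(x, y):
    ax, ay = deflection(x, y)
    return x - ax, y - ay

def contains(tri, p):
    """True if point p lies inside triangle tri (consistent cross-product signs)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    s = [(x2 - x1) * (py - y1) - (y2 - y1) * (px - x1),
         (x3 - x2) * (py - y2) - (y3 - y2) * (px - x2),
         (x1 - x3) * (py - y3) - (y1 - y3) * (px - x3)]
    return all(c >= 0 for c in s) or all(c <= 0 for c in s)

def find_images(beta, n=150, size=3.0, r_min=0.1):
    """Tile the image plane with triangles, map each triangle to the source
    plane and keep those whose mapped version contains the source position."""
    xs = np.linspace(-size, size, n + 1)
    hits = []
    for i in range(n):
        for j in range(n):
            square = [(xs[i], xs[j]), (xs[i + 1], xs[j]),
                      (xs[i + 1], xs[j + 1]), (xs[i], xs[j + 1])]
            if min(np.hypot(x, y) for x, y in square) < r_min:
                continue        # skip the singular centre of this toy model
            for tri in (square[:3], [square[0], square[2], square[3]]):
                mapped = [lens_map(x, y) for x, y in tri]
                if contains(mapped, beta):
                    # candidate image: triangle centroid (a real code would refine this)
                    hits.append(tuple(np.mean(tri, axis=0)))
    return hits

print(find_images(beta=(0.25, 0.15)))
```

A production code would refine the candidate triangles (for example by subdivision or Newton iteration) to reach the required accuracy and would have to deal robustly with critical curves and nearly singular models; these complications are exactly what makes a reliable implementation difficult.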
As said above, this is one of the most interesting
lens systems. In my LensClean
modelling work, I was for the first time able to determine the position of
the lensing galaxy with an accuracy sufficient for a serious application of Refsdal's method
to determine the Hubble constant from a lensing time-delay. The effort needed
for this was considerable, but still significantly less than that of projects
like the HST key project to determine the Hubble constant. Results for the
lens position and cosmology can be found in my publication. The lens models I
found form the basis for most of the later work on this lens system.
One might be sceptical about this very indirect method of determining the lens position with LensClean. It took me many tests to convince myself that the result is reliable. Finally it was possible to measure the position directly with a very deep HST exposure. The analysis of the maps confirms my results, even though the accuracy of the optical measurement is not comparable with that of my LensClean result. Higher-resolution observations were later made with the VLA + Pie Town at 15 GHz. The analysis is still in progress.
The best lens model is the most important result of my LensClean work. On the
way to this goal, LensClean also produces the optimal model of the (unlensed)
source plane. I developed a new method to take into account the resolution of
the instrument combined with the lens and ‘convolve’ the best
model with what I have defined as the ‘Clean beam in the source
plane’. This approach is superior to other ideas, but still not
optimal. I am working on methods in which the regularisation is incorporated
directly into the mapping process and not applied afterwards.
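One simple local version of this idea (shown here only for illustration, and not necessarily identical to the definition used in my publications) maps the Clean beam through the local Jacobian of the lens equation:

```latex
\[
A = \frac{\partial\vec{\beta}}{\partial\vec{\theta}} , \qquad
C_{\mathrm{source}} \;\approx\; A\, C_{\mathrm{beam}}\, A^{\mathsf T} ,
\]
```

i.e. a Gaussian Clean beam with covariance C_beam in the image plane corresponds locally to a Gaussian with covariance A C_beam Aᵀ in the source plane, so that highly magnified regions are effectively resolved with a smaller beam.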
The question of frequency-dependent flux ratios of the bright images was (with
contributions from me) investigated by
Mittal et al. (2006) and
Mittal, Porcas &
Wucknitz (2007). We found that the proposed structural changes of the
source with frequency (together with magnification gradients) cannot be
responsible for this effect. Instead we found a plausible explanation in
free-free absorption in the ISM of the lensing galaxy.
The same lens was also the target of a 90cm VLBI experiment that led to the very
first VLBI map of an Einstein ring.
We still do not understand the significant differences between the structures
seen at 90cm and those known from higher frequencies (e.g. 2cm).
The data from this experiment also served as the basis for the first wide-field
VLBI project at low frequencies. Our results (published in Lenc et al., 2008) provide important
input for future low-frequency work, in particular with LOFAR.
In this system we recently found a lensed water maser at a redshift
of z=2.64, by far the most distant detection of water in the Universe.
This discovery, which has triggered several surveys for lensed maser emission,
also motivates us to study this lens system in more detail, in particular
concerning the mass distribution of the lens. For this purpose we carried out
two major global continuum VLBI experiments (at 1.7 and 8.4 GHz) to produce
better maps as input for LensClean modelling. In addition, Andreas
Brunthaler at the neighbouring MPIfR studied the source with global VLBI
at the water line in order to determine the position (and maybe structure) of
the water maser emission.
Currently we also monitor this system with the Arecibo telescope.
The 1.7 GHz VLBI data are being analysed by my student Filomena Volino, while I am
analysing the 8.4 GHz data myself. Preliminary maps are featured in a recent EVN newsletter.
This lensed LBG was discovered by Allam et al. (2007).
Radio observations, started by Mike Garrett and then continued by Filomena
Volino and me, show that the star-formation rate is considerably lower than
estimated in the discovery paper. The reason may be an overestimated dust
extinction.
This is work in progress.
According to reports from a VLA snapshot observation carried out by another group,
this interesting lens (a lensed star-forming galaxy with a Seyfert core) was not detected at radio wavelengths. We reanalysed the
same data and found a very clear detection. Subsequently, we re-observed the
system with MERLIN (L-band), the VLA (C-band) and with the e-EVN (L-band). We
clearly detect the lens and the lensed background source at the lower
resolutions. With VLBI, however, only the core of the lens itself is
detected. This means that the AGN core of the background object is too weak to
be detected. Instead we see the lensed extended star-forming regions.
A very elegant approach to calculating the variations induced by microlensing of
large sources was published by Refsdal & Stabell as early as
1991. Unfortunately, that method cannot be generalised successfully to
situations with external shear. I developed an alternative approach that
generalises well to the case with shear.
The effect of shear (and also the shape of the source) can be very significant
and change the expectation from the shear-less case by factors of a few.
My analytical approach was confirmed by extensive numerical
simulations.
Studies of extinction and other propagation effects always suffer from our
ignorance of the true source structure and spectrum. In the case where lensing
produces several images, we do at least know that their intrinsic spectra must
be the same so that any differences in the observed spectra can be used to
study differential propagation effects.
In a case study, I used HST
spectra to determine the difference between the spectra of the two images in the lens
HE0512-3329. I found that both differential extinction and microlensing
produce important differences. By studying the continuum separately from the
emission lines, I was able to disentangle the two effects for the first time. This
sets a new standard for future projects in this field. Unfortunately, many
groups still study differential extinction neglecting microlensing or
vice-versa. At least in the case of HE0512-3329 I showed that this is a very
questionable approach, because both effects can be about equally strong.
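The logic can be summarised in one relation (standard notation, written here only for illustration): for two images A and B of the same source, the magnitude difference as a function of wavelength is

```latex
\[
m_{\mathrm A}(\lambda) - m_{\mathrm B}(\lambda)
  = -2.5\,\log_{10}\frac{\mu_{\mathrm A}}{\mu_{\mathrm B}}
    \;+\; \Delta A(\lambda) ,
\]
```

where the first term includes microlensing (and differs between the compact continuum and the extended line-emitting regions) and ΔA(λ) is the differential extinction. Comparing continuum and emission lines therefore separates the two contributions.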
Another system where I am involved in the analysis is MS0451.6-0305, in which an
ensemble of background sources is magnified by a cluster of galaxies in the
foreground.
The lens spreads the emission from several components of a merging system over
such a wide area that resolved studies can be performed with current radio
telescopes. A more detailed study was carried out later and a new paper was submitted.
Together with Neal Jackson, Mike Garrett and others, I am working on a project
to use LOFAR
surveys to search for lenses.
This is a very ambitious project, and the success will largely depend on how
well the international baselines can be incorporated in the surveys. I am
therefore deeply involved in the development of new analysis methods for
LOFAR (and other projects) and in commissioning projects.
An e-MERLIN legacy project (PI: Neal Jackson) to study all known radio lenses
was approved recently. My role in that project is the development of imaging
methods for wide-band observations and the final lens modelling.
The lens effect of a moving object shows some surprising properties that seem
to contradict common sense at first glance.
One aspect is that radial motion in the same direction as the light
propagation does decrease the light deflection, even though
the "interaction time" seems to be less than for a lens at rest. For
slow test particles, the effect is indeed the opposite. This also implies that
there is a certain critical speed (c/√3) where the speed of the lens has
no effect.
It has been argued that gravitational lensing may have focusing effects on
quasi-static gravitational fields in a similar way as it focuses light.
My calculations do not confirm this view. Instead I show that static fields
are only affected locally (without any long-range focusing). The situation is
different for gravitational waves with wavelengths smaller than the typical
scale of the lens. Gravitational lenses can thus focus gravitational waves.
Gravitational lenses with multiple images act like a Young's slit experiment
on (at least) galactic scales. The question arises whether this situation can be
used as an interferometer to resolve small structures of the source or (via
correlation of electrical field signals from the images) to determine
time-delays with extreme accuracy. My analysis of the situation shows
that this would in principle be possible, provided that the sources are
extremely small. For realistic extended sources, the time-delay varies over
the area of the source so that the coherence is lost. This is a real pity,
because otherwise we could use interferometric observations of lensed extended
sources to determine the deflection field and potential without significant
degeneracies.
There is a classical
theorem in lensing according to which every lens produces at least one
image with a magnification greater than unity. This seems to contradict total
flux conservation if we consider closed surfaces around the source.
The standard explanation (or excuse) is that the action of the lens modifies
the geometry of the Universe, so that a local increase of the flux density does not
necessarily imply an increase of the total flux globally. In my publication about the subject, I
show that this explanation misses an important point of the problem. The
paradox can be resolved if we go beyond the usual approximation of small
angles relative to the optical axis. In this spherical formalism, the
deflection angle is modified, with the effect that the magnification can
actually drop below unity so that the theorem is not valid anymore.
Concerning the potential theory of lensing, this is related to a modification
of Poisson's equation on the sphere. Additional important technical details can be
found in the paper.
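For reference, the usual flat-sky argument behind the theorem (standard textbook reasoning, not the spherical calculation of the paper): at the image corresponding to a minimum of the arrival-time surface both eigenvalues 1-κ∓γ of the Jacobian are non-negative, which together with κ ≥ 0 gives 0 ≤ 1-κ ≤ 1 and therefore

```latex
\[
\mu^{-1} = (1-\kappa)^{2} - \gamma^{2} \;\le\; (1-\kappa)^{2} \;\le\; 1
\qquad\Longrightarrow\qquad \mu \ge 1 .
\]
```

It is precisely this chain of inequalities that no longer holds once the small-angle approximation is dropped and the deflection angle is modified on the sphere.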