Image coaddition is the last step in data reduction, and one of the more difficult ones. Figure 1, which I have stolen from Andy Fruchter's drizzle page, illustrates the problem.
Fig. 1: Mapping of an input image onto an output image (resampling).
The four red pixels correspond to pixels in one of the input images. After applying linear shifts, rotations and distortions, the pixels land on a new output grid, which can have a different pixel scale. The difficulty lies in redistributing the flux contained in the red pixels onto the (possibly finer) output pixels. Calculating the exact overlap integrals, i.e. the fraction of each red pixel lying on top of a given output pixel, is far too time-consuming.
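To make the geometry concrete, here is a minimal sketch (not the drizzle code itself; the transform parameters are invented for illustration) of how the corners of one input pixel map onto the output grid under a linear shift, rotation and scale change. The mapped quadrilateral typically straddles several output pixels, which is exactly the overlap an exact method would have to integrate over:

```python
import numpy as np

def affine_map(xy, scale=0.8, theta=np.deg2rad(5.0), shift=(0.3, -0.2)):
    """Map input-pixel coordinates onto the output grid via a linear
    shift + rotation + scale change (higher-order distortions omitted)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return (xy @ rot.T) / scale + np.asarray(shift)

# Corners of one input pixel (one "red pixel" of Fig. 1), centred on (10, 10):
corners = np.array([[9.5, 9.5], [10.5, 9.5], [10.5, 10.5], [9.5, 10.5]])
out = affine_map(corners)

# The corners now fall into different output pixels; an exact method would
# have to compute the overlap area of this quadrilateral with each of them.
print(np.floor(out).astype(int))
```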
A different approach is chosen instead, which boils down to the choice of a so-called resampling kernel. There are simple kernels (corresponding to e.g. bilinear interpolation or nearest neighbour) and more sophisticated ones that are more time-consuming to evaluate. The latter usually have better noise properties and do not artificially degrade the image seeing. The difference between a good and a bad kernel can be seen in the following picture:
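A small 1-D sketch illustrates the difference between a simple and a more sophisticated kernel. The signal, shift and kernel choices below are my own illustration, not taken from any particular pipeline: a bilinear (triangle) kernel and a Lanczos-3 kernel (one of the kernels SWarp offers) resample a smooth signal at a half-pixel shift, and the Lanczos kernel reproduces it far more faithfully:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, zero outside."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def bilinear(x):
    """Triangle kernel, equivalent to linear interpolation in 1-D."""
    return np.maximum(0.0, 1.0 - np.abs(np.asarray(x, dtype=float)))

def resample_1d(samples, positions, kernel):
    """Kernel-weighted sum of `samples` evaluated at fractional positions."""
    idx = np.arange(len(samples))
    return np.array([np.sum(kernel(idx - p) * samples) for p in positions])

# A smooth test signal, resampled at a half-pixel shift away from the edges:
n = 64
signal = np.cos(2 * np.pi * np.arange(n) / 16.0)
pos = np.arange(8, n - 8) - 0.5
truth = np.cos(2 * np.pi * pos / 16.0)

err_bilinear = np.max(np.abs(resample_1d(signal, pos, bilinear) - truth))
err_lanczos = np.max(np.abs(resample_1d(signal, pos, lanczos) - truth))
print(f"bilinear max error: {err_bilinear:.4f}, Lanczos-3: {err_lanczos:.4f}")
```

The triangle kernel averages neighbouring samples and so systematically smears sharp features (this is the artificial seeing degradation mentioned above), while the wider Lanczos kernel preserves them at the cost of touching more pixels per output sample.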
Fig. 2: Direct comparison between an advanced kernel and a primitive one (which is still better than bilinear interpolation). The image on the left is 11% sharper (stars are 0.6" more compact) than the image on the right.
For more information about resampling I recommend Emmanuel Bertin's SWarp manual.