Dealing with alpha channels and compositing images today, and realized 2 things:
- Pre-multiplied alpha is not very useful
- Compositing onto an image which already has an alpha channel is much more complicated [and expensive] than compositing onto a fully opaque image
1. Pre-Multiplied Alpha
A white pixel is commonly represented by the RGB triplet [255,255,255], where 255 is the maximum value storable within 8 bits.
A 50% transparent white pixel is usually represented much the same way, except with an added alpha component for opacity: [[255,255,255],128]. (Here 128 means roughly 50% visible, with 0 meaning completely transparent and 255 meaning totally opaque.)
That’s the normal way to represent a pixel with an alpha channel.
The other (less normal) way is called pre-multiplied alpha, and this is where the RGB values in an image are scaled to reflect their opacity. Using this method, a 50% opaque white pixel is represented as [[128,128,128],128].
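For illustration, here's a minimal sketch of the pre-multiplication step, assuming 8-bit channels and round-to-nearest arithmetic; the Pixel struct and function name are just placeholders of mine, not from any particular library.

```c
#include <stdint.h>

/* Illustrative 8-bit RGBA pixel (straight, i.e. non-pre-multiplied, alpha). */
typedef struct { uint8_t r, g, b, a; } Pixel;

/* Convert straight alpha to pre-multiplied alpha: scale each color
   channel by alpha/255, rounding to the nearest 8-bit value. */
Pixel premultiply(Pixel p)
{
    Pixel out;
    out.r = (uint8_t)((p.r * p.a + 127) / 255);
    out.g = (uint8_t)((p.g * p.a + 127) / 255);
    out.b = (uint8_t)((p.b * p.a + 127) / 255);
    out.a = p.a;
    return out;
}
```

Feeding it [[255,255,255],128] gives back [[128,128,128],128], matching the example above.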
The good thing about pre-multiplying is that if the alpha channel is discarded or ignored, the color values will still look OK (as long as I want to see my image as it would appear on a black background).
The really bad thing about pre-multiplying is that color resolution is reduced as opacity decreases (assuming we are using 24-bit color). This can be illustrated by examining a reversal process of the sort which might occur in any number of image processing applications:
- Starting with a light blue pixel at 100% opacity: [[240,240,250],255].
- Reduce it to 1% opacity, resulting in the tuple [[2,2,3],3].
- Increase opacity back up to 100%. To do this we have to multiply each component by 255/3, giving us [[170,170,255],255].
Note that the final color is very different from the one we started with! This would not have happened if we had just left the color components alone while messing with the alpha.
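For the curious, here's a rough sketch of that round trip, assuming 8-bit channels and round-to-nearest arithmetic (the exact intermediate values depend on the rounding scheme used, but the recovered color ends up far from the original either way):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t rgb[3] = {240, 240, 250};   /* light blue, 100% opaque */
    double  frac   = 0.01;              /* target opacity: 1% */

    /* Reduce to 1% opacity, scaling (pre-multiplying) the color channels. */
    uint8_t alpha = (uint8_t)(255 * frac + 0.5);            /* -> 3 */
    uint8_t low[3];
    for (int i = 0; i < 3; i++)
        low[i] = (uint8_t)(rgb[i] * frac + 0.5);            /* -> [2,2,3] */

    /* Increase opacity back to 100%: multiply each channel by 255/alpha. */
    for (int i = 0; i < 3; i++) {
        int v = (int)(low[i] * 255.0 / alpha + 0.5);
        rgb[i] = (uint8_t)(v > 255 ? 255 : v);              /* -> [170,170,255] */
    }

    printf("recovered: [%d,%d,%d]\n", rgb[0], rgb[1], rgb[2]);
    return 0;
}
```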
Related Note: The TGA library I have been using was confusing me, because even though I knew the TGA files I was testing with were definitely NOT pre-multiplied, I found that after loading a TGA file they suddenly seemed to become so. A bit of digging in the code revealed that, for some strange reason, the library was assuming I would actually want my images with pre-multiplied alpha, and so was doing the multiplication for me after the image was loaded. What's really unusual is that this wasn't even configurable as an option; it was just hard-coded, as if to say "who the hell wouldn't want pre-multiplied alpha?"
2. Compositing with Destination Alpha
This was a surprisingly tricky one, and something I'd never needed to do before. Many people will be familiar with the simple method of alpha blending using only source alpha:
Color = ColorSrc * AlphaSrc + ColorDst * (1-AlphaSrc)
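As a minimal sketch (not any particular library's API), that blend might look like this, working on normalized floating-point channels (0.0–1.0) for clarity:

```c
/* Illustrative normalized-float RGBA pixel. */
typedef struct { float r, g, b, a; } FPixel;

/* Source-alpha-only blend: assumes the destination is fully opaque. */
FPixel blend_src_alpha(FPixel src, FPixel dst)
{
    FPixel out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = 1.0f;   /* destination was opaque, so the result stays opaque */
    return out;
}
```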
This works great as long as your destination pixels have no alpha channel (i.e. are already 100% opaque). If you do have destination alpha [and therefore require an alpha result as well], the formula suddenly gets a lot uglier:
Alpha = 1 - (1-AlphaSrc) * (1-AlphaDst)
Color = (ColorSrc*AlphaSrc + ColorDst*AlphaDst*(1-AlphaSrc)) / Alpha
Note the division by Alpha required for the color components! Note also that the special case where Alpha is zero cannot yield a Color value, and must be handled separately in order to avoid a divide-by-zero error. Division is a terrible thing to have to perform on a per-pixel basis, and graphics programmers have learned to avoid it wherever possible. In this particular case, however, I don't think it can be avoided without resorting to some method of rough approximation. Please someone correct me if I've missed anything here…
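For completeness, here's a sketch of the destination-alpha version, reusing the FPixel struct from the sketch above and handling the zero-alpha case explicitly (again, just an illustration, not a definitive implementation):

```c
/* Blend with destination alpha, producing straight (non-pre-multiplied) output. */
FPixel blend_dst_alpha(FPixel src, FPixel dst)
{
    FPixel out;
    out.a = 1.0f - (1.0f - src.a) * (1.0f - dst.a);

    if (out.a == 0.0f) {
        /* Both inputs fully transparent: the color is undefined,
           so pick something harmless rather than dividing by zero. */
        out.r = out.g = out.b = 0.0f;
        return out;
    }

    out.r = (src.r * src.a + dst.r * dst.a * (1.0f - src.a)) / out.a;
    out.g = (src.g * src.a + dst.g * dst.a * (1.0f - src.a)) / out.a;
    out.b = (src.b * src.a + dst.b * dst.a * (1.0f - src.a)) / out.a;
    return out;
}
```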