I'm working on making my gradient fills more portable across platforms; this effect isn't actually caused by a bug, just unfinished code. Although I can tell you that the patterns are an example of the moiré effect, I can't explain why it results in lots of small repeating circles (and no other shapes).
Many years ago I was fooling around with Photoshop when I noticed that a particular combination of filter and blend amounted to a quick and easy photo enhancement technique, which at the time I christened the "Make Pretty" filter. I have never seen a portrait photo which couldn’t be substantially improved using this technique (especially if it was taken using a flash).
The photo below was taken at a dinner a couple of weeks ago, and before processing is typical of the sort of photo that I hate because it makes me look like a pasty git. After processing I think maybe it’s worth keeping, especially because it is almost impossible to find a photo of me smiling.
Three easy steps to making [white] people look better using Photoshop:
Adjust levels and move the mid-point to lighten the shadow areas (in over-exposed shots you may need to darken rather than lighten).
Apply a Gaussian blur with a radius such that facial features are still discernible but small details (eg a zit or greasy highlight) are smoothed out. Depending on image size this could be anywhere from 3 to 16 pixels.
Use Edit->Fade, set the blend mode to Overlay, and set the opacity to somewhere between 40% and 70%.
The actual numbers will vary from image to image, but the process is fairly straightforward after you've experimented a bit. The net result seems to be that the blurred version combines with the original in such a way as to bring out facial structure and color while de-emphasizing surface detail. The above sample was processed with a blur of 3-pixel radius and a 70% overlay blend. I also cheated a little and desaturated my teeth in this case, but that step isn't essential to achieve an improvement.
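For anyone who'd rather script it than click through Photoshop, the steps above can be sketched in a few lines (a rough sketch using numpy; the box blur is a crude stand-in for Photoshop's Gaussian blur, and the 0–255 handling is my own assumption, not anything Photoshop-specific):

```python
import numpy as np

def box_blur(a, r):
    """Crude box blur standing in for Photoshop's Gaussian blur (step 2)."""
    out = a.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for d in range(-r, r + 1):
            acc += np.roll(out, d, axis=axis)
        out = acc / (2 * r + 1)
    return out

def make_pretty(img, blur_radius=3, fade=0.7):
    """Steps 2 and 3: blur, overlay-blend onto the original, then fade."""
    a = img.astype(np.float64) / 255.0          # original, scaled to 0..1
    b = box_blur(a, blur_radius)
    # Overlay blend of the blurred copy over the original:
    # multiply-like in the shadows, screen-like in the highlights
    overlay = np.where(a < 0.5, 2 * a * b, 1 - 2 * (1 - a) * (1 - b))
    out = a + fade * (overlay - a)              # Edit->Fade at given opacity
    return (np.clip(out, 0, 1) * 255 + 0.5).astype(np.uint8)
```

Step 1 (the levels tweak) is left out since it is really a judgment call per image.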
At values of 70% or higher for step 3 you will see significant saturation of colors, to the point where you might want to reapply the original colors (keeping the luminosity from the new image).
Hmmm I think I’ll avoid posting any more pictures of myself for a while, since looking back over recent posts I’m starting to seem a little obsessive over my appearance…
BTW I’m not planning to make a habit of OSX’ing my images– I just like experimenting in Photoshop to reproduce various effects.
Anyone who has dabbled in Photoshop has probably wondered at some time what the hell all the blending modes actually do. With this in mind I created the following swatches some time ago, to help me select the most appropriate modes for creating mockups of shading and lighting effects.
The samples here illustrate the result of blending the following two images (disregarding the color specific blends for now).
What I find striking about the results is that many of them have discontinuities, which seems a little crappy to me. Eg Overlay and Hard Light both appear to be combinations of Multiply and Screen simply stuck together.
Overlay: if (A < 0.5) 2*A*B, else 1 - 2*(1-A)*(1-B)
Soft Light: ???
Hard Light: if (B < 0.5) 2*A*B, else 1 - 2*(1-A)*(1-B)
Color Dodge: A/(1-B)
Color Burn: 1-(1-A)/B
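For the curious, those formulas translate directly into code. This is only a sketch: which of A and B is the lower layer is my assumption (Photoshop's own convention may differ), values are assumed to be in 0..1, and the small epsilons guarding the divisions are mine:

```python
import numpy as np

def multiply(A, B):
    return A * B

def screen(A, B):
    return 1 - (1 - A) * (1 - B)

def overlay(A, B):
    # Switches between the two formulas at A == 0.5
    return np.where(A < 0.5, 2 * A * B, 1 - 2 * (1 - A) * (1 - B))

def hard_light(A, B):
    # Same two pieces as overlay, but switched on B instead
    return np.where(B < 0.5, 2 * A * B, 1 - 2 * (1 - A) * (1 - B))

def color_dodge(A, B):
    return np.clip(A / (1 - B + 1e-9), 0, 1)   # guard against B == 1

def color_burn(A, B):
    return np.clip(1 - (1 - A) / (B + 1e-9), 0, 1)  # guard against B == 0
```

Note that at the switch-over point the two halves actually meet (at A = 0.5 both branches of overlay give B), so the "stuck together" effect shows up as a kink rather than a jump.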
I don’t normally post feedback without permission, but I got a real kick out of this, and I’m assuming the person who sent it won’t mind me posting an extract here.
… I would love to try Drivey, but I use a Mac. And I have a feeling, as simple as Drivey may (or may not) be, it isn’t based on OpenGL or anything easily ported to that platform. But I love it anyway. It is an imaginary love, since I have no idea how it actually is, other than the screenshots; but I project a lot of wishful thinking on Drivey …
Best feedback in ages :) I think I need to make some time to work on Drivey again…
These are based on a physics-based 2D platformer I started fiddling with ages ago… not sure if it will come to anything, but I thought I might as well post them here since I went to the trouble of making them.
(The backgrounds and trees are photoshopped in– the wheelchair guy and curvy ground are all the engine currently outputs)
I have been doing a teeny bit of work on a new version of Drivey, but it's coming along a little slowly because I am in the process of porting it to C++, which really means I'm rewriting it, since the original sort of evolved in a very organic way and is written in very messy JujuScript.
I’m not working on it right this second because a) It is very late, and b) I have nasty stomach cramps [probably because I've consumed only toast, chocolate and coffee today – although it could also be related to the enormous amount of leftover pasta I ate last night]
I’d go to bed, but aforementioned cramps are bugging me too much, so instead I will blog – something I had only just privately resolved to do a little less of [in favour of more productive activities]. And not only will I blog, but I will blog about unrelated subjects within the same post…
I use an LCD monitor, and therefore it makes sense for me to enable ClearType*. But for the past fortnight I have been using a CRT [while houseminding] and was surprised to find that even when using a CRT, I have come to prefer the appearance of ClearType over that of the "Standard" font renderer [on WindowsXP].
If you are using WinXP, and your fonts currently look like the first example but you would prefer the second, you can change your preference by opening ControlPanel->Display->Appearance->Effects and choosing ClearType for font smoothing.
* Link to previous post: LCD, ClearType™, Tahoma and MS Sans Serif
A minor issue with blogs is that you can often get multiple search results for the same terms, because those terms are repeated on the front page, permalink pages, monthly archives etc. At time of writing a Google search for "lose the horrible ripple" returns 4 results on intepid, and although Google is obviously very smart and seems capable of guessing that the permalink version is the "best" one, I'd really prefer it if that was the only result returned [Yahoo seems less smart, returning links to the front page for such searches].
To this end I am going to ask search engines not to index any pages on intepid.com except for permalinks, by adding the following to the headers of all non-permalink pages:
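Based on the behaviour described below (don't index this page, but do follow its links), the tag in question would presumably be the standard robots meta element, something like:

```html
<meta name="robots" content="noindex,follow">
```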
In theory this should mean that the page will not be indexed but the links will still be followed, so Google [and others] should still be able to find their way to the permalink pages. Should be interesting to see how quickly this works [if it works].
Some time ago, in my first ever blog entry, I mentioned that I was interested in the concept of hexagonal pixels:
Exploring utterly frivolous hexagonal pixel rendering technique… virtually no practical value in it but it’s just sticking in my head and I’m finding myself wishing that hardware could support hexagonal pixels and triangular shading rather than the more conventional square bilinear approach.
Now, after more than two-and-a-half years, I finally got around to trying out the technique on some real images!
[ NOTE: in an attempt to avoid ambiguity I will herein use the term texel to refer to a color sample from a regular source image, or texmap, and the term hexel to refer to a color sample from a hexmap, an image specially prepared so that its samples are arranged in a honeycomb pattern. ]
The following two images represent the two most common methods for displaying, zooming and resizing a standard bitmap. To people accustomed to using image editing software or playing 3D games, the visual characteristics should be very familiar.
The first is your classic unfiltered or nearest-neighbour approach, where each texel shows up as a square. It’s cheap, common and ugly, and generally what texture mapping looked like in the good ol’ days of software based 3D engines.
The second uses standard bilinear filtering, a technology now available even on the cheapest graphics hardware, but still relatively expensive to implement in software [which is why bitmaps in flash animations often look more like the first image than the second].
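For reference, the core of bilinear filtering is just a weighted blend of the four texels surrounding the sample point. A minimal sketch (greyscale values in a 2D list, no edge clamping; not the actual demo code):

```python
def bilinear_sample(img, x, y):
    """Blend the four texels around (x, y) by their fractional distances."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0          # fractional position within the cell
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

It is those four multiplies per sample (times three or four channels) that make it comparatively expensive to do in software.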
Hexagonal texels, or hexels
Now compare with the following images, which were created quite differently, using a specially prepared hexmap [instead of a regular texmap]:
Instead of appearing as squares on a grid, the unfiltered version now consists of tiny hexagons laid out in a honeycomb arrangement.
The difference between filtered texels and filtered hexels is a little more subtle; instead of a soft stripey appearance we get a kind of dotty look, as though the image is being viewed through a bumpy screen. Although there is an extra sharpness there, there seems to be some additional noise as well.
The difference between a texmap and a hexmap is that a texmap’s values are taken from points on a regular square grid, whereas a hexmap’s values are taken from points on a triangular grid – Note here the relationship between the underlying grid and the honeycomb arrangement of the hexels themselves.
When displaying an unfiltered hexmap, the nearest-neighbour approach is used, whereby the color of a destination pixel is determined by the nearest hexel only. To display a filtered [or smoothed] hexmap, the three surrounding [nearest] hexel values are interpolated [equivalent to Gouraud shading].
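The nearest-hexel lookup can be sketched quite compactly: on a triangular lattice where odd rows are shifted half a cell and rows are sqrt(3)/2 apart, only a handful of candidate lattice points need checking. This is my own sketch of the idea, not the demo's actual code (the lattice spacing convention is an assumption):

```python
import math

def nearest_hexel(x, y):
    """Nearest sample point (col, row) on a triangular lattice with
    rows sqrt(3)/2 apart and odd rows offset by half a cell."""
    row_h = math.sqrt(3) / 2
    best, best_d2 = None, float("inf")
    r0 = int(y / row_h)
    for r in (r0, r0 + 1):               # the two rows bracketing y
        off = 0.5 * (r & 1)              # odd rows shifted half a cell
        c0 = math.floor(x - off)
        for c in (c0, c0 + 1):           # the two columns bracketing x
            px, py = c + off, r * row_h
            d2 = (x - px) ** 2 + (y - py) ** 2
            if d2 < best_d2:
                best, best_d2 = (c, r), d2
    return best
```

The filtered version would instead take the three nearest lattice points and interpolate their values by barycentric weight, exactly as in Gouraud shading.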
The reason I wanted to try this is that in the real world small objects tend to pack most efficiently in a honeycomb formation [rather than a grid], and I wanted to know if a similar sort of effect could be observed when using such a layout for image storage. The result was pretty close to what I expected, in that the technique seems to lend itself best to organic shapes, whereas sharp, straight edges tend to acquire a slightly dotty appearance.
[ Perhaps D&D players will have an instinctive understanding as to why this is the case ;) ]
Note that the two very different maps were created to be [as close as possible to] the same overall size. The dimensions of the regular bitmap are 128×128, whereas the dimensions of the hexmap are 120×136.
Whether or not I will use these methods for anything beyond experimentation I don’t know yet, but I’m quite pleased with the results so far. It’s possible that current generation 3D hardware could be used to render hexmaps at super speeds, but [ until I get around to investigating programmable shaders ] I can’t say for sure. If it can be done without a performance hit, it might be worth doing for the novelty factor.
And it’s just nice to try something different now and again…
You can download a 430K demo [Win32 only] of filtered and unfiltered hexmap rendering, which also allows you to toggle between regular texture-mapping and hexmapping.
NOTE: I am not claiming to have invented the concept of hexels, nor to be the first to implement it – I do this kind of stuff because it is fun and/or challenging, and I don’t mind at all if that means I spend a lot of my time reinventing the wheel… I’m used to it ;)
More code clean-up stuff, now looking at bitmap filtering. Once again, this tech is not exactly new, but I have yet to put it to good use…
I’ve created a little demo app for download [Win32] which quite effectively demonstrates the different ways a bitmap can be filtered, including a novel method which allows you to keep the pixels but lose the horrible ripple effect you often see in Flash animations [run demo to see what I'm talking about].
Here’s a previous article describing the concept in more detail.
[ new test shots from Drivey, a project which is not yet dead ]
Linear + Radial gradient fills.
Not as fast as flat shading but not too slow either – and I haven’t yet knuckled down and coded them in MMX [or any of the several other MultiMedia extension type instruction sets]. Really I am just tidying up a bit of code and combining stuff I’ve been tinkering with for years [path renderer + simple software shaders]. I’d love to come up with a way to combine path rendering with current generation hardware pixel shaders… maybe I should check out the DX9 shader language sometime and see what’s possible.
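For the record, the per-pixel work in both fill types boils down to computing a parameter t in [0,1] and using it to index a colour ramp. A sketch of the two t calculations (nothing to do with the actual Drivey code, which I haven't published):

```python
import math

def linear_gradient_t(x, y, x0, y0, x1, y1):
    """t along the gradient axis: project (x, y) onto it and clamp."""
    dx, dy = x1 - x0, y1 - y0
    t = ((x - x0) * dx + (y - y0) * dy) / (dx * dx + dy * dy)
    return max(0.0, min(1.0, t))

def radial_gradient_t(x, y, cx, cy, radius):
    """t as clamped distance from the centre."""
    return min(1.0, math.hypot(x - cx, y - cy) / radius)
```

The linear case is cheap because t changes by a constant amount per pixel along a scanline; the radial case needs the distance (or a clever incremental approximation of it), which is where instruction sets like MMX would start to pay off.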
BTW I’ve decided that [until I change my mind] I will refer to the silhouetted graphical style I am experimenting with as CameoVision.
Last week when I drove up to visit my parents in Murwillumbah (my old home town) I decided to grab some family photos to bring back with me, so that I could scan them in for easier access and in order to preserve them [many of the color ones have already faded quite badly, even those under 20 years old].
It’s a project I’ve been meaning to do for a while, but now that I have what appears to be about 20kg of photos and albums piled up here next to me, the whole task is starting to look just a little bit daunting. I decided to start with my old school photos, but of course have already become distracted from the task at hand, and have been experimenting instead with attempting to recreate the facial expressions worn in some of these photos. It’s a surprisingly entertaining activity, if a little time-consuming.
The original images – details from group shots – of me aged 11, 12 and 16 respectively [dig that broody 16-year-old]:
And here they are again, with me aged 32, 32 and 32 respectively.
The faces were photoshopped in from photos [taken just now] of me mimicking the expressions from the originals. Considering how ridiculously difficult this is to do on your own [trying to get it right without being able to see a side-by-side comparison] I think it worked pretty well!
Hmmm… I’m looking for a new hairstyle, maybe I should revisit that “Helmet-Head” haircut…
Also, looking back though family and school photos in general makes me think: I really should smile more. It’s really annoying seeing that dead expression all the time, but somewhere along the way I developed an irrational fear of smiling in front of a camera, and I’ve never really gotten over it. I just hope I still smile in Real Life…
The distinctive 3D paint program ZBrush has come a long way since I last saw it. I recommend watching the Angler Fish Video, which demonstrates how an astoundingly complex and detailed model can be created from scratch!
This is like the modelling software I used to dream about! [when I used to dream about software... sad, I know...]
Am quite inspired by some of [renowned design guru] Edward Tufte’s latest writings, namely a beautifully illustrated [preview] chapter called Sparklines: Intense, Simple, Word-Sized Graphics. The basic idea is that information is sometimes easier to absorb when it is compressed into a scale and density comparable with text, rather than spread out over a page with unnecessary borders, shadows etc. PHP has image handling extensions which make it fairly easy to generate such images on the fly, so I have started tinkering a bit [as have others].
The tiny graph charts the sizes of my last 100 posts, and is about as simple a sparkline as you can get. Although it doesn’t provide much in the way of quantitative information, it may be sufficient for certain types of data where patterns or general trends are more significant than the actual values.
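Mine is generated with PHP's image functions, but the idea is small enough to sketch in a few lines of any language – for instance emitting an inline SVG polyline (a stand-alone sketch, not the code I'm actually using; dimensions and styling are arbitrary):

```python
def sparkline_svg(values, width=100, height=14):
    """Emit a word-sized SVG line chart for a list of numbers.
    Needs at least two values; scales them to fill the given box."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                    # avoid /0 on flat data
    step = width / (len(values) - 1)
    pts = " ".join(
        f"{i * step:.1f},{height - (v - lo) / span * height:.1f}"
        for i, v in enumerate(values)
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polyline points="{pts}" fill="none" stroke="gray"/></svg>')
```

The output string can be dropped straight into a page, which keeps the graphic at text scale and density, exactly as Tufte prescribes.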
Compare with the more traditional graphical presentation:
which dominates visually, taking up more than 35 times as much space on the page. Obviously those numbered axes are useful, but Tufte shows that with a few extra details a sparkline can often be quite a powerful conveyor of information.
In another home-made example, the following visually demonstrates [I hope] the strong correlation between post size and the interval between posts:
I like this one because it looks like a reflection, which seems appropriate in this case since it implies that the one data series is reflected in the other. Causation is not so easy to discern, but I suspect that it goes both ways, ie a long post is often left at the top of the page for a while, allowing it to be read properly before being bumped, whereas a long interval enables excessive rumination, which in turn may result in an extra long post.
UPDATE – March 31, 2005: Click here for source code and a simple PHP demo allowing you to build your own sparklines. It’s nothing too fancy, but may be enough to get you tinkering…