Technical Stuff
Broadly speaking, the program attempts to compare every region (or pixel) of one image against every region of another image and find matches. Done pixel-by-pixel, this would be an immense performance challenge and probably wouldn't produce the results we want anyway.
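To see why the naive approach is hopeless, here's a minimal sketch (not the real code) of exhaustive per-pixel matching. With two 1600×1067 images, that's roughly 1.7 million pixels on each side, so on the order of trillions of color comparisons:

```python
import numpy as np

def brute_force_match(source, target):
    """Naive per-pixel matching: for every target pixel, scan every
    source pixel for the closest color. O(N * M) comparisons, which
    is hopeless at photo resolutions (illustration only)."""
    src = source.reshape(-1, 3).astype(float)   # N source pixels
    tgt = target.reshape(-1, 3).astype(float)   # M target pixels
    matches = np.empty(len(tgt), dtype=int)
    for i, px in enumerate(tgt):                # M iterations...
        # ...each scanning all N source pixels
        matches[i] = np.argmin(np.sum((src - px) ** 2, axis=1))
    return matches
```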
If we divide the image into "posterized" unblended regions, the result might still be cool, but I was seeking blended regions. A lot of trial and error went into an algorithm I like well enough, though I usually still find myself in Photoshop adding final polish and toning down some of the artifacts.
The color matching itself (as mentioned earlier) is probabilistic in nature. Hue, saturation, and lightness (HSL), location, border region, and other elements all compete for "the match" based on user-defined parameters, and the results are blended together to produce the final output.
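Here's a rough sketch of how that competition could work. The region fields, weight names, and temperature parameter are all hypothetical stand-ins for the real parameters, but the shape of the idea is the same: each property contributes a weighted cost, and selection is probabilistic rather than winner-takes-all:

```python
import numpy as np

def hue_distance(h1, h2):
    """Hue is circular (0-360 degrees), so measure the shorter arc."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def score(target, cand, weights):
    """Weighted cost of matching one candidate region to the target.
    Lower is better; the weights are the user-defined parameters."""
    return (weights["hue"]   * hue_distance(target["hue"], cand["hue"])
          + weights["sat"]   * abs(target["sat"]   - cand["sat"])
          + weights["light"] * abs(target["light"] - cand["light"])
          + weights["loc"]   * np.hypot(target["x"] - cand["x"],
                                        target["y"] - cand["y"]))

def pick_match(target, candidates, weights, temperature=10.0, rng=None):
    """Probabilistic competition: better-scoring regions win more often,
    but any region can win, which keeps the output from looking rigid."""
    rng = rng or np.random.default_rng()
    scores = np.array([score(target, c, weights) for c in candidates])
    p = np.exp(-(scores - scores.min()) / temperature)
    return candidates[rng.choice(len(candidates), p=p / p.sum())]
```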
I tried many different ways to both structure and blend the regions. The segmentation itself is a heavily modified version of the flood-fill algorithm. As mentioned in the segmentation section, I settled on using noise functions to guide the process, producing some irregularity in the segments and in the way they blend together.
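A minimal sketch of a noise-guided flood fill, assuming a simple color-distance tolerance and a crude smoothed-random field standing in for a proper noise function:

```python
import numpy as np

def noisy_flood_fill(image, seed, base_tol=12.0, noise_amp=6.0, rng=None):
    """Flood fill whose acceptance threshold is perturbed per-pixel by a
    noise field, so segment borders grow irregularly instead of tracing
    clean iso-color contours. A sketch of the idea, not the real code."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Cheap stand-in for a real noise function: a blocky smoothed field.
    coarse = rng.random((h // 8 + 1, w // 8 + 1))
    noise = np.kron(coarse, np.ones((8, 8)))[:h, :w]

    seed_color = image[seed].astype(float)
    visited = np.zeros((h, w), dtype=bool)
    segment = []
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if visited[y, x]:
            continue
        visited[y, x] = True
        dist = np.linalg.norm(image[y, x].astype(float) - seed_color)
        # The tolerance wobbles with the noise field, carving
        # irregular edges into the segment boundary.
        if dist > base_tol + noise_amp * (noise[y, x] - 0.5):
            continue
        segment.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                stack.append((ny, nx))
    return segment
```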
For a long time, I thought the splotches often seen were a fault of the region blending. However, when blending with the original color of the target image, everything looked perfect. Better blending algorithms could improve this, but the lower-level problem is asking the color matching algorithm to do things that don't make sense: when two hues are 180° apart, both directions around the color wheel are equally short, so tiny per-pixel variations flip the direction and the result ends up random and splotchy.
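A tiny sketch of the mechanism, using ordinary shortest-arc hue interpolation:

```python
def lerp_hue(h1, h2, t):
    """Interpolate along the shorter arc of the color wheel. Near the
    180-degree point, both arcs are almost equally short, so a tiny
    change in input flips the direction of travel entirely."""
    d = ((h2 - h1 + 180.0) % 360.0) - 180.0   # signed shortest difference
    return (h1 + t * d) % 360.0
```

For example, `lerp_hue(0.0, 179.9, 0.5)` gives about 90, while `lerp_hue(0.0, 180.1, 0.5)` gives about 270: adjacent pixels with nearly identical hues can land on opposite sides of the wheel, which reads as splotches.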
Initially I started with a naive HSV implementation and struggled with saturated dark and light shades (they look unsaturated until they're brightened or darkened). Moving to a more advanced color model and working in CIE L*a*b* produced much better-looking results.
Using a more accurate color comparison model started producing some amazing improvements over simple lightness comparisons. That said, the basic lightness mapping can sometimes be more aesthetically appealing.
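For reference, here's a self-contained sRGB to L*a*b* conversion with the simple ΔE 1976 comparison (Euclidean distance in Lab). Real code might lean on a library such as scikit-image, or a more refined formula like CIEDE2000; this is just the minimal version:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB color (0-255) to CIE L*a*b* (D65 white point)."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma curve.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (D65), then normalize by the white point.
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab.
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,         # L*
                     500 * (f[0] - f[1]),     # a*
                     200 * (f[1] - f[2])])    # b*

def delta_e(rgb1, rgb2):
    """Perceptual color difference (Delta E 1976): Euclidean distance
    in Lab, which tracks what the eye sees far better than distances
    measured in raw RGB or HSV."""
    return np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2))
```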
I'm still very much working on this problem. A simple solution is to oversample or blur a bit, but I'm exploring more intelligent ways to blend regions based on the visual contrast between them (see the section on bordering regions).
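One plausible reading of contrast-based blending, sketched below with hypothetical region fields and reusing `delta_e` from the sketch above: blend wide where the source was smooth, and keep borders crisp where there was a real edge.

```python
def blend_width(region_a, region_b, max_width=8.0):
    """Hypothetical rule: the lower the visual contrast between two
    neighboring regions (Delta E between their mean colors), the wider
    the blend across their shared border."""
    contrast = delta_e(region_a["mean_rgb"], region_b["mean_rgb"])
    return max_width / (1.0 + contrast)   # shrinks as contrast grows
```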
One area I'd really like to work on is making the algorithm resolution-independent (or as close as possible). Right now, if you work at a lower resolution (for performance) and then move up to the full-resolution version, the final results won't match, and you have to fiddle with the settings to get a similar result.
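This isn't how the program works today, but one hypothetical route would be to express every pixel-based setting relative to a reference image diagonal and rescale for the image at hand (all names below are invented for illustration):

```python
import math

def scale_settings(settings, width, height, ref_diagonal=1923.0):
    """Rescale pixel-based settings by the ratio of this image's
    diagonal to a reference diagonal (here, roughly that of a
    1600x1067 image), so a low-res test run and a full-res render
    use proportionally identical parameters."""
    scale = math.hypot(width, height) / ref_diagonal
    pixel_keys = {"min_segment_size", "blend_width", "noise_scale"}  # hypothetical
    return {k: v * scale if k in pixel_keys else v
            for k, v in settings.items()}
```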
One of the more interesting features is video. Currently the temporal coherence is far too jittery. However, certain types of color mapping (location, mainly) can be blended across frames without ghosting. After blending, single-frame elements (light/dark/lines) can be added back in to produce some interesting video.
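A minimal sketch of one way to tame frame-to-frame jitter, assuming frames arrive as arrays: exponentially blend each mapped frame into a running result, then layer the single-frame elements back on top afterwards.

```python
def temporal_blend(frames, alpha=0.3):
    """Exponential moving average over mapped frames. Suits mappings
    (like the location-based one) stable enough not to ghost; lower
    alpha means smoother but laggier output."""
    out, acc = [], None
    for frame in frames:
        f = frame.astype(float)
        acc = f if acc is None else alpha * f + (1 - alpha) * acc
        out.append(acc.astype(frame.dtype))
    return out
```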
Although performance wasn't initially a concern, I was eventually able to make full use of parallel computation for both segmentation and mapping. On a 4-core 4 GHz processor (using all cores), a 1600x1067 image finishes in a few minutes, depending on the settings; a full-resolution SLR image might take 40-60 minutes. This program will never be an Instagram filter without a robust back-end. To some extent, I consider it more like a ray-tracer for 2D images. Certain features (especially Used Up) increase the processing time, and performance also depends on the resolution and segmentation settings of both images.
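Since regions can be matched independently of one another, the work splits cleanly across cores. A sketch of the general shape (the per-region function here is a trivial stand-in, not the real matching code):

```python
from multiprocessing import Pool

def map_region(region):
    """Stand-in for the real per-region matching work; regions are
    independent, which is what makes the problem parallelize cleanly."""
    return region  # real code would return the matched source region

def map_all_regions(regions, workers=4):
    """Fan independent region-mapping jobs out across all cores, as
    the segmentation and mapping stages both do in spirit."""
    with Pool(processes=workers) as pool:
        return pool.map(map_region, regions)
```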
Aside from using all cores, a lot of work went into limiting the NxN nature of the problem by creating sortable indexes over the various mapping properties.
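The gist, sketched with hypothetical region dictionaries: sort the source regions once by a mapping property, then binary-search that index so each target region only scores a handful of nearby candidates instead of all N.

```python
import bisect

def build_index(regions, key):
    """Sort regions once by a mapping property (say, lightness) so
    lookups don't have to scan everything."""
    indexed = sorted(regions, key=lambda r: r[key])
    keys = [r[key] for r in indexed]
    return keys, indexed

def nearby_candidates(keys, indexed, value, k=32):
    """Instead of scoring all N source regions per target (the NxN
    trap), binary-search the sorted index and score only the k nearest
    neighbors along that property."""
    i = bisect.bisect_left(keys, value)
    lo, hi = max(0, i - k // 2), min(len(indexed), i + k // 2)
    return indexed[lo:hi]
```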
Memory-wise, it can be intensive for high-resolution images. At medium resolution (1600px), peak memory usage is around 2 GB, but optimizing memory use hasn't been a focus.
More on performance and resolution issues soon.