IFS is the term originally devised by Michael Barnsley and Steven Demko [1] for a collection of contraction mappings over a complete metric space, typically compact subsets of R^n. Systems of contraction mappings had been considered previously by a number of authors for various purposes. But the landmark papers of John Hutchinson [2] and, independently, Barnsley and Demko [1] showed how such systems of mappings with associated probabilities could be used to construct fractal sets and measures: the former from a geometric measure theory setting and the latter from a probabilistic setting. For those who are not familiar with IFS, we'll provide a simple discussion and example a little later in this section.
Just as important, however, was the fact that the Barnsley/Demko paper was the first to suggest that IFS could be used to approximate natural objects. This was the seed of the inverse problem of fractal approximation: Given a "target" set S (for example, a leaf), can we find an IFS with attractor A (see below) that approximates S to a reasonable degree?
In a subsequent paper, Barnsley and students [3] showed how the inverse problem of fractal approximation could be reformulated by means of the famous "Collage Theorem": instead of trying to find an IFS whose attractor A would match the target S (a very tedious and difficult problem), one looks for an IFS that maps S as close as possible to itself. More on this later.
After [3], the next natural question was: "Can we use IFS to approximate images?" The seminal paper [4] by A. Jacquin, then a Ph.D. student of Barnsley at Georgia Tech, provided the basis of block-based fractal image coding, which is still used today. Jacquin's paper launched intensive activity in fractal image compression [5-7]. Indeed, in the meantime, Barnsley had left Georgia Tech to found the company Iterated Systems, which was dedicated to the commercialization of fractal image compression.
In fractal image coding, one again tries to approximate a "target" image y, this time with the fixed point x' of a contractive fractal transform T. In general, however, this direct inverse problem is difficult because of the complex nature of the fractal transform, which maps modified subimages onto other regions. A great simplification is afforded by the Collage Theorem of Ref. [3] above. In collage coding, one looks for a transform T that minimizes the so-called collage distance d(y, Ty). Most, if not all, fractal image coding methods rely on collage coding.
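The justification is a simple consequence of Banach's theorem (stated here in its standard form): if T is contractive with contraction factor c < 1 and fixed point x', then for any image y, d(y, x') <= (1/(1-c)) d(y, Ty). In other words, if the collage distance d(y, Ty) is small, then y is guaranteed to be close to the fixed point x' that the decoder will actually reconstruct by iterating T.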
Very briefly, block-based fractal image coding is a "local IFS" procedure. Given an image I, one typically seeks to approximate its subimages on n x n range subblocks R_i with contracted (i.e., decimated) and greyscale-modified subimages on 2n x 2n domain subblocks D_j. For each range block, one searches for the best domain block from a domain pool. The larger the domain pool, the better the opportunity for a good approximation, but at the expense of longer search times.
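To make this concrete, here is a minimal sketch of block-based collage coding in Python (using NumPy). It assumes a square greyscale image whose dimensions are divisible by 2n; the function names, the non-overlapping domain pool and the affine greyscale map alpha*d + beta are illustrative choices, not Jacquin's exact scheme, which also includes block rotations/flips and quantization of the coefficients.

    import numpy as np

    def decimate(block):
        # Shrink a 2n x 2n domain block to n x n by averaging 2 x 2 cells.
        return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2] +
                       block[0::2, 1::2] + block[1::2, 1::2])

    def best_affine_fit(r, d):
        # Least-squares alpha, beta minimizing ||alpha*d + beta - r||.
        d0 = d - d.mean()
        r0 = r - r.mean()
        denom = (d0 * d0).sum()
        alpha = 0.0 if denom == 0.0 else (d0 * r0).sum() / denom
        # Clamp |alpha| < 1 so that the resulting transform is contractive.
        alpha = max(-0.9, min(0.9, alpha))
        beta = r.mean() - alpha * d.mean()
        return alpha, beta

    def collage_code(image, n=8):
        # For each n x n range block, find the domain block and greyscale
        # map (alpha, beta) minimizing the collage distance; return the code.
        h, w = image.shape
        # Domain pool: all non-overlapping 2n x 2n blocks (a small pool).
        domains = [(i, j, decimate(image[i:i + 2*n, j:j + 2*n]))
                   for i in range(0, h - 2*n + 1, 2*n)
                   for j in range(0, w - 2*n + 1, 2*n)]
        code = []
        for i in range(0, h, n):
            for j in range(0, w, n):
                r = image[i:i + n, j:j + n]
                best = None
                for di, dj, d in domains:
                    alpha, beta = best_affine_fit(r, d)
                    err = ((alpha * d + beta - r) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, di, dj, alpha, beta)
                code.append(((i, j), best[1:]))
        return code

Decoding then consists of iterating the stored transform on any starting image; by the Banach theorem discussed below, the iterates converge to the fixed point x', which approximates the original image.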
Below is an example of fractal image coding as applied to the infamous Lena image:
Fractal compression methods were actually quite competitive until the appearance of powerful wavelet-based methods and, subsequently, context-based coders. Nevertheless, it has always been the philosophy of the Waterloo research programme that fractal-based methods are interesting mathematically and that they may also be able to provide useful information about images. We are indeed finding that there is much "life after compression".
Suppose that w : X -> X is a contraction mapping of a complete metric space (X,d) to itself. Then, by the celebrated Banach Fixed Point Theorem, there is a unique point x' in X such that w(x') = x', the fixed point of w. Moreover, for any starting point x_0 in X, the sequence of points {x_n} defined by the iteration procedure x_{n+1} = w(x_n) converges to x' as n -> infinity. In other words, x' is an attractive fixed point.
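As a toy illustration (the particular map here is chosen only for concreteness):

    # The contraction w(x) = 0.5*x + 1 on the real line has the unique
    # fixed point x' = 2, since w(2) = 2.
    def w(x):
        return 0.5 * x + 1.0

    x = 17.0                 # any starting point x_0 will do
    for _ in range(60):
        x = w(x)             # x_{n+1} = w(x_n)
    print(x)                 # 2.0, to machine precision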
Now suppose that we have more than one contraction mapping on X, i.e., w_i : X -> X, i = 1, ..., N. Each of these maps w_i will have its own fixed point x'_i. And, of course, if we apply only one of these maps repeatedly, say w_P, then the iteration sequence {x_n} will converge to its fixed point x'_P. But what happens if we pick the maps at random? Clearly, each map will be fighting to bring the sequence to its own fixed point!
Assuming that we pick each map w_i with a nonzero probability p_i (sum of p_i = 1), the sequence will eventually approach a unique set, the "attractor" A of the "iterated function system" W = {w_1, ..., w_N}, performing a random walk over it (or at least getting nearer and nearer to it). The probabilities play a role in determining how often various regions of the attractor are visited.
An example on [0,1]:
w_1(x) = x/3, w_2(x) = x/3 + 2/3. Note that w_1(0) = 0 and w_2(1) = 1. The attractor of this IFS is the classical Cantor set on [0,1].
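Here is a minimal "chaos game" sketch of the random iteration described above, applied to this Cantor set IFS (the equal probabilities are an arbitrary illustrative choice):

    import random

    w = [lambda x: x / 3.0,
         lambda x: x / 3.0 + 2.0 / 3.0]
    x = random.random()            # arbitrary seed point x_0 in [0,1]
    points = []
    for _ in range(10000):
        x = random.choice(w)(x)    # apply a randomly chosen map
        points.append(x)
    # After the very first iterate, every point avoids the open middle
    # third (1/3, 2/3), and the orbit settles onto the Cantor set.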
An example on [0,1] x [0,1]:
Note: Let us go back to Franklin Mendivil's beautiful montage of photos presented above. Starting at the upper left photo (which could be considered the "seed" set), if you scan left-to-right and then move down and scan left-to-right again, you have the first six "sets" that are approaching this "rectangular" Sierpinski gasket.
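The maps themselves are not listed above, but one standard IFS whose attractor is such a gasket (offered here as a plausible reconstruction) consists of the three maps w_1(x,y) = (x/2, y/2), w_2(x,y) = (x/2 + 1/2, y/2) and w_3(x,y) = (x/2, y/2 + 1/2), which fix the corners (0,0), (1,0) and (0,1), respectively.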
For a little more discussion on this subject, along with a quite readable introduction to fractal image coding, you may wish to look at E.R. Vrscay's Hitchhiker's Guide to Fractal Image Compression. It was written for a general audience with some undergraduate mathematical knowledge. Ref. [8] below is also a very nice and readable discussion of fractal image coding.