Multitouch360 - a Platform for Interactive Art Installations

hand.jpg


Multitouch360 is a project by Johann Korndoerfer and Thorsten Blum that aims to create a hemispherical multitouch display. It has now officially reached some degree of "finished", meaning that all essentials are working, no cheap workarounds remain in either hardware or software, and the software framework is capable of letting anyone code cool stuff.

Watch the short video below:

This is my debug application running on Multitouch360, showing a wire hemisphere along with camera calibration markers and visualizing touch events. As you can see, touch recognition works reasonably well.

How does it work?

camera.jpg

We use a technique called FTIR (Frustrated Total Internal Reflection) to detect touches. This means that an infrared camera looks at the hemisphere from the inside. The picture to the left is what the camera sees - you can see a shadow from the hand and five bright points at the tips of the fingers. Those bright points are created by the users' fingers touching the outer acrylic hemisphere (called the Wave Guide). The Wave Guide is made of clear acrylic and flooded with infrared light from 288 infrared LEDs set in the rim. The IR light can't escape the acrylic except where it is touched by an object with a sufficiently large refractive index, like water or human skin. Thus, the infrared light enters the finger, gets scattered and can be picked up by the camera.


full_dark_captions.JPG

Since you can't project anything onto clear acrylic, we needed an inner layer of matte acrylic (called the Diffusor) just inside the Wave Guide. This layer makes the bright points from the outer layer a bit blurry for the camera, but we would have done the blurring in software anyway. Using a mirror to save some space, we project a rendered image onto the Diffusor, again from below. You can see the whole setup in the image to the right.

The image that is projected onto the sphere gets distorted quite a bit, which has to be anticipated in software. The same applies in the other direction: touch locations have to be transformed back into 3d as well. Both the projector and the camera have to be calibrated, which can be done by clicking five positions with the mouse (calibrating the projector), then touching the same positions on the sphere (calibrating the camera). You can see a visualization of the gradient descent that solves the resulting optimization problem right at the beginning of the video above.

The Software

I made a nice little framework for Multitouch360 that lets you write multitouch applications. The debug application from the video above looks like this:

from multitouch360 import *

class DebugApplication(MultitouchApplication):
    def touch_down(self, touch):
        TouchIndicator(touch)

    def touch_move(self, touch):
        pass

    def touch_up(self, touch):
        Explosion(touch.world_pos)

    def display(self):
        glutWireSphere(1, 30, 30)

DebugApplication()

And this is it. You can define textures, buttons, etc. easily and the weird math you need for handling things on a sphere is taken care of.

What do you do with it?

crowd_und_setup.jpg

First, you can do something with it. If you know some Python and OpenGL, you can code your own application. The Multitouch360 framework will simulate the hemisphere on any computer and you can click on a virtual version of it with your mouse. If you are interested in doing something with it and you are located near Karlsruhe, Germany, please let me know. Keep in mind that anything that runs on Multitouch360 must be tailored to the spherical layout - thinking in 2d will make you wish you had a flat table, but embracing the hemispherical form lets you do some things that are impossible otherwise: planets, star maps, boobs or eyes are some of the things that may come to life. Games that involve not seeing what your opponent is doing on their side are another obvious candidate.

Second, I have made a few very simple demo applications that are hardly more than a hello-world, but they may show what's possible. Watch people play Global Thermonuclear War in the video below, incinerating whole countries by touching them. The only winning move is not to play. (Application written in about three hours.)

Project State

Although there are a lot of other ideas for games or weird interactive installations, I am now starting to pursue other projects. Apart from some maintenance, documentation and potentially helping you code something nice, the project is now finished for me.

The project is also part of my subsidiary subject (Media Art) at HfG Karlsruhe and has kindly been hosted by the Computer Graphics Department of Karlsruhe University. This post is also mirrored on the official Multitouch360 blog, where you can find more pictures of Thorsten and me building the thing.

A 500 Megapixel Buddhabrot

This article describes how to render a ridiculously large 500 Megapixel Buddhabrot with iteration depths in the range of millions. Lisp source included. Scroll down for the final image.


buddhabrot_small_detail.jpg

Why would you want to do this? To have it printed and on your wall, of course (see pictures below).

I will begin by outlining the general idea of the Buddhabrot (see the Wikipedia article for more), then talk about my optimizations and rendering choices.

The Buddhabrot

You typically render a Buddhabrot by choosing a set of starting points in the complex plane (I will call them "samples" from now on), running each sample c through the iteration z_(k+1) = z_k^2 + c with z_0 = 0+0i, and, for those samples that grow very large after some time (i.e. those that are outside the Mandelbrot Set), remembering all the z_k that were visited during the iteration. All those visited points from all the escaping samples form a point cloud in the complex plane that is usually called a Buddhabrot when visualized.


You need a maximum iteration depth n (sometimes called the Dwell Limit) because many samples will be within the Mandelbrot Set and the iteration would run indefinitely for those. Whenever the maximum iteration depth is reached, the sample is considered to be inside the Mandelbrot Set - although it possibly isn't and might have escaped after another 1000 iterations. There is no way to be sure.
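To make this concrete, here is a minimal sketch of the iteration in Common Lisp, with hypothetical names - not the optimized renderer from the source linked below:

;; Escape-time iteration for one sample c = CR + CI*i.
;; Returns the list of visited points if c escapes before
;; MAX-ITER iterations, NIL if the dwell limit is reached.
(defun iterate-sample (cr ci max-iter)
  (let ((zr 0d0) (zi 0d0) (visited '()))
    (dotimes (k max-iter nil)          ; dwell limit reached: assume inside
      (let ((new-zr (+ (- (* zr zr) (* zi zi)) cr))
            (new-zi (+ (* 2d0 zr zi) ci)))
        (setf zr new-zr zi new-zi))
      (push (cons zr zi) visited)
      (when (> (+ (* zr zr) (* zi zi)) 4d0)   ; |z| > 2: the sample escapes
        (return visited)))))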

We now consider only those samples "good" whose iteration escaped before reaching the maximum iteration depth, i.e. those that are definitely outside the Mandelbrot Set. Each good sample contributes a set of points whose size equals the number of iteration steps i that it took for us to find out that it was outside the Mandelbrot Set. Sometimes this is a very small number like 2 or 3, but samples nearer to the border of the Mandelbrot Set take much longer. A sample can take thousands or millions of iterations before escaping, thus leaving a much larger footprint in the resulting point cloud. It seems that for each natural number n there is a sample that takes at least n iteration steps, although for higher values of n it becomes harder and harder to find one.

Deep Iteration

It has been observed by many people that rendering the Buddhabrot like this - using all the escaping samples' visited points - gives a nebula-like image, but that a more interesting image can be made by considering only those samples "interesting" whose iteration time i exceeds some minimum iteration time k, with k < n. For example, you could use k = 1000 and n = 10000, i.e. keep only those samples that do not escape before their 1000th iteration but do escape before their 10000th. If you look at the visited path of one of those more deeply iterated good samples in the complex plane, you will notice a pattern: it forms some kind of orbit, sometimes a circle or a set of circles or other shapes. I have no idea why the orbits are shaped this way and I don't know if anyone else does. Here is a part of the orbit of the sample -0.9114822433916717 + 0.2523940788714053i, for example (containing nearly five million points):

orbit.png

Incidentally, all interesting samples like this one are located very very close to the border of the Mandelbrot Set and make good targets to zoom into with your favourite fractal explorer. Try it.
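Expressed in code, the minimum-iteration filter from above is just a thin wrapper around the iteration sketched earlier (again with hypothetical names):

;; A sample is "interesting" if it escapes, but only after at least
;; MIN-ITER steps (k in the text); MAX-ITER is the dwell limit n.
(defun interesting-sample-p (cr ci min-iter max-iter)
  (let ((visited (iterate-sample cr ci max-iter)))
    (and visited (>= (length visited) min-iter))))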

Since the longer orbits look nice, we will try to find only "interesting" samples that take a long time to finally escape. There have been several approaches to finding them. The most obvious one is of course just choosing a sample randomly and seeing how it goes, which means throwing away something like 99.999% of them. You could also use a regularly spaced grid of samples, which amounts to the same thing. The interesting samples are all located near the Mandelbrot Set's border, so there must be some spatial pattern to their distribution which might help us find them. If you have ever played around with the Set a bit, you probably know that the border is infinitely long and calling it "complex" is a bit of an understatement, but surely we can do better than randomly sampling the plane?

Optimization

Mandelbrot_Set_-_Periodicities_coloured.png

  • Rejecting all samples from within the two biggest parts of the Mandelbrot Set, the "Main Cardioid" and the circular "Period-2-Bulb", is a must-have. This simple check removes a large fraction of the points inside the Set, which are very costly because they iterate until the dwell limit is reached. I just used the formula from the Wikipedia article (see the sketch after this list). The areas removed by this test are shown in red in the image of the Mandelbrot Set above (my output images are rotated clockwise by 90 degrees, don't get confused).
  • People have also used the Metropolis-Hastings Algorithm, treating the mapping c -> i as a probability density function. The method is especially useful for rendering a small detail of the bigger image, but does not work too well for the whole image with very big values of k. I kept finding many interesting samples with it, but their orbits turned out to be very similar, because several good samples that lie very close to each other produce orbits that are almost identical - giving an image of a single blurred orbit. I increased the mutation distance enough to produce sufficiently distinctive orbits, but then the speedup was gone. In the end I didn't use this method. It is probably more useful for rendering the "normal" Buddhabrot or a detail of it without the minimum iteration constraint.
  • To partly remove some of the costly larger areas within the Set (like the non-circular bulbs labeled "3" and up in the image above), I divided the plane into rectangular cells and chose random samples only from those cells that contained some part of the border of the Mandelbrot Set. Depending on the cell grid resolution and the iteration depth, this gave a speedup of 3 to 8.
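For reference, here is what the rejection test from the first bullet looks like as a sketch, using the closed-form formulas from the Wikipedia article:

;; T if c = CR + CI*i lies inside the main cardioid or the period-2 bulb.
;; Samples passing this test never escape, so reject them immediately.
(defun in-main-bulbs-p (cr ci)
  (let* ((x (- cr 0.25d0))
         (q (+ (* x x) (* ci ci))))
    (or (<= (* q (+ q x)) (* 0.25d0 ci ci))                ; main cardioid
        (<= (+ (expt (+ cr 1d0) 2) (* ci ci)) 0.0625d0)))) ; period-2 bulb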


"The"  Buddhabrotbuddhabrot_symmetry.jpg

Since we still choose our samples randomly, every rendering will look different, depending on which orbits the random search finds. There is no "the" Buddhabrot (see Melinda's comment below - rendering for an infinite amount of time yields "the" Buddhabrot). This also means that the image is not symmetric on the small scale, which is a good thing if you ask me. You could of course get a symmetric image of the same quality in half the time by mirroring it, but I think it's worth waiting for the asymmetric one. On the right is a part of the final rendering that exhibits superficial symmetry but is composed of asymmetric orbits. Click it to view this detail in the final resolution.

If I had let my renderer run for much longer (maybe a week or a month), more and more orbits would have shown up, occluding each other and finally blurring the whole thing into the standard nebula-like Buddhabrot. The key is to render just long enough for a sufficient number of orbits to appear, and then stop.

Lisp Implementation

The renderer is written in Common Lisp, which is not your classic number-crunching language. If the goal were to do it as fast as possible, you'd probably end up doing it in C with inline assembly or on the GPU. On the other hand, I had never really optimized any Lisp code to the limit and wanted to do just that: see how far you can get. One long night of low-level optimization at entropia with the help of some friends later, there was only a factor of 2 left compared to the C implementation that one entropian made. Which is quite okay, considering the cumulative speedup of about 2000 compared to my first implementation. Trading sbcl's built-in complex numbers for traditional pairs of floats, declaring all types, working in-place instead of on the stack, considering cache efficiency, and prompting the compiler to use inline arithmetic instead of jumps to costly sbcl functions gained us a factor of about 3 to 5, while the rest was due to the algorithmic improvements.
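To give a flavor of what that kind of optimization looks like, here is a sketch (not the actual renderer code) of a fully declared inner loop; with declarations like these, sbcl can keep the floats unboxed and emit inline arithmetic:

;; Sketch of a fully declared inner loop. Returns the escape time of c,
;; or -1 if the dwell limit MAX-ITER is reached.
(defun escape-time (cr ci max-iter)
  (declare (type double-float cr ci)
           (type fixnum max-iter)
           (optimize (speed 3) (safety 0) (debug 0)))
  (let ((zr 0d0) (zi 0d0))
    (declare (type double-float zr zi))
    (dotimes (k max-iter -1)
      (let ((new-zr (+ (- (* zr zr) (* zi zi)) cr)))
        (setf zi (+ (* 2d0 zr zi) ci)   ; uses the old zr
              zr new-zr))
      (when (> (+ (* zr zr) (* zi zi)) 4d0)
        (return k)))))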

The renderer uses double-floats throughout. I think I can safely say that the orbits' shapes are not due to cumulative floating-point errors (which I had suspected at some point), because tiny variations in the starting value usually do not produce significantly different orbits. The orbits are not chaotic systems.

Since tracing the few good samples is quite cheap compared to finding them, the renderer first collects some good samples in multiple threads by randomly sampling from the grid cells (see the optimizations above), and only afterwards processes them to keep track of their jumping around on the complex plane. Their paths are recorded in a big 16-bit greyscale image which is repeatedly written to disk.
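The tracing step might look like this - a sketch with hypothetical names, mapping each visited point to a pixel and incrementing a 16-bit counter:

;; Accumulate one orbit's visited points into the greyscale histogram,
;; a 2D array of (unsigned-byte 16) covering the view rectangle
;; [X-MIN, X-MAX] x [Y-MIN, Y-MAX].
(defun plot-orbit (image visited x-min x-max y-min y-max)
  (let ((w (array-dimension image 0))
        (h (array-dimension image 1)))
    (dolist (z visited)
      (let ((px (floor (* w (- (car z) x-min)) (- x-max x-min)))
            (py (floor (* h (- (cdr z) y-min)) (- y-max y-min))))
        (when (and (<= 0 px (1- w)) (<= 0 py (1- h))
                   (< (aref image px py) 65535)) ; saturate, don't wrap
          (incf (aref image px py)))))))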

Postprocessing, Color and Noise

Converting this greyscale HDR image to an LDR image with color has to be done by hand, which is a nontrivial task for most image manipulation programs because 20000x25000 is not exactly your standard image resolution. Gimp and Cinepaint both crash or freeze (and Gimp can't handle 16-bit images), but Photoshop worked for me. I also cheated a bit by gently surface-blurring the image, removing some noise.

The noise is the only reason you need the really large iteration depths in the range of millions that I used. If you increase both the minimum iteration time and the maximum iteration depth, your orbits will consist of more and more points, allowing you to choose a higher resolution. If, on the other hand, you get fewer than, say, five hits per pixel in the darker regions of an orbit, you will start to notice that one pixel may get only two hits while its neighbour gets ten, which shows up as noise in the output image. If you iterate more deeply, this effect evens out and the signal-to-noise ratio increases. To remove noise, you can always downscale the output image or, as I did, selectively blur the noisy areas a bit to improve their look.

For the output image that I consider the most successful, I used a minimum iteration time of 1000000 and a maximum iteration depth of 5000000. It took about 16 hours on my university's 8-core Xeon machine, which I abused for this task overnight.

Results

Here is a scaled version of about 5000x6000 pixels for your Firefox's non-crashing convenience.

buddhabrot_medium.jpg

buddhabrot_print.jpg

I am quite satisfied with the digitally printed one as well, although some of the darker reds lose their saturation.

Do not open the large image in your browser. It is a 93MiB JPG file of 20000x25000 pixels which consumes 1.5 GiB of RAM when opened. Don't try this at home. I warned you.

You can also have a look at the Lisp source.

Images created by Johann Korndoerfer (cupe) and released CC-Attribution 3.0 Germany, except the original Mandelbrot bulb image by Wikipedia user Hoehue.

LaTeX beamer markup = less suckage


There are many bad things about LaTeX - like not being able to use footnotes in tables, color verbatim text for syntax highlighting, or center math expressions. One general problem is its verbosity when only a small subset of its functionality is needed. Especially when using the beamer class, the mental and textual overhead made writing LaTeX too bothersome for me to be fun. A simple wiki-style markup would suffice for most tasks when creating slides, with extra functionality only needed for special slides like titles. Wouldn't it be nice to write

=Why tex needs to be replaced=
There are countless reasons:
* it's old
* I don't like it

which would become

\begin{frame}[plain]
\frametitle{Why tex needs to be replaced}
There are countless reasons:
\begin{itemize}
\item it's old
\item I don't like it
\end{itemize}
\end{frame}

I made a primitive LaTeX preprocessor in Lisp that does just that. It handles slides, converts itemizations, and includes and resizes graphics, while retaining all the nice features that make us use LaTeX in the first place (i.e. the math mode). Things to improve include support for enumerations and flushright/centering, and a friendlier shell script. I will probably update it as I progress with the talk I'm currently preparing, which will now be all the more awesome.
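The core of such a preprocessor is just a line-by-line transformation. A stripped-down sketch (not the actual texsucks code; it only handles frame titles and items) could look like this:

;; Translate wiki-style markup read from stream IN into beamer
;; LaTeX written to stream OUT. Lines "=title=" open a frame,
;; lines "* foo" become items, everything else passes through.
(defun translate (in out)
  (let ((in-frame nil) (in-items nil))
    (flet ((close-items ()
             (when in-items
               (format out "\\end{itemize}~%")
               (setf in-items nil)))
           (close-frame ()
             (when in-frame
               (format out "\\end{frame}~%")
               (setf in-frame nil))))
      (loop for line = (read-line in nil)
            while line
            do (cond ((and (> (length line) 1)
                           (char= (char line 0) #\=))
                      (close-items) (close-frame)
                      (format out "\\begin{frame}[plain]~%\\frametitle{~a}~%"
                              (string-trim "=" line))
                      (setf in-frame t))
                     ((and (> (length line) 1)
                           (string= (subseq line 0 2) "* "))
                      (unless in-items
                        (format out "\\begin{itemize}~%")
                        (setf in-items t))
                      (format out "\\item ~a~%" (subseq line 2)))
                     (t (close-items)
                        (write-line line out))))
      (close-items) (close-frame))))

Feeding it the example markup from above produces exactly the beamer frame shown earlier.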

You can check out everything from svn://erleuchtet.org/texsucks or download it directly. With sbcl installed, just run make.sh to create an example PDF.