white one - Making of


white one is my first 4k intro and my first serious demoscene production (as far as something like that can be serious). I'm new to C coding and to sizecoding in particular, so there was a lot to learn, which I'll try to summarize here. Download and run the executable (nvidia only, sorry) or watch the video capture first:

A 4k intro is an executable file of at most 4 kilobytes (4096 bytes) that generates video and audio. That is, it puts something moving on your screen and something audible on your speakers. The finished product runs for a few minutes, has some coherent theme and design, and ideally, sound and visual effects complement each other. On top of that, it's a technical challenge: it's impossible to store 3D models, textures or sound samples in 4 kilobytes, so you have to generate these things at runtime if you need them.

Overwhelmed by the impossibility of all this I started messing around.

I had been lurking at a few demoparties, but had never released anything nontrivial - i do know some OpenGL, but i normally code Lisp, which tends to produce executables measured in megabytes. Obviously, that had to change if i wanted to contribute a small intro. Playing with the GL Shading Language had always been fun for me, so it was clear that something shader-heavy was the only option. And I had some experience with C from microcontroller hacking.

When some friends of mine implemented a cellular automaton on the Fragment Shader in summer, I took their code and started messing with the shader. A cellular automaton implements a function on a grid of cells (here: pixels). The function takes the whole grid as input and produces a single pixel's value as output. To get one new frame, it has to be applied once for each pixel, using the previous frame as input.

A very simple function of that kind would, for example, assign to each pixel the value of its left neighbour, thus moving the whole image to the right by one pixel with each frame.
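This left-neighbour rule can be sketched in a few lines of Python, using numpy as a stand-in for the fragment shader (the real thing runs per-pixel on the GPU):

```python
import numpy as np

def step(frame):
    # Each pixel takes the value of its left neighbour, so the whole
    # image moves one pixel to the right with every frame (wrapping around).
    return np.roll(frame, shift=1, axis=1)

frame = np.array([[1, 2, 3],
                  [4, 5, 6]])
print(step(frame))   # [[3 1 2]
                     #  [6 4 5]]
```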

foobar.png

The Shader

(you might want to skip this section if you are more interested in how to make 4k intros in general)

First, i experimented with a function that looked like this:

1. Look up this pixel's current color.
2. Convert it to HSV colors. 
3. Use the Hue value as an angle and walk (dist_s*S + dist_v*V + dist_sv*S*V + dist_base) pixels in that direction in the image, where the dist_something values are arbitrarily chosen by me.
4. Look up the color value from there and output this as the pixel's new color.
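A minimal Python sketch of these four steps on the CPU (parameter values are hypothetical here, and the real shader works in GLSL on the GPU):

```python
import colorsys
import math

# Hypothetical parameter values; the intro's actual numbers were hand-tuned.
DIST_BASE, DIST_S, DIST_V, DIST_SV = 1.4, 3.1, -1.9, -2.4

def new_color(image, x, y):
    """One application of the feedback function to the pixel at (x, y).
    `image` is a list of rows of (r, g, b) tuples from the previous frame."""
    r, g, b = image[y][x]                         # 1. look up current color
    h, s, v = colorsys.rgb_to_hsv(r, g, b)        # 2. convert to HSV
    angle = h * 2.0 * math.pi                     # 3. hue as an angle...
    dist = DIST_S * s + DIST_V * v + DIST_SV * s * v + DIST_BASE
    sx = int(x + math.cos(angle) * dist) % len(image[0])  # ...walk dist pixels
    sy = int(y + math.sin(angle) * dist) % len(image)
    return image[sy][sx]                          # 4. read the new color there
```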

I made a small program that allowed me to play with the four parameters. Depending on the choice of the four parameters, interesting yet ugly things happened, e.g. solid-color patches would move about the screen until they collided and merged into a single color, or something vaguely resembling smoke would appear.

I realized i wanted some more features and added new parameters to adjust the output color in HSV color space, e.g. rotate the color wheel by incrementing H, decrease/increase the saturation S and darken/lighten the image by modifying V. A new parameter for weighted averaging between the old and the new image was also introduced. Interestingly, instead of using values between 0 and 1 for this blending, i ended up using values below 0 or above 1 for many of the more interesting effects :)

The next addition was a parameter to control smoothing the image. A value of 0 would mean no smoothing, 1 would mean a standard smoothing kernel of [[0 1 0] [1 4 1] [0 1 0]]. We are now only one more parameter away from the first effect that actually made it into the final intro which is also the first one on the screen:

(This will be the last time i bore you with concrete numbers)

smooth: -2.400000
dist_base: 1.400001
dist_s: 3.099999 
dist_v: -1.900000
dist_sv: -2.400000
mix_bias: -0.480000 (alpha blending between frames)
h_cycle: 0.020000 (cycle the color wheel by a small amount)
rotate: 0.001000

As you can see, the smoothing parameter is outside its normal range of [0,1] as well and the kernel that is defined this way looks a lot like a sharpening kernel. If you look closely at the very beginning you can see checkerboard-like pixels in the center that are due to this sharpening. The one missing parameter, called "rotation", rotates the input image by a certain angle before reading from it - or so i thought. I discovered later that i had used the rotated image just for the first read (the one determining in which direction to walk for the second read), but the second read was on the unrotated image. When i fixed this, everything became a lot less interesting, so i kept the bug.
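Assuming the smoothing parameter linearly interpolates between an identity kernel and the normalized smoothing kernel (my reading of the description above), the out-of-range value does indeed produce a sharpening kernel:

```python
import numpy as np

identity = np.array([[0., 0., 0.],
                     [0., 1., 0.],
                     [0., 0., 0.]])
smoothing = np.array([[0., 1., 0.],
                      [1., 4., 1.],
                      [0., 1., 0.]]) / 8.0   # normalized smoothing kernel

def blend_kernel(s):
    # s = 0 gives the identity, s = 1 the smoothing kernel;
    # values outside [0, 1] extrapolate past both.
    return (1.0 - s) * identity + s * smoothing

k = blend_kernel(-2.4)
# centre weight above 1, negative surround: a sharpening kernel
# (the weights still sum to 1, so overall brightness is preserved)
```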

In the periphery of the image, the rotation shifts content by less than a full pixel, so OpenGL's bilinear texture filtering blends neighbouring values and blurs the image. If you look very closely in the final 1080p production, you can see four areas where the rotation is just about one pixel and the checkerboard pattern re-emerges. I also introduced a parameter for zooming that works just like the rotation parameter (i.e. it is broken as well).

The last parameter controls alpha blending of the final image with an additional image. The reason i had to do this is that some effects can get stuck unpredictably: once the screen is a solid color, it will stay a solid color forever. I needed a way to re-introduce some variation into the image to get the effects started again in this case. Naively, I thought there would be enough space in the 4 kilobytes to render some primitive geometry (procedural trees and such) so I could use those as a starting point for an LSD trip. In the end the rendered geometry just became a rotating white "1" (|) that is only visible in some effects. Anyway, now there are two inputs: the last frame and some rendered geometry.

Here is the final shader code in GLSL. The rgbToHsv and hsvToRgb functions are not pasted here and behave as you would expect ;)

So this is basically the heart of the intro. It is arguably less than readable since we had already compressed the code a bit. GLSL code must be given as a string to OpenGL and is compiled by the graphics card driver, so the whole shader code has to be present in the executable. Later, all variables and functions would be renamed to one-character names and so on, making it even less readable.

After some time i had collected about 70 different parameter sets that i had found interesting, many of them similar and some ugly. My eyes hurt. Here are some more examples:

bla-1.png bla-3.png

bla-4.png wellen_almost_converged.png

The Rest

Now that the shader functionality was fixed (any change would invalidate many of the 70 precious effects), i needed to wrap it in a 4k executable, get some music and make some kind of timeline. Fortunately there are now some nice tools that help the aspiring demo coder with these tasks. For the music, many 4k intros today use 4klang as a synthesizer. 4klang is designed for 4k intros and produces an object file containing the synth, the instruments as well as the music data, which you can link against. I also found iq's 4k framework, which is more of a collection of frameworks for different kinds of 1k and 4k intros. I just took the one for OpenGL shader-on-a-quad intros and started reading its code, figuring out how it works. The third tool is Crinkler, a size-optimizing and compressing linker that replaces/calls Visual Studio's builtin linker and is probably used in 95% of today's 4k and 64k intros on Windows. Since Crinkler is Windows-specific, now was the time to switch operating systems and start Visual Studio for coding on Windows for the first time in ten years or so. After a few nights i even stopped using emacs shortcuts.

In the meantime i had coaxed my brother into composing music. While he familiarized himself with the 4klang workflow (using OpenMPT for composing) I continued experimenting with the framework. So now we had become a team of two and kept each other updated while i was distracted for a few months by my last exam. This is how it went for him:

Music

Clemens speaking. So, to be honest, I didn't know at all what I was getting myself into - I'm new to music making, had never seen a tracker or a stack-based synthesizer before, and because I live far, far away, I knew nothing about the visuals except a few old screenshots.

To my surprise, I had a lot of fun. Once you understand stack-based synthesis, 4klang is straightforward and gives you really tight control of the sound - you just play and create. There's also not much of an interface to get in your way, and I miss that when going back to the pseudo-analogue button soup of Logic or Ableton. But since it's so open, you've got to play with it for a while to find out what you can do.

  • Don't be afraid of 4klang's store unit, because that could well be the most awesome little feature of this synth. It allows you to store the current top value of the stack in (almost) any parameter of (almost) any other unit. That's a simple concept, but it really pays off to follow alcatraz' advice to go crazy with it. You can build multi-purpose instruments that behave differently based on the current pitch. Or get evolving sounds, by adding LFOs or envelope units to provide you with control values that change on a per-note-timescale. Then modulate the parameters of these generators, again, with the current pitch or with a dedicated control instrument to get evolving and pitch-dependent sounds. Add more meta levels. Try and see what happens if you filter the control values before feeding them into Store units. And so on. It's really a lot of fun.

(4klang) Drums by umruehren

(4klang) Gebrumml Gbrumml by umruehren

  • Store the output of your main envelope into said envelope's own gain parameter. I'm still not convinced that this should work, but it will add a lot of punch to the sound, e.g. for a bass drum.

  • I had some problems with phase shifts (for instruments that made heavy use of the reverb unit), resulting in louder-than-usual click noises between notes. In that case, you can get quite far by hand-adjusting the phase and/or detune of the offending oscillators via control instruments... for each note change. Note the glitches:

(4klang) jell-o by umruehren

So now for the really difficult part: What kind of sound will match the visuals? And how do you write music, anyway? Sadly, there's not much to say about my creative process - because, obviously, when I say "creative process" I mean "messing around until something works or time's up". At least I managed to decide on "minimalistic but full of little fuzzy things" as a general aim before I started. But what does that even mean? So, instead of planning, I built some 4 or 5 proto-tracks that all went into the trash can for very good reasons (but each one was a necessary step) until suddenly, well, time was up and we stuck with the last one.

I'm almost sure that that is the only correct way of doing things.

(4klang) Mod4- by umruehren

Music size optimization

At about two weeks before the party, I had a track ready. But at 2.5k compressed (about 500 bytes non-negotiable 4klang baseline + 1k for instruments + something short of 1k of note values) that was completely unusable for our 4k. So something had to go.

The final track weighed in at about 1.5k compressed, which means I trashed a kilobyte of instrument definitions and note values. The price for that is of course a reduction of the richness-of-sound (you delete less important instruments, merge instruments, simplify instruments) and reduction of variety (reduce the number of different substrings in each instrument track, delete less brilliant parts and generally repeat more stuff). Strangely, I think if anything, the track got better because of the size constraints.

Mastering

There's just one big regret I have: We should have tested the track on a range of different speakers before submitting, because obviously the party PA behaved differently from the Nubert HiFi speakers we used for the final "mastering". Using the HiFi, I evened out a lot of high frequencies a few days before the party, and the result was mud. That's probably a rookie mistake so don't do it.

One last thing. I've read a few comments by non-coder musicians who complained that 4klang wasn't nice and easy enough - you're just being lazy. All the hard work has been done for you. So grab the chance if someone asks you to learn this stuff - it's a lot of fun and kind of addictive.

So much for the music.

Finishing

I got 4klang plugged into the framework with halcy's help, which proved more difficult than i had expected because the executables i first compiled (even the 4klang example) produced weird glitches on my machine, but not on other machines... Eventually we got it working (without understanding the problem) while my brother's musical output became better and better. This was about a month before tUM, the demoparty where we wanted to release the yet-unnamed intro.

Again with the help of some friends at entropia, i got the basic functionality of the linux feedback-explorer ported into the framework. I eliminated many of the 70 original effects, reducing them to about 20, and identified some unnecessary parameters. Still, 20 times 14 float values of four bytes each would be a bit too much to fit into the 4 kilobytes together with the shader, the music, the C glue and the yet-imaginary timeline. I edited the table by hand and with python tools written for the purpose to try and find a compact representation. Many of the values where precision was not that important were merged to the same value so that crinkler could compress better. They were also mapped to 16-bit integers and restored at runtime, which hurt precision but did not destroy the effects: with the right mapping, even the few really delicate values ended up being restored close to their original values. The parameter table now took up about 500 bytes and compressed to about 200 bytes, which was okay. The whole thing, including the beta music, was 4.8 kilobytes - there were three weeks to go and no timeline or concept yet.
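The mapping to 16-bit integers might look something like this sketch (the [-4.0, 4.0] range and the helper names are invented for illustration; the actual tools were custom python scripts):

```python
def quantize(values, lo, hi):
    """Map floats in [lo, hi] to 16-bit unsigned integers."""
    scale = 65535.0 / (hi - lo)
    return [round((v - lo) * scale) for v in values]

def restore(ints, lo, hi):
    """Map 16-bit integers back to floats in [lo, hi]."""
    scale = (hi - lo) / 65535.0
    return [i * scale + lo for i in ints]

params = [-2.4, 1.400001, 3.099999, -1.9]   # a few of the parameter values
packed = quantize(params, -4.0, 4.0)
unpacked = restore(packed, -4.0, 4.0)
# the round-trip error is at most half a quantization step (~6e-5 here)
```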

We met around christmas at our parents' house and spent hours creating a sequence of the effects. We introduced a parameter flag to control whether one effect's parameters would be blended into the next effect's parameters while it was running or whether the transition would be abrupt. We made a timeline. Syncing sound and visuals late at night proved harmful to sleep quality, and after the first night of dreaming of spirals we forced ourselves to take breaks every few hours. The rendered geometry was cut from fancy twisting spiral-ribbons to the single white quad. We had finally arrived at a size of 4200 bytes when we headed for my place in Karlsruhe, right next to where tUM takes place. The intro had even got our grandmother's approval over christmas, which is not something that many people can say about their LSD-themed 4k intros.

At the partyplace we tested the beta version on the compo machine - it worked. Relaxed, we plucked a few more bytes here and there, did some last syncing, hid the mouse cursor, made executables for the various resolutions and won the 4k competition. Here is the live footage of the crowd going wild:

Finally, after ten years of being intimidated by the demoscene, we started to see sceners no longer as godlike, but as talented and motivated people. Had we learned this lesson earlier, we would have started making demos years ago.

Download the original 4k executable from pouet. We didn't bother making the source pretty, but you can have a look if you want.

All content made by Johann Korndörfer and Clemens Korndörfer. Released CreativeCommons 3.0 ShareAlike.

Multitouch360 - a Platform for Interactive Art Installations

hand.jpg
Multitouch360 is a project by Johann Korndoerfer and Thorsten Blum that aims at creating a hemispherical multitouch display. It has now officially reached some degree of "finished", meaning that all essentials are working, no cheap workarounds remain in either hardware or software and the software framework is capable of letting anyone code cool stuff.

Watch the short video below:

This is my debug application running on Multitouch360, showing a wire hemisphere along with camera calibration markers and visualizing touch events. As you can see, touch recognition works reasonably well.

How does it work?

camera.jpg

We use a technique called FTIR (Frustrated Total Internal Reflection) to detect touches. This means that an infrared camera looks at the hemisphere from the inside. The picture to the left is what the camera sees - you can see a shadow from the hand and five bright points at the tips of the fingers. Those bright points are created by the users' fingers touching the outer acrylic hemisphere (called the Wave Guide). The Wave Guide is made of clear acrylic and flooded with infrared light from 288 infrared LEDs set in the rim. The IR light can't escape the acrylic except where it is touched by an object with a sufficiently large refractive index, like water or human skin. Thus, the infrared light enters the finger, gets scattered and can be picked up by the camera.


full_dark_captions.JPG

Since you can't project anything onto clear acrylic, we needed an inner layer of matte acrylic (called the Diffusor) just inside the Wave Guide. This layer makes the bright points from the outer layer a bit blurry for the camera, but we would have done the blurring in software anyway. Using a mirror to save some space, we project a rendered image onto the Diffusor, again from below. You can see the whole setup in the image to the right.

The image that is projected onto the sphere gets distorted quite a bit, which has to be anticipated in software. The same applies for the other direction: touch locations have to be transformed back into 3d as well. Both the projection and the camera have to be calibrated, which can be done by clicking five positions with the mouse (calibrating the projector), then touching the same positions on the sphere (calibrating the camera). You can see a visualization of the gradient descent that solves the resulting optimization problem right at the beginning of the video above.

The Software

I made a nice little framework for Multitouch360 that lets you write multitouch applications. The debug application from the video above looks like this:

from multitouch360 import *

class DebugApplication(MultitouchApplication):
    def touch_down(self, touch):
        TouchIndicator(touch)
    def touch_move(self, touch):
        pass
    def touch_up(self, touch):
        Explosion(touch.world_pos)
    def display(self):
        glutWireSphere(1, 30, 30)

DebugApplication()

And this is it. You can define textures, buttons, etc. easily and the weird math you need for handling things on a sphere is taken care of.

What do you do with it?

crowd_und_setup.jpg

First, you can do something with it. If you know some Python and OpenGL, you can code your own application. The Multitouch360 framework will simulate the hemisphere on any computer and you can click on a virtual version of it with your mouse. If you are interested in doing something with it and you are located near Karlsruhe, Germany, please let me know. Keep in mind that anything that runs on Multitouch360 must be tailored to the spherical layout - thinking in 2d will make you wish you had a flat table, but embracing the hemispherical form lets you do some things that are impossible otherwise: planets, star maps, boobs or eyes are some of the things that may come to life. Games that involve not seeing what your opponent is doing on their side are another obvious candidate.

Second, i have made a few very simple demo applications that are hardly more than a hello-world, but may show what's possible. Watch people play Global Thermonuclear War in the video below, incinerating whole countries by touching them. The only winning move is not to play. (Application written in about three hours.)

Project State

Although there are a lot of other ideas for games or weird interactive installations, i am now starting to pursue other projects and apart from some maintenance, documentation and potentially helping you code something nice, the project is now finished for me.

The project is also part of my subsidiary subject (Media Art) at HfG Karlsruhe and has kindly been hosted by the Computer Graphics Department of the Karlsruhe University. This post is also mirrored on the official Multitouch360 blog where you can find more pictures of Thorsten and me building the thing.

This article describes how to render a ridiculously large 500 Megapixel Buddhabrot with iteration depths in the range of millions. Lisp source included. Scroll down for the final image.


buddhabrot_small_detail.jpg

Why would you want to do this? To have it printed and on your wall, of course (see pictures below).

I will begin by outlining the general idea of the Buddhabrot (see the Wikipedia article for more), then talk about my optimizations and rendering choices.

The Buddhabrot

You typically render a Buddhabrot by choosing a set of starting points in the complex plane (i will call them "samples" from now on), running each sample c through the iteration z_(k+1) = z_k^2 + c with z_0 = 0+0i and, for those samples that grow very large after some time (i.e. those that are outside the Mandelbrot Set), remembering all the z_k that were visited during the iteration. All those visited points from all the escaping samples form a point cloud in the complex plane that is usually called a Buddhabrot when visualized.


You need a maximum iteration depth n (sometimes called the Dwell Limit) because many samples are within the Mandelbrot Set and the iteration would run indefinitely for those. Whenever the maximum iteration depth is reached, the sample is considered to be inside the Mandelbrot Set - although it possibly isn't, and after another 1000 iterations it would have escaped. There is no way to be sure.

We now consider only those samples "good" whose iteration escaped before reaching the maximum iteration depth, i.e. those that are definitely outside the Mandelbrot Set. Each good sample contributes a set of points whose size equals the number of iteration steps i it took for us to find out that it was outside the Mandelbrot Set. Sometimes this is a very small number like 2 or 3, but samples nearer to the border of the Mandelbrot Set take much longer. A sample can take thousands or millions of iterations before escaping, thus leaving a much larger footprint in the resulting point cloud. It seems that for each natural number n there is a sample that takes at least n iteration steps, although for higher values of n it becomes harder and harder to find one.
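The per-sample iteration described above can be sketched in a few lines of Python (the actual renderer is written in Common Lisp):

```python
def iterate(c, max_iter):
    """Iterate z -> z^2 + c from z = 0. Return the list of visited points
    if c escapes (|z| > 2) before max_iter, else None (assumed inside)."""
    z = 0j
    orbit = []
    for _ in range(max_iter):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2.0:
            return orbit        # a "good" sample: definitely outside the set
    return None                 # dwell limit reached; there is no way to be sure

print(iterate(0j, 100))           # None: 0 is inside the Mandelbrot Set
print(len(iterate(1 + 0j, 100)))  # 3: the orbit 1, 2, 5 escapes immediately
```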

Deep Iteration

It has been observed by many people that just rendering the Buddhabrot like this - using all the escaping samples' visited points - gives a nebula-like image but that a more interesting image can be made by considering only those samples as "interesting" whose iteration time i exceeds some minimum iteration time k with k < n. For example, you could use k = 1000 and n = 10000, i.e. keep only those samples that do not escape before their 1000th iteration but do escape before their 10000th iteration. If you look at one of those more deeply iterated good samples' visited path in the complex plane you will note some pattern: it forms some kind of orbit, sometimes some kind of circle or a set of circles or other shapes. I have no idea why the orbits are shaped this way and i don't know if anyone else does. Here is a part of the orbit of the sample -0.9114822433916717 + 0.2523940788714053i, for example (containing nearly five million points):

orbit.png

Incidentally, all interesting samples like this one are located very very close to the border of the Mandelbrot Set and make good targets to zoom into with your favourite fractal explorer. Try it.

Since the longer orbits look nice, we will try to find only "interesting" samples that take a long time to finally escape. There have been several approaches to finding them. The most obvious one is of course just choosing a sample randomly and seeing how it goes, which means throwing away something like 99.999% of them. You could also use a regularly spaced grid of samples, which basically amounts to the same thing. The interesting samples are all located near the Mandelbrot Set's border, so there must be some spatial pattern to their distribution which might help us find them. If you ever played around with the Set a bit you probably know that the border is infinitely long and calling it "complex" is a bit of an understatement, but surely we can do better than just randomly sampling the plane?

Optimization

Mandelbrot_Set_-_Periodicities_coloured.png

  • Rejecting all samples from within the two biggest parts of the Mandelbrot Set, the "Main Cardioid" and the circular "Period-2-Bulb", is a must-have. This simple check removes a large share of the points inside the Set, which are very costly because they iterate until the dwell limit is reached. I just used the formula from the Wikipedia article. The areas removed by this test are shown in red on this image of the Mandelbrot Set (my output images are rotated clockwise by 90 degrees, don't get confused):
  • People have also used the Metropolis-Hastings Algorithm, considering the mapping c -> i a probability density function. The method is especially useful for rendering a small detail of the bigger image, but does not work too well for the whole image with very big values of k. I kept finding many interesting samples with it, but their orbits turned out to be very similar: several good samples that are very close to each other produce orbits that are almost identical, giving an image of a single blurred orbit. I increased the mutation distance enough to produce sufficiently distinctive orbits, but then the speedup was gone. In the end i didn't use this method. It is probably more useful for rendering the "normal" Buddhabrot, or a detail of it, without the minimum iteration constraint.
  • To partly remove some of the costly larger areas (like the non-circular bulbs labeled "3" and up in the image above) from within the Set i divided the plane into rectangular cells and chose random samples only from those cells that contained some part of the border of the Mandelbrot Set. Depending on the cell grid resolution and the iteration depth this gave a speedup of 3 to 8.
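The cardioid/bulb rejection test from the first point can be written directly from the Wikipedia formulas; here is a Python sketch (the renderer itself is in Lisp):

```python
def in_main_bulbs(c):
    """True if c lies inside the main cardioid or the period-2 bulb,
    using the standard test from the Wikipedia article on the Mandelbrot set.
    Samples rejected here would otherwise iterate until the dwell limit."""
    x, y = c.real, c.imag
    q = (x - 0.25) ** 2 + y * y
    if q * (q + (x - 0.25)) < 0.25 * y * y:
        return True                        # main cardioid
    if (x + 1.0) ** 2 + y * y < 0.0625:
        return True                        # period-2 bulb
    return False

print(in_main_bulbs(0j))          # True: the origin is in the main cardioid
print(in_main_bulbs(-1 + 0j))     # True: centre of the period-2 bulb
print(in_main_bulbs(0.5 + 0.5j))  # False: well outside both regions
```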


"The"  Buddhabrotbuddhabrot_symmetry.jpg

Since we still choose our samples randomly, every rendering will look different - depending on which orbits your random search finds. There is no "the" Buddhabrot (see Melinda's comment below - rendering for an infinite amount of time yields "the" Buddhabrot). This also means that the image is not symmetric on the small scale, which is a good thing if you ask me. You could of course get a symmetric image of the same quality in half the time by mirroring it, but i think it's worth waiting for the asymmetric one. On the right is a part of the final rendering that exhibits superficial symmetry, but is composed of asymmetric orbits. Click it to view this detail in the final resolution.

If i had let my renderer run for much longer (maybe a week or a month), more and more orbits would have shown up, occluding each other and finally blurring the whole thing into the standard nebula-like buddhabrot. The key is to render just long enough until a sufficient amount of orbits have appeared and then stop.

Lisp Implementation

The renderer is written in Common Lisp, which is not your classic number crunching language. If the goal was to do it as fast as possible, you'd probably end up doing it in C with inline assembly or on the GPU. On the other hand, i had never really optimized any lisp code to the limit and wanted to do just that: see how far you can get. One long night of low-level optimization at entropia with the help of some friends later, there was only a factor of 2 left compared to the C implementation that one entropian made. Which is quite okay, considering the cumulative speedup of about 2000 compared to my first implementation. Trading sbcl's builtin complex numbers for traditional pairs of floats, declaring all types, working in-place instead of on the stack, considering cache efficiency, prompting usage of inline arithmetic instead of jumps to costly sbcl functions, etc. gained us a factor of about 3 to 5, while the big rest was due to the algorithmic improvements.

The renderer uses double-floats throughout. I think i can safely say that the orbits' shapes are not due to cumulative floating-point errors (which i had suspected at some time) because tiny variations in the starting value usually do not produce significantly different orbits. Orbits are not chaotic systems.

Since tracing the few good samples is quite cheap compared to finding them, the renderer first multithreadedly collects some good samples by randomly sampling from the grid cells (see optimizations above) and only afterwards processes them to keep track of their jumping around on the complex plane. Their path is recorded in a big 16bit greyscale image which is repeatedly written to disk.
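The second phase - tracing the good samples into a hit-count image - might look like this Python sketch (again, the real renderer is Lisp and multithreaded; names and the viewport arguments are my own):

```python
import numpy as np

def render(orbits, width, height, xmin, xmax, ymin, ymax):
    """Accumulate all orbit points into a 16-bit greyscale hit-count image."""
    img = np.zeros((height, width), dtype=np.uint16)
    for orbit in orbits:
        for z in orbit:
            px = int((z.real - xmin) / (xmax - xmin) * width)
            py = int((z.imag - ymin) / (ymax - ymin) * height)
            if 0 <= px < width and 0 <= py < height:
                img[py, px] += 1      # one more hit for this pixel
    return img
```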

Postprocessing, Color and Noise

Converting this greyscale HDR image to an LDR image with color has to be done by hand, which is a nontrivial task for most image manipulation programs because 20000x25000 is not exactly your standard image resolution. Gimp and Cinepaint both crash or freeze (and Gimp can't handle 16-bit images), but Photoshop worked for me. I also cheated a bit by gently surface-blurring the image, removing some noise.

The noise is the only reason you need the really large iteration depths in the range of millions that i used. If you increase both the minimum iteration time and the maximum iteration limit, your orbits will consist of more and more points, allowing you to choose a higher resolution. If, on the other hand, you get less than, say, five hits per pixel in the darker regions of the orbit you will start to notice that some pixels may get only two hits and their neighbour pixel may get ten hits which will be visible as noise in the output image. If you iterate more deeply, this effect evens out and the signal-to-noise ratio increases. To remove noise, you can always downscale the output image or, as i did, selectively blur the image a bit to improve the look of the noisy areas.

For the output image that i consider the most successful one, i used a minimum iteration time of 1000000 and a maximum iteration depth of 5000000. It took about 16 hours on my university's 8-core Xeon machine which i abused for this task overnight.

Results

Here is a scaled version of about 5000x6000 pixels for your firefox's non-crashing convenience.

buddhabrot_medium.jpg

buddhabrot_print.jpg

I am quite satisfied with the digitally printed one as well, although some of the darker reds lose their saturation.

Do not open the large image in your browser. It is a 93MiB JPG file of 20000x25000 pixels which consumes 1.5 GiB of RAM when opened. Don't try this at home. I warned you.

You can also have a look at the Lisp source.

Images created by Johann Korndoerfer (cupe) and released CC-Attribution 3.0 Germany, except the original Mandelbrot bulb image by Wikipedia user Hoehue.

Talk: Computer Graphics, Part 1


As promised earlier, i gave a talk at entropia in December. I tried to touch on many of the non-hardware-oriented topics in computer graphics, with a focus on those that are interesting (to me, at least) and easy to explain without too much background knowledge in math or computer science.

The talk was an experiment for me in several ways. First, the presentation style: The slides should contain as little text as possible, providing only visual assistance to what i said rather than guiding the talk themselves. In typical university or conference talks, i usually read the whole content-stuffed slide as soon as it appears without listening to what the lecturer says. The remaining two thirds of each slide's time usually consists of waiting for the next slide while trying not to fall asleep. I did not want to go as far in the opposite direction as Lawrence Lessig does because, obviously, the talk would be graphics-oriented and some of the pictures would need time to sink in and be explained. I think the style worked out fine and i surprised myself by talking quite a lot about slides with just one picture on them although i had not been practicing.

Second, the technical means: it was also the first time I used the LaTeX beamer class for slides. Creating the slides was painless enough with the help of my LaTeX preprocessor, but the result did not convince me: (La)TeX is not made for displaying pixel data on screen, so images that want to be displayed exactly 1:1 get resized. Most PDF viewers use nearest-neighbour interpolation when resizing pictures, making all my nice pictures look bad - which is somewhat embarrassing for a talk that covers image filtering and interpolation. With Evince I found a PDF viewer that offers other interpolation methods, making the result bearable. Nevertheless, I will have to find some other way to show images next time. Possibly I will create the slides as images themselves and present them with an image viewer like gqview. And maybe I'll end up writing another minimalist slide-defining markup language that creates PNG images.

Third, I wanted to be able to publish the slides. This meant restricting myself to pictures released under some reasonably free license, or making my own. I ended up using many pictures from the Wikimedia Commons, usually modifying them to fit the black background, and made many of the pictures myself. The archive containing the slides includes a file that provides proper attribution to the creators of the CC-Attribution images. Tracking down the authors of each of the images was quite painful (and the reason it took so long to publish the slides). I'd really appreciate a Creative Commons license or something similar that does not force users of my work to remember or research my name - releasing into the public domain is not possible in Germany, and just granting all users every imaginable right to do as they wish (which I hereby do) leaves me doubting the lawfulness of that statement. I'd really like some standard way of doing this.

All in all, things went well, and I certainly learned something in the two months it took to prepare a talk that was over in about 90 minutes. Having invested this considerable amount of time into something that did not last very long, I will certainly consider giving it again (or recycling the slides for a similar talk).

The talk's article in the amazing Entropia wiki contains a link to the slides, some of which may not make much sense without commentary.

And here’s the flash version:

LaTeX beamer markup = less suckage

| | Comments (3)

There are many bad things about LaTeX - like not being able to use footnotes in tables, to color verbatim text for syntax highlighting, or to center math expressions. One general problem is its verbosity when only a small subset of its functionality is needed. Especially when using the beamer class, the mental and textual overhead made writing LaTeX too bothersome for me to be fun. A simple wiki-style markup would suffice for most tasks when creating slides, with extra functionality only needed for special slides like titles. Wouldn't it be nice to write

=Why tex needs to be replaced=
There are countless reasons:
* it's old
* I don't like it

which would become

\begin{frame}[plain]
\frametitle{Why tex needs to be replaced}
There are countless reasons:
\begin{itemize}
\item it's old
\item I don't like it
\end{itemize}
\end{frame}

I made a primitive LaTeX preprocessor in Lisp that does just that. It handles slides, converts itemizations, and includes and resizes graphics, while retaining all the nice features that make us use LaTeX in the first place (i.e. the math mode). Things to improve include support for enumerations and flushright/centering, and a more friendly shell script. I will probably update it as I progress with the talk I'm currently preparing, which will now be all the more awesome.
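The core transformation is small enough to sketch. Here is a toy version in Python - not the actual Lisp preprocessor linked below, and all names are my own - that handles just the two constructs from the example above, headings and bullets:

```python
import re

def wiki_to_beamer(src):
    """Convert =Title= headings and * bullets to beamer frames.
    Toy sketch: each =...= line opens a new frame, closing any
    previous frame and any open itemize environment first."""
    out, in_frame, in_items = [], False, False

    def close():
        nonlocal in_frame, in_items
        if in_items:
            out.append(r"\end{itemize}"); in_items = False
        if in_frame:
            out.append(r"\end{frame}"); in_frame = False

    for line in src.splitlines():
        m = re.match(r"=(.+)=\s*$", line)
        if m:  # a =Title= line starts a fresh frame
            close()
            out.append(r"\begin{frame}[plain]")
            out.append(r"\frametitle{%s}" % m.group(1))
            in_frame = True
        elif line.startswith("* "):  # a bullet item
            if not in_items:
                out.append(r"\begin{itemize}"); in_items = True
            out.append(r"\item " + line[2:])
        else:  # ordinary text passes through unchanged
            if in_items:
                out.append(r"\end{itemize}"); in_items = False
            out.append(line)
    close()
    return "\n".join(out)
```

Fed the wiki-style snippet above, this produces exactly the beamer frame shown; anything beyond headings and bullets (math, graphics, special slides) would pass through untouched, which is the whole point of a preprocessor rather than a replacement.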

You can check out everything from svn://erleuchtet.org/texsucks or download it directly. With SBCL installed, just run make.sh to create an example PDF.

Project Overview: perfectstorm

| | Comments (29)

with-health-bars.png

perfectstorm is a real-time strategy game study written in Common Lisp, using OpenGL for graphics display and cairo for texture generation. It is in active development with many of the basic features still unimplemented, but I decided the effort put into it justifies some public documentation.

Terrain Generation, 90s-style

| | Comments (1)

If you have played with Bryce or Terragen, you know that terrain generation is fun and useless. The Midpoint Displacement Algorithm is one of the simplest methods to build a triangle mesh that resembles the terrain found in nature. You can see its output on the right.

displacement.jpg

While this looks like a mountain of some sort, the algorithm does not produce good mountains every time you invoke it. Instead, the result is quite random and almost always bad. Have a look at the algorithm:

  1. Make one triangle.
  2. Subdivide all your triangles into four new ones (use the edge midpoints as new vertices).
  3. Move your new midpoints along the y-axis by a random offset.
  4. Repeat 2 and 3 with half the offset.
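The same recursion is easiest to see in one dimension: subdivide a segment instead of a triangle, displacing each new midpoint by a random offset that halves with every level. A minimal Python sketch of this 1D analogue (the post's version works on triangle meshes; function and parameter names are my own):

```python
import random

def midpoint_displace(left, right, depth, offset):
    """1D midpoint displacement: returns a list of 2**depth + 1 heights.
    Each recursion level nudges the new midpoint by a random amount
    and halves the offset, giving the self-similar (fractal) profile."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-offset, offset)
    left_half = midpoint_displace(left, mid, depth - 1, offset / 2)
    right_half = midpoint_displace(mid, right, depth - 1, offset / 2)
    return left_half + right_half[1:]  # drop the duplicated midpoint

heights = midpoint_displace(0.0, 0.0, depth=4, offset=1.0)
print(len(heights))  # prints 17, i.e. 2**4 + 1 samples
```

The endpoints stay fixed while everything in between is random - which is exactly why repeated runs give wildly different (and mostly bad) mountains.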

The algorithm refines its terrain model recursively with every step, producing self-similar (in other words fractal) patches of terrain. It is short and easy to understand. Too bad real mountains are only superficially self-similar. One of the most visible shortcomings of the algorithm can be seen when the random displacement is replaced by its expected value. The “expected value” for the algorithm looks like this:

displacement-norandom.jpg

These furrows can be seen in most randomly displaced terrains, too, when viewed from specific angles or directly from above. (The yellow cube is the light source.)

A better approach might be to use Perlin noise as a heightmap since it does not introduce visible artifacts. I’ll probably test this later.

Concerning the implementation, I spent most of the time on fundamental mesh operations like splitting lines and triangles. Since this will not be the last time I handle triangle meshes, and since I might save other Lispers the pain, I am thinking about expanding this and publishing it as a general mesh handling and drawing library for Common Lisp.

a quick guide to shaders

| | Comments (1)

A shader is a small program that is executed on the graphics card. On modern graphics cards, up to 128 shader programs can be executed in parallel. Conventional parallel programming is hard because there are resources to be shared, but shader programs are completely independent, allowing the graphics card manufacturers to add more shader pipelines at will. Currently there are two important types of shaders: vertex shaders and fragment shaders. For each frame that is rendered, all visible vertices (read: nodes of the mesh) are processed by vertex shaders which can modify their position or attributes like normals, color or any other user-defined attributes.

fraktale_teekanne.png

One fragment corresponds to one pixel on the screen. For each fragment, the fragment shader is called. A fragment belongs to the polygon visible at the position of the corresponding pixel on the screen, i.e. the intersection of the pixel's view ray with the polygon defines the fragment's position in the 3D world. The fragment shader thus knows its position within the polygon and can use this information, e.g. for looking up (and returning) the color of the polygon's texture at that place. When using fragment shaders, OpenGL will not do texture mapping for you; you will have to use a trivial fragment shader to accomplish what would happen automatically without shaders.

Information can pass from the vertex shader to the fragment shader, but not in the other direction. The shader programmer may define arbitrary “varying” variables which are written by the vertex shader and, linearly interpolated between the vertices, become available to the fragment shader. Each triangle needs three vertex shader executions, but a much larger number of fragment shader executions, depending on the size of the triangle on the screen.
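As a sketch, such a “trivial” shader pair - passing a texture coordinate down as a varying and doing a plain texture lookup, i.e. reproducing what fixed-function OpenGL would do anyway - might look like this in old-style GLSL (names like `uv` and `tex` are my own):

```glsl
// vertex shader: transform the vertex, hand a varying to the fragment stage
varying vec2 uv;
void main() {
    uv = gl_MultiTexCoord0.st;  // per-vertex texture coordinate
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// fragment shader: 'uv' arrives linearly interpolated across the triangle
varying vec2 uv;
uniform sampler2D tex;
void main() {
    gl_FragColor = texture2D(tex, uv);  // plain texture lookup
}
```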

Naturally, the fragment shader has access to all of the triangle’s textures. For advanced shaders, like those in modern games, these auxiliary textures usually contain a normal map and/or a height map, i.e. the direction of the normal for each texel (the three coordinates of the normal are stored as rgb) or the height of the texel compared to the triangle plane (as a grey value) is available. With this information a fragment shader can do arbitrarily complex stuff, ranging from good old bump mapping to a wide variety of occlusion and parallax mapping techniques, the use of gloss maps or the design of weird materials, their colors changing with the angle the camera looks at them.
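Reading such a normal map is a one-liner: since each RGB channel stores a value in [0,1] but normal coordinates live in [-1,1], the fragment shader rescales on lookup. A GLSL sketch, with `normalmap` and `uv` as hypothetical names:

```glsl
// decode a normal from an RGB texel: [0,1] per channel -> [-1,1] per axis
vec3 normal = normalize(texture2D(normalmap, uv).rgb * 2.0 - 1.0);
```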

Since it is also possible to pass values (e.g. a tick count or wind speed) from OpenGL to the shaders, they are used to implement flags fluttering in the wind, rocket engines causing ripples in the air or other animations or effects you would have thought the CPU knew about.

In my experiments I used GLSL (the OpenGL Shading Language), a C-like mid-level language that is compiled into assembly code executable on the graphics hardware. GLSL provides clipping, matrix multiplication, dot products, etc. as primitive operations, along with some useful predefined special variables that are always present and need not be declared.

In the quest for more and more parallelization in computing, shaders may be the most successful example, since shader parallelization demands no intervention by the programmer at all. Shaders are bound to become even more complex: with geometry shaders (available on graphics cards labeled “Shader Model 4” or “DirectX 10”, something my setup lacks) it becomes possible to generate geometry on the fly and even run particle systems entirely on the GPU. It can be expected that even more parallelizable work will be taken from the CPU and put onto the GPU, providing an infinite playground for the demo coder. Graphics programming has never been this easy and fun. 500 GFLOPS are just $200 away :)

If you are interested, have a look at Typhoon Labs' hard-to-find ex-website, hosting Shader Designer, the tool used in the screenshot above, as well as a very nice GLSL tutorial with shiny pictures.