Raytracing, although still too expensive to perform for whole scenes in real time, by now plays a significant role in various kinds of visual effects. This page therefore shows some simple applications of techniques along the lines of the textbook "Texturing & Modeling: A Procedural Approach" (Morgan Kaufmann Publishers), which contains many ideas that are by now suitable for real-time applications.


The real shape

First of all, when you start toying around with raytracing shaders you need, of course, some geometry to emit and thus generate a set of pixel fragments to be processed by the shader. Most of the time two-dimensional shapes suffice, as the depth of the object will be emulated by the shader; billboarding techniques may therefore be enough to generate the underlying geometry. For simplicity this issue is omitted here, as it is not relevant for the intended didactic development. No matter what the underlying geometric shape is, raytracing can be performed for each fragment as it is drawn on the screen.

The virtual shape

Let's assume you have a rectangle aligned to the camera's X- and Y-axes whose fragments are to be used to draw an ellipsoid shape. This can be done by tracing the ray from the camera's position through each fragment to obtain the intersection points with the virtual ellipsoid. The difference of those points is the length of the ray's path through the ellipsoid and can, as a first approach, be used to color the virtual shape according to that length, resulting in a semi-opaque "egg". The apparent advantage of this technique is that the tessellation of the shape is always pixel-perfect, because the defining formula is evaluated per pixel.
    A GLSL fragment performing ray-ellipsoid intersection:

    /*
     * Intersection of a ray (origin, direction) and a non-rotated
     * ellipsoid (center, vec3(dimensions)).
     * Returns true if such an intersection exists, in which case
     * rayScalar1 and rayScalar2 are set to fulfil the equation
     *   intersectionX = rayOrigin + rayScalarX * rayDirection.
     * If false is returned there is no such intersection and the
     * contents of rayScalarX are left unchanged.
     * This code is vectorized to achieve full performance on current
     * hardware, which means the underlying maths is somewhat hard to
     * spot. A paper showing the essential parts of the derivation of
     * the base formula can be found at
     * http://www.dark-orb.com/ung/RayEllipsoid.pdf .
     */
    bool ray_ellipsoid_intersection(
            in vec3 ellipseCenter,
            in vec3 ellipseDimensions,
            in vec3 rayOrigin,
            in vec3 rayDirection,
            out float rayScalar1,
            out float rayScalar2)
    {
        vec3 dimSqr = ellipseDimensions * ellipseDimensions;
        vec3 delta_p = rayOrigin - ellipseCenter;

        vec3 rd_ds = rayDirection / dimSqr;
        float alpha_sqr = dot(rayDirection, rd_ds);
        float alpha_one = dot(rd_ds * 2.0, delta_p);
        float no_alpha = dot(delta_p / dimSqr, delta_p) - 1.0;
        float p = alpha_one / alpha_sqr;
        float q = no_alpha / alpha_sqr;

        // Discriminant of the quadratic; negative means the ray misses.
        float root_term = p * p * 0.25 - q;
        if (root_term < 0.0) return false;

        float root = sqrt(root_term);

        p *= -0.5;

        float s1 = p + root;
        float s2 = p - root;

        // Order the results so that rayScalar1 <= rayScalar2.
        if (s1 < s2) {
            rayScalar1 = s1;
            rayScalar2 = s2;
        } else {
            rayScalar1 = s2;
            rayScalar2 = s1;
        }
        return true;
    }

The colors

In the textbook mentioned above, F. Kenton Musgrave presents many different kinds of noise functions, each resulting in a different visual appearance. For our purposes a very simple, basic noise is enough to achieve a broad range of visual effects. In the examples below a randomly filled grid is interpolated using the sine function, which has the advantage that its derivatives are continuous everywhere while requiring knowledge of only the current interpolation interval. I have had good experiences with this technique, but trying out any of the others found in the book may be worth a look.

This approach may be deprecated on higher-end hardware, since the current standards do define noise functions accessible directly in shaders, but these seem to be quite expensive to implement. It is therefore advisable to use precomputed noise maps fed into the shader as textures. This results in a significant performance gain compared to evaluating such a noise function directly in the shader, but has the drawback that the resolution of the noise has to be chosen wisely: memory consumption grows cubically as the resolution increases, while too low a resolution gives poor visual results when the camera gets close to a noise-mapped surface. For my applications a resolution of 256*256*256 was sufficient, which at one byte per texel results in a memory consumption of 16MB per noise map and ought to fit into the graphics board's RAM with some space left for other things.

To make the noise less static and foreseeable it is advisable to use a number of these maps and apply one of them per shape, possibly with per-shape random offsets. In my experience ten such maps were enough to ensure that no unwanted visual correlation between the shapes appeared, but that number was guessed and is likely higher than needed, as the complex processing of the maps will probably cancel out any such correlations by itself. The background image of the main website shows a few different galactic nebulae rendered with such maps.
The computation of those maps is quite time-consuming.