The Crystal Space 3D Engine
---------------------------

In this document I describe the inner workings of the current version
of the Crystal Space engine. This document will be somewhat technical at
times. Note that this document is certainly not a tutorial for 3D
calculations. You should know what polygons, plane normals, vertices,
vectors, matrices, ... are. Perspective correction and space
transformation/translation should also be understood.

Currently Crystal Space has the following features:

	- True 6DOF engine with arbitrarily sloped convex polygons.
	- Perspective correct texture mapping with interpolation
	  between every 16 pixels.
	- Support for 8-bit (palette), 15/16-bit and 32-bit truecolor
	  displays (no support for 24-bit yet).
	- Mipmapping to minimize memory strain on the texture cache
	  and to have nicer textures in the distance.
	- The width and height of textures must be a power of two
	  but textures need not be square.
	- It is possible to map the texture on a polygon in various
	  ways (rotated, scaled, mirrored, ...).
	- Moving objects and scripts controlling the movement.
	- Transparent and semi-transparent textures allowing for
	  see-through water surfaces and windows.
	- Static lighting with real shadows. Lighting and shadows
	  are precomputed before the world is displayed.
	- Colored lighting! Three different types of lights are
	  possible (this can be controlled with a configuration
	  file). The three light colors can be chosen from white,
	  red, green, or blue. A light source can mix the three
	  values.
	- Pseudo-dynamic colored lights which cannot move but have very
	  accurate shadows and can change intensity/color randomly.
	- Dynamic colored lighting with support for shadows (Things and
	  sprites are ignored currently).
	- Support for frame-based and skeletal 3D triangle mesh sprites with
	  LOD via progressive meshes!
	- Portals are used for efficient and easy visibility sorting.
	  In addition you can optionally enable a BSP for some
	  sectors to allow for even more powerful world definitions.
	- Additional C-buffer (coverage buffer) which can be enabled for
	  even better visibility culling (for all 3D rasterizers!)
	- Using portals you can also create transparent and semi-
	  transparent mirrors.
	- Alpha transparency in combination with mirrors allows simulation
	  of shining walls.
	- You can also include Gouraud-shaded triangles in the world.
	- Dynamic Gouraud-shaded sky spheres.
	- Colored volumetric fog.
	- Halos around lights.
	- Curved surfaces (currently unlighted).
	- General sound system. Currently drivers are made for Linux,
	  Macintosh, and Windows.
	- General networking system on Linux, Windows, BeOS, MacOS/X Server,
	  OpenStep, NextStep, and OS/2.
	- Powerful world file format allows you to easily redefine
	  the world (but a real editor would be better).
	- ZIP archive format to pack the world file, the textures and
	  all other needed data files for one level inside one file.
	  Crystal Space will also use the same ZIP file to automatically
	  add lightmap data. This will greatly speed up the startup of
	  Crystal Space.
	- ZIP libraries with commonly used objects and textures are also
	  supported.
	- Direct3D, Glide, and OpenGL support in Windows port (in various
	  stages of completion :-)
	- OpenGL support for BeOS and Linux as well.
	- Glide support for Linux (but not yet completely
	  functional).
	- Source is available. Crystal Space falls under the GNU
	  LGPL, a copyleft license.

Future plans:

	- 2D sprites and alpha-mapped 2D sprites for simulating
	  fog and explosions (possibly in combination with a
	  primitive particle system).
	- More powerful scripting language.
	- More and better hardware accelerator support.
	- Support for landscape capabilities and LOD (Level Of Detail).
	- and much more...

First Some Definitions
----------------------

Before starting to explain how the engine works I'm first going to give
some definitions.

  - We have several kinds of coordinate systems that are used in
    Crystal Space:

	- Object space: every object or room is defined with its own
	  local 3D coordinate system typically centered around (0,0,0).
	- World space: the 3D coordinate system corresponding to the
	  world. Objects and rooms are mapped from object space to
	  world space via a matrix transformation. Currently rooms do
	  not support such a matrix; the object-to-world transformation
	  is always the identity for rooms (or sectors).
	- Camera space: before viewing, objects and rooms are transformed
	  from world to camera space. This means that the position of
	  the camera is set at (0,0,0), the eye points in the z direction,
	  the y direction is up and the x direction is right.
	- Screen space: 2 dimensional coordinates corresponding to screen
	  coordinates. Screen space is perspective corrected camera
	  space.
	- Texture space: 2 dimensional (u,v) coordinates corresponding to
	  some point on a texture. Texture space is given by a
	  transformation matrix and vector going from object space to
	  texture space.
	- Light-map space: 2 dimensional (l,m) coordinates corresponding
	  to some point on a light-map. Light-map space is closely
	  related to texture space.

  - A Sector is like a room, only more restricted: it is a convex 3D
    object made up of convex 3D polygons (unless you have version
    0.07 and use a BSP in the Sector). In general a room
    will be made up of several Sectors (a 'room' is something that the
    engine does not really know about; it only knows about Sectors).

  - A Thing is another 3D object that is used to augment Sectors. In theory
    you could do everything with Sectors but this is not always very
    practical. Things are included to make designing stuff like stairs,
    doors, and other smaller attributes of a scene easier. Things can
    also move if so desired. Things behave very much like Sectors (this
    is explained later). Lighting and shadows work just fine.

  - A Portal is one of the more important design decisions of Crystal Space.
    Sectors are connected with Portals. A Portal is just one of the polygons
    of a Sector which will not be texture mapped (unless it has a semi-
    transparent texture) but instead the other sector (visible through that
    portal) will be drawn. Using Portals and Sectors you can in theory
    describe every 3D world you want (unless there are curved surfaces in the
    world of course).

Defining the World
------------------

The world in Crystal Space is defined with Sectors connected by Portals and
possibly augmented with Things. This is in a nutshell how Crystal Space works.

It is difficult to give an example if you can't use pictures (@@@ include
.gif file with document containing a drawing of this example) but I will try:

Assume that you want to define a large room with a pillar in the middle of the
room. You can do this in two ways: with four Sectors, or with one Sector and one
Thing. First let us define it with four Sectors:

As seen from above, the sectors would look something like this:

    +-------------------+------+---------------------+
    |                   |      |                     |
    |                   |  S2  |                     |
    |                   |      |                     |
    |                   |      |                     |
    |                   +------+                     |
    |                   |      |                     |
    |       S1          |      |         S3          |
    |                   |      |                     |
    |                   +------+                     |
    |                   |      |                     |
    |                   |  S4  |                     |
    |                   |      |                     |
    |                   |      |                     |
    +-------------------+------+---------------------+

Sector S1 has eight polygons (including the top and bottom polygon and the
three polygons at the east side). The two polygons adjacent to Sectors
S2 and S4 are Portals to the respective Sectors. All the other polygons
are texture mapped as normal.

Sectors S2 and S4 have six polygons. Their west polygons are again
Portals to Sector S1. Their east polygons are Portals to Sector S3.

Sector S3 is defined in the same way as Sector S1.

Another way to define this room, using the same four Sectors, is as
follows:

    +--------------+----------------+----------------+
    |               \              /                 |
    |                \     S2     /                  |
    |                 \          /                   |
    |                  \        /                    |
    |                   +------+                     |
    |                   |      |                     |
    |       S1          |      |         S3          |
    |                   |      |                     |
    |                   +------+                     |
    |                  /        \                    |
    |                 /    S4    \                   |
    |                /            \                  |
    |               /              \                 |
    +--------------+----------------+----------------+

To the person standing in this room this makes no difference at
all.

There are many other ways to define this room using the four
Sectors. One important thing to note is that four is the minimum
number of Sectors that you need to define this room (unless you use
a Thing as described below). The reason is that Sectors are convex.

An easier way to define this room is by using only one Sector and
one Thing to define the pillar:

    +------------------------------------------------+
    |                                                |
    |                                                |
    |                                                |
    |                                                |
    |                   +------+                     |
    |                   |      |                     |
    |       S1          |  T1  |                     |
    |                   |      |                     |
    |                   +------+                     |
    |                                                |
    |                                                |
    |                                                |
    |                                                |
    +------------------------------------------------+

Again this makes no difference for the person standing in this room.
There is however a difference in performance. If the pillar is very large
and wide the first approach will probably be faster. If the pillar is
very thin it is more efficient to use only one Sector with one Thing.

The reason (as will be made clearer later) is that Things are drawn
after the Sectors have been drawn and thus cause overdraw.

Things are provided to make defining worlds easier. If they are small
enough they will probably also enhance performance.

With Sectors, Portals, and Things you can describe virtually any world
that you want (note that all polygons of a Sector can be Portals, even
the floor and ceiling polygons. In fact there is no special attribute for
a floor or ceiling polygon. All polygons are equivalent).

Sectors
-------

In this section I will describe Sectors a bit more thoroughly. As stated
before Sectors are 3D convex objects. The faces of a Sector are made up
of convex 3D polygons. The fact that Sectors need to be convex is a serious
restriction but this is solved by the use of Portals.

Sectors contain a set of vertices which can be shared by all the polygons
of the Sector. 

Sectors should be closed. In other words, the polygons of a Sector must
together completely enclose its volume.

Starting with version 0.07 of Crystal Space it is possible to add a BSP
to a Sector. This is useful if you want to define non-convex Sectors.
The BSP is local to the Sector (another Sector can use another BSP if
it wishes). The Sector still uses Portals to go to another Sector.

Things
------

A Thing is very similar to a Sector in many ways. In the code this is
reflected by the fact that both inherit from the same class (PolySet).

One of the major differences between Things and Sectors is that a Thing
need not be convex (but the polygons making up the Thing should still be
convex). Z-buffering is used to draw Things so the polygons can be oriented
in any way possible. You could for example make a Thing with several polygons
which are not even connected with each other.

Note that polygons have a visible side and an invisible side (backface
culling).

Currently polygons in Things cannot be Portals. In future I plan to fix this.
With this feature you could have very interesting effects (like a television
Thing with the screen a Portal to some Sector).

Polygons
--------

Sectors and Things are made of 3D polygons. As mentioned before polygons must
be convex. The vertices of polygons are oriented clockwise. This fact is used
for backface culling; a polygon has only one visible side.
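
Because the vertices are oriented consistently, the backface test reduces to a
sign check. A minimal sketch with illustrative names (not the engine's actual
classes):

```cpp
#include <cassert>

// In camera space the eye sits at the origin, so a polygon faces the
// viewer when the vector from the eye to any of its vertices points
// against the plane normal.
struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// 'normal' is the polygon's plane normal and 'v' any vertex of the
// polygon, both already transformed to camera space.
bool isFrontFacing(Vec3 normal, Vec3 v) {
    return dot(normal, v) < 0;  // negative: the visible side faces the eye
}
```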

Polygons are drawn with a texture. How the texture is mapped on the polygon
depends on a transformation matrix. This is general enough so that you can
translate, rotate, scale, and mirror the texture in every possible direction.

The texture is tiled across the polygon surface.

In a pre-computing stage three light-maps are created for every polygon (this
is explained in more detail later). Lighting is sampled in a grid of 16x16
texture pixels (or texels). Bilinear interpolation is used by the texture
cache to make this lighting look smooth.

The end result of this is a non-tiled lighted texture that is mapped across
the polygon surface.
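
The interpolation itself is the standard bilinear blend of the four lightmap
samples surrounding a texel. A generic sketch (not the engine's actual code):

```cpp
#include <cassert>

// Blend the four lightmap samples around a texel. c00/c10/c01/c11 are
// the light levels at the corners of one 16x16-texel grid cell; fx and
// fy are the texel's fractional position inside the cell, in [0,1].
double bilerp(double c00, double c10, double c01, double c11,
              double fx, double fy) {
    double top    = c00 + (c10 - c00) * fx;  // interpolate along x (top edge)
    double bottom = c01 + (c11 - c01) * fx;  // interpolate along x (bottom edge)
    return top + (bottom - top) * fy;        // interpolate along y
}
```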

A polygon can also be a Portal (see below). Normally a Portal polygon is not
texture mapped unless the texture is semi-transparent.

Portals
-------

A Portal is a special kind of polygon. Portal polygons can currently only exist
in Sectors.

Instead of texture mapping a Portal polygon, the renderer will recursively
draw the Sector that this Portal points to. After this, if the texture is
semi-transparent, the texture will be mapped over the already drawn Sector.

Portals can also transform space. This feature is new since 0.07.
You can use it to implement mirrors or reflecting surfaces.

Note that when there is a Portal from Sector A to Sector B you should
probably also define a Portal from Sector B to Sector A! Adjacent polygons
of different Sectors are not shared so you need to set a Portal on each
of them. Otherwise you will have the effect that you can see from Sector A
to Sector B but not the other way around.

A special feature of Portals is that you could (in theory) have a Portal
from Sector A to Sector B. But instead of going back to Sector A from
Sector B you set the Portal to Sector C which is a Sector which has the
same world space coordinates as Sector A. This is perfectly possible
(although maybe not desirable) with Crystal Space. An important result
of this is that a given world space coordinate can belong to more than
one Sector! Another corollary of this is that you always need a current
Sector together with a world space coordinate to really know where you
are!

Portals in Crystal Space solve the problem of polygon sorting. All polygons
in the current Sector are certainly visible (unless they are behind the view
plane) and do not overlap, so they can just be drawn in any order without
overdraw and without conflicts. If a Portal polygon is reached all Polygons
in that other Sector are behind all the Polygons in the current Sector.
In fact Portals are an explicit form of a BSP tree. The advantages of this
approach are:

	- In theory it would be rather easy to make dynamic worlds. Because
	  the Portals are explicit it is easy to define them so that certain
	  sectors can move and transform in certain ways. Currently this
	  does not work because I have not done anything yet to make this
	  work :-)  There are also some problems with static lighting but
	  this is not severe if the movement is not too large.

	- Because it is an explicit form of a BSP tree split, I think that
	  it is more efficient than a real BSP tree. I have not confirmed
	  this.

	- It is easy to define semi-transparent textures on Portals. This
	  would be more difficult to do with BSP trees.

	- Space warping can be used (see above).

	- Overdraw elimination comes for free with Portals.

One disadvantage I could think of:

	- It is probably more difficult to define worlds this way. You have
	  to make sure that all Sectors are convex. A BSP tree approach would
	  solve this automatically.

The Camera
----------

The position of our 'hero' is defined by a Camera object. It would be very
easy to add more cameras (useful for multiplayer games or for environments
with 3D glasses).

The Camera is defined by a 3x3 rotation matrix, a position vector, and
the current Sector.

The matrix rotates the world such that the eye points along the Z axis,
the X axis is right of the eye, and the Y axis points upwards.
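
Putting the matrix and position vector together, transforming a world space
point into camera space is a subtraction followed by the rotation. A sketch
with illustrative names (the engine uses its own matrix and vector classes):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// v_camera = M * (v_world - camera_position), where M is the camera's
// 3x3 rotation matrix.
Vec3 worldToCamera(const double M[3][3], Vec3 camPos, Vec3 w) {
    Vec3 d { w.x - camPos.x, w.y - camPos.y, w.z - camPos.z };
    return Vec3 {
        M[0][0] * d.x + M[0][1] * d.y + M[0][2] * d.z,
        M[1][0] * d.x + M[1][1] * d.y + M[1][2] * d.z,
        M[2][0] * d.x + M[2][1] * d.y + M[2][2] * d.z
    };
}
```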

The Renderer
------------

How does the renderer use this all to draw a scene?
Here I describe all the various steps to draw the world. The Camera
and a view 2D polygon are given as parameters (the view polygon defines
what is visible on the screen).

    - First the current Sector is transformed from world to camera space
      using the given Camera (this means that all of the vertices are
      transformed).

    - Then, for every polygon of the current Sector do the following (if
      there is a BSP for this Sector then the following is also done but
      in back to front order as defined by the BSP):

	- Perform perspective correction (division by 'z') on all the
	  vertices of the polygon.

	- If all of the vertices are behind the viewplane (Z=EPSILON)
	  then the polygon is not visible and need not be drawn.

	- Here we perform backface culling to see if the polygon can
	  be visible. Note that for the current Sector all polygons
	  are always visible but this step is important if we are
	  drawing the Sector behind a Portal.

	- If all of the vertices are in front of the viewplane then
	  the polygon is completely visible. Skip the following step.

		- Otherwise we need to clip the polygon against the
		  viewplane.

	- If the polygon is still visible after all these steps we
	  transform the texture mapping matrix from world->texture
	  to camera->texture. We also transform the plane normal of
	  the polygon to camera space.

	- Now we clip the polygon against the view polygon. The view
	  polygon is a general 2D polygon, not just a rectangle.

	- If the resulting 2D polygon is not a Portal it is just drawn
	  on the screen.

	- Otherwise this routine is recursively called again with
	  the Sector that the Portal points to as the current Sector
	  and the resulting clipped 2D polygon as a new view polygon.
	  If the texture of the Portal polygon is semi-transparent
	  it is drawn over the resulting image.

    - After the current Sector has been drawn, do the following for
      every Thing in this Sector:

	- First the Thing is transformed from world to camera space
	  using the given Camera. If all vertices are behind the
	  viewplane the whole Thing is not visible and we need not
	  draw it.

	- Otherwise, for every polygon of the Thing do the following:

	    - Perform perspective correction (division by 'z') on all
	      the vertices of the polygon.

	    - If all of the vertices are behind the viewplane (Z=EPSILON)
	      then the polygon is not visible and need not be drawn.

	    - Here we perform backface culling to see if the polygon can
	      be visible.

	    - If all of the vertices are in front of the viewplane then
	      the polygon is completely visible. Skip the following step.

		- Otherwise we need to clip the polygon against the
		  viewplane.

	    - If the polygon is still visible after all these steps we
	      transform the texture mapping matrix from world->texture
	      to camera->texture. We also transform the plane normal of
	      the polygon to camera space.

	    - Now we clip the polygon against the view polygon.

	    - Draw the polygon with a Z-buffering scanline drawer.

Before this algorithm is performed the Z-buffer is cleared once.
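
The "division by z" step used in both loops above can be sketched like this;
the field-of-view scale and screen center are made-up parameters, not the
engine's actual values:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Project a camera-space vertex (c.z > 0, i.e. in front of the
// viewplane) onto the screen by dividing by z.
Vec2 project(Vec3 c, double fov, double centerX, double centerY) {
    return Vec2 { c.x * fov / c.z + centerX,
                  c.y * fov / c.z + centerY };
}
```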

A consequence of this algorithm is that there is no overdraw when drawing
the Sectors. The 2D polygon clipping algorithm takes care of that. One
may think that this algorithm would be rather expensive but this does not
seem to be the case. So we eliminate overdraw when drawing all the Sectors
without having to resort to S-buffer or other similar techniques.

There is overdraw when Things are drawn. This is difficult to avoid. One
could consider another clipping algorithm but this would result in concave
polygons which our polygon drawer can't handle.

A current limitation in the drawing of Things is that a Thing cannot
be in two Sectors at the same time. This limitation can be removed with
various techniques. In future I will probably have a special class of
Things which can span several Sectors at once.

Lighting
--------

Crystal Space supports three different lighting tables. Every light in the
scene can emit some amount of each variety of light. For example, if the
three tables are red, green, and blue a light could emit light with intensity
(.5, 0, 1) which means: half intensity red, no green, and full intensity
blue. With the MIXLIGHTS setting you can control how the three colors are
defined and how the mixing happens (see below for a description).

Every light has a position in world space coordinates and a current Sector.

Every light also has a radius (expressed in squared distance). This radius
indicates where the light levels of the three light tables will reach zero.
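
For illustration, an attenuation function matching this description might look
like the following. The linear falloff shape is my assumption; the text only
fixes the point where the level reaches zero:

```cpp
#include <cassert>

// Scale a light's intensity by squared distance so that it reaches zero
// exactly at the (squared) radius. The linear falloff curve here is an
// assumption, not the engine's documented formula.
double attenuate(double intensity, double sqDist, double sqRadius) {
    if (sqDist >= sqRadius) return 0.0;   // outside the light's radius
    return intensity * (1.0 - sqDist / sqRadius);
}
```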

Lighting is precomputed. Every polygon has three lightmaps (for the three
tables) with one light intensity level for every 16x16 texel grid. The
lighting algorithm works as follows:

    - For every light in the world:

	- For every polygon in the current Sector:

	    - Check (with backface culling) if the polygon is visible from
	      the position of the light.

	    - If it is visible, check if the shortest distance of the polygon
	      to the light is within the radius of the light.

	    - If so the polygon is hit by the light. If not hit we go to the
	      following polygon.

	    - If the polygon is a Portal we recursively call this routine
	      again for this light and the new Sector.

	    - Otherwise we shoot a beam at every lightmap position (see below
	      for a description) and see if that particular beam really reaches
	      the polygon. The algorithm to check this is described below.
	      If there is a hit we update the lightmap tables on that point
	      based on the distance to that point and the intensities of the
	      three light attributes.

	- For every Thing in the current Sector:

	    - For every polygon of the Thing:

		- Check with backface culling if the polygon is visible from
		  the position of the light.

		- If visible we again shoot a beam at every lightmap position
		  (as for the Sector polygons).

To create the three lightmaps for every polygon a 2D bounding box in texture
space is calculated. This defines a rectangle that overlaps with the texture
on the polygon and is correctly aligned with it (so that every 16x16 texel
grid has one lightmap position). A consequence of this is that rotated
textures can waste a lot of lightmap space. For example, see the following
polygon:

                   +
                  / \
                 /   \
                /     \
               +       +
                \     /
                 \   /
                  \ /
                   +

and assume that the texture is aligned horizontally. Then we would need a
lightmap of the following size:

               +---+---+
               |  / \  |
               | /   \ |
               |/     \|
               +       +
               |\     /|
               | \   / |
               |  \ /  |
               +---+---+

(note that the texture in the texture cache (see later) will also have
that size).

The algorithm to see if a given beam reaches a specific point on
some polygon is as follows. The beam is described as two vertices
('start' and 'end'):

We start at the Sector of the light.

    - If the current Sector is equal to the Sector of the polygon
      then there is a hit. To see why this is true you have to consider
      that there are two cases:
	- The light is in this Sector. In this case the statement is
	  obvious, since all polygons of a Sector are completely visible
	  from anywhere in a Sector.
	- Otherwise, if the light is not in this Sector, the beam could
	  reach this Sector through a Portal. Since the beam certainly
	  ends at the polygon (this is given, the 'start' and 'end'
	  vertices define a beam that will (if not blocked) reach the
	  polygon) and the beam passed through Portals to reach here
	  then the beam hits the polygon.

    - If there is no hit we do the following:

	- See which polygon of the current Sector intersects with the
	  beam.

	- If the polygon is on the same plane as the destination polygon
	  then there is a hit. The reason this is true has to do with
	  the fact that the lightmaps are slightly bigger than what is
	  really needed. Bilinear interpolation is used to finally
	  light the polygon and it has to be able to correctly interpolate
	  at the boundaries of the polygon as well.

	- If there is no hit we continue here:

	    - If the polygon that is hit is a Portal we recursively continue
	      with the next Sector. If the recursive call returns a
	      hit then we have a hit.

	    - If the polygon is not a Portal we have no hit and we return
	      from the algorithm (the beam of light does not reach the
	      polygon).

    - In all cases where we had a hit according to the previous steps we
      come here to check if there are no Things blocking the beam.

    - For every Thing in the current Sector check if the beam intersects
      with one of the polygons of the Thing. If so then there is no hit.

    - Otherwise there is a hit.

A similar algorithm is used for Things.
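
Abstracting the geometric queries away, the recursion above can be condensed
into a sketch like this. All names are illustrative, real geometry is replaced
by precomputed answers, and the final Thing-blocking pass is omitted:

```cpp
#include <cassert>

// Each sector records which of its polygons the beam toward the target
// would strike, and whether that polygon is a portal (and to where).
struct Sector;
struct Polygon {
    bool isPortal = false;
    Sector* portalTarget = nullptr;
    bool onTargetPlane = false;  // same plane as the destination polygon
};
struct Sector {
    Polygon* hitPolygon = nullptr;  // polygon the beam intersects here
};

bool beamReaches(Sector* current, Sector* target) {
    if (current == target) return true;     // beam ends in this sector: hit
    Polygon* p = current->hitPolygon;
    if (p == nullptr) return false;
    if (p->onTargetPlane) return true;      // lightmap-border case: hit
    if (p->isPortal) return beamReaches(p->portalTarget, target);
    return false;                           // a solid wall blocks the beam
}
```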


The Texture Cache
-----------------

Every polygon has a tiled texture and three lightmaps. When a polygon needs
to be displayed the Texture Cache will combine this into a new untiled
texture that can be displayed. The lightmaps are combined with bilinear
interpolation with the original tiled texture. After that the new texture
is put in the cache. If a texture was already in the cache it will not
be generated again but it will be put in front of all other textures. If
the cache is full the textures that have been used least recently are removed
first.
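
The replacement policy described here is least-recently-used. A sketch of just
that policy (the real cache stores the generated lighted textures; here the
payload is reduced to a key):

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Least-recently-used cache: touching a key moves it to the front; when
// the cache is full, the key at the back (least recently used) is evicted.
class LRUCache {
    std::size_t capacity;
    std::list<std::string> order;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> pos;
public:
    explicit LRUCache(std::size_t cap) : capacity(cap) {}
    bool contains(const std::string& k) const { return pos.count(k) != 0; }
    void touch(const std::string& k) {
        auto it = pos.find(k);
        if (it != pos.end()) {
            order.erase(it->second);   // already cached: just move to front
        } else if (order.size() == capacity) {
            pos.erase(order.back());   // evict the least recently used entry
            order.pop_back();
        }
        order.push_front(k);
        pos[k] = order.begin();
    }
};
```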


Pseudo-Dynamic Lighting Extension
---------------------------------

The current dynamic lighting system is an extension to the static lighting
system described above. These dynamic lights cannot move but you can
change their intensity or color. When some light is made dynamic (a simple
flag) the processing is a bit different. The shadow computation remains
the same but every polygon that is hit by the dynamic light needs to have
separate lightmaps for every dynamic light that hits the polygon. So in
the end all the polygons that are only hit by static lights have just one
lightmap (for the three tables) for all static lighting information.
Every polygon that is hit by one or more dynamic lights will have one
or more extra lightmap tables for every dynamic light and one extra for
all static lights. This information is then added together to result in
the final lightmap table that can then be used by the texture cache
routine (like explained above).

The lightmaps for the dynamic lights are stored without the strength of
the light. So where the light shines brightest the value in the lightmap
will be 255; where it shines least bright the value will be 0. When
combining all dynamic lightmaps and the static lightmap, the strength of
every light is multiplied by the stored distance value to produce the
real lightmap value. This also implies that there is only one extra
lightmap per polygon/dynamic light, because the shadow information (which
is basically what is represented in dynamic lightmaps) is the same for
all three light-tables.
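
The combination described above can be sketched as follows (the names, the
integer scaling, and the clamp at 255 are illustrative assumptions):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Combine the static lightmap with the per-light dynamic shadow maps.
// Each dynamic entry pairs the light's current strength (0..1) with its
// stored shadow map (0..255); the strength scales the stored values at
// display time.
std::vector<int> combineLightmaps(
        const std::vector<int>& staticMap,
        const std::vector<std::pair<double, std::vector<int>>>& dynamic) {
    std::vector<int> out = staticMap;
    for (const auto& d : dynamic) {
        for (std::size_t i = 0; i < out.size(); i++) {
            out[i] += static_cast<int>(d.first * d.second[i]);
            if (out[i] > 255) out[i] = 255;  // clamp to the lightmap range
        }
    }
    return out;
}
```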

Here are some performance considerations for dynamic lighting. There is
a memory penalty for every polygon that is hit by some dynamic lighting
because extra lightmaps need to be created. So if you use no dynamic
lighting you will suffer no penalty but if you use a very large dynamic
light (with a large radius) it will probably hit a large amount of
polygons and you will have a considerable increase in memory requirements.

There is also a performance penalty whenever the intensity of a dynamic
light changes. This is because all the polygons that are hit by the
dynamic light need to be recalculated (they are removed from the texture
cache). The texture cache will need to be made much faster before this
dynamic lighting feature can really be used for continuous lighting.
Currently it is usable for switching on/off some not-too-big light in
some room.

If the intensity of a dynamic light does not change there is no
performance hit at all.


True Dynamic Lights
-------------------

The latest Crystal Space beta versions also support true dynamic lights.
These are rendered on top of the normal lightmaps as explained above and
support colors and limited shadows.


Mipmapping
----------

In order to minimize the large memory requirements of the Texture Cache and
also to make nicer looking textures when the polygons are far away, mipmapping
is used.

There are four levels: the first level is the original unchanged texture.

Every following level is made by halving the previous level both horizontally
and vertically and applying some anti-aliasing filter. Currently three filters
are supported:

	- Just remove all odd pixels and retain the others. This is fast to
	  precompute but not very nice in appearance.

	- Just calculate the mean color value of every block of four pixels.
	  This is reasonably fast but there is a slight shift in the
	  mipmapped textures.

	- The best result can be had by applying the following filter:
			1 2 1
			2 4 2
			1 2 1

The last algorithm is the best and is the one that should be used. I will
remove the other algorithms soon.

Note that the general filter algorithm is slowest but this speed penalty
is only for precomputing the textures. The renderer is not affected by this.
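
On a single grayscale channel, the 1-2-1 filter can be sketched like this
(clamping at the image edges is my assumption; the text does not specify how
edges are handled):

```cpp
#include <cassert>
#include <vector>

// Halve a w x h grayscale image with the 3x3 weighted filter
//     1 2 1
//     2 4 2
//     1 2 1
// centered on every second source pixel. Samples outside the image are
// clamped to the nearest edge pixel.
std::vector<int> mipmapHalve(const std::vector<int>& src, int w, int h) {
    auto at = [&](int x, int y) {
        if (x < 0) x = 0; if (x >= w) x = w - 1;
        if (y < 0) y = 0; if (y >= h) y = h - 1;
        return src[y * w + x];
    };
    std::vector<int> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int sx = 2 * x, sy = 2 * y;
            int sum = 1 * at(sx - 1, sy - 1) + 2 * at(sx, sy - 1) + 1 * at(sx + 1, sy - 1)
                    + 2 * at(sx - 1, sy)     + 4 * at(sx, sy)     + 2 * at(sx + 1, sy)
                    + 1 * at(sx - 1, sy + 1) + 2 * at(sx, sy + 1) + 1 * at(sx + 1, sy + 1);
            dst[y * (w / 2) + x] = sum / 16;  // total filter weight is 16
        }
    return dst;
}
```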

Mipmapping affects other parts of the engine as well. When lighting is
precalculated this needs to be done for all four mipmap levels.


For the Future
--------------

(Text not finished...)

