VLP is an abbreviation for **V**iew **L**eap **P**lane, a name which
is more an artifact than anything else. VLPs are very similar to MIT's
cells and portals, though they were developed independently. VLP design
differs from most types of renderers in that, aside from drawing a
scene quickly, it also provides a good deal of information about the
world. Implementations built on models such as BSP only give
information about which polygons are in front of each other. VLPs do
quite a bit more: one decomposes the world into logical spaces and
processes data only within those logical spaces. Collision detection,
for example, can be restricted to objects in the same logical space.
This greatly speeds up many calculations, as all irrelevant data can
be culled out.
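The collision restriction described above can be sketched in a few lines. This is an illustrative sketch only; the names (`Object`, `collision_candidates`, the `space` field) are assumptions, not part of any VLP implementation:

```python
from dataclasses import dataclass

@dataclass
class Object:
    name: str
    space: str   # id of the logical space the object currently occupies

def collision_candidates(obj, objects):
    # Collision tests need only consider objects sharing obj's logical
    # space; everything else is culled before any geometry is examined.
    return [o for o in objects if o is not obj and o.space == obj.space]

objs = [Object("player", "hall"), Object("crate", "hall"), Object("lamp", "attic")]
print([o.name for o in collision_candidates(objs[0], objs)])  # ['crate']
```

The lamp in the attic is never tested against the player in the hall, no matter how large the world grows.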

Technical Discussion

As mentioned above, VLPs decompose the world into logical spaces. These spaces represent distinct areas, connected to each other by VLPs. Consider each area to be a room, with VLPs as doors between them. A VLP is essentially a polygon representing a viewing portal which links the data on its far side to the space the VLP exists in. Objects on the far side of a VLP can be visible only if the VLP itself is visible. As the renderer progresses through more and more VLPs, fewer will be visible, until eventually all visible VLPs have been processed.
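The room-and-door picture suggests a simple data structure. A hedged sketch, with all names (`Space`, `VLP`, `ring`, `target`) chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VLP:
    """A portal polygon linking the data on its far side to a space."""
    ring: list        # planar ring of vertices in world space
    target: "Space"   # the space whose data is visible through this VLP

@dataclass
class Space:
    """A logical space (a 'room'), holding its outgoing VLPs."""
    name: str
    vlps: list = field(default_factory=list)

hall, study = Space("hall"), Space("study")
# A doorway is modelled as a pair of VLPs, one in each direction.
hall.vlps.append(VLP(ring=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], target=study))
study.vlps.append(VLP(ring=[(1, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0)], target=hall))
print(hall.vlps[0].target.name)  # study
```

Each VLP points directly at the space it exposes, so traversal never searches the world database.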

The general algorithm employed is as follows: we first fix the viewpoint and a plane that is normal to the direction of sight. The VLPs are treated as planar rings in space and are projected onto this plane in two dimensions. These 2D coordinates represent the VLP as viewed on the screen. For each VLP, we intersect it with the current "mask," which is initially set to the size of the screen. Each VLP passes its intersection on to the data set that it links to, and this data set sets its current mask to this intersection. The data set then repeats the above process, intersecting each of its own VLPs with the current mask. The algorithm appears simple on the surface, but several considerations make it more complex. Notably, data sets can be reached through multiple paths, and we would like to avoid rendering a data set once for each path.
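The traversal above can be sketched under a simplifying assumption: masks are axis-aligned 2D rectangles `(x0, y0, x1, y1)` rather than projected polygons, and the multiple-path consideration is answered in the simplest way, by marking each data set visited (merging the masks arriving along different paths would be an alternative). All names here are illustrative:

```python
def intersect(a, b):
    """Intersect two rectangles; return None if they do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def render(space, mask, visited, portals):
    # portals: dict mapping a space to a list of (projected_rect, target_space)
    if space in visited:        # a space reachable through several VLPs
        return                  # is processed only once per frame
    visited.add(space)
    for rect, target in portals.get(space, []):
        sub = intersect(rect, mask)
        if sub is not None:     # the VLP shows through the current mask
            render(target, sub, visited, portals)

screen = (0, 0, 640, 480)
portals = {
    "hall":  [((100, 100, 300, 300), "study")],
    "study": [((150, 150, 250, 250), "attic"),
              ((500, 0, 600, 50),   "cellar")],  # falls outside the hall's mask
}
visited = set()
render("hall", screen, visited, portals)
print(sorted(visited))  # ['attic', 'hall', 'study']
```

The cellar's VLP lies outside the mask inherited from the hall, so the cellar's data is never touched, exactly the culling the text describes.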

Use of VLPs allows the renderer to process only the data that will actually be visible from the viewpoint. In essence it culls out all data which will not be seen. This allows for an arbitrarily large world, as rendering takes time proportional only to local object complexity. Even if there are a very large number of data points in the world, the renderer will only consider the local ones. As the VLPs are essentially direct links to the next data set, no time is ever spent searching a large database.

As noted above, collision detection is greatly sped up through the use of VLPs. There are other applications of this data abstraction. In a large online environment it is often infeasible to hold the entire data set for the world at once. With VLPs the client can describe exactly which data is required for rendering, and minimize the data stored locally.

The current implementation uses explicit intersection algorithms for each VLP visibility test. At the moment these are computationally expensive, considering that they should be performed a great number of times each frame. Work is currently being done on a randomized approach to visibility as well as a deterministic "weight"-based approach. In the randomized version, the algorithm attempts to find an invisibility certifier quickly. When using randomized algorithms it is important not to generate false negatives, or false claims of invisibility; such errors would result in rendering defects. False positives are acceptable (although undesirable), as they only mean more data to draw. The weight-based approach would assign a certain weight to each portal. The renderer would traverse portals until a certain total weight is reached. At present this is the fastest option, but certainly the least accurate.
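One way the weight-based approach could look, sketched under assumptions the text does not pin down (how weights accumulate along a path, and what the budget is). Here weights add along each path and traversal proceeds cheapest-first until the budget is exhausted; the weights and budget are invented for illustration:

```python
import heapq

def traverse_by_weight(start, portals, budget):
    """Visit spaces cheapest-first by accumulated portal weight; spaces
    whose cheapest path exceeds the budget are simply not drawn."""
    visited = set()
    heap = [(0.0, start)]            # (accumulated weight, space)
    while heap:
        w, space = heapq.heappop(heap)
        if space in visited or w > budget:
            continue
        visited.add(space)
        for weight, target in portals.get(space, []):
            heapq.heappush(heap, (w + weight, target))
    return visited

portals = {
    "hall":  [(1.0, "study"), (5.0, "cellar")],   # the cellar is "heavy"
    "study": [(1.0, "attic")],
}
print(sorted(traverse_by_weight("hall", portals, budget=3.0)))
# ['attic', 'hall', 'study']
```

This shows why the approach is fast but inaccurate: the cellar may well be visible, yet it is skipped purely because its weight pushes the path past the budget.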