I haven't necessarily put a lot of thought into these ideas and some of them may be way off-base. Some of them probably aren't original either; I'm sure others have thought of these things too.
Constructive criticism is appreciated; non-constructive criticism will be ignored.
Last updated: 1 April 2002
Cass Everitt's paper Order-independent Transparency describes the 'depth peeling' technique, implemented with a shadow map texture because OpenGL only has one Z buffer.
A new function such as glDepthUnit(GLenum unit), where unit is GL_DEPTH_UNIT0, GL_DEPTH_UNIT1, etc., would specify the current depth unit. glDepthFunc(), glDepthMask() and glEnable/Disable(GL_DEPTH_TEST) would modify the current depth unit's attributes, much as glActiveTexture() works for texture units. glClear() would clear all depth/Z buffers (except those whose depth writemask is disabled with glDepthMask()).
Extra GLX/WGL language would be needed to describe visuals with multiple Z buffers.
There are probably uses for this beyond transparency depth peeling.
One might also consider having multiple stencil buffers, more color
buffers (aux buffers), etc.
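As a sketch of how two depth units could drive the first two peeling passes, here's the per-pixel logic in plain C. The function and type names (peel_init, pass1_fragment, etc.) are made up for illustration; in the proposed API, the second pass would configure a GL_GREATER test against unit 0 and a GL_LESS test against unit 1 via glDepthUnit().

```c
#include <float.h>

/* Per-pixel state for a hypothetical pair of depth units.
   Unit 0 ends up holding the nearest layer's depth; unit 1
   accumulates the nearest fragment strictly behind unit 0. */
typedef struct { float depth[2]; } PixelDepths;

void peel_init(PixelDepths *p)
{
    p->depth[0] = FLT_MAX;
    p->depth[1] = FLT_MAX;
}

/* Pass 1: ordinary GL_LESS depth test against unit 0. */
void pass1_fragment(PixelDepths *p, float z)
{
    if (z < p->depth[0])
        p->depth[0] = z;
}

/* Pass 2: GL_GREATER vs. unit 0 combined with GL_LESS vs. unit 1,
   which keeps the nearest fragment behind the first layer. */
void pass2_fragment(PixelDepths *p, float z)
{
    if (z > p->depth[0] && z < p->depth[1])
        p->depth[1] = z;
}
```

Running all fragments through pass 1, then all fragments again through pass 2, peels the two nearest layers without any shadow-map workaround.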
A cubical shadow map would be omnidirectional and allow shadowing in all directions from a light source.
With 2D shadow mapping, texture coordinates are generated such that the fragment S and T coordinates map onto the shadow projection plane and R becomes the distance from the light source to the current fragment.
With cubical shadow maps the direction from the light source to the fragment would be in S, T and R. The Q component would encode the distance from the light source to the fragment. (S, T, R) would be used to choose one of the six cube map images. Q would be used in the texture compare stage to determine whether the fragment is in or out of shadow.
Q is tricky though - we don't want the Pythagorean distance from the light source to the fragment. We actually want the distance from the shadow projection plane to the fragment. Recall that when the cube map depth textures are generated the depth values are distances from the projection plane, not the center of the projection.
The texgen stage should be doable with a vertex program. The shadow comparison stage would have to be extended to allow comparing the texture coordinate Q to the sampled depth texture value.
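To make the Q issue concrete, here's a small C sketch of cube face selection and the projection-plane distance (cube_face is a made-up helper). The key point is that the value to compare against the depth map is the major-axis magnitude, not sqrt(x*x + y*y + z*z); a real implementation would additionally run it through the projection's nonlinear depth mapping, which is omitted here.

```c
#include <math.h>

/* Select the cube face for the light-to-fragment direction (x,y,z)
   and return the projection-plane distance: the magnitude of the
   major-axis component, which is what the cube depth map stores.
   Faces are numbered 0..5 as +X,-X,+Y,-Y,+Z,-Z. */
int cube_face(float x, float y, float z, float *plane_dist)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    int face;

    if (ax >= ay && ax >= az) {
        face = (x >= 0.0f) ? 0 : 1;
        *plane_dist = ax;               /* not the Euclidean distance */
    }
    else if (ay >= ax && ay >= az) {
        face = (y >= 0.0f) ? 2 : 3;
        *plane_dist = ay;
    }
    else {
        face = (z >= 0.0f) ? 4 : 5;
        *plane_dist = az;
    }
    return face;
}
```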
I'm sure someone else has implemented cubical shadow maps before, but
it would be new in OpenGL.
If we had a separate coverage channel, perhaps we could do something
more intelligent with a combined Z test and coverage test and avoid sorting
primitives when we want antialiasing.
Signed and floating point channels would be more useful than just unsigned channels.
If GL_PACK/UNPACK_STRIDE could be negative, you could read/draw images in top-to-bottom order.
Anyone who's implemented image file I/O in an OpenGL application can see how useful that would be.
Perhaps a new pack/unpack attribute such as GL_PACK_TOP_TO_BOTTOM (boolean)
would be better.
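To see why top-to-bottom addressing is just a negative stride, here's a software sketch of the idea in C (flip_rows is a made-up helper, not a GL function): point the base pointer at the last row and step backwards.

```c
#include <string.h>

/* Copy an image so that dst's rows run top-to-bottom relative to
   src's bottom-to-top layout. This is exactly what a negative
   row stride in the pack/unpack state would let the GL do itself. */
void flip_rows(const unsigned char *src, unsigned char *dst,
               int width, int height, int bytes_per_pixel)
{
    int row_bytes = width * bytes_per_pixel;
    /* base points at the first byte of the *last* source row */
    const unsigned char *base = src + (height - 1) * row_bytes;
    int stride = -row_bytes;            /* negative stride: walk upward */

    for (int y = 0; y < height; y++)
        memcpy(dst + y * row_bytes, base + y * stride, row_bytes);
}
```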
AccumRGB = (FrameBuffAlpha * FrameBuffRGB) + ((1 - FrameBuffAlpha) * AccumRGB)

Unfortunately, the spec doesn't say how the alpha channel is computed! When I asked, I was told that the alpha channel should be computed as for GL_ACCUM mode. The intended application for this extension is back-to-front compositing of volumetric slices.
The Porter/Duff 'over' operator for image composition is well-known and could easily be implemented as a new glAccum mode. The advantage of doing this with the accumulation buffer is the extended precision it offers; many images could be composited with little degradation of image quality.
The RGB values would be computed as above. The alpha channel would be computed as:
AccumAlpha = FrameBuffAlpha + (1 - FrameBuffAlpha) * AccumAlpha

I'll probably implement this in Mesa someday, but it probably wouldn't get much use until someone implements it in hardware. Then ISVs like Discreet might make use of it.
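Here's a minimal software model of the proposed mode, applying the two formulas above in float precision (the struct and accum_over are illustrative names, not GL API). The float accumulation is what gives the precision advantage over blending into an 8-bit framebuffer.

```c
/* One 'over' step of back-to-front compositing into a float
   accumulation buffer, per the RGB and alpha formulas above. */
typedef struct { float r, g, b, a; } RGBA;

void accum_over(RGBA *accum, RGBA frag)
{
    float k = 1.0f - frag.a;
    accum->r = frag.a * frag.r + k * accum->r;
    accum->g = frag.a * frag.g + k * accum->g;
    accum->b = frag.a * frag.b + k * accum->b;
    accum->a = frag.a + k * accum->a;    /* the GL_ACCUM-style alpha rule */
}
```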
glDualDrawPixels(GLsizei width, GLsizei height, GLenum colorFormat, GLenum colorType, const GLvoid *colorImage, GLenum depthType, const GLvoid *depthImage)

Basically, each generated fragment would take on the color and depth from the given images and we'd process the fragments in the usual manner. Then we could do Z-based image composition in one pass.
One problem is the pixel unpacking parameters. Do we use the same stride and byteswapping attributes for both images or do we add new unpack attributes for the second image?
This function would be immediately useful for the Chromium project.
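A software model of what the fragment path would do per pixel, assuming a GL_LESS depth test (dual_draw_pixels is an illustrative stand-in, not the proposed entry point): each incoming pixel carries both a color and a depth, so two rendered images can be Z-composited in one pass.

```c
/* Z-composite a color+depth image into the framebuffer arrays,
   one depth test per pixel, as glDualDrawPixels would. */
void dual_draw_pixels(int n,
                      unsigned *fb_color, float *fb_depth,
                      const unsigned *src_color, const float *src_depth)
{
    for (int i = 0; i < n; i++) {
        if (src_depth[i] < fb_depth[i]) {   /* GL_LESS depth test */
            fb_color[i] = src_color[i];
            fb_depth[i] = src_depth[i];
        }
    }
}
```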
glBlendEquationSeparate(GLenum rgbEq, GLenum alphaEq);

would allow separate equation operators for the RGB vs. alpha channels (i.e. add RGB, but subtract alpha).
I'm not sure there are good applications for this though.
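Still, for concreteness, here's the arithmetic with GL_FUNC_ADD for RGB but GL_FUNC_SUBTRACT for alpha, with source and destination blend factors of one to keep it simple (the struct and helper names are made up):

```c
typedef struct { float r, g, b, a; } Color;

/* Blend with separate equations: add the RGB channels but
   subtract the alpha channels, factors of GL_ONE throughout. */
Color blend_add_rgb_sub_alpha(Color src, Color dst)
{
    Color out;
    out.r = src.r + dst.r;   /* GL_FUNC_ADD for RGB */
    out.g = src.g + dst.g;
    out.b = src.b + dst.b;
    out.a = src.a - dst.a;   /* GL_FUNC_SUBTRACT for alpha */
    return out;
}
```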