OpenGL Ideas

This is a collection of ideas for new OpenGL features.  Some of these ideas occurred to me long ago but I've never bothered writing them down until now.  In parentheses is the approximate date I was thinking about each idea.

I haven't necessarily put a lot of thought into these ideas and some of them may be way off-base.  Some of them probably aren't original either; I'm sure others have thought of these things too.

Constructive criticism is appreciated; non-constructive criticism will be ignored.

-Brian

Last updated:  1 April 2002
 

Multiple Z buffers (Fall 2001)

OpenGL currently supports zero or one Z buffers per rendering surface.  Having two (or more) Z buffers could be useful.  Each Z buffer would have a corresponding Z test stage in the fragment pipeline.  One application for this would be rendering of semi-transparent surfaces without back-to-front sorting.

Cass Everitt's paper Order-independent Transparency describes the 'depth peeling' technique implemented with a shadow map texture.  A shadow map texture is used because OpenGL only has one Z buffer.

A new function such as glDepthUnit(GLenum unit) where unit is GL_DEPTH_UNIT0, GL_DEPTH_UNIT1, etc. would specify the current depth unit.  glDepthFunc(), glDepthMask() and glEnable/Disable(GL_DEPTH_TEST) would modify the current depth unit's attributes.  This is much like glActiveTexture().  glClear() would clear all depth/Z buffers (except any unit whose depth mask is set to GL_FALSE).
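To make the proposal concrete, here is a small software model of how a fragment would pass through multiple depth-test stages.  The struct layout, the two-unit limit and the fixed GL_LESS comparison are illustrative assumptions, not part of any real API:

```c
#include <stdbool.h>

#define MAX_DEPTH_UNITS 2

typedef struct {
   bool enabled;   /* per-unit glEnable(GL_DEPTH_TEST) state */
   bool mask;      /* per-unit glDepthMask() state */
   float buffer;   /* this unit's stored Z value for the current pixel */
} DepthUnit;

/* Run the fragment's Z through every depth unit in order; the fragment
 * survives only if it passes all enabled units.  Each enabled unit
 * tests and (if its mask is on) updates its own Z buffer, just as the
 * single depth unit does today. */
bool depth_test_all(DepthUnit units[MAX_DEPTH_UNITS], float fragZ)
{
   for (int i = 0; i < MAX_DEPTH_UNITS; i++) {
      if (!units[i].enabled)
         continue;
      if (!(fragZ < units[i].buffer))   /* GL_LESS */
         return false;
      if (units[i].mask)
         units[i].buffer = fragZ;
   }
   return true;
}
```

For depth peeling, unit 0 would use the ordinary test while unit 1 rejects fragments nearer than the previously peeled layer; the model above only shows the plumbing, not the peeling configuration.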

Extra GLX/WGL language would be needed to describe visuals with multiple Z buffers.

There are probably uses for this beyond transparency depth peeling.

One might also consider having multiple stencil buffers, more color buffers (aux buffers), etc.
 

Cubical shadow maps (Summer 2001)

The OpenGL extensions GL_ARB_depth_texture and GL_ARB_shadow implement shadow mapping in OpenGL using 2-D depth textures.  One problem with this method is the fact that all shadowed objects must lie within the shadow map frustum.  That effectively limits you to spotlights.

A cubical shadow map would be omnidirectional and allow shadowing in all directions from a light source.

With 2D shadow mapping, texture coordinates are generated such that the fragment S and T coordinates map onto the shadow projection plane and R becomes the distance from the light source to the current fragment.

With cubical shadow maps the direction from the light source to the fragment would be in S, T and R.  The Q component would encode the distance from the light source to the fragment.  (S, T, R) would be used to choose one of the six cube map images.  Q would be used in the texture compare stage to determine whether the fragment is in or out of shadow.

Q is tricky though - we don't want the Pythagorean distance from the light source to the fragment.  We actually want the distance from the shadow projection plane to the fragment.  Recall that when the cube map depth textures are generated the depth values are distances from the projection plane, not the center of the projection.
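The face selection and the distance we actually want for Q can be sketched in a few lines.  This is an illustrative software model; in particular, a real implementation would still have to map the face-axis distance through the light's projection, since the stored depth values are in window coordinates:

```c
#include <math.h>

typedef enum { POS_X, NEG_X, POS_Y, NEG_Y, POS_Z, NEG_Z } CubeFace;

/* Standard cube map face selection: the face is chosen by the
 * largest-magnitude component of the light-to-fragment direction. */
CubeFace select_cube_face(float s, float t, float r)
{
   float as = fabsf(s), at = fabsf(t), ar = fabsf(r);
   if (as >= at && as >= ar)
      return s >= 0.0f ? POS_X : NEG_X;
   if (at >= as && at >= ar)
      return t >= 0.0f ? POS_Y : NEG_Y;
   return r >= 0.0f ? POS_Z : NEG_Z;
}

/* The quantity that plays the role of Q: depth measured along the
 * selected face's axis (the distance toward the projection plane),
 * NOT the Euclidean light-to-fragment distance. */
float face_axis_distance(float s, float t, float r)
{
   float as = fabsf(s), at = fabsf(t), ar = fabsf(r);
   float m = as > at ? as : at;
   return m > ar ? m : ar;
}
```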

The texgen stage should be do-able with a vertex program.  The shadow comparison stage would have to be extended to allow comparing the texture coordinate Q to the sampled depth texture value.

I'm sure someone else has implemented cubical shadow maps before, but it would be new in OpenGL.
 

Separate fragment alpha and coverage (2000)

OpenGL munges together fragment alpha and coverage when rendering smooth primitives (triangles, lines, points).  It would be nice if the fragment coverage and alpha were kept as separate fragment attributes and we had a separate coverage channel in the framebuffer.

If we had a separate coverage channel, perhaps we could do something more intelligent with a combined Z test and coverage test and avoid sorting primitives when we want antialiasing.
 

Miscellaneous channels (April 2002)

In addition to the multiple Z buffers, stencil buffers and coverage channels mentioned above, it could be useful to have generic channels in the framebuffer.  This might be used by fragment shader programs to store arbitrary per-pixel values.  I wonder if RenderMan has something like that (I'll have to check).

Signed and floating point channels would be more useful than just unsigned integer channels.
 

Better texturing of glDrawPixels, glCopyPixels and glBitmap (April 2002)

The OpenGL spec calls for the current raster texture coordinates to be assigned to all fragments that are generated by glDrawPixels, glCopyPixels and glBitmap.  That's fairly useless.  It would be more useful if the texture coords were instead interpolated across the width and height of the image (from 0 to 1).
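The proposed interpolation is trivial to state in code.  Sampling at pixel centers is an assumption here; the spec change could equally well run from exactly 0 at the first pixel to exactly 1 at the last:

```c
/* Sketch of the proposed behavior: instead of every glDrawPixels /
 * glCopyPixels / glBitmap fragment inheriting the single current
 * raster texcoord, interpolate texcoords 0..1 across the image. */
void interpolated_texcoord(int x, int y, int width, int height,
                           float *s, float *t)
{
   *s = ((float) x + 0.5f) / (float) width;
   *t = ((float) y + 0.5f) / (float) height;
}
```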
 

Negative GL_PACK/UNPACK_STRIDE (Spring 2001)

The GL_PACK/UNPACK_STRIDE attributes for reading/drawing images have to be positive.  OpenGL's images are stored in bottom-to-top order - the opposite of basically every image file format in existence.

If GL_PACK/UNPACK_STRIDE could be negative you could read/draw images in top to bottom order.

Anyone who's implemented image file I/O in an OpenGL application can see how useful that would be.

Perhaps a new pack/unpack attribute such as GL_PACK_TOP_TO_BOTTOM (boolean) would be better.
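The addressing arithmetic a negative stride would imply is simple.  In this sketch, 'base' points at the first row OpenGL reads (the bottom row of the GL image); for a top-to-bottom file image you'd point base at the file's last row and pass a negative byte stride:

```c
/* Compute the address of a row given a signed row stride in bytes. */
const unsigned char *
row_address(const unsigned char *base, int rowStrideBytes, int row)
{
   return base + (long) row * rowStrideBytes;
}
```

With a 3-row, 4-bytes-per-row image loaded top-to-bottom, base = image + 2*4 and stride = -4 walks the file's rows in exactly the bottom-to-top order glDrawPixels expects.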
 

"Over" image compositing with the accum buffer (March 2002)

The new GL_SUN_slice_accum extension defines a new mode for the glAccum command: GL_SLICE_ACCUM_SUN.  The spec defines the new arithmetic for this mode as:
AccumRGB = (FrameBuffAlpha * FrameBuffRGB) + ((1 - FrameBuffAlpha) * AccumRGB)
Unfortunately, the spec doesn't say how the alpha channel is computed!  When I asked, I was told that the alpha channel should be computed as for GL_ACCUM mode.  The intended application for this extension is back-to-front compositing of volumetric slices.

The Porter/Duff 'over' operator for image composition is well-known and could easily be implemented as a new glAccum mode.  The advantage of doing this with the accumulation buffer is the extended precision it offers; many images could be composited with little degradation of image quality.

The RGB values would be computed as above.  The alpha channel would be computed as:

AccumA = FrameBuffAlpha + (1 - FrameBuffAlpha) * AccumAlpha
I'll probably implement this in Mesa someday but it probably wouldn't get much use until someone implements it in hardware.  Then ISVs like Discreet might make use of it.
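For one pixel, the proposed glAccum mode is exactly the two formulas above.  Float channels stand in for the accumulation buffer's extended precision:

```c
typedef struct { float r, g, b, a; } Color;

/* Accumulate one framebuffer pixel 'over' the accum buffer pixel,
 * per the RGB and alpha formulas in the text (non-premultiplied
 * framebuffer color). */
Color accum_over(Color frameBuff, Color accum)
{
   Color out;
   float fa = frameBuff.a;
   out.r = fa * frameBuff.r + (1.0f - fa) * accum.r;
   out.g = fa * frameBuff.g + (1.0f - fa) * accum.g;
   out.b = fa * frameBuff.b + (1.0f - fa) * accum.b;
   out.a = fa + (1.0f - fa) * accum.a;
   return out;
}
```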
 

Dual-image glDrawPixels (March 2002)

Image composition according to Z (ala sort-last rendering) can be implemented with OpenGL, but it requires using the stencil buffer and two glDrawPixels calls (one for Z, one for RGB).  It would be nice if there were a glDrawPixels variation that accepted both color and Z.  Something like:
 
glDualDrawPixels(GLsizei width, GLsizei height, GLenum colorFormat, GLenum colorType, const GLvoid *colorImage, GLenum depthType, const GLvoid *depthImage)
Basically, each generated fragment would take on the color and depth from the given images and we'd process the fragments in the usual manner.  Then we could do Z-based image composition in one pass.

One problem is the pixel unpacking parameters.  Do we use the same stride and byteswapping attributes for both images or do we add new unpack attributes for the second image?

This function would be immediately useful for the Chromium project.
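The per-fragment behavior the proposal calls for can be modeled in software.  This is a one-pixel sketch with a hard-wired GL_LESS test; the real call would loop over width*height and run the full fragment pipeline:

```c
#include <stdbool.h>

typedef struct { float r, g, b; float depth; } FbPixel;

/* Take color and Z from the two source images, run the normal depth
 * test against the framebuffer, and write both on a pass.  Returns
 * true if the incoming fragment won the Z test. */
bool composite_fragment(FbPixel *fb, float srcR, float srcG, float srcB,
                        float srcZ)
{
   if (srcZ < fb->depth) {   /* GL_LESS depth test */
      fb->r = srcR;
      fb->g = srcG;
      fb->b = srcB;
      fb->depth = srcZ;
      return true;
   }
   return false;
}
```

Calling this for every pixel of two partial renderings is exactly the Z-based (sort-last) composition step, done in a single pass.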
 

Blend Equation Separate (April 2002)

The GL_EXT_blend_func_separate extension allows one to specify different src/dest blending terms for the RGB vs alpha channels.  A similar extension for the blend equation might be useful.  The blend equations currently supported by OpenGL are GL_ADD, GL_SUBTRACT, GL_REVERSE_SUBTRACT, GL_MIN and GL_MAX.  The blend equation applies to all four of the RGBA channels.  A function such as:
glBlendEquationSeparate( GLenum rgbEq, GLenum alphaEq );
would allow separate equation operators for the RGB vs alpha channels (i.e. add RGB, but subtract alpha).
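The intended semantics, reduced to a sketch: one equation applied to the RGB channels, another to alpha.  Only ADD and SUBTRACT are modeled, and the src/dst blend factors are assumed to be 1 for brevity:

```c
typedef enum { EQ_ADD, EQ_SUBTRACT } BlendEq;

static float blend_channel(BlendEq eq, float src, float dst)
{
   return (eq == EQ_ADD) ? src + dst : src - dst;
}

/* Blend a 4-channel pixel using rgbEq for RGB and alphaEq for alpha,
 * mirroring what glBlendEquationSeparate(rgbEq, alphaEq) would do. */
void blend_rgba(BlendEq rgbEq, BlendEq alphaEq,
                const float src[4], const float dst[4], float out[4])
{
   for (int i = 0; i < 3; i++)
      out[i] = blend_channel(rgbEq, src[i], dst[i]);
   out[3] = blend_channel(alphaEq, src[3], dst[3]);
}
```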

I'm not sure there are good applications for this though.