3D Rendering Optimizations
The cluster centers on techniques for optimizing 3D graphics rendering pipelines, such as Z-buffering, depth pre-passes, early Z culling, G-buffers, and visibility culling, as well as comparisons of software rasterization and culling methods with hardware shaders and rasterization.
Activity Over Time
Top Contributors
Keywords
Sample Comments
Wouldn't ambient occlusion have been a better technique?
It shouldn't; they're not doing anything crazy in the pixel shaders, and they make excellent use of early Z culling.
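(For context, a depth pre-pass is the usual way a renderer sets itself up to benefit from early Z culling. The sketch below is a generic OpenGL outline of that pattern, not the game's actual code; draw_scene_depth_only and draw_scene_shaded are placeholder names.)

    #include <GL/gl.h>

    /* Placeholder draw calls; in a real engine these submit the scene's geometry. */
    void draw_scene_depth_only(void);
    void draw_scene_shaded(void);

    void render_frame(void)
    {
        glEnable(GL_DEPTH_TEST);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Pass 1: lay down depth only. Colour writes are off and the fragment
         * work is trivial, so this pass is cheap. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        draw_scene_depth_only();

        /* Pass 2: full shading. The depth buffer already holds the nearest
         * surface per pixel, so with GL_EQUAL the hardware's early Z test
         * rejects every hidden fragment before the expensive pixel shader runs. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);
        glDepthFunc(GL_EQUAL);
        draw_scene_shaded();
    }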
Hello, I've seen Crafti; it looks a lot more complete than mine. The rendering is based on the triangle renderer in https://gitea.planet-casio.com/Lephenixnoir/Azur, which renders in fragments to the fast on-chip memory so it isn't bottlenecked by the slow main RAM, but I customised it to add textures etc. It's using barycentric coordinates over a bound
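(The core of a barycentric software rasterizer, independent of Azur's actual implementation, looks roughly like the sketch below: evaluate the three edge functions over the triangle's bounding box and fill the pixels where they agree in sign. Buffer layout and names here are assumptions, and texture interpolation via the barycentric weights is omitted.)

    #include <stdint.h>

    typedef struct { int x, y; } vec2i;

    /* Signed-area style edge function: >= 0 when p is on the inner side of edge a->b. */
    static int edge(vec2i a, vec2i b, vec2i p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }
    static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }

    /* Fills one counter-clockwise triangle into a 16-bit framebuffer.
     * Clamping the bounding box to the framebuffer is omitted for brevity. */
    void draw_triangle(uint16_t *fb, int fb_width,
                       vec2i v0, vec2i v1, vec2i v2, uint16_t color)
    {
        int minx = min3(v0.x, v1.x, v2.x), maxx = max3(v0.x, v1.x, v2.x);
        int miny = min3(v0.y, v1.y, v2.y), maxy = max3(v0.y, v1.y, v2.y);

        for (int y = miny; y <= maxy; y++) {
            for (int x = minx; x <= maxx; x++) {
                vec2i p = { x, y };
                int w0 = edge(v1, v2, p);
                int w1 = edge(v2, v0, p);
                int w2 = edge(v0, v1, p);
                /* w0..w2 are unnormalised barycentric weights; a pixel is inside
                 * the triangle when all three are non-negative. The same weights
                 * can interpolate texture coordinates per pixel. */
                if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                    fb[y * fb_width + x] = color;
            }
        }
    }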
Is this technique still used in modern engines to determine which part of a level to render?
Yes, it's possible, but why bother when the hardware can rasterize triangles in silicon? :)
Can't it be done just with a shader? Why did he use Quake to implement it?
The Z buffer has limited bit depth, so this is all still relevant.
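(A quick numeric illustration of that point, using one common perspective depth mapping and assumed near/far planes of 0.1 and 1000: stored depth is non-linear, so two distant surfaces a full unit apart can quantise to the same 16-bit value and Z-fight, while a 24-bit buffer still separates them.)

    #include <math.h>
    #include <stdio.h>

    /* Window-space depth in [0,1] for eye-space distance z, near plane n,
     * far plane f (standard perspective mapping to a [0,1] depth range). */
    static double window_depth(double z, double n, double f)
    {
        return (f / (f - n)) * (1.0 - n / z);
    }

    int main(void)
    {
        const double n = 0.1, f = 1000.0;
        for (int z = 900; z <= 901; z++) {
            double d = window_depth((double)z, n, f);
            /* Both distances land on the same 16-bit value (Z-fighting),
             * while a 24-bit buffer still tells them apart. */
            printf("z = %d  16-bit = %.0f  24-bit = %.0f\n",
                   z, floor(d * 65535.0), floor(d * 16777215.0));
        }
        return 0;
    }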
It stands for geometry buffer. It looks like none of the replies led with that. If you render positions into one pixel buffer and normals into another, you can shade the pixels afterwards and avoid shading lots of fragments that would be hidden. It gets more complicated, obviously (material IDs, reflection roughness, etc.), but those are the basics.
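(A minimal CPU-side sketch of that idea: a geometry pass fills per-pixel attribute buffers, and a later pass shades each visible pixel exactly once. The names and the trivial Lambert shading below are made up for illustration; real G-buffers pack more attributes and live in GPU render targets.)

    typedef struct { float x, y, z; } vec3;

    /* One buffer per attribute, each width*height in size. A real G-buffer also
     * carries albedo, roughness, material IDs, etc. */
    typedef struct {
        vec3 *position;   /* world-space position per pixel */
        vec3 *normal;     /* surface normal per pixel */
        int   width, height;
    } GBuffer;

    /* Illustrative shading: a simple Lambert term against a fixed light direction. */
    static vec3 shade_pixel(vec3 position, vec3 normal)
    {
        (void)position;
        const vec3 light = { 0.577f, 0.577f, 0.577f };
        float ndotl = normal.x * light.x + normal.y * light.y + normal.z * light.z;
        if (ndotl < 0.0f) ndotl = 0.0f;
        vec3 c = { ndotl, ndotl, ndotl };
        return c;
    }

    void deferred_shading_pass(const GBuffer *gb, vec3 *color_out)
    {
        /* Each pixel is shaded exactly once, no matter how many overlapping
         * fragments the geometry pass produced; hidden fragments never reach
         * the (potentially expensive) lighting code. */
        for (int i = 0; i < gb->width * gb->height; i++)
            color_out[i] = shade_pixel(gb->position[i], gb->normal[i]);
    }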
Graphics engines do; they only display the visible pixels, and the others are culled out of the calculation.
Isn't that usually done with the depth buffer?
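(That is essentially what the depth buffer does, per pixel: a fragment is written only if it is closer than whatever is already stored for that pixel. A minimal sketch, with assumed buffer layout and names:)

    #include <stdbool.h>
    #include <stdint.h>

    /* Classic Z-buffer test: keep the fragment only if it is nearer than the
     * depth already stored for that pixel; occluded fragments are discarded
     * ("culled out of the calculation") and never reach the framebuffer. */
    bool depth_test_and_write(float *depth_buf, uint32_t *color_buf,
                              int pixel_index, float frag_depth, uint32_t frag_color)
    {
        if (frag_depth >= depth_buf[pixel_index])
            return false;                      /* hidden behind an existing surface */
        depth_buf[pixel_index] = frag_depth;   /* new nearest depth */
        color_buf[pixel_index] = frag_color;   /* visible: write the colour */
        return true;
    }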