
Portals are misunderstood.
27 Jan 05:26 2024
Read the article here: https://c0de517e.com/011_portals.htm
This blogspot site is dead!
Update your links (and RSS!) to my new blog at c0de517e.com.

Real-time rendering - past, present and a probable future.
12 Jun 01:58 2022
This presentation was a keynote given at a private company event - I'm not sure if I'm at liberty to say more about it - but the content is quite universal, so I hope you'll enjoy it!
It does not talk directly of Roblox or the Metaverse... but at the same time, it has, near the end, some strong connections to them.
Also... this is not the first "open problems" slide deck I've made, and I mentioned an unfinished one in previous presentations... I realize I will never finish it - or rather, I am not as passionate about it anymore, so... here it is, frozen in its eternal WIP state: slides - circa 2015

Why Raytracing won't simplify AAA real-time rendering.
06 Jan 21:00 2021
"The big trick we are getting now is the final unification of lighting and shadowing across all surfaces in a game - games had to do these hacks and tricks for years now where we do different things for characters and different things for environments and different things for lights that move versus static lights, and now we are able to do all of that the same way for everything..."
Who said this?
Jensen Huang, presenting NVidia's RTX?
Not quite... John Carmack. In 2001, at Tokyo's MacWorld, showing Doom 3 for the first time. It was, though, on NVidia hardware, just a bit less powerful than today's 20xx/30xx series: a GeForce 3.
You can watch the recording on YouTube for a bit of nostalgia.
And of course, the unifying technology at that time was stencil shadows - yes, we were at a time before shadowmaps were viable.
Now. I am not a fan of making long-term predictions, in fact, I believe there is a given time horizon after which things are mostly dominated by chaos, and it's just silly to talk about what's going to happen then.
But if we wanted to make predictions, a good starting point is to look at the history, as history tends to repeat. What happened last time that we had significant innovation in rendering hardware?
Did compute shaders lead to simpler rendering engines, or more complex? What happened when we introduced programmable fragment shaders? Simpler, or more complex? What about hardware vertex shaders - a.k.a. hardware transform and lighting...
And so on and so forth, we can go all the way back to the first popular accelerated video card for the consumer market, the 3dfx.
Surely it must have made things simpler, not having to program software rasterizers specifically for each game, for each kind of object, for each CPU even! No more assembly. No more self-modifying code, s-buffers, software clipping, BSPs...
No more crazy tricks to get textures on screen, we suddenly got it all done for us, for free! Z-buffer, anisotropic filtering, perspective correction... Crazy stuff we never could even dream of is now in hardware.
Imagine that - overnight you could have taken the bulk of your 3d engine and deleted it. Did it make engines simpler, or more complex?
Our shaders today, powered by incredible hardware, are much more code, and much more complexity, than the software rasterizers of decades ago!
Are there reasons to believe this time it will be any different?
Spoiler alert: no.
At least not in AAA real-time rendering. Complexity has nothing to do with technologies.
Technologies can enable new products, true, but even the existence of new products is always about people first and foremost.
The truth is that our real-time rendering engines could have been dirt-simple ten years ago; there's nothing inherently complex in what we have right now.
Getting from zero to a reasonable real-time PBR renderer is not hard. The equations are there: just render one light at a time, brute-force the shadowmaps, loop over all objects and shadows, and you can get there. Use MSAA for antialiasing...
Of course, you would need to trade off some performance for such relatively "brute-force" approaches, and some quality... but it's doable, and it will look reasonably good.
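To make the point concrete, here is a minimal sketch of the kind of loop described above. All the types and helpers (Scene, renderShadowMap, renderMeshOneLight...) are hypothetical stand-ins for a platform layer, not any real engine's API; the stubs are empty just so this compiles standalone:

```cpp
// Minimal sketch of a "brute-force" forward PBR frame.
// All names here are hypothetical placeholders, not a real engine's API.
#include <vector>

struct Mesh  { /* geometry + PBR material parameters */ };
struct Light { /* position, color, intensity... */ };

struct Scene {
    std::vector<Mesh>  meshes;
    std::vector<Light> lights;
};

// Hypothetical helpers - empty stubs standing in for real GPU work.
void renderShadowMap(const Light&, const std::vector<Mesh>&) {}
void renderMeshOneLight(const Mesh&, const Light&) {} // evaluate the PBR BRDF, add to the framebuffer
void resolveMSAA() {}

void renderFrame(const Scene& scene)
{
    // One big shadow map per light, re-rendered every frame. Wasteful, but simple.
    for (const Light& light : scene.lights)
        renderShadowMap(light, scene.meshes);

    // Additively accumulate the contribution of one light at a time,
    // looping over all objects - exactly the loop described in the text.
    for (const Light& light : scene.lights)
        for (const Mesh& mesh : scene.meshes)
            renderMeshOneLight(mesh, light);

    resolveMSAA(); // hardware MSAA for antialiasing
}
```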
Even better? Just download Unreal, and hire -zero- rendering engineers. Would you not be able to ship any game your mind can imagine?
The only reason we do not... is in people and products. It's organizational, structural, not technical.
We like our graphics to be cutting edge, as graphics and performance still sell games, sell consoles, and get talked about.
And it's relatively inexpensive, in the grand scheme of things - rendering engineers are a small fraction of the engineering effort, which in turn is not the most expensive part of making AAA games...
So pretty... Look at that sky. Worth its complexity, right?
In AAA it is perfectly ok to have someone work for, say, a month, producing new, complicated code paths to save, say, one millisecond of frame time. It's often perfectly ok to spend a month to save a tenth of a millisecond!
As long as this equation holds, we will always sacrifice engineering simplicity, and thus accept bigger and bigger engines and more complex rendering techniques, in order to have larger, more beautiful worlds, rendered faster!
It has nothing to do with hardware, nor does it have anything to do with the inherent complexity of photorealistic graphics.
We write code because we're not in the business of making disruptive new games, AAA is not where risks are taken, it's where blockbuster productions are made.
It's the nature of what we do, we don't run scrappy experimental teams, but machines with dozens of engineers and hundreds of artists. We're not trying to make the next Fortnite - that would require entirely different attitudes and methodologies.
And so, engineers gonna engineer: if you have a dozen rendering people on a game, its rendering will never be trivial. And once that's a thing people do in the industry, it's hard not to do it - you have to keep competing on every dimension if you want to be at the top of the game.
The cyclic nature of innovation.
Another point of view, useful to make some prediction, comes from the classic works of Clayton Christensen on innovation. These are also mandatory reads if you want to understand the natural flow of innovation, from disruptive inventions to established markets.
One of the phenomena that Christensen observes is that technologies evolve in cycles of commoditization, bringing costs down and scaling, and de-commoditization, leveraging integrated, proprietary stacks to deliver innovation.
In AAA games, rendering has not been commoditized, and the trend does not seem to be going towards commoditization yet.
Innovation is still the driving force behind real-time graphics, not scale of production. Even if we have been saying for years, perhaps decades, that we were at the tipping point, in practice we never seem to reach it.
We are not even, at least in the big titles, close to the point where production efficiency for artists and assets is really the focus.
It's crazy to say, but still today the effort we put into rendering typically dwarfs the effort put into tooling and asset-production efficiency.
We live in a world where it's imperative for most AAA titles to produce content at a steady pace. Yet we don't see this percolating into the technology stack: look at the actual engines (if you have experience with them), look at the talks and presentations at conferences. We are still focusing on features, quality, and performance more than anything else.
We do not like to accept tradeoffs in our stacks; we run on tightly integrated technologies because we like the idea of customizing them to each game's specifics - i.e. we have not embraced open standards that would allow components of our production stacks to be shared and exchanged.
Alita - rendered with Weta's proprietary (and RenderMan-compatible) Manuka.
I do not think this trend will change at the top end for the next decade or so at least - the only time horizon I would even care to make predictions about.
I think we will see a focus on the efficiency of artist tooling - this shift in attention is already underway - but engines themselves will only keep growing in complexity, and the same goes for rendering overall.
Just recently, in the movie industry (which is another decent way of "predicting" the future of real-time), we have seen production pipelines becoming somewhat standardized around common interchange formats.
Rendering itself, though, is not standardized at the top studios, with most big ones running their own proprietary path-tracing solutions...
So, is it all pain? And it will always be?
No, not at all!
We live in a fantastic world full of opportunities for everyone. There is definitely a lot of real-time rendering that has been completely commoditized and abstracted.
People can create incredible graphics without knowing anything at all about how things work underneath, and this is definitely something incredibly new and exciting.
Once upon a time, you had to be John friggin' Carmack (and we've gone full circle...) to make a 3d engine, create Doom, and be legendary because of it. Your hardcore ability to push pixels made possible entire game genres that could not exist without the very best of technical skills.
https://threejs.org/ frontpage.
Today? I believe an FPS template ships for free with Unity, you can download Unreal with its source code for free, you have Godot... All products that invest in art efficiency and ease of use first and foremost.
Everyone can create any game genre with little complexity, without caring about technology - the complicated stuff is only there for cutting-edge "blockbuster" titles where bespoke engines matter, and only for better fidelity, performance and the like, not to fundamentally enable the game to exist...
And that's already professional stuff - we can do much better!
Three.js is the most popular 3d engine on github - you don't need to know anything about 3d graphics to start creating. We have Roblox, Dreams, Minecraft and Fortnite Creative. We have Notch, for real-time motion graphics...
Computer graphics has never been simpler, and at the same time, at the top end, never been more complex.
Roblox creations are completely tech-agnostic.
Conclusions
AAA will stay AAA - and for the foreseeable future it will keep being wonderfully complicated.
Slowly we will invest more in productivity for artists and asset production - as it really matters for games - but it's not a fast process.
It's probably easier for AAA to become relatively irrelevant (compared to the overall market size, which expands faster in other directions than in the established AAA one) than for it to radically embrace change.
Other products and other markets are where real-time rendering is commoditized and radically different. It -is- already: all these products exist today, and we already have huge market segments that do not need to bother at all with technical details. And the quality and scope of these games grow year after year.
This market was facilitated by the fact that we now have 3d hardware acceleration in pretty much any device - but at the same time, new hardware is not going to change any of that.
Raytracing will only -add- complexity at the top end. It might make certain problems simpler, perhaps (note: right now people seem to underestimate how hard it is to make good RT shadows or, even worse, RT reflections, which are truly hard...), but it will also make the overall effort to produce a AAA frame bigger, not smaller - like all technologies before it.
We'll see incredible hybrid techniques, and if today we have dozens of ways of doing shadows and combining signals to solve the rendering equation in real-time, these will only grow more complex - and wonderful - in the future.
Raytracing will eventually percolate into the non-AAA space too, as all technologies do.
But that won't change complexity or open new products there either, because people who make real-time graphics with higher-level tools already don't have to care about the technology that drives them - technology there will always evolve under the hood, never to be seen by the users...

An unbiased look at real-time raytracing
31 Mar 19:31 2019
Evaluating technology without hype or hate...
That would have been the title of my blog post if I published the version I had prepared after the DXR/RTX technology finally became public last year, at GDC 2018.
But alas, I didn't. It remained in my drafts folder. Siggraph came and went. Now another GDC has passed, and I finally decided to scrap that version and rewrite it.
Why? Not because I thought the topic wasn't interesting. Hype is easy to give in to. Fear of missing out, excitement about new toys to play with, tech for tech's sake... Hate is equally devious. Fear of change. Comfort zones, familiarity.
These are all very interesting things to think about. And you should. But can I claim I am an expert on this? I don't know, I am not a venture capitalist, and I could say I've been right a number of times, but I doubt that reaches the threshold of statistical significance.
Moreover, being old and grumpy and betting against new technologies is often an easy win. Innovation is hard!
And really, it doesn't matter much. This technology is already in the hardware, and it is here to stay. It is backed by large companies, and more will come on board for sure. And yes, it could go the way of geometry shaders and other things that tried to work "against" the established GPU architectures, but even for those, we did spend some time understanding how they could help us...
So, let's just assume we want to do some R&D in this RTRT thing and let's ask a different question. What should we be looking for?
The do and do not list of RTRT research.
DO NOT - Think that RTRT will make things simpler, or that (technical) simplicity is an objective. In real-time rendering, the pain comes from within. There's nothing that will stop people from spending one month to save 0.1ms in any renderer.
Until power is out of the equation, we will always build complex systems to achieve the ultimate quality-vs-performance tradeoffs. When people say that shadow maps are hard, for example, they mostly mean that fast shadow maps are hard. Nobody prevents us from rendering huge, high-precision maps with high-quality filtering - even rendering from multiple light samples and doing proper area lights (see the sketch after this item). We don't do it because of performance.
And that's true for all the complexity in a real-time renderer. When we add raytracing to the mix, we only sign up for more pain: hybrid algorithms, code paths, caching schemes and so on. And that's ok. Programmer's pain doesn't really matter much in the logistics of the production of today's games.
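As an aside, that "nobody prevents us" area-light approach is easy to sketch: approximate the light with K point samples on its surface, render one (huge, well-filtered) shadow map per sample, and average the visibility. All names below are hypothetical, and the stub just stands in for a real shadow-map lookup:

```cpp
// Sketch: brute-force area-light shadows via K point samples on the light.
// Hypothetical names; not any engine's API.
struct Vec3 { float x, y, z; };

// Placeholder: 0 or 1 visibility of 'surfacePoint' from one light sample,
// fetched from that sample's own (huge, high-precision) shadow map.
float shadowMapVisibility(const Vec3& /*surfacePoint*/, int /*lightSampleIndex*/)
{
    return 1.0f; // stub
}

// Fraction of the light's surface visible from the shaded point. A correct
// penumbra emerges from nothing more than averaging - just at a brutal cost,
// since every sample means a full extra shadow-map render.
float areaLightVisibility(const Vec3& surfacePoint, int numLightSamples)
{
    float visible = 0.0f;
    for (int i = 0; i < numLightSamples; ++i)
        visible += shadowMapVisibility(surfacePoint, i);
    return visible / float(numLightSamples); // in [0,1]
}
```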
How many rendering techniques can you see? How much pain was spent to save fractions of a ms on each?
DO - Think about ray/memory/shading coherency and the GPU implications of raytracing. In other words, optimization. Right now, on high-end hardware, we can probably throw in a few relatively naive raytracing effects and they will work, because these GPUs are much more powerful than the consoles that constrain the scene and rendering complexity of most AAA games. They can render these scenes at obscene framerates and resolutions. So it might not seem a huge price to pay to drop back to 1080p and 60hz in order to have nicer effects. But this doesn't mean it's an efficient use of GPU power, and it won't stand long term.
Performance/quality considerations are a great culler of rendering techniques. We need to think about efficient raytracing.
DO NOT - Focus on the "wrong" things. Specular reflections don't matter much. Perceptually, they don't! Specular highlights, in general, are a strong indicator of shape and material in objects, but we are not good at spotting errors in the lighting environment that generates them. That's why cubemaps work so well. In fact, even for shiny floors and walls (planar mirrors) with objects near or in contact with them, we are fooled most of the time by relatively simple cheats. We notice errors in screen-space reflections only because they sometimes fail catastrophically, and we're talking there about techniques that take fractions of a millisecond to compute. And reflections with raytracing are both too simple and too complex. Too simple, because they are an easy case of raytracing, as the rays tend to be very coherent. And too complex, because they require evaluating surface shading, which is hard to do in most engines outside screen-space, and slow, as triggering different shaders with real-time raytracing is really not hardware-friendly.
Intel's demo: raytraced Wolfenstein (http://www.wolfrt.de/). Circa 2010.
DO - Think about occlusion, on the other hand. It's much more interesting, can be more hardware-friendly, is definitely more engine-friendly, and most importantly it's likely to have a bigger visual impact: correct shadows from area lights, but also correct occlusion of indirect lighting, both specular and diffuse.
DO NOT - Think that denoising will save the day. In the near future, for real-time rendering, it most likely will not. In fact, denoising in general (even the simple blurring that we sometimes already employ) can shift noise from high frequencies to lower ones, which under animation makes for worse artifacts.
DO - Invest in caching and temporal accumulation ideas, beyond screen-space. These will likely be more effective, and useful for a wide variety of effects. Also, do think about finer-grained solutions to launch work / update caches / update on demand. For this, real-time raytracing might help indirectly, because in order to be performant it needs the ability to launch shader work from other shaders. That general ability, if implemented in hardware and exposed to programmers, could be useful in general, and it's one of the most interesting things to think about when we think of hardware raytracing.
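The core of temporal accumulation is trivial to sketch: blend this frame's noisy sample into a reprojected history, forming an exponentially-weighted moving average over frames. Names below are hypothetical, and reprojection, history validation and all the genuinely hard details are elided:

```cpp
// Sketch of exponential temporal accumulation. 'alpha' trades noise for lag:
// small alpha = smoother but ghostier, large alpha = responsive but noisier.
struct Color { float r, g, b; };

static Color lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// 'history' is last frame's accumulated value, already reprojected to this
// pixel (the hard part, elided here); 'sample' is this frame's noisy result.
Color temporalAccumulate(const Color& history, const Color& sample,
                         float alpha = 0.1f)
{
    return lerp(history, sample, alpha);
}
```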
DO NOT - Make the wrong comparisons! RTX on / RTX off tells a lie, because what we can't see with "RTX off" is what the game could look like if we allocated all the power that RTX needs to pushing conventional techniques, or even simply more assets. There are a lot of techniques we don't use today because we don't think they are on the right side of the quality/performance equation. We could use them, but we prefer to push more assets instead.
If you want to be persuasive about raytracing, proper comparisons should be made. And proper comparisons should also take into account that rasterization without shading (visibility only) leaves compute units available for other work to be done in parallel.
RTX hardware isn't free either! It costs chip area, even if you don't use it, but there's nothing we can do about that...
DO NOT - Assume that scene complexity is fixed. This is a corollary of the previous point, but at the very least, for overall visual impact, we should always ask whether simply pushing more stuff is better than pushing a given idea for "shinier" stuff, because scene complexity is far from having "peaked".
Offline rendering might (might!) be essentially complexity-agnostic today. Real-time, not quite. (Frame from Avengers: Infinity War.)
DO - Think about cases where raytracing could outperform rasterization at its own game. This is hard, because raytracing will likely always have quite a high cost, both because of the memory traffic required to traverse the spatial subdivision structures, and because it uses the compute units, while the rasterizer is a small piece of hardware that can operate in parallel. But, that said, raytracing could win in a couple of ways.
First, because it's much more fine-grained. For example, refreshing very small areas of a shadow map could perhaps be faster with a raytracer. Another way to say this is that there are certain cases where the number of pixels we need visibility for is much smaller than the number of primitives and vertices we'd need to traverse in a rasterizer (see the sketch after these two points).
The second thing to think about is how raytraced visibility goes wide, using the compute units and thus the entire GPU. The rasterizer, on the other hand, can often be the bottleneck. And even if in many cases we can overlap other work to keep the GPU busy, that is not true in all cases!
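A hypothetical back-of-envelope model of that first point, with every constant invented purely for illustration: rasterizing a shadow map pays per primitive regardless of how few texels changed, while raytracing pays per pixel that actually needs updating.

```cpp
// Toy cost model: refreshing a tiny dirty region of a shadow map.
// Every number here is made up for illustration only.
#include <cstdio>

int main()
{
    const double trianglesInLightFrustum = 2.0e6; // raster must traverse all of them
    const double costPerTriangle         = 1.0;   // arbitrary units
    const double dirtyPixels             = 5.0e3; // tiny region to re-render
    const double costPerRay              = 50.0;  // rays are individually pricey

    const double rasterCost = trianglesInLightFrustum * costPerTriangle;
    const double rayCost    = dirtyPixels * costPerRay;

    std::printf("raster: %.0f  raytrace: %.0f\n", rasterCost, rayCost);
    // Raytracing wins ~8x here despite a far higher per-pixel cost, simply
    // because visibility is needed for far fewer pixels than there are primitives.
    return 0;
}
```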
DO - Think about engineering costs if you want the technology to be used now. It's true that programmer's pain doesn't matter much, but at the moment RTX covers a tiny slice of the market, and programmers' pain could be better spent completing more important tasks... Corollary: think about fallback techniques. If we move an effect to RTX, how will we render it on GPUs that don't support it? Will it look very different? Will it make authoring more painful? That is something we generally can't afford.
In general, be brutally honest about costs and feasibility of solutions. This is a good rule in general, but it is especially true for an emerging technology. You don't want to burn developers with techniques that look good on paper, but fail to ship.
DO - Establish collaborations. Real-time raytracing is probably not going to sell more copies of a game, and it's not going to save costs or make authoring more effective if we're talking about uses in the runtime (an exception could be uses in the artist tools themselves, e.g. to aid lightmap baking and/or previewing). It currently targets only a small audience, and you'll gain nothing by jumping on it too early.
So you probably should not pull your smartest R&D engineers from whatever they're doing to jump on this, unless you have some very persuasive outside incentives... And without those, you likely won't have many people to do raytracing-related things.
Thus, you should probably see if you can leverage collaborations with external research groups...