There are many directions for video games to improve in the future, animation being a big one, but graphics is what most people are looking at, so that's what I'll focus on here. First, avoiding the uncanny valley in next-gen games may be a big problem. In general, at a high level (probably mostly obvious):
- Higher resolution textures.
- Higher resolution geometry.
- More world clutter and detail.
- Enormous worlds with little to no loading.
- I'd like to see water reflections that are actually reflections and not just cube-maps.
- Some people think that many more dynamic lights (thousands in a scene) are the future of video games.
- Soft shadows via variance shadow mapping or the like (see the sketch after this list).
- Hair on video game characters will get a huge upgrade. No longer will characters look like they haven't showered for 2 weeks.
- Smoke using fluid simulations (this would be awesome)
- Geometry LOD is going to be a serious problem (even more so)
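Since variance shadow maps made the list, here's a minimal sketch of the receiver-side lookup. It assumes the shadow map already stores filtered (depth, depth²) moments per texel; the soft visibility estimate is the upper bound from Chebyshev's one-sided inequality:

```cpp
// Minimal variance shadow mapping lookup, assuming the shadow map stores
// filtered (depth, depth^2) moments per texel.
#include <algorithm>

struct Moments { float m1, m2; };  // E[d] and E[d^2] sampled from the shadow map

// Returns an upper bound on the fraction of light reaching a receiver at
// 'receiverDepth', via Chebyshev's one-sided inequality.
float vsmVisibility(Moments m, float receiverDepth, float minVariance = 1e-4f)
{
    if (receiverDepth <= m.m1)
        return 1.0f;                       // receiver is in front of the occluders
    float variance = std::max(m.m2 - m.m1 * m.m1, minVariance);
    float d = receiverDepth - m.m1;
    return variance / (variance + d * d);  // P(depth >= receiverDepth) upper bound
}
```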
If you were to stick with triangles, then you would probably do some kind of subdivision surface scheme.
There are other options, though, like voxels, which can give triangles a serious run for their money. When properly optimized, they can provide very consistent frame rates irrespective of world detail.
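To illustrate where that consistency comes from, here's a toy sketch of classic 3D DDA grid traversal (in the spirit of Amanatides & Woo). The dense-grid layout is an assumption for brevity; a real renderer would walk a sparse octree or KD-tree, but the key property is the same: per-ray cost is bounded by grid resolution, not by how much detail the world contains.

```cpp
// Toy 3D DDA voxel ray-march over a dense grid. Assumes the ray origin is
// already inside the grid; each iteration steps exactly one voxel.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Returns the flattened index of the first solid voxel hit, or -1 if the
// ray exits the grid.
int raycastVoxels(const std::vector<uint8_t>& grid, int nx, int ny, int nz,
                  Vec3 origin, Vec3 dir)
{
    int ix = (int)std::floor(origin.x), iy = (int)std::floor(origin.y),
        iz = (int)std::floor(origin.z);
    int stepX = dir.x > 0 ? 1 : -1, stepY = dir.y > 0 ? 1 : -1,
        stepZ = dir.z > 0 ? 1 : -1;

    // Ray parameter t at which we cross the next voxel boundary on each axis.
    auto boundary = [](float p, float d, int i, int s) {
        float edge = (s > 0) ? (float)(i + 1) : (float)i;
        return d != 0.0f ? (edge - p) / d : 1e30f;
    };
    float tMaxX = boundary(origin.x, dir.x, ix, stepX);
    float tMaxY = boundary(origin.y, dir.y, iy, stepY);
    float tMaxZ = boundary(origin.z, dir.z, iz, stepZ);
    float tDeltaX = dir.x != 0 ? std::fabs(1.0f / dir.x) : 1e30f;
    float tDeltaY = dir.y != 0 ? std::fabs(1.0f / dir.y) : 1e30f;
    float tDeltaZ = dir.z != 0 ? std::fabs(1.0f / dir.z) : 1e30f;

    while (ix >= 0 && ix < nx && iy >= 0 && iy < ny && iz >= 0 && iz < nz) {
        int idx = (iz * ny + iy) * nx + ix;
        if (grid[idx]) return idx;                 // hit a solid voxel
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDeltaY; }
        else                                { iz += stepZ; tMaxZ += tDeltaZ; }
    }
    return -1;  // ray left the grid without hitting anything
}
```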
Voxels aren't all static either. Lattice skinning can be used for characters, and I've seen some research in the area of dynamic KD-tree generation. There has also been research into completely dynamic voxels, where the world is actually changeable in real time: leaving a foot-print in the sand actually leaves a foot-print there, possibly permanently, depending on how you implement it.
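As a toy illustration of that kind of run-time edit, here's a sketch of carving a spherical "foot-print" out of a sparse voxel set stored in a hash map. All names here are hypothetical, and a real engine would also need to update its acceleration structure (the dynamic KD-tree work mentioned above) for the touched region:

```cpp
// Toy run-time voxel edit: erase a sphere of voxels from a sparse hash grid.
#include <cstdint>
#include <unordered_map>

struct Key { int x, y, z; bool operator==(const Key& o) const {
    return x == o.x && y == o.y && z == o.z; } };
struct KeyHash { size_t operator()(const Key& k) const {
    return ((size_t)k.x * 73856093u) ^ ((size_t)k.y * 19349663u)
         ^ ((size_t)k.z * 83492791u); } };

using VoxelSet = std::unordered_map<Key, uint8_t, KeyHash>;  // coord -> material id

void carveSphere(VoxelSet& voxels, int cx, int cy, int cz, int r)
{
    for (int z = cz - r; z <= cz + r; ++z)
        for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x) {
                int dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx * dx + dy * dy + dz * dz <= r * r)
                    voxels.erase(Key{x, y, z});  // permanent unless refilled
            }
}
```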
You could also do interesting procedural stuff if you had the compute power, where grass could grow back over craters over time, and so on. Another benefit of voxels is the sampling flexibility: you can do motion blur, depth of field, and similar effects stochastically.
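Here's a minimal sketch of what that stochastic sampling looks like, with a hypothetical traceScene() standing in for the actual renderer: each sample gets a jittered sub-pixel position, a jittered shutter time (motion blur), and a jittered lens position (depth of field), and the results are averaged:

```cpp
// Stochastic per-pixel sampling: jitter position, time, and lens per sample.
#include <random>

struct Color { float r, g, b; };

// Hypothetical renderer entry point, stubbed so the sketch compiles.
Color traceScene(float px, float py, float time, float lensU, float lensV)
{
    (void)px; (void)py; (void)time; (void)lensU; (void)lensV;
    return {0.5f, 0.5f, 0.5f};
}

Color shadePixel(float px, float py, int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    Color sum{0, 0, 0};
    for (int i = 0; i < numSamples; ++i) {
        float t = uni(rng);                              // shutter-open..close
        float u = uni(rng) - 0.5f, v = uni(rng) - 0.5f;  // point on lens aperture
        Color c = traceScene(px + uni(rng), py + uni(rng), t, u, v);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float inv = 1.0f / numSamples;
    return {sum.r * inv, sum.g * inv, sum.b * inv};
}
```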
You can guarantee a frame rate by rendering the frame progressively and stopping once a certain millisecond budget is spent. That gives the content people a lot of freedom to make the world how they see it in their imagination, no longer limited by what the hardware can provide.
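Something like the following loop, where refinePass() is a hypothetical placeholder for one level of progressive refinement:

```cpp
// Frame-budgeted progressive refinement: render coarse first, keep refining
// while time remains, and present whatever we have when the budget is spent.
#include <chrono>

// Hypothetical: renders one refinement level; returns false once converged.
bool refinePass(int level) { return level < 8; }  // stub: 8 levels

void renderFrame(double budgetMs)  // e.g. ~16 ms for 60 Hz
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    int level = 0;
    while (true) {
        if (!refinePass(level++)) break;  // image is fully refined
        double elapsedMs = std::chrono::duration<double, std::milli>(
                               clock::now() - start).count();
        if (elapsedMs >= budgetMs) break;  // out of time; present what we have
    }
}
```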
All very interesting stuff, and I'm sure it's only the tip of the iceberg.
However... things are not without their challenges. The PS4 may make the life of voxels a little harder; you will likely have to write a custom ray-casting solution for it.
I'm sure many people will continue to use triangles. Hell, even I might. I'm a reasonable guy and will likely measure both approaches and pick the best one for the task at hand. However, ignoring voxels as an option seems kind of silly to me. The game development community needs a good example implementation of voxels in order for them to really take hold in any sort of realistic way. There is an open-source one from NVidia with lots of ideas pulled from my stuff; however, their solution is not really prime-time ready. For example, it doesn't stream from disk, so you can only render what you can fit in memory. I don't really have time to write a public implementation of voxels, or to contribute anything significant to the NVidia one. I'm just too darn busy right now. Who knows though, maybe I'll have time in the future.
Ok, enough voxel talk.
Another big challenge is going to be supporting 3D displays. I don't have a lot of experience with stereoscopic 3D conversions for games yet, but I will very soon; I'll write more when I know more. Doing the conversion in a deferred renderer seems like it might be a bit troublesome.
I'm sure there is much more I could chat about here with next-gen graphics, taking each of the previous points in detail. Maybe I'll do a dedicated post on each one.
The breakeven point for voxels is when we start hitting a low enough pixel-to-vertex ratio for submitted triangles; eventually the setup overhead of rasterizing very small triangles won't be worth the performance benefits (not enough pixel-to-pixel coherence).
I don't think voxels make procedural generation a simpler problem than with tris, do they? They do mean that you're dealing with solids instead of surfaces, which does make CSG operations simpler (blowing holes in things, leaving behind globs of dirt, etc.).
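A quick sketch of why solids make those CSG operations so simple: with a signed-distance representation, subtraction is just a per-sample max. The terrain and blast-sphere functions below are hypothetical stand-ins:

```cpp
// CSG on solids: negative = inside, positive = outside (signed distance).
#include <algorithm>
#include <cmath>

float terrain(float, float y, float) { return y; }  // stub: flat ground at y = 0
float blastSphere(float x, float y, float z) {      // stub: sphere of radius 3
    return std::sqrt(x * x + y * y + z * z) - 3.0f;
}

// Subtraction A - B is just max(dA, -dB) per sample: "blow a hole" in the
// terrain by subtracting the blast volume.
float worldAfterExplosion(float x, float y, float z)
{
    return std::max(terrain(x, y, z), -blastSphere(x, y, z));
}
```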
Whenever I'm asked what the future of video game graphics looks like, I always point to the CG movie industry. They went through exactly what we're going through now and have already figured it all out the hard way. We need only learn from them.
The biggest gap between game and movie graphics at the moment is, as you point out, texture resolution. That one's just a matter of time. That problem will get solved by the hardware industry.
But the biggest problems aren't necessarily the kinds of things that would jump to mind for a gamer or game programmer. From a gamer's perspective, yes, the trend is more geometry, more detail, bigger worlds, etc. But even with all that you'll still end up pushing against a wall.
By way of example, imagine a scene from Avatar, say. No characters, no animation... just a still of one of those landscape shots. Most viewers would say it's completely indistinguishable from a photograph, right? But throw all that same content at a game engine (on some hypothetical future hardware that has no problem chewing through that much data) and, I dare say, you wouldn't get the same reaction. The devil is in the details.
First, yes, we must abandon triangles. The movie industry already has, a very long time ago. Micropolygons have gone through almost three decades of battle testing. Aside from being beautifully clean and elegant to implement, they stand up to the realism bar quite nicely. The reason lies in *the* fundamental difference between game and movie rendering, which is that _shading must be done in surface space, not screen space_. This is a hard prerequisite of any attempt at photorealism. There are any number of artifacts that are (provably) impossible to avoid otherwise. The most obvious is temporal aliasing, but even spatial aliasing is impossible to get rid of without having surface-relative derivatives.
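For readers unfamiliar with the approach, here's a rough sketch of the REYES-style dice-and-shade step being described: a parametric patch is diced into a grid of sub-pixel quads ("micropolygons"), and shading happens at the grid vertices in surface (u, v) space, before any screen-space sampling. The functions here are hypothetical stubs, and a real dicer would pick the dice rate from the patch's screen-space extent:

```cpp
// REYES-style dice-and-shade sketch: shade in surface space, not screen space.
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 evalPatch(float u, float v) { return {u, v, 0.0f}; }         // stub: flat patch
Vec3 shadeSurfacePoint(float u, float v) { return {u, v, 1.0f}; } // stub shader

struct ShadedGrid { int nu, nv; std::vector<Vec3> pos, color; };

ShadedGrid diceAndShade(int nu, int nv)  // nu*nv chosen ~1 quad per pixel
{
    ShadedGrid g{nu, nv};
    g.pos.reserve((nu + 1) * (nv + 1));
    g.color.reserve((nu + 1) * (nv + 1));
    for (int j = 0; j <= nv; ++j)
        for (int i = 0; i <= nu; ++i) {
            float u = (float)i / nu, v = (float)j / nv;
            g.pos.push_back(evalPatch(u, v));
            // Shading happens here, with exact surface-space (u, v)
            // coordinates and derivatives available, not later in screen space.
            g.color.push_back(shadeSurfacePoint(u, v));
        }
    return g;  // the micropolygon quads are then sampled per pixel
}
```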
The second is finer control over sampling. That is, again, as any rendering guru will tell you, fundamental to achieving a photorealistic image. The kinds of hard-coded MSAA patterns that the GPUs implement attempt to handle the average case, which then means you have a pretty terrible worst case. A programmable sampling stage is also the only way to get accurate motion blur and depth of field. Which brings me to my next point: we need to start moving away from screen-space approximations and do the math for these kinds of effects in 3D space instead. There have been some trends in this direction recently (attempts at real AO in real-time, as an alternative to SSAO, for example) which I think are on the right track.
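As a sketch of what "real AO in real-time" means versus SSAO, here's the textbook Monte Carlo formulation: cast cosine-weighted hemisphere rays from the shaded point in 3D space and count how many escape. The occluded() scene query is a hypothetical stub:

```cpp
// Monte Carlo ambient occlusion in 3D space (as opposed to screen-space AO).
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    return mul(a, 1.0f / std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z));
}

// Hypothetical scene query, stubbed so the sketch compiles.
bool occluded(Vec3, Vec3, float) { return false; }

// Fraction of cosine-weighted hemisphere rays from point p (normal n) that
// escape: 1 = fully open, 0 = fully occluded.
float ambientOcclusion(Vec3 p, Vec3 n, int samples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    // Build an orthonormal basis (t, b, n) around the normal.
    Vec3 up = std::fabs(n.z) < 0.999f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 t = normalize(cross(up, n));
    Vec3 b = cross(n, t);
    int escaped = 0;
    for (int i = 0; i < samples; ++i) {
        // Cosine-weighted direction (Malley's method): uniform disk, project up.
        float r = std::sqrt(uni(rng)), phi = 6.2831853f * uni(rng);
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
        Vec3 dir = add(add(mul(t, x), mul(b, y)), mul(n, z));
        if (!occluded(p, dir, /*maxDist=*/2.0f)) ++escaped;
    }
    return (float)escaped / samples;
}
```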
This may sound a bit pedantic, and perhaps off topic (your post was about next gen game graphics, not "how to make games look like movies"), but these are the kinds of long term considerations that I think the industry needs to start thinking about, perhaps not in the next generation, but at the latest definitely by the one after that. But even if you ignore the aliasing arguments, micropolygons also have many of the advantages you mention about voxels, such as automatic LOD and consistent frame rate guarantees. All this while being a much more well-understood domain, with fewer open research questions (animation) or constraints (memory, hardware ray casting support).
If Larrabee had succeeded, it was my firm belief that there would have been renewed attempts at convergence from both sides of the fence -- game renderers attempting pseudo-micropolygon architectures and getting a corresponding jump in realism, and offline micropolygon renderers getting Larrabee ports and corresponding order-of-magnitude speedups. I haven't abandoned my prediction; I simply have to amend it slightly: if the other IHVs still have such fully programmable architectures on their roadmaps, the next significant jump in video game graphics will come when those are released.
Great comment, Sharif. Micropolygons are also a huge area of possibility for the future. I need to do more research on the specifics of micropolygons. Last I heard, there were problems with cracks between surfaces, which the movie industry would fix by going through frame by frame and hand-editing them out. That was an acceptable solution for them; however, it seems like kind of a deal breaker for real-time games. Like I said though, perhaps they have found a tech solution for that, or hell, maybe some game industry folks will! :)
Voxels represent possibility to me. I don't have to have a pre-calculated mesh in order to render something. The objects can dynamically change in response to the influences of the world and the player's impact. After 10 years of designing virtual worlds, I am convinced it's the way to go.