We teamed up with Tencent and NVIDIA to bring the Monster Hunter Online benchmark application to all fans and players. It was built on the latest version of CryENGINE3, featuring physically based shading as well as a direct adoption of NVIDIA's GameWorks.
Monster Hunter is a critically acclaimed game IP owned worldwide by Capcom. Tencent is the developer of its MMO version, Monster Hunter Online. I worked directly with their development team as the technical contact on the Crytek side from the beginning of the project in 2011 until its public beta.
In collaboration with Giant Networks and NVIDIA, we developed a benchmark for Preemptive Strike. It showcases the DX11 features of CryENGINE3 along with the latest graphics technologies from NVIDIA. The game itself is a third-person shooter with a military backstory, initially planned for release late this year. Official website: http://tps.plagame.cn/
I was honored to be invited to NVIDIA's CGDC session on behalf of Crytek to give a public presentation with their engineer. It covered several key graphics features (including SSDO, SMAA, TXAA, and tessellation), and we also shared with the audience what we had learned during development. Attached is the ppt of this presentation: CGDC2013_TPS_bmark.pptx
Global illumination is an eternal topic in computer graphics. It models how light is emitted and transported among all kinds of materials, and it greatly improves the overall appearance of a 3D scene. However, the traditional ways of computing GI are costly in terms of storage and processing time, which makes them impractical for real-time applications.
Precomputed radiance transfer (PRT) was a popular new technique for bringing real-time GI to games back in 2005. I adopted the technique in our in-house engine and also created an offline tool to build the GI data. As shown in the image, the game level received more realistic lighting and shading. It features:
- Dynamic soft shadows
- Integration with normal maps
- High performance
I also discussed the limitations of this kind of technique; more details can be found in the attached ppt file: Precomputed Radiance Transfer
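To illustrate why PRT is so cheap at runtime: for diffuse surfaces, the expensive visibility and cosine terms are baked offline into a per-vertex transfer vector, and runtime shading collapses to a dot product with the environment lighting projected into the same spherical-harmonic basis. A minimal sketch (the array shapes and random data here are illustrative assumptions, not the engine's actual format):

```python
import numpy as np

# Hypothetical per-vertex transfer coefficients, precomputed offline.
# Each row holds the SH coefficients encoding how that vertex responds
# to distant lighting (self-shadowing and the cosine term baked in).
n_verts, n_coeffs = 4, 9          # 3rd-order SH -> 9 coefficients
rng = np.random.default_rng(0)
transfer = rng.random((n_verts, n_coeffs))

# Environment lighting projected into the same SH basis (per frame).
light_sh = rng.random(n_coeffs)

# Runtime shading: one dot product per vertex.
radiance = transfer @ light_sh
```

This is what makes the technique fast: regardless of how complex the baked light transport is, the per-vertex cost is a fixed-length dot product.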
The presentation was given internally to the engineering team to share graphics trends in video games and how we introduced global illumination into our engine.
I read this article on Gamasutra recently. It does give me a different perspective on the workflow of an automated shader system, mostly because it was written from the artist's point of view.
The gist is: do not let non-programmers write shaders, which are exactly the code running on the GPU. The workflow of such a system seems flawed because artists have to take time to learn the tools, yet they generally cannot produce reliable shaders, and programmers still need to be involved to optimize performance.
Actually, some people have already debated graphical shader systems and consider them bad. Shawn Hargreaves said the uber shader is not a good coding practice and is always hard to maintain. I agree, because the main idea behind the scenes is to use static flow control to create shader permutations, which of course increases the total complexity.
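The complexity growth is easy to see: each independent feature flag doubles the number of compiled variants. A small sketch (the flag names are made up for illustration) enumerating the permutations an uber shader's static flow control would expand into:

```python
from itertools import product

# Hypothetical feature flags an uber shader might expose via #define.
features = ["USE_NORMAL_MAP", "USE_SPECULAR", "USE_FOG"]

def permutations(flags):
    """Enumerate every #define combination the uber shader can compile to."""
    combos = []
    for bits in product([0, 1], repeat=len(flags)):
        combos.append([f for f, b in zip(flags, bits) if b])
    return combos

# 3 flags already yield 2**3 = 8 shader variants; growth is exponential.
variants = permutations(features)
```

With a few dozen flags, the variant count becomes unmanageable, which is exactly the maintenance problem being argued about.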
Others, like Christer Ericson and Wolfgang Engel, think shaders are among the most performance-critical parts of a game, and that it's wrong to hand this responsibility to artists. Wolfgang even pointed out that such a system will lead to a 'shader explosion': large numbers of shader copies are generated due to the lack of restrictions, which wastes GPU horsepower.
They suggest a tool that exposes a limited set of variables as checkboxes or other kinds of GUI controls, rather than a do-whatever-you-want interface. What I can imagine is that this requires more iterations between programmers and artists to determine what the effect should look like and how many variables are tweakable, which means those programmers will be occupied with maintenance, and that is not good news.
On the contrary, the uber shader approach is great for rapid prototyping. The advantage of the system is that we can pull off quite a few custom material effects without a large amount of programmer time. With the base shaders created by tech artists, we can also avoid serious performance issues. Here is a really good example. (also see the comment by J-S)
So both approaches have their pros and cons. I don't think there's an absolutely right way to go; perhaps a hybrid approach would be the best of both.
This is a quick prototype of a racing game. At first, I was just doing some interpolation tests on splines, as simple as moving a box along a spline. Then I thought about adding some control logic for those boxes, and that's where the game came from :)
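Moving a box along a spline boils down to evaluating a curve segment at a parameter t. As a sketch, here is a Catmull-Rom segment evaluator (my assumption for illustration; the post doesn't say which spline type the prototype used):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom segment at t in [0, 1].

    The curve passes through p1 (t=0) and p2 (t=1); p0 and p3 are the
    neighboring control points that shape the tangents.
    """
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Stepping t from 0 to 1 each frame and chaining segments gives the "box moving along the spline" behavior.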
Basically, there's a spline editor integrated into the game. It can be activated with a hotkey. A context menu is provided to create/save/load splines, and you can also add/delete spline nodes and adjust the shape by dragging the control points. Then toggle back to gameplay to test the 'path' immediately.
It is a 2D game with a top-down camera locked to the player. To make it feel better, I implemented some 'fake vehicle physics': the car automatically brakes before a turn based on its current speed. There's also an AI car to compete against. Once you finish a lap, your score is updated according to your rank, which determines whether you can challenge the next level.
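The 'fake vehicle physics' can be sketched as a simple rule: derive a target speed from the sharpness of the upcoming turn, then brake toward it. All constants and function names here are hypothetical, just to show the shape of such logic:

```python
def target_speed(turn_angle_deg, top_speed=200.0, min_speed=40.0):
    """Map upcoming turn sharpness to a safe speed (illustrative constants)."""
    sharpness = min(abs(turn_angle_deg) / 90.0, 1.0)  # 0 = straight, 1 = hairpin
    return top_speed - (top_speed - min_speed) * sharpness

def update_speed(current, turn_angle_deg, brake_rate=5.0):
    """Brake toward the target speed before a turn; never undershoot it."""
    target = target_speed(turn_angle_deg)
    if current > target:
        return max(current - brake_rate, target)
    return current
```

Running this each frame against the next spline node's turn angle gives the auto-brake feel without any real physics simulation.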