Rendering in UE4
Source: cnblogs  Author: x5lcfd  Date: 2019/8/26 9:51:11

Intro

Think about performance.

Identify the target framerate, and shape your approach around hitting that target framerate.

  • Everything needs to be as efficient as possible
  • Adjust pipelines to engine and hardware restrictions
  • Try to offload parts to pre-calculations
  • Use the engine’s pool of techniques to achieve quality at suitable cost

What is rendering?

  • CPU and GPU handle different parts of the rendering calculations
  • They are interdependent and can bottleneck each other
  • Know how the load is distributed between the two

Shading Techniques

  • Deferred shading
    • Compositing-based, using the G-Buffer
    • Shading happens in deferred passes
    • Good at rendering dynamic lighting
    • More flexible when it comes to disabling features, less flexible when it comes to surface attributes
  • Forward shading

Before Rendering

Rendering threads

Rendering is a heavily parallel process. It happens on multiple threads; the main threads are CPU (Game), CPU (Draw), and GPU. In reality there are many threads that branch and converge.

UE4 Cmd (stat unit, stat unitgraph)

CPU – Game thread

Before we can render anything, we first need to know where everything will be. The game thread calculates all logic and transforms:

  1. Animations
  2. Positions of models and objects
  3. Physics
  4. AI
  5. Spawn and destroy, hide and unhide

Results: UE4 now knows all transforms of all models.

CPU – Draw thread

Before we can use the transforms to render the image, we need to know what to include in the rendering; ignoring this question makes rendering needlessly expensive on the GPU.

Occlusion process – Builds up a list of all visible models/objects, happens per object – not per triangle

4 Stage process

  1. Distance Culling (manually, LOD Component, Cull Distance Volume)
  2. Frustum Culling (what is in front of the camera, wide FOV more objects to render)
  3. Precomputed Visibility
  4. Occlusion Culling

Precomputed visibility answers more complex occlusion questions: objects occluded by other objects. It divides the scene into a grid, and each grid cell remembers what is visible from that location.

Dynamic occlusion culling checks the visibility state of every model; it runs mostly on the CPU, but some parts are handled by the GPU.
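As a rough illustration (plain C++ with hypothetical names, not engine code), the first two culling stages can be sketched as simple tests on each object's bounding sphere:

```cpp
#include <cmath>

// Hypothetical minimal scene object: a bounding sphere (center + radius).
struct SphereBounds { float x, y, z, radius; };

static float Distance(const SphereBounds& b, float cx, float cy, float cz) {
    float dx = b.x - cx, dy = b.y - cy, dz = b.z - cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Stage 1: distance culling -- skip objects beyond their max draw distance.
bool PassesDistanceCull(const SphereBounds& b, float camX, float camY, float camZ,
                        float maxDrawDistance) {
    return Distance(b, camX, camY, camZ) <= maxDrawDistance;
}

// Stage 2: frustum culling. A real frustum tests six planes; one plane is
// enough to show the idea. The sphere survives if it lies on, or
// intersects, the positive side of the plane.
struct Plane { float nx, ny, nz, d; };  // plane: n . p + d = 0

bool InFrontOfPlane(const SphereBounds& b, const Plane& p) {
    float signedDist = p.nx * b.x + p.ny * b.y + p.nz * b.z + p.d;
    return signedDist >= -b.radius;
}
```

An object is rendered only if it survives every stage, which is why a wide FOV (a larger frustum) means more objects to render.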

Occlusion Performance Implication

  1. Set up manual culling (i.e. distance culling, precomputed visibility)
  2. Even things like particles occlude
  3. Many small objects cause more stress on CPU for culling
  4. Large models will rarely be occluded and thus increase GPU load
  5. Know your world and balance objects size vs count

Results: UE4 now has a list of models to render.

Geometry Rendering

The GPU now has a list of models and transforms, but if we just rendered this info out we could cause a lot of redundant pixel rendering. Similar to excluding objects, we need to exclude pixels: we need to figure out which pixels are occluded.

To do this, we generate a depth pass and use it to determine if the given pixel is in front and visible.

GPU – Prepass / Early-Z Pass

Render the teapot first, then render the box. That simple ordering strategy doesn't work in general, which is why we need to depend on the depth of those pixels to know whether an object or pixel is behind or in front of another object, and then decide if we need to render it.

Question 1. How does the renderer associate the early-z pass with an actual object in the scene?

  It doesn’t really associate things object by object. It knows the position of each pixel on the screen, so when it needs to render an object or a pixel, it knows whether to ignore it or keep it.

Draw calls

  • Now we are ready to actually render some geometry. To be efficient, the GPU renders drawcall by
    drawcall, not triangle by triangle.
  • A drawcall is a group of triangles that share the same properties.
  • Drawcalls are prepared by the CPU (Draw) thread
  • It distills rendering info for objects into a GPU state ready for submission

2,000 – 3,000 drawcalls is reasonable, more than 5,000 is getting high, and more than 10,000 is probably a problem. On mobile this number is far lower (a few hundred max). The drawcall count is determined by the visible objects.

  • Drawcalls have a huge impact on the CPU (draw) thread
  • Has high overhead for preparing GPU states
  • Usually we hit the issues with drawcalls way before issues with tri count.

To get an intuition for the overhead of a drawcall versus its triangles: copying a single 1 GB file vs. copying one million 1 KB files.

 

Drawcalls performance implications:

  • Render your triangles with as few drawcalls as possible
  • 50,000 triangles can run worse than 50 million depending on scene setup (drawcalls)
  • When optimizing scene, know your bottleneck (Drawcall vs Tri count) 

Optimizing Drawcalls

Merging objects

To lower the drawcall count it is better to use fewer, larger models than many small ones. But you can take this too far; over-merging impacts other things negatively:

  1. Occlusion
  2. Lightmapping
  3. Collision calculation
  4. Memory

Good balance between size and count is a good strategy.

Drawcall is related directly to how many objects you have and how many unique material IDs you have.
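That relationship can be sketched as a back-of-the-envelope estimate (hypothetical names, plain C++, not engine code): roughly one drawcall per unique (object, material ID) pair among the visible objects.

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical scene description: each visible object lists the material
// IDs used by its mesh sections.
struct VisibleObject {
    std::string name;
    std::vector<int> materialIds;
};

// Every unique material ID on an object is roughly one mesh section,
// and thus roughly one drawcall.
int EstimateDrawcalls(const std::vector<VisibleObject>& objects) {
    int drawcalls = 0;
    for (const VisibleObject& obj : objects) {
        std::set<int> unique(obj.materialIds.begin(), obj.materialIds.end());
        drawcalls += static_cast<int>(unique.size());
    }
    return drawcalls;
}
```

The estimate makes the merging advice concrete: merging two walls that share a material removes a drawcall, while merging objects with different materials does not.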

Merging guidelines

  1. Target low poly objects
  2. Merge only meshes within the same area
  3. Merge only meshes sharing the same material
  4. Meshes with no or simple collision are better for merging
  5. Distant geometry is usually great to merge (fine with culling)

HLODs

Hierarchical Level of Detail

  1. Regular LODs mean a model becomes lower poly in the distance
  2. Essentially swaps one object for another, simpler object (fewer materials)
  3. Hierarchical LOD (HLOD) is a bigger version: it merges objects together in the distance to lower the drawcall count
  4. Groups objects together into single drawcalls
  5. Grouping needs to be done manually

Instanced Rendering

  1. Groups identical meshes together into single drawcalls
  2. Grouping needs to be done manually
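The saving can be sketched in plain C++ (hypothetical types, not the UE4 API): N copies of a mesh drawn individually cost N drawcalls, while an instanced batch submits the mesh once with an array of per-instance transforms, for a single drawcall.

```cpp
#include <vector>

// Hypothetical per-instance transform: just a translation, for illustration.
struct Transform { float x, y, z; };

// Individually drawn copies: one drawcall per object.
int DrawcallsIndividual(int copies) { return copies; }

// Instanced rendering: the mesh and its material state are submitted once,
// together with all instance transforms, so the whole batch is one drawcall.
struct InstancedBatch {
    std::vector<Transform> instances;
    int Drawcalls() const { return instances.empty() ? 0 : 1; }
};
```

This is why instancing suits things like forests or rubble: thousands of copies of the same mesh collapse into a handful of drawcalls.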

Vertex Processing

The first thing that processes the drawcall is vertex processing.

The vertex shader takes care of this process.

A vertex shader is a small program specialized in vertex processing.

It runs completely on the GPU, so it is fast.

Its input is vertex data in 3D space; its output is vertex data in screen space.

Vertex shaders – Common tasks

  • It converts local VTX positions to world position
  • It handles vertex shading/coloring
  • It can apply additional offsets to vertex positions

Practical examples of world position offset vertex shaders are

  1. Cloth
  2. Water displacement
  3. Foliage wind animation

Why animate things this way?

Scalability with a very high number of vertices: imagine a forest, which could involve millions of vertices to animate.

Vertex shaders do not modify the actual object or affect the scene state; the effect is purely visual.

The CPU is not aware of what the vertex shaders do, thus things like physics or collisions will not take it into account.
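A world position offset effect like foliage wind can be sketched as a pure function of vertex position and time (plain C++ standing in for the vertex shader; names and the wave formula are illustrative assumptions). Note that the object's stored transform is never touched, which is exactly why the CPU, physics, and collisions don't see the motion.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical foliage-wind offset: a sine wave driven by time and the
// vertex's world position, scaled by an amplitude. This mimics what a
// world position offset does in a vertex shader: it displaces the
// rendered vertex but never changes the object's actual transform.
Vec3 WindOffset(const Vec3& worldPos, float time, float amplitude) {
    float phase = worldPos.x * 0.1f + time;   // vary per vertex so the sway isn't uniform
    return { amplitude * std::sin(phase), 0.0f, 0.0f };
}

Vec3 ApplyOffset(const Vec3& worldPos, const Vec3& offset) {
    return { worldPos.x + offset.x, worldPos.y + offset.y, worldPos.z + offset.z };
}
```

Scaling the amplitude to zero for distant geometry is the cheap way to "disable" the effect where it can't be seen.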

Vertex Shaders Performance Implications

  1. The more complex the animations performed the slower
  2. The more vertices affected the slower
  3. Disable complex vertex shader effects on distant geometry

Rasterizing and G-Buffer

Rasterizing

The GPU is now ready to render pixels. Determining which pixels should be shaded is called rasterizing; it is done drawcall by drawcall, then triangle by triangle.

 

On a magnified pixel grid, a blue triangle's rasterization yields a set of orange pixels. Note again that, for efficiency, this happens drawcall by drawcall, and then triangle by triangle in the same order they were submitted to the GPU.

Pixel shaders are responsible for calculating the pixel color; the input is generally interpolated vertex data, texture samplers, etc.

Rasterizing inefficiency

When rasterizing dense meshes at a distance, they converge to only a few pixels, which is a waste of vertex processing. For example, a 100k-triangle object seen from so far away that it would be one pixel big will only show one pixel of its closest triangle!

Overshading

Due to hardware design, the GPU always uses a 2x2 pixel quad for processing. If a triangle is very small or very thin, the GPU might process 4 pixels while only 1 pixel is actually filled.

  

The gray grid is the pixels, and the orange grid on top shows the 2x2 pixel quads the GPU processes. If we have a tiny triangle that covers only three pixels, ideally we would only need to process those three pixels to output the final color. In reality, however, the GPU needs to process 12 pixels (three full 2x2 quads) just to render those three pixels. This is our first waste of pixel processing for small triangles.

An even worse case is a long, thin triangle that cuts across many quads while filling only a sliver of each.

To visualize overshading: Lit -> Optimization Viewmodes -> Quad Overdraw.
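The 12-versus-3 arithmetic above can be checked directly: group the covered pixels into their 2x2 quads and count how many pixels the GPU actually has to process (a hypothetical helper in plain C++, not engine code).

```cpp
#include <set>
#include <utility>
#include <vector>

// Pixels a triangle actually covers, as (x, y) coordinates.
using Pixel = std::pair<int, int>;

// The GPU shades whole 2x2 quads: a quad is identified by (x / 2, y / 2).
// Pixels processed = touched quads * 4, which can be far larger than the
// number of pixels actually filled -- that gap is the overshading cost.
int PixelsProcessed(const std::vector<Pixel>& covered) {
    std::set<Pixel> quads;
    for (const Pixel& p : covered) {
        quads.insert({ p.first / 2, p.second / 2 });
    }
    return static_cast<int>(quads.size()) * 4;
}
```

Three covered pixels landing in three different quads cost 12 processed pixels, while four pixels packed into one quad cost only 4: dense or thin triangles waste work, compact coverage does not.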

Rasterization and Overshading Performance Implications

  1. Triangles are more expensive to render in great density
  2. When seen at a distance the density increases
  3. Thus, reducing triangle count at a distance (lodding / culling) is critical
  4. Very thin triangles are inefficient because they pass through many 2x2 pixel quads yet only fill a fraction of them
  5. The more complex the pixel shader, the more expensive overshading becomes

Results are written out to:

  1. Multiple G-Buffers in case of deferred shading
  2. Shaded buffer in case of forward shading

G-Buffer

It is a rendered image encoding special data. These buffers are then used for different purposes, mainly lighting; the frame is rendered out into multiple G-Buffers.

Original link: http://www.cnblogs.com/x5lcfd/p/11407650.html
