Tuesday 29 November 2016

HA7 Task 6 – Constraints

Polygon count and file size

Polygon count refers to the number of triangles being rendered per frame. Because software and hardware are limited, only a certain number of polygons can be rendered at a time; if a model exceeds that limit, the image will either not render fully or be pushed back to a lower quality. The more polygons a creation contains, the longer it takes to render, because the software/hardware has to work through every individual polygon to make sure the rendered image looks the same as it did before rendering. The other problem with a high polygon count is that the file size of the model will be larger, meaning more memory is taken up, fewer assets can be made and the object takes longer to render. The polygon count and file size only become large when the creator is aiming for a high-quality model; a lower-quality model keeps both the polygon count and the file size down.
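
To show roughly how polygon count feeds into file size, here is a small Python sketch that estimates the memory a triangle mesh occupies. The per-vertex layout (position, normal and UV as 32-bit floats) and the 32-bit indices are assumptions made for the example; real file formats store more or less per vertex.

```python
# Rough size estimate for a triangle mesh. The per-vertex layout
# (position + normal + UV as 32-bit floats) and 32-bit triangle
# indices are assumptions for illustration, not a fixed standard.

def mesh_size_bytes(vertex_count, triangle_count):
    bytes_per_vertex = (3 + 3 + 2) * 4   # position, normal, UV
    bytes_per_triangle = 3 * 4           # three 32-bit indices
    return vertex_count * bytes_per_vertex + triangle_count * bytes_per_triangle

# A 10,000-triangle model with roughly 5,000 vertices:
print(mesh_size_bytes(5_000, 10_000) / 1024, "KiB")   # ~273 KiB
```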




Faces on an object are what take up most of the memory. If you want a model that is exact and fairly high end, you will need to include more faces so that the object curves in the right places and looks spot on. The problem is the memory this takes up, which means that when you create a model you should delete any faces that will never be used or seen in the final image, e.g. the face on the underside of a box sitting on the floor, as people will never see the underside of that box. A polygon count in modelling software can also be quite misleading, because the triangle count is higher (each quad face splits into two triangles), and since most modern graphics hardware is built to accelerate the rendering of triangles, renders can take longer than expected when the creator has been working from the lower, misleading figure.
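
The quad-versus-triangle point can be made concrete with a short sketch that converts a modelling package's reported face count into the triangle count the GPU actually draws. The assumption that a quad splits into 2 triangles and an n-sided face into n - 2 is the usual fan triangulation; specific exporters may differ.

```python
# Convert a reported "polygon count" into the triangle count the GPU
# renders, assuming each n-sided face triangulates into n - 2 triangles.

def triangle_count(face_sides):
    return sum(sides - 2 for sides in face_sides)

faces = [4] * 1_000 + [3] * 200 + [5] * 50   # 1,250 faces reported
print(len(faces), "faces ->", triangle_count(faces), "triangles")
# 1250 faces -> 2350 triangles
```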




Triangle count vs Vertex count

Vertex count is a lot more important than triangle count, since it is vertex count that actually drives performance and memory use. For traditional and simpler purposes, though, artists and modellers stick to triangle count as a performance measurement. The two counts can stay close as long as the triangles are connected: 1 triangle is made up of 3 vertices, add another vertex and you get another triangle, so that's 2 triangles and 4 vertices, add another vertex and you get 3 triangles, and so on. Relying on this can cause complications with things such as shading and UV mapping, as that data is placed onto each triangle face, and the model can end up looking broken once it is rendered in game. Overusing smoothing groups, over-splitting the UVs and so on to make sure the model looks correct leads to a larger vertex count, which puts extra stress on the rendering process, making the render take longer and the file size larger.
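
The connected-triangle arithmetic above works out to a simple rule: n connected triangles in a strip need n + 2 vertices, while the same triangles stored separately need 3n. The sketch below just compares the two counts; a strip is one illustrative layout, not the only way engines share vertices.

```python
# Vertex counts for n triangles: connected as a strip versus unshared.

def strip_vertices(n_triangles):
    return n_triangles + 2      # 1 triangle = 3 verts, each extra adds 1

def separate_vertices(n_triangles):
    return n_triangles * 3      # no vertex sharing at all

for n in (1, 2, 3, 100):
    print(f"{n} triangles: {strip_vertices(n)} shared vs {separate_vertices(n)} unshared vertices")
# 1 triangles: 3 shared vs 3 unshared vertices
# 2 triangles: 4 shared vs 6 unshared vertices
# 3 triangles: 5 shared vs 9 unshared vertices
# 100 triangles: 102 shared vs 300 unshared vertices
```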




Rendering time

Rendering is the final process of modelling and the most important, as it produces the image that people actually see on their screens. There are multiple methods of rendering an image or model; some examples are non-realistic wireframe rendering, scanline rendering, ray tracing and more. The time rendering takes varies with many factors, such as scene size and rendering method, and the different rendering modes are better suited to either photo-realistic or real-time rendering.

Real-time

Rendering for interactive media, including games and simulations, is calculated and displayed in real time. Real-time rendering typically runs at around 20 to 120 frames per second; the lower the frame rate, the more lag the game or simulation will feel. The main objective of real-time rendering is to render as many frames as possible each second to give the player a better experience of whatever they are doing. The minimum frame rate a game can get away with is about 24 frames per second, since that is roughly the rate at which the human eye still perceives smooth motion. Some rendering software can simulate effects such as lens flare or depth of field, and how far these effects can go depends on the software involved. They are used to give the player a more realistic experience, offering a much greater sense of immersion and merging reality with fiction. As technology has improved, so has rendering software, allowing greater amounts of realism and physics to be included in games; the processing power available for real-time rendering keeps getting better, and techniques such as HDR rendering have given many people the enjoyment of more realistic graphics. Real-time rendering is usually polygonal and aided by the computer's GPU, and it follows a graphics pipeline that runs through the application stage, the geometry stage and then the remaining stages discussed in my Task 2.
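
A useful way to see why frame rate matters in real time is to turn it into a per-frame time budget: everything the game does for a frame has to fit inside it. The sketch below does that arithmetic for the frame rates mentioned above.

```python
# Per-frame time budget at common real-time frame rates. Game logic
# and rendering together must finish inside this budget every frame.

for fps in (24, 30, 60, 120):
    budget_ms = 1000 / fps
    print(f"{fps:>3} fps -> {budget_ms:5.2f} ms per frame")
# 24 fps -> 41.67 ms per frame
# 30 fps -> 33.33 ms per frame
# 60 fps -> 16.67 ms per frame
# 120 fps ->  8.33 ms per frame
```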



Non real-time


Non-real-time rendering makes it possible to get a much higher-quality image out of limited processing power, because each frame can take as long as it needs. It is used mainly within feature films and animations. How long a scene takes to render depends on how complex it is. Non-real-time rendering works by rendering the scenes out to a hard disk, from where they can be transferred to other media such as motion-picture film or optical disc. These rendered scenes are usually played back at 24, 25 or 30 frames per second, which is enough for the human eye to see the illusion of movement.
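
Since each offline frame can take minutes rather than milliseconds, total render time adds up quickly. The sketch below estimates it from the frame rate, the running time of the footage and an assumed per-frame render time; the 10-minutes-per-frame figure is purely an example.

```python
# Estimate total render time for offline (non-real-time) footage.
# The 10-minutes-per-frame figure is an assumed example value.

def total_render_hours(duration_seconds, fps, minutes_per_frame):
    frames = duration_seconds * fps
    return frames * minutes_per_frame / 60

# A 90-second animation at 24 fps, 10 minutes per frame:
print(total_render_hours(90, 24, 10), "hours")   # 360.0 hours
```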




When attempting to render photorealistic scenes, the techniques developers use are ray tracing and radiosity. Ray tracing generates an image by tracing the path of light through each pixel in an image plane and simulating the effects of its encounters with the objects it meets. Radiosity, instead, models light diffusely reflecting off the surfaces of objects and bouncing between them before it reaches the eye. Other techniques are used for things like smoke, rain and fire: particle systems, volumetric sampling, caustics and subsurface scattering, each for a different physical entity that needs to be rendered.
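
To give a feel for what "tracing the path of light through pixels" means in practice, here is a minimal sketch of the test a ray tracer repeats for every pixel: does the ray through that pixel hit an object? The one-sphere scene and ASCII output are purely illustrative; a real renderer adds shading, reflections, radiosity-style bounces and many more object types.

```python
# Minimal ray tracing core: fire a ray through each pixel and test
# whether it hits a single sphere placed in front of the camera.

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0        # any real solution means a hit

width, height = 24, 12
for y in range(height):
    row = ""
    for x in range(width):
        u = (x + 0.5) / width * 2 - 1    # pixel position on the image plane
        v = 1 - (y + 0.5) / height * 2
        ray_dir = [u, v, -1.0]
        row += "#" if hit_sphere([0.0, 0.0, 0.0], ray_dir, [0.0, 0.0, -3.0], 1.0) else "."
    print(row)
```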




Rendering in non-real time can be quite expensive because of the complex techniques needed to simulate so many kinds of physics. As time has gone by, the technology for non-real-time rendering has improved, allowing very high-quality images and scenes to be produced, and realistic rendering is now very powerful. Film studios that use non-real-time rendering also use render farms for computer-generated animation, so that images are generated in a timely manner. A render farm is a high-performance computer system built to render CGI, mainly for visual-effects work in film and TV. Nowadays film studios aren't the only ones able to do this: hardware has become much more common and costs have dropped, so people all over the world can make films in the same way that movies and TV programmes are made. Their work may not be as good or as long, but it is still possible, and it lets people demonstrate their skills and helps them towards jobs in the industry.
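
Render farms work because frames are independent and can be split across machines, so the wall-clock time divides roughly by the number of nodes. The sketch below estimates that saving; the frame count, per-frame time and node counts are made-up numbers carried over from the example above.

```python
# Wall-clock render time when independent frames are split across a
# render farm. All numbers here are illustrative, not real benchmarks.

import math

def wall_clock_hours(frames, minutes_per_frame, nodes):
    frames_per_node = math.ceil(frames / nodes)
    return frames_per_node * minutes_per_frame / 60

frames, minutes_per_frame = 2_160, 10     # the 90-second, 24 fps example
for nodes in (1, 10, 100):
    print(f"{nodes:>3} nodes -> {wall_clock_hours(frames, minutes_per_frame, nodes):.1f} hours")
# 1 nodes -> 360.0 hours
# 10 nodes -> 36.0 hours
# 100 nodes -> 3.7 hours
```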

