Tuesday 6 December 2016

HA7 Task 2 - Displaying 3D Polygon Animations

GPU

A graphics processing unit (GPU) is a piece of hardware used primarily for 3D work: a single-chip processor that creates lighting effects and transforms objects every time a 3D scene is drawn. A GPU displays the images that have been created onto a screen for someone to see, such as the visuals in video games; if the GPU had not been invented, you wouldn't be able to see anything on screen. GPUs process the mathematical information they are given, and the better the GPU, the higher the quality and the more frames and polygons it can output.

API

API stands for application programming interface. APIs are used within computer game creation (although they have many other uses as well); an API is a set of routines, protocols and tools for building software applications. Without APIs we wouldn't have any form of connectivity, as APIs are what allow us to do daily things such as ordering packages, writing a Facebook post or just talking to friends through text. An API can be seen as a giant messenger: you give it one request and it comes back with the information you asked for. For example, a website that uses multiple services, like PCPartPicker, stores information from multiple sources where you can buy items, such as Amazon and eBay; as people interact with each source, the API makes sure the right information is sent back, like a messenger. Not all APIs are alike, though, as different ones have different uses, such as web-based (as just explained), computer-game-based and language-based. An API is accessed through source code.
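The "messenger" idea above can be sketched in a few lines of Python. This is a toy illustration only: the source names and prices are made up, and a real shopping API would involve network requests rather than a dictionary.

```python
# Toy "price API": one entry point that routes a request to the right
# data source and returns the answer, like a messenger. All data here
# is invented for illustration.

PRICES = {
    "amazon": {"gpu": 299, "cpu": 199},
    "ebay": {"gpu": 279, "cpu": 210},
}

def price_api(source, item):
    """Return the price of `item` at `source`, or None if unknown."""
    return PRICES.get(source, {}).get(item)
```

The caller never touches the underlying data directly; it only asks the API, which is the whole point of the interface.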




In computer games an API is used for things like programming and programming languages; if games didn't have an API then they wouldn't be able to run. All the art, animations and levels could be made, but there would be nothing there to connect them to one another, and the game would be broken and unplayable. Include the API, however, and actions and commands can start to be placed into the game, the different parts can be fitted together, and a fully playable game can be created. An example of an API is below.

Direct3D

Direct3D is a graphics API used to render three-dimensional graphics in applications where performance is important. Direct3D uses hardware acceleration if the graphics card supports it, which allows for faster rendering of either whole scenes or partial chunks. If the user requests a feature such as pixel shaders and the graphics card does not support it, the API cannot emulate that effect; it will still compute and render the polygons and textures of the 3D model, just at lower quality and with less efficient performance. The capabilities of this hardware include W-buffering, spatial anti-aliasing, texture blending, mipmapping, colour blending and a lot more.
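The fallback behaviour described above can be sketched generically: check what the hardware reports, then pick the best path it supports rather than failing. This is only a sketch of the idea; the feature and path names are made up and real Direct3D capability checks work through its own device interfaces.

```python
# Generic capability-fallback sketch (names invented for illustration):
# prefer the high-quality path, but degrade gracefully when the
# hardware does not support a feature such as pixel shaders.

def choose_render_path(supported_features):
    """Pick the best rendering path the hardware can handle."""
    if "pixel_shaders" in supported_features:
        return "shader_path"        # full-quality rendering
    return "fixed_function_path"    # lower quality, still renders
```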



Graphics pipeline

A graphics pipeline refers to the sequence of steps used to create a 2D raster representation of a 3D scene. Once a 3D model has been made, the graphics pipeline is what makes that model displayable on a computer monitor; without the pipeline the two cannot work together and an image could not be displayed. Originally, 3D computer graphics used fixed-purpose hardware to make rendering faster; as technology has progressed the hardware has evolved to become more general-purpose, allowing more efficiency and flexibility in graphics rendering. Any image, video or game you see on your monitor has gone through the graphics pipeline's steps, from being a model in some software to being an entertainment item for your eyes to enjoy.
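The pipeline can be pictured as a chain of stages, each one taking the output of the previous stage as its input. A minimal sketch, with a trivial placeholder stage standing in for the real steps described below:

```python
# A pipeline is just a sequence of stages applied in order: the output
# of one stage feeds the next. The stage used here is a placeholder,
# not a real renderer.

def run_pipeline(vertices, stages):
    """Push vertex data through each stage in turn."""
    data = vertices
    for stage in stages:
        data = stage(data)
    return data

# Example placeholder stage: uniformly scale every vertex.
scale_x2 = lambda vs: [(x * 2, y * 2, z * 2) for (x, y, z) in vs]
```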




The steps of graphics pipeline

3D geometric primitives

Scenes start off with, and are made from, simple 3D shapes such as cubes, cones and tori. The traditional primitive to use is the triangle, because its three points always lie on a single plane, which makes it well suited to rendering. Triangles allow complex shapes to be built up through polygonal positioning, and by using these shapes the user can create anything they want. Cubes and other 3D shapes are considered different primitives from triangles and line-segment primitives, as each is used by different software and can be used to create different complex items in its own way.
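One consequence of a triangle always lying on a single plane is that it has a single, well-defined surface normal, which later stages (like lighting) rely on. A small sketch, computed with the cross product of two edges:

```python
# A triangle primitive is just three vertices; the cross product of two
# of its edges gives the (unnormalised) face normal, which is well
# defined precisely because the three points share one plane.

def triangle_normal(a, b, c):
    """Return the unnormalised normal of triangle (a, b, c)."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])  # edge a->b
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])  # edge a->c
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)
```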

Modelling and transformation

The modelling stage is where objects are placed into the 3D world coordinate system. This allows objects to be manipulated from all angles and dimensions, meaning you can transform an object left, right, up and down. It can be used to model simple 3D shapes into complex ones, such as turning a cube into a fountain by extruding and manipulating the object to create curves. The graphics pipeline renders each object out into this 3D world coordinate system.
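The simplest transformation into world space is a translation: every vertex of the model gets the same offset. A minimal sketch:

```python
# Model-to-world translation: move every vertex of a model by the same
# offset so the object sits at its position in the world.

def translate(vertices, dx, dy, dz):
    """Offset each (x, y, z) vertex by (dx, dy, dz)."""
    return [(x + dx, y + dy, z + dz) for (x, y, z) in vertices]
```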

Camera transformation

This transforms the scene so that the 3D world coordinate system has its origin at the camera, meaning the player can now look in all directions using either a mouse or an analogue stick. This allows players to look around maps and see all objects from those angles.
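Putting the origin at the camera amounts to subtracting the camera's position from every world-space vertex (ignoring camera rotation, which a full view transform would also apply). A sketch of just that part:

```python
# World-to-camera translation: re-express coordinates relative to the
# camera's position, so the camera sits at the origin. Rotation is
# deliberately left out to keep the sketch short.

def to_camera_space(vertices, camera_pos):
    """Subtract the camera position from each vertex."""
    cx, cy, cz = camera_pos
    return [(x - cx, y - cy, z - cz) for (x, y, z) in vertices]
```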

Lighting

Lighting illuminates the 3D world; without it, everything in a game engine would be dark and invisible. If you have a texture on a block but no lighting, the object will appear black, because it needs some sort of light source to reflect its colours (just like real life). The effects of lighting and reflection are calculated by the graphics pipeline.
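A common way this calculation works for simple diffuse lighting is Lambert's cosine law: brightness depends on the angle between the surface normal and the light direction, clamped at zero so surfaces facing away from the light get no light at all. A minimal sketch:

```python
import math

# Lambertian diffuse term: cosine of the angle between the surface
# normal and the light direction, clamped at zero (a surface facing
# away from the light receives none).

def lambert(normal, light_dir):
    """Return diffuse brightness in [0, 1] for unit or non-unit vectors."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    llen = math.sqrt(lx * lx + ly * ly + lz * lz)
    cos_angle = (nx * lx + ny * ly + nz * lz) / (nlen * llen)
    return max(0.0, cos_angle)
```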

Projection transformation

This is what turns the 3D objects into 2D, and it all depends on the view of the camera: as the player walks around, the objects are re-projected from the new viewpoint. In this step objects are also made smaller with distance, since objects further in the background have to appear smaller. This is achieved by dividing the x and y coordinates of each vertex of a primitive by its z coordinate. This only affects the camera's view, though; the objects stay the same actual size if you go back into the modelling software they were made in.
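The divide-by-z step described above is small enough to show directly. A sketch (the `focal` scale factor is an assumption standing in for the camera's field of view):

```python
# Perspective projection: dividing x and y by z makes distant points
# (large z) land nearer the centre of the image, which is what makes
# far-away objects look smaller.

def project(vertex, focal=1.0):
    """Map a camera-space (x, y, z) vertex to a 2D image point."""
    x, y, z = vertex
    return (focal * x / z, focal * y / z)
```

The same object twice as far away projects to coordinates half the size, which is exactly the shrinking effect described above.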

Clipping 

Clipping gets rid of items or objects that aren't within the region of interest; anything outside it will not be rendered into the final world. Using a mathematical rendering algorithm, pixels are only drawn for the parts of the scene model that fall inside the clip region.
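At its simplest, clipping against a rectangular region is a bounds test: keep what falls inside, discard the rest. A sketch for 2D points (real clippers also split primitives that straddle the boundary, which is omitted here):

```python
# Rectangular clipping sketch: keep only the points inside the region
# of interest; everything outside is discarded and never rendered.

def clip(points, xmin, xmax, ymin, ymax):
    """Return the points inside the rectangle [xmin,xmax] x [ymin,ymax]."""
    return [(x, y) for (x, y) in points
            if xmin <= x <= xmax and ymin <= y <= ymax]
```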

Scan conversion or Rasterization

Rasterization is the task of taking a vector image and converting it into a raster image for output on a video display. It is also referred to as the raster operations pipeline, and it is hardware found within a GPU. Once conversion is complete, operations are carried out on each single pixel; although this step sounds quite easy, it is in fact very hard and involves multiple steps, which gives it the nickname of the pixel pipeline.
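The core of turning a triangle into pixels can be sketched with an "edge function" test: a pixel centre is covered when it lies on the inside of all three edges. This is a deliberately slow, brute-force sketch of the idea, nothing like the parallel hardware that really does this:

```python
# Brute-force triangle rasterisation: test every pixel centre against
# the triangle's three edges. Real GPU hardware does this massively in
# parallel and far more cleverly.

def edge(a, b, p):
    """Signed area test: positive when p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterise(tri, width, height):
    """Return the (x, y) pixels whose centres a CCW triangle covers."""
    a, b, c = tri
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel centre
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                covered.append((x, y))
    return covered
```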

Texturing, fragment shading

This is the final stage, where each pixel is assigned a colour based on what has been placed on the image as a whole; each fragment will be a different shade to match the colours of the texture that has been placed onto the object. The colours assigned come from a texture in memory and/or from a shader program.
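Looking a colour up "from a texture in memory" can be sketched as nearest-neighbour sampling: UV coordinates in the range [0, 1] are mapped to the closest texel. The tiny texture below is invented for illustration:

```python
# Nearest-neighbour texture sampling: map (u, v) in [0, 1] to the
# closest texel in a small texture stored as rows of colour names.

def sample(texture, u, v):
    """Return the texel nearest to UV coordinates (u, v)."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]
```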

https://www.youtube.com/watch?v=s7wmiS2mSXY
http://www.adobe.com/devnet/flashplayer/articles/vertex-fragment-shaders.html
