Monday, June 27, 2016

Week Four

Describe a highlight of the history of 3D Graphics with visual examples.



3D scanning is one of the newest methods for modeling in high detail. It allows us to scan an object from the real world into a digital 3D model, which can give great detail and realism, for example scanning a real person instead of modeling them from reference images. This process is faster, but it has its drawbacks: one is that the scanned model can end up with such a high poly count that it limits the use of the scan in certain areas like games. It also isn't perfect; after the scan you will most likely still have to clean up the model and remove pieces the scanner picked up by accident.
However, in the end, this method can create awesome results; just look at the making of the Cyberpunk 2077 trailer, which used 3D scanning.


Find and link to the blog of a practitioner who has contributed to a piece of work that inspires you. Show examples of their work and describe the details of their role on the project.


Jefferson B. Estrabinio is a Senior 3D Character Artist who has worked on many titles, most recently The Division, for which he created the 3D character models seen in the images here. Jefferson uses sculpting as his main form of modeling, which gives him a lot of control over the detail of the characters he creates. In addition to that, Jefferson also makes other models in his spare time, like this one of Goku.

Wednesday, June 15, 2016

Week Three

Lighting


Lighting is very important for creating the scene as well as making a 3D object look more realistic. However, before going heavy into lighting, don't forget to get your shaders done for the model; until then, a basic spotlight is fine to just light up the scene. There are many different types of light in a 3D engine: a point light casts light outward in every direction from a single point in 3D space, like a light bulb or candle, while a directional light casts parallel light from a single direction, like the sun or moon, and there are more, such as spotlights, area lights, volume lights and ambient lights (see the sketch after the list below). To get the best results from your lights, there are 24 things you should keep in mind:
  1. Observe how light works in the real world
  2. Learn to read photographs
  3. Get your shaders ready first
  4. Read photography magazines
  5. Don’t be a slave to photographic realism
  6. Take the time to set up basic lighting
  7. Start with the light that has the smallest effect first
  8. Strip back and get your primary light right
  9. Keep experimenting
  10. Use contrast
  11. Vary your colours
  12. Build up a composition
  13. Setting up your gamma
  14. Don’t be afraid of dark areas
  15. What you can’t see is as important as what you can
  16. Remember off-camera lights
  17. Use volumetric lighting
  18. Introduce complexity to shadows
  19. Break up the light
  20. Using luminous polygons
  21. Add dust particles floating in the air
  22. Create colour effects
  23. Adding fog
  24. Post processing
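
To make the difference between a point light and a directional light (mentioned above the list) concrete, here's a minimal sketch in Python. The positions, colours and values are made up for illustration and are not any particular engine's API:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def point_light_dir(light_position, surface_point):
    """A point light shines outward in every direction, so the light
    direction at a surface point depends on where that point is."""
    return normalize(tuple(l - p for l, p in zip(light_position, surface_point)))

def directional_light_dir(direction, surface_point):
    """A directional light (sun/moon) has no position: every surface
    point receives light from the same direction (surface_point is
    deliberately unused)."""
    return normalize(tuple(-d for d in direction))

bulb = (0.0, 2.0, 0.0)   # a light bulb hanging above the origin
sun = (0.0, -1.0, 0.0)   # sunlight pointing straight down

for point in [(1.0, 0.0, 0.0), (-3.0, 0.0, 2.0)]:
    print("point light  ->", point_light_dir(bulb, point))
    print("directional  ->", directional_light_dir(sun, point))
```

Running this shows the point light's direction changing from one surface point to the next, while the directional light's direction stays the same everywhere, which is exactly why it works for the sun or moon.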


Rendering

Rendering is the process in which the computer analyzes all the input variables, such as animation, lighting, and colour, in order to output a final image. In 3D applications like 3ds Max, it's the models, textures, lighting and animated elements in the scene that are rendered into an image. Every frame has to go through the same process when rendering, which is why this is the most time-consuming part of production. Each frame's render time depends on what is in the scene, so it can take anywhere from minutes to hours; Zootopia, for example, took up to 100 hours to render a single frame, and there are 24 frames per second or more, depending on the settings used in the production.
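
Just to put that 100-hours-per-frame figure into perspective, here's a quick back-of-the-envelope calculation in Python. The shot length is a made-up example; only the worst-case hours-per-frame figure comes from the paragraph above:

```python
HOURS_PER_FRAME = 100      # worst-case figure quoted for Zootopia
FRAMES_PER_SECOND = 24     # standard film frame rate
SHOT_LENGTH_SECONDS = 10   # hypothetical 10-second shot

total_frames = FRAMES_PER_SECOND * SHOT_LENGTH_SECONDS
total_hours = total_frames * HOURS_PER_FRAME

print(f"{total_frames} frames x {HOURS_PER_FRAME} h/frame = {total_hours} hours")
print(f"That is roughly {total_hours / 24 / 365:.1f} years on a single machine,")
print("which is why studios split frames across a render farm.")
```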


Compositing

Compositing is basically like taking different elements, images or footage and making them work together. In movies or 3D animation, compositing is the final stage in the process, where you add in visual effects to complete the scene. There are many examples of compositing, like this video here:

Compositing can be done in different ways. With photos it's an art of marrying pixels, while in 3D it is usually node-based: each element, image or piece of footage is placed in its own node, which you can then connect and edit to create your final output.
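
At the pixel level, layering one element over another usually comes down to the standard "over" operator. Here's a minimal sketch in Python using premultiplied colours; the pixel values below are made up:

```python
def over(fg, bg):
    """Composite a foreground pixel over a background pixel.

    Both pixels are (r, g, b, a) tuples with premultiplied colour,
    values in the 0.0-1.0 range.
    """
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    return (
        fr + br * (1.0 - fa),
        fg_ + bg_ * (1.0 - fa),
        fb + bb * (1.0 - fa),
        fa + ba * (1.0 - fa),
    )

# Example: a half-transparent red element over a solid blue background.
foreground = (0.5, 0.0, 0.0, 0.5)    # premultiplied red at 50% alpha
background = (0.0, 0.0, 1.0, 1.0)    # opaque blue
print(over(foreground, background))  # (0.5, 0.0, 0.5, 1.0)
```

A node in a compositing package is essentially a chain of operations like this, applied to every pixel of the inputs plugged into it.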



Wednesday, June 8, 2016

Week Two

UV Mapping



UV mapping is like peeling the skin off a model and laying it flat on a 2D canvas, also known as the UV canvas. The UV canvas is a 2D plane where U corresponds to X and V corresponds to Y, and it is used to project textures onto a model. The reason for UV mapping a model before texturing is so that the texture isn't simply projected flat onto the model (planar mapping), which creates bad-looking, stretched textures unless the object really is a 2D plane.
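
To show what that naive planar mapping looks like in code, here's a minimal sketch that just drops Z and squashes X and Y into the 0-1 UV range. A real unwrap would cut seams and flatten the surface instead; the cube vertices are made-up values:

```python
def planar_uvs(vertices):
    """Naive planar projection: drop Z and squash X/Y into the 0-1 UV square."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [
        ((x - min_x) / (max_x - min_x), (y - min_y) / (max_y - min_y))
        for x, y, _z in vertices
    ]

# Points on the front and back of a cube end up with the same UVs here,
# which is why planar mapping looks wrong on anything that isn't flat.
cube_points = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(planar_uvs(cube_points))
```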


Texturing and Shaders


Texturing is used to make the surfaces or faces of the model you have created resemble something from real life, and to do that you use texture mapping. It's like wrapping a present in decorative paper: first you cut the paper to fit the object, which is the UV mapping, and then you stick it on. Texturing a model works the same way: first you need your UV map, and from there you can paint onto it in Photoshop or other software to create your own style or something resembling a real-world object.
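
Under the hood, "sticking the paper on" just means looking up a colour in the 2D image for each point on the surface, using its UV coordinates. A minimal sketch, assuming a tiny made-up texture stored as a list of rows:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour texture lookup.

    texture : list of rows, each row a list of (r, g, b) tuples
    u, v    : UV coordinates in the 0.0-1.0 range
    """
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A hypothetical 2x2 checker texture: black and white squares.
checker = [
    [(0, 0, 0), (255, 255, 255)],
    [(255, 255, 255), (0, 0, 0)],
]
print(sample_texture(checker, 0.25, 0.25))  # top-left square  -> black
print(sample_texture(checker, 0.75, 0.25))  # top-right square -> white
```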

Shaders are used alongside textures to give your model extra detail or a particular look. A shader describes how light reflects off the model's surface, how it is absorbed, whether the surface is translucent, and how maps such as bump maps affect it.

Using these shaders you are able to add more detail to an object without adding extra vertices, edges or faces; the image above, for example, uses a bump map to give the illusion of depth on a flat surface. Other maps are used for different results, like transparency maps, normal maps, specular maps and more.
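
To give a feel for what a shader actually computes, here's a minimal sketch of simple Lambert (diffuse) shading in Python. The surface colour, normal and light direction are made-up values, and this is only one small piece of what a full shader does:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(surface_colour, normal, light_dir, light_colour=(1.0, 1.0, 1.0)):
    """Simple diffuse shading: brightness depends on the angle between
    the surface normal and the direction towards the light."""
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))  # clamped dot product
    return tuple(s * lc * intensity for s, lc in zip(surface_colour, light_colour))

# A bump or normal map would perturb `normal` per pixel, which is how flat
# geometry can fake small surface detail without extra vertices.
print(lambert((0.8, 0.2, 0.2), normal=(0, 0, 1), light_dir=(0, 0.5, 1)))
```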

http://blog.digitaltutors.com/cover-bases-common-3d-texturing-terminology/





Rigging


Before a model is handed over to a team of animators to animate the object or character, it first needs to be bound to a system of joints, bones and control handles so that the animators can move the 3D mesh into the positions they need. A rig is basically a digital version of our own skeleton, made up of joints and bones that move certain sections of the model.

In addition to the joints and bones, you also need to paint the weights, which define how much each joint or bone moves a given part of the model. This is important so you don't have, say, a leg bone also deforming the hip or torso, which would make it harder to pose the model correctly for animating.
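
A minimal sketch of the idea behind those weights (linear blend skinning, simplified to translations only, with made-up numbers): each vertex ends up at a weighted average of where each influencing bone would move it.

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Move a vertex by a weighted blend of bone movements.

    vertex       : (x, y, z) rest position
    bone_offsets : list of (dx, dy, dz), how far each bone has moved
    weights      : list of floats, one per bone, summing to 1.0
    """
    x, y, z = vertex
    for (dx, dy, dz), w in zip(bone_offsets, weights):
        x += dx * w
        y += dy * w
        z += dz * w
    return (x, y, z)

# A knee-area vertex influenced 70% by the thigh bone and 30% by the shin bone.
rest_position = (0.0, 5.0, 0.0)
offsets = [(0.0, 0.0, 1.0),    # thigh bone swings forward
           (0.0, -0.5, 2.0)]   # shin bone swings further
print(skin_vertex(rest_position, offsets, weights=[0.7, 0.3]))
# -> (0.0, 4.85, 1.3)
```

Painting weights in a 3D package is essentially editing those per-vertex numbers with a brush instead of typing them in.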



Animation



Animation has been used since long before 3D software, movies or games. In the days when Disney was starting up, animation was done by drawing out each frame by hand and then going through the editing process to make those frames play. Back then, a 2-minute animation at 24 frames per second meant 2,880 drawings had to be done; with Disney's large staff, it didn't take as long as it might sound.






Compare that to the 12-minute animation Gertie the Dinosaur, created by Winsor McCay, which took him nearly 2 years and over 25,000 drawings, all done by hand and by himself.
However, these days animation has grown and become a lot easier thanks to new software like Flash and 3ds Max. Animating in something like 3ds Max uses the same principle as old-style animation, keyframes, just done a lot more easily and quickly, with rigs that allow you to move the model into the key poses faster than drawing them. On top of that, motion capture technology allows us to record our own body movements and have the rig follow along.





Thursday, June 2, 2016

Week One

Pre-production



In digital industries like film, games and animation, there are three stages of production. The first is pre-production, where the idea for the product or animation is set up in detail to create a road map for the production process. This includes the idea, story, storyboard, animatic and design; once these steps are completed, the next stage, production, begins.

In the production stage, the actual creation of the animation begins, and it is usually the longest stage, since everything planned in pre-production is now being made: layout, R&D, models, textures, rigging/setup, animation, VFX, lighting, and rendering. As production continues, things can change to fit the story or the client's ideas, so if an animation isn't working in a certain scene the team may go back to the layout and make changes. This happens with a lot of the steps, which is why the production stage takes the longest to complete.

Once all of this is done, the final stage, post-production, begins. This is mostly about editing everything from the render: compositing, 2D VFX/motion graphics, colour correction and, lastly, the final output.

http://www.slideshare.net/Veetildigital/pre-productionpost-process-in-3d-animation



3D Modeling






Modeling is the process of taking an object like a box or the car above and molding it into a 3D mesh. Modeling starts with a single point placed at a location in X, Y and Z; these points are called vertices. Adding a second vertex, you can connect the two to create a line, called an edge, and adding a third vertex lets you create a flat surface called a face or polygon (see the sketch at the end of this section). Using these, you are then able to create objects like the car above. However, there are different ways you can go about modeling an object:

  1. Spline or Patch modeling
  2. Box modeling
  3. Poly modeling/edge extrusion

Using these different methods makes it easier to create a certain kind of object, like a building, barrel or character for an animation.
As well as modeling, 3D sculpting is used to create highly detailed models that would normally be too difficult to make using the modeling methods above. Using sculpting software like ZBrush, you can create high-detail 3D models faster and convert them into a 3D mesh, which can then be rigged and animated.
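
To make the vertex/edge/face idea from the start of this section concrete, here's a minimal sketch of how a mesh might be stored as plain data (the values describe a simple made-up tetrahedron, not any particular file format):

```python
# A tiny mesh: vertices are (x, y, z) points, faces index into the vertex list.
# Edges can be derived from the faces, so many formats don't store them at all.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (0.0, 1.0, 0.0),  # vertex 2
    (0.0, 0.0, 1.0),  # vertex 3
]
faces = [
    (0, 1, 2),  # a triangle (polygon) made from three vertices
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

# Derive the unique edges from the faces.
edges = set()
for a, b, c in faces:
    for v0, v1 in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted((v0, v1))))

print(f"{len(vertices)} vertices, {len(edges)} edges, {len(faces)} faces")
# -> 4 vertices, 6 edges, 4 faces (a tetrahedron)
```

Every modeling method listed above, whether box modeling or edge extrusion, is ultimately just a different workflow for building up lists like these.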

http://www.animationarena.com/introduction-to-3d-modeling.html