Categories: 3D Computer Animation Fundamentals, Animation

Week 7: Walk Cycle Blocking

Once that was done, we moved on to learning how to block out a walk cycle using the same character, ‘Walker.’ We were introduced to the main key poses needed for a walk cycle:

  • Contact Pose
  • Down Pose
  • Passing Pose
  • Up Pose

These poses form the foundation of the walk, and we were told to create a simple 24-frame walk cycle from left to right and also from the front view before extending it to 48 frames. For reference, I also used “The Animator’s Survival Kit” by Richard Williams, which helped me understand the movement better later on.

I started by referencing the Walker into my scene. For this assignment, we also played with the lighting, adding two directional lights to make the scene look better. I placed the Contact Poses at frames 0, 12, and 24, then added the Down, Passing, and Up Poses, spacing them evenly with 3 frames between each. This structure created the rhythm of the walk cycle and made it easy to replicate the initial 24 frames and change the values to create the opposite step. At that point I had a basic 48-frame walk cycle.
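The same timing can also be laid down with Maya’s Python API. This is only a minimal sketch: the control names are placeholders I made up, not the Walker rig’s actual controls, and in practice you would pose the rig before each keying call.

    import maya.cmds as cmds

    # Placeholder names; the Walker rig's actual controls differ.
    controls = ['hips_ctrl', 'foot_L_ctrl', 'foot_R_ctrl']

    # Contact poses bracket the cycle; breakdowns sit 3 frames apart.
    pose_frames = {
        'contact': [0, 12, 24],
        'down':    [3, 15],
        'passing': [6, 18],
        'up':      [9, 21],
    }

    for pose, frames in pose_frames.items():
        for frame in frames:
            # In practice, set the pose values first, then key them.
            cmds.setKeyframe(controls, time=frame)

    # Duplicate frames 0-24 onto 24-48, then offset the side values
    # by hand to create the opposite step.
    cmds.copyKey(controls, time=(0, 24))
    cmds.pasteKey(controls, time=(24, 24), option='merge')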

After laying out the basic poses, I went further and added some character to the animation. Working from the front view, I applied the principle of weight shift to make the walk feel more realistic, translating the hips along the X axis over whichever leg carries the weight. I also incorporated a slight side-to-side rotation with each weight shift to enhance the fluidity of the movement. Additionally, I brought the feet a little closer together and turned them slightly outward to make the walk appear more natural. These subtle adjustments helped the cycle feel more grounded.

Categories: 3D Computer Animation Fundamentals, Animation

Week 6: Weight Shift Spline

This week, I worked on refining my weight shift and converting it to spline, focusing on the feedback from the previous session’s review.

I started by revisiting the blockout, where George had provided feedback on the weight shift, certain leg positions, and foot rotations. After making the necessary changes based on his suggestions, I converted the animation to spline and focused on editing the graphs to create smoother transitions. The goal was to eliminate any sharp movements and ensure that the transitions between key poses felt natural and the weight shift felt realistic.
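In script form, the conversion is essentially a tangent-type change. A small sketch using Maya’s Python API, assuming the weight-shift controls are selected:

    import maya.cmds as cmds

    controls = cmds.ls(selection=True)  # the weight-shift controls

    # Convert the stepped blocking keys to spline interpolation.
    cmds.keyTangent(controls, inTangentType='spline',
                    outTangentType='spline')

    # Flatten the tangents on an extreme pose (frame 0 here) to ease
    # in and out, removing the sharp hits between key poses.
    cmds.keyTangent(controls, time=(0, 0),
                    inTangentType='flat', outTangentType='flat')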

Looking ahead to next week, I’ll apply the same process to the walk cycle, adding more detail and editing the graphs as needed when I convert it to spline.

Categories: 3D Computer Animation Fundamentals, Animation

Week 5: Weight Shift Blocking

We continued working in Maya, setting up our projects and referencing the “Walker” model into our files. The first step was to create a selection set of the main controls we needed for keyframing and add it to the shelf in the Animation tab.
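The same set can be built with a couple of commands in Maya’s Script Editor. A quick sketch (the set name is my own placeholder):

    import maya.cmds as cmds

    # With the main controls selected, group them into a named set.
    main_ctrls = cmds.ls(selection=True)
    cmds.sets(main_ctrls, name='walker_keying_set')

    # Re-selecting everything for keyframing is then a single call
    # (or one click once the command is saved to the shelf):
    cmds.select('walker_keying_set')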

Our main task this week was to work only in the blocking phase. We started by planning out our poses and shot our own reference videos for accuracy. I chose a simple action where the character steps from the left to the right. For the first frame, I posed the Walker leaning on its left hip, which helped me understand how the weight shifts left before moving right and then shifts again during the step. I used this principle to animate the weight shift.

To add realism, we adjusted the foot brake value to 100 and tweaked the foot roll and heel roll settings to perfect the Walker’s pose as needed.
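These tweaks are plain attribute edits on the foot controls. A sketch, with the caveat that the control and attribute names here are assumptions about how the Walker rig is set up:

    import maya.cmds as cmds

    # Attribute and control names are assumptions about the rig.
    cmds.setAttr('foot_L_ctrl.footBrake', 100)
    cmds.setAttr('foot_L_ctrl.footRoll', 20)
    cmds.setAttr('foot_L_ctrl.heelRoll', 10)
    cmds.setKeyframe('foot_L_ctrl')  # key the tweak so the pose holds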

Initially, I had some challenges with joint positioning and determining how much to bend the Walker in the anticipation pose. However, after reviewing my reference video and getting feedback from George on my blocking, I was able to improve the pose. We were also instructed to convert our blocking to splines the following week. 

We were also asked to create three poses using references from either our sketches or the internet. For this, we used the ‘Ultimate Bony’ rigged character. I chose to make an action pose, a yoga pose, and a dance pose for variety. This exercise taught me how to position different body parts, such as the shoulder, hip, and knee joints, to make the poses look natural.

It also helped me understand how each body part affects the others when creating a pose. Adjusting joint rotations and considering the angles helped me make each pose feel more dynamic and balanced.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 5: UE Physics

Physics in Animation

  • Explored how physics adds realism to animations, especially in scenes where objects are falling or impacted.
  • Learned that applying physics makes objects react naturally to forces, adding immersion.

Exploring Unreal Engine Modes

  • We usually work in Selection Mode, but this week we explored additional modes:
    • Landscape Mode
    • Foliage Mode
    • Fracture Mode – our main focus, used to apply fractures and destruction effects to meshes.

Modeling and Fracturing Meshes

  1. Started with Modeling Mode to create and place basic meshes in the scene.
  2. Switched to Fracture Mode to break down meshes in various ways:
    • Uniform Fracture – breaks down a mesh evenly.
    • Cluster Fracture – creates smaller, clustered fragments, ideal for explosive effects.
    • Radial Fracture – creates a point-of-impact effect, like a gunshot.
    • Plane Fracture – simulates slicing, like a sword cutting through.
    • Custom Fracture – allows for user-defined break patterns.

Fracturing a Cube Step-by-Step

  1. Selected a cube mesh to fracture.
  2. Generated a fracture asset:
    • Went to Generate > Save in a folder > Generate Fracture.
  3. Enabled physics for realistic impact:
    • In the Details Panel, enabled Simulate Physics so the fractured pieces respond to gravity and collisions.
  4. Applied Uniform Fracture mode:
    • Adjusted the fracture levels to control the number of broken parts.
    • Observed how one cube fractured into 20 separate pieces.
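Step 3 can also be done through Unreal’s Python editor scripting rather than the Details Panel. A minimal sketch, assuming the cube is selected in the level:

    import unreal

    # Enable Simulate Physics on the selected actors so they respond
    # to gravity and collisions (the Details Panel checkbox).
    for actor in unreal.EditorLevelLibrary.get_selected_level_actors():
        component = actor.get_component_by_class(unreal.PrimitiveComponent)
        if component:
            component.set_simulate_physics(True)
            component.set_enable_gravity(True)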

Advanced Fracture and Impact Control

  • Further fractured pre-existing fractured parts to create additional detail.
  • Adjusted impact strength:
    • Lowered the Damage Threshold in the Details Panel to increase impact sensitivity.
  • Turned off bone colours by unchecking Show Bone Colours in the Details Panel.

Nanite for Performance Optimisation

  • Enabled Nanite to keep heavy physics scenes performant:
    • Located the fractured collection in the Content Browser and enabled Nanite on it, reducing the rendering overhead of the many fragments and leaving more headroom for physics processing.
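For regular static meshes, the same toggle is scriptable. A sketch using the Python API (the asset path is a placeholder; geometry collections expose a similar Nanite flag in their details):

    import unreal

    mesh = unreal.EditorAssetLibrary.load_asset('/Game/Meshes/SM_Wall')

    # Flip the Nanite flag and write it back; the mesh rebuilds on set.
    nanite = mesh.get_editor_property('nanite_settings')
    nanite.enabled = True
    mesh.set_editor_property('nanite_settings', nanite)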

Adding Material Properties to Enhance Physics

  • Applied bounce and material properties to objects:
    • In Details Panel > Collision > Physical Material Override, adjusted friction, density, and restitution (how strongly objects bounce).

Scene Recording Setup

  • Enabled Engine Settings to access built-in blueprints:
    • Activated by ticking Show Engine Content in Content Browser Settings.
    • Located Anchor and FS Master Field in the content browser, then copied them to a custom physics folder.
  • Set up the scene for recording:
    • Dragged Anchor and Master Field into the scene.
    • Added an Initialization Field and created a Chaos Solver to simulate physics interactions.
    • Added objects to the Chaos Cache Manager for precise recording.

Using the Sequencer to Capture Scenes

  1. Added objects to the Sequencer and set keyframes for start and end points.
  2. Worked with Constraint Actors to apply physics constraints:
    • Added two cubes to test the constraint as an anchor point.
    • Observed how one fractured cube reacted to constraints, swinging around the anchor point.

Alternative Recording with Take Recorder

  • Enabled Take Recorder in Plugins and added actors for recording:
    • Started simulation with Alt + S to capture real-time physics effects.
    • Viewed recorded sequences in the Content Browser after recording.

Creating a Bouncing Ball

  1. Created a sphere and enabled Simulate Physics in Details Panel.
  2. Applied a custom physical material for bounce:
    • Created a blueprint class called BP_Ball and added a static mesh.
    • Created a physical material with customised friction and density settings.
  3. Applied the physical material to the sphere to achieve the desired bounce.
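The physical material itself can also be created in Python. A sketch following the same steps; the asset name and path are placeholders, and restitution is the property that actually drives the bounce:

    import unreal

    # Create the Physical Material asset (name/path are placeholders).
    tools = unreal.AssetToolsHelpers.get_asset_tools()
    phys_mat = tools.create_asset(
        asset_name='PM_Bouncy',
        package_path='/Game/Physics',
        asset_class=unreal.PhysicalMaterial,
        factory=unreal.PhysicalMaterialFactoryNew())

    # The same properties exposed under Physical Material Override.
    phys_mat.set_editor_property('friction', 0.3)
    phys_mat.set_editor_property('density', 1.0)
    phys_mat.set_editor_property('restitution', 0.9)  # high bounce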

This week’s exploration provided a solid understanding of fractures, physics settings, and recording techniques in Unreal Engine. These tools allow us to bring more realism and dynamic effects to our animations.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 4: Sequencer and Materials

In Week 4, I continued to use the Sequencer in Unreal Engine and started working on creating Materials.

I learned how to set up Shots in the Sequencer, which is key for staying organized when using several cameras. To create a Shot, you first add a Subsequence track by clicking the Add button in the Sequencer menu. This track helps organize and manage the different cameras used for the various shots.

I learned how to create Shots in Unreal Engine’s Sequencer:

  • To create Shots, generate a Level Sequence from the Cinematics menu.
  • Ensure proper naming for better organisation.
  • Add these sequences to the Subsequence Track, where you can adjust their lengths.
  • To assign cameras to Shots:
    • Select the desired camera.
    • Press Ctrl + X to cut the camera.
    • Double-click on the chosen Shot to paste the camera into it.
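The same shot setup can be scripted against the Sequencer Python API. A sketch only: the asset paths and frame range are placeholders, and the track-creation method names vary slightly between engine versions.

    import unreal

    master = unreal.EditorAssetLibrary.load_asset('/Game/Cinematics/Master_Seq')
    shot = unreal.EditorAssetLibrary.load_asset('/Game/Cinematics/Shot_0010')

    # Add a Subsequence track to the master sequence, place the shot
    # on it, then set how many frames it occupies on the timeline.
    sub_track = master.add_master_track(unreal.MovieSceneSubTrack)
    section = sub_track.add_section()
    section.set_sequence(shot)
    section.set_range(0, 120)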

This week, I also learned how to create Materials in Unreal Engine. I started by downloading a Material from Quixel Bridge and importing it into Unreal Engine. Then, I created a new Material and used the Material’s node editor to add the following maps from the imported Material:

  • R Channel: Ambient Occlusion
  • G Channel: Roughness
  • B Channel: Displacement

I connected the maps as follows:

  • The RGB output of the Colour Map to the Base Colour input
  • The G channel of the packed map to the Roughness input
  • The RGB output of the Normal Map to the Normal input of the new Material

Afterward, I started experimenting with the tiling of the Material. To do this, I incorporated several nodes, including Texture Coordinate, Multiply, and Add. By adding these nodes, I was able to adjust the tiling according to different requirements. I played around with different values to see how each adjustment impacted the appearance of the Material. This exploration not only helped me understand how tiling works but also allowed me to visualise a range of outcomes, giving me greater insight into the creative possibilities within materials in UE.
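MaterialEditingLibrary exposes this node wiring to Python as well. A sketch of the tiling chain for the colour map; the asset paths and the 4x tiling value are placeholders:

    import unreal

    mel = unreal.MaterialEditingLibrary
    material = unreal.EditorAssetLibrary.load_asset('/Game/Materials/M_Ground')
    colour_tex = unreal.EditorAssetLibrary.load_asset('/Game/Megascans/T_Ground_D')

    # TexCoord * tiling factor drives the UVs of the texture sample.
    texcoord = mel.create_material_expression(
        material, unreal.MaterialExpressionTextureCoordinate, -600, 0)
    multiply = mel.create_material_expression(
        material, unreal.MaterialExpressionMultiply, -400, 0)
    mel.connect_material_expressions(texcoord, '', multiply, 'A')
    multiply.set_editor_property('const_b', 4.0)  # tile 4x

    sample = mel.create_material_expression(
        material, unreal.MaterialExpressionTextureSample, -200, 0)
    sample.set_editor_property('texture', colour_tex)
    mel.connect_material_expressions(multiply, '', sample, 'UVs')
    mel.connect_material_property(sample, 'RGB',
                                  unreal.MaterialProperty.MP_BASE_COLOR)

    mel.recompile_material(material)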

Once I created a Material, I was guided to generate a Material Instance. This is done by right-clicking on the Material I created and selecting the Material Instance option from the menu. The primary distinction between a Master Material and a Material Instance is that the Material Instance inherits all the properties from the Master Material. 

This inheritance enables real-time updates and adjustments, allowing for more flexibility and efficiency in the material design process. By using Material Instances, I can tweak parameters without altering the original Master Material, making it easier to experiment with different looks while maintaining a consistent base. In conclusion, this week was very informative as I enhanced my skills in material creation and manipulation in UE.

Categories: 3D Computer Animation Fundamentals, Animation

Week 4: Ball With Tail Spline

Refining the Ball with Tail Blocking

This week, we continued with our work on the Ball with Tail animation, picking up from last week’s progress by focusing on converting our Block Out animations into Spline animations. We kicked off with a critique session, where we received initial feedback on our Block Out animations, helping me pinpoint areas for improvement.

Following the critique, we resumed our work and learned how to transform Block Out animations into Spline. After the conversion, we utilised the Graph Editor to refine the animations and correct the ball’s trajectory using the Motion Trail feature. This allowed us to make subtle adjustments to the ball’s path, enhancing the realism of the jump and making it look smoother than the blocking. I also learned to fine-tune the Translate Y graph in the Graph Editor, incorporating ease-in and ease-out effects to give the jump a more natural feel.

In terms of the tail animation, I made slight modifications in the Graph Editor to smooth out its movement, and I cleaned up the rotations so the tail follows the right curvature while in motion. We also learned about managing keyframes, focusing on how to delete any extra keyframes at the end of the animation process. This keeps the Graph Editor neat and easy to use. It’s important to remove these extra keyframes only after completing the animation so that nothing changes while we’re still working.
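The cleanup itself is a one-liner once the animation is locked. A sketch with a placeholder control name, assuming a 48-frame animation:

    import maya.cmds as cmds

    ctrl = 'ball_ctrl'  # placeholder

    # Delete stray keys after the final pose, keeping the Graph
    # Editor tidy. Only run this once the animation is finished.
    cmds.cutKey(ctrl, time=(49, 100000), clear=True)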

This week, I adhered to the planning I developed last week and successfully converted my Block Out animation into Spline. By leveraging the Graph Editor, I was able to add ease-in and ease-out effects, while the Motion Trail helped me enhance the ball’s bounce distance and trajectory. I made several additional adjustments to ensure that the animation of the ball with the tail appeared more fluid and realistic. This is my version of Ball with Tail:

Categories: 3D Computer Animation Fundamentals, Animation

Week 3: Anticipation & Ball with Tail

Planning & Animating a Ball with Tail in Maya

In week 3, we focused on Anticipation, a key principle in animation. I learned that anticipation serves as a mechanical buildup for force, which is essential for understanding that all movement is generated by forces—either external or internal. Anticipation effectively builds internal force to create dynamic motion.

We were advised to master foundational rules before deviating from them, which is crucial at this early stage of our animation journey. For the ball with tail animation, we observed videos of squirrels to understand how their tails react during movement and jumping. This helped us grasp the natural curve of the tail following the squirrel’s direction.

We were then taught the difference between Block Out and Spline:

Block Out:

Block Out is a foundational technique in animation that involves creating a rough version of the animation by establishing key positions (keyframes) for the main elements of the scene. The primary goals of the Block Out phase are to define the timing, spacing, and overall movement of the characters or objects without worrying about fine details.

Spline:

Once the Block Out phase is complete, the next step is to convert the rough animation into Spline. Spline animation refines the keyframes by smoothing out the motion curves, resulting in more fluid and natural movement. This allows for fine-tuning of acceleration and deceleration (easing), helping to create more realistic movements that mimic how objects and characters behave in the real world.
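In Maya, the two phases map directly onto key tangent types. A minimal sketch (the control name is a placeholder):

    import maya.cmds as cmds

    ctrl = 'ball_ctrl'  # placeholder

    # Block Out: stepped tangents hold each pose until the next key.
    cmds.keyTangent(ctrl, outTangentType='step')

    # Spline pass: the same keys, now smoothly interpolated.
    cmds.keyTangent(ctrl, inTangentType='spline', outTangentType='spline')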

We learned that it’s generally more effective to push concepts like anticipation and squash & stretch too far initially and then refine them, rather than hesitating and making small adjustments. Anticipation is crucial for illustrating the strength and force behind movements, while squash and stretch are vital for enhancing the physical believability of actions.

For this task, I began by sketching out my initial ideas in 2D before moving on to animate in Maya based on that reference. Using the Block Out method helped me establish the starting positions necessary to get the ball moving. Although the animation was a bit choppy at first, I added extra keyframes to achieve a smoother flow.

Once the ball’s motion was established, I turned my attention to the tail. At the first keyframe, I positioned the tail as I envisioned it. As I continued, I rotated the tail into an ‘S’ curve, inspired by the natural movement I observed in the reference video. By setting keyframes and blocking the tail’s motion with each jump, I aimed to create a realistic effect.

After setting all the keyframes, the animation looked quite good as a preliminary step toward the final version. I’m now prepared for the next week, where we’ll learn how to convert this animation into Spline for a smoother and more refined appearance.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 3: Virtual Production Sequencer

This week, I explored the Unreal Engine 5 Sequencer, learning how to apply traditional film-making techniques to Unreal. I focused on the key differences between regular film production and how things work in Unreal.

Film Techniques:

  • Master Scene (Master Shot): This means filming an entire scene in one continuous shot from a wide angle. It shows all the actors and the set, helping establish the action and relationships between characters.
  • Coverage Cameras: After the master shot, additional shots are taken from different angles, like close-ups or over-the-shoulder shots. These highlight emotions and details and help create a smooth final scene during editing.
  • Linear vs. Non-Linear Storytelling:
    • Linear: The story goes in a straight line from beginning to end.
    • Non-Linear: The story can jump around in time, using flashbacks or different sequences for a more interesting narrative.
  • Triple Take Technique: This involves filming three different versions of a shot, with slight changes in performance or camera angles. It gives editors more options to choose from later.
  • Overlapping Action: This technique makes movements feel more natural by staggering actions. For example, when a character turns, their body parts move at slightly different times.
  • Hitting Marks: Actors have specific spots where they need to stand or move during a scene to get the best camera angles and lighting.
  • Film Production Roles:
    • Gaffer: The person in charge of lighting.
    • Grips: Technicians who set up equipment.
    • Production Manager: Manages schedules and budgets.
    • Director of Photography (DP): Decides how the film looks, including camera angles and lighting.

Unreal Engine Techniques:

  • Sequence-Based Linear Workflow:
    • One Level Sequence: Organizes a scene as a single timeline, moving from start to finish without changes.
    • Multi-Camera Setup: Uses multiple cameras to capture different angles of the scene at once.
    • Single Camera Cuts Track: Films each shot separately and then combines them in editing.
  • Shot-Based Non-Linear Workflow:
    • Nested Level Sequences: Smaller parts of a project that can be worked on separately and then combined later.
    • Take System: Helps manage different versions of a shot, making it easier to find the best one.
    • Sub-Scene Tracks: Allows editing specific parts of a scene, like sound or animation, without changing everything.
  • Collaborative Workflow:
    • Sub-Levels: Sections of a project that different artists can work on independently.
    • Sub-Scene Tracks: Focus on specific elements, letting artists work on their parts without affecting others.
    • Visibility Tracks: Control what elements are visible during editing, allowing focus on certain aspects.

Workflow Comparison:

  • Linear Workflow: A simple process where everything is done in order, commonly used in traditional filmmaking.
  • Non-Linear Workflow: More flexible, allowing edits to be rearranged or done out of order. This is helpful for animation and VFX projects, enabling multiple artists to work together.

Both workflows are important for big projects, especially when teams need to work together on some parts of the project.

After exploring the differences between traditional film production and Unreal Engine, we started working with the Sequencer. I learned that the Sequencer is Unreal Engine’s Non-Linear Editing Tool, which includes:

  • Ground-up Shot Creation: This feature lets creators build individual shots from scratch in Unreal, giving full control over camera angles, lighting, and scene layout.
  • Pre-visualisation: A tool to create rough versions of scenes before full production. It helps visualise how the final scene will look and assists in planning.
  • Full Film Creation: Unreal Engine can be used to create entire films, from pre-production to final rendering, providing a virtual environment for production.
  • Game Cinematic Creation: The Sequencer is also used to create cinematic sequences for games, helping to develop narrative-driven cutscenes or trailers with high-quality visuals.

This versatility makes Unreal Engine valuable for both the film and game industries.

I learned that in traditional film, the narrative is usually structured into sequences, often following a three-act format. In contrast, Unreal Engine organises the story using a Level Sequence, which builds the entire narrative using Nested Level Sequences.

We also covered two new terms: Possessable and Spawnable Actors. Possessable actors already exist in the level, and the Sequencer simply takes control of (possesses) them, so they are always present in the scene. Spawnable actors, by contrast, are owned by the sequence itself and are spawned into the scene only while the sequence plays, which makes them ideal for elements we always want bundled with the sequence.

Afterward, we worked on a sample project called DMX Previs Sample. In this project, we learned how to create new cameras in the scene and animate their movements. This experience helped me understand the Sequencer better and how to add cameras and other objects to keyframe and animate them.

We moved on to create Spawnable Actors by adding an actor to the Sequencer. To do this, I right-clicked on the object I wanted to convert and selected Create Spawnable. This process ensures that the object is always accessible in the Sequencer when we need to render the scene.
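That right-click step maps onto a single Python call. A sketch, assuming a placeholder sequence path and a selected actor in the level:

    import unreal

    seq = unreal.EditorAssetLibrary.load_asset('/Game/Cinematics/Previs_Seq')
    actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]

    # The sequence takes ownership of a copy of the actor and spawns
    # it whenever the sequence plays.
    binding = seq.add_spawnable_from_instance(actor)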

We created a Level Sequence and opened the Sequencer to add a camera to the DMX Previs Sample scene. After adding the camera, I keyframed its focus property from frame 1 to frame 100 to create a simple animation.

We concluded the lecture by experimenting with camera settings and movements to develop different camera animations. I added shots using the camera cut feature, which helped me enhance my understanding of cameras in Unreal Engine while learning to use the Sequencer effectively.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 2: Introduction to World Building

Introduction & Basics of World Building in Unreal Engine 5

This week, I delved deeper into world building in Unreal Engine 5 and gained a greater understanding of its powerful tools that allow us to create bigger, more detailed environments with high efficiency.

World Design and Workflow Efficiency

Unreal Engine 5 offers a seamless world-building workflow, making it easier to design vast environments. The introduction of world partition and level instances helps manage large-scale worlds by optimizing the level of detail and memory usage. It simplifies handling of bigger worlds and complex stories, making the process more streamlined. The Windows tab in Unreal Engine is a central part of the interface that allows us to customize and manage our workspace. It provides access to different windows and panels that help streamline the workflow, making it easier to navigate, control, and edit various aspects of the project.

The Content Browser in Unreal Engine is an essential tool for managing all project assets, providing a centralized hub for organizing, importing, and manipulating various content types like models, textures, animations, and blueprints. It features drag-and-drop functionality for easy asset placement, detailed previews for quick assessments, and robust search capabilities to swiftly locate specific items. We can create folders for organization, apply filters and tags for efficient navigation, and edit asset properties directly within the browser.

Lighting and Environment Improvements

Lighting plays a crucial role in creating realistic environments. I explored Unreal’s enhanced lighting tools, including the Environmental Light Mixer, Directional Light, Sky Atmosphere, Sky Light, Exponential Height Fog, and Volumetric Clouds. These tools work together to provide dynamic, real-time updates to lighting, giving immediate feedback and helping us craft realistic, immersive environments.

Quixel Bridge

The updated Quixel Bridge simplifies asset importing, making high-quality models, materials, and MetaHumans easily accessible. The drag-and-drop functionality speeds up the workflow, and assets are now optimized for virtual textures, ensuring high fidelity without performance compromises. It’s incredibly useful for large-scale projects, like the environments I will be working on. We were also made aware that Quixel is moving to Fab, which will be the new home for downloads and Megascans.

Another advantage is that Quixel Bridge assets are now available directly within Unreal Engine’s Content Browser. This eliminates the need to switch between multiple applications or manually import files, which keeps the workflow fluid and focused. You can also make adjustments to assets right in the content browser, making it much easier to tweak and refine them in real-time.

Modeling and UV Tools

The new modeling and UV tools allow for in-engine mesh creation and editing, removing the need for external software. I can now create and modify meshes directly within Unreal Engine, which speeds up the workflow and makes last-minute adjustments much more manageable. From creating new meshes to reviewing and editing them, these tools are a game-changer for asset management.

Nanite Virtualized Geometry

Unreal’s Nanite Virtualized Geometry is another powerful tool I learned about, allowing us to handle millions of polygons without performance loss. Nanite automatically clusters polygons, optimizing them with a single draw call, while still maintaining the high-quality visuals we need. This opens up the potential for highly detailed environments without the usual performance constraints.

Lumen: Real-Time Global Illumination and Reflections

The Lumen system provides real-time global illumination and reflections, which brings unprecedented realism to scenes. It reacts dynamically to changes in the environment, ensuring that lighting and reflections adapt in real-time. Lumen’s ability to work with hardware ray tracing and distance fields adds depth and realism to lighting setups.

Virtual Shadow Maps

Finally, we explored Virtual Shadow Maps, which offer cinematic-quality dynamic shadows in real time. These shadows can be rendered from objects both near and far, with unlimited resolution. Virtual Shadow Maps replace older shadowing methods, providing a more optimal solution for projects using Nanite and Lumen.

Overall, this week’s learnings have equipped me with essential tools to create high-quality, detailed worlds while maintaining performance. I’m excited to apply these concepts in my upcoming projects!

Categories: 3D Computer Animation Fundamentals, Immersion

Week 1: Introduction to Unreal Engine

Introduction to Unreal Engine 5.4.4

In week 1 of getting introduced to Unreal Engine, we began with an overview of how the software can be used to create larger and more detailed worlds. The streamlined workflow makes the engine faster to work with and enables an efficient and seamless process.

The key features include:

  1. Improved asset creation tools inside the engine, offering more precision and flexibility
  2. A robust rendering system that ensures quality without sacrificing performance
  3. Dynamic global illumination and reflections

Installing Unreal Engine 

We were told to make an Epic Games account to get started with the software and to access Unreal Engine. After setting up the account, we downloaded Unreal Engine 5.4.4, the latest version at the time, so we would have the software ready for Week 2.

In this term, we will be working towards mastering the fundamentals of 3D computer animation, focusing on integrating animations and environments in innovative and experimental ways. Our aim is to explore the virtual production pipeline using tools like Unreal Engine 5, Maya, and Premiere Pro to bring our creative ideas to life.

Key goals for Term 1 include:

  • Portfolio Development: We will build a portfolio that demonstrates our technical skills and creativity in 3D animation, incorporating both still images and video.
  • Creative Exploration: We’ll be encouraged to push boundaries by experimenting with the deconstruction and reconstruction of environments, playing with proportions, and exploring materiality to evoke different emotions and narratives.
  • Animation Techniques: We will work on creating a showreel (30-120 seconds) as the final submission for Term 1 to highlight our animation techniques, focusing on movement qualities such as linear action, scale, tension, and force in our environments.
  • Research and Reflection: Our journey will be documented through a blog, where we will reflect weekly on our research, design decisions, and technical challenges. The blog will track our progress week by week, serving as a summary of our project development and its connections to broader social and cultural themes.
  • Presentations and Design Proposals: We will also have to create a 5-minute recorded presentation along with the final video to showcase our design concepts, storyboards, and methods, presenting our research and creative outcomes while challenging traditional approaches to 3D animation.