Categories: 3D Computer Animation Fundamentals, Animation

Week 6: Weight Shift Spline

This week, I worked on refining my weight shift animation and converting it to spline, focusing on the feedback from the previous review session.

I started by revisiting the blockout, where George had provided feedback on the weight shift, certain leg positions, and foot rotations. After making the necessary changes based on his suggestions, I converted the animation to spline and focused on editing the graphs to create smoother transitions. The goal was to eliminate any sharp movements and ensure that the transitions between key poses felt natural and the weight shift looked realistic.
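
To make that conversion step concrete, here is a minimal Maya Python sketch of the idea, assuming the animated controls are selected in the viewport; in class the work was done through the Graph Editor UI rather than script:

    import maya.cmds as cmds

    # Switch every key on the selected controls from stepped blocking
    # tangents to auto tangents so Maya interpolates smoothly between poses.
    ctrls = cmds.ls(selection=True)
    cmds.keyTangent(ctrls, edit=True, itt='auto', ott='auto')

    # Run the Euler filter over the rotation curves to clean up flips,
    # which often show up on foot controls after conversion.
    for ctrl in ctrls:
        curves = cmds.keyframe(ctrl, query=True, name=True) or []
        rotation_curves = [c for c in curves if 'rotate' in c]
        if rotation_curves:
            cmds.filterCurve(rotation_curves)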

Looking ahead to next week, I'll continue refining the cycle as I convert it to spline, adding more detail and editing the graphs as needed.

Categories: 3D Computer Animation Fundamentals, Animation

Week 5: Weight Shift Blocking

We continued working in Maya, setting up our projects and referencing the "Walker" model into our files. The first step was to create a selection set of the main controls we needed for keyframing and add it to the shelf in the Animation tab.
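
As a quick illustration of that setup, here is a minimal Maya Python sketch, assuming the main controls are already selected in the viewport; the set name is a placeholder:

    import maya.cmds as cmds

    # Create a quick select set from the current selection. The
    # 'gCharacterSet' text tag is what marks it as a quick select set
    # rather than a plain object set.
    cmds.sets(name='walkerMainCtrls', text='gCharacterSet')

    # Reselect everything in the set with one command; this is the
    # line a shelf button for the set would run.
    cmds.select('walkerMainCtrls')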

Our main task this week was to work only in the blocking phase. We started by planning out our poses and shooting our own reference videos for accuracy. I chose a simple action in which the character steps from left to right. For the first frame, I posed the Walker leaning on its left hip, which helped me understand how the weight shifts left before moving right and then shifts again during the step. I used this principle to animate the weight shift.

To add realism, we adjusted the foot brake value to 100 and tweaked the foot roll and heel roll settings to perfect the Walker’s pose as needed.
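
For reference, those tweaks could also be keyed through Maya Python, as in the sketch below; the control and attribute names ('footBrake', 'footRoll', 'heelRoll') follow the class description but are rig-specific assumptions:

    import maya.cmds as cmds

    # Hypothetical Walker foot control; real names depend on the rig.
    foot_ctrl = 'Walker:L_foot_ctrl'

    cmds.setAttr(foot_ctrl + '.footBrake', 100)  # plant the foot firmly
    cmds.setAttr(foot_ctrl + '.footRoll', 20)    # roll over the ball of the foot
    cmds.setAttr(foot_ctrl + '.heelRoll', 5)     # slight heel lift

    # Key the adjusted pose on the current frame.
    cmds.setKeyframe(foot_ctrl)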

Initially, I had some challenges with joint positioning and determining how much to bend the Walker in the anticipation pose. However, after reviewing my reference video and getting feedback from George on my blocking, I was able to improve the pose. We were also instructed to convert our blocking to splines the following week. 

We were also asked to create three poses using references from either our sketches or the internet. For this, we used the 'Ultimate Bony' rigged character. I chose to make an action pose, a yoga pose, and a dance pose to add variety. This exercise taught me how to position different body parts, such as the shoulder, hip, and knee joints, to make the poses look natural.

It also helped me understand how each body part affects the others when creating a pose. Adjusting joint rotations and considering the angles helped me make each pose feel more dynamic and balanced.

Categories: Design for Animation, Narrative Structures and Film Language

Week 7: Narrative Structure and Character Role

Narrative Structure

We started by understanding how narrative structure is crucial for storytelling. It sets up the chain of events in a story, guiding how the audience connects with the characters and how the story unfolds. A narrative must engage the audience and build to a satisfying conclusion. For character-driven stories, it's vital that the actors not only have appeal but also convey their roles convincingly. Directors play a central role in drawing the best performance from the actors, enabling them to communicate the story effectively. This matters because the audience's emotional and dramatic connection depends on how the characters are portrayed.

In my understanding, the focus on character appeal and performance emphasizes how the success of a narrative depends not only on the plot but also on how well the characters are brought to life. This reinforces the director’s responsibility in extracting the best from their actors.

Literary Structures

The traditional forms of storytelling—novels, poetry, plays, short stories, etc.—are often used as references in structuring narratives. These forms influence how stories in animation and other media are built, with different genres (like myths or fairy tales) helping shape the expected flow of events. For animation, these traditional literary forms often serve as the basis for constructing narratives, with their age-old conventions about structure, character roles, and themes guiding the development of the animated stories.

From what I understood, by connecting animation to these literary forms, we can see how certain conventions (such as the “hero’s journey” or the archetypal good vs. evil) are woven into animation narratives, just as they have been in literature for centuries.

The Three-Part Story Structure & The Five-Act Structure

Aristotle’s idea of a beginning, middle, and end serves as the foundation of most stories, including animations. This structure is still relevant today, particularly in understanding how to organize events to create a satisfying narrative.

The Five-Act Structure (Exposition, Rising Action, Climax, Falling Action, and Resolution) helps deepen this concept by offering a more granular breakdown of a narrative’s development. In Act 1, the audience is introduced to the characters, setting, and conflict. Act 2 intensifies the story, with the protagonist encountering obstacles. The Climax (Act 3) is where the story hits its highest point of tension. In Act 4, the action begins to wind down as conflicts are resolved. Finally, in Act 5, we reach the resolution, where the story concludes, and any remaining plot points are tied up.

I learned that these structures help build the pace and tension of the story, ensuring the audience stays engaged from the introduction to the resolution. Applying these structures to animation helps make the plot more digestible, clear, and emotionally engaging.

Equilibrium and Re-Equilibrium

The equilibrium-re-equilibrium model follows a structure where the narrative begins in a balanced state (equilibrium), is disrupted (disruption), and eventually resolves (re-equilibrium). This concept is particularly useful for understanding the dynamic nature of narratives, where the protagonist’s journey or growth leads them to a new equilibrium, often after facing significant challenges.

The idea of disruption and re-equilibrium resonates for animation because the medium can manipulate time and space in ways other mediums can't, bending reality to the narrative's will as the story works its way toward a new balance.

Metamorphosis in Animation

Metamorphosis refers to the ability of animation to transform objects, characters, or environments in unexpected ways. This process can distinguish animation from traditional cinema by allowing for constant change and transformation within the story. The ability to show fluid, non-linear, and imaginative transformations (like changing shapes or environments) is an essential characteristic of animation, setting it apart from more static live-action films.

I understand that metamorphosis in animation allows for an expression of creativity and flexibility. Characters or worlds can change form or perspective, supporting the fluidity and dream-like qualities that animation offers, which live-action films can’t achieve as naturally.

The Language of Animation: Editing

In animation, editing is essential in connecting shots and scenes to create a coherent narrative. It is a tool for pacing, narrative progression, and maintaining audience engagement. The rules of editing, such as ensuring a smooth transition between scenes or using close-ups for emphasis, are crucial for storytelling. The editing should never distract from the story but instead should flow seamlessly, guiding the audience without them noticing the mechanics behind it.

I understand from this that editing is an art form in itself. It’s not just about technical skills but also about knowing when and how to cut a scene to maintain emotional tension, highlight details, or shift the narrative’s focus. The idea that editing should be “invisible” speaks to how well-crafted edits can make the audience focus on the story and characters, rather than the cut itself.

Disney’s Hyperrealism and Influences in Animation

Disney's hyperrealistic animation, where realism is emphasised even within an artificial medium, is a driving aesthetic that many studios have sought to replicate. For example, the attempt to reproduce realistic movement and emotion in characters is a major influence at studios such as Pixar, DreamWorks, and Blue Sky Studios. These studios often adopt techniques similar to Disney's to evoke believability, such as detailed textures and lifelike movements.

On the other hand, some studios resist Disney’s hyperrealism, focusing on more stylized forms of animation. For example, the animation style in films like “The Triplets of Belleville” or the works of Studio Ghibli takes a more abstract approach to character design and movement. They emphasize artistic expression over hyperrealistic detail, which creates a different emotional connection with the audience.

From this, I grasp that hyperrealism in animation isn’t just about achieving photo-realistic visuals; it’s about conveying believability through the medium’s artificial nature. Studios either adhere to this realism or deliberately choose to defy it for creative reasons, both of which result in distinct viewer experiences.

Research Areas

The research areas raised questions about animation’s disruptive properties, such as its ability to break the boundaries of physical reality. Animation is more fluid, and it has the power to visualize the impossible. Cartoons like Duck Amuck and surreal moments like Pink Elephants on Parade show how animation can surprise the audience by distorting reality in ways live-action can’t, offering a playful, imaginative perspective that challenges traditional cinematic boundaries.

In essence, animation’s freedom allows it to express ideas that would be impossible or highly difficult in live-action, such as visual metaphors, whimsical transformations, or exaggerated emotional expressions. This reinforces how animation can be both a form of entertainment and a medium for exploring more abstract concepts.

Categories: Design for Animation, Narrative Structures and Film Language

Week 6: The Language of Animation – Mise-en-Scène

Mise-en-Scène is a French term that means “what is put into a scene” or “frame.” I learned that mise-en-scène refers to all the visual elements within a frame that contribute to storytelling in animation and film. These elements work together to communicate essential information to the audience without needing words. The key elements of mise-en-scène are:

Settings & Props

  • I learned that the setting of a scene plays a significant role in shaping the story’s mood and guiding the audience’s expectations. Settings can either be built from scratch or carefully selected to add depth to the narrative. For example, in An American Tail, the location of Manhattan is not just a backdrop, but it also adds to the character’s emotional journey. The setting helps set the tone for the events that unfold. Props, on the other hand, provide additional meaning and context to the characters and the plot, as seen in Toy Story and The Godfather, where the props play key roles in understanding the characters’ personalities and the storyline.
  • My takeaway: The setting and props in a scene are not just there to fill space but are integral to conveying meaning and expectations to the audience.

Costume, Hair & Make-Up

  • I learned that costume, hair, and makeup are used to instantly convey a character’s personality, social status, and occupation. For example, in 101 Dalmatians, the costumes and makeup choices for Cruella de Vil immediately tell us she is extravagant and villainous. In Barry Lyndon, the makeup and costumes highlight the social standing of the characters, supporting the film’s thematic depth.
  • My takeaway: These elements act as immediate visual cues, helping the audience quickly understand who a character is, even before they speak or take action.

Facial Expressions & Body Language

  • I realised that facial expressions are a direct way to show a character’s emotions, while body language can indicate how characters relate to each other. For example, in The Breadwinner, the protagonist’s facial expressions and body language reflect her resilience and determination in the face of adversity. The way characters position themselves or move can subtly express power dynamics or emotional states.
  • My takeaway: Animation and film often rely on non-verbal cues like facial expressions and body language to establish relationships and convey emotions effectively.

Positioning of Characters & Objects within the Frame

  • I learned that where a character or object is placed in the frame directs the audience’s attention. For instance, in Isle of Dogs, the positioning of the characters in relation to one another often signifies their emotional connection or tension. The way characters are placed within the frame can also highlight their importance or vulnerability.
  • My takeaway: Positioning within the frame is a powerful tool for visual storytelling, guiding the viewer’s focus and adding layers to the narrative.

Lighting & Colour

  • I discovered how lighting and colour can shape the mood of a scene. For example, low-key lighting, which creates sharp contrasts and deep shadows, is used in films like Citizen Kane to add a sense of mystery or drama. High-key lighting, as seen in The Barber of Seville, is bright and natural, making the scene feel more realistic. Colour also plays a critical role, as seen in Amelie, where warm tones create a nostalgic and whimsical atmosphere, or in The Revenant, where the cold, muted colours enhance the harshness of the environment.
  • My takeaway: Lighting and colour are not just technical aspects of filmmaking; they are essential tools for creating mood, character emotion, and thematic depth.

Depth-of-Field

  • I learned that depth-of-field refers to the distance between the nearest and farthest objects in focus. This technique can be used to emphasise certain elements in a scene. For instance, deep focus allows both close and distant objects to remain sharp, making it possible to highlight the character’s isolation or the vastness of their environment. Shallow focus, on the other hand, keeps only a specific area or object in focus, often highlighting a character’s inner thoughts or feelings.
  • My takeaway: The use of focus adds depth to the scene and directs the audience’s attention to what is important in the narrative at that moment.

Types of Shots

  • I learned that different types of shots are used to convey varying perspectives and emotions. For example, extreme close-ups, like in The Incredibles, focus intensely on a small detail, which can amplify tension or importance. A medium shot or long shot, like in Wall-E, helps establish the relationship between the character and their environment.
  • My takeaway: The choice of shot type has a profound impact on how the audience perceives the story, emphasising details or broadening the narrative’s scope.

Special Shot Types

  • I explored shot types that focus on specific relationships, such as a one-shot, which shows a single character (as in Anomalisa), or a two-shot, which features two characters (as in My Life as a Courgette). Group shots, like in Meek’s Cutoff, show multiple characters interacting and can emphasise unity or conflict.
  • My takeaway: Special shot types like the one-shot or two-shot are used to highlight the relationship between characters, influencing how the audience perceives their interactions.

Angle Shots

  • I learned that the angle of a shot can change the power dynamics within a scene. A high-angle shot, like in The Lion King, can make the character seem small or vulnerable, while a low-angle shot, like in There Will Be Blood, can make the character seem powerful or intimidating.
  • My takeaway: Camera angles can visually communicate a character’s emotional state or role within the story, affecting how the audience perceives them.

Point of View (POV) Shots

  • I discovered that point-of-view shots let the audience see the world through a character’s eyes, creating a deeper emotional connection. This technique is effective for immersing the audience in the character’s perspective, as seen in various films and animations.
  • My takeaway: POV shots strengthen the connection between the character and the audience, making the story more personal and immersive.

Moving Shots

  • I learned about the different types of moving shots, such as pan shots (which pivot along the horizon), tilt shots (which move up or down), and dolly shots (which move the camera forward or backward). These shots are often used to follow the action or explore the environment. In The Breadwinner, moving shots help create a sense of urgency and emotional intensity.
  • My takeaway: Moving shots are dynamic tools in animation and film, adding energy and emotional depth to the narrative.

Reading the Mise-en-Scène: The Breadwinner & Isle of Dogs

  • I analysed the mise-en-scène in The Breadwinner, where the combination of settings, lighting, and character positioning reinforces the protagonist’s sense of isolation and her emotional journey. Similarly, Isle of Dogs uses colour, lighting, and character placement within the frame to communicate the characters’ relationships and the thematic elements of loyalty and survival.
  • My takeaway: The way mise-en-scène is crafted in animated films is crucial for conveying the emotional and thematic depth of the story. It’s not just about what is seen, but how it’s presented to shape the viewer’s experience.

Screen Direction

We moved on to screen direction, which refers to the movement of characters or objects on screen from the audience's perspective, and learned how it's essential for maintaining visual continuity. If the movement isn't consistent, it can confuse the audience. Screen direction is governed by camera positioning and movement, and this continuity is crucial for smooth editing and storytelling.

What I understood is that consistent screen direction is necessary to maintain a fluid and believable flow between shots. Terms like “camera left” and “camera right” help filmmakers define the movement within a frame. This needs to be established early in production, especially in the storyboard and animatic stages, so that the timing and flow of the scenes remain intact.

My takeaway is that screen direction helps us guide the audience’s attention and ensures that the characters’ actions and relationships are clearly understood. Without it, even simple interactions could feel disjointed or confusing. I also learned that pre-determined screen direction is especially crucial in animation, where movements must be precise and planned in advance.

Screen Continuity and the 180-Degree Rule

I learned that once screen direction is established, it must be maintained throughout the scene to avoid visual disorientation. This consistency ensures that the actors are positioned and moving in ways that make sense in relation to each other. The Imaginary Line or 180-degree rule helps keep track of the screen direction.

What I understood from the 180-degree rule is that if we shoot from one side of the axis, the movement and eye lines of the characters will remain consistent. This keeps the audience from getting lost or confused about the characters’ relationships or the direction they’re moving in.

My takeaway is that crossing the axis can disrupt continuity, but certain techniques, like using a neutral shot, can help reset the direction and allow smooth transitions. This flexibility in screen direction is essential when managing the complexity of film and animation production.

Animation Layout and Screen Direction

I learned that animation layout is the process of designing the environments for animated films. This stage is crucial for adapting the story to the film’s style, and it’s closely linked to screen direction, as the layout needs to be planned to maintain consistent movement and positioning of characters and objects.

What I understood is that layout artists need to ensure that the rules of screen direction are considered, especially in camera movements like pans and tracks. This organisation helps avoid confusion and ensures that the audience can follow the animation seamlessly.

My takeaway is that screen direction is just as important in animation as in live-action filmmaking. The movement of the camera and characters must be carefully thought out to keep the audience engaged and the story clear. 

Animation Staging

I also learned that staging in animation shares many purposes with film and theatre in directing the audience’s attention but has unique implications in its execution. This principle focuses on making an idea completely clear, whether it’s an action, expression, mood, or personality.

What I understood is that character placement and composition play a critical role in achieving this clarity. Elements such as camera angles, light and shadow, the dynamics of character movements, and how a character enters a scene all work together to focus the audience’s attention. For example, a sudden entry can create surprise, while an expectant one builds anticipation.

My takeaway is that designing the use of long, medium, and close-up shots helps emphasise and pace the narrative. Each shot type serves specific purposes: long shots establish context, while close-ups highlight emotion or detail. The timing and pacing of these shots have significant production implications, requiring careful planning to maintain the flow and meaning of the scene. If the background clashes with the character or is overly complex, it can distract from the main focus. It’s essential to keep the design clean and scale the key subject properly to avoid unnecessary distractions. Every object or detail in a frame has the potential to be a symbol, so unnecessary elements should be edited out to maintain clarity and impact.

My overall conclusion is that these cinematic principles underscore how storytelling in animation and film is an intricate balance of visual and thematic elements. From the arrangement of props and characters to shot choices and screen direction, each decision plays a role in directing the audience’s focus, building tension, and conveying emotion. Mastering these aspects is essential for creating engaging, coherent, and impactful stories.

Categories: Design for Animation, Narrative Structures and Film Language

Week 5: Social and Political Comment in Animation

Politics and Persuasion in Entertainment

In week 5, we explored the complex relationship between politics and persuasion within entertainment and how these elements manifest in film, animation, and other media. One significant aspect of this topic is understanding the mechanisms through which media platforms can shape, influence, and persuade audiences on both conscious and subconscious levels.

Broadly, audiences can be influenced through various outlets, such as social media, broadcast news, film and animation, and television. Each of these mediums carries a unique potential to embed persuasive messages. For instance, broadcasts and print media maintain an authoritative presence that can sway public opinion, while independent film and animation often provide a platform for personal stories and critical commentary on societal issues.

We analysed how media platforms—from mainstream and independent film to games, podcasts, and social media—hold the power to direct, challenge, and reinforce specific narratives. The potential of these platforms to deliver impactful messages stems from their ability to reach diverse audiences and evoke emotional responses.

A key part of this exploration was understanding how messages within moving images can be presented. These can range from subliminal or masked content, which subtly embeds messages, to overtly propagandist intentions that are clear and direct. Persuasive content might serve commercial purposes, aiming to promote products or ideologies, while documentary or investigative approaches seek to inform or provoke thought.

Under the broad umbrella of politics in media, key areas of focus include political and commercial persuasion, and how subjects like race, gender, equality, disability, ethics, and ecology are depicted. The way these themes are approached can vary widely across documentary films, mainstream cinema, television, games, and advertising. Each of these formats can either challenge existing social norms or reinforce them, depending on the underlying political context. 

In addition to my reflections on political persuasion in entertainment, I also explored the concept of animated documentaries and the unique role animation plays in non-fiction contexts. Animated documentaries, which are recorded frame by frame, represent the real world rather than an imagined one. They are presented as documentaries by their producers or accepted as such by audiences, festivals, or critics. Animation in these contexts is often used to clarify, explain, illustrate, and emphasise certain points.

A key question in this field is what the use of animation means as a representational strategy in documentary. Animation can be a powerful tool for presenting subjective experiences, offering insights into mental states and providing alternative ways of seeing the world. While some critics argue that animation destabilises the documentary’s claim to represent reality, Annabelle Honess Roe suggests the opposite—that animation broadens our ability to depict reality in non-conventional ways, allowing us to explore the world from unique perspectives.

The issue of authenticity in documentary also arises with animated documentaries. Bill Nichols argues that documentary images are often linked to the reality they represent, but animation’s departure from traditional documentary realism raises questions about how authenticity is conveyed. Honess Roe notes that animated documentaries do not easily fit into the traditional documentary mold. This challenges the widely held belief that documentaries should be objective and factual, with their authenticity dependent on their realism.

Furthermore, some critics, like Paul Wells, argue that animation’s inherent subjectivity makes it difficult to achieve objectivity, which is a cornerstone of traditional documentary. However, animation’s ability to present subjective experiences can enhance the understanding of complex, personal narratives. For example, animated works like Waltz with Bashir and the Animated Minds series use animation to convey first-person accounts of trauma and mental health, adding layers to the storytelling that live-action documentary may not easily achieve.

Animated documentaries challenge the notion of what a documentary “should” be, which leads to debates about their place within the genre. As animation becomes more commonplace in documentaries, some worry it may become a “layer” that distances the audience from the real experiences being portrayed, while others express concern about its potential for lazy storytelling, where animation is simply used to illustrate an existing narrative.

Overall, I’ve come to see animated documentaries as a unique and evolving form of storytelling, one that pushes the boundaries of what can be represented in non-fiction and challenges traditional notions of authenticity and objectivity. The evolving role of animation in documentary reflects broader changes in how we define reality and truth in visual media.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 5: UE Physics

Physics in Animation

  • Explored how physics adds realism to animations, especially in scenes where objects are falling or impacted.
  • Learned that applying physics makes objects react naturally to forces, adding immersion.

Exploring Unreal Engine Modes

  • We usually work in Selection Mode, but this week we explored additional modes:
    • Landscape Mode
    • Foliage Mode
    • Fracture Mode – our main focus, used to apply fractures and destruction effects to meshes.

Modeling and Fracturing Meshes

  1. Started with Modeling Mode to create and place basic meshes in the scene.
  2. Switched to Fracture Mode to break down meshes in various ways:
    • Uniform Fracture – breaks down a mesh evenly.
    • Cluster Fracture – creates smaller, clustered fragments, ideal for explosive effects.
    • Radial Fracture – creates a point-of-impact effect, like a gunshot.
    • Plane Fracture – simulates slicing, like a sword cutting through.
    • Custom Fracture – allows for user-defined break patterns.

Fracturing a Cube Step-by-Step

  1. Selected a cube mesh to fracture.
  2. Generated a fracture asset:
    • Went to Generate > Save in a folder > Generate Fracture.
  3. Enabled physics for realistic impact (see the Python sketch after these steps):
    • In the Details Panel, enabled Simulate Physics so the cube responds to gravity and collisions.
  4. Applied Uniform Fracture mode:
    • Adjusted the fracture levels to control the number of broken parts.
    • Observed how one cube fractured into 20 separate pieces.
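
To show the Simulate Physics step outside the Details Panel, here is a small Unreal Python sketch that flips the same flag on whichever actors are selected in the level; treat it as an illustration of the setting rather than the exact workflow from class:

    import unreal

    # Enable physics simulation on each selected actor's primitive
    # component so it responds to gravity and collisions.
    for actor in unreal.EditorLevelLibrary.get_selected_level_actors():
        component = actor.get_component_by_class(unreal.PrimitiveComponent)
        if component:
            component.set_simulate_physics(True)
            unreal.log('Enabled Simulate Physics on ' + actor.get_name())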

Advanced Fracture and Impact Control

  • Further fractured pre-existing fractured parts to create additional detail.
  • Adjusted impact strength by lowering the Damage Threshold in the Details Panel to increase impact sensitivity.
  • Turned off bone colours by unchecking Show Bone Colours in the Details Panel.

Nanite for Performance Optimisation

  • Switched to Nanite for better performance in heavy physics scenes:
    • In the Chaos Physics settings, located the geometry collection, then enabled Nanite on it in the Content Browser, making the densely fractured scene much cheaper to render.

Adding Material Properties to Enhance Physics

  • Applied bounce and material properties to objects:
    • In Details Panel > Collision > Physical Material Override, adjusted friction, density, and strength.

Scene Recording Setup

  • Enabled Engine Settings to access built-in blueprints:
    • Activated by ticking Show Engine Content in Content Browser Settings.
    • Located Anchor and FS Master Field in the content browser, then copied them to a custom physics folder.
  • Set up the scene for recording:
    • Dragged Anchor and Master Field into the scene.
    • Added an Initialization Field and created a Chaos Solver to simulate physics interactions.
    • Added objects to the Chaos Cache Manager for precise recording.

Using the Sequencer to Capture Scenes

  1. Added objects to the Sequencer and set keyframes for start and end points.
  2. Worked with Constraint Actors to apply physics constraints:
    • Added two cubes to test the constraint as an anchor point.
    • Observed how one fractured cube reacted to constraints, swinging around the anchor point.

Alternative Recording with Take Recorder

  • Enabled Take Recorder in Plugins and added actors for recording:
    • Started simulation with Alt + S to capture real-time physics effects.
    • Viewed recorded sequences in the Content Browser after recording.

Creating a Bouncing Ball

  1. Created a sphere and enabled Simulate Physics in the Details Panel.
  2. Applied a custom physical material for bounce:
    • Created a blueprint class called BP_Ball and added a static mesh.
    • Created a physical material with customised friction and density settings.
  3. Applied the physical material to the sphere to achieve the desired bounce (see the Python sketch below).
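
Putting those steps into Unreal Python, here is a rough sketch of the physical-material part; the asset name, folder, and values are illustrative assumptions:

    import unreal

    # Create a physical material asset with bouncy settings.
    asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
    phys_mat = asset_tools.create_asset(
        'PM_Bouncy', '/Game/Physics',
        unreal.PhysicalMaterial, unreal.PhysicalMaterialFactoryNew())

    # Restitution controls bounciness; friction controls sliding.
    phys_mat.set_editor_property('restitution', 0.9)
    phys_mat.set_editor_property('friction', 0.3)
    unreal.EditorAssetLibrary.save_loaded_asset(phys_mat)

    # Apply it as the Physical Material Override on a selected sphere.
    for actor in unreal.EditorLevelLibrary.get_selected_level_actors():
        mesh = actor.get_component_by_class(unreal.StaticMeshComponent)
        if mesh:
            mesh.set_phys_material_override(phys_mat)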

This week’s exploration provided a solid understanding of fractures, physics settings, and recording techniques in Unreal Engine. These tools allow us to bring more realism and dynamic effects to our animations.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 4: Sequencer and Materials

In Week 4, I continued to use the Sequencer in Unreal Engine and started working on creating Materials.

I learned how to set up Shots in the Sequencer, which is key for staying organised when using several cameras. To create a Shot, you first add a Subsequence track by clicking the Add button in the Sequencer menu. This Subsequence Track helps keep track of and manage the different cameras used for various tasks.

I learned how to create Shots in Unreal Engine’s Sequencer:

  • To create Shots, generate a Level Sequence from the Cinematics menu.
  • Ensure proper naming for better organisation.
  • Add these sequences to the Subsequence Track, where you can adjust their lengths.
  • To assign cameras to Shots:
    • Select the desired camera.
    • Press Ctrl + X to cut the camera.
    • Double-click on the chosen Shot to paste the camera into it.

This week, I also learned how to create Materials in Unreal Engine. I started by downloading a Material from Quixel Bridge and importing it into Unreal Engine. Then, I created a new Material and used the Material’s node editor to add the following maps from the imported Material:

  • R Channel: Ambient Occlusion
  • G Channel: Roughness
  • B Channel: Displacement

I connected the maps as follows:

  • The RGB channel of the Colour Map to the Base Colour
  • The G channel of the Roughness Map to the Roughness input
  • The RGB channel of the Normal Map to the Normal input of the new Material

Afterward, I started experimenting with the tiling of the Material. To do this, I incorporated several nodes, including Texture Coordinate, Multiply, and Add. By adding these nodes, I was able to adjust the tiling according to different requirements. I played around with different values to see how each adjustment impacted the appearance of the Material. This exploration not only helped me understand how tiling works but also allowed me to visualise a range of outcomes, giving me greater insight into the creative possibilities within materials in UE.
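
That node wiring can also be scripted with Unreal's MaterialEditingLibrary. The sketch below builds just the Multiply-based tiling path into Base Colour; the material path and tiling value are made-up examples:

    import unreal

    lib = unreal.MaterialEditingLibrary
    material = unreal.EditorAssetLibrary.load_asset('/Game/Materials/M_Ground')

    # Texture Coordinate -> Multiply -> TextureSample.UVs gives uniform tiling.
    tex_coord = lib.create_material_expression(
        material, unreal.MaterialExpressionTextureCoordinate, -600, 0)
    multiply = lib.create_material_expression(
        material, unreal.MaterialExpressionMultiply, -400, 0)
    sample = lib.create_material_expression(
        material, unreal.MaterialExpressionTextureSample, -200, 0)

    # The Multiply node's ConstB acts as the tiling factor (3x3 here).
    multiply.set_editor_property('const_b', 3.0)

    lib.connect_material_expressions(tex_coord, '', multiply, 'A')
    lib.connect_material_expressions(multiply, '', sample, 'UVs')
    lib.connect_material_property(sample, 'RGB', unreal.MaterialProperty.MP_BASE_COLOR)
    lib.recompile_material(material)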

Once I created a Material, I was guided to generate a Material Instance. This is done by right-clicking the Material and selecting the Create Material Instance option from the menu. The primary distinction between a Master Material and a Material Instance is that the Material Instance inherits all its properties from the Master Material.

This inheritance enables real-time updates and adjustments, allowing for more flexibility and efficiency in the material design process. By using Material Instances, I can tweak parameters without altering the original Master Material, making it easier to experiment with different looks while maintaining a consistent base. In conclusion, this week was very informative as I enhanced my skills in material creation and manipulation in UE.
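
To round the week off, here is a rough Python sketch of that instancing step; the asset names and paths are placeholders:

    import unreal

    # Create a Material Instance whose parent is the Master Material.
    parent = unreal.EditorAssetLibrary.load_asset('/Game/Materials/M_Ground')
    factory = unreal.MaterialInstanceConstantFactoryNew()
    factory.set_editor_property('initial_parent', parent)

    asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
    instance = asset_tools.create_asset(
        'MI_Ground', '/Game/Materials',
        unreal.MaterialInstanceConstant, factory)

    # The instance inherits everything from the parent; its exposed
    # parameters can then be tweaked without touching the Master Material.
    unreal.EditorAssetLibrary.save_loaded_asset(instance)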

Categories: 3D Computer Animation Fundamentals, Animation

Week 4: Ball With Tail Spline

Refining the Ball with Tail Blocking

This week, we continued with our work on the Ball with Tail animation, picking up from last week’s progress by focusing on converting our Block Out animations into Spline animations. We kicked off with a critique session, where we received initial feedback on our Block Out animations, helping me pinpoint areas for improvement.

Following the critique, we resumed our work and learned how to transform Block Out animations into Spline. After the conversion, we utilised the Graph Editor to refine the animations and correct the ball’s trajectory using the Motion Trail feature. This allowed us to make subtle adjustments to the ball’s path, enhancing the realism of the jump and making it look smoother than the blocking. I also learned to fine-tune the Translate Y graph in the Graph Editor, incorporating ease-in and ease-out effects to give the jump a more natural feel.
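
Here is a small Maya Python sketch of that Translate Y easing idea; the control name and apex frame are placeholders rather than my actual scene values:

    import maya.cmds as cmds

    ball = 'ball_ctrl'  # hypothetical control name

    # Convert the blocked keys on Translate Y to spline interpolation.
    cmds.keyTangent(ball, attribute='translateY', itt='spline', ott='spline')

    # Flatten the tangents at the apex key (frame 12 here) so the ball
    # eases in and out of the top of the jump instead of hitting a point.
    cmds.keyTangent(ball, attribute='translateY', time=(12, 12),
                    itt='flat', ott='flat')

    # Weighted tangents allow finer control over the amount of ease.
    cmds.keyTangent(ball, attribute='translateY', weightedTangents=True)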

In terms of the tail animation, I made slight modifications in the Graph Editor to smooth out its movement. I also cleaned up the rotations so the tail follows the right curvature while in motion. We learned about managing keyframes, focusing on how to delete any extra keyframes at the end of the animation process. This keeps the Graph Editor neat and easy to use. It's important to remove these extra keyframes only after completing the animation so that nothing changes while we're still working.

This week, I adhered to the plan I developed last week and successfully converted my Block Out animation into Spline. Using the Graph Editor, I added ease-in and ease-out effects, while the Motion Trail helped me refine the ball's bounce distance and trajectory. I made several additional adjustments to ensure that the animation of the ball with the tail appeared more fluid and realistic. This is my version of Ball with Tail:

Categories: 3D Computer Animation Fundamentals, Animation

Week 3: Anticipation & Ball with Tail

Planning & Animating a Ball with Tail in Maya

In week 3, we focused on Anticipation, a key principle of animation. I learned that anticipation serves as a mechanical buildup of force, which reflects the idea that all movement is generated by forces, either external or internal. Anticipation effectively builds internal force to create dynamic motion.

We were advised to master foundational rules before deviating from them, which is crucial at this early stage of our animation journey. For the ball with tail animation, we observed videos of squirrels to understand how their tails react during movement and jumping. This helped us grasp the natural curve of the tail following the squirrel’s direction.

We were then taught the difference between Block Out and Spline:

Block Out:

Block Out is a foundational technique in animation that involves creating a rough version of the animation by establishing key positions (keyframes) for the main elements of the scene. The primary goals of the Block Out phase are to define the timing, spacing, and overall movement of the characters or objects without worrying about fine details.

Spline:

Once the Block Out phase is complete, the next step is to convert the rough animation into Spline. Spline animation refines the keyframes by smoothing out the motion curves, resulting in more fluid and natural movement. This allows for fine-tuning of acceleration and deceleration (easing), helping to create more realistic movements that mimic how objects and characters behave in the real world.
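
To make the two phases concrete, here is a minimal Maya Python sketch with placeholder frames and values: the Block Out keys hold with stepped tangents, and the Spline pass switches them to smooth interpolation:

    import maya.cmds as cmds

    ball = 'ball_ctrl'  # hypothetical control name

    # Block Out: key the main poses with stepped tangents so each pose
    # holds until the next key, with no in-betweens.
    poses = [(1, 0.0), (6, 8.0), (12, 0.0)]  # (frame, translateY)
    for frame, height in poses:
        cmds.setKeyframe(ball, attribute='translateY', time=frame, value=height)
    cmds.keyTangent(ball, attribute='translateY', ott='step')

    # Spline: once the blocking reads well, switch to smooth tangents.
    cmds.keyTangent(ball, attribute='translateY', itt='spline', ott='spline')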

We learned that it’s generally more effective to push concepts like anticipation and squash & stretch too far initially and then refine them, rather than hesitating and making small adjustments. Anticipation is crucial for illustrating the strength and force behind movements, while squash and stretch are vital for enhancing the physical believability of actions.

For this task, I began by sketching out my initial ideas in 2D before moving on to animate in Maya based on that reference. Using the Block Out method helped me establish the starting positions necessary to get the ball moving. Although the animation was a bit choppy at first, I added extra keyframes to achieve a smoother flow.

Once the ball’s motion was established, I turned my attention to the tail. At the first keyframe, I positioned the tail as I envisioned it. As I continued, I rotated the tail into an ‘S’ curve, inspired by the natural movement I observed in the reference video. By setting keyframes and blocking the tail’s motion with each jump, I aimed to create a realistic effect.

After setting all the keyframes, the animation looked quite good as a preliminary step toward the final version. I’m now prepared for the next week, where we’ll learn how to convert this animation into Spline for a smoother and more refined appearance.

Categories: 3D Computer Animation Fundamentals, Immersion

Week 3: Virtual Production Sequencer

This week, I explored the Unreal Engine 5 Sequencer, learning how to apply traditional film-making techniques to Unreal. I focused on the key differences between regular film production and how things work in Unreal.

Film Techniques:

  • Master Scene (Master Shot): This means filming an entire scene in one continuous shot from a wide angle. It shows all the actors and the set, helping establish the action and relationships between characters.
  • Coverage Cameras: After the master shot, additional shots are taken from different angles, like close-ups or over-the-shoulder shots. These highlight emotions and details and help create a smooth final scene during editing.
  • Linear vs. Non-Linear Storytelling:
    • Linear: The story goes in a straight line from beginning to end.
    • Non-Linear: The story can jump around in time, using flashbacks or different sequences for a more interesting narrative.
  • Triple Take Technique: This involves filming three different versions of a shot, with slight changes in performance or camera angles. It gives editors more options to choose from later.
  • Overlapping Action: This technique makes movements feel more natural by staggering actions. For example, when a character turns, their body parts move at slightly different times.
  • Hitting Marks: Actors have specific spots where they need to stand or move during a scene to get the best camera angles and lighting.
  • Film Production Roles:
    • Gaffer: The person in charge of lighting.
    • Grips: Technicians who set up equipment.
    • Production Manager: Manages schedules and budgets.
    • Director of Photography (DP): Decides how the film looks, including camera angles and lighting.

Unreal Engine Techniques:

  • Sequence-Based Linear Workflow:
    • One Level Sequence: Organises a scene as a single timeline, moving from start to finish without changes.
    • Multi-Camera Setup: Uses multiple cameras to capture different angles of the scene at once.
    • Single Camera Cuts Track: Films each shot separately and then combines them in editing.
  • Shot-Based Non-Linear Workflow:
    • Nested Level Sequences: Smaller parts of a project that can be worked on separately and then combined later.
    • Take System: Helps manage different versions of a shot, making it easier to find the best one.
    • Sub-Scene Tracks: Allows editing specific parts of a scene, like sound or animation, without changing everything.
  • Collaborative Workflow:
    • Sub-Levels: Sections of a project that different artists can work on independently.
    • Sub-Scene Tracks: Focus on specific elements, letting artists work on their parts without affecting others.
    • Visibility Tracks: Control what elements are visible during editing, allowing focus on certain aspects.

Workflow Comparison:

  • Linear Workflow: A simple process where everything is done in order, commonly used in traditional filmmaking.
  • Non-Linear Workflow: More flexible, allowing edits to be rearranged or done out of order. This is helpful for animation and VFX projects, enabling multiple artists to work together.

Both workflows are important for big projects, especially when teams need to work together on some parts of the project.

After exploring the differences between traditional film production and Unreal Engine, we started working with the Sequencer. I learned that the Sequencer is Unreal Engine’s Non-Linear Editing Tool, which includes:

  • Ground-up Shot Creation: This feature lets creators build individual shots from scratch in Unreal, giving full control over camera angles, lighting, and scene layout.
  • Pre-visualisation: A tool to create rough versions of scenes before full production. It helps visualise how the final scene will look and assists in planning.
  • Full Film Creation: Unreal Engine can be used to create entire films, from pre-production to final rendering, providing a virtual environment for production.
  • Game Cinematic Creation: The Sequencer is also used to create cinematic sequences for games, helping to develop narrative-driven cutscenes or trailers with high-quality visuals.

This versatility makes Unreal Engine valuable for both the film and game industries.

I learned that in traditional film, the narrative is usually structured into sequences, often following a three-act format. In contrast, Unreal Engine organises the story using a Level Sequence, which builds the entire narrative using Nested Level Sequences.

We also covered two new terms: Possessable and Spawnable Actors. Possessable actors already exist in the level, and the Sequencer simply takes control of (possesses) them, so they're always present in the scene. Spawnable actors, by contrast, are owned by the Level Sequence itself and are spawned into the scene only when the sequence plays, which guarantees they're available whenever the sequence is used.

Afterward, we worked on a sample project called DMX Previs Sample. In this project, we learned how to create new cameras in the scene and animate their movements. This experience helped me understand the Sequencer better and how to add cameras and other objects to keyframe and animate them.

We moved on to create Spawnable Actors by adding an actor to the Sequencer. To do this, I right-clicked on the object I wanted to convert and selected Create Spawnable. This process ensures that the object is always accessible in the Sequencer when we need to render the scene.
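
A short Unreal Python sketch of the two binding types, assuming a Level Sequence at a placeholder path and one actor selected in the level:

    import unreal

    sequence = unreal.EditorAssetLibrary.load_asset('/Game/Cinematics/LS_Previs')
    actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]

    # Possessable: the sequence takes control of an actor that already
    # exists in the level.
    possessable = sequence.add_possessable(actor)

    # Spawnable: the sequence owns a copy of the actor and spawns it
    # whenever the sequence plays, so it is always available at render time.
    spawnable = sequence.add_spawnable_from_instance(actor)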

We created a Level Sequence and opened the Sequencer to add a camera to the DMX Previs Sample scene. After adding the camera, I keyframed its focus property from frame 1 to frame 100 to create a simple animation.

We concluded the lecture by experimenting with camera settings and movements to develop different camera animations. I added shots using the camera cut feature, which helped me enhance my understanding of cameras in Unreal Engine while learning to use the Sequencer effectively.