The opportunities emergent technologies present for AI animation_week7

Let’s go back to the 2020 NVIDIA launch for a moment: the lifelike announcement was convincing enough to pass as real. The emergent technology behind it was Omniverse, which was used to produce the simulated announcement. The new technology focuses on the animation side; the other parts, such as modelling, were handled much as before. Thousands of photographs were taken from different directions with a large array of cameras to build the model of the speaker, and, as in the traditional workflow, Maya and ZBrush were used to craft the visual character. Then came the technical climax: building on its original audio-driven animation technology, NVIDIA adopted a new face video-to-video technique. In simple terms, the technology maps captured real video onto the virtual model, and the realistic real-time textures reduce the uncanny valley effect. The consequent problem is that the light source in the mapped footage is fixed, so the highlights do not move correctly with the camera when the view is rotated in 3D software. NVIDIA addressed this with deep learning: instead of calculating light changes through a physical renderer, multiple videos were shot in advance so the computer could learn the distribution of highlights at different angles. Although this approach cannot compute lighting in real time, for global lighting, results that look realistic are ultimately what we are aiming for. A similar method was used to capture body motion. The speaker was asked to perform several times, and videos of his previous presentations were used to learn his movement habits. Once the computer had learned them, the speaker’s voice could be captured and matching actions selected: each syllable corresponds to a number of different actions, and after computation, different actions are chosen to fit different scenes.
Once NVIDIA has accumulated enough character action models, the film, television and gaming industries will be able to directly use actions that match a character’s personality. This workflow will significantly reduce product development cycles.

There are, of course, limitations to such a technology: there are scenarios where humans use a particular language of movement to express emotion, which the technology currently cannot reproduce. The light mapping learned by deep learning is also very limited; if there are other special light sources in the scene, the computer will produce incorrect results, because the result is not obtained through a real lighting calculation. But I believe the cutting-edge technologies of the future will better help artists create their work.

Genre Analysis_week6

I will start my analysis with two relatively simple films in the war genre. The first is Three Kingdoms: Resurrection of the Dragon (2008), released in China, and the other is Flags of Our Fathers (2006), released in America. Both films depict a fight between two powers.

The first film follows the legendary life of Zhao Yun, a famous general of Shu Han during the Three Kingdoms period in ancient China. In his early sixties, Zhao Yun returns to the battlefield to help Shu recover the Central Plains and unify the country, only to be surrounded by the Wei army on Mount Fengming and face the last battle of his life. Facing problems both internal and external, even Zhao Yun’s resourcefulness could not save him from defeat by Wei.

Flags of Our Fathers depicts the background of the acclaimed photograph ‘Raising the Flag on Iwo Jima’, which represents the Battle of Iwo Jima. It tells the story of the 5th U.S. Marine Division’s attempt to capture the heights of Iwo Jima in 1945, which stalled until the fifth day, when the Americans, after suffering heavy casualties, forced the Japanese to retreat to the island’s caves. The whole process met fierce Japanese resistance, with heavy losses on both sides. Finally, after a month of fighting to the death, the American flag was planted at the highest point on the island.

Mise-en-scene_week5

I chose the landmark Chinese science-fiction film The Wandering Earth to analyse its images.

The Wandering Earth (2019)

The film includes a close-up depicting a wounded grandfather sacrificing himself so as not to be a burden to his grandson and granddaughter.

The setting of the scene is an abandoned skyscraper. In the future, due to extreme climate change, most existing buildings are covered in snow and ice, and the harsh climate has forced humans to wear sealed protective suits and carry oxygen. The most important prop in this image is the oxygen mask, which has been removed. The protagonist could simply have waited for his oxygen to run out and died quietly; instead, he chose to remove his oxygen mask and breathe the air of his homeland one last time. This prop helps convey the character’s inner life and makes the audience understand his personality.

As for the costume, the spacesuit-like design of the protective suit indirectly conveys the harsh environment. At the same time, the hardened exoskeleton assist system reflects a certain level of technology.

The Wandering Earth (2019)

This is a moving shot from close-up to extreme close-up, and it shows the character’s facial expressions very well. The rich facial detail conveys his memories of the good old days and his worry for, and blessing of, his grandchildren. His body language shows his satisfaction at inhaling the air of his homeland, and the action of turning back expresses his attachment to the children.

The shot uses low-key lighting created by a backlight; the contrast in brightness between the background and the subject outlines a clear silhouette. Blue tones and low saturation set an icy atmosphere, although I think adding warm colours when the characters feel relieved, instead of keeping a constantly cold palette, would make the whole picture more layered. In addition, the figure starts in the very centre of the frame and then shifts to the left, leaving a blank area on the right. This composition captures the audience’s attention and leads the viewer’s eye into the next frame.

Learning journal Week_7_NUKE

What I have done this week:

After last week’s study, I have a deeper understanding of the properties of colour and of specific nodes in Nuke. I am quite sensitive to colour because I have some experience in photography, and both Photoshop and DaVinci Resolve are colour-grading tools I use regularly. Fortunately, DaVinci and Nuke are both node-based, so many functional nodes are similar, as the picture I finished this week shows.

I prefer letting data flow between nodes to a layer-based software structure, so I enjoy the greater freedom to adjust parameters in Nuke. I also tried the nodes I learned last week, as the picture below shows.

Especially the Unpremult and Premult nodes: I was never quite sure of the subtle difference between them before. By understanding the principle, I now have a better understanding of the mechanism by which the RGB and alpha channels operate within the software.
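The difference can be sketched as simple per-pixel arithmetic. This is a minimal plain-Python sketch of what the two nodes do conceptually, not Nuke’s actual implementation; the function names are my own:

```python
# Premult scales RGB by alpha; Unpremult reverses it. Colour
# corrections are normally applied to unpremultiplied values so that
# semi-transparent edges are not darkened twice.

def premult(r, g, b, a):
    """Multiply RGB by alpha (what the Premult node does)."""
    return (r * a, g * a, b * a, a)

def unpremult(r, g, b, a):
    """Divide RGB by alpha (Unpremult node); zero alpha stays black."""
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

# A pure-red pixel at 50% alpha, stored "straight" (unpremultiplied):
straight = (1.0, 0.0, 0.0, 0.5)
stored = premult(*straight)      # (0.5, 0.0, 0.0, 0.5)
recovered = unpremult(*stored)   # back to (1.0, 0.0, 0.0, 0.5)
```

Round-tripping through the two functions returns the original values, which is why an Unpremult/grade/Premult sandwich is safe.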

Learning journal Week_7

In week 7 I tried to make a new human model based on DAZ 3D. I chose a base body model from DAZ 3D and started to edit the head in ZBrush. The first step was to add detail to the ears, eyes, mouth, nose and so on; as the subdivision level of the object increases, appropriate detail needs to be added to the head. Then I used XYZ human textures to paint the different texture maps in Mari.

I thought my PC, which has more than 200 GB of RAM, would be enough for a high subdivision level in ZBrush. However, when I imported the displacement onto the whole body at subdivision level 7, ZBrush reported insufficient RAM for processing, even though only around 50 GB (25%) was in use. I had to separate the head and the body to work around the problem, and I had to take care with the texture seams between the two objects. There were some problems at the boundaries of the different UDIM textures, as the picture below shows: there is an obvious edge dividing the model.
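ZBrush’s internals aside, the reason high subdivision levels blow up so fast is geometric growth: each level quadruples the face count. A rough sketch in plain Python, using a hypothetical base-mesh face count of my own choosing:

```python
# Each ZBrush-style Catmull-Clark subdivision splits every quad into
# four, so faces grow by 4x per level.

def faces_at_level(base_faces, level):
    """Face count after subdividing a base mesh to `level` (level 1 = base)."""
    return base_faces * 4 ** (level - 1)

base = 10_000  # hypothetical base-mesh face count
for lvl in range(1, 8):
    print(f"level {lvl}: {faces_at_level(base, lvl):,} faces")
# A 10k base mesh at level 7 is already ~41 million faces, before any
# per-vertex data like displacement is stored on top.
```

This is why splitting the model into head and body helps: each part subdivides from a much smaller base.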

Through layers and morph targets, I fixed the main problems. After exporting the displacement, it also needed to be mixed carefully in Photoshop, and I got the result shown in the picture below.

For the animation part, neither building the controllers nor skinning is easy. After learning the principle and the basic steps of making blend shapes, I used Advanced Skeleton to create the joints and controllers and painted the skin weights with ngSkinTools.

Learning journal Week_6

What I have done this week:

Recently I joined a short-video team working on someone’s graduation project. After frequent communication, I have a deeper understanding of how important the workflow, or standard pipeline, is. Someone handed me only a .mb file, but when I tried to load the scene, the system notified me that I was missing some reference and colour-management files. I also found that not everyone knows the whole workflow, or even what the next step needs, so they cannot export the appropriate file format for the next stage of the process.

As for what I learned this week, the animation module of Maya has always been my weak point: not only the skills of joints, skins and controllers, but also the principles of animation. Some essential tiny human movements are hard to notice, but these small shakes and forward swings make the animation look vivid and realistic.

Cinematic product analysis_week4

I’d like to share one of my favourite movies, Iron Man, which is part of the Marvel Cinematic Universe. I was captivated by the ups and downs of the film’s plot and, at the same time, impressed by Iron Man’s personality. So I will begin by analysing the narrative structure.

Iron Man has a successful narrative structure. At the beginning of the movie, the director shows us an arrogant billionaire, Tony Stark; through an arms deal, the protagonist’s careless characterisation is quickly established. The storyline then enters a long rising action that builds conflict. Although Tony survives the terrorist attack, he is disoriented for a while; the director creates a conflict within the protagonist and also depicts the contradictions of Stark’s weapons industry. With the help of Dr. Yinsen, he eventually becomes Iron Man, which logically brings in the ultimate antagonist. The film then reveals all its secrets at the climax: the fight with Obadiah Stane, a shareholder of Stark Industries bent on replacing Tony Stark as its chairman. The movie ends at a press conference the next day, where Tony announces, “I am Iron Man.” This short resolution successfully leads the audience to imagine the future, and so the Marvel Cinematic Universe officially opens. As I said at the beginning, the film’s plot is tightly paced and has a sense of rhythm.

Then I will talk about some details of the shot composition in two strong reference frames, such as the pictures below:

Picture1, from Marvel Studios
Picture2, from Marvel Studios

Both views organise the space well, and the positions of the main subjects follow the rule of thirds. In Picture 1, the appropriate space on the left of the frame extends the protagonist’s line of sight and at the same time strengthens the audience’s focal area. In Picture 2, the characters’ positions reflect their relationship and reinforce the plot: Tony, who needs help, is in the bottom-right corner of the screen, while the assistant in the top-left corner balances the frame.

Learning journal Week_5_NUKE

What I have done this week:

This week, I was happy to learn some skills of cinematography. Different shot perspectives express the creator’s emotion, and the director can make the subject look confident or weak by choosing different angles. In addition, an abundant choice of shots gives the creator more ways to show different levels of detail about the subject. Besides shot principles, light attributes such as the brightness, hardness and softness of the scene also affect the cinematic image.

Then I tried to compose a short video in Nuke, as the picture shows. I made a simple random rotation motion in Houdini and rendered a sequence with a z-depth pass and alpha. In Nuke, I used the ZDefocus node to express the z-axis in another dimension. After that, I used the ColorCorrect node to match the background picture. Finally, I added some slight motion blur to make the shot more realistic.

After finishing all the adjustments, I used the Merge node to combine the two groups and exported a video with the Write node.
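The Merge node’s default “over” operation is simple per-pixel arithmetic on premultiplied inputs. A minimal plain-Python sketch, not Nuke’s actual code, with function names of my own:

```python
# "A over B" for premultiplied RGBA pixels: the foreground is added to
# the background scaled by how transparent the foreground is.

def over(a_pix, b_pix):
    """Composite premultiplied pixel A over pixel B."""
    ar, ag, ab, aa = a_pix
    br, bg, bb, ba = b_pix
    k = 1.0 - aa  # fraction of the background that shows through
    return (ar + br * k, ag + bg * k, ab + bb * k, aa + ba * k)

# Half-transparent red foreground over an opaque green background:
fg = (0.5, 0.0, 0.0, 0.5)   # premultiplied red at 50% alpha
bg = (0.0, 1.0, 0.0, 1.0)
print(over(fg, bg))          # (0.5, 0.5, 0.0, 1.0)
```

This also explains why the CG element should arrive at the Merge premultiplied: the formula assumes the foreground RGB is already scaled by its alpha.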

Learning journal Week_4

What I have done this week:

I have finished a complex wooden model called a paiwei, made to commemorate my late grandfather. I began with an alpha texture in ZBrush, and, as I predicted, the process was really challenging and fun.

In the beginning, I planned to sculpt the tiny details into a single simple cube; however, the result was not good, because it looked heavy, as the picture below shows.

So I changed my method: I divided the model into three layers. By adjusting their positions and scales, I made some holes that make the model look more hollow.

Then I used the polymesh tools to reduce the polygon count. Once the face count was limited to under 40,000, I exported the low-level model to Maya and prepared to finish the UV map. I want to bake the complex detail onto the low-level model, so the UVs are necessary. When all these steps were done, the render in Maya looked like the picture below.

Finally, I used Substance Painter and Photoshop to paint the texture. Because the UV map was auto-created by Maya, it was easier to paint directly on the 3D model.