I created a total of 10 different plants and replicated them randomly with some simple scaling and rotation transformations. In Houdini, I first created some blocks as stand-ins for the city model and then scattered random points on the ground. I then searched through the point cloud and deleted any point that found a building block within 0.5 m of it.
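The culling step can be sketched in plain Python. Inside Houdini this would normally be a point-cloud lookup in a wrangle; here the scatter points and block centres are made-up 2D coordinates, just to illustrate the distance test:

```python
import math

def cull_points(points, blocks, radius=0.5):
    """Keep only scatter points with no building block within `radius` metres."""
    kept = []
    for px, py in points:
        near = any(math.hypot(px - bx, py - by) < radius for bx, by in blocks)
        if not near:
            kept.append((px, py))
    return kept

# Hypothetical ground points and block centres (2D for simplicity).
points = [(0.0, 0.0), (1.0, 1.0), (3.0, 3.0)]
blocks = [(0.2, 0.1)]               # one block sits near the first point
print(cull_points(points, blocks))  # the point at the origin is removed
```

In Houdini the same idea applies per point, with the search done against the block geometry rather than a Python list.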
This week I learned some fundamentals of green screen, including different ways of selecting specific areas. We covered luminance, saturation, and hue; the idea behind the colour-space transfer is similar to the Lab colour mode in Photoshop. In addition, we learned some simple techniques for processing the different channels with basic arithmetic to obtain a specific area.
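The channel-arithmetic idea can be sketched with the classic green-screen difference key: the key is driven by how much green exceeds the other two channels. This is only one of many possible channel operations, and the pixel values below are hypothetical:

```python
def green_key(r, g, b):
    """Crude difference key: alpha from how much green exceeds the other channels."""
    return max(0.0, min(1.0, g - max(r, b)))

# Hypothetical pixels (0-1 floats): a green-screen pixel vs. a skin tone.
print(green_key(0.1, 0.9, 0.2))  # high value -> screen area, keyed out
print(green_key(0.8, 0.6, 0.5))  # 0.0 -> foreground, fully kept
```

A real keyer adds softness controls and spill suppression on top, but the core is exactly this kind of per-channel arithmetic.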
This week we were asked to roto the machine sequence. I tried both ideas to get a correct alpha.
The first method is easier: hold a single fixed frame and project it onto a card placed at the correct position, then render through the tracking camera to get a proper result. The second is a little more complex; I used two project nodes to finish it. Both approaches were covered two weeks ago, so I won't explain them in detail here and will just post a screenshot.
This week we learned how to combine the different layers of a render. I have used the V-Ray, Arnold, and Mantra renderers before, so mixing the different AOVs is not difficult for me; once I understand the principle of the render, each step becomes clearer.
Besides that, Mr. Gonzalo showed us the uses of utility passes such as Position, Normal, Depth, and so on. Although the names of the utility passes differ between renderers, the main idea is similar: we can use them to relight a shot or create lens effects.
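The core principle of mixing AOVs is that the light passes sum back to the beauty. A minimal sketch, with hypothetical per-pixel RGB values (pass names like these vary by renderer, as noted above):

```python
def rebuild_beauty(aovs):
    """Recombine light AOVs additively into the beauty pass, channel by channel."""
    return [sum(channel) for channel in zip(*aovs.values())]

# Hypothetical RGB values for one pixel across three common light AOVs.
aovs = {
    "diffuse":  [0.25, 0.25, 0.25],
    "specular": [0.125, 0.125, 0.0],
    "sss":      [0.125, 0.0, 0.0],
}
print(rebuild_beauty(aovs))  # [0.5, 0.375, 0.25]
```

In Nuke this is just a chain of plus/merge operations on the AOV layers; the advantage of the breakdown is that each pass can be graded on its own before the sum.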
This week we learned more examples of using the project node. In a static building scene, projection is a really fast way to patch or add elements to the images. We mainly learned two interesting ideas for achieving impressive results. First, we can use a projection camera to map the information of a single frame into world-space coordinates in the Nuke 3D view. We can then do whatever we want on that single frame and use the tracking camera to render the whole sequence again, so the patch elements automatically follow the camera move. However, there is one issue we have to keep in mind: because this approach holds a single frame, the patch area uses only that one frame's information to cover the whole sequence.
The second idea partly solves this problem. We use the tracking camera to map the whole sequence into world coordinates. What happens then? We can collect and lock down all the information of a given object, so that its colour information stays in a fixed position in the Nuke 3D viewer. We can then pick a good frame to render a cached image, change anything we like at the same time, and finally render the final images through the tracking camera.
This week I rendered all the elements with the Arnold renderer. I used three rectangle lights to make the shadow of the old machine match the raw sequence. One more important thing is colour space: when the render was finished, the default colour space was sRGB, so we have to change the setting in Nuke to match Arnold.
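The reason the colour-space setting matters is that sRGB and linear values differ a lot for the same pixel. A sketch of the standard sRGB transfer functions (this is the generic IEC sRGB formula, not the specific Nuke or Arnold configuration):

```python
def srgb_to_linear(c):
    """Standard sRGB decoding, per channel in the 0-1 range."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Standard sRGB encoding, the inverse of the above."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid = srgb_to_linear(0.5)
print(round(mid, 4))                  # ~0.214: sRGB mid-grey is much darker in linear
print(round(linear_to_srgb(mid), 4))  # 0.5: the round trip recovers the original
```

If the render is interpreted with the wrong transfer function, mid-tones and shadows shift visibly, which is why the read/write settings in Nuke have to agree with what Arnold produced.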
I finished the textures this week. After updating Substance Painter to the newest version, some things have changed. In particular, for a model with several materials, it is a little weird to split them into different parts to paint. There are also some problems with baking the AO and normal textures. Sometimes I don't want components to affect each other's AO; I only want them to affect themselves. Even though I use different colours to control the areas, the result is not perfect and it takes a lot of time to experiment. So although Substance Painter is an easier way to finish textures, the workflow is not clear and the software does not support Arnold materials well; I have to transfer them to standard Maya materials such as Lambert. The utility textures are really helpful for smart materials, which saves a lot of time, and I hope Adobe will improve the workflow in the future. Another problem is applying the textures in Maya: I have to import every texture I finished in Substance Painter, and when the number of objects is large, recreating the same result in Maya is maddening work.
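The reapplication chore is usually scripted. In practice one would build the file nodes with `maya.cmds`, but the matching logic itself can be sketched in plain Python; the `<material>_<channel>.<ext>` naming convention assumed here is just one common Substance Painter export pattern, not necessarily what any given project uses:

```python
def group_textures(filenames):
    """Group exported texture files by material, assuming a
    '<material>_<channel>.<ext>' naming convention (an assumption)."""
    groups = {}
    for name in filenames:
        stem = name.rsplit(".", 1)[0]         # drop the extension
        material, _, channel = stem.rpartition("_")
        groups.setdefault(material, {})[channel] = name
    return groups

# Hypothetical Substance Painter export names.
files = ["body_BaseColor.png", "body_Roughness.png", "glass_BaseColor.png"]
for material, channels in group_textures(files).items():
    print(material, sorted(channels))
```

With the files grouped per material, a loop over the result can create and connect one shader per material instead of assigning each texture by hand.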
I exported the tracking camera information from Nuke to the Maya scene. The first test was not good: I noticed that the scene could not match the models, and I found the tracker points were not good enough. After I deleted some useless points, it worked well. Then I adjusted the position of the model so that it sits at the correct angle and looks right.
In week 4 we studied deeper functions of 3D camera tracking. The first interesting new method for removing specific patches is the project node. I froze a certain frame to roto paint out some unwanted patches, then froze the same frame again to hold the roto paint result.
The problem now is that the patches cannot follow the camera move. So I project the image onto a card that matches the position given by the tracked point cloud. I also need to freeze the same frame as before; otherwise the image will shift on the card every frame.
Then I render the scene again through the original camera, and the patch follows the camera move. In effect, I extracted an image, fixed it to a wall, and rendered again with the original moving camera.
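The geometry behind this trick can be sketched with a minimal pinhole model: un-project a pixel from the held frame onto the wall plane, then re-project that 3D point through the camera after it has moved. Nuke does all of this inside the project and render nodes; every number below is made up for illustration:

```python
def unproject(px, py, focal, wall_z):
    """Cast a ray from a camera at the origin through pixel (px, py) on the
    image plane at z = focal, and intersect it with the wall plane z = wall_z."""
    t = wall_z / focal
    return (px * t, py * t, wall_z)

def project(point, cam_pos, focal):
    """Re-project a 3D point through a translated camera (same orientation)."""
    rx, ry, rz = (point[i] - cam_pos[i] for i in range(3))
    return (focal * rx / rz, focal * ry / rz)

# A patch pixel from the held frame, stuck onto a wall 10 units away.
p3d = unproject(1.0, 0.5, focal=2.0, wall_z=10.0)
print(p3d)  # (5.0, 2.5, 10.0)

# The camera slides sideways; the patch stays glued to the wall in 3D,
# so its new screen position is consistent with the camera move.
print(project(p3d, cam_pos=(1.0, 0.0, 0.0), focal=2.0))  # (0.8, 0.5)
```

Because the patch lives at a fixed 3D position, re-rendering through the moving camera reproduces the correct parallax automatically.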
But this method has a major disadvantage: when the first node froze the frame, it also froze the shadows. It cannot be used when there are large variations in light and shade, so I learned another method.
We use the project node to replace the frame hold node. Stabilising the picture with little distortion helps us finish the roto painting.
This week we learned some new things about the 3D tracker, which is similar to 3DEqualizer.
The main workflow and principle are the same as in other tracking software; however, when I want to redistort the image, the edges of the image show some errors, as in the image below.
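One reason such errors concentrate at the edges can be seen from a one-parameter radial lens model (a simplified sketch, not necessarily the model this tracker uses): the displacement grows with the square of the distance from the image centre, so the border pixels move far more than the central ones and are the first to expose any mismatch between the undistort and redistort steps.

```python
def redistort(x, y, k1):
    """Apply a simple one-parameter radial distortion to a normalised
    image coordinate (centre at the origin)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return (x * scale, y * scale)

# With barrel distortion (k1 < 0), a point near the centre barely moves...
print(tuple(round(v, 4) for v in redistort(0.1, 0.0, k1=-0.2)))  # (0.0998, 0.0)
# ...while a corner point shifts dramatically, which is where errors appear.
print(tuple(round(v, 4) for v in redistort(1.0, 1.0, k1=-0.2)))  # (0.6, 0.6)
```

This also explains why the redistorted image needs overscan: the corners are pulled in (or pushed out) past the original frame, and without extra border pixels the edges have no data to sample.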