A Dissertation Report on Compositing
This paper mainly introduces the applications of compositing as the field has developed. The author first introduces early compositing techniques, with the focus on the breakthrough achievements of the optical era. It then covers the development of compositing technology in different fields and the contributions of some pioneers. The paper closes with case studies discussing the technology's application in Hollywood blockbusters.
INTRODUCTION
A massive spacecraft hovers over New York, throwing the entire city into shadow. A pair of lizards, sitting in the middle of a swamp, discusses their favourite beer. Dinosaurs, long extinct, live and breathe again, and the Titanic, submerged for decades, sails once more.
Usually, the credit for all these fantastic visuals goes to "CGI" (computer generated imagery) or "computer graphics". Computer graphics techniques, in conjunction with a myriad of other disciplines, are commonly used for the creation of visual effects in feature films. Digital compositing is an essential part of visual effects, which are everywhere in the entertainment industry today: in feature films, television commercials, and many TV shows, and its use is growing. Even a non-effects film will have visual effects. Whatever the genre of the movie, there will always be something that needs to be added to or removed from the picture to tell the story. That is the short description of what visual effects are all about: adding elements to a picture that are not there, or removing something that you don't want to be there. Digital compositing plays a key role in all visual effects.
It is the digital compositor who takes these disparate elements, no matter how they were created, and blends them together artistically into a seamless, photorealistic whole. The digital compositor's mission is to make them appear as if they were all shot together at the same time, under the same lights with the same camera, then give the shots a final artistic polish with superb color correction.
I mentioned earlier that digital compositing is growing. There are two primary reasons for this. First is the steady increase in the use of CGI for visual effects, and every CGI element needs to be composited. The second reason is that compositing software and hardware technologies are also advancing on their own track, separate from CGI. This means that visual effects shots can be done faster, more cost-effectively, and with higher quality. There has also been a general rise in film-makers' awareness of what can be done with digital compositing, which makes them more sophisticated users.
STRUCTURE
In the Introduction phase I will deal with the history and introduction of compositing, covering older compositing techniques such as optical compositing, in-camera effects, background projection, hanging miniatures, and so on. Beyond that, I will focus on how ground-breaking effects were created during the optical era, and on the advantages and disadvantages of optical compositing.
In the Information hub phase I will deal with the core concepts of live-action and multipass compositing, with a brief introduction to stereoscopic compositing. Under live-action compositing I will discuss the basics and core concepts such as rotoscopy, retouching, and motion tracking, with particular emphasis on keying. In the multipass compositing section I will focus on the core concept of passes, the different types of passes, and their uses. Finally, I will give a brief introduction to stereoscopic compositing, an emerging technology in the world of computer graphics.
In the Incredible masters phase I will discuss the contributions of the pioneers who developed this sector to its current state, and also give a brief introduction to the new technologies being used and developed.
In the Case study phase, which is also the last segment of my dissertation proposal, I will discuss the ground-breaking effects techniques used in Hollywood blockbusters such as Terminator, The Golden Compass, and Finding Nemo.
History of compositing
In the summer of 1857, the Swedish-born photographer Oscar G. Rejlander set out to create what would prove to be the most technically complicated photograph that had ever been produced. Working at his studio in England, Rejlander selectively combined the imagery from 32 different glass negatives to produce a single, massive print. It is one of the earliest examples of what came to be known as a "combination print."
Motion picture photography came about in the late 1800s, and the desire to be able to continue this sort of image combination drove the development of specialized hardware to expedite the process. Optical printers were built that could selectively combine multiple pieces of film, and optical compositing was born.
Introduction to Optical compositing
Not to be confused with laboratory effects done on an optical printer, these use optical attachments which go in front of the lens. The intention of such apparatus is to modify the light path between subject and lens. Many such accessories are available for hire or purchase, but frequently they will be constructed for a particular shot.
Techniques of Optical compositing
Glass Shot
Otherwise known as the glass painting, Hall Process or (erroneously) glass matte or matte painting, the glass shot takes the mask painted on a sheet of glass to its logical conclusion. The next stage of complexity is to make these additions to the frame representational instead of purely graphic. For example, let's say that we have a wide shot of a farm with fields stretching off into the distance and require a silhouetted fence in the foreground. If the camera is focused on the distant hills then, with a sheet of glass positioned at the hyperfocal distance (the nearest point still in focus when focused on infinity), we can actually paint the piece of fence on to the glass. This is made possible by the two-dimensional quality of motion pictures. So long as nothing passes between the glass and the lens, and the glass is in focus, then an object painted to be the correct size for the scene when viewed through the lens will appear to be actually in that scene. Thus the silhouette of a fence painted on the glass will appear totally believable, even if a cowboy and his horse pass by in the scene beyond.
This minor change actually represents a fundamental leap in our effects capability, for now our mask has become a modification to the picture content itself rather than just an external decoration. However, once we have made this philosophical leap it is a small step to move on to creating photorealistic additions to the scene.
The next stage is to light the camera side of our glass and paint details into the image thereon. In the example of the fence we now paint in the texture of the wood and expose it as required to blend in with the scene.
Glass painting is a fundamental technique of VFX and can be applied to the latest digital equipment just as easily as it was to film prior to the First World War. Basically, if opaque paints are used (or are painted over an opaque base paint) what one is effectively doing is covering over detail in the real image with imaginary additions. This is a replacement technique and is the first of many in the VFX arsenal which permits falsification of real images.
Rotoscopy
Frequently, it comes to pass that a character or object that was not shot on bluescreen needs to be isolated for some reason, perhaps to composite something behind it or maybe give it a special color correction or other treatment. This situation requires the creation of a matte without the benefit of a bluescreen, so the matte must be rotoscoped, which means it is drawn by hand, frame by frame. This is a slow and labor-intensive solution, but is often the only solution. Even a bluescreen shot will sometimes require rotoscoping if it was not photographed well and a good matte cannot be extracted.
Virtually all compositing programs have some kind of rotoscoping capability, but some are more capable than others. There are also programs available that specialize in just rotoscoping. Each frame of the picture is put up on the monitor and the roto artist traces an outline around the character's outer edge. These outlines are then filled in with white to create the familiar white matte on a black background, like the example in Figure 1-12. Large visual effects studios will have a dedicated roto department, and being a roto artist is often an entry-level position for budding new digital compositors.
There has even been a recent trend to use rotoscoping rather than bluescreen shots for isolating characters for compositing in big-effects films. I say big-effects films because it is much more labor-intensive, and therefore expensive, to rotoscope a shot than to pull a bluescreen matte. The big creative advantage is that the director and cinematographer can shoot their scenes on the set and on location "naturally," rather than having to shoot a separate bluescreen shot with the talent isolated on a bluescreen insert stage. This allows the movie's creators to focus more on the story and cinematography rather than the special effects. But again, this is a very expensive approach.
Rotoscoping is the process of drawing a matte frame-by-frame over live action footage. Starting around the year 50 B.C. (Before Computers), the technique back then was to rear project a frame of film onto a sheet of frosted glass, then trace around the target object. The process got its name from the machine that was used to do the work, called a rotoscope. Things have improved somewhat since then, and today we use computers to draw shapes using the splines we saw in Chapter 5. The difference between drawing a single shape and rotoscoping is the addition of animation. Rotoscoping entails drawing a series of shapes that follow the target object through a sequence of frames.
Rotoscoping is extremely pervasive in the world of digital compositing and is used in many visual effects shots. It is also labor intensive because it can take a great deal of time to carefully draw moving shapes around a moving target frame by frame. It is often an entry-level position in the trade and many a digital compositor has started out as a roto artist. There are some artists who find rotoscoping rewarding and elect to become roto kings (or queens) in their own right. A talented roto artist is always a valued member of the visual effects team. In this chapter, we will see how rotoscoping works and develop an understanding of the entire process. We will see how the spline-based shapes are controlled frame-by-frame to create outlines that exactly match the edges of the target object, as well as how shapes can be grouped into hierarchies to improve productivity and the quality of the animation. The sections on interpolation and keyframing describe how to get the computer to do more of the work for you, and then finally the solutions to the classic problems of motion blur and semi-transparency are revealed.
ABOUT ROTOSCOPING
Today, rotoscoping means drawing an animated spline-based shape over a series of digitized film (or video) frames. The computer then renders the shape frame-by-frame as a black and white matte, which is used for compositing or to isolate the target object for some special treatment such as color correction.
The virtue of roto is that it can be used to create a matte for any arbitrary object on any arbitrary background. It does not need to be shot on a bluescreen. In fact, roto is the last line of defense for poorly shot bluescreens in which a good matte cannot be created with a keyer. Compositing a character that was shot on an "uncontrolled" background is illustrated beginning with Figure 6-4. The bonny lass was shot on location with the original background. A roto was drawn (Figure 6-5) and used to composite the woman over a completely new background (Figure 6-7). No bluescreen was required.
There are three main downsides to roto. First, it is labor intensive. It can take hours to roto a simple shot such as the one illustrated in Figure 6-4, even assuming it is a short shot. More complex rotos and longer shots can take days, even weeks. This is hard on both schedules and budgets. The second downside to roto is that it can be difficult to get a high quality, convincing matte with a stable outline. If the roto artist is not careful, the edges of the roto can wobble in and out in a most unnatural, eye-catching way. The third issue is that rotos do not capture the subtle edge and transparency nuances that a well-done bluescreen shot does using a fine digital keyer. If the target object has a lot of very fine edge detail like a frizzy head of hair, the task can be downright hopeless.
SPLINES
In Chapter 5, we first met the spline during the discussion of shapes. We saw how a spline was a series of curved lines connected by control points that could be used to adjust the curvature of those lines. We also used the metaphor of a piano wire to describe the stiffness and smooth curvature of the spline. Here we will take a closer look at those splines and how they are used to create outlines that can fit any curved surface. We will also push the piano wire metaphor to the breaking point. A spline is a mathematically generated line in which the shape is controlled by adjustable control points. While there are a variety of mathematical equations that have been devised that will draw slightly different kinds of splines, they all work in the same general way. Figure 6-8 reviews the key components of a spline that we saw in Chapter 5, which consisted of the control point, the resulting spline line, and the handles that are used to adjust its shape. In Figure 6-8, the slope of the spline at the control point is being adjusted by changing the slope of the handles from position 1 to position 2 to position 3. For clarity, each of the three spline slopes is shown in a different color.
The handles can also adjust a second attribute of the spline called tension, which is shown in Figure 6-9. As the handles are shortened from position 1 to 2 to 3, the "piano wire" loses stiffness and bends more sharply around the control point. A third attribute of a spline is the angle where the two line segments meet at the control point. The angle can be an exact 180 degrees, or flat, as shown in Figure 6-8 and Figure 6-9, which makes it a continuous line. However, a "break" in the line can be introduced like that in Figure 6-10, putting a kink in our piano wire. In addition to adjusting the slope, tension, and angle at each control point, the entire shape can be picked up and moved as a unit. It can be translated (moved, scaled, and rotated), taking all the control points with it. This is very useful if the target has moved in the frame, such as with a camera pan, but has not actually changed shape. Of course, in the real world it will have both moved and changed shape, so after the spline is translated to the new position, it will also have to be adjusted to the new shape.
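To make the mechanics concrete, here is a minimal sketch of evaluating one segment of such a spline. It uses Python with NumPy (an assumption, since the text names no particular tool) and the standard cubic Bezier formulation, in which the handle positions play the role of the slope and tension adjustments described above.

import numpy as np

def bezier_segment(p0, h0, h1, p1, steps=20):
    # p0, p1: the two control points; h0, h1: their handle positions.
    # Longer handles give a stiffer curve; shorter ones bend more sharply.
    t = np.linspace(0.0, 1.0, steps)[:, None]  # curve parameter, 0..1
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * h0
            + 3 * (1 - t) * t ** 2 * h1 + t ** 3 * p1)

# Translating the whole shape is just adding the same offset to every
# control point and handle, which is why following a camera pan is cheap.
points = bezier_segment(np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                        np.array([3.0, 2.0]), np.array([4.0, 0.0]))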
Now let's pull together all that we have learned about splines and how to adjust them to see how the process works over an actual picture. Our target will be the insufferable Mr. Tibbs, as shown in Figure 6-12, which provides a moving target that also changes shape frame-by-frame. Figure 6-13 shows the completed shape composed of splines with the many control points adjusted for slope, tension, and angle. The finished roto is shown in Figure 6-14.
One very important guideline when drawing a shape around a target object is to use as few control points as possible that will maintain the curvatures you need. This is illustrated by the shape used to roto the dapper hat in Figure 6-15, which uses an excessive number of control points. The additional points increase the amount of time it takes to create each keyframe because there are more points to adjust each frame. They also increase the chances of introducing chatter or wobble to the edges.
ARTICULATED ROTOS
Things can get messy when rotoscoping a complex moving object such as a person walking. Trying to encompass an entire character with crossing legs and swinging arms into a single shape like the one used for the cat in Figure 6-13 quickly becomes unmanageable. A better strategy is to break the roto into several separate shapes, which can then be moved and reshaped independently. Many compositing programs also allow these separate shapes to be linked into hierarchical groups where one shape is the "child" of another. When the parent shape is moved, the child shape moves with it. This creates a "skeleton" with moveable joints and segments rather like the target object. This is more efficient than dragging every single control point individually to redefine the outline of the target. When the roto is a collection of jointed shapes like this, it is referred to as an articulated roto.
Figure 6-17 through Figure 6-19 illustrates a classic hierarchical setup. The shirt and lantern are separate shapes. The left and right leg shapes are "children" of the shirt, so they move when the shirt is moved. The left and right feet are children of their respective legs. The light blue lines inside the shapes show the "skeleton" of the hierarchy.
To create frame 2 (Figure 6-18), the shirt was shifted a bit, which took both of the legs and feet with it. The leg shapes were then rotated at the knee to reposition them back over the legs, and then the individual control points were touched up to complete the fit. Similarly, each foot was rotated to its new position and the control points touched up. As a result, frame 2 was made in a fraction of the time it took to create frame 1. Frame 3 was similarly created from frame 2 by shifting and rotating the parent shape, followed by repositioning the child shapes, then touching up control points only where needed. This workflow essentially allows much of the work invested in the previous frame to be recycled into the next with just minor modifications.
There is a second, less obvious advantage to the hierarchical animation of shapes, and that is it results in a smoother and more realistic motion in the finished roto. If each and every control point is manually adjusted, small variations become unavoidable from frame to frame. After all, we are only human. When the animation is played at speed, the spline edges will invariably "wobulate" (wobble and fluctuate). By translating (moving) the entire shape as a unit, the spline edges have a much smoother and more uniform motion from frame to frame.
INTERPOLATION
Time to talk temporal. Temporal, of course, refers to time. Since rotos are a frame-by-frame animation, time and timing are very important. One of the breakthroughs that computers brought to rotoscoping, as we have seen, is the use of splines to define a shape. How infinitely finer to adjust a few control points to create a smooth line that contours perfectly around a curved edge, rather than to draw it by hand with a pencil or ink pen. The second, even bigger breakthrough is the ability of the computer to interpolate the shapes, where the shape is only defined on selected keyframes, and then the computer calculates the in-between (interpolated) shapes for you.
A neat example of keyframe interpolation is illustrated in Figure 6-20. For these five frames, only the first and last are keyframes, while the three in-between frames are interpolated by the computer. The computer compares the location of each control point in the two keyframes, then calculates a new position for them at each in-between frame so they will move smoothly from one keyframe to the next.
There are two very big advantages to this interpolation process. First, the number of keyframes that the artist must create is often less than half the total number of frames in the shot. This dramatically cuts down on the labor that is required for what is a very labor-intensive job. Second, and perhaps even more important, is that when the computer interpolates between two shapes, it does so smoothly. It has none of the jitters and wobbles that a clumsy humanoid would have introduced when repositioning control points on every frame. Bottom line, computer interpolation saves time and looks better. In fact, when rotoscoping a typical character it is normal to keyframe every other frame. The interpolated frames are then checked, and only an occasional control point touch-up is applied to the in-between frames as needed.
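The interpolation itself can be sketched in a few lines. This is a minimal illustration in Python with NumPy (assumed for this example); each control point travels in a straight line from its position in one keyframe to its position in the next, which is exactly why the computed in-betweens are free of human wobble. Real packages may use smoother temporal curves, but linear interpolation is the simplest correct form of the idea.

import numpy as np

def interpolate_shape(key_a, key_b, frame, frame_a, frame_b):
    # key_a, key_b: (N, 2) arrays of control points at the two keyframes.
    t = (frame - frame_a) / (frame_b - frame_a)  # 0.0 at key A, 1.0 at key B
    return (1.0 - t) * key_a + t * key_b

# Keyframes drawn on frames 1 and 5; frames 2-4 are computed, not hand-drawn.
key1 = np.array([[10.0, 20.0], [30.0, 25.0], [22.0, 40.0]])
key5 = np.array([[14.0, 22.0], [34.0, 27.0], [26.0, 44.0]])
frame3 = interpolate_shape(key1, key5, frame=3, frame_a=1, frame_b=5)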
KEYFRAMES
In the previous discussion about shape interpolation, the concept of the keyframe was introduced. There are many keyframing strategies one may use, and choosing the right one can save time and improve the quality of the finished roto. What follows is a description of various keyframe strategies with tips on how you might choose the right one for a given shot.
On 2's
A classic and oft used keyframe strategy is to keyframe on 2's, which means to make a keyframe at every other frame, that is, frames 1, 3, 5, 7, and so forth. The labor is cut in half and the computer smoothes the roto animation by interpolating nicely in between each keyframe. Of course, each interpolated frame has to be inspected and any off-target control points must be nudged into position. The type of target where keyframing on 2's works best would be something like the walking character shown in the example in Figure 6-21. The action is fairly regular, and there are constant shape changes, so frequent keyframes are required.
On shots where the action is regular but slower, it is often fruitful to try keyframing on 4's (1, 5, 9, 13, etc.), or even 8's (1, 9, 17, 25, etc.). The idea is to keep the keyframes on a binary number (on 2's, on 4's, on 8's, etc.) for the simple reason that it ensures you will always have room for a new keyframe exactly halfway between any two existing keyframes. If you keyframe on 3's (1, 4, 7, etc.) for example, and need to put a new keyframe between 1 and 4, the only choice is frame 2 or 3, neither of which is exactly halfway between them. If animating on 4's (1, 5, 9, etc.) and you need to put a new keyframe between 5 and 9, frame 7 is exactly halfway between them.
Figure 6-22 shows the sequence of operations for keyframing a shot on 2's in two passes by first setting keyframes on 4's, then in-betweening those on 2's. Pass 1 sets a keyframe at frames 1, 5, and 9, then on a second pass the keyframes are set for frames 3 and 7. The work invested in creating keyframes 1 and 5 is partially recovered when creating the keyframe at frame 3, plus frame 3 will be smoother and more natural because the control points will be very close to where they should be and only need to be moved a small amount.
Bifurcation
Another keyframing strategy is bifurcation, which simply means to fork or divide into two. The idea is to create a keyframe at the first and last frames of a shot, then go to the middle of the shot and create a keyframe halfway between them. You then go mid-way between the first keyframe and the middle keyframe and create a new keyframe there, then repeat that for the last frame and middle frame, and keep subdividing the shot by placing keyframes midway between the others until there are enough keyframes to keep the roto on target.
The situation where bifurcation makes sense is when the motion is regular and the object is not changing its shape very radically, such as the sequence in Figure 6-23. If a keyframe were first placed at frame 1 and frame 10, then the roto checked mid-way at frame 5 (or frame 6, since neither one is exactly mid-way), the roto would not be very far off. Touch up a few control points there, and then jump midway between frames 1 and 5 and check frame 3. Touch up the control points and jump to frame 8, which is (approximately) mid-way between the keyframes at frame 5 and frame 10. Figure 6-24 illustrates the pattern for bifurcation keyframing.
While you may end up with keyframes every couple of frames or so, bifurcation is more efficient than simply starting at frame 1 and keyframing on 2's, that is, assuming the target object is suitable for this approach. This is because the computer is interpolating the frames for you, which not only puts your shape's control points close to the target to begin with, but it also moves and pre-positions the control points for you in a way that the resulting animation will be smoother than if you tried to keyframe it yourself on 2's. This strategy efficiently "recycles" the work invested in each keyframe into the new in-between keyframe.
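The order in which bifurcation visits the frames can be expressed as a short routine. This sketch (Python, purely illustrative) yields the first and last frames, then repeatedly splits the remaining gaps at their midpoints, which is exactly the pattern of Figure 6-24.

def bifurcation_order(first, last):
    # Yield frame numbers in the order a roto artist would keyframe them.
    yield first
    yield last
    spans = [(first, last)]
    while spans:
        a, b = spans.pop(0)
        if b - a < 2:              # no whole frame left between a and b
            continue
        mid = (a + b) // 2         # midpoint, rounded down when uneven
        yield mid
        spans.append((a, mid))
        spans.append((mid, b))

print(list(bifurcation_order(1, 10)))   # [1, 10, 5, 3, 7, 2, 4, 6, 8, 9]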
Extremes
Very often the motion is smooth but not regular, such as the gyrating airplane in Figure 6-28, which is bobbing up and down as well as banking. In this situation, a good strategy is to keyframe on the extremes of the motion. To see why, consider the airplane path plotted in Figure 6-25. The large dots on the path represent the airplane's location at each frame of the shot. The change in spacing between the dots reflects the change in the speed of the airplane as it maneuvers. In Figure 6-26, keyframes were thoughtlessly placed at the first, middle, and last frames, represented by the large red dots. The small dots on the thin red line represent where the computer would have interpolated the rotos using those keyframes. As you can see, the interpolated frames are way off the true path of the airplane.
However, in Figure 6-27, keyframes were placed on the frames where the motion extremes occurred. Now the interpolated frames (small red dots) are much closer to the true path of the airplane. The closer the interpolation is to the target, the less work you have to do and the better the results. To find the extremes of a shot, play it in a viewer so you can scrub back and forth to make a list of the frames that contain the extremes. Those frames are then used as the keyframes on the first roto pass. The remainder of the shot is keyframed by using bifurcation.
Referring to a real motion sequence in Figure 6-28, the first and last frames are obviously going to be extremes so they go on our list of keyframes. While looking at the airplane's vertical motion, it appears to reach its vertical extreme on frame 3. By placing keyframes on frames 1, 3, and 10, we stand a good chance of getting a pretty close fit when we check the interpolation at frame 7 (see Figure 6-29). If the keyframe were placed at the midpoint on frame 5 or 6, instead of the motion extreme at frame 3, the roto would be way off when the computer interpolates it at frame 3.
Final Inspection
Regardless of the keyframe strategy chosen, when the roto is completed it is time for inspection and touch-up. The basic approach is to use the matte created by the roto to set up an "inspection" version of the shot that highlights any discrepancies in the roto, then go back in and touch up those frames. After the touch-up pass, one final inspection pass is made to confirm all is well.
Figure 6-30 through Figure 6-32 illustrates a typical inspection method. The roto in Figure 6-31 was used as a mask to composite a semi-transparent red layer over the film frame in Figure 6-32 to highlight any discrepancies in the roto. It shows that the roto falls short on the white bonnet at the top of the head and overshoots on the side of the face. The roto for this frame is then touched up and the inspection version is made again for one last inspection to confirm all the fixes and that there are no new problems. Using this red composite for inspection will probably not work well when rotoscoping a red fire engine in front of a brick building. Feel free to modify the process and invent other inspection setups based on the color content of your personal shots.
MOTION BLUR
One of the historical shortcomings of the roto process has been the lack of motion blur. A roto naturally produces clean sharp edges as in all the examples we have seen so far, but in the real world, moving objects have some degree of motion blur where their movement has smeared their image on the film or in the video. Figure 6-33 shows a rolling ball of yarn with heavy motion blur. The solution is an inner and outer spline that defines an inside edge that is 100% solid, and an outside edge that is 100% transparent as shown in the example in Figure 6-34. The roto program then renders the matte as 100% white from the inner spline graduating off to black at the outer spline. This produces a motion-blurred roto such as the one shown in Figure 6-35. Even if there is no apparent motion blur in the image, it is often beneficial to gently blur the rotos before using them in a composite to soften their edges a bit, especially in film work.
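One way to see how the inner/outer spline pair becomes a graduated matte is the sketch below (Python with NumPy and SciPy, assumed for illustration). It takes the two shapes already rasterized as boolean masks and ramps the matte from 100% solid at the inner edge down to 100% transparent at the outer edge.

import numpy as np
from scipy.ndimage import distance_transform_edt

def graded_matte(inner_mask, outer_mask):
    # inner_mask, outer_mask: boolean arrays, inner lying entirely inside outer.
    d_in = distance_transform_edt(~inner_mask)     # distance to the inner shape
    d_out = distance_transform_edt(outer_mask)     # distance to the outside world
    ramp = d_out / np.maximum(d_in + d_out, 1e-6)  # 1 at inner edge, 0 at outer
    return np.clip(np.where(inner_mask, 1.0, ramp), 0.0, 1.0)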
One problem that these inner and outer splines introduce, of course, is that they add a whole second set of spline control points to animate, which increases the labor of an already labor intensive process. However, when the target object is motion blurred, there is no choice but to introduce motion blur in the roto as well. A related issue is depth of field, where all or part of the target may be out of focus. The bonny lass in Figure 6-4, for example, actually has a shallow depth of field so her head and her near shoulder are in focus, but her far shoulder is noticeably out of focus. One virtue of the inner and outer spline technique is that edge softness can be introduced only and exactly where it is needed so the entire roto does not need to be blurred. This was done for her roto in Figure 6-5.
SEMI-TRANSPARENCY
Another difficult area for rotoscoping is a semi-transparent object. The main difficulty with semi-transparent objects is that their transparency is not uniform, as some areas are denser than others. The different levels of transparency in the target mean that a separate roto is required for each level. This creates two problems. The first is that some method must be devised for reliably identifying each level of transparency in the target so it may be rotoscoped individually, without omission or overlap with the other regions. Second, the roto for each level of transparency must be made unique from the others in order to be useful to the compositor. A good example of these issues is the lantern being carried by our greenscreen boy. A close-up is shown in Figure 6-36. When a matte is created using a high quality digital keyer (Figure 6-37), the variable transparency of the frosted glass becomes apparent. If this object needed to be rotoscoped to preserve its transparency, we would need to create many separate roto layers, each representing a different degree of transparency. This is usually done by making each roto a different brightness; a dark gray roto for the very transparent regions, medium brightness for the medium transparency, and a bright roto for the nearly solid transparency. While it is a hideous task, I have seen it done successfully.
Motion tracking and Stabilizing
MOTION TRACKING
One of the truly wondrous things that a computer can do with moving pictures is motion tracking. The computer is pointed to a spot in the picture and then is released to track that spot frame after frame for the length of the shot. This produces tracking data that can then be used to lock another image onto that same spot and move with it. The ability to do motion tracking is endlessly useful in digital compositing and you can be assured of getting to use it often. Motion tracking can be used to track a move, a rotate, a scale, or any combination of the three. It can even track four points to be used with a corner pin.
One frequent application of motion tracking is to track a mask over a moving target. Say you have created a mask for a target object that does not move, but there is a camera move. You can draw the mask around the target on frame 1, then motion track the shot to keep the mask following the target throughout the camera move.
This is much faster and is of higher quality than rotoscoping the thing. Wire and rig removal is another very big use for motion tracking. A clean piece of the background can be motion tracked to cover up wires or a rig. Another important application is monitor screen replacement, where the four corners of the screen are motion tracked and then that data is given to a corner pin transform (Figure 8-8) to lock it onto a moving monitor face.
We can see how motion tracking works with the corner pinning example in Figure 8-24, which shows three frames of a camera move on a parked car. The white crosshairs are the tracking markers that lock onto the tracking points, the four points in the image that the computer locks onto to collect the motion tracking data. The tracking points are carefully chosen to be easy for the computer to lock onto. After the data is collected from the tracking points for the length of the shot, it will be connected to the four control points of a corner pin operation so that it moves with the car.
At the start of the tracking process, the operator positions the tracking markers over the tracking points, and then the computer makes a copy of the pixels outlined by each tracking marker. On the next frame, the computer scans the image looking for groups of pixels that match the tracking points from the first frame. Finding the best match, it moves the tracking markers to the new location and then moves on to the next frame. Incredibly, it is able to find the closest fit to within a small fraction of a pixel.
The computer builds a frame-by-frame list of how much each tracking point has moved from its initial position in the first frame. This data is then used by a subsequent transform operation to move a second image in lock step with the original.
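The matching step at the heart of this process can be sketched in a few lines (Python with NumPy, assumed; production trackers add sub-pixel refinement and smarter search, so this pixel-accurate version is only the core idea). The patch of pixels under the tracking marker on the first frame is slid over a search window in the next frame, and the position with the smallest difference wins.

import numpy as np

def track_point(ref_patch, next_frame, cx, cy, search=16):
    # Find ref_patch in next_frame near (cx, cy); return the new center.
    ph, pw = ref_patch.shape
    best_err, best_xy = np.inf, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0 = cy + dy - ph // 2
            x0 = cx + dx - pw // 2
            cand = next_frame[y0:y0 + ph, x0:x0 + pw]
            if cand.shape != ref_patch.shape:
                continue                             # ran off the frame edge
            # Sum of squared differences: smaller means a better match.
            err = np.sum((cand.astype(float) - ref_patch) ** 2)
            if err < best_err:
                best_err, best_xy = err, (cx + dx, cy + dy)
    return best_xy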
Figure 8-25 shows the fruits of our motion tracking labor. Those really cool flames have been motion tracked to the side of the car door with a corner pin.
While all this sounds good, things can frequently go wrong. By frequently, I mean most of the time. The example here was carefully chosen to provide a happy tracking story. In a real shot you may not have good tracking points: They may leave the frame halfway through the shot; someone might walk in front of them; they might change their angle to the camera so much that the computer gets lost; the lens may distort the image so that the tracking data you get is all bent and wonky; or the film might be grainy, causing the tracking data to jitter. This is not meant to discourage you, but to brace you for realistic expectations and suggest things to look out for when setting up your own motion tracking.
STABILIZING A SHOT
Another great and important thing that can be done with motion tracking data is to stabilize a shot. But, there is a downside that you should know about. Figure 8-26 shows a sequence of four frames with a "tequila" camera move (wobbly). The first step in the process is to motion track the shot, indicated by the white tracking marker locked onto the palm tree in each frame. Note how the tracking marker is moving from frame-to-frame along with the palm tree.
The motion tracking data collected from Figure 8-26 is now fed to a translation transform (OK, a move operation), but this time, instead of tracking something onto the moving target the data is inverted to pull the movement out of each frame. For example, if the motion tracking data for frame 3 shifted to the right by 10 pixels, then to stabilize it, we would shift that frame to the left by 10 pixels to cancel out the motion. The tree is now rock steady in frame. Slick. However, shifting the picture left and right and up and down on each frame to re-center the palm tree has introduced unpleasant black edges to some of the frames, as shown in Figure 8-27. This is the downside that you should know about.
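A minimal sketch of that inversion (Python with NumPy, assumed): each frame is shifted by the negative of its tracked offset. Here np.roll stands in for a real sub-pixel translate; it wraps pixels around the frame instead of leaving the black edges discussed next, which a production move operation would produce.

import numpy as np

def stabilize(frames, offsets):
    # frames: list of (H, W) or (H, W, C) arrays.
    # offsets: per-frame (dx, dy) tracking deltas relative to frame 1.
    out = []
    for frame, (dx, dy) in zip(frames, offsets):
        # Cancel the camera move by shifting opposite to the tracked motion.
        out.append(np.roll(frame, shift=(-dy, -dx), axis=(0, 1)))
    return out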
We now have a stabilized shot with black edges that wander in and out of frame.
Now what? The fix for this is to zoom in on the whole shot just enough to push all the black edges out of every frame. This needs to be done with great care, since this zoom operation also softens the image.
The first requirement is to find the correct center for the zoom. If the center of zoom is simply left at the center of the frame, it will push the black edges out equally all around. However, the odds are that you will have more black edges on one side than another, so shifting the center of zoom to the optimal location will permit you to push out the black edges with the least amount of zoom. The second requirement is to find the smallest possible zoom factor. The more you zoom into the shot, the softer it will be. Unfortunately, some softening is unavoidable. Better warn the client.
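Finding that smallest zoom is simple arithmetic once the worst-case black border on each side of the stabilized shot is known. A sketch under that assumption (Python; the function and its parameters are illustrative, and the zoom is assumed to be centered optimally so each axis can be treated independently):

def min_zoom(width, height, left, right, top, bottom):
    # left/right/top/bottom: worst-case black intrusion (pixels) on each
    # side over the whole shot, measured after stabilization.
    zx = width / (width - left - right)    # zoom needed to cover x intrusions
    zy = height / (height - top - bottom)  # zoom needed to cover y intrusions
    return max(zx, zy)  # smallest single zoom that hides all black edges

# Example: a 1920x1080 shot with 12 px of black on the left and 8 px on top
# needs a factor of only about 1.0075, a very small (and slightly soft) zoom.
print(min_zoom(1920, 1080, 12, 0, 8, 0))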
Just one more thing. If the camera is bouncing a lot, it will introduce motion blur into the shot. After the shot is stabilized, the motion blur will still be scattered randomly throughout the shot and may look weird. There is no practical fix for this. Better warn the client about this too.
Realistic Compositing
A compositor is, first and foremost, an artist, so this chapter focuses on the art of compositing. There are two levels of artistic excellence. First is the achievement of competent photo-realism, the objective of every digital composite. But beyond replicating reality in a convincing manner, there is also artistic enhancement: going beyond simply professionally assembling the elements provided and adding your own artistic panache to make the shot look cool. Good artistic design makes for good visual effects shots.
Digital compositing can be defined as taking several disparate elements that were photographed separately and integrating them into a single shot such that the various elements appear to have been shot together at the same time under the same lighting with the same camera. This journey begins by color correcting the elements so that they appear to be together in the same "light space." Then there are film and lens attributes that have to be made to match. Finally, the finished composite is sweetened by the addition of special operations designed to enhance realism and improve artistic appeal.
COLOR CORRECTING
The single most important aspect of a convincing digital composite is color correcting the various layers to all appear to have been photographed together in the same lightspace. There are several aspects to color correcting and it is easy to chase your tail, so here we will walk through a methodical step-by-step procedure that teases the issues apart and tackles them one at a time. A methodical approach saves time and gives better results.
The first step in the color correction of a composite is for the background plate to be color corrected. This is often referred to as color grading, and many visual effects facilities have a specific pipeline set up to ensure that the background plates for a related group of shots are all color graded similarly in order to maintain visual continuity. After the background is color graded, the compositor adds the various layers on top and color corrects them to match the color-graded background plate.
Whether you are compositing CGI or bluescreen elements, each element will need some level of color correcting. The CGI elements usually need less attention because they were originally created using the background plate as a reference and should be fairly close to begin with. However, the bluescreen elements were shot with no way to ensure that they matched the background so they are usually wildly off and require a great deal of love.
The Black and White Points
The starting point for an effective color correcting procedure is to get the black and white points correct because it also results in matching the contrast. The reason these have to be set first is that if they are wrong, then all other color corrections become much harder to judge. To make matters worse, if other color corrections are done first, when the black and white points are finally set correctly, you will then have to go back and refine all the other color corrections. Better to start with the black and white points set correctly first.
Strictly speaking, the black point and white point have a very specific meaning in the world of color science. However, this is not a book on color science, so we will bend the meaning to our purposes. For the purpose of color correcting a composite, the black point shall henceforth be defined as the black that would appear in a completely unexposed part of the picture. While these will be the darkest pixels in the image, they should not actually be set right at code value zero, since that can introduce clipping in the blacks. The white point is defined as the pixel value of a white T-shirt in bright sunlight.* The white point should not be set to code value 255 as that would leave no pixel values above it for highlights. The white point should be more like 230 to 240 or so, leaving a little "headroom" for the shiny bits of the picture.
A problem with the black and white points is that one or both layers of the composite may not have either of them in the picture. It is entirely possible to have a picture that has no totally black parts. Think of a cloudy sky shot. It is also possible to have a picture with no white points in the frame. Think of a black cat eating licorice in a coal bin. If there are no actual black or white points in one or both layers of the composite, then you would still follow the steps in the order outlined below, but with a lot more procedural estimation (guessing).
Pretend for a moment that the two layers you are trying to match both have black and white points within the picture. Our case study begins with the bluescreen in Figure 9-1, which is to be composited over the color-graded background in Figure 9-2. The un-color corrected raw composite is shown in Figure 9-3. The foreground layer is too green and the contrast is too low. We have our work cut out for us.

*The "white T-shirt in bright sunlight" white point is not very scientific. The scientific definition of the white point is a 90% diffuse reflective surface, and that is what a white T-shirt in bright sunlight would be, and the T-shirt is a lot easier to envision.
An ancient Chinese trick for color correcting digital composites is to make a monochrome version like Figure 9-4 for adjusting the grayscale of an image, which is the black and white points as well as the gamma. We will talk about gamma in a minute. The reason this helps is because it gets the color out of the way and lets the eye focus on the grayscale, or the luminance parts of the picture, which is what gamma and the black and white points are all about. It is a divide and conquer strategy designed to pare down and simplify the task.
The next step is to inspect the monochrome composite to identify the black and white points in both the foreground and background. In Figure 9-5, the black point for the background plate was found under the shady bush between the ladies, and the black point for the foreground layer was found under the black sole of a shoe.
The white point for the background plate was found to be the white awning, and for the foreground layer it is the white shawl in direct sunlight (about the same as our white T-shirt).
Keep in mind that the background plate has already been color graded and we need to match the foreground layer to that. The black point in the background was measured and found to be code value 10, while the foreground black point was code value 22. The white point in the background awning measured 240 and so did the shawl in the foreground. Code value 240 is a bit hot for a white point, but it is a brightly lit outdoor shot and we don't want to start a dust-up with the art director, so we will go with it.
The white point is good on our foreground layer, but we do need to bring the black point down to about code value 10 to match the background. How this is done depends on the color correcting tools your compositing system offers, but Figure 9-7 illustrates how this would be done using the "Universal Color Corrector," the Levels tool in Adobe Photoshop. The inset close-up shows that the black level has been set to pull the Input Level from 14 down to zero, which will shift code value 22 down to around 10. This increases the contrast of the foreground and we now have a fine match between the foreground and background in Figure 9-6. Keep in mind that when the black point was lowered like this, it had a small effect on the white point, so it should be rechecked and touched up if necessary.

If one of the layers doesn't have one of the black or white reference points, all is not lost. It is uncommon for a picture to be missing a black point, but let's say the foreground did not have a good white point. Look around the picture for the lightest element you can find. Maybe it appears to be an 80% gray (remember, we are working with the monochrome version here). Search the background plate for what appears to be another 80% gray and match them. Look for similar elements in both layers, such as skin tones. Assuming skin tones in the background and foreground are in the same lighting they would be about the same brightness, also assuming the two characters had the same skin type, of course. You are definitely guessing, I mean estimating, but it is better than just eyeballing it. At least you tried to be scientific about it.
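The Levels-style black point move described above is, at bottom, a linear remap of code values. A minimal sketch (Python with NumPy, assumed; the numbers mirror the example in Figure 9-7):

import numpy as np

def set_black_level(img, in_black, in_white=255):
    # Linearly remap [in_black, in_white] to [0, 255], like the Levels tool.
    out = (img.astype(np.float32) - in_black) * 255.0 / (in_white - in_black)
    return np.clip(out, 0, 255).astype(np.uint8)

# Pulling the input black level from 14 down to 0 moves the foreground's
# black point (code value 22) down to roughly 8-10, matching the background.
fg_blacks = np.array([22], dtype=np.uint8)
print(set_black_level(fg_blacks, in_black=14))   # ~[8]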
Gamma
After setting the black and white points, the next step is gamma correction. While the gamma adjustment mostly affects the midtones of a picture, its effect does spread all the way up and down the entire grayscale of an image. Still working with the monochrome version, adjust the gamma of the foreground layer for best match. Unfortunately, there is no slick procedural way to set this unless both the foreground and background layers were photographed with chip charts (Figure 9-) to be used for a color reference, which did happen once in 2003.
That's what I heard, anyway. The thing to know about adjusting the gamma is that it can shift your black and white points, so be sure to go back and check them after any gamma correction. A gamma correction alters all pixel values between 0 and 255, but does not alter 0 or 255 themselves. Since your black point is hopefully not at zero nor your white point at 255, they will be shifted a bit after the gamma correction. Note that the black point will be shifted more than the white point.
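The behavior described here, where 0 and 255 stay fixed while everything between moves, and the black point moves more than the white point, is easy to verify with a small sketch (Python with NumPy, assumed):

import numpy as np

def apply_gamma(img, gamma):
    # Classic gamma on 8-bit code values: endpoints 0 and 255 are unchanged.
    v = img.astype(np.float32) / 255.0
    return (v ** (1.0 / gamma) * 255.0).astype(np.uint8)

# A gamma of 1.2 shifts a black point at 10 up to ~17 but a white point
# at 240 only up to ~242, so recheck the blacks after any gamma change.
print(apply_gamma(np.array([0, 10, 240, 255], dtype=np.uint8), 1.2))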
Color
We now have the grayscale set correctly for our case study so it is time to turn the color back on and check the color (Figure 9-9). Yikes! The girls are green! No worries, we will have that fixed in a jiffy. There are a couple of approaches to getting the color of the foreground to match the background. The problem with trying to match color like this is that you really don't know what color the foreground objects are supposed to be unless you were on the set when the objects were filmed. Maybe you will be on the set someday, but for now you will need another approach. The next best approach is careful observation of known color elements in the picture.
Caution: it is not true that equal RGB values will create a neutral gray in all situations. Some display devices have a color bias, so in order to get a neutral gray to the eye the RGB values need to be biased. Another situation is when the lighting in a shot is not neutral. If the lighting were a bit yellow, for example, then to get a neutral gray to the eye it would have to have a bit of yellow in it.
Occasionally, there will be known gray objects in the picture. Their pixel values can be read and used to adjust the RGB values until they are equal.
Whoa, did you see that big caution above? Maybe you better not make the RGB values equal so much as make the gray object appear a neutral gray to the eye. There may be a known gray object in the background plate that you can use as a reference. Measuring its RGB values might reveal, for example, that a neutral gray in the background has a bit more red than green or blue, so the foreground gray should have a similar red bias.
When sampling pixel values of a photographic image to be used for setting the color of another image, you should run a small blur over the image being sampled.
The natural grain or noise in the sampled image introduces variations at the pixel level that can give you false RGB readings. To address this issue some color sampling tools (the good ones) have an option to sample more than a one-pixel spot. They are, in effect, running a little blur over the spot you are sampling.
One other technique is to use the skin tones, assuming there are some in the shot. We are very sensitive to the color of skin tones so adjusting the color until the skin tones look right can be very effective. Even if you use gray objects to balance the color, be sure to check the skin tones carefully. The "skin tone" method was used to color correct Figure 9-9 to get the color corrected composite in Figure 9-10.
Changing the color of a shot can change its brightness, causing you to go back to the beginning of the color correction procedure and start over. One color has much more effect on the apparent brightness of a picture than the others, and that color is green. Change the green level just a bit and the brightness changes a lot. Change the red or blue and the brightness is hardly affected at all. The idea here is to change the color of a shot without disturbing the green channel, which is not hard to do if you use the patented "constant green" method of color correction.
Figure 9-11 shows how to adjust the RGB color sliders to increase any primary or secondary color without disturbing the green level. For example, to increase cyan, lower the red. To decrease any of these colors just move the sliders in the opposite direction shown here. For the overly green shot in Figure 9-9, instead of lowering green, the red and blue were raised.
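A sketch of the constant green idea (Python with NumPy, assumed; the gain values are illustrative): hue is corrected by scaling only red and blue, so the brightness that green dominates stays put.

import numpy as np

def constant_green_correct(img, red_gain=1.0, blue_gain=1.0):
    # img: float RGB array in [0, 1]. The green channel is never touched.
    out = img.copy()
    out[..., 0] = np.clip(out[..., 0] * red_gain, 0.0, 1.0)   # red
    out[..., 2] = np.clip(out[..., 2] * blue_gain, 0.0, 1.0)  # blue
    return out

# For an overly green composite, raise red and blue instead of lowering green:
# corrected = constant_green_correct(comp, red_gain=1.08, blue_gain=1.06)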
Color Adjustments
When confronted with a layer that needs the color, or hue, corrected to match the other layers of the shot, the next question is which color correction adjustment should be used? Your compositing software might offer lift, gamma, gain, contrast, hue, saturation, brightness, color curve adjustments and others, any of which may be used to correct the color problem. How can we choose which one to use? The first order of business is to be clear on exactly what each of these different adjustments does to the code values of the image and its appearance.
Figure 9-12 is the basic reference setup for demonstrating each color correction operation. The gradient across the bottom shows the pixel brightness from black to white and the graph plots their code values. Figure 9-13 shows the lift operation and how it affects the gradient. It has its largest visual impact in the darks so it is often referred to as "adjusting the darks" but don't be fooled, as you can see it also affects the midtones and whites, but to lesser degrees. Figure 9-14 shows the effect of a gamma adjustment, which is usually referred to as "adjusting the midtones." While its effects are mostly in the midtones, it also affects the darks a lot and the lights a little and it does not introduce clipping. Of course, the lift and gamma adjustments can go in the other direction to darken the image.#p#分頁標題#e#
Figure 9-15 shows the gain operation, which is also known as scale RGB because the gain operation actually does scale the RGB values. While it does affect the entire range of pixel values, its greatest impact is in the whites. Watch out for clipping unless you scale the RGB values down. The contrast adjustment in Figure 9-16 raises the whites and lowers the blacks, so it can introduce clipping at both ends. There is no clipping danger if the contrast is lowered. Figure 9-17 shows the brightness operation, which looks a lot like gain (Figure 9-15) because they are mathematically identical. The only difference is that typically gain allows you to adjust each color individually, while brightness adjusts them all together as one.
Figure 9-18 is an attempt to illustrate saturation but the graph is only suggestive, not literal. A saturation increase moves the RGB values of a pixel further apart, suggested by the three graph lines moving apart. Most saturation operations are smart enough to prevent clipping. The hue adjustment in Figure 9-19 indicates how it actually rotates the pixel values around the color wheel. A green pixel will move to yellow then to red and so on. Figure 9-20 shows three color curves, one for red, green, and blue that can be individually adjusted to affect any portion of the color space any way you want. Color curves are very powerful, but hard to control.
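For reference, the three workhorse adjustments reduce to one-line operations on normalized pixel values. A minimal sketch (Python with NumPy, assumed; these are the common formulations, and individual packages vary):

import numpy as np

def lift(v, amount):     # biggest visual effect in the darks
    return v + amount * (1.0 - v)

def gamma_op(v, g):      # biggest effect in the midtones; 0 and 1 stay fixed
    return v ** (1.0 / g)

def gain(v, scale):      # scales all values; biggest impact in the whites
    return v * scale

v = np.linspace(0.0, 1.0, 5)        # a black-to-white gradient
print(lift(v, 0.1))                 # blacks raised most, white untouched
print(gamma_op(v, 1.2))             # midtones raised, endpoints fixed
print(np.clip(gain(v, 1.1), 0, 1))  # whites clip first, watch for it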
There is a more elaborate type of color corrector, illustrated by Figure 9-21, which splits the range of adjustments into three "zones": The darks, the midtones, and the lights. First, the zone to be affected is selected and then all color corrections are limited to that zone and gently blended into the next. Figure 9-22 illustrates increasing the brightness in the darks. The gradient across the bottom of the graph is split to show the effect of the change. The bottom half of the graph is the "before" and the top half is the "after." Figure 9-23 illustrates a lowering of the whites with the gradient showing the change.
Now that we know how each adjustment will affect the code values of our pixels, the next question becomes, "Which adjustment do we use under what circumstances to fix the color of a shot?" Remember, we already have the black and white points set as well as an overall gamma correction. At this point, we are only concerned about the overall hue of the shot, in this case, the green ladies in Figure 9-9.
To decide which color correction operation to use we first need to determine where in the image the color needs to be changed: the darks, the midtones, or the whites. If it is only in the darks, then use the lift; if it is only in the midtones, then use the gamma; if it is only in the whites, then use the gain. Use these operations on a per-channel basis and don't forget that they also affect the other parts of the image, but to a lesser degree. For example, if the darks had too much red, then lower the lift of the red channel. If the whites had too little blue, then increase the gain of the blue channel. If the midtones were too green, then increase the gamma of the red and blue channels. Remember, we want to leave the green channel alone as much as possible to avoid affecting the overall brightness.
Pre-Balancing the Color Channels
The process of color correcting a layer to match the background is made much more difficult if the layer starts way off. Pre-balancing the color channels means to view and adjust the composite one channel at a time to blend it better with the background.
While this method is not really suitable for final color correction, it can get things in a much better starting position, which will make the final color correction faster and easier.
Figure 9-24 shows just the green channel of the composite before color correction, while Figure 9-25 shows the same channel after. After all three color channels have been individually color corrected, then the image viewer is set to show the full color RGB image to do the final adjustments. You will be pleasantly surprised at how close this procedure can get you to a finished color correction.
Gamma Slamming
Visual effects shots are typically developed on a workstation with the picture displayed on the workstation monitor. However, after the shot is finished it goes off to film, video, digital cinema, or some other display device. Those other display systems have different characteristics and color spaces than the workstation monitor that can exaggerate even small differences between the layers of a composite. The shot may also go to a colorist, who may increase the contrast or "stress" the shot in other ways. This can cause the different layers of a composite to visually "pull apart" and become noticeable. Gamma slamming can be used to prevent this embarrassing development by exaggerating any small discrepancies so you can find them before the client does.
The procedure is to add a gamma adjustment to the final composite, which will be used for viewing purposes only, not as part of the shot. With some compositing systems, you can adjust the gamma of the image viewer instead. The gamma is "slammed" from one extreme to the other to "stress" the image and see if any of the layers pull apart visually. Figure 9-26 shows our case study composite with the gamma slammed all the way up to 3.0, blowing out the shot. At this extreme, any difference in the blacks between the foreground and background layers would become very noticeable. Figure 9-27 shows the gamma slammed down to 0.2. If the midtones or highlights did not match between the two layers, that would show up here.
Gamma slamming should be done on every composite you do.
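A minimal sketch of a viewer-side slam, assuming the composite is held as normalized floats:

    import numpy as np

    comp = np.random.rand(480, 640, 3)   # stand-in for the finished composite

    def slam(rgb, g):
        # Viewing-only gamma; it is never rendered into the delivered shot.
        return np.power(np.clip(rgb, 0.0, 1.0), 1.0 / g)

    slammed_up = slam(comp, 3.0)     # blown out: mismatched blacks pull apart
    slammed_down = slam(comp, 0.2)   # crushed: midtone/highlight mismatches show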
One other important use for gamma slamming is to detect clipped pixels in a shot. Again, we really should not have any pixels in a photo-realistic visual effects shot that have RGB values of exactly zero or 255. Recall from the gamma correction discussion that the gamma operation does not touch the pixels with RGB values of 0 or 255. By slamming the gamma way down to 0.01, for example, almost all of the RGB values less than 255 get pulled down toward black. This leaves on the screen only those pixels with RGB values of 255, which are the clipped pixels.
The original shot in Figure 9-28 was captured with a high-resolution digital still camera so the image is riddled with clipped pixels due to the high dynamic range of the scene content (the bright lights). A severe gamma correction of 0.01 was applied to it to create Figure 9-29, which reveals all the clipped pixels in the shot. Not only does this show you where the clipped pixels are, but it also shows which channels are clipped. Where there are white pixels, all three RGB values must be code value 255 so all three channels are clipped. The pixels that appear red must have red values at 255, but the other channels are lower, so only the red channel is clipped there.
Yellow pixels must have red and green at 255 so they are both clipped, and so on through the colors.
If the original images you are given to work with are already clipped, there is nothing you can do. However, you need to make sure that you did not introduce any new clipped pixels into the shot. Slam the gamma on the original image and compare it to your finished composite to make sure you have not added to the problem.
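Here is a minimal sketch of the clipped-pixel test, assuming 8-bit RGB data; the function name is illustrative:

    import numpy as np

    img8 = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in shot

    def show_clipped(img8):
        # Slam the gamma to 0.01 (an exponent of 100): values even slightly
        # below 255 fall steeply toward black, so anything still bright is
        # clipped. White = all channels clipped; red = only red clipped; etc.
        x = img8.astype(np.float64) / 255.0
        return (np.power(x, 1.0 / 0.01) * 255.0).astype(np.uint8)

    # Compare show_clipped(original) against show_clipped(composite) to verify
    # that no new clipped pixels were introduced.
    clip_view = show_clipped(img8)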
MATCHING LAYER ATTRIBUTES
In addition to a convincing color correction on all of the layers of the composite, there are a host of additional layer attributes that need to be matched. You will be combining two or more layers of film (or video), and each layer will have its own grain (or noise) that will have to match the rest of the shot. This goes double for CGI or a digital matte painting that has no grain to begin with. If any one of the layers was photographed through a lens, it will have a depth of field and lens distortion imparted to it that need to be dealt with. Of course, shadows are a key visual cue that must be consistent between the various layers of a composite.
Grain Structure
Film has a very noticeable grain structure that is part of its unique look. Many consider it one of film's major charms. Compositors consider it a royal pain in the arse.
The reason it is a problem is that the grain structure of all of the layers of a composite must match. If the layer being added has no grain, such as CGI or a digital matte painting, then it is easy enough to add grain. However, if it is another layer of film, such as a bluescreen element, it already has its own grain. If its grain structure does not match the background, it is much more difficult to fix. If a film element is scaled up or down, its grain structure goes with it and it will no longer match the rest of the shot. Many times a digital matte painting is created using a frame from the film as the base, so now it has grain "frozen" into the picture. All of these grain issues must be addressed.
Figure 9-30 is a gray chip from an actual piece of digitized film to show the film's grain structure. There are two key points: The first point is that the grain pattern on each channel is unique. It is not one grain pattern embossed into all three layers; it is three unique grain patterns. The second point is that while the red and green grain is similar, the blue grain is considerably "grainier." The film's grain will vary with different kinds of film stock and different exposures of the film. The grain can vary in both the size of the grain particles and their contrast, meaning how much they vary from dark to light. The blue channel grain will be both larger and have more contrast than the red and green channels, as you can see in Figure 9-30.
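To connect this to the easy case mentioned above (adding grain to a grainless layer), here is a minimal sketch; the amplitudes and particle sizes are illustrative assumptions, chosen only to make the blue channel both larger and higher in contrast:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    cgi = np.full((480, 640, 3), 0.5)   # stand-in grainless CGI layer
    rng = np.random.default_rng(1)

    grain = np.zeros(cgi.shape)
    # (amplitude, particle size) per channel; blue is larger and punchier.
    for c, (amp, size) in enumerate([(0.02, 0.5), (0.02, 0.5), (0.05, 1.0)]):
        noise = rng.normal(0.0, amp, cgi.shape[:2])      # unique per channel
        grain[..., c] = gaussian_filter(noise, size)     # size shapes particles

    grained = np.clip(cgi + grain, 0.0, 1.0)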
If the grain of a layer of film is a problem, then why not just degrain it and regrain to match? Because degraining film is very difficult. If a blur or median filter is used, the picture turns soft. There are special degrain programs out there, but they are pricey and still tend to soften the picture, though not as badly as a simple blur. If you don't have sophisticated degrain tools, there is one trick that can help. Since most of the picture detail is in the red and green channels, but most of the grain is in the blue channel, you can usually improve the situation by blurring just the blue channel, as sketched below.
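Sketched in the same NumPy style, with an assumed blur radius:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    plate = np.random.rand(480, 640, 3)   # stand-in grainy film plate

    softened = plate.copy()
    softened[..., 2] = gaussian_filter(plate[..., 2], sigma=1.5)  # blue only
    # Red and green carry most of the picture detail and keep their sharpness.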
[Figure 9-30 panels: gray chip, red noise, green noise, blue noise]
Video does not have grain, but perversely, it has noise, which is the video version of grain, shown in Figure 9-31. Even more perversely, it is again the blue channel that has the highest level of noise. Like film grain, the video noise pattern is unique for each channel. The good news is that video noise is almost always much less than film grain. The exception stems from low-light shots where the videographer has cranked up the video camera's gain to brighten the picture, which dramatically increases the video noise level. Whether film or video, the digital compositor's mission is to match the grain or noise between all layers of the composite.
Depth of Field
All lenses, whether they are on film, video, or digital cameras, have a depth of field. That is, a zone where the picture is in focus while everything in front of and behind that zone is out of focus. Cinematographers use this very deliberately when selecting lenses for a shot in order to keep the item of interest in sharp focus and everything else out of focus. The eye ignores the out-of-focus parts, so this is an important cinematographer's tool for keeping your eye where they want it. This makes the depth of field a very important part of the storytelling, in addition to being an essential element of a technically correct composite.
Your mission as a digital compositing artist is to introduce the correct depth of field to each layer that you add to the shot by defocusing it as needed. You inspect the scene carefully, estimate where the new layer is relative to the other objects in the shot, and then set its depth of field appropriately. If the element moves front to rear, you may have to animate the defocus. While something out of focus is blurry, a blur is not a defocus. Some compositing programs acknowledge this with an honest "depth of field" or "defocus" operation. Use 'em if you've got 'em. The rest of us must use a blur and hope that nobody notices. Of course, if you apply a blur to an element to defocus it, the grain will be wiped out, so it will have to be restored with a regrain operation.
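Lacking a true defocus node, a sketch of the blur-based fallback might look like this; the sigma value is an assumption, and the regrain step is left as noted in the comment:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    layer = np.random.rand(480, 640, 3)   # stand-in element to push out of focus

    def fake_defocus(rgb, sigma):
        # A blur standing in for a true defocus operation.
        return np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)],
                        axis=-1)

    defocused = fake_defocus(layer, 2.0)
    # The blur wipes out the grain, so a regrain pass (as sketched earlier)
    # must be applied on top; animate sigma if the element moves front to rear.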
In general, the depth of field gets shallower the closer the focused target is to the lens. Here are some examples of this, beginning with Figure 9-32. This medium shot has the two characters in focus, but the far wall is out of focus (note the flowers in the background). The depth of field is even more noticeable in Figure 9-33, where the two chaps are chatting over a very blurry background. The close-up in Figure 9-34 has such a shallow depth of field that the lady's far shoulder is actually out of focus. An extreme close-up of a face might have the nose and eyes in focus but the ears out of focus!
Shadows
The eye notices the lack of shadows immediately, so they are an essential element of a convincing composite. Nothing integrates two layers together like having the shadow of one layer cast on the other. Nothing reveals a bad composite like their absence. Any existing shadows in the background plate should be studied for clues as to the direction, sharpness, and density of the faux shadows you will be making. If there are no shadows to study, then think about the nature of the lighting in the shot as a guide to the nature of the shadows that would be cast.
Shadows are not simple things. They have an inner core and an outer edge, and there is the all-important contact shadow. In this section, we will take a look at creating a progressively more realistic shadow to see what makes up a good shadow.
The shadow is applied to the background layer prior to the composite by multiplying it by each shadow mask. Let's take a look.
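As a minimal sketch of that multiply, with assumed mask names (a core mask and an edge mask, each 1.0 inside its shadow region):

    import numpy as np

    bg = np.random.rand(480, 640, 3)    # stand-in background plate
    core_mask = np.zeros(bg.shape[:2])  # 1.0 inside the dark inner core
    edge_mask = np.zeros(bg.shape[:2])  # 1.0 inside the softer outer edge

    def apply_shadow(rgb, mask, density):
        # Multiply: unshadowed pixels (mask 0.0) pass through untouched;
        # fully shadowed pixels are scaled down by (1.0 - density).
        return rgb * (1.0 - density * mask[..., None])

    shadowed = apply_shadow(bg, edge_mask, 0.3)        # soft outer edge
    shadowed = apply_shadow(shadowed, core_mask, 0.6)  # dense inner core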