
What effects program should I get?

Another “what can you afford” question.  Most editing programs have the ability to do simple effects, including chroma keying (green/blue screen).  However, programs dedicated to effects (or extra plug-ins you can buy for most editing programs) will do a much better job than whatever comes standard with your editing program.

After Effects is a popular choice and you can easily get help/advice from thousands of people on the web.

If you are serious about pursuing a career in visual effects, Shake seems to be the big up-and-coming program these days.  My personal choice for the most bang for the buck is Combustion, which also has a decent foothold in professional effects studios.

Honestly, if you’re just starting to get into this whole movie making thing, I suggest you concentrate on learning how to use the basic tools to tell an entertaining story before you get caught up in the eye candy.  A visual effect should contribute to telling the story.  If you don’t know how to tell a story first, you won’t know how to make the most effective effect.


What editing program should I get?

This is more of a “what can you afford” type question.  If you’re serious about pursuing a career in editing, try to get an Avid product.  You can’t go wrong with Final Cut on the Mac (and it’s giving Avid a run for its money in the professional field).  I personally use Vegas Video with no regrets on the PC, but would not recommend it for “career” editors.  Whichever editing program you go for, make sure it has at least these features:

1) Allows “split edits”. 
A split edit is when the picture and audio cut at different times. (e.g. You see a shot of someone talking.  The picture cuts to a reaction of another person while the audio from the previous shot continues.)  A lot of the el cheap-o editing programs that come with camcorders only allow you to cut the picture and audio at the same time.  Split edits are a MUST HAVE feature. You can’t do any kind of respectable editing without them.  Watch ANY movie and you’ll notice that the majority of edits are split edits.

2) At least a few tracks of audio and the ability to mix them.
Some el cheap-o programs only allow you to use the original audio recorded along with the video and another track for adding music.  This simply isn’t enough.  You also need extra tracks for sound effects, ambiance, etc.  Four tracks would be the bare minimum, but most decent programs allow you to have as many audio tracks as your computer system can handle.

3) Lets you control and export/import video to/from your DV camcorder directly.
Okay, I guess it’s not a must have feature, but using 3rd party programs to handle your video I/O is a pain.


What kind of computer should I get?

Well, that can be a touchy question, and I don’t want to start a Mac vs. PC flame war here.

It is my personal belief that the less you know about computers, the more a Mac system makes sense.  There are a lot more software/hardware options and support out there for the PC, but you will run into compatibility problems more often than with a Mac system, and unless you know how to fix them you can run into some major headaches.

Despite not having as many options, there are still some excellent programs out there for the Mac that will allow you to do just about any movie making related thing you can imagine.

Get whichever system you’re more comfortable with.  Just about any G5 equipped Mac or Pentium 4 equipped PC has enough horsepower for DV editing and audio work.  Get at least 512 MB of RAM and a second hard drive of 100 Gigs or more to dedicate to video data, and you’ll be in good shape.  Firewire ports (to connect your DV camcorder to your computer) are pretty standard these days, but you should double check that your system has one.


What camera should I get?

I won’t get into any specific makes or models of cameras, but I’ll tell you what I think are the bare minimum features your camera should have.

1) Manual control of everything.
If you are serious about MAKING movies, about capturing the images YOU want, you must have a camera that allows manual control of EVERYTHING.  You can’t learn camera techniques for yourself if the camera is  making the exposure decisions for you.

2) External microphone and headphone jacks.
Sound is 50% of the picture.  To make a good movie you need good sound.  The mics built into most camcorders aren’t very good and pick up noise from the camera (tape transport, zoom and focus motors), and any physical fumbling you do with your hands.  Also, the further away your actors are from the camera, the worse the sound will be.
An external mic on an extension cord lets you get away from the noise the camera makes and closer to your actors.  Headphones allow you to monitor the sound while it’s being recorded.  This will let you know immediately if there are any audio problems (mic bumps, airplanes flying overhead, etc.) instead of later in the editing room after all the actors and crew have gone home.

3) MiniDV or Digital8
The only difference between MiniDV and Digital8 is the size of the tape.  Digital8 gives you the bonus of being able to play the old 8mm formats with your new camera, but MiniDV is much more widespread and I would suggest you go for it instead.  Some film festivals will accept movies on MiniDV, but not on Digital8.

Stay away from cameras that record to DVD.  The image quality isn’t as good, the data format isn’t as “editing friendly”, and the DV format has way more support and options.

I would strongly suggest you get an extra battery and external microphone.  A wide angle lens adapter is also a nice thing to have.

There are plenty of other features that would be great (audio record levels, progressive scan, 16:9 aspect ratio, image stabilization (optical is better than electronic), high definition, interchangeable lenses, etc.), but they will of course cost more.

If your camera meets the above 3 criteria, you will have a very good foundation for learning and experimenting with the basic movie making techniques.

Here’s a great page that compares and rates just about every camcorder out there.  I highly recommend it:
www.camcorderinfo.com

NOTE:  Don’t spend ALL your money on the camera.  Keep in mind you will still need to buy some support equipment such as a microphone, tripod, computer software (and a computer to run it on!), etc.


The DV Format

HDV uses MiniDV tapes, but it is otherwise a different beast altogether.  HDV info to come.

There are some professional flavours of DV (DVCPRO, DVCAM, etc.).  Depending on the exact model, the only difference may be the quality of the camera’s optics (lenses).  In other cases, there may be a difference in the size of the video cassette (larger = longer run times), or the speed at which the tape runs (faster = fewer dropouts (errors)).  But let’s stick to MiniDV/Digital8, which both use the exact same codec: DV (DV25, if you want to be exact).

The DV codec works in YUV 4:1:1 colour space, which is explained in detail in the Colour Space post below.

The images captured by your camera’s CCD(s) start off as RGB, and are then converted to YUV 4:1:1.  Once this is done, a Discrete Cosine Transform is applied.  Okay, I won’t pretend to know all the math behind that, but you thought I was pretty smart there for a second, didn’t you?  The practical upshot: the transform sorts the image data so that the quantization step which follows can throw out more info, starting with what you are most likely not going to miss, which tends to be the finer details.

But, of course, colour sub-sampling and loss of fine detail can only be taken so far before it becomes apparent.  DV footage may look nice at first, but a closer look reveals how much “damage” the DV codec does to the original image.

DV compresses data at a ratio of nearly 10 to 1.  Considering that 90% (!) of the image data has been lost, it looks pretty damn good.
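
If you want to check that math yourself, here’s the back-of-the-envelope version in Python (rough NTSC numbers, counting from uncompressed 8-bit RGB; not exact specs):

```python
# Rough NTSC numbers: 720x480 pixels, ~30 frames/s, 8 bits per channel.
width, height, fps = 720, 480, 30
uncompressed_mbps = width * height * fps * 3 * 8 / 1_000_000   # RGB, Mbit/s
dv_mbps = 25                                                   # DV25 video data rate

print(f"uncompressed: {uncompressed_mbps:.0f} Mbit/s")         # ~249 Mbit/s
print(f"DV25: {dv_mbps} Mbit/s, ratio {uncompressed_mbps / dv_mbps:.1f}:1")  # ~10:1
```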

Still, this amount of data loss does cause problems for some things such as green/blue screen compositing.  Take a look at this image.  The colour channels have been separated and displayed as greyscale, so you can better see what’s going on.

Note that the red and blue channels have a “halo” around the subject.  This is because the red and blue channels are at a lower resolution, due to the 4:1:1 colour sub-sampling.  The green channel fares a bit better, which is why it’s best to use green screen for chroma keying when shooting on DV.

The DV format is Interlaced and is lower-field dominant.  For an explanation of what the hell Interlacing and Field Dominance are, check out the video standards page, coming soon.


Colour Space

Colour Space is a method of defining colours using a numerical system.  There are lots of crazy colour spaces out there, but let’s stick with the ones you are most likely to run across.

RGB

RGB stands for Red, Green and Blue, which are the three primary additive colours.

Additive colours are used when working with light, such as in your computer monitor or TV.  A blank screen is black and colours are added to produce an image.

Different intensity combinations of these three colours can produce all the other colours of the rainbow.  When all three primary colours are at the same intensity, the result is a shade of grey.

Hey!  Wasn’t I told in school that the three primary colours were Red, Blue and Yellow?  Those are the three primary SUBTRACTIVE colours.  Also, the exact colours are actually Cyan, Magenta and Yellow.  See CMYK below.

CMYK

CMYK stands for Cyan, Magenta, Yellow and Black.  These are the primary subtractive colours.  The K actually stands for “Key”: in printing, black was traditionally printed from the key plate that the other colours were aligned to.  It also conveniently avoids any confusion with the B for blue in RGB space.

These are referred to as subtractive colours because they are used for pigments, such as those you find in a printer.  A blank page starts off as white (all the colours of the rainbow) and pigments are applied to subtract the unwanted colours.

CMYK is mostly used in pre-press applications and you probably won’t come across it too often in the course of movie making (except when you’re printing DVD labels of your latest flick :)

An interesting thing to note is that RGB and CMY are the opposite of each other.

You can see this on a colour wheel where: Red is across from Cyan, Green is across from Magenta and Blue is across from Yellow. 

This is a handy thing to know when it comes to colour correcting an image.  For example, if your image is too yellow, it can be cancelled out by adding blue (the opposite colour).  And it follows that green counteracts magenta, and red counteracts cyan.
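
If you like seeing these things as numbers: in 8-bit RGB, the colour-wheel opposite is just each channel subtracted from 255.  A quick sketch:

```python
def complement(r, g, b):
    """The colour-wheel opposite of an 8-bit RGB colour."""
    return (255 - r, 255 - g, 255 - b)

print(complement(255, 255, 0))   # yellow -> (0, 0, 255), blue
print(complement(0, 255, 0))     # green  -> (255, 0, 255), magenta
print(complement(255, 0, 0))     # red    -> (0, 255, 255), cyan
```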


HLS


HLS stands for Hue, Luminance and Saturation.  Sometimes the last two letters are swapped (HSL), but it means the exact same thing.

I can’t think of a file format that saves images in HLS space, but you’ll find it available in most painting programs, because it’s very easy for people to think in terms of HLS.

You can see how it’s simple enough to pick a colour (Hue) and then modify it to the shade (luminance) and saturation you desire.  Easy peasy.
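
If you want to play with this, Python’s built-in colorsys module converts between RGB and HLS (it works in 0 to 1 floats).  A quick example:

```python
import colorsys  # Python standard library

h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)   # pure red
print(h, l, s)                                 # 0.0 0.5 1.0 (hue 0 = red, fully saturated)

# Same hue, but lighter and half as saturated: a pastel pink.
print(colorsys.hls_to_rgb(h, 0.8, 0.5))        # roughly (0.9, 0.7, 0.7)
```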

YUV

Okay, this colour space is harder to wrap your brain around, but it is found in most digital video formats (including MiniDV) so it’s worth understanding.

First of all, the letters themselves don’t tell you much: Y is the traditional engineering symbol for luma, while U and V are the two colour-difference signals (strictly speaking, U is the blue difference and V is the red difference).  It’s (a bit) easier if you think of it as Luma, -Red, -Blue (denoted as L,-R,-B).  So excuse me, all you notational purists out there while I continue to use L,-R,-B to explain how YUV works…

L,-R,-B capitalizes on a weakness of human visual perception in order to reduce the amount of data used  to create an image and still give the PERCEPTION of no data loss.  The trick is to save less colour information than you do luminance (brightness) information.  The human eye is more sensitive to luminance than it is to chrominance so you can get away with throwing some of it out.

This is done by “sub-sampling” the colour information.  This means that the colour information is captured at a lower resolution than the luminance information.  The sub-sampling ratio is often denoted as 4:2:2 or 4:1:1.  This means for every 4 pixels of luminance info, there are only 2 (or 1) pixels of colour info.  For example:  Suppose you have a picture that is 100 by 100 pixels in size.  With 4:2:2 encoding, the luminance of the image is saved at a resolution of 100×100, but the colour info (both -Red and -Blue) is saved at a resolution of 50×100.  In the case of 4:1:1 encoding, the luminance is still at 100×100, but the colour is sampled at 25×100.

Of course, before the image is actually displayed, the colour info needs to be scaled up to match the resolution of the luma info.
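
To make those numbers concrete, here’s a tiny sketch of how many samples each scheme actually stores for that 100×100 example (simplified to horizontal sub-sampling only, which is how 4:2:2 and 4:1:1 work):

```python
# Stored samples for the 100x100 example above, per sampling scheme.
W, H = 100, 100
horizontal_divisor = {"4:4:4": 1, "4:2:2": 2, "4:1:1": 4}

for scheme, div in horizontal_divisor.items():
    cw = W // div                       # chroma is sub-sampled horizontally
    total = W * H + 2 * cw * H          # one luma plane + two colour planes
    print(f"{scheme}: luma {W}x{H}, colour {cw}x{H} (x2), {total} samples")
# 4:4:4 -> 30000 samples, 4:2:2 -> 20000, 4:1:1 -> 15000
```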

But where is the colour Green in all of this?  Well, it works like this…
The luma info is a grey scale (black & white) image.  White is made up of ALL the colours of the rainbow.  So if you start with white and then subtract red (-R) and subtract blue (-B), what you’re left with is green. Neat, huh?

This also takes advantage of another attribute of human perception.  Our eyes are more sensitive to green than to other colours.  In L,-R,-B colour space, green is what is left of the luminance (after red and blue are subtracted) and since the luminance is sampled more often than -R and -B, that means by default, green is sampled more than red and blue.
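
For the curious: standard definition video (including DV) weights the luma signal using the Rec. 601 formula, and you can see just how much of it green carries:

```python
# Rec. 601 luma weighting used by standard-definition video (including DV).
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(0, 255, 0)))   # pure green -> 150 (over half the signal!)
print(round(luma(255, 0, 0)))   # pure red   -> 76
print(round(luma(0, 0, 255)))   # pure blue  -> 29
```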

L,-R,-B (YUV) may seem overly complicated, but (hopefully) now you can see how clever it is in the way it’s geared towards the way human perception works.

Some high end video gear can record in 4:4:4 colour space, which means that the colour info is not sub-sampled.  The potentially confusing point is that 4:4:4 is usually YUV space, but sometimes it’s RGB space, depending on the equipment specs.


The Basics of Chroma Key


Here’s how I go about a simple chroma key (green screen) composite. Each compositing program has different terms for the processes and tools I’m going to talk about.  Therefore, this is meant to be a general overview, not a program specific tutorial.  So,  don’t ask me how to do “such and such” with program “X”, because I don’t know how and your program may not even be able to do it.

A good program will allow you to view the matte (or Alpha Channel as it’s sometimes called).  It’s often easier to spot problems with the matte if you are able to view it directly.  Even if your program doesn’t allow you to see it, it’s still generating a matte internally.

We start with two images: a background (BG) image and a foreground image (FG).

The FG is placed in front of the BG.  At this point, the FG completely hides the BG, because we have not made a matte for the FG yet. Chroma keying is a method of creating a matte from a specific colour (in this case the green screen).  First we tell the computer which colour we want to make transparent (allowing us to see the background through it).

The green screen is never perfectly lit, so some tolerance ranges have to be adjusted to let more of the green become transparent.  You can see here what happens if the tolerances aren’t set right.  Parts you want to keep start going transparent.

Keep tweaking the settings until you get the best possible result.  Don’t worry if there is still a green outline around the FG subject (it’s bound to happen 99% of the time).  There are other tools that will take care of this.
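
If you’re curious what your keyer is doing under the hood, here’s a bare-bones sketch of a distance-based keyer in Python (using numpy; the tolerance and softness names are just made up for illustration, and real keyers are far more sophisticated):

```python
import numpy as np

def chroma_matte(fg, key_rgb, tolerance=60.0, softness=40.0):
    """Bare-bones distance keyer (an illustration, not production code).

    fg: HxWx3 float array of 0-255 RGB; key_rgb: the screen colour.
    Pixels within `tolerance` of the key colour go fully transparent
    (matte = 0); `softness` adds a linear ramp so the edge isn't razor sharp.
    """
    dist = np.linalg.norm(fg - np.asarray(key_rgb, dtype=float), axis=-1)
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)

# A matte of 1 keeps the FG pixel, 0 shows the BG through it:
# composite = matte[..., None] * fg + (1 - matte[..., None]) * bg
```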

A Note On Colour Space:
First of all, if you don’t know anything about colour space, you may want to check out the Colour Space post above.

Most low end compositing programs work in RGB (red, green, blue) colour space.  Unfortunately, in my experience, this is the worst colour space to work in.  Why?  Think of it this way:  the three controls you have over the key are Red, Green and Blue.  Assuming you’ve shot a perfectly lit and exposed green screen (even though this never happens!), then what good are the Red and Blue controls on your chroma key?  Basically, two thirds of the controls don’t help you refine your key.

Now think of the same thing in HLS (Hue, Luminance, Saturation) colour space.  Once you pick the colour (Hue) of the key, in this case green, you can then hone in on the luminance of that green AND the saturation of that green.  Now all three of your controls are helping you refine the key.

If you’re working with MiniDV footage, then it makes even more sense to work in YUV, as that is the native colour space of that format.

If your compositing program allows it, work in any colour space other than RGB.  If you’re stuck with RGB, it’s sometimes better to colour correct your green screen footage before you try to pull a key off it.  This may make your subject look awful, but it puts the green of your screen more in the “zone” your keyer likes.  You then take the matte this generates and apply it to a non-colour-corrected version of the FG.  This is more complicated, but can produce much better results.

Shrinking (or choking) the matte does what it sounds like.  It makes the matte smaller and will help remove  green outlines.  The catch is, the more you shrink the matte, the more fine details are lost (such as strands of hair).  If you go too far, people can lose their hands or even their heads!  A good program will allow you to adjust the gamma, erosion, blur, histogram, etc. of the matte as well.
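
Here’s roughly what a choke does, sketched with scipy (a minimum filter pulls the opaque region of the matte inward; a touch of blur softens the new edge):

```python
from scipy.ndimage import minimum_filter, gaussian_filter

def choke_matte(matte, pixels=2, soften=1.0):
    """Shrink (choke) a matte, then soften the new edge slightly.

    The minimum filter pulls the opaque (white) region inward by `pixels`;
    push it too far and fine detail (hair, fingers) vanishes along with
    the green fringe.
    """
    choked = minimum_filter(matte, size=2 * pixels + 1)
    return gaussian_filter(choked, sigma=soften)
```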

Next we have to deal with something called Spill. That’s when light reflected off the green screen falls onto your subject, turning it green.  This is why it’s important to keep your subject as far away from the green screen as possible.  The farther away they are, the less spill will hit them.

You can see below how the actor’s skin is reflecting some green.  The way to get rid of it is to use Spill Suppression. Most el cheap-o compositing programs don’t have this tool.  Too bad, ’cause it makes your composites look a lot better.  It basically allows you to colour correct just the green.

You can see what it does here.  I’ve not only suppressed the green, I’ve also swung it towards flesh tone, because I was most concerned about what it was doing to peoples’ faces.  Wispy hair is something else that benefits greatly from Spill Suppression.
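
The classic recipe for green spill suppression is to clamp the green channel so it can never exceed the larger of red and blue.  A sketch (swinging the suppressed areas towards flesh tone, like I did above, would be an extra colour correction on top of this):

```python
import numpy as np

def suppress_green_spill(rgb):
    """Clamp green so it never exceeds the larger of red and blue.

    Green-tinted spill gets pulled down towards neutral; areas where
    green wasn't dominant are left alone.
    """
    out = rgb.copy()
    out[..., 1] = np.minimum(rgb[..., 1], np.maximum(rgb[..., 0], rgb[..., 2]))
    return out
```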

So, we’re almost done.  All we have to do is get rid of all that crap outside of the green screen.  This is accomplished with a Garbage Matte.  Here I have created the garbage matte using a roto spline (a hand drawn shape).  When it’s turned on, everything outside the shape becomes transparent regardless of what my chroma key says.  Some programs may refer to this as a Mask.

Here’s the final composite.

A colour correction has been applied to the foreground to make it better match the BG environment.
A letterbox has also been applied.  I like to shoot full frame and then add the letterbox in post.  When working with virtual sets it can be hard to know at the time you’re shooting if you have the shot lined up exactly right.  Having extra image outside the letterbox allows me to adjust the composition.  In this case, I actually moved the shot up a bit to reduce some of the headroom.  The letterbox then hides the gap left at the bottom of the frame.

There, all done!  However… Just because a still image looks good, doesn’t mean the moving footage will be acceptable.  MiniDV has a lot of compression and this can cause noise (or bubbling) on the edge of your mattes.  This is often not apparent until you see the footage in motion.

If you have noise in your matte, go back and tweak all the settings until it goes away.  Shrinking or choking the matte helps, but remember if you go too far, people start losing their body parts!

Sometimes applying a blur to the shot before the key is pulled will help.  But of course, you will then have a blurred shot.  Use the matte pulled from the blurred shot, but apply it to a non-blurred version.
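
In code terms (reusing the chroma_matte sketch from above), the trick looks like this:

```python
from scipy.ndimage import gaussian_filter

# fg, bg: HxWx3 float arrays, as in the keyer sketch above.
# Pull the key from a blurred copy, but composite the ORIGINAL pixels.
blurred = gaussian_filter(fg, sigma=(1.5, 1.5, 0))    # don't blur across colour channels
matte = chroma_matte(blurred, key_rgb=(40, 180, 70))  # key colour is hypothetical
composite = matte[..., None] * fg + (1 - matte[..., None]) * bg
```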


Camera

Your camera’s sole purpose is to create images.  …Okay, if you’re using a video camcorder you’re probably using it as your audio recorder as well, but that’s another topic.  We’re just talking about how to use a camera here…

Let’s start with the basics of how an image is captured.  It’s all about the exposure.  The first picture is underexposed.  The second picture is overexposed.  But the third picture is just right.  (Pictures courtesy of Goldilocks).

Three things control the exposure:
1) The aperture, AKA the opening in the iris.
2) The shutter speed.
3) The sensitivity of the imaging device.

They all add up like this:
The larger the aperture, the more light comes into the camera.
The longer the shutter is open, the longer light is allowed to fall onto the imaging device.
The less sensitive the imaging device is to light, the more light it needs to make a good image.
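
If you like, you can put numbers on these trade-offs.  Photographers measure them in “stops”, where one stop means doubling (or halving) the light:

```python
import math

# One "stop" = doubling (or halving) the light hitting the imaging device.
# Stops are just log2 of the exposure ratio.

# Shutter: going from 1/120s to 1/60s doubles the exposure time -> +1 stop.
print(math.log2((1 / 60) / (1 / 120)))   # 1.0

# Aperture: light is proportional to 1 / f-number squared, so going
# from f/4 to f/5.6 cuts the light roughly in half -> about -1 stop.
print(math.log2((4.0 / 5.6) ** 2))       # ~ -0.97
```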

Film compared to Video

Apertures:  
Apertures in film and video cameras work the same way.

Shutter speed:  
With motion picture cameras, the shutter is a metal half-circle that spins just in front of the film.  When the shutter is out of the way, the film is being exposed.  When the shutter is blocking light from falling on the film, the film is being advanced to the next frame.  Most of the time, motion picture cameras have a shutter speed of 1/48 of a second.

With video cameras…  Okay, are you ready for this?  I’m going to blow your mind.  Despite the fact that video cameras have a feature you can set called “shutter speed”, THERE IS NO SHUTTER IN A VIDEO CAMERA!  …Have you recovered from that yet? 

Alright, let me explain…

Film cameras have shutters because the film needs to be advanced to the next frame.  If there was no shutter, the film would be exposed while it’s moving to the next frame, causing nasty vertical streaks.

Video cameras use CCDs.  When light hits the CCD it begins to build an electric charge.  The longer light is allowed to hit the CCD, the larger the charge becomes.  Also, the more intense the light, the faster the charge builds.  When the CCD is discharged, the image is captured and the CCD returns to zero charge.  The process repeats.

Normally, an NTSC video camera has a “shutter speed” of 1/60 of a second (PAL is 1/50).  That means the CCD is discharged after 1/60 of a second.  If the “shutter speed” is 1/1000, the charge on the CCD is only allowed to build for 1/1000 of a second before it’s discharged.  Obviously, the shorter the time a charge is allowed to build, the smaller that charge will be, resulting in a darker image.

This has the same effect as shutter speeds have with film, but it does it without a physical shutter.
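
And the math works out the same as with a film shutter:

```python
import math

# Exposure is proportional to how long the CCD charge is allowed to build.
normal = 1 / 60    # standard NTSC "shutter speed"
fast = 1 / 1000

print(math.log2(normal / fast))   # ~4.1 stops darker at 1/1000
```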

Whew…  glad that’s over.

Sensitivity of the imaging device:
Okay… That’s a bad choice of words to describe how film cameras work, because they don’t use an imaging device, they use… well, er, film.
However, the sensitivity can change depending on the particular film stock used.  The sensitivity of film to light is expressed in the ASA number and referred to as the “speed”.  The “faster” a film stock is (or the higher the ASA number) the less light it needs to capture a good image.  The trade off is that fast film is more grainy.  Most movies use 250 ASA film for daytime scenes and 500 ASA for nighttime and indoor scenes.

The sensitivity of a CCD depends on its engineering.  Some video cameras use three CCDs and as a result need more light (but have better colour reproduction).
Video cameras have a feature called “Gain”.  This boosts the video signal to make it look brighter, but does so at the expense of making the image more grainy.  It’s very much the same effect as using a faster film stock.  In most cases it’s better to add more light to a scene rather than increasing the gain.
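
As a rule of thumb, every +6 dB of gain looks like one more stop of exposure (the 6.02 below is just 20·log10(2), the dB value for doubling a signal):

```python
import math

def gain_db_to_stops(db):
    """Video gain (dB) to equivalent exposure stops: +6 dB ~ one stop."""
    return db / (20 * math.log10(2))   # ~6.02 dB per doubling of signal

print(round(gain_db_to_stops(6), 2))    # ~1.0 stop
print(round(gain_db_to_stops(18), 2))   # ~3 stops (and a lot of grain)
```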
