Some Tips on Shooting Movies with Outside Editing
Posted 15 May 2010 - 09:19 PM
The chromakeying in After Effects offers the most control over what you can get rid of, but you need to be careful that the edges don't come out "scraggy". Try to get it to look as smooth as possible.
As far as Magix goes (which is what I use for pretty much all my editing, visual and otherwise), you are unlikely to ever fully get rid of the colour around the edges, but sometimes you can edit the colours to give the illusion that it's gone. For example, I often lower the saturation to make it less noticeable. The Movies' default look is too brightly coloured anyway, in my opinion.
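The saturation trick is easy to see in numbers. Here's a minimal sketch in plain Python (the pixel values are made up for illustration, and real editors work on whole images, not single pixels): lowering saturation blends each channel toward the pixel's grey value, so a green-tinged fringe pixel drifts toward neutral grey and draws less attention.

```python
def lower_saturation(rgb, amount):
    """Blend a pixel toward its grey (luma) value.

    amount = 0.0 leaves the colour alone; 1.0 makes it fully grey.
    Rec. 601 luma weights are used for the grey value.
    """
    r, g, b = rgb
    grey = 0.299 * r + 0.587 * g + 0.114 * b
    return tuple(round(c + (grey - c) * amount) for c in (r, g, b))

# A hypothetical green-spill fringe pixel, half-desaturated:
fringe = (90, 200, 90)
print(lower_saturation(fringe, 0.5))  # -> (122, 177, 122)
```

The half-desaturated pixel keeps a hint of green but no longer screams "keying artifact", which is all the illusion needs.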
The best way to use depth of field in my experience is to restrict it to close and mid shots. Long shots tend to make the upper layer look untidy and tacked on. Remember that although it should make a noticeable difference in the visuals, it shouldn't be overly distracting. Keep it subtle and use it sparingly.
Hope this helps
Posted 24 November 2010 - 12:28 AM
(With apologies for anything already mentioned)
1. While you don't want long movie files to edit, keep individual scenes a little longer than you might need. It's tempting to only export the exact length you need for a scene, but it will severely limit the way you can transition between scenes. You can always trim a scene down to what you need, but once the scene is too short, you're kind of screwed.
2. For transitioning between scenes, keep it simple, and keep it consistent. If you choose to go Lucas-esque with circle/diagonal wipes, run with it. Just don't try to add cross or pixellated fades on top, or things will look sloppy. And vice versa. One transition style is really all you need.
3. More on transitions. The speed of your transition should ideally equal the pace of the action on screen. To a very large degree, the pacing of your final picture will come from how you edit the scenes together. Lots of quick cuts will give the impression of a faster pace. Longer scenes with few cuts tend to slow things down. The original Star Wars trilogy is a great place to see how editing affects the perception of pacing.
4. Use color correction effects to give your scenes a more uniform color palette. Evening out the colors across a whole project will make it look more cohesive.
5. Add music and sound effects in your NLE as much as possible. While you do need to use MS and TM for your lip synch files, try to use the audio from your output file as a guide track only. Overdubbing the voices, and adding your effects and music in NLE post production will give you MUCH more flexibility, along with a wealth of effects you can use on each aspect of your audio for a better overall sound.
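On tip 4, one crude way to even out colours across scenes is to scale each channel so its average matches a shared target. This is only a sketch in plain Python with made-up pixel values, not how any particular NLE implements colour matching, but it shows the idea of pulling scenes toward one palette:

```python
def match_channel_means(pixels, target_means):
    """Scale R, G, B so each channel's average hits a shared target average."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gains = [t / m if m else 1.0 for t, m in zip(target_means, means)]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# Two hypothetical pixels from a warm-looking scene, pulled toward neutral:
scene_a = [(120, 100, 80), (140, 110, 90)]
corrected = match_channel_means(scene_a, (110, 110, 110))
```

Run the same target over every scene in the project and the averages line up, which is a big part of what "cohesive" reads as on screen.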
Grand Bastardo Presentations
Posted 24 November 2010 - 09:57 PM
Posted 29 November 2010 - 06:48 PM
(With more apologies for anything posted that falls into the "Well DUH!" category)
Let me level with you:
Somewhere in your outside editor, there should be a sound level meter that represents the signal level strength of your master audio channel. Put more simply, it tells you how loud your audio will be in the finished video.
Usually, the number starts at -Inf (negative infinity) at the bottom, then works its way up to 0db, before going into a red zone above 0db.
To get the best sound possible, you want to flirt with the 0db mark without actually hitting or surpassing it, which would cause distortion. Digital distortion is very harsh, so it is to be avoided at all costs.
When recording audio into your editor, have a peek at the level meter while you do a practice run through your lines. If you run into the red, or very close to it (between -1db and 0db), lower the input volume of your mic until your peaks are hitting somewhere in the -3db range. The reason I go lower is that some effects will clip (distort) if the input signal is too strong.
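For reference, the meter reading is just 20 times the log (base 10) of the peak sample level relative to full scale. A quick sketch in plain Python (assuming sample values are floats in the -1.0 to 1.0 range, as most editors use internally):

```python
import math

def peak_dbfs(samples):
    """Return the peak level of a take in db relative to full scale (0db)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # the meter's -Inf: pure silence
    return 20 * math.log10(peak)

# A take peaking at about 71% of full scale sits near the -3db sweet spot:
take = [0.1, -0.45, 0.708, -0.2]
print(round(peak_dbfs(take), 1))  # -3.0
```

Note how non-linear the scale is: half of full scale is only about -6db, which is why the last few db before 0 fill up so fast.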
When mixing audio, things get a bit trickier. The more tracks of audio you add, the quicker things will reach distortion levels. Enter the submix.
Fix it in the (sub)mix.
Working with submixes can be tricky at first, but will save a lot of hassles, headaches, and most importantly, CPU cycles in the long run.
**NOTE** Not all editors are created equal when it comes to submixes, so this will be more a general guide of how you can make them work for you. Most info on how to set up a submix in your editor can likely be found in the Help files, in the user guide, or online.
So let's take a hypothetical example project: a 6-minute courtroom drama. You've decided on having 5 speaking roles (judge, 2 lawyers, witness, defendant), plus a few sound effects (gavel bangs, courtroom murmurs and chatter, the sound of the stenographer recording the testimony), and finally some background music.
Simple, right? So let's map out what we need to mix. If we were to label each element the way we would on a mixing board, it would look a little like this:
Judge
Lawyer 1
Lawyer 2
Witness
Defendant
Gavel
Courtroom ambient
Stenographer
Music
That makes 9 tracks of audio to balance without going over the 0db mark. Tricky, but not impossible. But now we decide the courtroom sounds too small for real drama, so we want to add some "large room" reverb to the voices to make it sound like they're in a large space, the way we see things in Hollywood.
So let's mark the spoken parts that get reverb with an asterisk:
Judge *
Lawyer 1 *
Lawyer 2 *
Witness *
Defendant *
Gavel
Courtroom ambient
Stenographer
Music
Notice something? Well, yes, the sound levels do change once the reverb is added, so we now have to adjust the levels for all 9 tracks again to get the sound balanced just the way we like it.
Now that the vocals are fixed, and the sound is back to being balanced, we notice the sound effects need a little something. Maybe a light compression (+) and a little bit of reverb (-) to give them some ambience and make things sound more realistic.
So now our mixing board would look something like:
Judge *
Lawyer 1 *
Lawyer 2 *
Witness *
Defendant *
Gavel +-
Courtroom ambient +-
Stenographer +-
Music
Back to adjusting our levels. And is it my imagination or are things running a bit slower? Could it be the 8 reverb effects and the 3 compression effects? Dammit, can't this be done in an easier way?
First, we notice from the layout of our virtual mixing board that we seem to have 3 "groups" of sounds. We have the spoken parts, sound effects, and music. These will be our submixes.
Submix A
Judge *
Lawyer 1 *
Lawyer 2 *
Witness *
Defendant *
Submix B
Gavel +-
Courtroom ambient +-
Stenographer +-
Submix C
Music
Hmm, even better, it seems each of our elements in our submixes use the same effects. We can now move the effects from each track to the submix track.
Submix A (*)
Judge
Lawyer 1
Lawyer 2
Witness
Defendant
Submix B (+-)
Gavel
Courtroom ambient
Stenographer
Submix C
Music
We have now gone from running 8 reverb effects to 2, and from 3 compression effects to just one, with the same sound as before. This should speed up render times on any system by a noticeable amount.
Even better, once our relative levels in each group are set, we only need to manipulate 3 faders in our editor to adjust the sound. Effects too loud compared to the voices? Lower the B fader to reduce the overall sound effects level.
Getting the levels in each group set can be every bit as much a pain in the ass as without any submix groups, but once they're set, it's a single fader to adjust each group's overall volume in the final mix as you add or remove effects, submix groups, etc.
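The fader arithmetic above can be sketched in a few lines. This is plain Python with made-up track levels, not any editor's actual mixing engine; the point is just that scaling one submix fader scales every track routed through it:

```python
# Hypothetical per-track levels (already balanced within each group)
tracks = {
    "judge": 0.20, "lawyer1": 0.18, "lawyer2": 0.18,
    "witness": 0.15, "defendant": 0.15,             # submix A: voices
    "gavel": 0.10, "ambient": 0.06, "steno": 0.04,  # submix B: effects
    "music": 0.12,                                  # submix C: music
}
submixes = {
    "A": ["judge", "lawyer1", "lawyer2", "witness", "defendant"],
    "B": ["gavel", "ambient", "steno"],
    "C": ["music"],
}

def master_level(faders):
    """Sum each group at its submix fader setting into the master bus."""
    return sum(faders[bus] * sum(tracks[t] for t in names)
               for bus, names in submixes.items())

full = master_level({"A": 1.0, "B": 1.0, "C": 1.0})
# Effects too loud? One fader move scales all three effects tracks at once:
quieter_fx = master_level({"A": 1.0, "B": 0.5, "C": 1.0})
```

Without the submix layer, that same "effects too loud" fix would mean nudging three separate track faders and hoping you moved them all by the same amount.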
Posted 30 November 2010 - 12:38 AM
1 - You can never "fix it in the mix."
When it comes to getting a sound - whether a voice over, a sound effect, or an ambient background noise to loop for a more natural feel - getting a clean take is the single most important thing.
A distorted signal that was recorded too hot will always be distorted and will be noticeable in the mix, no matter what tools you use to try and restore the signal. Make sure that no part of your signal in any way goes past the 0db limit when recording or mixing down.
Also, a clean take means recording a sound without ANY effects applied during the input stage. It might sound really cool to record your voice with that ring modulator on for your robot overlord character, but if later you find that the sound gets lost in the mix, or runs into phasing problems with other sonic elements, you're SOL for solutions other than re-recording the part from scratch. Add your audio effects in post-production where you'll have more control over the final sound and can tweak things to fit better in your sound mix.
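The "will always be distorted" point can be demonstrated directly. In this sketch (plain Python, with a synthetic sine wave standing in for a real recording), a take recorded too hot gets hard-clipped at full scale; turning it back down afterwards leaves the flattened peaks flat:

```python
import math

# One cycle of a clean sine wave, peaking at full scale (1.0)
original = [math.sin(2 * math.pi * i / 64) for i in range(64)]

# Recorded too hot: 2x input gain, then the converter hard-clips at 1.0
too_hot = [max(-1.0, min(1.0, 2 * s)) for s in original]

# "Fixing it in the mix" by halving the level cannot restore the waveform:
restored = [s / 2 for s in too_hot]

print(max(restored))  # stuck at 0.5 - the rounded peaks are gone for good
```

The samples above the clip point were never stored, so no amount of gain, EQ, or plugin wizardry after the fact can bring them back; that is exactly why the input stage is where the battle is won.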
2 - If everything is in stereo, nothing is in stereo.
To make the most of a stereo image, you ideally want the individual elements of your mix to be mono files, so you have the most freedom to pan around the stereo field and give your audio some depth.
If a character hears a voice off screen to the left, it should sound as if the voice is coming from the left to the viewer. It doesn't have to be a drastic pan (10-15% is usually enough) but it should clearly be coming from the left. With a mono file, set the pan fader to taste, and you're set.
Doing this with a stereo file becomes complicated, since you'll have to pan an unnatural amount to one side to overcome the fact that you have a clear signal coming from the side it isn't supposed to. Multiply this excess signal by a dozen tracks or more and your mix will soon be a cluttered mess. You end up losing the space to place subtle spatial cues that tell your viewer where to focus their attention.
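Placing a mono file in the stereo field is just one gain per channel. Here's a sketch of the common constant-power pan law in plain Python (pan runs from -1.0 hard left to +1.0 hard right) - not what any specific editor does internally, but the standard math:

```python
import math

def pan_mono(sample, pan):
    """Split a mono sample into (left, right) using constant-power panning."""
    angle = (pan + 1) * math.pi / 4   # maps -1..+1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

centre = pan_mono(1.0, 0.0)         # roughly (0.707, 0.707): equal power
slight_left = pan_mono(1.0, -0.15)  # the subtle off-screen-voice pan
```

Constant power means left squared plus right squared stays at 1, so the perceived loudness doesn't dip or jump as you sweep a sound across the field - which is why a gentle 10-15% pan is enough to read as "coming from the left" without calling attention to itself.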
3 - Dynamic range is about more than just volume.
Every now and then, you'll have what you feel is the perfect sound balance. Your fader volumes are low enough to give you plenty of headroom before reaching distortion, and things seem to be going well.
Yet, for some reason, you'll get these weird spikes that reach the clipping point even though the overall volume of the track doesn't appear to rise high enough to go into distortion. What gives?
In fact, your signal really is too loud, but perhaps only at certain frequencies.
A common problem is that your files will all be bottom/mid/top heavy in terms of frequencies and eventually you run out of headroom in that range. So even though the overall volume of your master audio track doesn't appear to rise significantly, your signal clips in a certain range that will cause that nasty digital distortion.
So what to do?
Do exactly what professional studios do: use EQ to trim away frequencies so things play nicely together. To use a music example, bass guitar, kick drums, and guitars can all occupy the same frequency range. A recording engineer will often chop down a LOT of the low end of an electric guitar so it doesn't get washed out by the bass and drums (and one of those will be made artificially brighter than the other so the sounds of each aren't washed out by the other).
Taken individually, each track of a given song wouldn't sound right - the guitar might sound too thin, the kick drum too boomy, the bass too brash, but mixed together, each element fills the sonic space vacated by the other two and the result is pleasing to the ear.
In the home movie studio, the same approach can be taken. If your character has a deep rumbling voice, try using EQ to roll off some of the bass frequencies of your music track (or another track) to give the voice some space in the mix. The roll off doesn't need to be extreme. A little here and there can make all the difference.
It should be noted that EQ is generally part of each channel's setup, so this is one "effect" that should be applied on a track-by-track basis and NOT through the submix channel, unless you're trying to be really avant-garde.
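Under the hood, a bass roll-off is nothing exotic. Here's a minimal one-pole high-pass filter in plain Python - a toy stand-in for the EQ in a real editor, assuming samples are floats in the -1.0 to 1.0 range. Low frequencies get attenuated; high frequencies pass nearly untouched:

```python
def high_pass(samples, a=0.95):
    """One-pole high-pass: rolls off bass, leaves treble mostly alone.

    Larger `a` (closer to 1.0) moves the roll-off lower in frequency.
    """
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)  # passes changes, bleeds off sustain
        out.append(y)
        prev_x, prev_y = x, y
    return out

rumble = [1.0] * 100  # stand-in for deep bass rumble (no change = no pass)
hiss = [1.0 if i % 2 == 0 else -1.0 for i in range(100)]  # very high pitch

print(abs(high_pass(rumble)[-1]))  # near 0: the bass is rolled off
print(abs(high_pass(hiss)[-1]))    # near 1: the treble passes through
```

Run something like this on the music track under a rumbling voice and the low end thins out while the rest is barely touched, which is exactly the "give the voice some space" move described above.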
Posted 30 November 2010 - 12:56 AM
1 - The "Background Noise" test.
Put your movie on, set the volume to about what you'd watch a DVD at home at (maybe a hair louder if you can), and then do anything but watch the movie. Read a magazine, leave the room and come back in, lie on the bed/couch, have a snack. If the sound mix draws attention to itself, something needs to be fixed. Ideally, the soundtrack should become background noise, the same way a television set to any random channel would, and just be "there."
If you notice it's on, it's usually your ear picking out something it doesn't like and alerting you to it. Usually after a listen or two like this, you'll know what's bugging you and you'll get ideas on how to fix it.
2 - The "Blindfold" test
For this test, sit at your computer, put on your movie, and while looking toward the screen, close your eyes and simply listen. The closing of the eyes is important so you can focus on how things sound without the confusion of trying to reconcile the sound with the look.
Are sounds coming from where they are supposed to? Do the sound locations match what you visualize in your head/in your video? Are people/things closer to the camera louder than people/things in the background?
Ideally, the sound should be able to carry the load of the story the way a radio play might.
** NOTE ** This test should be performed in the final stages of post, when you have an intimate knowledge of the visuals on screen. Closing the eyes is very important to avoid distractions, so you need to know what is going on on screen in order to know how the sound compares to the action.