From My Perspective:
Music Playback and Live-Record
by Joseph Magee, CAS
Have you ever heard one of these gems?
• Your Producer says, “I have a friend who knows Pro Tools and should do playback,” the Music Supervisor says, “Right on man.”
• The Producer appointed to watch over the musical scenes in the film wants it all recorded live, with no tempo glue for Editorial. He says, “That’s the only way to have a real performance, no click ever, live pre-records and live on the day. Our Editor will make it all work in Post.”
• The Director has a relative with an amazing home studio (in his garage) for pre-record. “The tracks will rock for sure. They will prep everything.”
• The UPM tells you, his faithful Sound Mixer, that music playback needs to happen the next day without a hitch; we don’t have a track yet. “Also, we don’t have a budget for a music playback person so you guys figure it out. Remember, you have a whole trailer full of gear I’m paying for.”
• The Music Supervisor has an MP3 they will email you sometime soon; it’s all good.
• And last, but not least, the Film Editor wants you to get playback timecode on the slates because that’s how he used to do it when he was doing music videos.
I’ve been privileged to work on production music for feature films for more than two decades now. Before coming into the world of on-camera musical performances, I recorded classical and jazz records and broadcasts, worked as an orchestral scoring mixer for features and mixed front-of-house live sound for large venues including the Hollywood Bowl. Over the years, I’ve developed a keen sense of the procedures that facilitate a smooth production and the elements that enhance an artist’s ability to give a great performance. My projects have given me the chance to work in feature film pre-production, prerecord, production and post with many acclaimed music producers, composers, musicians and recording artists all facilitating the filmmakers’ vision. I do believe I have a unique perspective that starts from the very beginning and extends to the bitter end in final Post.
Although every project is slightly different, each usually starts with the music team, Director and Producers visualizing how the scene will play and then planning so that all the elements are in place on the shoot day. This is essentially the same as with any other scene in a feature film, except that a music performance adds the complexity of managing creative work across three separate periods: the initial music composition/rehearsal/pre-record, the on-set performance to camera and the creation of the scene in Post. Unlike the rest of the feature film, however, these three distinct periods are tied to the element of synchronous performance locked to the established timeline of the music track. This makes the music scene full of its own technical and artistic challenges.
The ideal scenario is to execute pre-records that will make it to final dub. During my many features with Disney, this also proved to be financially prudent. Yes, the tracks will be sweetened, edited, fixed to picture and stem mixed in the film’s final theater presentation. But the musical, artistic content will be set and adhered to, creating the exact intention of the musical moment, the storyline and the actors’ performances.
A synth track mock-up will not achieve this; it may get you through the day but that’s about it. The mock-up has a very good chance of not feeling the same, or sounding anywhere as good as the final track. The hastily assembled temporary track does a poor job of conveying the emotions of the scene for cast and crew—a sure recipe for a lifeless performance. Even if the track exactly matches every beat and every note, music is a “feel” thing and if the performers don’t feel it, the audience in the theater likely won’t either. The substitution of better music in Post might improve the scene technically but won’t do anything to breathe life into the unmotivated performances during production. I’ve found this to be a common theme—time spent in preparation makes filming go better and lessens the need to spend time in Post fixing mistakes.
A well-prepared playback should have vocals that are dry and relatively free of compression or processing. Vocal FX should be available as separate stems and mixed to the environment on the day. A believable music scene requires natural bridges between dialog and music. The performer can best deliver these transitions when every syllable from the recording can be easily heard in the playback. Pro Tools is the industry standard software/hardware for feature films. The sound FX, dialog and music teams all use Pro Tools. It is the standard for the dubbing stages as well. So it saves a lot of time if Pro Tools is also the software of choice for on-set music playback. The technical sound platform software for communication from beginning to end of a production process should be standardized. When someone chooses to use different software, it just creates conversion issues. Fortunately, Pro Tools is easily accessible on many levels and with many types of hardware. The one exception to this standard is often the music score composer’s personal studio, but this can be worked out by conversion to Pro Tools before the score leaves to see the outside world.
The Pro Tools session that goes to set for music playback should have the music locked to a bars/beat grid. This will enable very quick edits if you are called upon to create magic while a 1st AD waits, not so patiently. The grid is easily achieved in advance, not so quickly on set at the last minute. I also believe in using your prep time to print a click and thump track, beginning to end. Even though your grid is functioning and your click is a plug-in firing off the grid, it is easier to show and cut a visual region when folks are at the rig trying to work out cues. We are lucky today that most choreographers and music folks all have a common ground in Pro Tools and are able to use the visual aid of the screen to communicate with each other. I also have my memory locations already set for song structure before anyone steps to my screen to talk cues. Another detail most often missed in the prep of the sessions is that PB timecode should advance to a new hour for each different song. This will help Editorial in the long run.
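The hour-per-song idea can be sketched in a few lines. This is purely illustrative (no Pro Tools API is involved, and the function name and song titles are hypothetical): each song in the session is parked at its own timecode hour so Editorial can identify cues at a glance.

```python
# Illustrative sketch: assign each song in the playback session a start
# timecode one hour apart, per the hour-per-song prep convention.
# Function and song names are hypothetical, not any Pro Tools feature.

def session_layout(songs: list[str]) -> dict[str, str]:
    """Map each song title to a start timecode advancing one hour per song."""
    return {title: f"{hour:02d}:00:00:00"
            for hour, title in enumerate(songs, start=1)}

layout = session_layout(["Opening Number", "Duet", "Finale"])
for title, tc in layout.items():
    print(f"{tc}  {title}")
```

With this layout, any take bearing PB code in hour 02 unambiguously belongs to the second song, which is exactly what helps Editorial in the long run.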
I believe that a music-intensive show should not rely on PB timecode on an audio track. An Avid Sync HD I/O should be used on films with music-intensive scenes. This device should be synchronous to a video sync reference. The good news is there are a few ways of setting up this requirement, which now makes the on-set hardware complement much lighter.
In many situations live-music-record is very important. Combinations of music playback and live-record performances, if executed properly, are often worth their weight in gold in Editorial. Even a few words of live-record cut into the pre-record in Post enables the audience to believe the musical performance in the final cut.
On the other hand, a show built from all live-record can be a disaster in Post. Folks giving their accounts of “all live-record” shows don’t always tell the whole story. Often these shows require extensive editing and pitch work to correct meandering tempos and modulating keys. I have worked on a long list of projects with well-meaning Directors who have gone down this road from the excitement during production to frustration in Post.
If you do have to go “all live” during production, you’ll need to provide the performers some sort of mapped tempo either using a click track through earwigs or a thumper or both. If the singing is a cappella, you’ll also need to play a pitch reference at the right moments. Even so, some key modulation and tempo variations are likely to occur.
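A mapped tempo is, at bottom, just a list of tempo segments from which every click onset can be computed. Here is a minimal sketch of that math, assuming 4/4 time; in practice this lives on the Pro Tools grid, and the names here are illustrative only.

```python
# A minimal sketch of a "mapped tempo" click: given (bars, bpm) segments
# in 4/4, compute the onset time of every beat. Illustrative only; on a
# real show this information lives in the session's tempo map and grid.

def click_times(tempo_map: list[tuple[int, float]], beats_per_bar: int = 4) -> list[float]:
    """Return beat onset times (seconds) for a sequence of tempo segments."""
    times, t = [], 0.0
    for bars, bpm in tempo_map:
        beat = 60.0 / bpm                # seconds per beat at this tempo
        for _ in range(bars * beats_per_bar):
            times.append(round(t, 6))
            t += beat
    return times

# Two bars at 120 BPM, then one bar at 90 BPM.
beats = click_times([(2, 120.0), (1, 90.0)])
print(len(beats), beats[:4])
```

The same onset list can drive either the earwig click or the thumper, which is why the two can share one tempo map on the day.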
Modern earwigs are very useful although limited by volume and low fidelity. I started doing this on-set work back in the day, first with earwig inductance loops taped into the set, and then with neck loops. So I am comfortable explaining the current limitations to talent and creating an environment that helps the devices do their jobs. For example, when transitioning from speaker playback to earwigs and back to speakers, I like to leave the thumper running at a very low level the entire time. The pulse helps provide the “rhythmic glue” to tie the separate moments into one seamless feeling. A thumper quietly pulsing away also helps to keep the full range speaker volume level lower throughout the day.
Active eighteen-inch subwoofers today are very affordable and do a great job. The source of the thump is also very easily tuned on-set in Pro Tools. The sample used for the thump can be highly tuned prior to arriving on set. I have used the same sample for thump for many years. With the current state of the art in active loudspeaker design, I think everyone should take advantage of better fidelity playback on set. A speaker system with higher than average Total Harmonic Distortion (THD) and poor crossover points is fatiguing to the cast and crew. When music plays on set and sounds great, the day goes by more smoothly. It’s easier for performers to follow lyrics that are clearly articulated and better fidelity helps them “feel” the music and translate that energy to the performance. New, high-quality designs are affordable and durable. Passive speakers with amp racks on set and drive racks with crossovers and EQs are basically a thing of the past. I worked through those days and am happy not to use that gear anymore. If a production requires very high sound pressure level (SPL) playback or on-set monitor mixing becomes critical, I then recommend employing a professional touring company to join the team.
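For readers curious what a tuned thump sample amounts to, here is a hedged sketch: a short 40 Hz sine burst with an exponential decay, written as a 16-bit mono WAV with Python's standard library. The 40 Hz choice follows the glossary note that thumps sit low enough to filter out of a track without harming vocals; the length and decay values are illustrative assumptions, not a recommendation.

```python
# A minimal sketch of generating a "thump" sample: a 40 Hz sine burst
# with exponential decay, written as 16-bit mono WAV at 48 kHz. The
# length and decay constants are illustrative choices, not a spec.
import math
import struct
import wave

SAMPLE_RATE = 48000
FREQ_HZ = 40.0          # fundamental of the thump, easy to filter later
LENGTH_S = 0.25         # a quarter-second burst (illustrative)
DECAY = 12.0            # larger = faster decay (illustrative)

def render_thump() -> bytes:
    frames = bytearray()
    for i in range(int(SAMPLE_RATE * LENGTH_S)):
        t = i / SAMPLE_RATE
        sample = math.sin(2 * math.pi * FREQ_HZ * t) * math.exp(-DECAY * t)
        frames += struct.pack("<h", int(sample * 32767))
    return bytes(frames)

with wave.open("thump_40hz.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(render_thump())
```

From there, tuning the thump on set is just a matter of adjusting frequency, length and decay until it reads cleanly through the subwoofer.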
The Playback Engineer should try to coordinate his efforts with both Editorial and the Production Mixer. A conversation with each before the assignment starts can sort out issues and make the process smoother. This is the best time to bring up the issue of playback timecode. Having both time-of-day (TOD) code and playback code married and available in burn-in windows for Editorial is the best way to load and edit synchronous music playback scenes. When loaded correctly, endless hours of sliding sync or making on-the-fly corrections will be completely avoided for the editorial team.
This production workflow is easily accomplished. For the Production Mixer, it’s only necessary to print the PB timecode on one analog track on your multi-track and the mono music playback reference on another track. Your multi-track is already synchronous with your TOD code.
The media management company contracted for dailies and editorial workflow can then easily meet the need for PB code in a second window, if requested. On a show where Editorial is taking your tracks directly, they can create the second code window on their own. Either way, it will save numerous days of questionable sync work.
The relationship between the Avid assistant and the Playback Engineer is vital to maintaining sync in the music scenes. The initial conversation between Playback and the Assistant Editor responsible for loading each day’s work into the Avid will set the tone between departments.
The Playback Engineer should provide to Editorial a master playback 48 kHz, 24-bit stereo interleaved file for each musical piece performed. The file should be created from the exact playback session and have the positional timecode reference identical to the day’s playback work. This file with the correct timestamp will enable the correct loading of all of the takes with music playback timecode. Sent at day’s end, the file labeled PB Edit Master, should go directly to the Avid assistant editor; I deliver this file via Aspera, with explanations regarding the use of the playback in the scene.
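The positional timecode reference in that delivery rests on simple sync math: in a Broadcast Wave (BWF) file, the timestamp is carried in the bext chunk's TimeReference field, expressed in samples since midnight. The sketch below computes that value for a given start timecode at 48 kHz; the function name is mine, not part of any delivery spec, and it assumes non-drop 24 fps timecode.

```python
# A hedged sketch of the BWF timestamp math behind a PB Edit Master:
# the bext chunk's TimeReference is a sample count since midnight.
# Assumes non-drop 24 fps timecode; the function name is illustrative.

def bwf_time_reference(tc: str, fps: int = 24, sample_rate: int = 48000) -> int:
    """Convert HH:MM:SS:FF timecode into a BWF TimeReference sample count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    seconds = hh * 3600 + mm * 60 + ss
    return seconds * sample_rate + round(ff * sample_rate / fps)

# A song parked at hour 02 starts 2 * 3600 * 48000 samples after midnight.
print(bwf_time_reference("02:00:00:00"))  # 345600000
```

When the file's TimeReference matches the session's playback position exactly, the Avid assistant can drop every take into place without sliding sync by eye.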
I’ve found that it takes a complete team effort to pull off a complicated PB, live-record, earwig, thumper day on set. Technology has gotten more complicated and offers more production possibilities, but it also increases the workload. Personally, my favorite shows are a team effort with playback integrated into the sound crew. Coordination of cable runs, speaker and thumper placement, music edits and session maintenance, music cues with the 1st AD and earwigs to talent is all very doable when executed by the whole team.
In my experience, the most effective way to operate PB is to coordinate with all the departments responsible for the creative process, before stepping onto set. The Playback Engineer can act as a bridge between Production and Post Production on the music scenes, assisting workflow and maintaining accountability. From my perspective, an effective Playback Engineer is always prepared before coming to set each day. Wise colleagues in Production and Post should bring him aboard early enough to make those preparations.
Glossary for highlighted words
Stem A mix of multiple audio sources. Example: A blend of music and effects, without dialog. The use of a stem allows complex source material to be treated as a single unit in the final mix or as a temporary part of the process of editing and recording audio.
Live-Recording The process of recording a musical performance on set rather than having the players mime to the playback of a studio session. Sometimes a live-recording will be used to generate a playback master that is immediately put into service to shoot alternate angles and closeups.
Earwig A miniature monitor designed to fit within the ear canal like a hearing aid.
Thumper A playback system to reproduce the beat of music as a series of low-frequency thumps. The tones are typically about 40 Hz so they may easily be removed from a track without harm to recorded vocals. A special thumper speaker system optimized for low-frequency reproduction is used to play the track. The thumps permit performers to follow the beat of the music without musical playback that might interfere with dialog recording. Originally invented by Hal and Alan Landaker for Warner Bros. Studios. (See 695 Quarterly, Volume 2, Issue 1, Winter 2010)
Aspera A company making software to facilitate transfers of large data files.