Did Marion Keisker record Elvis? A female audio engineer's perspective

Sam Phillips, Elvis Presley, and Marion Keisker

I recently researched Marion Keisker, a woman known for helping Elvis Presley early in his career while she worked at Sun Records/Memphis Recording Service. Marion is sometimes credited with recording Elvis for the first time (which is what caught my interest in her story). Her boss, Sam Phillips, also claims to have done the first Elvis recording.

We may never definitively know who did the recording because all three people involved have passed away. At the end of the day, it may have come down to flipping a couple of switches. But what’s interesting about this story is that Marion could have done the recording. Even if she didn't, there weren’t many recording studios back then, let alone women working in them, which still makes her story incredible. Why, then, is she so often credited as Sam's secretary or personal assistant?

Life at a recording studio

For anyone who hasn’t worked in a recording studio, it's far from a normal job. A studio operates around the clock, with some artists starting their work late at night. As an employee, you don’t know if a session is going to run an hour or twelve, and some positions are on call 24/7. Some weeks can be grueling (with little control over your schedule).

At the same time, it’s exciting when you're in the room with a good artist. Some days it doesn't feel like work at all. An old boss of mine put it well: "This isn't a career path - it's an addiction."

There’s a sense of camaraderie and shared experience that I don’t see in many other jobs or fields (except perhaps the military). To work in an environment like that you have to be committed - whether it’s to your career, the music, or to the people you’re working with. 

There’s no doubt Marion’s commitment was to Sam. She wasn’t an engineer, and she didn’t know a lot about music when she took the job working for him. Marion wanted to see Sam fulfill his goal: starting a recording studio open to all races (at a time when that wasn't the norm). She helped with everything from bookkeeping to sweeping up acetate shavings from the recording lathe. When Marion was asked in an interview about being referred to as a secretary, she said, “It’s ok if they’ll also say I was office manager, assistant engineer and general Jane of all trades.”

Was Marion a recording engineer?

If you compare Memphis Recording Service to a modern day recording studio, Sam would be the owner/chief engineer and Marion would be the studio manager (much more than a secretary). Marion’s job would be to keep the studio operating from day to day (under the guidance of her boss). But, I wouldn’t call Marion a recording engineer because:

  • Sam was an experienced broadcast engineer (a job he had to get a license to do)

  • Sam alone designed the control room, picked the studio equipment and acoustical treatment for the tracking room

  • Sam didn't use much audio processing during recording. He didn't use EQ and used little limiting and compression

  • Because of this, studio recordings probably had Sam’s “sound” even if he wasn’t the one operating the equipment.

  • "Engineer" is a title given when someone is responsible for selecting mics, placing them, and making technical adjustments to manipulate sound. There's no evidence to support Marion doing any of this (even by her own account).

Marion as a tape operator

I would argue that Marion was a tape operator, not an engineer. By many accounts she had the intelligence and capability to learn and operate studio equipment. She told colleagues and said in interviews that she operated the lathe and the tape machine.

In Peter Guralnick’s biography of Sam, Sam Phillips: The Man Who Invented Rock 'n' Roll, Guralnick says he spoke to numerous people familiar with Sun’s operations who said “not only could she have operated it, she probably did.”

Looking at it from a business perspective

Marion was hard-working, willing to do whatever needed to be done, and was capable of operating equipment. Sam wanted Marion at the studio so it could be open as much as possible when he wasn’t there (when the studio opened they were both still working radio jobs at WREC). Sam's business card read “anyone-anywhere-anytime.”

It doesn’t seem rational for a business to be open only for Marion to have to turn away simple gigs. If Marion was capable of operating a lathe and someone wanted a simple demo, why not take advantage? When Marion tells the story of Elvis’s first recording, Sam was on his way to the coffee shop next door when Elvis came in. Sam told her to go ahead and do the recording. If anything had gone wrong, Sam would have been less than a two-minute walk away.

The counterargument is that Sam had to uphold his reputation as the studio’s engineer. As a business owner and engineer, I am very conscious of how the people I hire reflect on me. If you went to a business expecting someone with years of experience and credits and instead got the person you saw answering the phones and doing bookkeeping, what would you think of the business?

My theory...

I believe Marion knew how to operate equipment and she did it on occasion.  I'm guessing it was in the earlier years of the studio and probably didn't happen very often. Did she record Elvis? It's hard to say. Even Marion herself said she could be remembering it incorrectly.

Sam said in an interview that even if Marion were to record someone it would be for the purpose of playing for him. He didn't immediately say, "Oh no, she can't operate equipment so there's no way she could have done it." To me, that implies that scenario could have happened. Although, he did claim in other interviews Marion couldn't operate the lathe.

Once the studio got established the client base changed. I doubt they were recording weddings and funerals as much once the studio had a reputation. Musicians were coming to Memphis just trying to meet and work with Sam (like Johnny Cash, who was turned away at first).

With Sam’s reputation and experience he wouldn't put an important session in the hands of an inexperienced engineer. That might explain why no one else has come forward to say Marion was their tape op.  If Marion was working in the studio alone and did occasionally do a recording, it was likely for walk-in artists in the earliest years who paid their money for 10 or 15 minutes worth of studio time and possibly were never seen again. Initially, Elvis was just another local kid looking to make a demo.

Marion's story

Marion is probably best known for her time working at Sun Records, but that was a small part of her career. She produced many radio shows, ran a television network in the Air Force, and was a women's rights advocate. Check out these resources for more on Marion:

Profile of Marion Keisker

More Than a Supporting Role: Marion Keisker, Gender, Radio History (Academic paper by Melissa Meade)

Sun Record Company on Scotty Moore's site (lots of photos and tech info about the studio)

Thank you to Peter Guralnick, Jon Hornyak & Maureen Droney of the Recording Academy, J M VanEaton, US Air Force Historical Support Division, Billy "The Spa Guy" Stallings, Wes Dooley and Soundgirls for their assistance.

Searching Online for Audio Jobs

In the audio industry, there are a few types of job listings you'll commonly see online:

  • Audio manufacturers. These jobs involve creating the audio products that people use. Roles range from customer service (answering phones and emails) to quality assurance, product development, sales, programming/computer engineering, and more.
  • Schools. These jobs could be teaching audio, A/V work, or audio engineering positions.
  • Corporations. These are generally advertising full-time positions.
  • Temp help or one-time gigs. These are independent movies looking for sound, bands looking for recordings, venues looking for an engineer, etc.

You might notice one major item left off the list: studios. Music studios and post-production houses generally don't post jobs online. If they do, I ask...

Why are they listing online?

The old phrase "work comes by word of mouth” is totally true in the studio world. It means opportunities are most likely to come from people you know (your connections). A resume may not tell a lot about your work ethic or your ears, but a former co-worker or colleague can easily vouch that you’re a good fit for a job. If a manager at a studio has a good relationship with their employees and needs to hire, it’s a quick conversation: “We need to hire. Can you recommend anyone?” Chances are, someone has a friend, roommate, former classmate, or colleague who is perfect for the job and can be in for an interview quickly (vs the trouble of making a job listing, waiting for applicants, sifting through resumes, etc).

So, anytime I see an ad online for a studio it raises a red flag. Why is their existing crew not bringing in good candidates? Is this the type of place that people want to work, or is it a revolving door that always needs new people? It could be a great opportunity - but it could also be an early sign of a problem.

If it's too good to be true, it probably is. Pixar, for example, has an audio job listing show up once every year or two, and there's a frenzy of people applying for it. That's the type of job a lot of people dream about. Why would a company that's probably one of the most popular animation companies in the world need to ask for applicants? I wouldn't be surprised if they get multiple resumes a day from audio people. There's no harm in applying when a job listing at Skywalker Sound shows up online. I just wouldn't spend a week tweaking a resume for it (unless you know someone who is personally giving your resume to Leslie Ann Jones).

Corporate jobs

Some corporate jobs also fall under "if it's too good to be true, it probably is." Some corporations (and large companies) require that all job openings be posted publicly. It’s great for the public to find out about jobs, but sometimes the company already knows who they’re hiring and still has to post an ad. If you apply to one of these jobs, be objective about it – don’t wait around for them to contact you (the same could be said of any online job). If you have a connection to the company, take advantage. Sometimes there are hiring bonuses when an employee gives a recommendation, so you may find someone eager to help you.

It may be someone in HR or a recruiter looking at your resume first, and they may not understand the technical nuances. If you’re going to apply to a corporate job, tailor your resume so it has easy-to-read points, and includes some general details that could be understood by anyone reading it.

Amateur/Semi-professional work

A large subset of online ads is the amateur/semi-professional market. In film, there are a lot of self-taught filmmakers who seek sound help but don’t know any sound people. In music, there are bands everywhere looking for help with recording or live sound for gigs. There are a lot of opportunities, but the quality, talent level, and pay can vary significantly. It’s hard to distinguish this in an ad, too. If you’re going to apply for work in this market, ask a lot of questions before committing. Make sure that their expectations are in line with the work you are going to do (and not do), and be very clear about the budget and timeline (even better, get it in writing).

For films, ask for trailers or a clip to watch to get a sense of audio quality. For music, ask for past recordings or a demo (even an iPhone recording or YouTube video) just to hear what they sound like. There have been many times I’ve passed on a project because what someone said they needed was different from what they actually needed. For example, an unwritten song needs a songwriter, not a sound engineer. Film ads regularly confuse terms such as “sound mixer,” “Foley,” and “sound designer.”

Just because you inquire or put your name in the running doesn’t mean you have to take the work – especially if you have concerns about the level of professionalism or the person hiring you. The right project can be a great opportunity for learning and relationships, but it still may entail a lot of extra work, teaching/explaining what it is you do (and can’t do), and managing expectations.

Craigslist, Mandy, and other job sites

You can sometimes find great gigs on sites like these, but there are a couple of things to know:

  • Good gigs get a lot of responses. I've received over 100 emails in a few days for a studio internship. I've applied to jobs where they got hundreds of responses, too.
  • There are fake ads. A colleague once told me he posted a fake ad just to find out what his competitors were charging for similar work.
  • Be cautious handing out personal information. Craigslist uses anonymous email addresses so it's especially important to be protective of your information.

Standing out

The absolute best way to stand out is to find someone who will recommend you (whether it’s passing along your resume or being a name you can include in an email or cover letter – with their permission). Check your LinkedIn or Facebook networks for connections to the company and reach out. Ask your local friends or family if they know anyone who works for the company.

Tips for responding to online ads

  • Cater your resume/cover letter to every job you apply for. It's obvious when it's a canned response and even a little personalization can go a long way.
  • Check and double-check for mistakes. If you have misspellings or accidentally address the wrong person, studio, or job title, you might be done before you even had a chance.
  • Show a good attitude about the job you will actually be doing (and willingness to learn – even if it’s something you’ve done before). If the job listing is for an entry level job (like internship or assistant), it’s better to say in a cover letter, “I have a working car and I am willing to run errands” than to say, “I can engineer and mix.” 
  • Don’t give out your mailing address unless you can verify it’s going to a reputable source. Always include the city that you live in (out-of-town or anonymous locations may be dismissed immediately). Use caution giving out your phone number (or get a Google voice number – this may help if you don’t have a local area code, too).
  • Don’t give a bid, rate, or salary to an online ad (especially if it’s anonymous) unless you think it’s absolutely necessary. It’s better to ask for a phone conversation or say, “I’m happy to give a rate, but I’d like to verify some details first.”
  • Carefully read over the ad and follow directions.

When I used to screen internship resumes, I always removed candidates who weren’t physically in town for a meeting (within 25 miles), had spelling or basic grammar errors, or whose cover letter was clearly pasted from another email or application. I gave directions like, “Include resume in text of email; attachments will not be opened” and “Please include a cover letter where you tell us why you’re interested in our company.” Anyone who didn’t follow directions wasn’t considered - plus it helped find candidates who were attentive and good with details.

Accept online jobs for what they are

Online job sites can be a good supplement to a job search, but they shouldn’t be considered the primary means of looking for work. It’s a balance: if you spend too much time looking online, it’s time taken away from building your network, relationships, and skills. It’s good to set a limit on how much time you spend every day searching and applying online, and aim to spend just as much time trying to connect with people in the industry you’re looking for work in.

Different jobs at a post-production sound studio

If you’re looking to build a career in post-production sound (sound for picture, like television, film, and web), there are two primary routes:

  • Work for yourself
  • Work for a facility that specializes in post-production sound

There are advantages and disadvantages to both. If you don’t have a lot of experience, working for yourself could mean high competition for low-budget projects with varying quality levels. At the same time, it can be good experience to do all the sound jobs yourself.

The main advantages of starting out at a facility are:

  • You get to work with professionals which means more learning opportunities and relationships to help you in the future
  • You'll probably get better credits than the projects you land on your own. Having credits will help if you decide to go freelance later
  • You'll have the security of having a job (and knowing where your next gig will come from)
  • You’ll get exposed to a lot of different projects, styles, and people. All of this is good for your chops
  • Even if you don't get a lot of hands-on experience for a while, there's a ton to learn from observing

The main disadvantages of starting out at a facility:

  • It can be a lot of grunt work and long hours
  • You may spend more time out of the studio (helping with operations and tech) than in it 
  • It can possibly take years to move into hands-on roles like engineer or re-recording mixer

The jobs at a post-production sound facility typically include:

PA: A “production assistant” is someone who aids in daily operations. On an average day, you might be making coffee, answering phones or sitting at the front desk, stocking the kitchen with snacks and the studios with supplies, running errands (picking up food, supplies, or hard drives to and from clients), and taking out trash.

You may be one of the first ones to the studio in the morning and the last to leave. PAs don’t get to hang out in sessions much (unless it’s allowed off the clock), but there’s a lot you can learn just being around. PAs are hired as employees; they may be interns who were promoted or people who applied from outside the company. PA jobs are in high demand, and studios get a lot of applicants since it’s the “foot in the door” job.

Intern: Interns often do the same duties as a PA but may get more opportunities because they aren’t getting paid. An intern might get to sit in on sessions or do occasional light work (like sound editing). Interns come and go more frequently than PAs and there is no guarantee of getting hired. I know people who waited it out in internships for over a year (without pay!) before moving into a paid PA position. Unfortunately, some studios abuse the intern status so it’s important to ask questions to make sure it’s not just a PA job without pay.

Assistant (also called A2, assistant engineer, or machine room operator): Assistants help support the technical operations of the studio. If an engineer or mixer has an issue, they call an assistant to help. Job duties might be troubleshooting computer or gear issues, setting up and testing mics, opening and splitting AAFs, prepping Pro Tools sessions, file management/archiving, tape laybacks, quality control, and receiving/sending files to clients.

The way assistants tend to move up is by slowly getting opportunities at the studio - things like engineering sessions, doing sound editing, or small mixing projects (in addition to their normal job the rest of the time). Assistants are usually employees. If you’re an assistant who can engineer, edit, handle your own tech support, and know the day-to-day operations of a studio, you’re a truly indispensable employee. An assistant could be a promoted PA or intern but may also come from outside.

Sound editor: Sometimes sound editors are role-specific (dialog editor, sound designer, Foley editor), and sometimes a single sound editor covers all of those roles. Sound editors can be employees or freelancers. Sound editors are increasingly expected to know how to do detailed audio repair (using software like iZotope RX).

There is still a hierarchy of sound editors. Entry-level sound editors may only do simple tasks like cutting background sound fx, editing recorded Foley, or light sound design (these may be called "assistant sound editors"). Lead editors get to do the heavy creative lifting. Sound editors can be trained and promoted from within or come from outside the company. Freelancers are expected to already have some editing experience/credits and possibly work off-site. Side note: A "music editor" (by title) is not an employee of a post-production studio. Those jobs fall under music/music editing companies.

Engineer: There are generally three types of engineering gigs in post-production: recording voice-over, ADR, and Foley. Some facilities have dedicated engineers, and sometimes engineering duties are part of other jobs. For example, a mixer may record VO as part of his/her mix session.

Some engineers are hired freelance by the session or project and others are employees. Freelancers are expected to have engineering experience/credits already.

Sound supervisor: The sound supervisor oversees the sound process. He/she may be involved with scheduling or delegating work to sound editors. If there are questions (technical or creative) before the mix, the sound supervisor is the person in the know or the one who will communicate with the client to find out. Traditionally, the sound supervisor has a meeting or spotting session (watching down a project to take notes and ask questions) with a director or picture editor. The supervisor also attends ADR sessions and the mix. Unfortunately, sound supervisor is one of the first jobs to go, or it gets combined into other positions when there are budget constraints. Some studios don’t have a designated sound supervisor, either - sometimes a lead assistant or lead sound editor handles similar duties but doesn’t hold the title.

Re-recording mixer: This is the person responsible for taking all of the elements of a mix (VO, edited dialog, recorded/edited ADR and Foley, sound design, music) and blending them together. Mixers are at the top of the hierarchy (in terms of sound jobs and pay), but along with that comes more responsibility: you're the point person with a client, which can be stressful at times.

Re-recording mixer work is increasingly becoming freelance/contract, but full-time opportunities do exist. Freelance mixers are generally expected to already have significant experience and credits and, in some cases, bring their own clients to a facility.

Important people to know behind the scenes

Operations manager: Oversees day-to-day tasks and handles issues at the studio (with clients and employees). He/she is involved with other aspects of the business such as accounting, sales, scheduling, HR, etc. Usually the studio owner is not the operations manager so these two people work closely together.

Scheduler: The scheduler coordinates client bookings and also books freelancers for sessions. Sometimes the scheduler is also the operations manager. It’s in a freelancer’s best interest to have a good relationship with the scheduler, since he/she may get to choose who to call for a session.

Sales: You probably won’t see a good salesperson at the studio all the time. It’s to your benefit to get to know the sales people, though, since they generally have a lot of relationships in the industry.

Originally featured on

Studio jobs: Why you have to start at the bottom

When you have a degree, Pro Tools chops, or job experience it may seem like a step back to start as an intern or PA at a studio. Why does it work this way? 

Studios need employees they can trust.


If a studio can’t trust you to make a lunch order without errors and pick it up on time, why would they trust you with a crucial delivery of a master tape or hard drive? Confidentiality is also important at a studio to protect the privacy of high-profile clients and their projects. Leaks can mean millions of dollars of losses. Like any relationship, it takes time to build trust between a studio and someone they don't know well.

Studios need to know the people they hire can do the work needed.

A surprising number of people embellish their resumes. Sometimes applicants have no idea they don’t have the proper skills or credentials for the job they are applying for. It takes practice to be good at any job, and a studio isn't going to pay someone to learn on their most important client’s dime.

I wouldn’t recommend applying for a job like sound editor or engineer unless you can show at least one prior job (at a studio) with the same title and no fewer than a half-dozen credits. I wouldn’t apply for a mixer job without a dozen mixing credits and two years of experience. You may only have one chance to get a meeting or interview, and it’s a risk to try for a job above where your experience and credits are.

Studios want employees they feel comfortable having represent the studio.

Everything from how you dress to what you say reflects on the studio you are working for. You have to be a team player who's looking out for the studio, not your own career. True story: a studio I worked at had an intern offer his business card to a client when the mixer left the room. It may have seemed like a good sales opportunity to the intern, but his job was to pick up the dirty plates in the room. How did that make the studio look to the client? The intern was fired that day.

There are technical skills that take time to learn and experience to get good at.

In post-production, it takes time to develop an eye for sync. Any audio engineer needs practice to develop his/her ear. Troubleshooting and recognizing problems gets faster the more you have to do it (and come across the same problems).

You might be thinking, It's a catch-22. How can I get work if I can’t get credits to show I can do it?

This is why the first couple of years in the field are an optimal time to camp out at a studio where you can watch and learn as much as possible. It'll probably feel like you're still in school but you're getting paid (hopefully)! In time, a studio will give you an opportunity. If it goes well, you'll probably get another opportunity and so on. It takes patience - and being open-minded to learning whatever is in front of you. There's something to learn in every job that will help you later in the job you want.

5 tips to land a job at a post-production sound studio

We covered why you might want to work for a studio in Different Jobs at a Post-Production Sound Studio. Now we're going to talk about how to land a studio job.

1. Get a recommendation from a connection 

5 ways to land a job in PPS.png

A lot of studios do not post job listings online and hire by word of mouth instead. Sometimes they don't have to go past employees to find good applicants (between friends, roommates, and colleagues looking for work). 

Do some sleuthing to find out if you know someone who works at a studio you're interested in, has worked there, or is friends with someone who works there. LinkedIn and Facebook can be good for this. Always contact your connection and ask permission to use them as a recommendation. Then, when you contact the studio manager, start by mentioning the person who recommended you.

2. Cater your resume to the position you’re applying to.

Studios want to see that you're willing to do the actual job you're being hired for - not the one you're working towards. If you're applying for a PA job, having a car or experience in the service industry can be an advantage. For assistant or machine room operator, skills that give an advantage are IT/networking, soldering, and computer or electronics knowledge (especially repair).

Nearly every applicant knows how to operate a computer and Pro Tools; it pretty much can be assumed you know standard software and hardware without mentioning it. If you are exceptionally good at something (cleanup with Izotope RX, for example) or have a unique technical skill, this is worth mentioning.

Focus on the things that are different about you that might help the studio. Do you speak a foreign language? As mentioned before, are you good at computer repair, or did you use to be a barista? They want to see that you're going to be a team player.

3. Be open-minded to get a foot in the door.

Building a career takes time. Most people don't go from college graduate to staff re-recording mixer in a summer - or even a decade, sometimes. It's easy to miss a good opportunity because a job isn't exactly what you're trying to pursue. At my first post-production studio job, I was an assistant scheduler! While it wasn't at all my career goal, I had the chance to meet a lot of people and learn the inner workings of the studio in ways I never would have otherwise.

4. Focus your time in the right places. 

Keep your CV (list of credits/projects) and IMDb page up to date. If you're applying for a job like editor, engineer, or re-recording mixer, a studio or employer will be interested and may check your IMDb page before meeting. Tips for adding IMDb credits:

  • There’s an option for “uncredited” if your name wasn’t in the credits.
  • If you have time, add the entire sound department. This helps out your colleagues plus it’s not as obvious you were the one who added it.

A demo is not necessary. In post-production sound, we usually don't have control over the source material or the deadline. Our "best" work may not be flawless. It's more important to show you can get the work done on-time with whatever hurdles come up. If there's any questions about your ability, you can always offer to do a test session.

Don't spend much time looking online for jobs. As mentioned earlier, entry-level jobs tend to come from word of mouth and are not advertised.

Build connections and get to know people. Your best advocates for finding you work are your connections. Would you rather look for a job alone or with dozens of people keeping an ear open for you? The more you build your connections, the more people can help you. Go to industry events and talk to people. If there's someone whose work you're interested in, ask if you can meet for coffee, or buy them lunch and find out how they got into the industry. What do they recommend to land a job? Finding work is really a team sport, and it will be throughout your career.

5. If you get an interview, be yourself.

Studios get so many applicants for every job they don’t have to pick the person with the most experience. They may pick someone based on temperament. Studio employees (especially entry level) spend a lot of time together so they want to see you'll fit in with the team and be fun to hang out with (especially on really long or stressful days). 

Studios also look for applicants with enthusiasm for the job they're being hired for - like not expecting an immediate promotion or to be mixing as an intern. Studios also pay attention to how you carry yourself: do you seem comfortable in the interview? Are you easy to talk to? It's ok to be nervous, but are you still able to hold a conversation and speak articulately? Every studio job (from intern to mixer) involves some level of client services. Communication skills are very important because we are a service-based industry.

If you have good people skills, the interview is the time to show off. If you're shy or freeze up under pressure, that won't keep you from getting the job, either. If the social side isn't your strength, practice! Mock interviews and talking to strangers are good ways to build up your confidence.

Common Audio AAF Issues with Premiere

Adobe Premiere Pro is known among sound editors and mixers to be problematic when receiving audio by AAF (or OMF). There are bugs, features that aren't compatible with the AAF/OMF formats, and very little information from Adobe about any of it. I recently worked on a project that hit four major bugs requiring in-depth troubleshooting. I found very little about some of these issues online.

I'll refer to two types of AAFs: “Embedded” and “link to source” (in Pro Tools terms);  “Embed Audio” and “Separate Audio” (in Premiere's terms).

  • Issue 1: Distortion/Garbage audio when exporting embedded AAF from Premiere Pro

  • Issue 2: AAFs are corrupt/missing tracks if the Premiere sequence contains nested clips

  • Issue 3: Premiere cannot export an AAF with transition effects

  • Issue 4: When an AAF links to source media, Premiere points to multi-channel audio files (not supported by Pro Tools). Side effects:

    • Files will be missing in Pro Tools and won’t relink (despite having the files)

    • Clips may be truncated or missing completely from the timeline.

    • The Pro Tools Import session notes will show “Pro Tools does not support import of AAF/OMF reference to multi-channel audio files” and likely other errors

Issue 1: Distortion/Garbage audio when exporting embedded AAF from Premiere Pro

This is a known bug that was fixed in Premiere version 12.0.1 (January 2018); however, it's still a problem for any sequence that was used in previous versions of Premiere.

In simple terms: when Premiere exports an AAF (with embedded media) it messes up some of the audio, making it unusable.

In tech terms: Premiere incorrectly processes audio that exceeds 0dBFS (which shouldn't be allowed anyhow) as it goes into the AAF wrapper. It may have a brick-wall limiter programmed in, but it's not working correctly - it interpolates the audio back to the zero crossing point. The result is a bizarre distortion that can't be repaired with audio correction tools. Here's what the audio looks like in Pro Tools with the bug (upper/blue track), what it should look like (lower/red track), and the distortion up close:



Cause of the bug: Attribute data in some audio clips (I suspect it’s volume or channel volume specifically but didn’t test each parameter individually).
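To make the failure mode concrete, here's a minimal Python sketch contrasting what a correct brick-wall stage should do with a fold-to-zero model of the bug. The fold-to-zero behavior is my assumption based on the waveforms (not documented Adobe behavior), and the function names are mine.

```python
def clamp_correct(samples, ceiling=1.0):
    """What a working brick-wall stage should do: hard-clip any overs."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def fold_to_zero_bug(samples, ceiling=1.0):
    """Hypothetical model of the bug: overs get pulled back toward zero
    instead of clipped, producing distortion no correction tool can fix."""
    return [0.0 if abs(s) > ceiling else s for s in samples]

samples = [0.3, 1.4, -0.8, -1.2, 0.9]
print(clamp_correct(samples))     # [0.3, 1.0, -0.8, -1.0, 0.9]
print(fold_to_zero_bug(samples))  # [0.3, 0.0, -0.8, 0.0, 0.9]
```

Either way, audio that never exceeds 0dBFS passes through untouched, which is why the bug only bites on clips with hot attribute data.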

What doesn’t work to fix it:

  • Other versions of Pro Tools or other DAWs (tested in Logic and Nuendo). The problem occurs in the export from Premiere.

  • Opening the sequence in 12.0.1 or later.

  • Repairing attributes in Premiere 11.

What does work to fix it:

  1. Open the sequence in 12.0.1 (or later)

  2. Select clip (or all clips)

  3. Right click on the clip

  4. Select "Remove Attributes"

  5. Deactivate all the audio options (pan, volume, channel volume, and any other processing).

  6. Output an AAF with embedded media

Issue 2: AAFs are corrupt/missing tracks if the Premiere sequence contains nested clips

In simple terms: When opening an AAF, you're expecting to see a lot of tracks but instead there are only 2 (1 video and 1 audio). Other session data may not look correct, either.

In semi-tech terms: Some Premiere editors use “nesting” in their sequence as an organization tool. A good analogy would be file folders in a filing cabinet. Unfortunately, the AAF format isn’t as sophisticated and only understands a filing cabinet with a bunch of papers in it - it doesn’t know what to do with the folders. Instead of providing a warning, Premiere just outputs a junk AAF.

Solution: Remove all nesting from the sequence.

Left: Pro Tools AAF import window from a sequence with nesting; Right: With nesting removed. Note the name, start time, timecode format, and video frame rate are different.


Track count only shows 1 video and 1 audio track when it should have 24 audio tracks:

  • Premiere’s help site (for version 12) does mention this issue, but it isn't very clear about the consequences (“Avid Media Composer does not support linking to the nested sequences. Therefore, in the AAF file, there’s no linking between the master composition and the nested sequences.”)

Issue 3: Premiere cannot export an AAF with transition effects

This is a widely known issue among Premiere editors because it's impossible to miss: the AAF export fails. The problem is technically “overlapping transitions” and the only known workaround is to remove all the transitions. It’s a pain for everyone - editors removing their work, audio people getting material without fades - but it's a problem that comes from Adobe, and one they have chosen not to fix.

(I didn’t get a chance to troubleshoot this myself since it’s an issue I already knew about - if you know a workaround or how to detect the "overlapping transitions" that stop the export, please contact me!)

Issue 4: When an AAF links to source media, Premiere points to multi-channel audio files, which Pro Tools doesn’t support

There are a few symptoms to this problem:

  1. The Session Notes will show “Pro Tools does not support import of AAF/OMF reference to multi-channel audio files” and likely other errors

  2. Files will be missing in Pro Tools and won’t relink (despite having the file it's referring to)

  3. Clips may be truncated or missing completely from the timeline.

In simple terms: This is sort of like playing a game of Charades. You know the word, and you know the person you're playing with knows the word, too. But, for whatever reason, they just won't say the actual word and you can't tell them what it is. You probably have the file you need, but Pro Tools doesn't recognize it because it's coming in via AAF/OMF. (If you were importing the file on its own, it would work fine.)

In technical terms: We’re working with different programs that can each handle multi-channel audio independently, but once it goes through an AAF wrapper, Pro Tools doesn’t recognize it (and is unable to use it). It’s confusing for a few reasons. The Session Notes will say “Pro Tools does not support import of AAF/OMF reference to multi-channel audio files,” but it will still show all those files in the relinking window. If you try to manually relink and point to the file you need, Pro Tools won’t find any file candidates. In the relink window, if you select an offline file then drag the source file to the relink window, it will give you a warning: “One or more files are incompatible in format, sample rate or bit depth with the audio file that you are trying to relink.”

It appears as a relinking/missing media problem that can’t be fixed. You can have the same drive as the editor with all the same media and it won’t relink. The key is in the session notes - is there an error about multi-channel audio files?


The other errors:

“some clips had invalid bounds and were adjusted or deleted”

“Some OMF parsing errors occurred”

“Some renderings were missing”

“Some elements were dropped because they are beyond the maximum session length”



When this multi-channel issue happens, it may show up in the timeline as well. In the below example, all 7 tracks should look exactly the same (in terms of region placement and length) but instead look like this:

One known workaround is to export a Final Cut Pro XML from Premiere, which can be opened in Final Cut Pro 7 (the last version that allows AAF export) or in X2Pro Audio Convert. Another workaround is to export an AAF with embedded media instead of linking to source media.

Takeaways for Premiere editors (if you're exporting to a sound mixer or editor):

  • Don't use nested clips.

  • Don't use merged clips.

  • Audio transitions may need to be removed (if your AAF/OMF output fails)

This information is relevant as of January 2018 and was tested with Premiere CC 2018 (12.0.8), Pro Tools 12 & 2018, and Nuendo 8.

Post-Production basics: Mixing with Broadcast Limiters and Loudness Meters

Any time you’re working on a mix that’s going to broadcast, it’s important to ask for specs. Specs are essentially a set of rules for each broadcaster, such as:

  • How loud content can be (overall average and peak levels)
  • What format to deliver (files or tape) and how or where
  • Specific mix requirements (such as “no music in the center channel”)

Generally there will be a “spec sheet” for each broadcaster (i.e. ABC, CBS, BBC, etc) that your client will provide when asked. Spec sheets aren’t necessarily public or available online, but some are (such as NBC Universal). Some online content providers (like Amazon), movie theater chains, and movie distributors also have specs, so it’s always good to ask.

To understand some important concepts, we’ll take a look at PBS’s most recent specs (2016), found here.

For PBS, it’s a 21-page document that includes requirements for video, audio, how to deliver, file naming, closed captioning, etc. It gets pretty detailed, but it’s a good example of what a spec sheet looks like and the types of audio requirements that come up. The information in the spec sheet will dictate some details in your session, such as track layouts for 5.1, where your limiters should be set, dialog level, bars and tones, etc. We’ll break down a few of these important elements.

 PBS Technical Operating Specification 2016 – Part 1, Page 6 Sections 4.4.1, 4.4.2 – Audio Loudness Requirements


The three most important details to look for on a spec sheet are peak loudness, average loudness, and the ITU BS 1770 algorithm. These will be explained in detail below. In this case, the PBS specs are:

Peak Loudness: -2dBTP (“true peak” or 2 dB below full scale). This is your brickwall limiter on the master buss/output of the mix. In this case, it would be set to -2dB.

Average Loudness: -24dB LKFS +/-2 LU.

ITU BS 1770 Algorithm: ITU-R BS.1770-3. This is the algorithm used to measure average loudness.
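As a quick illustration, those two checks can be written out in Python. The numbers are the PBS values quoted above; the function name and structure are mine, just for illustration.

```python
def in_spec(integrated_lkfs, true_peak_dbtp,
            target=-24.0, tolerance=2.0, peak_ceiling=-2.0):
    """Return True if a mix passes both PBS-style loudness checks."""
    loudness_ok = abs(integrated_lkfs - target) <= tolerance
    peak_ok = true_peak_dbtp <= peak_ceiling
    return loudness_ok and peak_ok

print(in_spec(-24.5, -2.3))  # True: within -24 +/-2 LU, peaks under -2 dBTP
print(in_spec(-21.0, -2.3))  # False: the average is 3 LU hot
print(in_spec(-24.0, -0.5))  # False: true peak exceeds the -2 dBTP ceiling
```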

Background on the average loudness spec

Before 2012, there was only one loudness spec: peak loudness, enforced with a brickwall limiter placed at the end of the chain. Back then, most television networks (in North America) had a peak level of -10dBfs. From the outside (especially coming from the music world) it seems like an odd way to mix – basically, you’ve got 10 dB of empty headroom that you’re not allowed to use.

As long as your mix was limited at -10dB, it would pass QC even if it was squashed and sounded horrible. That’s what was happening, though, especially with commercials that were competing to be the loudest on the air. If you remember running for the remote every commercial break because they were uncomfortably louder, that was the issue.

In the US, Congress enacted the CALM Act, which went into effect in 2012 and required broadcasters to rein in these differences in loudness between programs and commercials. The spec that evolved from this was "average loudness level." A loudness measurement covers the length of the entire piece, whether it’s a 30 second spot or a 2 hour movie. Average loudness is measured with a loudness meter; popular measurement plugins are Dolby Media Meters, iZotope Insight, and Waves WLM.

 Izotope Insight in a Pro Tools session


The ITU developed an algorithm (ITU BS 1770) to calculate average loudness. The latest revision is 1770-4 (as of early 2017). In technical terms, loudness is an Leq reading using K-weighting, referenced to full scale; the designation for this reading is “dB LKFS”. In the PBS spec sheet, sections 4.4.1 and 4.4.2 say mixes should use ITU BS 1770-3, which is an older revision. This is an important detail, because when you’re measuring your mix, the plugin has to be set to the correct algorithm or the reading may be off. The PBS specs were written in 2016 (before 1770-4 came out); broadcasters update these every couple of years, especially as technology changes.
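For the curious, the core of the BS.1770 calculation is just a mean-square (Leq) measurement expressed in dB with a fixed offset. This Python sketch skips the K-weighting filter, channel weighting, and gating, so it's only a ballpark illustration of the math, not a compliant meter.

```python
import math

def loudness_lkfs(samples):
    """Mean square in dB with the BS.1770 offset (-0.691). A compliant
    meter would K-weight the audio and gate the measurement first."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# A full-scale sine has a mean square of 0.5, so it reads about -3.7:
sine = [math.sin(2 * math.pi * 997 * i / 48000) for i in range(48000)]
print(round(loudness_lkfs(sine), 1))  # -3.7
```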

In this PBS spec, the optimal average loudness is -24dB LKFS, but there is an acceptable tolerance of +/-2 LU (“Loudness Units”) above and below. Basically, that means your average loudness measurement can fall on or between -26dB LKFS and -22dB LKFS, but ideally you want your mix to hit -24dB LKFS. The measurement plugin will probably show a short term and a long term value. The short term reading may jump all over the place (including beyond your in-spec numbers); the overall (long term) reading is the important one. If the overall reading is out of range, it’s out of spec, won’t pass QC, and will likely be rejected for air. Or, it may hit air with an additional broadcast limiter that squashes the mix (and doesn’t sound good).

As HD television has become more popular, broadcasters have loosened up on the peak loudness range. PBS is pretty liberal with -2dBTP (or -2dBfs); some broadcasters are at -6dBfs and occasionally some are still at -10dBfs.

 Screenshot of a mix with a limiter at -10dBfs (you can see the compression smashing the mix. It doesn’t sound very good!) and the same mix without. If your average loudness reading is too hot and your mix looks like the upper, there’s a good chance that your mix (or dialog) is overcompressed.


The challenges of working with loudness specs

When the CALM Act went into effect, re-recording mixers worried loudness metering would be restrictive to creative mixing. In practice, average loudness is measured across the entire program, so there’s still room for some short-term dynamic range. Loudness specs can be a problem for certain content, though. For example, if you’re mixing a show with a cheering audience, the cheering is still picked up as dialog by the loudness meter. You could have a spec of -24dB LKFS (+/-2), mix the show host at -24dB LKFS (in spec), but every time the audience cheers the short term measurement is -14dB LKFS. The overall loudness measurement might come out at -18dB LKFS – way out of spec! So sometimes you end up mixing dialog on the low side or bringing down an audience more than feels natural to fall in spec.

Another difficulty of mixing with a loudness spec is making adjustments when your overall measurement is out of spec. A dB of LU (the unit of measurement for average loudness) is not the same as 1dBFS (full scale). If you drop the mix 1dB by volume automation, it’s not necessarily a 1dB change in average loudness. If you’re mixing a 30 second promo and the loudness level is out of spec it’s easy to adjust and recheck. If you’re mixing a 90 minute film, it takes a bit more work and time to finesse the mix and get a new measurement.
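A first pass at that adjustment can be sketched as a simple static trim (the function name is mine). As noted above, automation moves don't translate one-to-one into LU, so treat this as a starting point and always re-measure.

```python
def trim_estimate_db(measured_lkfs, target_lkfs=-24.0):
    """Static gain (in dB) to try as a first pass, then re-measure."""
    return target_lkfs - measured_lkfs

print(trim_estimate_db(-21.5))  # -2.5: mix is hot, pull it down and recheck
print(trim_estimate_db(-26.0))  # 2.0: mix is low, bring it up and recheck
```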

There’s software that will make these adjustments for you – basically you can tell the software what the specs are and it’ll make small adjustments so the mix will fall in spec. While this is a good tool to have in the toolbox, I encourage mixers to first learn how to adjust their mix by hand and ear to understand how loudness measurements and metering works.

Tips for working with loudness specs

I find in general if dialog is sitting between -10 and -20dBfs (instantaneous highs and lows) and not over-compressed, the average loudness reading should fall pretty close to -24dB LKFS. When I first started mixing to an average loudness spec, my mixes were often averaging hot (-20 to -22dB LKFS) when spec was -24. My ear had become accustomed to the sound of compressed dialog hitting a limiter on the master buss. What I’ve learned is that if you’re mixing with your dialog close to -24 dB LKFS (or -27 for film) you can bypass the master limiter and it should sound pretty seamless when you put it back in. If you’re noticing a big sound change with the limiter in, the overall reading will probably fall on the hot side.

When I start a mix, I usually dial in my dialog with a loudness meter visible. I’ll pick a scene or a character and set my channel strip (compressor, EQ, de-esser, noise reduction etc) so the dialog mix lands right on -24dB LKFS. I do this to “dial in” my ear to that loudness. It then acts as a reference, essentially.

One thing I like about mixing with a loudness spec is you don’t have to mix at 82 or 85 dB. While a room is optimally tuned for these levels, I personally don’t always listen this loud (especially if it’s just me/no client or I anticipate a long mixing day). Having a loudness meter helps when jumping between reference monitors or playing back through a television, too. I can set the TV to whatever level is comfortable and know that my mix is still in spec. When I’m mixing in an unfamiliar room, seeing the average loudness reading helps me acclimate, too.

When there's no loudness spec

I mix most projects to some sort of spec, even if the client says there are no specs. For indie films, I usually mix at -27dB LKFS and a limiter set to -2dBFS or -6dBFS (depending on the content). If an indie film gets picked up for distribution, the distributor may provide specs. Sometimes film festivals have specs that differ from the distributor, too. If you’ve already mixed with general specs in mind, it may not need adjusting down the road, or at least you will have a much better idea how much you’ll need to adjust to be in spec.

Article originally featured on

Post-Production Basics: Sound editing – Dialog

In What is an OMF or AAF and why does it matter, we covered file transfer between a video workstation and DAW and how to prep these materials for a sound editor. In this part, we will cover some of the basics of sound editorial.

Different types of sound editing

Sound editing for picture can be broken into different elements (and job titles):

  • Dialog editing (dialog editor)
  • Music editing (music editor)
  • Sound FX editing/sound design (sound designer, sound fx editor)
  • Foley editing (Foley editor)

These roles could be different people or it could be one person doing all of the above. In credits, if someone is listed as “Sound Editor” they likely worked on multiple elements.

Dialog Editing

As we saw in part one, the materials are brought into an audio workstation from a video workstation (through an AAF or OMF) and then “split” so that each element is placed on appropriate tracks. The dialog editor is responsible for going through all of the dialog tracks for the following:

  • Organizing files within each set of audio tracks
  • Sorting through tracks and removing regions so only the usable or preferred/best mics remain.
  • Once the appropriate mics are in place: adjusting fade ins, fade outs, cross fades, and filling in holes as necessary.
  • Removing unwanted sounds such as pops, clicks, hums, thumps, or other noises that can’t be removed by real-time mixing. Sometimes the dialog editor can remove other non-desirable sounds like dogs barking or sirens.
  • Repairing sounds that can’t be fixed by real-time mixing (such as mic dropouts)
  • Editing ADR (actor’s lines that were re-recorded in the studio) and voice-over narration

The fundamentals of dialog editing

dia edit vs orig.png

Here’s an example of a very basic dialog edit; the above track is edited, while the grey track (lower) is how it was delivered by the picture editor (via AAF).

Most dialog clips will need a fade in/fade out to make the ambience come in (or shift to another mic) more naturally.  Production dialog naturally has an audible noise floor (from background noise). For an exterior shot, this could be distant traffic or light wind; interior might be an air conditioner running or a refrigerator hum. In the above example, there’s a small spot where a mic is missing (on the "DIA ORIG" track). The dialog editor would need to “fill” that – in this case, the original audio in that area was clean so the region was extended to fill in the hole. 

Towards the end of the clip (the 5th region), an edit was moved slightly to clean up a bad dialog edit in the middle of a word. At the very end, the original audio had something going on (a noise or start of a new word). That had to be edited to add a clean fade out using audio from earlier in the track.
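Conceptually, the fade ins and fade outs described above are just gain ramps applied at the clip boundaries. Here's a hypothetical Python sketch of the idea (real DAW fades are usually curves, e.g. equal-power, rather than straight lines):

```python
def fade(samples, fade_in, fade_out):
    """Apply linear fade-in/fade-out ramps over a clip's samples."""
    out = list(samples)
    for i in range(fade_in):
        out[i] *= i / fade_in          # ramp up from silence
    for i in range(fade_out):
        out[-1 - i] *= i / fade_out    # ramp down to silence
    return out

clip = [1.0] * 8  # a stand-in clip of constant full-scale audio
print([round(x, 2) for x in fade(clip, fade_in=3, fade_out=2)])
# [0.0, 0.33, 0.67, 1.0, 1.0, 1.0, 0.5, 0.0]
```

The same ramps, overlapped across two adjacent clips, are what make a cross fade: one clip ramps down while the other ramps up.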

Removing mics

 Before dialog editing


 After dialog editing


This is a before and after look at two tracks of dialog. It’s two people with separate mics talking in close proximity. Even just looking at the regions (without listening), you can get a general idea of when one person (or both) is talking based on the size of the waveform. Even though it may appear obvious, it’s still a good idea to listen through each track to make sure you’re not removing anything important that’s hidden in the waveform (like a quiet word or laugh). In this example, the second region of dialog came from another scene (added by the picture editor or an assistant). That had to be replaced with fill from this scene to match sonically. Sometimes an audio replacement sounds fine in the picture edit bay but doesn’t work at all on the mix stage. Sometimes issues like that aren’t audible unless you’re listening with professional-quality headphones, studio monitors, or with a compressor on the dialog.

Dialog organization

There are a lot of different ways to organize dialog, and the style can change depending on a few factors (like the genre of the project or the mixer you're editing for). For example, when working on reality TV shows (or documentaries), I like working with two sets of dialog tracks: interviews and in-scene dialog. A scene could switch many times between action (in-scene dialog) and an interview of someone talking about what’s happening. Here’s an example of a show that uses that style:

Even though it’s the same person talking in-scene and in the interview, it doesn’t make sense logistically to have all that  audio on the same track. It’s different locations, different mics (or mic placement), and the source mics probably have different levels and EQs.

That style of dialog editing may not work for a scripted film or TV show, though. It may make more sense to have 5-10 generic dialog tracks. You typically want to edit the same character/same mic on the same tracks through a scene (in a new scene, they may switch to a different track). In this example, there are three people (and three mics):

Sometimes the style of dialog editing will be catered to the mixer you are editing for. Below is the same audio but edited to another mixer’s preferences (no straight fades, longer fade ins/outs, switching between tracks A-B and C-D between scenes):

If you're editing for another mixer, it’s always a good idea to speak with them beforehand to get a sense of their preferences. Some mixers have 5 dialog tracks in their template and others have 20. Some mixers only want a specific type of cross fade. It can help to see another project that was edited for that mixer, or to use the mixer's template so names will match. In essence, the dialog editor’s job is to make it easy and seamless for the mixer to import the dialog edit and start working as quickly as possible.

Removing sounds

It’s expected for a professional dialog editor to know how to do detailed audio cleanup using corrective software or plugins (with functions like declick, decrackle, and hum removal). Detail work is the focus; broadband noise reduction (globally reducing noise) typically happens during the mix, not in the dialog edit.

iZotope RX is software commonly used by dialog editors to remove problem sounds. It's sort of like Photoshop for audio. In the example below, there’s wind on the mic that’s causing rumble and clicks. The left side is the original audio; the right side is after it’s been treated with RX 5 (to remove low pops, plosives, and clicks):


The biggest change is in the low frequencies (seen as bright yellow at the bottom of the left photo). What’s impressive is that RX can remove this without compromising the quality of the dialog (with the appropriate settings). A mixer could achieve a similar result with a high pass EQ filter, but they would be losing the low end information completely – which can cause a shift in ambience or negatively affect the sound of the voice.
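For reference, the high pass filter mentioned above can be sketched as a standard one-pole RC high pass in Python. It illustrates why the approach loses low end wholesale: everything below the cutoff is attenuated, rumble and room tone alike.

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    """First-order (one-pole) RC high pass filter."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Sustained (low-frequency-like) energy decays toward zero after the filter:
rumble = [1.0] * 100
filtered = highpass(rumble, cutoff_hz=100.0, sample_rate=48000)
print(round(filtered[1], 3))  # 0.987, then decaying toward zero by the end
```

Spectral repair tools work differently: they target specific time/frequency regions, which is why they can take out the rumble without gutting everything below the cutoff.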

iZotope RX can also repair mic dropouts, as seen in this before and after:

izotope dropout.png

Tips for dialog editing

  • Add EQ and compression to your edit tracks (for temporary use) to listen closer to how the mixer will be hearing it. It may take some adjusting of the plugins between scenes, but the idea is to hear things that you might not catch otherwise. For example, some lavs sound very dull or boxy (especially if poorly placed). A lav might need 6 dB or more of a high end boost – significant enough to hear issues that went totally unnoticed without the boost. I like the Waves MV2 plugin for compression when editing dialog.
  • Sometimes it’s up to the dialog editor whether to cut a scene with lavs or boom mics but it's a discussion to have with your mixer. Some mixers generally prefer one  or prefer to have both options in the cut.
  • Unused mics: There are a couple of ways to handle mics that aren’t needed. If there are two mics on the same person and both sound pretty good, it’s ok to edit both and leave one unmuted and the other muted. You could also make “X” tracks (“X1, X2,” etc.) and place any unused audio there. Your mixer may or may not want these tracks (that’s another question to ask). Either way, it’s good to hang onto as much as possible in your own work session. If a mixer later asks, “were there any other mics for this spot?” you can easily see how many mic options there were and can listen to the alts (so you can explain why you chose the way you did).
  • If you’re doing any processing (declicking, etc.), it’s really important to keep a copy of the original somewhere accessible. Sometimes it’s muted on the track below, or you can make a track labelled “unprocessed” (or something similar) so you or the mixer can quickly get back to the original if needed. If only a small portion of a region is processed (and has handles) and the rest of the region is not processed, no copy is needed. In general, you want to make it quick and easy for anyone to get back to the original/unprocessed file.
  • Headphones versus studio monitors: This is a personal preference, but I typically prefer headphones unless I’m working in a good-sounding room with monitors that I know and trust. It’s hard to hear rumble on a speaker that only has a 6 inch woofer, for example. If I’m working at a studio, I would rather edit on a mix stage than in an edit bay (it’s not always possible, but it’s really helpful if you have the option). Even better is to work in the room where the final mix will take place. The mic choices that you make in one room may sound very different in another – especially between a small edit room and a mix bay.

Advanced dialog editing

This has been a basic overview of dialog editing. There’s more advanced skills that come up such as:

  • Removing sound fx that naturally occur in production audio so they can be used in the M&E (foreign versions)
  • Creating fill that can be used for ADR, holes, or used as transitions between mics
  • Adjusting mics for phase or sync issues
  • Conforming lav mics (from the source recording) when they aren’t included or cut by the editor

Who makes a good dialog editor?

Dialog editing is a good fit for people who like to work alone and is generally more independent and less stressful than mixing. You have to be detail-oriented and like problem solving. It’s rewarding because it’s often a drastic change between where you started and what it sounds like when you’re done. Dialog editing can be really challenging at times, too. As far as sound editing goes, it’s probably the most important job (because dialog is up front and center – literally).

What My Deaf Cat Taught Me About Sound


Yuki Cat on Rhodes Keyboard

Meet Yuki, one of my cats. She’s a tiny, feisty tabby. In 2015, we learned that Yuki (6 years old at the time) had gone deaf after having normal hearing most of her life. She probably lost her hearing gradually but it wasn’t obvious until one day when I was vacuuming right next to her and she was still happily curled up and sound asleep.

There’s a learning curve to owning a deaf pet – especially a cat that’s already stubborn and sleeps in places you can’t find. Deaf pets get extremely startled if you touch them when they don’t know you're there (through vibration, sight, or smell). Words they responded to before (like “dinner” or “no”) suddenly have no meaning. Yuki became cautious, spending a lot of time just trying to gauge her surroundings (such as the other cats who were unaware of her condition).

As an audio engineer, I naturally became curious and observant about what changed in her world without sound. When do we react to sound instinctually, and what is that reaction? It also made me question my own relationship with sound. Do sound engineers naturally favor sound to communicate since it’s what they do for a job?

One of the first changes I noticed in Yuki (and the most dramatic) was how much calmer she was. For most cats, a doorbell, vacuum, or an unfamiliar voice sends them running under the bed before they even know the cause of their fear. Humans have the same rush of anxiety or “fight or flight” response when a loud or unfamiliar sound catches us off guard. It’s a trick that we utilize in film sound and sound design regularly. How intense would a horror movie be without sound?

Watching Yuki, I realized that my perception of total silence was wrong.  For Yuki (who lived in silence) it brought out curiosity. When one sense is taken away, we naturally move focus to our other senses. We also adapt to changes in our environment over time so we may not be alarmed or disoriented by it. We’ve all been to a music concert that gradually got louder and louder without really noticing. Once you step outside the building (where there’s a drastic change in sound level), it’s totally obvious. The shift is what’s disorienting, not the silence.

Yuki talking loudly to a bird

With Yuki, we had to learn to communicate using senses other than sound. Some new forms of communication came easily, like using hand signals (waving “hello” or “come here”) to attract her by vision. If she doesn’t hear me approach, I stomp on the floor or tap near her so she feels movement or vibration. If she’s asleep, I can put food near her and Yuki will jump to alert – the same reaction as when I used to say “dinner.”

It can be hard without sound to send an emotional message. I didn’t realize how much I used an excited voice to get her to play, or talked calmly when she was being skittish. She’s become sensitive to new smells or when something familiar moves out of place, and cries loudly to let us know. Other than petting (touch), what else can you do to communicate that everything is ok? Yuki communicates by sound differently, too – she talks at full volume all the time now, so every meow sounds like distress (even if she’s just saying hello). It’s forcing me to use senses beyond sound, too, because I have to look at her body language and environment to see what she’s actually trying to communicate.

Sound is a means of communicating a message from point A to point B. It’s up to the sender to determine the message, how to send that message (sound, vision, smell, touch, etc.), and to assess how the message will be received. For example, comfort food can use taste or smell to elicit emotions like security, relaxation, or love. But what one person experiences as comforting might be exotic or have no meaning to someone else. A chef who specializes in comfort food has to consider: Who are the diners? What’s the message the chef wants to send, and how will it be received? Sound engineers have the same consideration: How will an audience react to a sound when the intention is to provoke a specific emotion? A musical instrument that’s familiar and popular in one culture might sound obnoxious and out of tune to another. A technology beep or ringtone may not have any meaning to other cultures, and it may not be relevant to our own in the future.

When someone new comes to visit

A message can change meaning depending on the environment, too. A warm blanket in the winter might be calming and soothing to the touch, but in the heat of summer, it could be uncomfortable and irritating. An ominous ambience sound might evoke a sense of fear in one scene, but in another, the same sound makes the audience laugh. A message can cross over senses, too. A clapping sound may not have meaning to Yuki anymore, but she still responds if she feels the air movement. A loud subwoofer in a theater is sound-based, but it could be effective because it’s also utilizing touch (vibration).

There’s more interplay between the senses than we consciously realize. Isn’t the nature of multimedia to create experiences that excite multiple senses? As sound engineers, maybe we need to ask more often: Is sound the most effective sense to send a message, or is it secondary to something else? It might seem like an unusual question to ask, but what could the audience be feeling (physical sensations), tasting, or smelling? Can we impact those senses through our use of sound? It’s an interesting exercise to move the focus away from our two primary work senses (sound and sight). It’s something I probably wouldn’t have considered if it weren’t for Yuki.

Collaborative Mixing: Thinking outside the Dubstage

Article co-written with Shaun Cunningham. April and Shaun have mixed a number of independent feature films together while working from different locations and with minimal time on the dub stage. In this article, they explain how they make it work.

This article was featured in CAS Quarterly Magazine, the official quarterly of the Cinema Audio Society. The article can be found at the link above (blue button) on page 20. 


Mixing and Your Focus Zone

Do you have trouble staying focused during a mix? Do you feel wiped at the end of a long mix day? Here’s the science of stimulation, and how it can be applied to audio work.

This article was featured in CAS Quarterly Magazine, the official quarterly of the Cinema Audio Society. The article can be found at the link above (blue button) on page 31.