Friday, December 3, 2010
In this week's labs, we have been editing together a piano recording. There are several different takes of the same piece, ranging in tempo. The assignment involved taking certain chunks of bars from different takes, depending on which had the better performance. We tried time expansion on some of the parts, but the piano is a complex instrument; the waveform and sounds still seem too complex for the software to fully and accurately maintain the original sound quality. After completing the assignment, we tried out some drum replacing/enhancing. To do this, create a duplicate of the track you want to replace. Find a transient that you like, and make sure you cut it where you want it. Bring it down to the new track, zoom all the way in to get the audio to the proper starting point, and make sure it matches the grid. If you don't stay locked to the grid, things can get messy fast and your audio will be all out of line. Taylor and I got some really cool bass samples that we edited into a custom rhythm. We then ran it through the Eventide harmonizer and got a few differently pitched rhythmic takes. We organized the pattern into a little odd-time groove in grid.
Friday, November 5, 2010
Compression settings:
The prerecorded track goes out A1 and the newly recorded track goes in A2. To listen back, they both go out 2 track.
The Millenia’s fastest attack setting is 2ms.
Set settings at 12 o’clock with a 3:1 ratio
Try a fast attack, fast release
Ratio 6:1
Low threshold, attack around 30ms, release 1000ms, 9:1 ratio
The attack lets the initial transient through. The release is slow enough that the room sound rushes up.
Fastest attack, slowest release, 10:1
Fast attack, slow release, 9:1
Medium fast attack, medium release
Relative snare timbres:
Thud – fast attack
Crack – slower attack
Changing the threshold will keep a consistent snare, with a ratio of 8:1 or higher
Try a fast attack, and a fast release. Then the same fast attack with a faster release.
Fast release, medium attack gets lots of punch out of a snare drum.
Slowest attack, fastest release, threshold at 9:00, and turn the make up gain up. Start with a ratio of 1.4:1, then 3:1, then 6:1.
The 3:1 setting sounds brighter than the 6:1 setting. The compressor also begins acting like an EQ.
Fastest attack, fastest release.
Fastest attack and slowest release, ratio 15:1. Long release will prevent the room sound from rushing up.
Fast attack shaves off the transient.
Low threshold, fully clockwise, everything is being compressed, fast attack and release, 10:1.
Fast attack, .5 sec release, ratio 10:1, low threshold, room sound not very present but overall loudness is up
Same settings with Medium attack -
Threshold fully clockwise, attack 10, release .7, ratio 8:1
A compressor acts like glue, and imparts a sonic characteristic when combining tracks that were recorded from different places.
4:1 ratio, 2 sec release, 2 ms attack
Release – how long it releases after the initial attack.
Raising the threshold means less audio will be compressed. Because little to no audio is getting compressed, the compressor acts as an amplifier.
Conservative settings make it sound more uniform.
Fast release can release too fast on the low frequencies and it will distort.
Super fast attack, long release, and lower threshold, you’ll get a crushing sound
Thursday, October 28, 2010
COMPRESSION
Compressors control maximum levels and maintain higher average loudness. Compressors and limiters are specialized amplifiers used to reduce dynamic range. The dynamic range is the distance between the loudest and softest parts of a wave. A flute produces a tone where the difference is about 3 dB. The human voice has a 10 dB dynamic range, while plucked instruments have about a 15 dB range. Our ears act as compressors as well: ears respond to the average loudness of a sound. Compressors are designed to include detector circuits that respond to an average signal level; a second circuit responds to peak signal levels. A brick wall limiter hits a threshold and stops there; the signal won't get any louder than the threshold.
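The dynamic-range figures above follow from the standard decibel formula. A quick sketch in Python (the amplitude ratios here are made up for illustration, not measured values):

```python
import math

def dynamic_range_db(loudest, softest):
    """Dynamic range: the distance in dB between the loudest and softest amplitudes."""
    return 20 * math.log10(loudest / softest)

# a ratio of ~1.41 between loudest and softest is about 3 dB (flute-like),
# a ratio of ~3.16 is about 10 dB (voice-like)
print(round(dynamic_range_db(1.41, 1.0), 1))   # 3.0
print(round(dynamic_range_db(3.16, 1.0), 1))   # 10.0
```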
Multiband compression: Some compressors and digital plugins can separate the compression into 3 or 4 different bands of lows, low mids, mids, and highs. This allows compression on only a certain range or “band” of frequencies.
Compressors: Optical (LA2A, Fairchild 670) – use a photoresistor: the signal drives a light bulb, so more audio means brighter light. Photocells can recognize the very subtle light changes caused by the incoming audio. FET (1176) – Field Effect Transistor: the first transistors that emulated tubes in the way they work. They are fast, clean, and reliable. VCA – Voltage Controlled Amplifier: the most versatile of all the compressors, with a higher level of control over the range. Varigain – doesn’t involve the same circuits. Digital compressors – exaggerated compressors: all settings can run from 0 to infinity, and you can get precision from this type versus the others.
Ratio – the degree to which the compressor reduces dynamic range, or the difference between signal increase at the input and at the output. 2:1 means that for every 2 dB of increase coming into the compressor above the threshold, only 1 dB comes out.
Threshold – the level of the incoming signal at which the compressor changes from a unity-gain amplifier into a compressor reducing gain. Everything above the threshold is being compressed. Once the threshold is reached, compression happens depending on the amount of signal coming in and the ratio setting. Knee – hard or soft; how the compressor behaves at the exact moment the signal reaches the threshold. A hard knee is sudden and abrupt; a soft knee eases into the compression. By manipulating the attack and knee, we change the envelope, mainly the attack and release.
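The ratio/threshold/knee behavior described here can be sketched as a static gain curve. A rough Python illustration (a generic textbook curve, not any particular unit's circuit):

```python
def compressor_gain_db(level_db, threshold_db, ratio, knee_db=0.0):
    """Static curve: unity gain below threshold; above it, every `ratio` dB
    of input yields 1 dB of output. A soft knee (knee_db > 0) eases the transition."""
    overshoot = level_db - threshold_db
    if knee_db > 0 and abs(overshoot) <= knee_db / 2:
        # soft knee: quadratic blend across the knee region
        return (1 / ratio - 1) * (overshoot + knee_db / 2) ** 2 / (2 * knee_db)
    if overshoot > 0:
        return (1 / ratio - 1) * overshoot   # negative value = gain reduction
    return 0.0

# 3:1 ratio, threshold -20 dB: a -8 dB input is 12 dB over, so 8 dB of reduction
print(round(compressor_gain_db(-8, -20, 3), 2))   # -8.0
```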
Attack – (brightness of character) the time it takes for the compressor to compress after the threshold has been reached. Attack times typically range from 1 ms to over 100 ms. The attack time affects tone in terms of brightness; a fast attack clamps down on the signal.
Release – the time the compressor takes to return to unity gain after the signal has fallen below the threshold; the compressor is released from gain reduction. A longer release time creates a darker sound; a shorter release makes it sound brighter. Typical range: 20 ms – 5 seconds, depending on the tempo and program material.
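Attack and release describe how quickly the detector follows the signal. A minimal one-pole envelope follower sketch in Python (a common textbook formulation, not the circuit of any unit mentioned above):

```python
import math

def envelope_follower(samples, sample_rate, attack_ms, release_ms):
    """One-pole smoother: the envelope rises at the attack rate when the
    input is louder than it, and falls at the release rate otherwise."""
    a_att = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        x = abs(x)
        coef = a_att if x > env else a_rel
        env = coef * env + (1 - coef) * x
        out.append(env)
    return out

# a burst then silence: a fast attack grabs it, a slow release lets go gradually
burst = [1.0] * 100 + [0.0] * 100
env = envelope_follower(burst, 1000, attack_ms=2, release_ms=1000)
```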
Fast release on a bass distorts it.
Slow attack – all the brightness comes through on a snare. Fast attack for snare – dulled sound
Brings the loud portion down, and make-up gain restores the overall level.
Send a snare track from Pro Tools out A 1-2 into the Line 1 inputs. Send that signal out of the channel insert sends into the Distressor, and then send the signal out of the Distressor to the channel insert returns. Start with the input, attack, release, and output set to 5 on the Distressor module.
Friday, October 22, 2010
In the studio this week, Taylor and I practiced sub-grouping and putting the song Jonesy through the MTA 980, on the group bus, to the monitor groups, and then back into the box to record the mix. We used 3 groups: Drums/Bass, Gtr/Horn/E Percussion, and Vocals. That means we had to use 6 tracks: 3 stereo AUX tracks to get out of Pro Tools and into the board, and 3 stereo audio tracks to get the mix back into Pro Tools. We then got started on Raw Tracks 4, which is not a very good track. In fact, if I were in the real world and had some credibility as a producer, I would not accept this as a paid gig. I would send it back and tell them to get it re-tracked, or to hire new musicians. The main thing I noticed was that there were a ton of vocal tracks, which isn't the initial problem. In most of these recordings the vocalists' words don't line up, and it would take an unnecessary amount of editing and be a huge waste of time, which is why I suggested it be re-tracked. We edited the tracks and cleaned the session up.
Tuesday, October 19, 2010
Groups, Submixing In and out of the box, and Stems
The board has just been fixed and it's awesome. Today we learned some new stuff: sub-mixing and stemming, both in and out of the box. To mix in the box, we still use the 2Tk1 and Mix buttons on the master fader. Going through the board, we can use the master fader itself, with the Mix button down. Start by grouping a few instruments together (all of the drum tracks with bass, the guitar with piano, and all of the vocals) into 3 separate stereo aux tracks to be labeled as subgroups in Pro Tools. We get subgroups by sending the original audio tracks to AUX tracks. To record a stem in the box, create 3 stereo audio channels with the inputs set the same as the outputs of the subgroup channels. Mute and record enable. To get the submixes out of the box, patch those stereo aux tracks out 1-2, 3-4, 5-6 into line inputs on the 6 channels of your choice on the board (1, 2, 3, 4, 5, 6) and pan the channels L R L R L R. You can send these subgroups to the group bus via the group bus matrix (the buttons at the top of any channel strip on the board). These are sent to the red faders at the right of the board, the monitor group channels (Yes, Will… they are monitor and group channels!). Send the drums/bass to 1 and 2, the gtr/piano to 3 and 4, and the vocals to 5 and 6, and make sure the monitor level pot is dialed all the way up. Then pan each group hard left and right (1L, 2R, 3L, 4R, 5L, 6R). Now that we have these submixes/groups through the board, and we have our submixes in the box to be recorded as stems, we want to send the monitor groups back into Pro Tools to be recorded out of the box as stems. Create 3 more stereo audio tracks in Pro Tools and label them with the inputs that you patch them to. Patch the monitor groups out of the “group outputs – 1, 2, 3, 4, 5, 6” on the patch bay into Pro Tools inputs that are not being used as outputs, or you will get a feedback loop (say, in B-1, B-2, B-3, B-4, B-5, and B-6).
In Pro Tools, set the inputs of the 3 stereo audio tracks you created to correspond (drums B1-2, gtr/piano B3-4, and vox B5-6) and record enable. You should now be able to record your in- and out-of-the-box stereo stems from the submixes you created.
Thursday, October 14, 2010
“The Inner or Deep Part of an Animal or Plant Structure” Bjork DVD
Bjork collaborates with many different artists. For the album Medulla, she worked with a variety of artists from different parts of the world. This interested me a lot, seeing as she did much of her work on her own in the earlier days. I think she is very culturally aware; she thought to incorporate many different styles of music into this project. Bjork used the Icelandic and English languages for the lyrics on this record. A few places she went for the recordings of this album were New York, Iceland, and Brazil. Medulla is a concept album, meaning it revolves around a particular idea or concept; in Bjork's case this was to be an “all vocals” album. Back when she was a teenager she always knew that she was going to do a vocal album, but she wanted to wait until the right time and the right place. Rahzel is a famous artist with an incredible beat-boxing talent. Bjork heard of and met Rahzel in New York and invited him to work on the album. It was really cool how he made a bass drum sound with his throat, in the pattern of a rhythm that resembled a heartbeat in a song. Mark Bell programmed some electronic beats that eventually became the beats that Rahzel imitated with his skills. She worked with Mike Patton because he's from the rock background: very experimental, open-minded, and a good guy to work with. Bjork felt she had done all she could do by herself, and started to work with an engineer and incorporate other people's voices: voices emulating the land, or animals, or whatever is going on at the time. *The Inuit culture is one that lives in extreme weather conditions in places like northern Canada, the Arctic, northern Russia, and Alaska. These people have to use the environment around them and the resources there (which are minimal) to sustain life.
The September 11th tragedy got her thinking and invoked the idea of primitive elements: back in the day, we didn't have all of the convenient resources we have available to us today. One of the only things we are born with as humans is a voice, and she wanted to express that we can use whatever primal resources we have to create the things we envision. I think Bjork is inspired by this, in that she mentioned she wanted to have an album that contained “blood, bones and meat”, or in other words, an album where all sounds are derived from human expression. She had made a collection of hundreds of voices and effects, but she only ended up using a few! Despite the technological capacity we have today, Bjork and other artists are making killer-sounding records with a DIGI888 interface that is only 16-bit, 44.1 kHz. When conceptualizing the album, Bjork wasn't focused on lyrics at first. Reading through the diary she had written, she felt that words wouldn't be as important as the message and emotion that human breaths, noises, and voices are capable of creating. As she gathered more and more vocal tracks, she became more conscious of how the voices were fitting in with each other, and their relationships with one another. Nigel Godrich has produced many albums, including Radiohead's OK Computer, and has mentioned in an interview that he may dream of producing Bjork, but she's already so interesting in her own way. Spike Stent produced Bjork's album Homogenic, and helped out on the production of Medulla. During the recording process, Bjork wears many different hats. Not only is she the inventor of the concept and the mastermind behind the production and composition, she acts as a conductor and an engineer. She also acts as a translator when trying to describe to someone the sound that she is looking for.
The Mexican food was great ;)
Friday, October 8, 2010
In class this week we went over possible combinations of inexpensive home studios, and the gear necessary to have one up and running with 16 channels in and out. Starting with the computer: have a computer powerful enough to handle the processes carried out by a DAW like Pro Tools or Logic, with at least 2 GB of RAM. For Pro Tools, the DIGI 002 or 003 are good models; they run between $1,000 and $1,500. A 1/4-inch patchbay will be necessary for routing signals where you want them to go. The API 312 mic pres are pretty nice, and the API 550s and 560s are small, modular EQs that you can fit in a rack or on a channel strip. For Logic, an RME interface will work; Pro Tools will only accept Digidesign hardware.
We have been working on the mix for RawTracks 3 and I still can't get over how well this song was tracked. All of the tracks are super clear sounding, with very little bleed. The coolest thing we've done so far in lab was rhythmically rearranging one of the vocal lines at the end of the song; it sounds like the singer is rapping over it. We got our mono mix out of the board recorded into Pro Tools. We compressed the kick and snare through the Distressors, and the vocals through the Millennia. The snare was sent to a cool reverb-delay, and we sent the electronic percussion track there as well. When we did a mixdown, we used the board as an instrument too and performed live automation! We controlled the reverb at certain sections of the song with the AUX knob. The board was being serviced and repaired for the rest of the week so we weren't able to do another mix, but we will pick back up on Tuesday. There have been many issues with the board, so it's great that it will be running solid soon.
Friday, October 1, 2010
This week we had a lab practical. The goal was to correctly patch and get signal and outboard effects to different tracks through the board. We were responsible for getting a drum mix and a vocal track through the board and into Pro Tools. Compression of the kick, snare, and toms was required using 4 compressors: the 2 Distressors and the 2 Millennia units. Reverb went on the vocals using the Lexicon PCM91, and a second reverb on the snare using the SPX90. I started by looking at the output path selector on the digital channel strip in the computer, and made sure the outputs were set to what I wanted. There were seven overall tracks, so outputs A1-A7 were used. I patched out of Pro Tools on the patchbay, and into the Line 1 inputs, channels 18-24. Over on the channel strips on the board, I set the faders at unity gain and pressed in the Line 1 and Mix buttons at the top by the mix bus. To get the kick, snare, and toms compressed, I patched the signal out of the channel insert sends into the respective outboard gear, and out of the gear into the channel insert returns. Now all instruments that needed compression had it. Reverb was next, and since it is a send, I used the AUX sends. I patched the Lexicon into AUX 1 and the SPX90 into AUX 2. To do this properly, patch the AUX sends to the inputs of the effects processors and the outputs of the processors into the echo returns section of the patchbay, then turn up echo return faders 1 and 2. On the channel strips themselves, in the auxiliary section, I dialed in the reverb signal on the rotary pots (Aux 2 for the snare, and Aux 1 for the vocals). I felt I did all of this rather quickly, and without any confusion as to where the signal flow was being routed. This week in labs, we started mixing RawTracks 3, and this is a more fun song than the last two. I also noticed something right off the bat about this tune.
In listening to it, I am very impressed with the way that this song was tracked, much better than the previous 2 we had been working on. There is great isolation of all instruments through the miking, and very little bleed. Gates will work really well for most of the drum tracks, while they won’t be necessary on others. We started by properly naming and grouping the tracks, and giving the song a complete listen through before we began cleaning up the tracks and editing. Next week we will start EQing and processing tracks for 2 mono and 2 stereo mixes, one of each in the box and out of the box.
Friday, September 24, 2010
In mixing the track this week we experimented a lot with a reverb track and automation. We created an auxiliary track and sent the overheads to it. I wanted to do this because we had edited out a section of the song, and it then sounded like it cut out too abruptly. First we tried cutting everything but the overheads, but there were toms being played, and I was just looking for a ring. So I thought of automating in the tail reverb right as the music cuts out. It turned into a cool effect that we then decided to pan across the stereo spectrum. That just about wrapped up our stereo mix in the box before we bounced it.
Proposed Production Schedule:
-Tuesday, Sept 14
~Housekeeping
~EQ at least the drums
-Thursday, Sept 16
~EQ, compression, etc.
-Tuesday, Sept 21
~Mix through the board
This week my group did our presentation for chapter 13 in the Mixing Concepts book, and here are the other groups' notes as well:
Tracks
-Audio
-Aux
-MIDI
-Instrument
Mixer Strips
-Input Selection
-Output Selection
-Insert Slots
-Send Slots
Solos
Control Grouping
Audio Grouping
Sends and Effects
Naming Buses
Internal Architecture
-Integer Notation
~The highest amplitude a 16-bit sample can represent is 65,535 (2^16 − 1). Anything above this results in clipping
-Floating-Point Notation
~A floating-point sample can theoretically handle any amplitude
-How they work together
~Pro Tools allows two hot signals to be summed without clipping. When bouncing in Pro Tools, the audio is converted from float into integer. If you bounce onto a 16-bit file, you lose 54dB of range
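The float-versus-integer point can be demonstrated numerically. A small Python sketch (the sample values are hypothetical):

```python
# Two hot float signals sum fine, but converting the sum straight to
# 16-bit integers clips at full scale.
INT16_MAX = 32767

a, b = 0.8, 0.7                    # two signals near full scale (1.0 = 0 dBFS)
float_sum = a + b                  # no clipping in floating point
as_int = int(float_sum * INT16_MAX)
clipped = max(-32768, min(INT16_MAX, as_int))
print(float_sum > 1.0, clipped)    # True 32767
```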
Dither
-To avoid producing repeating decimals, processors round off this data. Since the data is now incorrect and rounded off in the same way every time, distortion is produced. Dithering randomizes the rounding off so that a "low level of random noise" is created.
-Most audio sequencers ship with dither capabilities
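The "low level of random noise" can be sketched in a toy Python quantizer: add triangular (TPDF) noise, built from the difference of two uniform random values, before rounding (the step size here is arbitrary):

```python
import random

def quantize(x, step, dither=False):
    """Round x to multiples of step; TPDF dither randomizes the rounding
    so the error isn't repeated identically every time (which would distort)."""
    if dither:
        x += (random.random() - random.random()) * step
    return round(x / step) * step
```

Averaged over many samples, the dithered output preserves level detail below the quantization step instead of turning it into distortion.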
Normalization and the Master Fader
-Normalization
~Brings the signal level of a track up so that its highest peak hits maximum level, without clipping, but rounding errors can occur, resulting in distortion, especially with 16-bit files. Use with CAUTION.
-Master Fader
~Scales mix output to the desired range of values
~Sometimes clipping will occur, even when no channels are overshooting the clipping threshold
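Normalization as described is just a gain scale to the highest peak. A minimal Python sketch (sample values hypothetical):

```python
def normalize(samples, target_peak=1.0):
    """Scale a whole track so its highest peak lands at target_peak."""
    peak = max(abs(s) for s in samples)
    return [s * (target_peak / peak) for s in samples]

loud = normalize([0.1, -0.5, 0.25])   # peak is 0.5, so everything is doubled
```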
Playback Buffer and Plugin Delay Compensation
-Playback Buffer
~Determines latency of input signals. Lower buffer size results in less latency, which is better for recording
~The mixdown should utilize a higher buffer size, because the system needs to read the information faster than it is played back
-Plugin Delay Compensation
~Plugins that run on DSP expansion, like a UAD card
~Plugin delay occurs when processing involves algorithms requiring more samples than are available in each playback buffer
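Buffer latency is simply the buffer size divided by the sample rate. A quick Python check (the buffer sizes are typical examples, not prescribed values):

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """Latency of one playback buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

print(round(buffer_latency_ms(128, 44100), 1))    # 2.9  -- low, good for tracking
print(round(buffer_latency_ms(1024, 44100), 1))   # 23.2 -- higher, fine for mixdown
```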
Chapter 11: Phase
What is Phase?
-Relationship between two or more waveforms, measured in degrees
-We only consider phase in relation to similar waveforms
-Identical waveforms are usually signs of duplication
~ex: Duplicated snare, one dry and one reverb
-Waveforms of the same event come from two microphones capturing the same musical event (or recording)
~ex: A kick mic and overheads, both picking up the kick
-3 Types of Phase Relationships between Similar Waveforms
~In phase or phase-coherent: waveforms start at exactly the same time
~Out of phase or phase-shifted: waveforms start at different times
~Phase inverted: both waveforms start at the same time, but amplitude is inverted
-Problems arise when similar phase shifted or phase inverted waveforms are summed
~Comb Filtering: if the phase is off by less than 35 ms, some frequencies are attenuated, causing tonal alteration and timbre change
~If waves are phase-inverted, level attenuation. If phase inverted and equal in amplitude, cancel each other out completely
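The phase-inverted, equal-amplitude case canceling completely can be shown in a few lines of Python:

```python
import math

# a sine wave and its phase-inverted copy, equal in amplitude
sine = [math.sin(2 * math.pi * 5 * n / 100) for n in range(100)]
inverted = [-s for s in sine]
summed = [a + b for a, b in zip(sine, inverted)]
print(max(abs(s) for s in summed))   # 0.0 -- they cancel each other completely
```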
-Phase in Recorded Material
~Comb filtering caused by a mic a few feet from a guitar amp, picking up reflected frequencies as well as the direct sound, is not something a mixing engineer can do much to fix. Comb filtering caused by having two or more tracks of the same take of the same instrument can be treated by the mixing engineer:
(A) top/bottom front/back tracks: Microphones that are placed on opposite sides of an instrument are likely to pick up opposite sound pressures. Fix it by inverting the phase of one of the microphones.
(B) Close-mic and overheads: A close-miked kick or snare might interact with the overhead microphones to cause phase shifting or inversion. Fix it by taking the OH as a reference and making sure the kit is phase coherent
(C) Mic and Direct: The signal from a bass guitar that is recorded DI will arrive much sooner than a signal that goes from the guitar to an amplifier to a microphone to your console. Fix it by zooming in and nudging the track
Phase Problems During Mixdown:
-Delay caused by plug-ins
-Delay caused by digital to analog conversion when using outboard gear
-Short delays may cause comb filtering
-Equalizers cause delay in a specific range of frequencies
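The comb filtering these short delays cause can be sketched by evaluating the combined magnitude of a dry signal plus a delayed copy at a few frequencies (Python; the 1 ms delay is an arbitrary example):

```python
import math

def comb_gain(freq_hz, delay_ms):
    """Magnitude of (dry + delayed copy) at one frequency: nulls appear
    where the delay shifts that frequency half a cycle out of phase."""
    phase = 2 * math.pi * freq_hz * delay_ms / 1000.0
    return math.sqrt((1 + math.cos(phase)) ** 2 + math.sin(phase) ** 2)

# with a 1 ms delay: first null at 500 Hz, full reinforcement at 1000 Hz
print(round(comb_gain(500, 1.0), 3))    # 0.0
print(round(comb_gain(1000, 1.0), 3))   # 2.0
```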
Tricks:
-Two mixing tricks are based on a stereo setup with identical mono signals sent to opposite extremes, where one of the signals is either delayed or phase inverted
-Haas Trick
~Helmut Haas discovered that the direction of a sound is determined solely by the initial sound, provided that (1) successive sounds arrive within 1-35ms of the initial sound and (2) successive sounds are less than 10dB louder than the initial sound
-Takes the original signal panned to one extreme, while a copy of the signal is sent to the other extreme with a delay of 1-35ms
-One way involves panning a mono track hard to one channel, duplicating it, and panning the duplicate hard to the opposite channel and nudging the duplicate by a few milliseconds
-Second way involves loading a stereo delay on a mono track, setting one channel to have no delay and the other to have a short delay between 1-35ms
~Used to:
-Fatten sounds on instruments panned to the extremes making them sound more powerful
-As a panning alternative
-To create more realistic panning, since the human ear can use the amplitude, time, and frequency differences to locate sound
~The Haas Trick controls the amount of delay and the level
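A sketch of the first (duplicate-and-nudge) version of the trick in Python, treating tracks as plain sample lists (the 15 ms default is just one value in the 1-35 ms window):

```python
def haas_trick(mono, sample_rate, delay_ms=15.0):
    """Pan the original to one side and a 1-35 ms delayed copy to the
    other; the image pulls toward the earlier (undelayed) side."""
    delay = int(sample_rate * delay_ms / 1000.0)
    left = list(mono) + [0.0] * delay     # undelayed side
    right = [0.0] * delay + list(mono)    # nudged copy
    return list(zip(left, right))         # (L, R) sample pairs

stereo = haas_trick([1.0, 0.5], sample_rate=1000, delay_ms=2.0)
```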
Out of Speakers Trick
-Like the Haas Trick, but instead of delaying the wet signal, just invert the phase. This results in the sound coming from all around you rather than directly at you.
Chapter 12: Faders
Sliding Potentiometer
-Simplest basis for an analog fader
-The amplitude of the analog signal is represented in voltage
-Contains a resistive track with a conductive wiper that slides as the fader moves
~Different positions along the track provide different amounts of resistance
~Different degrees of level attenuation
-Cannot boost the audio signal passing through it (unless a fixed-gain amplifier is placed after it)
-Audio signal enters and leaves
VCA Fader
-Combination of a voltage controlled amplifier and a fader
-VCA is an active amplifier that audio signal passes through
~Amount of boost or attenuation is determined by DC voltage
-Fader only controls the amount of voltage sent to the amplifier
~No audio signal flows through the actual fader
-Allows a number of DC sources to be summed to a VCA
~Shortens the signal path
Digital Fader
-Determines a coefficient value by which samples are multiplied
~A coefficient of 2 results in a boost of around 6dB
~A coefficient of 0.5 results in around 6dB of attenuation
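The coefficient-to-dB relationship is 20·log10(coef). A quick Python check:

```python
import math

def coef_to_db(coef):
    """Gain in dB of a digital fader that multiplies each sample by coef."""
    return 20 * math.log10(coef)

print(round(coef_to_db(2.0), 2))    # 6.02  -- doubling the samples
print(round(coef_to_db(0.5), 2))    # -6.02 -- halving the samples
```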
Scales
-Typical measurement is in the scale unit dB
~Strong relationship to how the human ear perceives loudness
-Scale is generally based on steps of around 10dB or 6dB
~6dB is approx. doubling the voltage (or sample value) or cutting it in half
~10dB is doubling or halving the perceived loudness
-The 0dB point is called unity gain
~Where the signal is neither boosted nor attenuated
-Most faders offer extra-gain
~Generally around 6, 10, 12dB boosts
~Only used if signal is still weak while at unity
-Area between -20dB and 0dB is the most crucial area
Level Planning
-Faders are made to go up and down
-When mixing the levels start by coming up
~Generally ending up at around the same positions
-Problem
~A natural reaction to not being able to hear a track is to bring the fader up
%Bringing a snare up in the mix might begin masking vocals, so you bring up fader on vocals, then bass masked, etc.
~Eventually, end up back where you started
-Solutions
~Having a set plan for levels before bringing up faders so the extra-gains settings are left alone
~Setting the loudest track first and bringing up the rest of the tracks around it
Extremes - Inward Experiment
-Take the fader all the way down
-Bring it up gradually until the level seems reasonable
-Mark the fader position
-Take the fader all the way up (or to a point where the instrument is too loud)
-Bring it down gradually until the level seems reasonable
-Mark the fader positions
-You should now have two marks that set the limits of a level window. Now instrument level within this window based on the importance of the instrument
How Stereo Works
-Alan Dower Blumlein
~Researcher and engineer at EMI
~December 14, 1931, applied for patent called "Improvements in and relating to Sound-transmission, Sound-recording, and Sound-reproduction System"
~Was looking for a 'binaural sound', we call it 'stereo' today
~Ironically, first stereo recording published in 1958 (16 years after Blumlein's death and 6 years after EMI's patent rights had expired
-Stereo Quick Facts
~We hear stereo based on three criteria: (EX: trumpet on your right)
%amplitude (sound be louder in R ear than L)
%time/phase (sound will reach L ear later than R)
% frequency (less high freq in L than R)
~Sound from a central source in nature reaches our ears at the same time, with the same volume and frequencies. But, with two speakers, no center speaker, so phantom center
~Best stereo perception when triangle acheived
Pan Controls
-Pan Pot (Panoramic Potentiometer)
~First studio with a stereo system was Abbey Road, London
~Splits a mono signal L and R, and attenuates the side you're not favouring
-Pan Clock
~Hours generally span from 7:00 (L) to 17:00 (R)
-Panning Laws
~A console usually has only one panning law, but some inline consoles have one for channel path and one for monitor path
%two main principles:
^if two speakers emit the same signal at the same level, listener in the center will perceive a 3dB boost of what each speaker produces.
^when two channels summed in mono, half of each level is sent to each speaker
~0dB Pan Law: doesn't drop the levels of centrally panned signals. The instrument level will drop as we pan from the center outward, with 3dB increase of perceived loudness when centered
~-3dB Pan Law: when panned center, there is a 3dB dip (generally best option when stereo mixing)
~-6dB Pan Law: used for mono-critical applications. Provides uniform level in mono, but a 3dB dip when in stereo
~-4.5dB Pan Law: compromise between -3 and -6dB laws. 1.5dB center dip when in stereo, 1.5dB center boost in mono
~-2.5dB Pan Law: gives a 0.5dB boost when panning to the sides so instruments aren't louder when panning.
-Balance Pot
~Input is stereo, unlike pan pot. 2 input channels go through separate gain stages before reaching stereo output. Pot position determines how much attenuation applied on each channel.
~never cross-feeds the input signal from one channel to the output of the other
Mono Tracks
-Problem with dry mono track is it provides no spatial perception
-Dry mono tracks always sound out of place, so add reverb or some other spatial effect to blend it
-Some mono tracks include room or artificial reverb that doesn't sit well with a stereo reverb of the whole mix
Stereo Pairs
-Coincident Pair (XY) technique provides the best mono-compatibility given that the diaphragms of the two mics are so close in proximity, and there's no need to worry about phase complications
-Spaced Pair (AB) involves two mics a few feet apart, is certain to have phase issues, and is not mono-compatible
-Near-coincident pair is two mics angled AND spaced, with less drastic phase problems
Multiple mono tracks
-Multiple mics on the same instrument
-Mirrored panning widens and creates less focus on the instrument in the stereo image
-Same panning gives a more relative stereo image, and is easier to locate
Combinations
-Like mirrored panning but less extreme
Panning Techniques
-Look at the track sheet and get a basic idea of a tentative pan plan
-Panning strategies differ with every mix
-Small tweaks in the near final stages can greatly improve mix
-Panning instruments in the same place causes masking. Panning different directions and mirroring can avoid masking
-When panning, think of a sound stage or try to visualize an actual performance
-Center and extremes in the panning field tend to be the busiest areas wehre masking is more likely to occur
-Level and frequency balance are the main concern when panning
-Be aware of the rhythmic structure of the tracks and keep them balanced
-A close-to-perfect stereo mix is basically a good mono mix, although there is still room for imbalances
-Stereo effects (reverb/delays) can be panned towards th dry track to put the desired effect in clearer focus
-Mono effects benefit more from panning the effect farther from the dry track and enhance the stereo image
Beyond Pan Pots
-Autopanners: pans cyclically between the left and right sides
~Rate: Defined by Hz, cycles/second
~Depth: How far the signal will be panned. Higher setting = more apparent effect
~Waveform: Defines shape of panning modulation, how smooth/rigid the panning will sound
~Center: This setting defines the position of modulation
Proposed Production Schedule:
-Tuesday, Sept 14
~Housekeeping
~EQ at least the drums
-Thursday, Sept 16
~EQ, compression, etc.
-Tuesday, Sept 21
~Mix through the board
This week my group did our presentation for chapter 13 in the Mixing Concepts book, and here are the other groups' notes as well:
Tracks
-Audio
-Aux
-MIDI
-Instrument
Mixer Strips
-Input Selection
-Output Selection
-Insert Slots
-Send Slots
Solos
Control Grouping
Audio Grouping
Sends and Effects
Naming Buses
Internal Architecture
-Integer Notation
~The highest value a 16-bit sample can hold is 65,535 (the full unsigned range). Anything above this results in clipping
-Floating-Point Notation
~A floating-point sample can theoretically handle any amplitude
-How they work together
~Pro Tools allows two hot signals to be summed without clipping. When bouncing in Pro Tools, the audio is converted from float into integer. If you bounce onto a 16-bit file, you lose 54dB of range
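A toy sketch of the float-vs-integer distinction (plain Python, not Pro Tools internals; the helper name is made up): summing two hot signals is harmless in float, but converting the result to a 16-bit integer file clips it.

```python
# Sketch: float headroom survives summing, integer conversion clips.

INT16_MAX = 32767
INT16_MIN = -32768

def to_int16(sample: float) -> int:
    """Convert a float sample (full scale = 1.0) to a 16-bit integer, clipping."""
    scaled = round(sample * INT16_MAX)
    return max(INT16_MIN, min(INT16_MAX, scaled))

# Two hot signals summed in float: no clipping, the value is simply 1.6
hot_sum = 0.8 + 0.8
print(hot_sum)            # 1.6 -- float carries it fine

# Bouncing that sum to a 16-bit integer file clips it to full scale
print(to_int16(hot_sum))  # 32767 -- clipped
print(to_int16(0.5))      # 16384 -- within range, no clipping
```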
Dither
-To avoid producing repeating decimals, processors round off this data. Since the data is now incorrect and rounded off in the same way every time, distortion is produced. Dithering randomizes the rounding off so that a "low level of random noise" is created.
-Most audio sequencers ship with dither capabilities
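The rounding-vs-dither idea can be sketched in plain Python (a one-step quantizer with made-up helper names; real dithering works on full sample words, but the principle is the same):

```python
import random

def quantize(sample: float, step: float = 1.0) -> float:
    """Round a sample to the nearest quantization step."""
    return round(sample / step) * step

def quantize_with_dither(sample: float, step: float = 1.0) -> float:
    """Add low-level random noise (triangular PDF, +/- one step) before
    rounding, so the rounding error becomes benign random noise instead
    of signal-correlated distortion."""
    dither = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return round((sample + dither * step) / step) * step

# Without dither, a low-level signal rounds off the same way every time:
print([quantize(0.4) for _ in range(5)])   # [0.0, 0.0, 0.0, 0.0, 0.0]
# With dither, it sometimes rounds up, so the 0.4 survives on average
# (e.g. [0.0, 1.0, 0.0, 1.0, 0.0] -- the exact pattern is random)
```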
Normalization and the Master Fader
-Normalization
~Scales a track's level up so its highest peak reaches maximum without clipping, but rounding errors can occur, resulting in distortion, especially with 16-bit files. Use with CAUTION.
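Peak normalization amounts to one gain calculation; a minimal sketch with a hypothetical `normalize` helper (the rounding-error caveat above applies when the scaled result is re-quantized):

```python
def normalize(samples, target_peak=1.0):
    """Scale a track so its highest absolute peak reaches target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silent track: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

track = [0.1, -0.25, 0.5, -0.2]
print(normalize(track))  # [0.2, -0.5, 1.0, -0.4] -- peak now at full scale
```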
-Master Fader
~Scales mix output to the desired range of values
~Sometimes clipping will occur, even when no channels are overshooting the clipping threshold
Playback Buffer and Plugin Delay Compensation
-Playback Buffer
~Determines latency of input signals. Lower buffer size results in less latency, which is better for recording
~The mixdown should utilize a higher buffer size, because the system needs to read the information faster than it is played back
-Plugin Delay Compensation
~Plugins that run on DSP expansion, like a UAD card
~Plugin delay occurs when processing involves algorithms requiring more samples than available by each playback buffer
Chapter 11: Phase
What is Phase?
-Relationship between two or more waveforms, measured in degrees
-We only consider phase in relation to similar waveforms
-Identical waveforms are usually signs of duplication
~ex: Duplicated snare, one dry and one with reverb
-Waveforms of the same event occur when two microphones capture the same musical event (or recording)
~ex: A kick mic and overheads, both with kick in it
-3 Types of Phase Relationships between Similar Waveforms
~In phase or phase-coherent: waveforms start at exactly the same time
~Out of phase or phase-shifted: waveforms start at different times
~Phase inverted: both waveforms start at the same time, but amplitude is inverted
-Problems arise when similar phase shifted or phase inverted waveforms are summed
~Comb Filtering: if the offset is less than about 35ms, certain frequencies are attenuated, causing tonal alteration and timbre change
~If waves are phase-inverted, summing causes level attenuation. If they are phase-inverted and equal in amplitude, they cancel each other out completely
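The complete-cancellation case is easy to demonstrate: sum a sine wave with an equal-amplitude, phase-inverted copy of itself (plain-Python sketch):

```python
import math

def sine(freq_hz, n_samples, sample_rate=48000, invert=False):
    """Generate a sine wave; invert=True flips its polarity."""
    sign = -1.0 if invert else 1.0
    return [sign * math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

a = sine(440, 100)
b = sine(440, 100, invert=True)     # phase-inverted copy, equal amplitude
summed = [x + y for x, y in zip(a, b)]
print(max(abs(s) for s in summed))  # 0.0 -- complete cancellation
```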
-Phase in Recorded Material
~Comb filtering can be caused by a mic a few feet from a guitar amp picking up reflected frequencies as well as the direct sound; there is not much a mixing engineer can do to fix this. Comb filtering caused by having two or more tracks of the same take of the same instrument can be treated by the mixing engineer:
(A) top/bottom front/back tracks: Microphones that are placed on opposite sides of an instrument are likely to pick up opposite sound pressures. Fix it by inverting the phase of one of the microphones.
(B) Close-mic and overheads: A close-miked kick or snare might interact with overhead microphones to cause phase shifting or inversion. Fix it by taking the OH as a reference and making sure the kit is phase coherent
(C) Mic and Direct: The signal from a bass guitar recorded DI will arrive much sooner than a signal that goes from guitar to amplifier to microphone to console. Fix it by zooming in and nudging the track
Phase Problems During Mixdown:
-Delay caused by plug-ins
-Delay caused by digital to analog conversion when using outboard gear
-Short delays may cause comb filtering
-Equalizers cause delay in a specific range of frequencies
Tricks:
-Two mixing tricks are based on a stereo setup with two identical mono signals sent to opposite extremes, where one of the signals is either delayed or phase-inverted
-Haas Trick
~Helmut Haas discovered that the direction of a sound is determined solely by the initial sound, provided that (1) successive sounds arrive within 1-35ms of the initial sound and (2) successive sounds are less than 10dB louder than the initial sound
-Takes the original signal panned to one extreme, while a copy is sent to the other extreme with a delay of 1-35ms
-One way involves panning a mono track hard to one channel, duplicating it, and panning the duplicate hard to the opposite channel and nudging the duplicate by a few milliseconds
-Second way involves loading a stereo delay on a mono track, setting one channel to have no delay and the other to have a short delay between 1-35ms
~Used to:
-Fatten sounds on instruments panned to the extremes making them sound more powerful
-As a panning alternative
-To create more realistic panning, since the human ear can use the amplitude, time, and frequency differences to locate sound
~Haas Trick controls amount of delay, level
Out of Speakers Trick
-Like Haas Trick, but instead of delaying the wet signal, just invert the phase. Results in the sound coming from all around you rather than directly at you.
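Both tricks can be sketched as one hypothetical helper: duplicate the mono signal into left/right, then either delay the copy (Haas Trick) or invert its polarity (Out of Speakers Trick):

```python
def haas_trick(mono, delay_samples, invert=False):
    """Duplicate a mono track into L/R channels.
    invert=False: delay the right copy (Haas Trick).
    invert=True: phase-invert the right copy (Out of Speakers Trick).
    Hypothetical helper name, sketching the tricks described above."""
    left = list(mono)
    if invert:
        right = [-s for s in mono]                  # polarity flip, no delay
    else:
        right = [0.0] * delay_samples + list(mono)  # short 1-35ms delay
        left += [0.0] * delay_samples               # pad to equal length
    return left, right

# At 48kHz, a 10ms Haas delay is 480 samples:
delay = int(0.010 * 48000)
print(delay)  # 480
```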
Chapter 12: Faders
Sliding Potentiometer
-Simplest basis for an analog fader
-The amplitude of the analog signal is represented in voltage
-Contains a resistive track with a conductive wiper that slides as the fader moves
~Different positions along the track provide different amounts of resistance
~Different degrees of level attenuation
-Cannot boost the audio signal passing through it (unless a fixed-gain amplifier is placed after it)
-Audio signal enters and leaves
VCA Fader
-Combination of a voltage controlled amplifier and a fader
-VCA is an active amplifier that audio signal passes through
~Amount of boost or attenuation is determined by DC voltage
-Fader only controls the amount of voltage sent to the amplifier
~No audio signal flows through the actual fader
-Allows a number of DC sources to be summed to a VCA
~Shortens the signal path
Digital Fader
-Determines a coefficient value by which samples are multiplied
~A coefficient of 2 results in a boost of around 6dB
~A coefficient of 0.5 results in around 6dB of attenuation
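The coefficient-to-dB relationship is just 20·log10(coefficient); a quick check in Python:

```python
import math

def coefficient_to_db(coef: float) -> float:
    """dB change produced when samples are multiplied by coef."""
    return 20 * math.log10(coef)

print(round(coefficient_to_db(2.0), 2))   # 6.02  -- ~6dB boost
print(round(coefficient_to_db(0.5), 2))   # -6.02 -- ~6dB attenuation
print(coefficient_to_db(1.0))             # 0.0   -- unity gain
```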
Scales
-Typical measurement is in the scale unit dB
~Strong relationship to how the human ear perceives loudness
-Scale is generally based on steps of around 10dB or 6dB
~6dB is approx. doubling the voltage (or sample value) or cutting it in half
~10dB is doubling or halving the perceived loudness
-The 0dB point is called unity gain
~Where the signal is neither boosted nor attenuated
-Most faders offer extra-gain
~Generally around 6, 10, 12dB boosts
~Only used if signal is still weak while at unity
-Area between -20dB and 0dB is the most crucial area
Level Planning
-Faders are made to go up and down
-When mixing, the levels start by coming up
~Generally ending up at around the same positions
-Problem
~A natural reaction to not being able to hear a track is to bring the fader up
%Bringing a snare up in the mix might begin masking vocals, so you bring up fader on vocals, then bass masked, etc.
~Eventually, end up back where you started
-Solutions
~Having a set plan for levels before bringing up faders so the extra-gain settings are left alone
~Setting the loudest track first and bringing up the rest of the tracks around it
Extremes - Inward Experiment
-Take the fader all the way down
-Bring it up gradually until the level seems reasonable
-Mark the fader position
-Take the fader all the way up (or to a point where the instrument is too loud)
-Bring it down gradually until the level seems reasonable
-Mark the fader positions
-You should now have two marks that set the limits of a level window. Now set the instrument's level within this window based on the importance of the instrument
How Stereo Works
-Alan Dower Blumlein
~Researcher and engineer at EMI
~December 14, 1931, applied for a patent called "Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing Systems"
~Was looking for a 'binaural sound', we call it 'stereo' today
~Ironically, the first stereo recording was published in 1958 (16 years after Blumlein's death and 6 years after EMI's patent rights had expired)
-Stereo Quick Facts
~We hear stereo based on three criteria: (EX: trumpet on your right)
%amplitude (sound will be louder in the R ear than the L)
%time/phase (sound will reach the L ear later than the R)
%frequency (fewer high frequencies in L than R)
~Sound from a central source in nature reaches our ears at the same time, with the same volume and frequencies. But with two speakers there is no center speaker, so we perceive a phantom center
~Best stereo perception is achieved when the listener and speakers form an equilateral triangle
Pan Controls
-Pan Pot (Panoramic Potentiometer)
~First studio with a stereo system was Abbey Road, London
~Splits a mono signal L and R, and attenuates the side you're not favouring
-Pan Clock
~Hours generally span from 7:00 (L) to 17:00 (R)
-Panning Laws
~A console usually has only one panning law, but some inline consoles have one for channel path and one for monitor path
%two main principles:
^if two speakers emit the same signal at the same level, listener in the center will perceive a 3dB boost of what each speaker produces.
^when two channels summed in mono, half of each level is sent to each speaker
~0dB Pan Law: doesn't drop the levels of centrally panned signals. The instrument level will drop as we pan from the center outward, with 3dB increase of perceived loudness when centered
~-3dB Pan Law: when panned center, there is a 3dB dip (generally best option when stereo mixing)
~-6dB Pan Law: used for mono-critical applications. Provides uniform level in mono, but a 3dB dip when in stereo
~-4.5dB Pan Law: compromise between -3 and -6dB laws. 1.5dB center dip when in stereo, 1.5dB center boost in mono
~-2.5dB Pan Law: gives a 0.5dB boost when panning to the sides so instruments aren't louder when panning.
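A constant-power (-3dB) pan law is commonly implemented with sine/cosine gain curves; this sketch (hypothetical helper names, mapping pan -1..+1 onto a quarter circle) shows the 3dB center dip:

```python
import math

def pan_gains_minus3db(pan: float):
    """-3dB (constant-power) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain)."""
    theta = (pan + 1.0) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

def gain_to_db(g: float) -> float:
    return 20 * math.log10(g)

l, r = pan_gains_minus3db(0.0)          # centered
print(round(gain_to_db(l), 1))          # -3.0 -- the 3dB center dip
l, r = pan_gains_minus3db(-1.0)         # hard left
print(l, round(r, 10))                  # 1.0 0.0 -- all signal to the left
```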
-Balance Pot
~Input is stereo, unlike the pan pot. The 2 input channels go through separate gain stages before reaching the stereo output. Pot position determines how much attenuation is applied to each channel.
~never cross-feeds the input signal from one channel to the output of the other
Mono Tracks
-Problem with dry mono track is it provides no spatial perception
-Dry mono tracks always sound out of place, so add reverb or some other spatial effect to blend it
-Some mono tracks include room or artificial reverb that doesn't sit well with a stereo reverb of the whole mix
Stereo Pairs
-Coincident Pair (XY) technique provides the best mono-compatibility given that the diaphragms of the two mics are so close in proximity, and there's no need to worry about phase complications
-Spaced Pair (AB) involves two mics a few feet apart, is certain to have phase issues, and is not mono-compatible
-Near-coincident pair is two mics angled AND spaced, with less drastic phase problems
Multiple mono tracks
-Multiple mics on the same instrument
-Mirrored panning widens and creates less focus on the instrument in the stereo image
-Same panning gives a more relative stereo image, and is easier to locate
Combinations
-Like mirrored panning but less extreme
Panning Techniques
-Look at the track sheet and get a basic idea of a tentative pan plan
-Panning strategies differ with every mix
-Small tweaks in the near final stages can greatly improve mix
-Panning instruments in the same place causes masking. Panning different directions and mirroring can avoid masking
-When panning, think of a sound stage or try to visualize an actual performance
-Center and extremes in the panning field tend to be the busiest areas where masking is more likely to occur
-Level and frequency balance are the main concern when panning
-Be aware of the rhythmic structure of the tracks and keep them balanced
-A close-to-perfect stereo mix is basically a good mono mix, although there is still room for imbalances
-Stereo effects (reverb/delays) can be panned towards the dry track to put the desired effect in clearer focus
-Mono effects benefit more from panning the effect farther from the dry track, which enhances the stereo image
Beyond Pan Pots
-Autopanners: pan cyclically between the left and right sides
~Rate: Defined by Hz, cycles/second
~Depth: How far the signal will be panned. Higher setting = more apparent effect
~Waveform: Defines shape of panning modulation, how smooth/rigid the panning will sound
~Center: This setting defines the position of modulation
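An autopanner's Rate/Depth/Center parameters map directly onto an LFO; a sine-waveform sketch in Python (hypothetical function name):

```python
import math

def autopan_position(t, rate_hz=0.5, depth=1.0, center=0.0):
    """Sine-waveform autopanner: pan position in [-1, 1] at time t seconds.
    rate_hz = cycles/second, depth = how far the signal swings from center."""
    return center + depth * math.sin(2 * math.pi * rate_hz * t)

# At 0.5Hz, one full left-right-left sweep takes 2 seconds:
print(round(autopan_position(0.5), 6))   # 1.0 -- hard right at the quarter cycle
print(round(autopan_position(1.0), 6))   # 0.0 -- back at center at the half cycle
```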
Friday, September 17, 2010
Busing and Grouping, For real!
Buses
o Common signal path where many signals can be mixed
o Typical buses
• Mix bus
• Group bus (or single record bus on CD)
• Aux bus
• Solo bus
• Processors vs. Effects
o A dry signal is the unaffected audio, while a wet signal is the affected audio
• For processors, can adjust the percentage used between wet and dry
o Processors: Made to alter the input signal and replace it with a processed signal
• Added with an insert point
• Include EQs, dynamic range processors (such as compressors, limiters, gates, expanders, and duckers), distortions, pitch correctors, faders, and pan pots
o Effects: Add something to the original sound. Takes signal and generates a new signal based on original one
• Added by using an auxiliary send
• Include time-based effects (such as reverb, delay, chorus, flanger), pitch-related effects (such as pitch shifters and harmonizers)
• Basic Signal Flow
o Step 1: Faders, pan pots, cut switch
• Each channel is fed from a track on the multitrack recorder. Signal travels from the line input socket, the fader, then the pan pot.
• Pan pots take the mono signal and send out a stereo signal, then sum it into the mix bus. Single fader alters the level of the stereo bus signal.
• Then, mix bus signal goes to two mono outputs on the back of the console (L, R)
o Step 2: Line gains, phase-invert and clip indicators
• Line-gain (or tape-trim) boosts/attenuates the level of the audio signal before it gets to the channel signal path
• Optimize the level of the incoming signal to the highest levels possible without clipping (digitally) or unwanted distortion (using analog)
• Some engineers use the over-hot input because it adds appealing harmonic distortion
• Check phasing with the phase invert
• Don’t always trust clip indicators, trust your ear above all else
o Step 3: On-board processors
• Quality dictates much of a console’s value
• Include HPF, EQ, and sometimes basic compressors
o Step 4: Insert points
• Many engineers prefer to use external insert points rather than in-board.
• Lets us insert devices into the signal path
• Each external unit can only be connected to one channel, but multiple tracks can use the unit through inserts.
• Can use multiple inserts on a single track
• Importance of Signal Flow Diagrams
o Step 5: Auxiliary sends
• Takes a copy of the signal on the channel path and sends it to an auxiliary bus
• Local aux controls are on the individual channels, containing:
• Level control: pot to control level of the copy sent to the aux bus
• Pre/post fader switch: determines if the signal is taken before or after the channel fader. Post-fader lets you control level of signal with channel fader. We often want aux effect level to correspond to instrument level, so we use post-fader feed. If pre-fader, the level is independent of the channel fader and will play regardless of channel fader level.
• Pan control: Aux buses can be mono or stereo. If stereo, pan pot available to determine how mono channel signal is panned to the aux bus
• On/off switch: Often called MUTE
• Master aux controls in master section. Same as the local ones, but no pre/post fader. Most have multiple auxiliary buses
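The pre/post-fader distinction reduces to where the send copy is taken in the gain chain; a toy sketch (made-up helper, gains as plain multipliers):

```python
def aux_send_level(signal, fader_gain, send_level, post_fader=True):
    """Level of the copy sent to an aux bus.
    Post-fader: the effect level follows the channel fader.
    Pre-fader: the effect level is independent of the channel fader."""
    source = signal * fader_gain if post_fader else signal
    return source * send_level

# Pull the channel fader all the way down (gain 0.0):
print(aux_send_level(1.0, 0.0, 0.8, post_fader=True))   # 0.0 -- reverb follows the fader
print(aux_send_level(1.0, 0.0, 0.8, post_fader=False))  # 0.8 -- reverb still plays
```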
o Step 6: FX returns (or aux returns)
• Dedicated stereo inputs that can be routed to the mix bus
• Provide quick and easy way to blend an effect return into the mix, but offer very limited functionality
• When possible, effects are better returned into the channels
• Groups
o Control grouping: Allocate a set of channels to a group, so moving one fader controls all of them
• VCA grouping: Consoles with motorized faders have master VCA group faders. Individual channel faders are then assigned to a VCA group
• Cutting or soloing VCA group affects each channel assigned to it
o Audio grouping
• To handle many signals, must sum a group of channels to a group bus (subgrouping). Group signal can then be processed and routed to the mix bus.
• Format: Channels:Groups:Mix-buses
• Ex: 16:8:2 denotes 16 channels, 8 group buses and 2 mix buses (or 1 stereo mix bus)
• Routing matrix: collection of buttons that can be situated either vertically next to fader or in its own area. Depress one, and the channel will be sent to the corresponding master group
• In-line grouping
• Ex: In a 24 track recording, drums may be ch 1-8. They are routed through the matrix to Channels 24 and 25 that now function as a group.
• Bouncing: by sending groups to yet another subgroup, we then send that final subgroup to an available audio track on the multitrack recorder
o In-line consoles
• The desk accommodates two types of signals:
• Live performance signals: Are sent to a group to be recorded onto the multitrack
• Multitrack signals: Already recorded information sent to a group
• In-line consoles and mixing
• Since the channel path is stronger than the monitor path, it’s ideal to use the channel path for multitrack recording and return signals and use the monitor path for:
o Effects returns
• Ex: We can send a guitar to line 1 inputs to a delay unit and bring the delay back to the monitor path on the same channel strip/module
o Additional aux sends
• Ex: We can send the background vocals on a bus to a group, the group to the delay and/or reverb. The bus acts as a local aux send while the group channel acts as a master fader of what is being received.
o Signal copies
• Ex: Multiple snare tracks sent to a single channel through the monitor path
o The Monitor Section
• Monitor output
• To hear it, it needs to be sent from the mix output (we commonly use Pro Tools 1 - 2 on the patch bay, and MIX pressed on master channel) to the 2 Track Recorder (2TRK button on master channel). Then, to the monitor output (the actual monitors)
• Additional controls
• Cut: cuts monitor output. Feedback, noise bursts, clicks/thumps, etc.
• Dim: Attenuates monitor level by user-definable amount of dB (for audible convenience in studio).
• Mono: Sums the stereo output to mono (for phasing, masking issues).
• Speaker selection: Allows you to switch between different monitors (if you have them)
• Cut left, cut right: Mutes right or left monitor.
• Swap left/right: Left speaker in right speaker, right speaker in left speaker (used to check stereo imbalance)
• Source selection: Determines where the speakers get the audio (mix bus, external outputs, aux bus)
o Solos
• Two types of solos:
• Destructive in-place (when one channel is soloed, every other channel is cut)
• Nondestructive
o PFL (takes a copy before the channel fader and pan pot, so mix levels and panning aren't engaged)
o AFL (takes a copy after the fader but before the pan, so it maintains levels, but not panning) or APL (takes a copy after the fader and pan, so both panning and levels are maintained)
• Solo safe
o Keeps a channel soloed permanently, even when other tracks soloed.
• Which solo?
o Destructive solo is favored for mixdown because when a track is soloed, the signal level remains the same as it previously was, as opposed to nondestructive solo where the signals may drop or rise in level.
o Correct Gain Structure
• Make sure that the signal is at its optimum level so 100% of the signal is sent and received
• Given that most analog gear gives off unwanted noise, just use the channel fader, not the processor’s output. This will prevent the noise given off by the processor from being boosted.
o The Digital Console
• ADA vs. DA
• Digital consoles have fader-layer capabilities
• Allow complete control over automating any parameter
• External processing is still possible, but it is an option. On an analog console, it would be a necessity
o Common signal path where many signals can be mixed
o Typical buses
• Mix bus
• Group bus (or single record bus on CD)
• Aux bus
• Solo bus
• Processors vs. Effects
o A dry signal is the unaffected audio, while a wet signal is the affected audio
• For processors, can adjust the percentage used between wet and dry
o Processors: Made to alter the input signal and replace it with a processed signal
• Added with an insert point
• Include EQs, dynamic range processors (such as compressors, limiters, gates, expanders, and duckers), distortions, pitch correctors, faders, and pan pots
o Effects: Add something to the original sound. Takes signal and generates a new signal based on original one
• Added by using an auxiliary send
• Include time-based effects (such as reverb, delay, chorus, flanger), pitch-related effects (such as pitch shifters and harmonizers)
• Basic Signal Flow
o Step 1: Faders, pan pots, cut switch
• Each channel is fed from a track on the multitrack recorder. Signal travels from the line input socket, the fader, then the pan pot.
• Pan pots take the mono signal and send out a stereo signal, then sum it into the mix bus. Single fader alters the level of the stereo bus signal.
• Then, mix bus signal goes to two mono outputs on the back of the console (L, R)
o Step 2: Line gains, phase-invert and clip indicators
• Line-gain (or tape-trim) boosts/attenuates the level of the audio signal before it gets to the channel signal path
• Optimize the level of the incoming signal to the highest levels possible without clipping (digitally) or unwanted distortion (using analog)
• Some engineers use the over-hot input because it adds appealing harmonic distortion
• Check phasing with the phase invert
• Don’t always trust clip indicators, trust your ear above all else
o Step 3: On-board processors
• Quality dictates much of a console’s value
• Include hpf, EQ, basic compressors at times
o Step 4: Insert points
• Many engineers prefer to use external insert points rather than in-board.
• Lets us insert devices into the signal path
• Each external unit can only be connected to one channel, but multiple tracks can use the unit through inserts.
• Can use multiple inserts on a single track
• Importance of Signal Flow Diagrams
o Step 5: Auxiliary sends
• Takes a copy of the signal on the cannel path and sends it to an auxiliary bus
• Local aux controls are on the individual channels, containing:
• Level control: pot to control level of the copy sent to the aux bus
• Pre/post fader switch: determines if the signal is taken before or after the channel fader. Post-fader lets you control level of signal with channel fader. We often want aux effect level to correspond to instrument level, so we use post-fader feed. If pre-fader, the level is independent of the channel fader and will play regardless of channel fader level.
• Pan control: Aux buses can be mono or stereo. If stereo, pan pot available to determine how mono channel signal is panned to the aux bus
• On/off switch: Often called MUTE
• Master aux controls in master section. Same as the local ones, but no pre/post fader. Most have multiple auxiliary buses
o Step 6: FX returns (or aux returns)
• Dedicated stereo inputs that can be routed to the mix bus
• Provide quick and easy way to blend an effect return into the mix, but offer very limited functionality
• When possible, effects are better returned into the channels
• Groups
o Control grouping: Allocate a set of channels to a group, so moving one fader controls all of them
• VCA grouping: Consoles with motorized faders have master VCA group faders. Individual channel faders are then assigned to a VCA group
• Cutting or soloing VCA group affects each channel assigned to it
o Audio grouping
• To handle many signals, must sum a group of channels to a group bus (subgrouping). Group signal can then be processed and routed to the mix bus.
• Format: Channels:Groups:Mix-buses
• Ex: 16:8:2 denotes 16 channels, 8 group buses and 2 mix buses (or 1 stereo mix bus)
• Routing matrix: collection of buttons that can be situated either vertically next to fader or in its own area. Depress one, and the channel will be sent to the corresponding master group
• In-line grouping
• Ex: In a 24 track recording, drums may be ch 1-8. They are routed through the matrix to Channels 24 and 25 that now function as a group.
• Bouncing: by sending groups to yet another subgroup, we then send that final subgroup to an available audio track on the multitrack recorder
o In-line consoles
• The desk accommodates two types of signals:
• Live performance signals: Are sent to a group to be recorded onto the multitrack
• Multitrack signals: Already recorded information sent to a group
• In-line consoles and mixing
• Since the channel path is stronger than the monitor path, it’s ideal to use the channel path for multitrack recording and return signals and use the monitor path for:
o Effects returns
• Ex: We can send a guitar to line 1 inputs to a delay unit and bring the delay back to the monitor path on the same channel strip/module
o Additional aux sends
• Ex: We can send the background vocals on a bus to a group, the group to the delay and/or reverb. The bus acts as a local aux send while the group channel acts as a master fader of what is being received.
o Signal copies
• Ex: Multiple snare tracks sent to a single channel through the monitor path
o The Monitor Section
• Monitor output
• To hear it, it needs to be sent from the mix output (we commonly use Pro Tools 1 - 2 on the patch bay, and MIX pressed on master channel) to the 2 Track Recorder (2TRK button on master channel). Then, to the monitor output (the actual monitors)
• Additional controls
• Cut: cuts the monitor output (useful against feedback, noise bursts, clicks/thumps, etc.)

• Dim: Attenuates monitor level by user-definable amount of dB (for audible convenience in studio).
• Mono: Sums the stereo output to mono (for phasing, masking issues).
• Speaker selection: Allows you to switch between different monitors (if you have them)
• Cut left, cut right: Mutes right or left monitor.
• Swap left/right: Sends the left signal to the right speaker and the right signal to the left speaker (used to check stereo imbalance)
• Source selection: Determines where the speakers get the audio (mix bus, external outputs, aux bus)
o Solos
• Two types of solos:
• Destructive in-place (when one channel is soloed, every other channel is cut)
• Nondestructive
o PFL (takes a copy before the channel fader and pan pot, so mix levels and panning aren’t engaged)
o AFL (takes a copy after the fader but before the pan, so it maintains levels, but not panning) or APL (takes a copy after the fader and pan, so both panning and levels are maintained)
• Solo safe
o Keeps a channel soloed permanently, even when other tracks soloed.
• Which solo?
o Destructive solo is favored for mixdown because when a track is soloed, its level remains the same as it was in the mix, as opposed to nondestructive solo, where signals may drop or rise in level.
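The difference between the solo tap points above comes down to where the copy is taken in the channel path. Here is a small Python sketch of that idea; the function names and the simple linear pan law are my own illustration, not how any particular console implements it.

```python
# Sketch of where PFL / AFL / APL solo copies are tapped in a channel path.
# pan is 0.0 = hard left, 1.0 = hard right (simple linear pan, for clarity).

def channel_taps(sample, fader_gain, pan):
    pfl = sample                          # pre-fader: ignores level and pan
    post_fader = sample * fader_gain
    afl = post_fader                      # after fader, before pan
    apl = (post_fader * (1.0 - pan),      # after fader and pan: a stereo copy
           post_fader * pan)
    return pfl, afl, apl

# Fader at half, panned hard right:
pfl, afl, apl = channel_taps(1.0, fader_gain=0.5, pan=1.0)
# pfl still reads full level, afl reflects the fader,
# apl reflects both the fader and the hard-right pan
```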
o Correct Gain Structure
• Make sure that the signal is at its optimum level so 100% of the signal is sent and received
• Given that most analog gear gives off unwanted noise, just use the channel fader, not the processor’s output. This will prevent the noise given off by the processor from being boosted.
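The gain-structure argument above is easy to see numerically: gains in dB simply add along a chain, so any boost applied after a processor's noise has entered raises signal and noise by the same amount. The level values below are made-up illustration numbers, not measurements.

```python
# Gains in dB add along a chain. Boosting after a noisy processor raises
# its noise by the same amount, leaving the signal-to-noise ratio unchanged.
# That is why the signal should be at its optimum level as early as possible.
# All dB values here are arbitrary illustration numbers.

signal_db = -30.0   # signal level leaving the processor
noise_db = -80.0    # processor self-noise riding along with it
snr_before = signal_db - noise_db   # 50 dB

fader_boost_db = 12.0               # later gain hits both equally
snr_after = (signal_db + fader_boost_db) - (noise_db + fader_boost_db)
# snr_after is still 50 dB: the boost bought level, not cleanliness
```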
o The Digital Console
• ADA vs. DA
• Digital consoles have fader-layer capabilities
• Allow complete control over automating any parameter
• External processing is still possible, but it is optional; on an analog console it would be a necessity
This week in labs we have spent the majority of our lab time cleaning up the tracks as best as possible. I've found by listening to others' mixes that simple edits in certain parts of every track really help achieve the clean, uncluttered feel we strive for as we mix. We noticed that in the beginning of the song, there are around 9 or 10 tracks of drums playing at once. There is only kick, snare, and hi-hat in the beginning, with a ton of redundant and unnecessary audio in 7 of the other tracks. Before doing anything, we decided that the drums seemed to be off in the distance, with a lot of extra room noise, and we were looking for a tight drum sound. The first thing we tried was EQing it a little. That didn't produce the effect we wanted, so we moved on to compression. This didn't work either and only brought out the huge drum sound we weren't looking for. I thought about the overheads, realizing that soloed, they sound very distant. Simply by muting the beginning region of drums on the overheads, the drums snapped into place and sounded super tight. The verse and intro really came to life. My point is that we don't necessarily need drastic EQs and a ton of compression or effects to get a great sound. As long as the instruments are speaking well with each other, there is a great amount of room to add just the little touch a mix needs. I am becoming more and more surprised at how much attenuation is required to clear up a mix, and gaining a clearer understanding of how frequencies talk to each other. This week I tried out a grouping concept with the drums. I set the output of all the drum tracks to a stereo auxiliary DRUM mix. This way, after EQing the individual drums, I had a drum mix on which I could EQ the drum set as a whole. We tried experimenting with the intro and doing a little rearrangement. A few simple edits and I think the song hits a little harder.
When I hear verses and choruses building up musically, I start thinking of ways I could help build them up aesthetically. An example of this is automating the reverb send on the snare drum track. For this particular song, I thought it sounded good to have a closer, laid-back reverb for the verse. When the chorus came in, I automated the reverb send level up 7 dB, giving a fuller, more ambient effect to back the distorted guitars. When EQing the guitars, using a low shelf helped keep the low end present without being so boomy. In the 2 kHz to 7 kHz range, we dipped the guitars with a bell EQ and it really brought out the vocal timbre. There was much more clarity, and again we did not need heavy compression or any drastic EQ boost. Next week we plan to do a new mix, in stereo and out of the board. We will use the Distressors on the kick and snare, and put the bass through the Millennia. I want to try putting the vocals through the SPX 90 and the Neve preamp. High-pass filters will go on many of the drum tracks, as we want most of the lows and all of the sub frequencies coming from the bass. Guitars will have reverb via the aux sends. I would like to try live automation as well, and record some panning of the vocals into the mix. When in the studio, paying attention to the meters is always a good idea, but just because a meter peaks doesn't mean distortion is happening. By the time the VU meter hits the red, you have already heard the change in SPL happen, and it is hard to sync up what you are seeing with what you are hearing. Bar meters are easier to analyze and respond to electrical pulses more quickly. Amplitude and level are two different things: amplitude is a representation of the changes in air pressure as vibrations displace air, while level refers to the overall magnitude of a signal. Meters are important to watch while mixing, but ultimately you should mix with your ears.
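The VU-versus-peak point is easy to demonstrate with a toy signal, and the +7 dB send move has a simple linear-amplitude equivalent. The numbers below are made up for illustration.

```python
import math

# Toy illustration of why a peak reading and a VU-style average disagree.
# A short click can hit full scale while its RMS (roughly what a VU meter
# tracks) stays low, so a peak hit isn't the same as an audible level jump.

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

transient = [0.0] * 99 + [1.0]   # one full-scale click in 100 samples
print(peak(transient))            # peak meter reads 1.0 (full scale)
print(rms(transient))             # average reads 0.1, far below the peak

# Relatedly, a +7 dB send boost in linear amplitude terms:
boost = 10 ** (7 / 20)            # roughly 2.24x the amplitude
```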
When listening to a mix or mixing, we should start to decipher how the stereo spread is laid out and how much depth the music has. Just by boosting and cutting certain frequencies, we can evoke emotion: boosting lows creates a dark, mysterious effect, while highs bring out more brightness and happiness. A good balance of the entire frequency spectrum, from 20 Hz to 20 kHz, creates a more balanced mix.
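The bell cut I mentioned on the guitars can be sketched with the widely used Audio EQ Cookbook (Robert Bristow-Johnson) peaking-filter formulas. The filter below is a generic sketch, not Pro Tools' actual EQ, and the frequency, gain, and Q values are my own illustration.

```python
import math

# Peaking ("bell") EQ biquad, following the RBJ Audio EQ Cookbook formulas.
# Parameter choices below are illustrative, not a recommended setting.

def peaking_eq(samples, fs, f0, gain_db, q):
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    b0, b1, b2 = 1 + alpha * A, -2 * cosw0, 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * cosw0, 1 - alpha / A
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:  # direct form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

# A 3 kHz sine dipped 6 dB at 3 kHz comes out at roughly half the level
fs = 48000
sine = [math.sin(2 * math.pi * 3000 * n / fs) for n in range(fs)]
cut = peaking_eq(sine, fs, f0=3000, gain_db=-6.0, q=1.0)
```

Sweeping `f0` around the 2-7 kHz band with a modest negative `gain_db` is the "scoop out what is necessary" move: the cut is narrow enough (set by `q`) to clear space for the vocal without dulling the whole guitar.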
Friday, September 10, 2010
Mixing Engineers
What makes a good mix engineer?
Having the skill to evaluate the quality of sounds and to identify frequencies by ear is a must-have for a mixing engineer. Critical listening skills are very important to the success of a mix. A few questions you should regularly ask yourself as you listen to a mix: Are levels balanced between all instruments? How are things panned? What is the instrumentation? As we learn to mix, we find out that there really is no perfect way to mix a song. There is a set of basic guidelines, and a few tools and the use of technology get the job done. Mixing is an art form; therefore, every song will have its own unique mix, just like every painting has its own unique set of styles and colors. Being able to set personal preference aside to mix a specific genre is also important. Music is full of emotion, and it is a mixer’s job to bring that emotion and the performance of the song to life in a sonic, panoramic representation.
A great mixer has learned over time to have a mixing vision. They know what tools they want to use and how to use them before they enter a mix. Novice engineers aren’t able to envision a mix because they haven’t spent enough time with the gear to learn what will efficiently produce a good mix. There are several ways to practice and educate yourself on mixing: you can read, you can watch someone mix (though you will not truly understand where they are coming from or their motives for the mix), or you can listen to and study mixes. None of these is as good a learning experience as just plain mixing songs. Critically listen to your mixes and compare them to other mixes you are trying to emulate. Use reference tracks to A/B your mix for level comparison. Some aspects of a mix depend on the genre: the beat and vocals are usually mixed up front in hip hop, while in jazz the snare is more important than the kick and should be mixed more in front.
Sequenced music is another mixing process that relies on the use of DAWs; you can mix while you create music. A conflict in using many of the preset pads and sounds in softsynths is that many of the sounds already have effects added. If you are working with MIDI, you can change these parameters; if you are working with sound files that have already been converted to audio, there are fewer possibilities for adding effects processing during the mixdown. The process of recorded music starts with songwriting and arrangement, then recording/editing and mixing, and finally mastering. When many tracks use the same range of frequencies, it is best to listen closely and attenuate the overlapping frequencies that generally cause masking. Recording live music requires good-quality gear, from microphones and cables to preamps, EQs, and compressors. It also requires a decent performance: no mixing engineer wants to sit in the studio and spend hours editing a part to make it sound good when the performance could have been much better, and a good performance captures a sound quality that “digital surgery” cannot truly replicate. Aside from the quality of each take, the mix is still what is most important, and it highly contributes to the success of artists and their musical creations. Something may not sound natural when EQing a soloed track, but in the context of the mix it may work. Things may start sounding funky altogether if you have been sitting at the console for too long; aural fatigue is typical when our brains require so much focus and energy to listen critically. Taking breaks is a must while mixing for long periods at a time. You may find that what you mixed yesterday sounds completely different and may not work for the mix.
Tuesday, August 31, 2010
Doing a rough mix
The first project will be to mix a track that has drums, bass, rhythm and lead guitars, vocals, and backing vocals. When I heard this song, I filed it in what I think of as today's pop/indie genre. The first mix of this song will be all in the box with the use of Pro Tools plug-ins. To get to the session, go to HOME and select the csumbuser account > folder MPA308 > RawTracks1 session template. Time to get started with cleaning up/organizing the audio and getting rid of things we don’t need in the mix. Start with organization, because it is easier to manage your workflow when things are in order and you know which tracks are which. We only need the 2-track patched in because we are only monitoring. Listen to the files to make sure they are the correct instrument and correspond with the track names. Use standard track order and meaningful names to ensure efficiency in the studio: Kick / Snare / Toms / Overheads / Room mic / Bass / Guitars / Vocals and any other instruments in the mix. Cut and erase all unnecessary audio that is taking up space or is redundant (you don’t need toms in the snare track when they are already on their own track!). Make sure you solo each track so you can hear what you are listening for. You can use the Strip Silence function for quickly editing out unwanted blank audio, and you can often use it on just about all of the tracks, but make sure to zoom in and check that you are not hacking off any important decays in the performance. You’ll notice some of the individual tracks were summed in the mix. We don’t necessarily need to use these tracks; they allow different mix options. The kick and snare are on a track together with compression, and the toms were summed as well. There are several room mic options, so be sure to choose the ones you want and hide/mute the ones you don’t need. Group all of the drums together with the bass when you are finished sweeping.
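Conceptually, a Strip Silence-style edit keeps only the regions above a level threshold, with some extra hold time so decays don't get hacked off. This is a rough sketch of that idea under my own assumptions, not Pro Tools' actual algorithm; the threshold and hold values are arbitrary.

```python
# Rough sketch of a Strip Silence-style edit: keep audio above a threshold,
# plus a few "hold" samples afterward so decays survive the cut.
# Threshold and hold values are arbitrary illustration numbers.

def strip_silence(samples, threshold=0.05, hold=3):
    out = [0.0] * len(samples)
    keep_until = -1
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            keep_until = i + hold   # extend the kept region past the hit
        if i <= keep_until:
            out[i] = s
    return out

audio = [0.0, 0.01, 0.8, 0.4, 0.2, 0.04, 0.01, 0.01, 0.0, 0.0]
print(strip_silence(audio))
```

The low-level tail after the 0.8 hit is preserved by the hold, while the quiet samples before it are zeroed; shortening `hold` is exactly the "zoom in and check the decays" risk mentioned above.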
There are plenty of guitars in this mix to make it sound full, so there are many options when deciding how to make the solos stand out. Leave the solo guitar panned center, and try a few things with the 4 rhythm guitar tracks: Gtr 1 center, Gtr 1+2 center, Gtr 1 L, Gtr 2 L, Gtr 1 R, Gtr 2 R. Try the same series with guitars 3+4, or 1+3, or 2+4, or any other creative combinations you can think of. There are many options and different timbres, recorded with different mics, and some were already processed with outboard compression, so don’t hesitate to experiment! Set up a REVERB stereo aux track and send the lead guitar there. Turn the input up to 100% on the reverb module and dial in what you think is necessary. Moving on to the vocals, we definitely want them center, and also sent to a little reverb. Backing vocals can be panned for a huger-sounding effect. Set up a delay aux channel and send both backing vocal tracks to it. For one of the backing tracks, pan the dry track hard L and the wet track hard R; for the other, pan the dry track hard R and the wet track hard L. Now we will have what sounds like 20 people singing. Use complementary EQ for all the tracks: listen to the mix as a whole for overbearing frequencies in the individual tracks, and scoop out what is necessary using the 7-band parametric EQ. A low-cut/high-pass filter should be used on the reverb and the backing vocals to take out the fundamental that is already there; reverb on the bass end of things tends to make a mix muddy and ruin the aesthetic you are trying to achieve. Go through the song and add markers in the appropriate places to outline the song form, so you can easily see where a chorus or verse is if you need to make edits in sound quality, performance, or arrangement. SAVE the session in the MPA308 RawTracks1 folder as Group4RawTracks1.
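The backing-vocal widening trick above (dry hard on one side, delayed copy hard on the other) can be sketched in a few lines. This is a toy mono-sample model of my own, not a session recipe; the delay length is arbitrary.

```python
# Toy version of the backing-vocal widening trick: dry signal hard left,
# a delayed copy hard right (mirror it for the second backing track).
# The delay length here is an arbitrary illustration value.

def delay(samples, delay_samples):
    """Return a copy delayed by delay_samples, zero-padded at the start."""
    return [0.0] * delay_samples + samples[:len(samples) - delay_samples]

def widen(track, delay_samples):
    left = track                          # dry copy, panned hard left
    right = delay(track, delay_samples)   # wet copy, panned hard right
    return left, right

vox = [1.0, 0.0, 0.0, 0.0]
left, right = widen(vox, 2)
# left keeps the hit at sample 0; right carries it two samples later,
# which the ear reads as width rather than as a distinct echo (when short)
```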