These are the outline notes for lectures given to students and graduates on Recording Technology and Foundation Degree courses for the Summer 2005 season.

Recording in the real world.

By Ted Fletcher


Teaching about sound and sound recording inevitably concentrates on the physical properties of sound and how to capture it.

I am becoming more and more convinced that this is entirely the wrong approach. It's fine and necessary to learn about the technology of capturing sound and reproducing it at a later date, but to get anything like a true understanding of what we are really getting into, one needs to understand more about how we hear sound: physically, by studying ears and the mechanisms within them, and, even more importantly, how sounds affect us and how we understand what sounds are.

Once we have some sort of grasp of what and how we hear, we can start to apply that knowledge to the various parts of the recording process, starting in the studio at the microphone, then moving on to the microphone preamplifier and other necessary or unnecessary parts of the recording chain, up to the recording medium itself, and the ways of monitoring what's going on, and how to use it once we've got it.


We are taught, parrot fashion, that sound doesn't behave in a linear way like string or water; it is logarithmic. Usually we are told some 'gee whizz' facts to help us grasp the scale, but sadly our brains don't work that way and it's a tough concept. I think the only way to come to terms with sound levels is to think in terms of dB (decibels) and try to remember that a 1000 watt amplifier is not that much louder than a 10 watt amplifier!

I am being intentionally flippant about this, but it holds at both ends of the scale: to make a loud sound louder, a lot of extra energy is needed; if a sound is already very quiet, you can take most of its energy away and it won't seem to get much quieter!
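To put numbers on that 1000 watt claim, converting a power ratio to decibels is a one-liner; here's a small Python sketch (the function name is mine, just for illustration):

```python
import math

def power_ratio_db(p_out, p_ref):
    """Express a power ratio in decibels: 10 * log10(ratio)."""
    return 10.0 * math.log10(p_out / p_ref)

# 1000 watts versus 10 watts is a 100:1 power ratio...
print(power_ratio_db(1000.0, 10.0))  # 20 dB -- nowhere near '100 times louder'
# ...while doubling the power only buys you about 3 dB:
print(power_ratio_db(20.0, 10.0))
```

So the hundredfold jump in amplifier power is only 20dB, and each doubling of power is worth a mere 3dB or so.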

And what about frequency? We can 'hear' a range of frequencies from about 25Hz up to around 14KHz. That can be measured as the 'frequency response' of our ears, but natural and musical sounds extend from as low as 8Hz and up to 40 to 50KHz. Is that relevant?

To answer that question we need to turn the problem on its head and start to talk about ears....


From the mid 1930s up to the late 1960s a mass of work was done in audio labs studying the biology and the physical limits of human ears. I won't be arrogant enough to dismiss the whole of that research out of hand, but I insist that all that work needs to be placed in the context of the knowledge that what we think we hear is very much more important than what some figure on a graph tells us we should be hearing.

The mechanism of the ear is reasonably well understood and taught; the path of pressure waves from the outside air causes movement of the eardrum, the bone structures act as an 'impedance converter' and transfer the vibrations across the middle ear to the inner ear where the pressure waves act on sensory cells in the 'cochlea'. That level of understanding of 'hearing' is about on a par with the knowledge that a dog usually has four legs.

To scratch the surface just a little.....


Physical hearing tests show that we can discern frequencies as 'notes' down to about 30Hz, yet in our daily lives we are subjected to, and are well aware of, lower frequencies (so-called 'infra sound'). These can come from mechanical things such as heating and air con systems, trains and motors, as well as from natural sources (storms and wind).

At the other end of the spectrum, not only are we aware of frequencies above 15KHz, but all musical sounds contain harmonic information at high frequencies that contributes to musical quality.


Any discussion about the range of volumes that are heard by the human ear is bound to be complicated. Simplistically, we can hear sounds as quiet as a pin dropping onto carpet at a distance of 20 feet (OK, that was a guess!) up to a level where the pressure of sound causes physical pain, as in front of the rig at an AC/DC concert. But within those extremes hearing does some amazing things: a trip out into the country on a quiet night can easily show how our hearing (note, I'm using the term 'hearing' rather than 'ears') changes and becomes very much more sensitive than normal.

Equally, in a noisy environment our hearing 'desensitises' as if it is compensating to make things more comfortable; and that's exactly what it is doing. Where these effects actually take place is debatable; some of the compression effects take place in the middle and inner ear but I suspect that most of it is in the brain.


So far we have considered sound and hearing in terms of ranges of perception. It's like describing a painting as 'various coloured patches on a flat plane'. But eventually we want to move towards an understanding of recorded (or created) performance, and so we need to know more about what our hearing considers good and acceptable and if there are any aspects of 'not so good' that have to be watched out for.


The simplest 'musical' note is a sine wave: a sound of a single frequency, devoid of harmonics. If a sine wave is distorted by compressing or constricting just the top or the bottom of the wave, harmonics appear in the sound. These are called 'even order' harmonics and they are musically related to the fundamental frequency. The 2nd harmonic is one octave above the fundamental, the 4th is two octaves, and so on.

BUT if the sine wave is distorted symmetrically, top and bottom, the resulting harmonics are called 'odd order' (3rd, 5th and 7th harmonics), and these frequencies are musically unrelated to the fundamental frequency; they sound harsh and unnatural.
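You can see this for yourself in a few lines of code: clip just the top of a sine wave and a 2nd harmonic appears; clip it symmetrically and the even harmonics vanish while the 3rd comes up. A sketch, assuming a 1KHz tone at 48KHz sampling (the clip level of 0.6 is arbitrary):

```python
import math

FS = 48000
F0 = 1000.0
N = 4800  # exactly 100 cycles of the 1KHz fundamental

sine = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]

def clip(x, lo, hi):
    return max(lo, min(hi, x))

asym = [clip(s, -1.0, 0.6) for s in sine]  # top only: asymmetric distortion
sym = [clip(s, -0.6, 0.6) for s in sine]   # top and bottom: symmetric

def harmonic_amplitude(signal, k):
    """Amplitude of the k-th harmonic of F0, via a single-bin DFT."""
    w = 2 * math.pi * k * F0 / FS
    re = sum(s * math.cos(w * n) for n, s in enumerate(signal))
    im = sum(s * math.sin(w * n) for n, s in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / len(signal)

# Asymmetric clipping puts energy at the 2nd harmonic (one octave up);
# symmetric clipping leaves the 2nd at zero but generates a strong 3rd.
print(harmonic_amplitude(asym, 2))  # clearly non-zero
print(harmonic_amplitude(sym, 2))   # essentially zero
print(harmonic_amplitude(sym, 3))   # clearly non-zero
```

(Strictly, asymmetric clipping produces some odd harmonics too; the point is that it is the only one of the two that produces the musically related even ones.)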

I have theories as to why even order distortion sounds acceptable while odd order doesn't. Part of the answer probably lies in the way the cells respond in the inner ear; they are tiny hairs of different lengths that sway and trigger impulses from their roots. Another possibility is that almost all harmonics that occur in nature are even order; the whistling of the wind, a human voice, the song of birds, all are rich in 2nd order harmonics.


(A lot of this is simple stuff... but just in case....)

The main types of microphone are 'moving coil' (dynamic), 'condenser' (capacitor), and 'ribbon'.

Moving coil mics are inexpensive, rugged and good for most sorts of signal.

Condenser (the word 'condenser' is an old-fashioned word for capacitor) mics used to be classed as expensive and delicate, but modern Chinese-manufactured ones are in the same price bracket as a good dynamic.

'Large diaphragm' mics sound smooth and full, and are good for vocals.

'Medium diaphragm' types are normally thought of as instrument mics. They don't have the impressive bottom end sound of the big diaphragm.

Capacitor mics can be either 'true capacitor' meaning that the polarising voltage in the capsule comes from a phantom power source, or 'back electret' where the capsule is permanently charged during manufacture. There is not a great difference in sound between the two types, but the 'true capacitor' tends to be quieter in operation.

Ribbon mics are expensive and delicate. Most have a 'figure 8' response, and have a very low electrical output. Quality can be extremely good.

(demonstration of microphone types; proximity effect and sensitivity)

So what is the purpose of a microphone? It's not such a simple question! If you are recording a string quartet, then the purpose of the microphone would have to be to reproduce the sound that it 'hears' as accurately as possible. But if you are recording say a brass instrument, or an electric guitar, then the purpose could be to reproduce the sound that you imagine you might hear when listening to that instrument.

The simplest and also one of the most difficult signals to record is the human voice. The recording studio of course needs to be free from serious reflections from walls, it has to be fairly 'dead' to avoid the sound being coloured by reflections, and the singer needs to be fairly close to the microphone so that the direct path of the voice to the microphone is short.

If a large diaphragm capacitor mic is used, and the vocalist is used to recording, then there's a good chance that the recording will be a success, but we have already made some enormous compromises compared to listening to a singer unamplified in the real world.

In the real world, the performer is normally seen, and the visual clues picked up by the listener compensate hugely for the problems of intelligibility because of reflections and distance; so already the recorded voice is by no means 'real', it is an idealised simulation.

Taking this one step further, if we can compensate and make a voice sound more 'natural' by eliminating reflections and having the microphone close, then why not go beyond this and enhance the sound of a recording? And that is exactly what we shall be doing for the rest of this talk.


For discussion and demonstration I'm using the TFPRO P10 unit because it has very flexible examples of mic amps, compressor and EQ.

Driving into the P10 I have just a simple portable CD player, so the signals are all recorded at 44.1KHz 16 bit.


There is a lot of false techno babble talked about mic amps. The basic conventional professional mic amp is a preamplifier with a medium input impedance, extremely low noise, and a very wide variation of gain.

All the really good ones achieve this by using two or more gain stages; this is because trying to get, say, 55dB of gain from a single gain stage while still retaining good noise and distortion performance is difficult.

The input impedance needs to be high enough not to slug down the signal from the microphone, yet low enough to realise good 'self noise' figures, and there are now integrated circuits that do the job well.

The output from microphones can be as low as -70dB or as high as 0dB, with anything in between, so the amp must be flexible!

The conventional mic amp works as a voltage amplifier, that is, it senses the voltage generated by the sound in the microphone, and amplifies it. The P10 (and other TFPRO mic amps) actually work in a slightly different way; they sense the current generated by the microphone and, in the first amplifier stage, amplify that current before converting it to a voltage.

Whether a mic amp 'sounds good' or not is an interesting question.... It's more than a simple choice; a lot of less expensive mic amps don't 'sound good' for the simple reason that the design is over-stretched; the designer has believed the spec sheets provided by the IC manufacturers and has tried to get too much gain from a single chip. This is very common in inexpensive preamps and mixers. The effect is that they sound slightly 'thin' and it's not until you hear a really good mic amp that you realise how bad the bad ones are!

And then there are the 'good' ones; they can vary a great deal depending on whether they have a transformer at the input and what the amplifier input impedance is; all these things affect the sound.

An input transformer tends to give the preamp a warm sound that is full of life. Variations in input impedance alter the character of the sound of passive microphones (dynamics and ribbons).

The mic amps in the P10 have chunky transformers.


The TFPRO P10 has both a phase inversion button and a variable phase control on each channel.

These are really 'get you out of trouble' controls that can avoid problems where using more than one microphone introduces phase cancellations; these show up as thin reedy sounding mixes.


Probably the single most important tool of the recording engineer is the volume compressor...... but first, a couple of definitions; just in case you are a bit hazy about it! A COMPRESSOR is a variable gain amplifier that reduces dynamic range; that is, if you increase the volume going in by an amount, the output will increase by a lesser amount depending on the compression ratio.

A compression ratio of 2 to 1 means that for an increase of 2dB at the input, the output will only rise by 1dB.
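That ratio arithmetic is simple enough to sketch in code (the -20dB threshold here is just an example figure, not anything taken from a real unit):

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=2.0):
    """Static gain curve of an idealised compressor, all levels in dB."""
    if input_db <= threshold_db:
        return input_db  # below threshold the signal passes unchanged
    # Above threshold, each dB of input rise yields 1/ratio dB of output rise.
    return threshold_db + (input_db - threshold_db) / ratio

# 2:1 ratio: a 2 dB rise at the input becomes a 1 dB rise at the output.
rise = compressed_level_db(-10.0) - compressed_level_db(-12.0)
print(rise)  # 1.0
```

Push the ratio up towards 20:1 or more and the output barely moves above the threshold at all, which is exactly the limiter-like behaviour described below.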

A LIMITER is an amplifier whose output is restricted to a defined point; if the output tries to go beyond that point, then the gain of the amplifier is reduced to compensate.

In the real world, a high ratio compressor works very much like a limiter.

The attack of a compressor is the time taken for the gain to reduce after the arrival of a higher level at the input. The release is the time taken for the amplifier to recover to its 'no signal' gain state.
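One common way those two times are realised, in software and conceptually in analogue side-chains too, is a one-pole envelope follower with separate attack and release coefficients. A sketch, with made-up 5ms attack and 50ms release times:

```python
import math

def envelope_follow(levels, fs=48000, attack_ms=5.0, release_ms=50.0):
    """Track signal level: fast when it rises (attack), slow when it falls (release)."""
    attack = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in levels:
        # Pick the coefficient by whether the signal is above or below the envelope.
        coeff = attack if x > env else release
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# Feed it a 10ms burst: the envelope charges quickly, then decays ten times slower.
burst = [1.0] * 480 + [0.0] * 4800
env = envelope_follow(burst)
```

The compressor's gain reduction is then driven from this envelope, which is why the attack and release settings shape the sound so strongly.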

There are 5 main types of compressor in use nowadays:

The digital compressor, where the gain structure is carefully controlled and the attack and release are highly predictable.... And it doesn't sound too good!
The FET compressor; unusual nowadays as they tend to be noisy and prone to high distortion, but they sound OK for some instruments.
The 'Tube' compressor; expensive and unreliable, but make a beautiful sound.
The VCA compressor; the commonest analogue type, highly predictable and not very nice sounding.
The optical compressor; my speciality, and in my opinion by far the best sounding.


The origin of the word 'EQUALISER' comes from the film industry, where in the early days it was a challenge to get the sound of recorded speech to be the same when the film was edited. Different camera angles required different microphone positions and so the sound had to be changed to make it all the same... or 'equalised'.

Like compressors, there are several basic types of equaliser.... In outline there are digital equalisers, which attempt to mimic the best analogue EQ. But in the analogue domain there are a number of configurations and types that are worth mentioning. The most notable is the 'Baxandall' circuit, developed in the 1950s by the British engineer Peter Baxandall. This was the first active (amplified) EQ circuit that gave predictable and very good sounding HF (high frequency) and LF (low frequency) lift and cut.

The conventional way to achieve variation in the mid frequencies was to use 'tuned circuits' composed of inductors (coils) and capacitors. Later circuits 'improved' on this by using active electronic models of inductors called 'gyrators'.

In practice the very best EQ units, and I include the TFPRO P9 in this, still use actual inductors even though they are expensive and heavy.

The P10 EQ is intended for mild 'colouring' of the sound and so it uses a combination of a Baxandall HF end with 3 sections of gyrator-based mid and bass.


It's time to put a voice in front of a microphone and think about what we are doing.

Our brains imagine a sort of idealised sound of a voice; the extraneous noises are ignored and the voice is clean and pure.... But this isn't true of course; in the real world there are all sorts of confusing noises and reflections. To make our recording acceptable, we need to idealise it, so we place the microphone close to the voice. This has the effect of removing interfering noises, and it also 'softens' the sound because of the proximity effect.... If the sound source is very close to a microphone, there is a noticeable lifting of the bass response.

Because of the proximity, there's also a possible problem with wind blast from the breath.... Many mics are supplied with foam plastic wind filters; these are worse than useless: they are not very good at deflecting the 'wind' and they affect the sound, making it dull.

There are two types of effective 'popshield', the first is the nylon stocking type, and the second is the expanded metal type.


This is a good quality large diaphragm condenser mic (of the Chinese variety!)

(demo diaphragm size, VU underread etc.)

Now rather than keep saying 'hello, hello' into a microphone, here's an absolutely untreated voice sound that I recorded last week using a tube mic and this preamp.

The voice was placed about 20cms from the mic, and I used a stocking type popshield. The voice is my wife Barbara who was one of the session singers back in the Joe Meek days.

Demo..... gain, phase, compression and EQ.

Now for the purposes of this demo, I actually recorded the voice with no compression. I don't normally do that. This is very much personal preference, but I like to apply just a little compression during recording; it helps to get the best level onto the digital track..... OK, so it's a hangover from the days of analogue tape! I can try to justify that as well by saying that I like to use at least two types of compressor on a voice, to make it interesting and punchy, without obvious sounds of heavy gain changes.

Most voices I record completely 'flat', that is with no EQ at all. But if there are signs of low frequency problems it's good to be able to use a high-pass filter; the one on the P10 works at 75Hz, which is a good standard useful frequency. I usually keep the EQ switched OUT when recording, unless the voice sound is particularly dull; if so, it's OK to put in a touch of high mid lift, say 3dB at 6KHz, just to fizz it up a bit. Ideally, I like to use a harmonic enhancer; just the merest touch during recording adds real life to the voice sound.
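As an aside on what 'say 3dB at 6KHz' means numerically: the standard digital way to build such a peaking band is the 'Audio EQ Cookbook' biquad. This has nothing to do with the P10's analogue circuitry; it's just a sketch of the curve shape (the Q of 0.7 is an assumed, fairly gentle value):

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q=0.7):
    """Peaking EQ coefficients (b, a) from the RBJ 'Audio EQ Cookbook'."""
    amp = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, fs, freq):
    """Filter magnitude response in dB at a single frequency."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20.0 * math.log10(abs(h))

b, a = peaking_biquad(48000, 6000.0, 3.0)
print(gain_at(b, a, 48000, 6000.0))  # 3 dB right at the centre frequency
print(gain_at(b, a, 48000, 100.0))   # back near 0 dB well away from it
```

The lift is full strength only at the centre and falls away either side, which is why a modest 3dB bell can brighten a voice without making it obviously 'EQ'd'.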

When we come to the mixdown, of course anything goes....

Next I have a very well known drum recording taken from a test disk. This demonstrates clearly the different effects possible using a good stereo compressor!

(Complete demo of compression range and relationship to EQ.)

Copyright Ted Fletcher 2005