Words: the way we describe sounds, sights and smells is so important and yet so imprecise. Descriptive words are, by their very nature, imprecise and ‘subjective’, giving a description that can only mean something in the light of the experience of the person to whom you are talking. And yet it’s impossible to use purely ‘objective’ words to describe a sound, except in extremes: saying ‘silent’ rather than ‘quiet’, ‘on’ or ‘off’, ‘major’ or ‘minor’, ‘rising’ or ‘falling’.
I’m trying to illustrate the difficulty of describing concepts in the world of sound perception.

Let me give a simple little example of some good interplay between subjective and objective….

In the 1960s I was in a recording studio one day struggling with the tuning on a guitar. With me was the MD of the session, and we had been talking about musical pitch and how impressive it seemed when a conductor shouted at one of the second violins, complaining that his ‘C sharp at bar 22 tends to be played under the note’, when actually, from where the conductor stands, any pitching error stands out like a sore thumb.
He suggested to me that as I was tuning the guitar I should try not to listen for the exact matching ‘pitch’ of the note (objective), but that I should listen more to the ‘colour’ or ‘mood’ of the note (subjective), that way the tuning would be easier and faster. I tried this, and I was amazed by the result. By carefully listening to the ‘colour’ of the note, I could get the tuning spot-on very quickly… you try it, it’s faster than using a digital tuner!

The point is that he was using subjective words to achieve real objective results, and that’s what we all must try to do. The trick is to tailor the subjective words and phrases so that the picture created in the mind of the listener (to whom you are talking) is clear and unambiguous. It’s so easy to believe that we both mean the same thing when actually what is in our minds is completely different.

I want to think for a moment about eyes and ears.
The human eye has a lens at the front which focuses an image onto the retina. Now just try a little experiment. Look at a single object or place, and concentrate on keeping your eyes steady; don’t look from side to side at all. You will notice very quickly that only a very small area of the thing you are looking at is sharply in focus and clearly defined; the remainder, the vast majority of the field of vision, is progressively less distinct the farther from the centre it gets.

Now in everyday life what actually happens is that your eyes are constantly on the move….. notice on a television soap opera how, when you see a close-up of two people, their eyes are constantly scanning the face of the other…. It’s quite disturbing when it’s pointed out!
What’s actually happening is that the brain is absorbing masses of detailed information from the most sensitive parts of the eye, and it’s creating a running image that we consciously think is pin-sharp and detailed over the whole field of vision.

So what has this to do with sound?
Well, the ear is actually a good bit more complicated.
We all know about the eardrum and the beautiful little bone structures that work to convey sounds from the eardrum across the middle ear and into the inner ear or ‘cochlea’, but that’s where the biology starts to get a little fuzzy and the only sensible discussions that move on from there are usually to do with hearing sensitivity, hearing loss, ageing and disease.
What really happens to those sound signals, and how is ‘quality’ maintained?
I think that the reality is frightening. My understanding of the physics (and I must admit that some of this is conjecture) is that we should forget the idea of ‘sound’ as we know it being led into the cochlea; it’s easier to think of it as information signals; even think of it as digital if you like, although it certainly isn’t ‘digital’ as we know it.
Along the length of the cochlea there are millions of tiny hairs, the cilia. These connect to individual nerve cells, which ‘fire’ when the hair is moved. So the route of transfer of sound from the air to the brain is via your outer ear, the pinna, across the eardrum, through the hammer, anvil and stirrup bones to the fluid in the cochlea, where hairs are displaced, firing signals through nerve fibres to the brain, where some very fancy processing takes place to create what seems to you to be the reality of the sound around you.


Now, talking about those processes in the ear, it’s all really subjective; there is very little that can be measured or analysed in these processes between real life in the world around us and what we perceive.

The personal realities of what is visually beautiful and what is ugly, of what sounds true, pleasant, distorted or ordinary: these are subjective things which have to be based on a combination of our shared human history and our individual experience of growing up and learning.

So to start to try to apply this sort of thinking to professional sound, we must get to grips with those things that are important and those that are less so. And this is where I got to some years ago, and where I almost gave up to take up farming because of the enormity of what became obvious. The truth is that however poor and inefficient that transmission path from the outside world to your brain seems to be, the resultant perception of sound that we all have is quite fantastic.
In school and college we are taught that we can hear across a frequency range of about 20 Hz up to about 14 kHz, and from an intensity of 15 (absolute) decibels up to 120 decibels, that is, from ever so quiet up to very loud. Yet this tells us nothing; it’s about as relevant as saying that the Mona Lisa is coloured paint on a board. It tells us nothing about quality and less about emotion.



Did you know, for example, that we can hear subtle differences in phase (that is, time differences) over a huge range of frequencies and to very fine limits? This ability makes it possible to pinpoint the direction of a sound; not only in a radial sense around the body, but also up and down.
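That radial pinpointing comes largely from the arrival-time difference between the two ears. A rough back-of-envelope sketch in Python, assuming a simple two-ear model (the ear spacing and speed-of-sound figures here are illustrative, not measured):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
EAR_SPACING = 0.18       # m, an assumed distance between the ears

def interaural_time_difference(azimuth_deg):
    """Approximate arrival-time difference between the ears for a
    distant source at the given azimuth (0 degrees = straight ahead)."""
    azimuth = math.radians(azimuth_deg)
    return EAR_SPACING * math.sin(azimuth) / SPEED_OF_SOUND

# A source 30 degrees off-centre arrives about a quarter of a
# millisecond earlier at the nearer ear
itd_30 = interaural_time_difference(30.0)
```

The striking thing is how small these differences are: a fraction of a millisecond, yet the brain resolves them routinely.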

We have a little colony of Firecrest Wrens living in a forest close to my house; they love to scramble about in a patch of dense buddleia bushes. I can get quite close to them, and I can find them by standing very still and listening to their chirps. The sound they make is a short ‘chip’ and that gives my ears and brain enough information so that I can search a narrow area and know for sure that the little bird is there.

About 20 years ago I was working on monitor loudspeakers and quality assessment with the IBA, during the days of Michael Gerzon. I thought it would be interesting to do some experiments to see if it was possible to fool the directional accuracy of the ear by introducing small time delays in music signals. It’s commonplace now, but at the time it had never been done. I found that the perception of ‘direction’ was much more to do with phase (time) than with volume, or the classic ‘pan’ control that was, and still is, used to achieve image width.

And how about up/down information? That’s another story, it’s even subtler and it’s to do with those strange creases around our ears.


That schoolboy knowledge of what it’s possible to hear is a classic oversimplification. The reality is that we all have a sort of built-in volume control, and that’s easy to show. If you can find a place that is truly quiet (quite impossible in London, except perhaps in the most sophisticated recording or radio studios), try sitting still in that ‘no noise’ environment for just a couple of minutes. Before long you will start to hear sounds that are normally far too quiet to perceive: things like the blood flow in your ears, your own heartbeat and body processes. These noises are far lower than can normally be heard; it’s a sort of biological automatic gain control.

At the other end of the scale, I well remember sitting directly in front of the stage-left stack at an AC/DC concert where a friend of mine had designed the very first 10-kilowatt PA rig. The sound was excruciatingly loud, but as I’m sure everyone here already knows, within seconds you get used to it and it doesn’t hurt quite so much! That’s our automatic gain control again. It takes a few seconds to operate, to turn down; it takes a good deal longer to return to normal. The first stage of recovery, which I think is probably within the brain, operates within a few minutes, but there is a secondary effect, probably a desensitising mechanism within the cochlea, which takes up to 8 hours to recover, making you feel ‘deaf’ for hours….. Yet even within that deafness the other hearing mechanisms work normally, and the sensitivity of your ears is actually affected much less than it feels.

While we are on this subject, I have little respect for interfering politicians who in the depths of their ignorance place legal limitations on things like monitoring volume levels. My research may be verging on anecdotal, but the majority of sound engineers that I know who subjected themselves to high monitoring levels over a number of years, and I include myself in this, have retained excellent hearing, and contrary to expectation are not stone deaf.

Our ears are much cleverer than our so-called audiometric specialists realise.

(DF NOTE - Because my Dad's views on this are off the wall to say the least, I feel that I have to add that high noise levels ARE dangerous - there is plenty of evidence for that. So, please don't take off ear protection because Ted says it's ok to do so. If you are slightly deaf after exposure to high noise levels from monitoring or any other noise source, your ears probably won't ever be the same again! I have to add that engineers who have impaired hearing often don't or refuse to realise it... )


Now I want to start to move into less well-charted waters: how and why do we perceive some sounds as ‘nice’ and some as ‘nasty’? I believe that the basic answer lies in what is natural and what is unnatural. By this I mean that there are sounds that occur in nature, sounds that man has been exposed to for thousands of years, the commonest of course being human speech.

What I’m trying to lead up to here is one of the key points of my talk: the perception of distortion.

(‘Distortion’ sounds like an objective word; it can be given a mathematical value. Yet it isn’t objective: it’s an ‘umbrella’ word covering any sort of variation from what we think of as true or normal.)

So far I have been banging on about how wonderful the ear is and how amazingly well it can perform, yet when it comes to distortion it doesn’t seem to be quite so good:
In nature, harmonic distortion of sound is restricted to how sound is affected by passing through or over objects, trees or grass, and being reflected by rocks and the ground. If the sound is affected at all, it is ‘bent’ asymmetrically; scientifically this is called second-order distortion (or, more generally, ‘even-order’ distortion). Reflections and reverberations create higher-order distortions, but these work as ‘colours’ to the sound, and our hearing can interpret them in other ways, sensing distance and physical conditions. Basically, the ear is well used to hearing even-order distortion; it is not unusual or disturbing, and we subconsciously treat it as ‘nice’.
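That asymmetric ‘bending’ can be illustrated numerically. A small sketch (the 100 Hz tone and the 0.2 coefficient are arbitrary illustrative choices) showing that an asymmetric transfer curve produces a second harmonic but essentially no third:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # exactly one second of audio
tone = np.sin(2 * np.pi * 100 * t)          # pure 100 Hz tone

# Asymmetric ('bent') transfer curve: a second-order term flattens
# one half of the wave more than the other
bent = tone + 0.2 * tone**2

# With one second of signal, FFT bin index equals frequency in Hz
spectrum = np.abs(np.fft.rfft(bent)) / len(bent)
h1 = spectrum[100]   # fundamental
h2 = spectrum[200]   # second harmonic: clearly present
h3 = spectrum[300]   # third harmonic: essentially absent
```

So the ‘natural’ bending of sound adds even harmonics only, which is exactly what the ear has always lived with.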

Ted is searching for distortion on some old gear...

There are a number of sounds created by man that, to our ears, seem to exhibit other, nastier forms of distortion. These are symmetrical clipping distortion, where the sound volume can’t go any higher or lower so it ‘clips’ at the extremes, and ‘crossover’ distortion, where although both positive and negative sound pressures work properly, there’s a sort of ‘no man’s land’ in the middle where not a lot happens. An example in the real world is that particularly nasty 2-stroke motorcycle sound, but more importantly, it can be the type of distortion that occurs when poorly designed modern solid-state amplifiers are driven too hard or not hard enough.
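Both of these man-made distortions are symmetrical, and a symmetrical nonlinearity generates odd harmonics rather than even ones, which is a sketch-level way of seeing why they sound so alien. A small numerical illustration (the clip level and dead-zone width are invented for the example):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)

# Symmetrical clipping: the waveform simply can't exceed +/-0.7
clipped = np.clip(tone, -0.7, 0.7)

# Crossover distortion: a 'no man's land' around zero where nothing happens
def crossover(x, dead=0.2):
    return np.where(np.abs(x) < dead, 0.0, x - np.sign(x) * dead)

notched = crossover(tone)

def harmonic(x, k):
    """Magnitude of the k-th harmonic of the 100 Hz tone (1 s of audio,
    so bin index equals frequency in Hz)."""
    return np.abs(np.fft.rfft(x))[100 * k] / len(x)

# Both symmetrical nonlinearities generate odd harmonics (3rd, 5th...)
# and essentially no even ones
clip_h3, clip_h2 = harmonic(clipped, 3), harmonic(clipped, 2)
notch_h3, notch_h2 = harmonic(notched, 3), harmonic(notched, 2)
```

The odd-harmonic signature is the opposite of the even-order ‘bending’ that nature produces, which fits the nice/nasty distinction being drawn here.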

Naturally occurring sounds generally sound pleasing, unnatural sounds generally sound nasty.

Natural sounds are usually composed of fundamental frequencies surrounded by even-order harmonics, rather like second-order distortion…. although I’m not so sure about using that word here! For example, a violin note: this is a fundamental frequency that is flattened on the bottom and has irregularities along its length. It can be described as a note with rich harmonics around it; it can also be described as a note with heavy even-order distortion; and it can be described as ‘beautiful’.


When we have a musical signal, and we listen to it on what we think of as a ‘good’ reproduction system, we would (generally) be able to notice a difference from perfect reproduction if the distortion were much greater than 0.1% for even-order harmonics, while for odd-order harmonics the figure is much closer to 0.005%, and could be even lower.

Rupert Neve has had a good deal to say about this in the last few years… and I agree with him almost completely, although Rupert combines the sensitivity of ears to distortion with an awareness of frequencies outside the normal range of hearing.
He states that a number of engineers can detect faults in sound if out-of-band frequencies are filtered out. I think it is not as simple as that; the differences detected are more likely to be distortion artefacts introduced by the filters.

Danny Fletcher Note - What we are saying here is that the perceivable difference in sound when strong filters are introduced (for recording onto media like CD, with 20 Hz-22 kHz high-order filters) is not necessarily the lack of extreme high and low frequencies; it is the subtle phase and harmonic implications, within the audible band, of the filters themselves.


I have talked a bit about how hearing is much more than just a physical ear, which in itself is a device with so many variables that it must be categorised as a work of art. And I have gone on to say that the physical ear is only the beginning of the story; the way the brain interprets sound is as important, or even more so, to an understanding of the fine points of sound.
One more little example of how the ear is only a part of the story….
When you are sitting in a pub or a restaurant, have you noticed how disturbing it is if there are people within earshot but exactly behind you? If people are in front, in view, it’s OK; you can relate sounds to what you can see. But if the sound is behind you, it generates a whole new set of reactions which are, in the main, disturbing…. And this is why I think the original ‘surround sound’ did not meet with any significant success, whereas ‘5.1’ is much more successful: it is used almost exclusively for film soundtracks, where being disturbed is a significant part of the enjoyment.


Going back to how amazing our ears are at handling volume levels, from so quiet that you can only hear it in a silent environment up to so loud that the sound makes your teeth rattle: it doesn’t take a great deal of imagination to see that reproducing music, or any sound, with a pure and completely natural dynamic range is both very difficult and, with a little more thought, not a very good idea!

Dynamic compression is a necessity. Listening to a radio programme in a car, the volume level needs to be constant: the louds not too loud, the quiets not too soft.
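A minimal sketch of such a compressor in Python (a simple feed-forward design; the threshold, ratio and time values are illustrative assumptions, not any particular product):

```python
import math

def compress(samples, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Minimal feed-forward compressor: levels above the threshold are
    reduced by the ratio; gain changes are smoothed with separate
    attack and release time constants."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0                    # running level estimate, in dB
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        coeff = atk if level_db > env_db else rel
        env_db = coeff * env_db + (1 - coeff) * level_db
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1 - 1 / ratio)   # pull loud passages down
        out.append(x * 10 ** (gain_db / 20))
    return out
```

A loud passage is pulled down towards the threshold, while anything quieter than the threshold passes untouched, which is exactly the ‘louds not too loud’ behaviour a car radio needs.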

Historically, volume compression was probably first used in the film industry in the mid-1920s. The idea of getting sound onto film had just been developed, and in those early days the dynamic range of optical sound was about 40 dB; the crackly noise from the irregular film emulsion was only about one-ninetieth of the loudest volume that could be recorded. This was a severe restriction on the recording engineers of the time, so a primitive kind of compression was developed, involving mechanically shutting down the light path in quiet passages and opening it again when the sound reached a certain threshold. This did the trick, but gave the sound an odd, stilted quality which I’m sure older people will remember…. that rising and falling of the uneven hiss behind the speech on old black-and-white films.

I won’t go through all the history of compression in the early days, most of it was hit and miss and there was a considerable amount of good fortune in some of the developments! But more of that in a minute.

During the time of the Second World War, both broadcasting and record production had urgent need of good reliable compressors that reduced dynamic range for transmission, and that sounded as natural as possible.
Engineers thought up a novel way of using thermionic tubes: they altered the bias on the electrodes so that the gain of the amplifier varied. This process is called ‘variable-mu’ compression, and all the early tube compressors used it.

This, in itself, was a stroke of luck for the engineers; it was electronically quite easy to get the time constants, that is the attack and release times, sounding acceptable to the human ear.

Danny Fletcher note - 'Time constants' refers to the non-linear or 'natural' curves of attack and release, which luckily sound good to the human ear. Replicating these curves with modern components and ideas is a lot more difficult, so the electronic limitations of the time were actually an advantage to the overall sound.
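The ‘time constant’ idea can be made concrete with a one-pole, RC-style smoother: after exactly one time constant, the level has covered about 63% (1 − 1/e) of the distance to its target. A short sketch (the 10 ms figure is illustrative):

```python
import math

def one_pole_coeff(tau_s, fs):
    """Per-sample smoothing coefficient for an RC-style time constant."""
    return math.exp(-1.0 / (tau_s * fs))

fs = 48000
tau = 0.010                     # a 10 ms 'attack' time constant
a = one_pole_coeff(tau, fs)

# Step response: feed a constant target of 1.0 for exactly one
# time constant's worth of samples
y = 0.0
for _ in range(int(tau * fs)):
    y = a * y + (1 - a) * 1.0   # exponential approach to the target
# y is now about 0.632, i.e. 1 - 1/e of the way there
```

This exponential approach is the ‘natural’ curve the note refersns to: it is what simple analogue circuits give you for free, and it happens to be what the ear likes.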

Another stroke of luck, which is not at all well known, is that as you push more volume into a tube compressor, it compresses up to a point, but beyond that the compression actually starts to decrease. We now know that this is a very valuable feature and contributes hugely to the good sound of many of these early compressors.

These tube compressors were large, heavy and expensive, taking up lots of rack space, and recording studios in the 1960s were discovering that it was a big advantage to have compressors, not only in the main recording path, but also on individual channels…. Multi-channel recording was starting to come in and mixers were getting bigger and bigger.

DIVERSION….. PLAY ‘The Four Seasons’ tracks; ‘We can work it out’ and ‘Silver Star’….. discuss compression and limiting sound.

Some engineers at the Fairchild Corporation in America started experimenting with other ways of changing the gain of an amplifier. They used a bit of lateral thinking and connected photo-sensitive resistors (usually used to sense the presence of daylight, for turning street lights on and off) across the input of an amplifier, and drove the output of the amplifier into a simple torch (flashlight) bulb. The light from the bulb acted on the resistor to reduce the gain, or volume, and there you have the first optical compressor.

A little later, in 1965, some other engineers at Teletronix in the States improved dramatically on the old Fairchild design by using a much faster-acting light source. This was an electro-luminescent panel, originally designed as a futuristic light source in the 1930s; it had been uneconomic as lighting, but it turned on and off quickly! Their compressor was called the LA2 and proved to be a huge success.

In 1962 I was regularly working with Joe Meek at his Holloway Road studio….. but the LA2 was not yet available then.
In the studio Joe had a couple of big valve compressors, but he was experimenting with cadmium-sulphide photo-resistors and amplifiers, much as the Fairchild engineers had done….. By that time Fairchild had produced a commercial photo compressor, but it really didn’t sound very good and it was very difficult to use.

I played about with some cells and a 3-valve amplifier and achieved a working compressor that we used in the studio for some months. In the years 1963 to 1965 Joe used various versions of these compressors on everything he produced.


I want to play just a short example of what Joe Meek was all about. Just as Vincent van Gogh and Claude Monet were able to take advantage of the limitations and properties of paint, I think Joe Meek used the limitations of his equipment, and the technology of the time, to come up with something that had an appeal all of its own.

His work was simplistic….. it was often naïve. I would say that 80% of his output was so bad, musically, that it was an embarrassment, and yet now and again he would come up with a sound that was right for the time…. What fairground would be without ‘Telstar’!?

This is one of his better works! It’s ‘Have I the right’ by The Honeycombs. The original recording was hideous….. Joe worked on it with overdubbed piano and organ, cardboard boxes, stamping feet, vocal backings and finally, while he did the final master tape, he mixed in the output from an AKG D19 microphone while my brother hit it with a tambourine!

( PLAY ‘Have I the right’ )
discuss extreme compression.


During the years between Joe Meek the man and Joemeek the company, which I started in 1993, I ran a studio in Denmark Street for Keith Prowse Music, and then designed first mixers, and later complete radio station installations, for UK Independent Local Radio, the Foreign and Commonwealth Office, and the BBC.

After a flurry of activity in a completely different field, banking communications, I decided to start to take life a little easier down in Devon. I started to get into doing some video, and a few small recordings for local artists and bands…. And I needed a decent compressor.
Now bear in mind that this was 28 years later: we had had the introduction of solid-state equipment and integrated circuits, and digital was starting to come in.

I looked at what was available, and it was all VCA-based…. these clever integrated-circuit amplifiers that can change their gain absolutely accurately and predictably to the needs of the designer. I tried a few, but they all sounded ghastly!

I thought it was time to reinvent one of the old ones we used to use in the Joe Meek studio, but times had moved on and I had designed a fair amount of other equipment over the period, so I threw together an idea using some good professional solid-state thinking. I retained the old light cell that we used in the old days, but replaced the filament bulb with LEDs driven from a powerful servo amplifier, so that I could control the attack and release times accurately…. something we could never do in the old days. The results were instantly brilliant. Within a matter of weeks I had built several working models, and my recordings were sounding big and fat.


A couple of London studios tried out these home-brew devices and wouldn’t let me have them back, so I was forced into starting a sort of limited production line.
But what to call the thing….. There was only one name that came to mind, and that was Joemeek; not because he was the greatest engineer, he wasn’t, but because that’s where the original thinking came from.

It’s only very recently that I’ve been able to start to understand some of the finer points of why these photo compressors sound as good as they do:

This is why I went into all that stuff about ears and hearing…. It’s relevant!

(PLAY beginning of ‘So Great’ and discuss heavy but clean compression)

A microphone picks up a human voice. The difference between the peaks of the signal and the average of the signal is generally taken to be about 12 to 15 dB. That means that to get an undistorted recording, you have to record the voice at least 15 dB below peak recording level. So put a compressor in the path with, say, 10 dB of compression on, and the voice will sound 10 dB louder.
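That peak-to-average difference is the crest factor, and it is easy to measure. A quick sketch (Gaussian noise stands in here for programme material, so the exact figure is illustrative; real speech typically lands in the 12 to 15 dB region mentioned above):

```python
import math
import random

random.seed(1)
# A noise test signal standing in for programme material
samples = [random.gauss(0.0, 0.1) for _ in range(48000)]

peak = max(abs(s) for s in samples)
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

# Crest factor: how far the peaks stand above the average (RMS) level.
# With 10 dB of compression, the average level can be raised about
# 10 dB closer to the peak recording level.
crest_db = 20 * math.log10(peak / rms)
```

The point of the arithmetic: the recorded average must sit a full crest factor below peak level, and compression is what buys that headroom back.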

Sounds good?

For a lot of purposes all those assumptions are OK, but reality is very different. For a start, the peak-to-average ratio may be predominantly 12 to 15 dB, but there is a significant and audible content of very short-term ‘spikes’ that can go at least 10 dB beyond that.

It's essential, absolutely essential, that the amplifiers handling these early signals straight from the microphone can handle these extremes of volume level, even though the little 'spikes' may last only a microsecond.

If an amplifier is well designed, it will handle these overloads, and it will not 'sag' when an overload hits it; it will respond perfectly even if it has been smashed by a transient spike.

The audio signal, with all its spikes and overloads, goes to the compression cell and is compressed.

Now because the preamplifiers have responded well to the transient signals, the audio remains 'clean' and good sounding. The compression takes place, which squashes the audio into a band of volume level that we can easily hear, that is neither too loud nor too soft. If the amplifiers were less perfect (less good!) then reducing the dynamic range would show up these imperfections and sound like horrid distortions.


And talking about distortions…. How about attack times; that is the time it takes for the compression to start to pull down the volume of the signal when it is too loud.

One would think that the right way would be an infinitely fast attack time…… so that as soon as the audio signal went beyond a predetermined threshold, the compressor would start to work immediately. Interestingly, in the 1960s, when the Americans were trying to squeeze as much volume as possible out of their medium-wave transmitters, they tried using primitive delay lines so that they could achieve so-called 'perfect' compression…. The results were not good. They found that a much better way was to have a slowish attack time, and to follow the compressor with a simple 'clipper' to take care of the momentary overloads. The clipper would create severe distortion, but only for a very short space of time. This sounded wonderful, and there was an added bonus…. music started to sound more exciting!


Release times are a simpler subject. Again, it seems sensible to have a very fast release time so that the audio signal returns to a sort of 'normal' level as quickly as possible, but in practice it's better to hold the compression on for a good few milliseconds before returning the gain to normal. And once again, when used on radio transmitters, this made the music sound even louder, which was just what the sponsors wanted.


As technology got better and better in the 1970s and 80s, these old truths got lost. Engineers started to design so-called 'better' compressors and limiters, and no one seemed to notice that sounds were generally getting brittle; those that asked were pooh-poohed and told that what we were getting was 'better' and 'higher quality' and 'more accurate' and lots of other nonsense.

A few top engineers, particularly in Nashville and Los Angeles, have retained their old Pultec equalisers and LA2A compressors; they have avoided modern IC-based mixing consoles, and appreciate that real quality audio can only be recorded with gear with serious overload margins….. It's easy to make a loud television commercial with an SSL desk, some VCA compressors and a 'Finaliser', but that's not the way to make a hit record!

I have to admit that I have been very fortunate with my compressor designs.
At the time I had no idea how much the natural attributes of the cadmium sulphide cell and LED combination affected the success of the compressor as a 'sound' or even a mood-shaping tool.

It was certainly by chance that I stumbled on the effects of the weird time lags in the cell…. The strange attack curve you get with a photo-electric compressor has a riveting effect on the human ear: it makes the sound seem louder, which any compressor will do, but it also adds a sort of 'urgency' to the sound which makes it seem even louder still. This is enhanced again by the funny release curve, which allows the sound to recover quite evenly for most of the change, but when the gain is several dB from fully recovered, it jumps back to normal.
Paradoxically, this gives the impression of smoothness even with quite fast release times.
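My reading of that release curve can be put into a toy numerical model. This is a sketch of the behaviour as described (invented numbers, not real cadmium-sulphide cell physics): the gain recovers exponentially for most of the range, then jumps the last couple of dB back to unity.

```python
def photocell_release(start_db, jump_within_db=2.0, rate=0.9):
    """Toy model of the photocell release described above: gain
    reduction (in dB, negative) recovers smoothly, then 'jumps'
    back to normal once within a few dB of fully recovered."""
    gain_db = start_db
    curve = []
    while gain_db < -0.01:
        gain_db *= rate                  # smooth exponential recovery
        if gain_db > -jump_within_db:
            gain_db = 0.0                # the final jump back to normal
        curve.append(gain_db)
    return curve

# Start from 12 dB of gain reduction and watch it recover
curve = photocell_release(-12.0)
```

The even recovery over most of the range is what reads as ‘smooth’, while the final jump gets the gain out of the way quickly, which may be why fast release settings on these cells don’t sound pumpy.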

General comments about the start of TFPRO, the detail design of the ‘Edward’

(Question time…. And close)

Copyright Ted Fletcher 2005