Beating the On-Air Processors
It’s a battle waged behind closed doors, against an enemy that’s largely misunderstood. So why do some mixes sound loud on air, while others collapse in a heap? Let’s find out.
Text: Nigel Ross
For years radio stations, particularly those playing current hits, have been at war with their competitors over ‘perceived on-air volume’. It’s a battle that’s been raging since the airwaves first carried man-made signals, and to this day perceptions of volume remain critical to a station’s commercial fortunes. To gain that edge, radio stations use on-air processors to maximise their output signals.
On-air processors from companies like Orban, CRL and many others use every trick in the processing book to give a radio station a sonic edge. These processors are designed to make the radio transmission as loud as is technically possible. The theory has always been that listeners will stop at the loudest station – and historically that was true to some extent – but whether being the loudest still makes people stop and listen, over and above content and programming, is hard to say. The loudness wars arguably had more effect back in the days when most radios had knobs rather than scan buttons, but rightly or wrongly, there’s no doubt that radio stations are still trying to be the loudest on the dial. As a consequence, the biggest challenge producers and audio engineers face today remains creating an audio product that doesn’t get squashed – or even pushed back in level – by these often heard but rarely seen audio-crunching boxes.
BATTLING THE LIMITERS
I could spend pages discussing where I think stations go wrong setting up their processors, and how they cause more listener fatigue than a Boeing 747, but I’ll leave that for another day.
The battle of the radio producer and engineer is mainly centred on how to get past these on-air processors. It was without a doubt the biggest challenge I faced when the company I worked for many years ago started producing ‘ground-up’ musical imaging. Our production team back then was made up entirely of musos who, for the most part, had come from radio production backgrounds. Between us we’d made literally thousands of commercials, promos, sweepers and pretty much every produced element that radio stations broadcast. We certainly knew how to mix a VO (Voice Over) with pre-produced music and effects, and get it to leap out of the radio. But when it came to producing our own music from the ground up, well… that was a different matter!
Why was it so much tougher when we also had to engineer the music? Why were our ground-up productions struggling to leap out of the radio in the same way as those which carried the pre-produced music and effects? What we discovered after many months (or was it years?) of research – which involved sitting in with some of the best music engineers in the world and just trying everything we could think of – were techniques that apply to virtually any form of audio engineering. Whether you’re making the simplest radio ad, mixing a single, or creating imaging and commercials from the ground up, I hope what follows here will help shine a little light on how to get past the dreaded on-air processors.
GETTING THROUGH
I knew we had an issue when we built our first jingle-based radio imaging package for Melbourne’s TT FM; the tracks we created just weren’t jumping out like the songs around them. The brief from the station’s program director at the time was to make the package about 25% hotter than the current hits the radio station was playing. Back then we were talking M People, Haddaway, Real McCoy, Corona, Madonna… all well-produced pop songs with driving beats. It was a time when samples were the big thing (particularly percussive loops), so how do you marry these with vocals and voice elements and still have them cut through on air with the same clarity and volume as the songs?
It was a big ask when you looked at the kind of budgets these artists had to record, produce and master their hits. We, on the other hand, had budgets so low I wouldn’t divulge them to my best mate, for fear of being laughed out of town. Sure, new technology helped, but this was before the days of plug-ins and PCs capable of processing everything on-board. Back then, you still had to spend the bucks on reasonable compressors for laying down vocals, decent desks for mixing, and outboard effects that were comparable to the big international studios.
So we spent considerable money on the gear, yet still we had issues getting our productions to push past the on-air processors. It was all very baffling because our productions sounded just as clear and loud off the CD masters as the current hits, but once they smacked into a station’s CRL it was like someone was messing with our volume and EQ.
PROCESSORS HATE EXTREME PEAKS
It was only after producing our first packages for TT FM and Gold 104 in Melbourne that the penny finally dropped. No books or internet searches were coming up with the answers, so I went straight to the most obvious guys who would likely know – the station techs. They all essentially proffered the same information: that on-air processors hate weird EQ peaks, particularly down low and up high, and that it’s important to avoid sub-harmonics at all costs, as they will invariably hit a wall with the processors and push the overall level down.
Radio processors are set up to cater to all kinds of radios, not just those big fat systems with subwoofers you hear bouncing down Lygon Street. So it’s important to filter off the frequencies at both ends of the spectrum – the ones a lot of radios simply can’t reproduce – so you’ll have half a chance of sounding loud and clear on air. Avoid ‘loudness’ EQ curves wherever possible; these do you no favours when on-air processors see them coming – down goes the overall level every time!
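If you want to experiment with this idea outside your DAW, here’s a minimal sketch of that kind of band-limiting in Python using numpy and scipy (neither is mentioned in this article, and the 4th-order Butterworth filters and the 40Hz/16kHz corners are purely illustrative – choose values to suit your material and your market).

```python
# Minimal band-limiting sketch (assumes numpy and scipy are installed).
# The 40Hz and 16kHz corner frequencies are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt

def band_limit(mix, sample_rate, low_cut=40.0, high_cut=16000.0):
    """High-pass below low_cut and low-pass above high_cut (both in Hz)."""
    # The high-pass strips sub-harmonic rumble most radios can't reproduce,
    # so it never gets the chance to drag the on-air level down.
    hp = butter(4, low_cut, btype='highpass', fs=sample_rate, output='sos')
    # The low-pass tames extreme top end before the station's limiter sees it.
    lp = butter(4, high_cut, btype='lowpass', fs=sample_rate, output='sos')
    return sosfilt(lp, sosfilt(hp, mix))

# Quick test on a second of dummy audio at 44.1k
sr = 44100
mix = np.random.randn(sr) * 0.1
filtered = band_limit(mix, sr)
```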
I kept asking the obvious question: ‘But how can this be, when on-air processors are multi-band? Surely these would allow for peaks at each end of the audio spectrum without affecting the overall level of the production?’ The answer I got was: ‘Sure, but only to a degree.’ Most processors also do some overall levelling that isn’t as frequency-split as you might expect.
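To see why the band split doesn’t fully protect you, here’s a rough illustration (again in Python with numpy, and with entirely made-up numbers): a simple wideband levelling stage reacts to the total energy of the mix, so extra sub energy pulls the gain down even though listeners on small radios never hear that sub.

```python
# Illustration of a wideband levelling stage reacting to total energy.
# Pure numpy; the target level and signal amplitudes are made up.
import numpy as np

def wideband_gain(mix, target_rms=0.1):
    """Single gain factor a simple levelling stage might apply to the whole mix."""
    rms = np.sqrt(np.mean(mix ** 2))
    return min(1.0, target_rms / rms)   # this stage only ever turns things down

sr = 44100
t = np.arange(sr) / sr
midrange = 0.10 * np.sin(2 * np.pi * 1000 * t)   # the part listeners hear as 'loud'
sub = 0.20 * np.sin(2 * np.pi * 35 * t)          # sub-harmonics most radios can't reproduce

print(wideband_gain(midrange))        # 1.0  -- no gain reduction
print(wideband_gain(midrange + sub))  # ~0.63 -- the sub drags the whole mix down
```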
LISTEN LIKE A PUNTER
Almost every audio engineer I know has a set of the worst speakers you could find on eBay to reference everything on, but do we all actually use them every time we mix something down? They mightn’t be the best-sounding speakers in your studio, but those ghetto blaster-style boxes should be used all the time, not just occasionally. Auratones (or, as many producers call them, ‘Horrortones’) were built for this exact purpose many decades ago, and they still do the job well – not because they sound good, but because they represent the lowest common denominator. They’re good at revealing the way most radio listeners will ultimately hear your stuff. If you don’t have a set of Auratones – or similar speakers – go get some, or at the very least, listen to your finished ‘product’ at the lowest level you can still hear it at before you hit ‘Render’. You want to be able to hear absolutely everything at very low volumes. If the voice-over disappears beneath the music or effects, crank it back up just enough to hear everything around it. If certain frequencies seem to be jumping out, wind them back (rather than winding the others up).
USE GROUPS OR STEMS
Using groups is vital to any good production if you hope to beat the on-air processors. Different programs have different names for them; you may have heard music engineers referring to them as ‘stems’ or ‘sub-groups’. At Jingle House (where I work) we use Steinberg’s Cubase SX and Nuendo, and both use groups to let you sub-mix, or create stem mixes. Groups may not be something you’ve ever considered using on basic commercial production, but mixing with sub-groups achieves a whole lot more than just saving on processing power.
One of the radio techs who hammered the aforementioned EQ peak issue into my brain also suggested group mixing, and even mastering stems separately, as a way of avoiding the adverse effects of on-air limiters. This was the best advice we ever got in our battle to beat the on-air processors. And once we started doing it, it seemed so obvious. How did we miss this?
Percussion is the biggest enemy of the on-air processor and is notorious for dropping your overall on-air level. It also causes the most listener fatigue when the final stereo file is brick-walled too hard (in the hope of lifting its perceived volume). But it doesn’t have to be this way. Don’t let the snare or kick drum in your production dip everything else every time the beats smack the compressor/limiter – particularly the snare, as parts of its frequency range typically sit right where the voice-over or vocal lives.
SUB-GROUP ELEMENTS
When mixing music, I typically break my groups down into a variety of instruments, vocals and effects, depending on the music style. At the very least I will always separate the percussion from the rest of the audio, as this is the big offender and needs to be treated separately. In commercial and imaging production you’re probably using loops and don’t need to treat individual drum parts, but it’s still worth adding a limiter to your loops to gain extra smack from them, rather than limiting the final mix to achieve the same end result. There are some incredible multi-band mastering plug-ins out there, but unless you’ve pre-mastered the groups, chances are you’re handing the on-air processors something they’ll reprocess the same way a second time, and that just creates an over-compressed wall of noise. If you want a brickwalled sound, you’re better off creating individual brickwalled groups (percussion, all other instruments, effects, and VO or vocal stems), then bringing them together at the end of the chain with as little added limiting as possible. Leave that to the on-air processors. That’s why they’re there!
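As a rough illustration of that workflow, here’s a sketch in Python with numpy (again, not a tool mentioned anywhere in this article); the hard-clip ‘limiter’ and the per-group ceilings are stand-ins for whatever limiter plug-ins and settings you’d actually reach for.

```python
# Sketch of per-group limiting followed by a nearly untouched final bus.
# Pure numpy; the hard-clip 'limiter' and the ceilings are illustrative only.
import numpy as np

def brickwall(stem, ceiling):
    """Crude brick-wall limiter: hard-clip the stem at the given ceiling."""
    return np.clip(stem, -ceiling, ceiling)

def mix_groups(percussion, instruments, effects, vocals):
    # Limit each group on its own, so a kick or snare peak never drags
    # the voice-over down with it.
    groups = [
        brickwall(percussion, 0.5),
        brickwall(instruments, 0.6),
        brickwall(effects, 0.4),
        brickwall(vocals, 0.7),
    ]
    mix = np.sum(groups, axis=0)
    # Keep the final bus nearly untouched: just a safety ceiling here,
    # leaving the heavy lifting to the station's on-air processor.
    return np.clip(mix, -1.0, 1.0)
```

The point isn’t the maths, it’s the routing: each stem hits its own limiter, and the summed mix is left largely alone.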