
A Whole New Way of Werking

Kraftwerk has been gigging in surround for decades. But never anything like this.

2 July 2013

This isn’t so much a story about a Kraftwerk gig, as it is a story about an entirely new way of mixing.

Love ’em or hate ’em; whether or not you care if Ralf and Florian have broken up; whether you thought Tour de France was better than their ’70s output: none of it matters. Kraftwerk are, in some ways, simply the hosts… the carriers… the ones with the nerve and the music to give this new way of presenting live sound a red hot go.

Back in Issue 89, Robert Clark wrote a rather excellent article for AT describing how a new immersive surround sound processor called Iosono was used at the Sydney Opera House to virtually replicate the sound of a huge ensemble in an orchestra pit… when the musicians were actually playing in another room of the building.

Iosono used a process licensed from the Fraunhofer Institute called wave-field synthesis (WFS) to take a whole bunch of sources (56 microphones in the case of the Die Tote Stadt opera) and then route them, after passing through its algorithms, to a whole bunch of speakers to produce a stunningly authentic replication of how the orchestra would sound if it was actually there.

It was an awesomely clever trick. But in one respect, a tad regressive. A little like those ‘real dolls’ Clive James would carry on about. Why spend so much time and energy on attempting to replicate the look and feel of a real human, when an actual human is superior in every regard?

The Die Tote Stadt opera [and I urge you to go and have a look at the Issue 89 story again] had a perfectly legitimate excuse: they couldn’t shoe-horn the outsized orchestra required for the performance into the Joan Sutherland Theatre pit. But it did pose the question: what could you do with the Iosono system if your aim wasn’t to recreate an acoustic space? What if you could let the imagination run wild and, erm… let rip?

OUT OF STEP WITH LEFT/RIGHT 

Well, it turns out that d&b’s Ralf Zuleeg is way ahead of us. Ralf (d&b’s Head of Education and Application Support) has been an enthusiastic advocate of wave-field synthesis, and for some time has been exploring the rock ’n’ roll possibilities of its application.

Ralf Zuleeg: “System design with a classic left/right PA has gone as far as it can. It’s very good compared to what we had even 10 years ago, but sound engineers are becoming frustrated with its limitations. The dilemma is this: the sound engineer can only mix to stereo, and move the musical ingredients forward and backward, and side to side.”

Ralf enlisted the cooperation of a nightclub in Stuttgart as a crash test dummy, fitting it out with the surround speaker system and Iosono brain, and inviting bands in to see and hear what it might be like to mix their live sound in a totally different way.

One of those bands was Kraftwerk.

With the Stuttgart club as a testing ground, Kraftwerk’s FOH engineer, Stefan ‘Serge’ Graefe, immediately grasped the possibilities. Using an iPad interface, Serge could grab a source and move it around the space, or introduce reverb through the system in a dynamic manner, rolling it up and down the hall. As Serge put it: “This is not surround sound; it’s something far more sophisticated that immerses the audience within the performance.”

Subsequently, Kraftwerk has been working its way around the globe performing a show that combines 3D images with immersive surround sound. It’s a heady blend of art and music that’s just as popular with festival goers as it is with chardonnay sippers.

Which all gives you some context as to what was recently achieved at the Sydney Opera House, where Kraftwerk played to packed houses during the Vivid Festival. But spare a thought for the Sydney Opera House tech staff, Jeremy Christian and Rich Fenton, who were a little blind-sided by an initial Kraftwerk technical rider that vaguely referred to a ‘surround system’. Suffice it to say, they had little idea of the rabbit hole they were about to jump down.

A call was put through to d&b’s Australian reps, NAS, who came to the party in a big way. As Head of Sound AV Services Jeremy Christian puts it: “That’s when the craziness started.”

FAT LADY SINGS (WITH PA)

The Sydney Opera House’s Joan Sutherland Theatre is predominantly used for opera and ballet. The theatre packs a ‘vocal’ PA comprising a Meyer Sound M1D proscenium array. It was about to have an extreme PA makeover.

After a CAD file of the room was sent to Ralf Zuleeg, a system design was devised comprising left/right arrays of d&b V Series on either side of the stage, with no fewer than four additional T Series arrays in between, forming a formidable curtain of PA above the stage. At stage level, Q10s matched the array positions above as ground fill. From there a d&b T10 was positioned every three metres around the theatre, 24 in all. Four additional T10s were employed as extra fill, amounting to 28 individual sends of T boxes in the venue.

No, the theatre isn’t built for this type of retrofit. Jeremy describes how finding speaker points for the stalls wasn’t too difficult but the dress circle was more of a challenge: “We had to pull lights out of the ceiling, drop steel rope through the cavity and hang speakers from the roof. It was ‘interesting’, shall we say.”

Seven V Subs were also ground-stacked under the Kraftwerk riser, and additional d&b InfraSubs were positioned left and right of stage. It’s a low-profile design masterminded by Production Manager, Winfried Blank.

MUSIQUE NON STOP

So that’s the PA. Now it was time for the band to load in. Let’s run through the setup:

On stage, the four Kraftwerk musicians have computers loaded with virtual synths. The artists control the positions of the sound sources via a massive MIDI network.

Why MIDI? That’s where Felix Einsiedel comes in. Felix was hired as the ‘WFS guy’ and has been working with Ralf Zuleeg extensively on making WFS more than a set ’n’ forget spatial emulator. Felix has programmed a MIDI bridge for the Iosono system so Serge and the musicians are able to control the position and type of each sound source. It’s the vital link in making dynamic mixing in WFS surround a reality.
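To give a flavour of what a bridge like this involves (Felix’s actual code isn’t public, and the controller numbers below are pure assumption), here’s a minimal Python sketch using the mido MIDI library, translating a pair of CC messages into a source’s X/Y position:

```python
# Minimal sketch of a MIDI-to-position bridge. The mapping is a pure
# assumption for illustration: CC 20 carries X and CC 21 carries Y for
# the source on each MIDI channel. This is not Felix's bridge; it only
# shows the idea of turning 7-bit controller data into coordinates.
import mido

positions = {}  # channel -> [x, y], each normalised to -1.0 .. +1.0

def cc_to_axis(value):
    """Map a 7-bit CC value (0-127) onto a -1.0 .. +1.0 axis."""
    return (value / 127.0) * 2.0 - 1.0

with mido.open_input() as port:  # default MIDI input port
    for msg in port:
        if msg.type != 'control_change':
            continue
        xy = positions.setdefault(msg.channel, [0.0, 0.0])
        if msg.control == 20:        # assumed X controller
            xy[0] = cc_to_axis(msg.value)
        elif msg.control == 21:      # assumed Y controller
            xy[1] = cc_to_axis(msg.value)
        else:
            continue
        # The real bridge would hand this position to the Iosono
        # processor; here we simply print it.
        print(f"source on ch {msg.channel}: x={xy[0]:+.2f} y={xy[1]:+.2f}")
```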

In addition to what the musicians are doing from stage, Serge has three possibilities available to him: for some songs the position information is supplied by a timecode-synced computer with Cubase running automation tracks. For the other songs Serge has programmed the position information as MIDI commands into snapshots in the Avid Profile console. Finally, he has two iPads running Lemur software connected to the system to allow him to control any and all positions live, and to monitor the data sent by the musicians.

So what’s it like to ‘pan’ with the iPads? Well, if you were to manipulate the GUI you’d notice the multiple sound sources are all distributed well outside the ‘four walls’ of the theatre. In effect, you’re panning around ‘virtual’ sources set back from the speakers. Felix: “The tighter curve of a closer wavefront is what gives us precise positional information; by pushing a source further out, its wavefront arrives flat, like a planar [distant] wavefront.”
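The geometry behind that remark is easy to sketch. The Python snippet below is purely illustrative (Iosono’s licensed WFS algorithms are far more sophisticated than simple delay-and-sum): each speaker is fed the source delayed by its distance from the virtual position, and the spread of those delays across the array is what flattens out as the source is pushed away:

```python
# Illustration of wavefront curvature: per-speaker delays for a virtual
# source behind the array. A close source gives a strongly curved delay
# profile; a distant one gives a nearly flat (planar) profile. This is
# the principle only, not Iosono's algorithm.
import math

SPEED_OF_SOUND = 343.0  # metres per second

def delays_ms(speaker_xs, source_x, source_y):
    """Delay (ms) per speaker for a virtual source at (source_x, source_y),
    with the speakers lying along the x-axis at y = 0."""
    return [math.hypot(x - source_x, source_y) / SPEED_OF_SOUND * 1000.0
            for x in speaker_xs]

speakers = [i * 3.0 for i in range(8)]   # boxes 3m apart, as in the venue

near = delays_ms(speakers, 10.5, 2.0)    # source 2m behind the array
far  = delays_ms(speakers, 10.5, 50.0)   # same source pushed 50m back

# The absolute numbers matter less than the spread across the array:
print("near:", [f"{d:5.1f}" for d in near])  # ~25ms spread: curved wavefront
print("far: ", [f"{d:5.1f}" for d in far])   # ~3ms spread: nearly planar
```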

The Lemur setup has been heavily customised by Serge; it has been a long process of Serge exploring what Lemur could do and Felix programming the MIDI bridge to make it happen.


MORE THAN ‘SURROUND’

Wave-field synthesis was developed by the Fraunhofer Institute (possibly best known for developing MP3 compression). The Fraunhofer algorithms are licensed to Iosono and run on its IPC100 processor. The key difference: 5.1, 7.1 and the like work best with the listener in the proverbial sweet spot. With WFS, however, every listening position within the audience can be a sweet spot.

IN THE MIX

Mixing with Iosono is clearly a lot of fun. Observers note that Serge took his job of screwing with the audience’s heads very seriously indeed.

AT: It sounds like the possibilities are limitless, but I’m guessing there are some mixing no-nos?

Serge: You can’t have sounds flying around the room all the time, or you’ll lose the impact of the big moves. And there are some things you simply can’t move around the room without losing focus or the mix falling apart. For example, you can’t move percussion around in the general course of a song, as you’ll lose the focus to the stage. Similarly, you can’t have the beat at the back and the bassline at the front. It won’t sync, thanks to the speed of sound through the air; it’ll throw the timing out.
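It’s easy to put rough numbers on Serge’s timing point. Assuming, purely for illustration, a hall 25m deep and a 120bpm tempo (hypothetical figures, not measurements from this tour):

```python
# Back-of-envelope check: how does back-to-front travel time compare
# with a 16th note? Hall depth and tempo are assumed for illustration.
SPEED_OF_SOUND = 343.0   # m/s
hall_depth = 25.0        # metres (assumed)
bpm = 120.0              # tempo (assumed)

travel_ms = hall_depth / SPEED_OF_SOUND * 1000.0
sixteenth_ms = 60_000.0 / bpm / 4.0   # one 16th note in milliseconds

print(f"back-to-front delay: {travel_ms:.0f}ms")           # ~73ms
print(f"16th note at {bpm:.0f}bpm: {sixteenth_ms:.0f}ms")  # 125ms
# ~73ms is over half a 16th note: a beat at the back and a bassline at
# the front would audibly drift apart.
```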

AT: How has your approach to dynamics and effects changed?

Serge: Traditionally I’ve done a lot of bus compression with the Waves L3 Multimaximizer. But now I control the dynamics on a channel-by-channel basis with C6 multiband compression. I’m using many more plug-ins now than I did before.

From an effects perspective, I have a bus out of the Profile for all the effects, which I position in normal stereo, and occasionally I’ll send a reverb to the Upmix as well.

AT: What’s Upmix?

Serge: Upmix is an Iosono function that allows you to route reverb into the processor; Upmix then makes that reverb work in your room, given its acoustic properties and the number of speakers you have in the space. For example, Fritz [Hilpert, of the band] is often sending snare reverbs into the Upmix, and the reverb comes from every speaker in the system. But it’s not just playing back the reverb; there’s an algorithm in the box that’s analysing all the room info (the pre-delay and other parameters, along with timing delays on each speaker) and that controls the imaging in the room. It also means you can move that reverb around the room.
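Iosono hasn’t published how Upmix works, so the sketch below is only an illustration of the per-speaker timing bookkeeping it implies: one reverb return fed to every box, each delayed so the wavefronts coincide at a chosen image point. The geometry and alignment scheme are assumptions, not Iosono’s algorithm:

```python
# Illustrative only: time-align one reverb return across a ring of
# boxes so its image centres on a chosen point. Not Iosono's Upmix.
import math

SPEED_OF_SOUND = 343.0  # m/s

def upmix_delays(speaker_positions, image_point):
    """Delay (ms) per speaker so all arrivals coincide at image_point.
    Speakers nearer the point are delayed more."""
    dists = [math.dist(p, image_point) for p in speaker_positions]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND * 1000.0 for d in dists]

# 24 boxes roughly 3m apart along two long walls, a crude stand-in for
# the surround ring described earlier in the article
ring = [(x, 0.0) for x in range(0, 36, 3)] + [(x, 20.0) for x in range(0, 36, 3)]
print([f"{d:.1f}" for d in upmix_delays(ring, (17.5, 10.0))])
```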

AT: What reverb and delays are you using?

Serge: I’m working with TC’s VSS3 as my reverb. For delays I’m using H-Delay from Waves, or I’m using Native Instruments’ Reaktor.

AT: Reaktor? Okay, this sounds like it could be interesting!

Serge: Right! I’ve programmed an eight-tap delay and every tap has its own signal output to Iosono. I’ve built a control interface in Lemur to decide whether the delay is in 4ths, 8ths or 16ths of a bar via a tap tempo button. My song tempos are always set in the console’s scene automation, but I can switch manually on my interface to decide whether it’s controlled from the desk or via tap speed on the iPad.

I have a ‘multi-ball’ element in the delay, with eight balls, and I control the position of every repeat on the delay. I’ve got high-pass and low-pass filters available for each tap, and I’ve built in feedback of 200–300%.

AT: Sounds insane.

Serge: Well, unfortunately, there’s no song where it fits! I’m using it on Planet of Visions for one move around. But I built it because I can and it’s good to explore what’s possible.
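Serge’s Reaktor ensemble isn’t published, but the tap arithmetic behind it is simple to sketch. The Python below is a hypothetical reconstruction of the timing side only (a 4/4 bar is assumed; the per-tap filters, positions and feedback live in the real patch):

```python
# Tempo-synced tap times for an eight-tap delay, with the division
# selectable between 4ths, 8ths and 16ths of a bar (4/4 assumed).
# Each tap would feed its own Iosono input for positioning.
DIVISIONS = {'4th': 4, '8th': 8, '16th': 16}  # taps per bar

def tap_times_ms(bpm, division='8th', taps=8):
    """Return each tap's delay time in milliseconds."""
    bar_ms = 4 * 60_000.0 / bpm           # one 4/4 bar
    step = bar_ms / DIVISIONS[division]   # one division of that bar
    return [step * (i + 1) for i in range(taps)]

# Eight 8th-note taps at 120bpm: 250ms, 500ms ... 2000ms
print([f"{t:.0f}" for t in tap_times_ms(120, '8th')])
```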

AT: Sounds like the system has got your creative juices flowing?

Serge: It’s another level of creativity; it’s a big playground and a lot of fun to work with.

SOUND TEAM

Kraftwerk Sound Engineer: Stefan ‘Serge’ Graefe
d&b Application Engineer: Ralf Zuleeg
Iosono & Interface Programming: Felix Einsiedel
Sydney Opera House Audio Supervisor: Rich Fenton
Sydney Opera House Production Manager: Chris Burn
Sydney Opera House Head of Sound AV Services: Jeremy Christian
Sydney Opera House Audio Technician: Jan Rosenthal

Hanging with Lemur: Sydney was the first time Serge used his Avid Profile automation to trigger all the start positions for each song. The console snapshot contains all the audio settings for the song (as per usual), then a further three snapshots describe X/Y positions (via MIDI control change messages) and a fourth calls up the correct Lemur interface for the iPads. Serge would probably take the MIDI control further here, but there’s a limit on how many MIDI commands can be saved into one snapshot. Practically, what this means is that when the Next button is hit, the automation automatically links the four additional MIDI snapshots via the Profile’s Events feature.
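As a rough picture of what those position snapshots might carry, here’s a hypothetical encoding in Python (the controller numbers and 7-bit scaling are assumptions, not the actual Profile/Iosono mapping):

```python
# Hypothetical sketch: packing per-song start positions into MIDI CC
# messages. CC 20/21 and the scaling are assumed for illustration.
import mido

CC_X, CC_Y = 20, 21  # assumed controller numbers for X and Y

def position_messages(channel, x, y):
    """Encode a normalised (-1..+1) position as two CC messages."""
    to7bit = lambda v: max(0, min(127, round((v + 1.0) / 2.0 * 127)))
    return [mido.Message('control_change', channel=channel,
                         control=CC_X, value=to7bit(x)),
            mido.Message('control_change', channel=channel,
                         control=CC_Y, value=to7bit(y))]

# Start positions for four sources on MIDI channels 0-3
snapshot = []
for ch, (x, y) in enumerate([(-0.6, 0.2), (-0.2, 0.2), (0.2, 0.2), (0.6, 0.2)]):
    snapshot += position_messages(ch, x, y)
print(len(snapshot), "CC messages in this snapshot")  # 8
```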

IOSONO: AT A PUB NEAR YOU?

AT: Does a system like Iosono have a more mainstream future?

Serge: If you’re talking about regular stereo panning, with Iosono the image is much clearer — because of the discrete nature of the outputs there are no masking effects from the electronic summing on the desk.

AT: Okay, so take a vocal sound as an example: how would Iosono make life easier on a regular gig?

Serge: I put Ralf’s [Hütter] vocals right where he’s standing on stage. But it’s more than that. If I want to localise Ralf precisely to that spot his wavefield position is discretely in the corresponding speaker above his head. But that’s obviously a bit restrictive for a vocal. So I will push his vocal back further and that’s where more speakers get involved…

AT: … closer to what’s referred to as a ‘flat’ wavefront, i.e. from an infinity point?

Serge: Yes, a flat wavefront describes a sound source that is far, far away, with more speakers involved. So for vocals I pick a point source then move it back a little to get more speakers involved.

AT: So you’d suggest a system like Iosono matched with more speaker positions above the stage would have wider application?

Serge: I think so. Having a row of speakers on the front truss, just for stereo panning, would improve many shows. Take theatre as an example. You can’t do much conventional stereo panning in theatres. But with Iosono you could. You have the stereo position, but it’s not just about level; it’s the use of delay that pulls the image to one side or the other.
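That level-plus-delay panning leans on the precedence (Haas) effect: delay one side by a few milliseconds and the image pulls towards the earlier arrival, even at equal levels. A minimal sketch of the idea (the 2ms maximum is a common rule of thumb, not a figure from this rig):

```python
# Delay-based stereo panning via the precedence (Haas) effect: the
# image pulls towards whichever side arrives first, even at equal
# level. The 0-2ms range is a common rule of thumb.
def haas_pan(position, max_delay_ms=2.0):
    """position: -1.0 (hard left) .. +1.0 (hard right).
    Returns (left_delay_ms, right_delay_ms); the far side is delayed."""
    if position >= 0:   # image to the right, so delay the left feed
        return (position * max_delay_ms, 0.0)
    return (0.0, -position * max_delay_ms)

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = haas_pan(p)
    print(f"pos {p:+.1f}: delay L {l:.1f}ms, R {r:.1f}ms")
```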

LEAD MYSELF INTO THE FUTURE?

It’s worth noting that manipulating the panning isn’t totally instantaneous. There’s a 41ms latency as the Iosono brain frantically does the maths. This means panning moves need to be made smoothly and methodically, not like a scratch DJ. Felix Einsiedel points out that 41ms represents around 14m of travel time for sound, which is more than workable in a large performance venue such as the Joan Sutherland Theatre.
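The conversion is simple enough to check:

```python
# Converting the Iosono processor's 41ms latency into the equivalent
# acoustic travel distance:
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
latency = 0.041         # seconds
print(f"{SPEED_OF_SOUND * latency:.1f} m")  # ~14.1m of sound travel
```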

The result is an unnaturally natural immersive sound. Audiences note that the audio is simply ‘there’ without any discernible source. You’re actually in the mix, and surely there can be nothing more exciting for a FOH mix engineer than that.

KRAFTWERK’S TRUST IN QUAD

Long-time technical accomplice and current Production Manager, Winfried Blank, describes Kraftwerk’s fascination with concert surround:

“We started in the late ’80s with a quad sound system, which effectively meant adding two stacks at the rear of a venue for some sound effects. The music of Kraftwerk was perfect because there were many sound samples (think of the car moving off in Autobahn or the bicycle sounds in Tour de France) that benefitted from being panned through the room. It put the audience in the thick of the song.

“Over the years we added more elements to the show including video and now 3D video. The current setup entails four musicians, four keyboards with control equipment and four screens in the back of the stage.

“Of course, Kraftwerk hasn’t been the only band to pioneer concert surround. The Who’s Quadrophenia is one notable early example, while Pink Floyd has also used quad surround speakers for many years.

“In combination with a good sound system and well-designed speaker placement in the room, there’s little doubt this ‘surround sound’ adds a higher quality of emotion and feeling. For example, the sound of a train arriving at the back of the room will grab your attention, and the sounds of pushing up the Pyrenees on a bike will leave you breathing more heavily.”
