Blog

String Quartet with ‘Realistic’ Virtual Instruments

I recently revised a string quartet I wrote at the very beginning of my career, 18 years ago! It is a three-movement quartet inspired by impressionism, written while I was studying Debussy, Ravel, etc. It's simple, nothing innovative, but I like some of its harmonic ideas and it has a nice coherence and flow. The revision fixed many mistakes and added a bit more depth. Currently, I have no opportunities to have my music performed, but I still wanted to be able to listen to this quartet. I was dead curious to know how it really sounded and to get as close as possible to the experience of listening to a live quartet. I have no financial means to pay musicians to record it, so could technology help me?

Luckily, I had the chance to try out two sample libraries: Berlin Strings First Chairs and Cinematic Studio Solo Strings. Both provide samples and control of the four quartet instruments (violin I, violin II, viola and cello), with a variety of techniques (tremolo, staccato, harmonics, etc.). They can be used with Kontakt and run straight from Finale (and probably Sibelius too), so you can listen to what you are composing. Don't expect to explore too many techniques, though, and for this reason many contemporary composers will not be able to fully express themselves through these instruments. But if you play with them first, understand their limitations and then decide to compose, they can be brilliant tools.

Both are of great quality and quite similar, though they differ in the variety of techniques offered. Be aware that they do not always articulate realistically, and you can still hear that robotized "MIDI" sound. But sometimes they do produce a realistic performance.

I made a comparison video between these two libraries:

For my quartet, I only used Berlin Strings First Chairs:

1st movement – PDF score

2nd movement – PDF score

3rd movement – PDF score

Audio only:

1st movement
2nd movement
3rd movement

Spatial & Imagined Soundscapes

On 23 June 2022, I led a workshop with Prof Jacqueline Waldock at Tate Liverpool called 'Imagined Soundscapes'. The idea was to respond to the Radical Landscapes exhibition taking place at the time. The exhibition presented British landscape art, mostly paintings and photography, from a variety of periods.

This workshop seemed like a great opportunity to test Schaeffer XXI, a multichannel sound interface I have been designing through Sonalux, which we hope to commercialise soon. The interface offers many audio input options (8 channels as a USB interface, 8 channels from an SD card and 4 audio line inputs) and 8 outputs. It also features an LED matrix with joystick control to easily distribute sound, and it comes with a sequencer that responds to tap tempo. The name was chosen in homage to Pierre Schaeffer, who worked on the design of the first sound spatialiser, the potentiomètre d'espace, in 1951.

Schaeffer XXI – 8ch Sound Spatialiser (multichannel mixer)

The idea of the workshop was to show the participants an image from the exhibition and allow them to choose matching sounds from a sound 'menu'. In addition, they could choose the levels and the direction the sound should come from. For this, we set up 8 speakers around the room (octaphonic), connected to Schaeffer XXI acting as a USB interface and to a Max MSP patch I designed for the occasion.

Max interface showing exhibition photo (by Chris Killip), soundscape library and 8 individual outputs with some additional control.

Sounds could be chosen from three categories: anthropophonic, geophonic and biophonic. I built a sound library from personal recordings, in addition to sounds from the BBC archive and Freesound.org. There were many options of birdsong and seascapes; I even included watermelon-eating sounds! Sounds could also be played at different speeds, reversed, filtered and pitch-shifted, which allowed more imaginative responses when we presented abstract landscape paintings.

Imagined Soundscapes Menu

It was interesting to see the ideas that people came up with. For example, when we started using the sequencer and had sound rotating between all speakers, someone asked for a cat meowing sound. This imagined sound was soon being ‘chased’ by a dog, also sequenced through the speakers. Besides many funny ideas, I was intrigued by the potential of creating multichannel soundscapes. The seascape, for example, becomes very immersive when you have the wind behind you, water in front of you, birds flying on the sides, etc. I am sure there is much more to be explored with this setup and I hope to be running more workshops like this soon.
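The rotation effect described above can be thought of as pair-wise constant-power panning over a ring of speakers. The Python sketch below illustrates that general idea only; it is not the actual Max patch used at the workshop:

```python
import math

NUM_SPEAKERS = 8  # speakers evenly spaced on a ring, speaker 0 at 0 degrees

def ring_pan(azimuth_deg):
    """Per-speaker gains for a source at the given azimuth.

    Constant-power crossfade between the two nearest speakers on the
    ring; every other speaker gets zero gain.
    """
    spacing = 360.0 / NUM_SPEAKERS
    pos = (azimuth_deg % 360.0) / spacing       # position in "speaker units"
    lo = int(pos) % NUM_SPEAKERS                # speaker just behind the source
    hi = (lo + 1) % NUM_SPEAKERS                # speaker just ahead of it
    frac = pos - int(pos)                       # 0..1 between the pair
    gains = [0.0] * NUM_SPEAKERS
    gains[lo] = math.cos(frac * math.pi / 2)    # equal-power law
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains

# Rotating a sound around the room is then just a ramp on the azimuth:
for step in range(NUM_SPEAKERS):
    gains = ring_pan(step * 45.0)   # each step lands exactly on one speaker
```

Because the two gains follow a cosine/sine pair, the total power stays constant as the sound travels, which is what keeps the rotation smooth to the ear.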

Workshop room
Participants at the Workshop

Stage Lighting Design with Max MSP, EOS and Capture

I have recently worked on two interactive stage lighting design projects for concert music. Whereas stage lighting is well explored in popular music, it is still not very common in concert music. Of course, a light show must convey some meaning, and adding flashing lights to concert music can be out of context, especially in music composed before the 20th century. However, on some occasions a light show can add to the concept of the music and make it multisensory, not to mention that new music can be composed specifically to be performed along with lights (see my compositions Future Lights and Intersidereal).

In these two recent projects (March and April 2022), I worked with the music of Scriabin, the first composer to notate light colours in a score (Prometheus), and the music of Messiaen, who also explored ideas of synesthesia and provided annotations related to colours in his scores. Both concerts were held at the Tung Auditorium, a new venue in Liverpool hosted by the University of Liverpool. Initially, I thought it would be easy to explore its lighting console, the Ion XE 20, and fixtures (16x D60XTI and 4x SolaFrame Theatre, plus many house lights). However, the venue launched with a busy schedule, hosting not only performances but also lectures and graduation events. There was little time reserved for me to connect equipment and test, so designing anything on the spot was impossible. My only solution was to create a 3D simulation system, built so that the same interface could later be connected directly to the auditorium's console.

A nice free option for simulation is the student version of Capture. I have used it for many of my virtual performances, including a 3D virtual stage I created to be used with chroma key, multiple cameras and other special effects for the online performance of Intersidereal. This time, though, I used it only for simulating the stage and designing the light shows. If you know the basics of Capture, it is not hard to recreate a stage. My only references were photos of the Tung Auditorium, so the model wasn't super precise, but it was enough for the job.

Tung Auditorium 3D simulation on Capture

Connectivity and Mapping Fixtures

The Max interface connects to the lighting console and to Capture.

The way I control Capture is through Cycling '74's Max (Max MSP), via Art-Net, using the imp.artnet.controller object (a Max external). To make this work, Capture's Art-Net IP (Option/Connectivity) must match the IP set in the Max object. This is usually the address of the WiFi network you are using, but if there is no WiFi router (as happened to me during a rehearsal), it is also possible to use an Npcap Loopback Adapter (on Windows).
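Inside Max, imp.artnet.controller takes care of the wire format. For readers curious what actually travels over the network, an ArtDmx packet is simple enough to build by hand; here is a rough Python sketch based on the published Art-Net packet layout (the node IP in the comment is a placeholder):

```python
def artdmx_packet(universe, channels, sequence=0):
    """Build a minimal ArtDmx (Art-Net DMX) UDP payload.

    `channels` is a list of 0-255 DMX values for channels 1..n.
    """
    data = bytes(channels)
    if len(data) % 2:                          # DMX payload length must be even
        data += b"\x00"
    packet = bytearray(b"Art-Net\x00")         # 8-byte protocol ID
    packet += (0x5000).to_bytes(2, "little")   # OpCode: ArtDmx
    packet += (14).to_bytes(2, "big")          # protocol version 14
    packet.append(sequence)                    # sequence number (0 = disabled)
    packet.append(0)                           # physical input port
    packet += universe.to_bytes(2, "little")   # SubUni + Net bytes
    packet += len(data).to_bytes(2, "big")     # channel count
    packet += data
    return bytes(packet)

# Sending it is one UDP datagram to the node's IP on port 6454, e.g.:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(artdmx_packet(0, [255, 0, 128]), ("192.168.1.10", 6454))
```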

The light show cues were all programmed in Max, sometimes receiving serial data (e.g. from an Arduino sensor) and MIDI (e.g. from a keyboard controller). JavaScript within the js object was also used, especially for mapping fixtures and creating for loops, which allowed the quick implementation of light chasers and similar effects.

The ETC Ion XE 20 lighting console runs the EOS system. I connected Max to the console via OSC (over an Ethernet cable), using the Max object udpsend and sending messages like /eos/chan/1/param/blue 31. Note that the ping test message didn't seem to work, and a successful connection sometimes required restarting the console (I posted the settings I used on the Max forum).
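udpsend handles the OSC encoding inside Max. For readers who want to see what such a message looks like on the wire, here is a minimal Python sketch, assuming a single int32 argument (the console IP and port in the comment are placeholders):

```python
def osc_message(address, value):
    """Encode a one-integer OSC message: address + ",i" typetag + int32."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",i") + value.to_bytes(4, "big")

# e.g. set channel 1's blue parameter to 31 on the EOS console:
msg = osc_message("/eos/chan/1/param/blue", 31)
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("10.0.0.2", 8000))
```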

Max MSP sending messages to Capture (via Art-Net) and to EOS (via OSC), in addition to Javascript mapping systems.

The free student version of Capture doesn't provide many choices of light fixtures, so the Tung Auditorium's fixtures were not available for simulation. Therefore, to create a system that drove the EOS (performance console) and Capture (3D simulation) simultaneously, I had to do some mapping via JavaScript. The fixtures I used in Capture were the Idea Color Changer 575 and the ColorWash 1200E AT. The house lights were not controlled in the performance and therefore not mapped, but in Capture I modelled them with Par 64s. In summary:

D60XTI (Auditorium Fixtures) > Idea Color Changer 575 (Capture 3D simulation fixtures)
SolaFrame Theatre (Auditorium Fixtures) > ColorWash 1200E AT (Capture 3D simulation fixtures)

Mapping was needed because different fixtures use different channels (for example, the red colour of fixture 1 is on channel 3 in Capture, while the corresponding OSC message for the EOS would be /eos/chan/1/param/red 31). In addition to channel mapping, values also needed to be mapped: Capture receives values from 0 to 255, whereas the EOS receives 0 to 100. Conversions from RGB to CMY were also required, as Capture's fixtures were all CMY while the D60XTI on the EOS received RGB.
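Both conversions are one-liners. A small Python sketch of the value mapping (my original was JavaScript inside Max; the rounding behaviour shown here is an assumption):

```python
def capture_to_eos(v):
    """Scale an Art-Net/Capture value (0-255) to an EOS level (0-100)."""
    return round(v * 100 / 255)

def rgb_to_cmy(r, g, b):
    """Convert 0-255 RGB to 0-255 CMY (simple complement, no black channel)."""
    return 255 - r, 255 - g, 255 - b

# Example: full red for the EOS fixture, and its CMY equivalent for Capture
level = capture_to_eos(255)       # 100
cmy = rgb_to_cmy(255, 0, 0)       # (0, 255, 255)
```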

Control and Cues

For the Scriabin light show, a simple system of cues was set up, as most changes were controlled by a MIDI keyboard operating as a light organ, in addition to real-time light effects triggered from the audio input (a microphone). The light organ was polyphonic, meaning it could control up to 6 different colours (coming from lights in different places) at once. Aside from the stage lighting, it also controlled the Insekta sculptures.
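Conceptually, a polyphonic light organ is a small voice allocator: each held key claims a free colour group, and releasing the key frees it again. A rough Python sketch of the idea (the colour names below are placeholders, not the actual mapping used in the concert):

```python
# Placeholder palette; the real mapping followed the analysis system
# described in the Concepts section below.
COLOURS = ["red", "orange", "yellow", "green", "blue", "violet"]

class LightOrgan:
    """Track held MIDI notes and light one colour group per note (max 6)."""

    def __init__(self):
        self.held = {}          # note number -> colour group index

    def note_on(self, note):
        if len(self.held) < len(COLOURS):
            used = set(self.held.values())
            # claim the lowest-numbered free colour group
            self.held[note] = min(i for i in range(len(COLOURS))
                                  if i not in used)

    def note_off(self, note):
        self.held.pop(note, None)   # release the group, if the note was held

    def active_colours(self):
        return [COLOURS[i] for i in sorted(self.held.values())]
```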

Max interface used in Scriabin’s Sonata No. 10.

There were 3 pieces to design for the Messiaen concert, and each required many annotated cues. Score images with cues were prepared in Photoshop, then displayed in Max via the fpic object. The preset object was used to store colours and fixture positions, while the radiogroup object indicated cues for triggering a variety of effects. The keyboard commands were z/x to change score pages, left/right arrows to change cues, and the space bar for other interactive commands.
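The cue navigation itself amounts to a clamped index over the cue list, stepped by the arrow keys. A minimal sketch of the idea (outside Max, purely illustrative):

```python
class CueList:
    """Step through numbered cues with next/previous, clamped at the ends."""

    def __init__(self, cues):
        self.cues = cues
        self.index = 0

    def next(self):
        # stay on the last cue rather than wrapping around mid-performance
        self.index = min(self.index + 1, len(self.cues) - 1)
        return self.cues[self.index]

    def prev(self):
        self.index = max(self.index - 1, 0)
        return self.cues[self.index]
```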

Max interface for following the score and triggering cues.

There are many more details I could mention, but I think this explains the basics of the technology used in these two concerts.

Concepts

Scriabin's colours were translated through an analysis system developed by Professor Kenneth Smith, and the sculptures represented insects, as the Sonata No. 10 is known as the 'Insect Sonata'.

Messiaen's pieces were also analysed, following the colour indications annotated in the scores along with other ideas related to birds (the theme of the pieces). For example, in Abîme des Oiseaux, lights on the stage floor represented feathers. In Le Merle Noir, moving heads suggested birds moving around the stage, and in Le Merle Bleu, blue and green tones were used to represent the sea, with distinct colours assigned to every harmonic change.

This video shows a rehearsal simulation of Messiaen’s pieces on Capture:

This video shows excerpts of the performance:

You can watch the full performances on YouTube:

Olivier Messiaen, Birds and Colours – Performing Nature
Le Merle Noir, Messiaen, for Flute and Piano
Le Merle Bleu, Messiaen, for Piano
Abîme des Oiseaux, Messiaen, for Clarinet

James Kreiling plays Late Scriabin
Sonata n.10
Click here for more information about the Insekta sculptures used in this concert.




Layers by Orchestral Tools

I recently learned about Orchestral Tools and their collections of virtual instruments, and was amazed by the pristine quality of their samples. There are many new things on the mockup market, but it is still a challenge to make things sound realistic. Normally, if you are building a virtual full orchestra, the moment you start mixing all the different instruments the result stops sounding like a real orchestral texture. That is because the samples were all recorded separately and don't blend in time and space as they should. However, if you use recorded chords, you can get a more realistic texture, and that is the case with Layers.

Layers – Orchestral Tools

Layers is a free pack by Orchestral Tools that runs in their Sine Player plugin. You can select full orchestra, woodwinds, brass and strings.

I was very impressed to see that this was free, and I was expecting some limitations. Indeed, you can only have 3 types of chords (major, minor, sus4). Why the heck did they leave out the diminished and augmented chords?!

Anyway, it can still be useful for some small jobs or even for compositional experiments, so I thought I would have a go at making something with it. However, another thing annoyed me: Layers doesn't provide the notated pitches of the chords. For a composer, that's the most important thing! How could I notate my experiments with these materials?

To sort this out, I used some spectrum analysis software (and my ear), and here are all the C chord voicings:

Layers by Orchestral Tools – notated C chord voicings

If you play C2, you get the first chord; C3, the next, and so on. There are 6 voicings/inversions for each chord, and this applies to all ensembles. If you choose strings, it is quite clear which notes the instruments are playing. But if you choose the full orchestra or another ensemble, changing inversions makes some instruments appear in certain voicings while others disappear. So if I wanted to notate every single instrument used, it would not be easy to make the score accurate.
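For readers curious how such a transcription could be automated, here is a rough sketch that probes a recording's spectrum at each semitone frequency and keeps the strongest candidates. It is not the software I used, and real Layers samples have rich harmonics that would need more care than these pure test sines:

```python
import math

SR = 8000  # sample rate in Hz for this toy example

def note_freq(midi):
    """Equal-temperament frequency of a MIDI note (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi - 69) / 12)

def magnitude_at(samples, freq):
    """Magnitude of one probe frequency in a block of samples (naive DFT)."""
    w = 2 * math.pi * freq / SR
    re = sum(s * math.cos(w * n) for n, s in enumerate(samples))
    im = sum(s * math.sin(w * n) for n, s in enumerate(samples))
    return math.hypot(re, im)

def detect_notes(samples, low=48, high=84, count=3):
    """Return the `count` MIDI notes with the strongest spectral energy."""
    mags = {m: magnitude_at(samples, note_freq(m)) for m in range(low, high)}
    return sorted(sorted(mags, key=mags.get, reverse=True)[:count])

# Synthesize one second of a C major triad (C4, E4, G4) as pure sines:
chord = [60, 64, 67]
samples = [sum(math.sin(2 * math.pi * note_freq(m) * n / SR) for m in chord)
           for n in range(SR)]
# detect_notes(samples) recovers [60, 64, 67]
```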

I wrote a short compositional cliché to see how this could be notated. Here is the score and audio:

Making this sort of score match the audio (produced in Reaper) was quite time-consuming, because I had to go through the 'chord codes' (C3 = chord 1, C4 = chord 2, etc.) and transpose them to the root notes I needed. I wouldn't write a full piece through this method. Still, considering the high quality of the recordings, I think it is worth having Layers in my set of production tools. Not to mention that the VST works fine with Max MSP, which means it is possible to make something more interactive with orchestral sounds (or even create a better method for automatically transcribing the voicings to MIDI).
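The transposition step itself can be sketched as a small helper. Note that octave numbering varies between DAWs, so the base note below (voicing 1 of a C chord on MIDI note 36) is an assumption for illustration only:

```python
VOICING_BASE = 36   # MIDI note assumed to trigger voicing 1 of a C chord
NUM_VOICINGS = 6    # Layers offers 6 voicings/inversions per chord

PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def trigger_note(root, voicing):
    """MIDI note that triggers the given chord root and voicing (1-6)."""
    if not 1 <= voicing <= NUM_VOICINGS:
        raise ValueError("Layers provides 6 voicings per chord")
    # each higher octave selects the next voicing; transposing within
    # the octave moves the chord root
    return VOICING_BASE + 12 * (voicing - 1) + PITCH_CLASSES[root]

# trigger_note("C", 1) -> 36, trigger_note("G", 2) -> 55
```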

Another thing worth mentioning: dynamics can be controlled via the Sine Player. However, I couldn't find any instructions or figure out how it works. It is super weird that there are buttons for piano and forte, but both can be selected at once. Does that make it mf? No. I wish they would explain this better, or add a mouse-over popup with instructions; the design is not very intuitive.

Despite all the problems, I still felt tempted to explore more of their products, such as the Berlin Symphonic Strings pack. The quality seems supreme and it is not limited to chords, meaning you can have full lines of violins, etc. The only issue is the price: 549 euros. How can a composer afford that?! And that is one of their cheapest options, including only strings. Unless you are a well-paid Hollywood composer, I don't think this is viable, especially for those like me who mainly compose contemporary music.

Overall, I hope that in future they will provide score sheets, explain how the dynamics work and release more free packs.

How to record audio with binaural mics (Roland CS-10EM) using a PCM recorder (Tascam DR-40X)

Hi, spatial visitors! I'm reporting today on the hassle I had when trying to record with the Roland CS-10EM microphones using the Tascam DR-40X PCM (WAV) recorder. Maybe this will be useful for someone on a similar quest.

CS-10EM and Tascam DR-40X

For some years I have had this amazing Roland CS-10EM, a binaural microphone that records sound directly from your ears, and I find it very useful for recording the sound of multichannel installations and soundscapes. Obviously, as everyone's head is different (HRTF), the accuracy of binaural reproduction will change from person to person, but it is still quite good at recreating the stereo field, and the spatial reproduction back to my own ears is perfect, so I think it is quite a special microphone to have.

The way I used to do it before was with a simple Sony ICD-PX333 audio recorder. It worked fine, but there was no input volume control and the format was MP3 at 192 kbps. As I was recently working on a more professional video recording of a multichannel installation, I decided this had to be done in PCM WAV format. I wanted to use the Tascam DR-40X, as this is my main recorder for going out in nature and doing field recording. It allows 2 external inputs, so "why not?", I thought!

The CS-10EM is a microphone that takes "plug-in power", similar to lavalier or electret mics. These mics usually require a low voltage to function, and plug-in power, also known as "bias voltage", is usually provided by devices in a range of 1.5-9 V.

The Tascam DR-40X does not provide this kind of voltage to power external microphones. Instead, it provides 48 V (or 24 V) phantom power, which is the standard for condenser mics. They probably could not fit more circuitry into the DR-40X design, and since they didn't expect people to use it with lavaliers or this binaural mic, they simply left the feature out. Other Tascam models also lack plug-in power, except for the DR-10 line, which is designed specifically for lavaliers.

Tascam DR-10

My plan wasn't to buy another portable recorder (and the DR-10 line seems overpriced). I prefer carrying a single portable recorder when possible and adding external microphones as needed. So what could be done?

One possibility was to design a circuit to convert the 48 V/24 V phantom power to something between 2 and 10 V (as the CS-10EM manual specifies). Googling a bit, I found someone looking for the same application on the Arduino forum. There I saw that using a step-down voltage regulator might not work (higher-voltage regulators seem to require more current than phantom power can provide). A schematic using a transformer was posted, but I didn't have time to build the circuit (plus adapt it to stereo) and test it.

Another discussion, on StackExchange, suggested that a simple voltage divider could make it work. This would require testing, soldering connectors and making some sort of adapter. I wasn't convinced it would provide a clean line output, and considering I had only a few days and was super busy with other projects, I looked for another, quicker solution.
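For reference, the unloaded maths of such a divider is trivial. The catch, and part of why I wasn't convinced, is that phantom power is fed through internal resistors (typically 6.8 kΩ) and the mic draws current, so the real, loaded voltage sags below this figure. A quick sketch with illustrative resistor values:

```python
def divider_out(vin, r1, r2):
    """Unloaded output of a resistive divider: vin across r1+r2, tap at r2."""
    return vin * r2 / (r1 + r2)

# e.g. dropping 48 V phantom toward the CS-10EM's 2-10 V plug-in-power
# range; values are illustrative, and the loaded voltage would be lower
v = divider_out(48.0, 82_000, 10_000)   # ~5.2 V unloaded
```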

I found the RØDE VXLR+ adapter, which converts 48 V phantom power to 5 V plug-in power. I don't know what type of circuit is inside, but it's RØDE, so I imagine it is well made. It is a bit overpriced (£21 on Amazon UK in 09/2021), but it seemed convenient. The catch is that I had to buy 2 of them to connect the 2 channels. The XLR end fits the Tascam fine, but joining the L and R lines with the CS-10EM required a splitter/adapter cable. More specifically: a Y splitter cable, 3.5 mm mono (L and R) male to 3.5 mm stereo female.

VXLR+

So I wasted a few more hours looking for this specific connector. I found one on eBay UK, but it was 1.5 metres long! That's bad for the sound quality, and more wiring in the way. At home I had a more conventional Y splitter cable, but it was 6.35 mm, requiring more adapters, which compromises the sound quality and makes everything bulkier. I thought about soldering my own cable, but I don't like working with 3.5 mm sockets and my time was short. Finally, I found the right splitter on eBay from a shop in Germany (BestPlug). A bit expensive (12.88€ / £11 with shipping and VAT), but it looked good quality and was only 15 cm long, so I went for it.

Y Splitter 15cm (BestPlug, Germany, eBay)

Now I can finally record with those binaural mics on the Tascam, phew! This upgrade cost me about £55, and I know I could have bought another simple PCM recorder for less, like a Philips DVT1110 (£35), but as I said, I prefer to carry a single device for practical reasons. It seems a bit silly to use 2 adapters to step down the voltage when a single one would be more power-efficient, but for now this has been my best option.

Finally, the results of the 2x VXLR+ and the Y splitter are great. The sound quality seems perfect, with no issues to note. If you have any better (or cheaper) ideas for connecting this Tascam to the CS-10EM, please leave a comment! 🙂

Tascam DR-40X > 2x VXLR+ > 3.5mm Y Splitter > CS-10EM