AI Aesthetics: Music’s Rebirth



I’ve been meaning to write about this for a while, but I was waiting for things in AI to settle down. Since they never will, here it is…

I see many colleagues lamenting the fact that AI can now generate music. For some, music is dead, and what we see now is the commodification of creativity. Critics also argue that because AI is inhuman, it has no passion and no emotion, and therefore no ‘real’ art is being created. They also say that all AI does is copy what people have done, so everything it generates is a meaningless copy. There is also concern that AI could replace composers and songwriters, making it harder for emerging artists to compete. I’m sure these thoughts will remain for a while, but will the perspective change as we adapt to this musical madness? Or is this, perhaps, a new era of musical Renaissance?

The ontology of music has certainly been impacted by AI. With tools such as machine learning and neural networks, composers and songwriters are now able to explore new sonic landscapes that were previously unimaginable. AI can analyse massive datasets of music to identify patterns and trends, allowing any person (artist or not) to generate original compositions and push the boundaries of traditional music genres.

I will try to break down some of the experiences I had with AI in music so far:

  • As a listener who enjoys experiencing the ‘new’, I can listen to mixed genres that were never explored before (like a samba played with a sitar), lyrics that talk about a particular friend of mine, and fresh new timbre (a celloice, or something between a cello and a voice).
  • As a composer and producer, I can extract generated material and use it in my music. Some may argue they are not my ideas, but if they fit into my aesthetic proposal, why shouldn’t I repurpose them? Many musical ideas we create ourselves are often a reflection of our auditory memory and the patterns we enjoy. Besides, popular music producers often use ‘guide tracks’, and isn’t an AI music generator an excellent tool for creating the references needed?
  • As a guitarist, now I can play along to my favourite backing tracks, designed the way I like them. I can also remove the solo guitars from the famous songs I play, and have backing tracks of the actual recordings.
  • As the designer of Schaeffer XXI, the handheld spatial sound mixer (merchandise alert!), I can now separate my stereo music into eight channels and move the sound around my room as I feel like.

Suno and Udio are probably the most powerful music generation tools at the moment. They are able to generate very complex musical material, with realistic instruments and vocals. The issue is that there isn’t enough control yet; there is no feature for changing the structure, tempo, key, etc. I believe it’s only a matter of months before AI generation becomes some sort of elastic band that you can stretch and pop like popcorn (these are bad metaphors, but you get me!).

Another AI tool I had some fun using recently is Synthesizer V, where you design ‘vocals’ with lyrics. I made a little choir with it. For a composer, listening to what you are writing with realistic voices is something out of this world.

Then you ask me: Will people be listening to AI music?—They already are! And producers are already using it without telling anyone. We are being tricked!!

Don’t get me wrong, I still enjoy listening and composing by traditional means. Writing music on paper feels good, and all the other musical creation processes will still be fun. This is just another method on the list. But I’m also enthusiastic to see this innovation progressing, and to hear new sounds and humanely-inhumane songs.

Finally, I believe this will give reasons for good compositions to be more valued… That’s because the boring stuff is just too easy to make now!

If you enjoyed this article, leave a comment, or get in touch to chat about it! I have many other thoughts on the aesthetics of AI I’d like to discuss. Thank you.

The Three Phases of Pausini’s Production

I am a composer who has recently been experimenting with electronics, lighting and sound art, but across my various articles I enjoy exploring all sorts of musical aesthetics. I will show here that pop music can also be rich in creativity, especially harmonically.

Many classical musicians began their careers playing pop music, just like those who started on a guitar learning chords from the Beatles. In my case, I did not learn much from the Beatles; my interest was in learning Tom Jobim, as well as the music of the Italian singer Laura Pausini, as both were rich in harmony and great for learning jazzy chords and progressions.

Laura Pausini’s band on the 2001 world tour (keyboards, drums, guitar, bass, backing vocals)

Who is Laura Pausini? In the early 1990s, she was a popular phenomenon in Brazil, as well as in many other countries of Latin America and Europe. A good part of the population of the southern and south-eastern states of Brazil (e.g., São Paulo) has Italian ancestry, so listening to Italian words was attractive to the young 90s generation. Her albums were also released in Spanish, and both versions were very well received by Brazilians like me, who bought her albums in both languages. If you are in the UK or US, you may have never heard of Laura Pausini, so for more context, you can read her Wikipedia page.

Laura Pausini in 2001

People who know her music will automatically think of her major romantic song successes, such as Non c’è, Strani Amori, La Solitudine or Incancellabile. But those who followed her career know well that there is a jazzy/bluesy element in her production (check La Voce, Angeli nel Blu, Tutt’al più, for example).

Her early productions included wide dynamics (from pianissimo introductions to fortissimo choirs), well-elaborated harmonic passages, interesting improvisations and a variety of timbres from Laura’s interesting and wide vocal range. The 1990s were indeed full of romantic pop that was sometimes well crafted, as in hits by Bon Jovi or Mariah Carey, but I find this Italian style much quirkier and more interesting.

La Solitudine and Strani Amori were songs that resonated all over South America—I remember that, back in 1993, the radio stations were playing these songs several times a day. I will explain here why I believe La Solitudine became this massive hit, and I will try to outline Pausini’s musical aesthetics in different phases.

La Solitudine was composed by Angelo Valsiglio, Pietro Cremonesi and Federico Cavalli, a team of musicians who invented a well-elaborated formula for popular songs, aiming to support the powerful voice of the young Laura. In 1993, she won the 43rd Sanremo Music Festival and soon released a series of albums with this same team. As her career progressed, her team and style changed, and the music became more simplified and perhaps less interesting, at least from a musical point of view. Their attempt to imitate the American pop style, or to add an English-language repertoire, didn’t have much impact and was a big contrast to the initial ‘compositional’ proposal.

Laura in 1993, Sanremo Music Festival

So how did La Solitudine become such a famous composition, covered by many other artists? I will explain what I hear when I listen to it:

0’00” As we can notice if we try to play along, the original song (from Laura Pausini, 1993) is not in the standard 440Hz tuning. Its initial key is between B and Bb. Was this done in the mastering process, on purpose? It’d be interesting to know why they chose to do this, but I believe it gives the song a special effect.

0’16” The verse progression is a basic ballad, I-vi-IV-V. The dark synth in the background gives depth to the picture while Laura sings a love poem. The drums are played softly with a tight snare on the third beat.

0’45” The chorus comes, with a bass descending in a similar fashion to the progression of Pachelbel’s Canon. Then the piano leads a modulation bridge, the cymbals roll, and now we have a punchier, more resonant snare.

1’30” The verse returns (I-vi-IV-V) a key above, but now there is a new level of dynamics; we could perhaps say it went from piano to mezzo-forte. A guitarist picks in the key (a detuned C#) in the background.

1’59” The second chorus appears; Laura now sings one octave higher.

2’27” A bridge verse creates a crescendo effect that resolves into the new chorus, now fortissimo, with backing vocals singing almost at the same loudness as Laura. Along with the choir, Laura alternates between the main low melody and high-pitched improvisations. The song ends with a piano coda, matching rhythms with the drums’ cymbals.

Pausini’s backing vocal trio in a 2001 tour

I will now classify Pausini’s music into distinct phases, according to its musical characteristics, and not to other contexts such as lyrics or life events.

Phase 1 (1993-1995) A. Valsiglio, P. Cremonesi, M. Marati
This is my favourite phase as it is made of songs I’ve been listening to for the last 30 years of my life (I’m 39 now). I enjoy all the songs composed during this period, particularly Gente, Perchè non torna più, Lettera, Il Coraggio che non c’è, Un amico è cosi, Le Cose Che Vivi, Che storia è and Il mondo che vorrei.

This phase is characterised by:
– Arrangements of piano and keyboards (FM digital synths, strings, organs and other effects), electric guitars (clean/chorus/delay and sometimes overdriven solos), long-reverb drum snares with light percussion, electric bass and a backing vocal choir (usually one male and two female voices). It also features guitar or saxophone solos, with space for more virtuosic improvisations.
– Harmonic modulations, typically three times upwards in many songs, allowing Laura to show her impressive vocal range, textures and full tessitura control. Bass cadences in Pachelbel’s Canon style are common (like in Strani Amori or Lui non sta con te), stylised with suspended chords and 9ths. Diminished chords are well implemented in harmonic variations or bridge sections.
– Wide dynamics, with the final choruses always much louder and with full instrumentation; this is something we rarely hear nowadays, when almost every master (especially in pop) is heavily compressed.

In this phase, Strani Amori begins with the simplicity of three descending major-scale notes in C major. The motif evolves like a story, with vocals hitting the suspended notes (9ths and 4ths) of the chords. In the second repetition of the chorus, the progression tricks us by presenting a half-diminished chord, Bm7(b5), which goes to a V/vi (E7), creating an interesting variation. The song follows by going up to D, until it reaches a ‘glorious’ chorus, with more interesting harmonic variations and an explosion into the coda. In each key, Laura’s voice changes its timbre, and you can hear ‘low Laura in C major’, ‘medium Laura in D major’ and ‘high Laura in E major’.

Another very interesting harmony from this period is found in Lettera, which fluctuates between A and F#.

Phase 1 Albums

Laura Pausini (1993 – Italian)
Laura (1994 – Italian)
Laura Pausini (1994 – Spanish)

Phase 2 (1996) E. Buffat, D. Vuletic and various international composers

Pausini’s saxophonist in a 1997 tour – my apologies but I can’t seem to find out his name anywhere!

A new production style can be heard from 1996. Le cose che vivi was a very successful album and preserved much of the aesthetics of the previous period.

The composition Le cose che vivi goes through five keys, and between these modulations Laura’s melody shifts by a semitone, creating a very interesting effect. This is how the harmony goes:

|| F: I vi IV V Db: I iii IV V ||
|| G: I vi IV V Eb: I iii IV V ||
Then a long-progression bridge comes (Em B/D# G7/D A/C# Am/C Bm7 Em7 A11 A7(b5) D11 E11) and resolves in one more key:
|| A: I vi IV V ||

Another fantastic harmony is found in Il mondo che vorrei. Click here to see my chord transcription.

From La mia risposta (1998), the compositions began to add more modern pop elements, like electronic drums and more common harmonic progressions. The following album, Tra te e il mare (2000), was more successful, especially thanks to the title song, but didn’t carry on delivering harmonic surprises and original arrangements like before. E ritorno da te, released on her Best of collection (2001), was more interesting, and her world tour was very well produced, consisting of a professional band, with Gabriele Fersini creating interesting guitar solos. I believe the tours between 1997 and 2001 were the best of her career, showing very well-elaborated arrangements and original performances.

From the Inside (2002) was their first attempt at the English-speaking market, and it was not very successful. In terms of musicality, it didn’t present anything too special.

In Resta in Ascolto (2004), Pausini’s production pushed towards something more like a ‘rock-band’ style. I like this album; it was a contrast to the previous phase, but it still came with many interesting harmonic ideas and creative progressions, such as in Vivimi. The chorus goes like this:
|: I | v (yes, minor!) | IV | ii | V :|

Resta in Ascolto is a unique pop/rock/fusion album and I can’t recommend it enough. Besides Vivimi, Benedetta Passione (which modernises the style of her old hit Ascolta il tuo cuore) flows with a very interesting harmony, and Come se non fosse stato mai amore has an impressive structure, lots of dynamic passages and sounds very ‘fusion’, a great combination of rock, pop, soul and jazz. Parlami is also a well-arranged song. All the songs seem to ‘match’ the atmosphere, resulting in a well-presented conceptual work.

Phase 2 Albums

Le cose che vivi / Las cosas que vives (1996)
La mia risposta / Mi respuesta (1998)
Tra te e il mare / Entre tú y mil mares (2000)
The Best of Laura Pausini: E ritorno da te (2001)
From the Inside (2002 – English)
Resta in ascolto / Escucha (2004)

Check out Andrea Braido’s solo in Le cose che vivi, from 1997

Phase 3 (2006) P. Carta, D. Vuletic, B. Antonacci

From Io canto (2006), Pausini’s production started to work with covers and moved away from the initial aesthetics. In Primavera in anticipo (2008), there is little modulation, almost no piano and no more backing vocal choirs. The harmony always stays within one key, and suspensions and diminished chords have disappeared. In terms of musical creativity (elaborated harmonies/melodies/arrangements), this album has nothing interesting to present.

Unfortunately, this lack of creative development seems to have continued since 2011, in a style that repeats I-IV-V and other similar progressions, with no harmonic surprises, no dynamic arrangements, etc. Since then, the production seems to have lost its musical authenticity, offering a more common pop music approach. The latest album, Anime Parallele, uses generic electronic loops and has a more ‘dance music’ aesthetic, with monotonous and simple arrangements. What happened here?

Phase 3 Albums

Io canto / Yo canto (2006)
Primavera in anticipo / Primavera anticipada (2008)
Inedito / Inédito (2011)
Simili / Similares (2015)
Laura Xmas / Laura Navidad (2016)
Fatti sentire / Hazte sentir (2018)
Anime Parallele / Almas Paralelas (2023)

Towards a Commercial Pop Model and the ‘Invisible’ Musicians

My hope was always that the Pausini production would continue its fusion approach or move to a more jazzy-soul style, and remain musically explorative. However, it seems to have stagnated into a more commercial proposal. I will not comment on the lyrics, or on Laura’s vocal performance and charisma, which have always been fantastic. I met her personally once, and I know what a great person she is. But from my perspective as a musician, the music was much more interesting in its early periods.

In 2022, Laura released her film, Laura Pausini: Pleasure to Meet You. Unfortunately, the film misses the chance to interview the musicians and to talk more about the production side of the work. The film focuses on her personal life, but I feel that, without the magical harmonic minds of Angelo Valsiglio, Eric Buffat and the other musicians who composed her hits in the 90s and early 2000s, she wouldn’t have reached this level of popularity, so it would have been interesting to include them in the film.

My Personal Experience with Laura’s Music

I’m getting to the end of this article, and to finalise it, I will tell you my ‘weird’ personal experience with the music of Laura Pausini. When I was young and learning music I became obsessed with her songs; I saw them as very good material for learning languages and harmony, so I collected all of her albums, went to concerts and even met her on a national television programme. While her fans were giving her CDs to sign, I gave her my sheet-music songbook. She smiled and looked as if it was the first time she was signing one of those books. I gave her a letter (written in badly translated Italian) saying I was studying to become a composer and hoped one day to compose music for her (maybe I would have, if I had gone down the songwriting/pop composition path, but I took the mad-scientist route of experimental music instead).

Another funny story: When I was around 17, I could play on the guitar most of her songs, and even had my original solo arrangements. One day I took my guitar to a luthier in São Paulo for repair, and a man showed up asking the luthier if he knew anyone who could perform a song with Laura on the Brazilian national TV that same day! Unfortunately, for some reason I missed the chance, which was frustrating afterwards as I would have loved to have performed with her.

In summary, I believe Pausini’s music and production deserve praise, not only for her amazing vocal skills, but also for conceptualising many original ideas in a unique ‘fusion-pop’ style.

On the Listening Experience of a Musician

Today I would like to discuss the way our minds and bodies accommodate music and ingest/digest the vast ingredients of a musical experience. Yes, we do ‘eat’ or ‘drink’ music. Music is a dimension of abstractions (ideas, concepts, colours, rhythms) that flow into our minds and invoke endless symbolisms in a matter of seconds. It also moves our body particles and evokes emotions. What is contentious about all this is the question of whether it is better sensed by those who have more musical experience, knowledge and skills. My purpose is not to try to find an objective answer to this, but merely to investigate the question.

We can indeed say that a baby can enjoy Mozart as much as an adult can. But that’s not always the case. A baby can perhaps have a few similar emotions and a sense of rhythm, but lacks the developed and experienced brain that transcribes musical ‘data’: imprints of timbre, technique and harmony, along with the symbolism of lyrics, for instance.

So what to say about a musician? Is a musician more capable of appreciating music? Perhaps appreciation is not a fair word. Many music lovers are not musicians and enjoy listening to music all day long, or collecting albums, etc. Dancers might not be musicians, but they will certainly understand rhythms and appreciate music just as much. However, we could rephrase this: is a musician more susceptible to a deeper musical experience? Now, this case of musical ‘arrogance’ deserves to be scientifically scrutinised.

Scales, arpeggios, rhythms, harmonies, timbres, and instrumental techniques, alongside many other musical elements, flow through a musician’s mind. A professional violinist can tell a smooth, fluent or experienced performance from a more common one. This professional will be able to tell because of his/her understanding of all musical nuances that are part of internal knowledge and vocabulary, just as a watchmaker can tell a real watch from a fake watch, a sommelier can distinguish a fine wine from an average one and a skilled linguist can differentiate between regional dialects and accents with ease. What could we say, then, about a musical critic who has no musical skills?

Of course, music, at the top of its hierarchy, involves more than sharps and flats, and some critics know how to appreciate and point out other factors that music involves. There is context (social, cultural, anthropological), in addition to other recording and performance factors. One can be an expert in different areas of music. But if art is to be judged (which occasionally happens), an amateur is more likely to miss fundamental details. I will not get into taste here; if you want to talk about taste, I suggest you read David Hume, not Ian Costabile’s blog.

(Meanwhile, I recommended to a friend John Patitucci’s first album, a masterpiece recorded with Chick Corea. He told me it sounded like ‘background music’! What does that even mean? I could end the discussion here, but I persist…).

Judgment is not in question, though. The question here is phenomenological. How much more susceptible to a deeper experience are professional musicians? What goes on in the ear of someone who appreciates tonal music as much as atonal music? Someone who has progressed through the audacities of the Baroque, jazz, Stravinsky, Messiaen, Nancarrow, and the most experimental works in the history of music (I’m talking Scelsi’s Konx-om-pax level). Music that is not music as music is known by most inhabitants of this planet. Music that is music for musicians, some musicians, those who enjoy studying music for understanding, for exploration, and to see how far we can manipulate sounds over time (or even without time at all, as I’ve explored before).

But music doesn’t have to be at this level of complexity to evoke a deeper experience in musicians. Popular music can do the same when it’s well elaborated. Back in the early 2000s, I attended a university module called ‘Music Appreciation’. I remember an example that our teacher, Dr Sidney Molina, gave us of an album recorded by Sting called Mercury Falling. We were all amazed by how much ‘musical’ food that album was carrying. Dr Molina showed us the unusual and exciting swing of the 9/8 time signature (in a 5/8 + 4/8 division) in I Hung My Head, the modal introduction of I Was Brought to My Senses, among many other fantastic creative ideas that were the result of a combination of top musicians (like Vinnie Colaiuta on drums).

All musical styles can be rich in musical nuances, but that’s not always the case. The commercial market, run by ‘business’ people who lack musical knowledge, will often bring music with ‘raw’ ingredients. In many cases, there is nothing special to be consumed, especially nowadays, when everything needs to be made in the instant of an Instagram post, and when the ‘governors’ of streaming platforms have no consideration for history and aesthetics. Let’s take Spotify, for example. Search for ‘Debussy’, then access the ‘albums’ page. You will come across the current year’s recordings, because they are ordered by the latest published works, and almost every day there is a newly published album. Imagine searching for Mozart, then; there is probably a new ‘album’ published every hour! In the middle of the Spotify mess, there is no decent classification and organisation for music. The younger generations walk blindly through music playlists without any guidance whatsoever, stepping into all sorts of territories. They may miss the acclaimed recordings that can accurately reproduce a musical masterpiece (Debussy without Thibaudet, Bach without Gould, etc.).

The reason I dedicated my life to musical studies is phenomenological. The aesthetic experiences I have had with music have only become more and more profound as my musical knowledge has progressed. It’s not just intellectual, but physical as well; hearing major 9th chords triggers a ‘pink’ synaesthesia in me, and I feel the sound of strings can make my body more relaxed (I remember a performance of La Mer that gave me goosebumps, and experiences with the music of Giacinto Scelsi that stimulated a sensation on my back as if activating my kundalini energy).

Furthermore, every new shape or pattern of harmony I find gives me a great feeling of awe and wonder: from the sound of ladders of 4ths to the ‘fantastic’ harmonies of Holst’s The Planets, or from Tom Jobim to Chick Corea’s explorations. Returning to our question: if I were not aware of all these musical features, would I still feel the same way and have the same type of experience?

Music is a very powerful art form. But possibly, a trained ear may give access to a deeper journey and an ingress to the experience of the sublime, like when we are confronted with the endless expanse of the ocean or the night sky filled with stars (Kant’s mathematical sublime), or when something feels so powerful, such as a storm at sea, that we feel respect and admiration for it (Kant’s dynamical sublime).

I would like to express here my recent sense of wonder as I was listening to Ernest Chausson’s String Quartet in C minor and realised he was quoting Debussy’s String Quartet in G minor. I was excited to recognise this just by chance, not through a lecture or article. I made a quick video showing the quote comparison:


For further clarification, Chausson and Debussy often shared musical ideas and influenced each other’s compositions. Debussy even dedicated his piano piece “Pour le Piano” to Chausson. However, their friendship was tragically cut short when Chausson died in a bicycle accident in 1899. Debussy wrote a short composition called “Canope” as a tribute to his late friend.

The power of a quote is also heard in jazz improvisation. In this respect, there is so much that could be discussed as musicians in the audience can follow ideas and achievements that happen on the go, which I believe could be easily missed by non-musicians. Thus jazz bars are often frequented by musicians.

To finalise, I would like to show you some ‘contemporary media art’ I’ve been enjoying, which relates to all this discussion and the idea of the ‘quote’. It’s the work of a musician called Jacob Dupre, who likes combining simple pop songs with more sophisticated harmonies. By removing tracks and adjusting the pitch and tempo, he makes bizarre amalgamations of artists, such as Taylor Swift (the most popular singer at the moment) with the music of Dave Brubeck or even with Allan Holdsworth (a legend of jazz guitar who passed away in 2017). (Obviously, I like these funny results much more than Swift’s songs.) Interestingly, he takes what a mostly non-musician audience listens to and mixes it up with more sophisticated harmonies that are usually better appreciated by musicians.

Jacob Dupre’s art: Swift + Brubeck (check out the comments!)


You can find these videos on Instagram:
https://www.instagram.com/reel/C2APtRXuLnA/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==

https://www.instagram.com/reel/Cz_xI7VuzmP/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==

A Park’s Soundscape is Endangered

Coots around Princes Park’s island

Hello everyone! Today I would like to express my frustration with a council project that is threatening to damage a small piece of free, untouched nature: a small island at Princes Park (Liverpool, UK) where birds can nest and rest. The project plans to build a bridge to the island and allow unrestricted public entrance.

The impact of this project could include:

Disruption of Avian Species: noise pollution, habitat fragmentation and disturbance; the sudden increase in human presence may drive some shy and less adaptable species away.

Degradation of the Ecosystem: The bridge construction process will inevitably involve deforestation, ground excavation, and alteration of the park’s landscape to accommodate the infrastructure. This would result in the loss of critical vegetation and disturbance of the soil, leading to erosion, habitat destruction, and the loss of native plant species.

The project’s concept is to build a memorial for Nelson Mandela. However, it is questionable whether Mandela himself would have approved of this. Mandela not only fought for South Africa’s liberation from the oppressive apartheid regime but also emphasised the importance and value of our natural environment.

About the soundscape:

Princes Park is rich in the sound of several birds, including parakeets (even in the winter), blackbirds, tits, robins, swallows, thrushes, chiffchaffs, moorhens, ducks, geese and swans, and the island is often visited by a heron.

I went to the park on the 9th of July 2023 and crossed the construction bridge to make recordings of the soundscape that will be endangered if the project goes ahead. Watch the first video:

I shall continue to report on our fight to keep this island free for nature.

To sign the petition: https://www.change.org/p/save-the-island-at-prince-s-park-change-the-location-of-nelson-mandela-s-memorial

For more information:

https://www.livpost.co.uk/p/bridge-over-troubled-water
https://www.facebook.com/protecttheislandprincespark

Update 01/08/2023: Protesting with loudspeakers and soundscape recordings

At the opening ceremony, there was a peaceful demonstration by environmentalists and residents who were uncertain about the impact of this project on the park’s ecosystem.

In this demonstration, I took a portable speaker and a sign with me. The sign read ‘birds’ on one side, and ‘people/phones/dogs’ on the other. I switched between the soundscapes of birds and anthropogenic noise, to raise awareness among the people who supported the event and to show that the soundscape also matters. However, I was shocked to see that a lot of people don’t seem to care about sounds; it seems to me that soundscapes are still not taken seriously. It was interesting to see that the police didn’t know how to react to the fact that I wasn’t playing music or speaking through the loudspeaker. They seemed confused that I was playing soundscape recordings, especially as the recorded birdsongs began to merge with the park’s actual birdsongs.

It seems the council will go ahead with the project. That is understandable, considering the powerful meaning that a monument to Nelson Mandela can have for this Liverpudlian area (Toxteth, L8). It is just a shame that they didn’t consider that it could have been installed anywhere else in the park, where nature would not need to be disturbed.

Further information:

On the inauguration and protest: https://www.itv.com/news/granada/2023-07-18/mandela-day-south-african-anti-apartheid-activist-statue-unveiled

On the possibility of corruption involving the project: https://www.livpost.co.uk/p/exclusive-ex-deputy-mayor-went-on

String Quartet with ‘Realistic’ Virtual Instruments

I recently revised a string quartet I wrote at the very beginning of my career, 18 years ago! It is a three-movement quartet inspired by impressionism, from when I was studying Debussy, Ravel, etc. It’s simple, nothing innovative, but I like some of its harmonic ideas and it has a nice coherence and flow. The revision fixed many mistakes and added a bit more depth to it. Currently, I have no opportunities to have my music performed, but I still wanted to be able to listen to this quartet. I was dead curious to know how it really sounded and to get as close as possible to the experience of listening to a quartet. I have no financial means to pay musicians to record it, so could technology help me?

Luckily I had the chance to try out the Berlin Strings First Chairs and Cinematic Studio Solo Strings sample libraries. Both provide samples and control of the four string instruments (violin I, violin II, viola and cello), with a variety of techniques (like tremolo, staccato, harmonics, etc.). They can be used with Kontakt and run straight from Finale (and probably Sibelius too), so you can listen to what you are composing. Don’t expect to explore too many techniques, though, and for this reason many contemporary composers will not be able to fully express themselves through these instruments. But if you play with them first, understand their limitations and then decide to compose, they can be brilliant tools.

They are both of great quality and quite similar, but the variety of techniques differs. Be aware that they do not always articulate realistically, and we can still hear that ‘MIDI’, robotised sound. But sometimes they do produce a realistic performance.

I made a comparison video between these two libraries:

For my quartet, I only used the Berlin Strings First Chairs:

1st movement – PDF score

2nd movement – PDF score

3rd movement – PDF score

Audio only:

1st movement
2nd movement
3rd movement

Spatial & Imagined Soundscapes

On the 23rd of June 2022, I led a workshop with Prof Jacqueline Waldock at Tate Liverpool called ‘Imagined Soundscapes’. The idea was to respond to the Radical Landscapes exhibition that was taking place at the time. The exhibition presented British landscape art, mostly paintings and photography from a variety of periods.

This workshop seemed like a great opportunity to test Schaeffer XXI, a multichannel sound interface I’ve been designing through Sonalux and which we hope to commercialise soon. The interface has many audio input possibilities (8 channels as a USB interface + 8 channels from an SD card + 4 audio line inputs) and 8 outputs. It also features an LED matrix with joystick control to easily distribute sound, and a sequencer that responds to tap tempo. The name was chosen in homage to Pierre Schaeffer, who worked on the design of the first sound spatialiser, the potentiomètre d’espace, in 1951.

Schaeffer XXI – 8ch Sound Spatialiser (multichannel mixer)

The idea of the workshop was to show the participants an image from the exhibition and allow them to choose matching sounds from a sound ‘menu’. In addition, they could choose the levels and the direction the sound should come from. For this, we set up 8 speakers around the room (octophonic), connected to Schaeffer XXI working as a USB interface, and to a Max MSP patch I designed for the occasion.

Max interface showing exhibition photo (by Chris Killip), soundscape library and 8 individual outputs with some additional control.

Sounds could be chosen from three categories: anthropophonic, geophonic and biophonic. I built a sound library with personal recordings, in addition to sounds from the BBC archive and Freesound.org. There were many options of birdsong and seascapes; I even included watermelon-eating sounds! Sounds could also be played at different speeds, reversed, filtered and pitch-shifted. This allowed more imaginative ideas for when we presented abstract landscape paintings.

Imagined Soundscapes Menu

It was interesting to see the ideas that people came up with. For example, when we started using the sequencer and had sound rotating between all speakers, someone asked for a cat meowing sound. This imagined sound was soon being ‘chased’ by a dog, also sequenced through the speakers. Besides many funny ideas, I was intrigued by the potential of creating multichannel soundscapes. The seascape, for example, becomes very immersive when you have the wind behind you, water in front of you, birds flying on the sides, etc. I am sure there is much more to be explored with this setup and I hope to be running more workshops like this soon.

Workshop room
Participants at the Workshop

Stage Lighting Design with Max MSP, EOS and Capture

I have recently worked on two interactive stage lighting design projects for concert music. Whereas stage lighting is well explored in popular music, it’s still not very common in concert music. Of course, a light show must convey some meaning, and adding flashing lights to concert music can be out of context, especially in music composed before the 20th century. However, on some occasions a light show can add to the concept of the music and make it multisensory, not to mention that new music can be composed specifically to be performed along with lights (see my compositions Future Lights and Intersidereal).

In these two recent projects (March and April 2022), I worked with the music of Scriabin, the first composer to notate light colours in the score (Prometheus), and the music of Messiaen, who also explored ideas of synaesthesia and provided annotations related to colours in his scores. Both concerts were held at the Tung Auditorium, a new venue in Liverpool hosted by the University of Liverpool. Initially, I thought it would be easy to explore its lighting console, the Ion XE 20, and fixtures (16x D60XTI and 4x SolaFrame Theatre, plus many house lights). However, the venue launched with a busy schedule, hosting not only performances but also lectures and graduation events. There was little time reserved for me to connect equipment and test, so designing anything on the spot was impossible. Therefore, my only solution was to create a 3D simulation system, built in such a way that the same interface could later be connected directly to the auditorium’s console.

A nice piece of free software for simulation is Capture (student version). I have used it for many of my virtual performances, including a 3D virtual stage I created to be used with chroma key, multiple cameras and other special effects for the online performance of Intersidereal. This time, though, I used it only for simulating the stage and designing the light shows. If you know the basics of Capture, it’s not hard to recreate a stage. My references were only photos of the Tung Auditorium, so it wasn’t super precise, but it was enough for the job.

Tung Auditorium 3D simulation on Capture

Connectivity and Mapping Fixtures

The Max interface connects to the lighting console and to Capture.

The way I control Capture is through Cycling ’74’s Max (Max MSP), via Art-Net, using the imp.artnet.controller object (a Max external). To make this work, Capture’s Art-Net IP (Option/Connectivity) must match the IP in the Max object. This is usually the WiFi address you are using, but if there is no WiFi router (as happened to me during a rehearsal), it’s also possible to use an Npcap Loopback Adapter (in Windows).

The light show cues were all programmed in Max, sometimes receiving Serial (e.g. from an Arduino sensor) and MIDI (e.g., keyboard controller). Javascript within the js object was also used, especially for mapping fixtures and creating for loops. This allowed the quick implementation of light chasers, etc.
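
To give a flavour of the kind of code that lived inside the js object, here is a minimal sketch of a chaser, not the actual patch code: it assumes a hypothetical list of fixture channel numbers and simply steps a colour parameter around the rig, with its output meant to feed the udpsend object described below.

// chaser.js – minimal sketch for the Max js object (hypothetical fixture numbers)
autowatch = 1;
outlets = 1;

var fixtures = [1, 2, 3, 4, 5, 6];   // hypothetical console channel numbers
var step = 0;

// call chase() from a metro: one fixture's blue parameter is raised, the others are zeroed
function chase()
{
    for (var i = 0; i < fixtures.length; i++) {
        var level = (i === step) ? 100 : 0;   // EOS levels run from 0 to 100
        outlet(0, "/eos/chan/" + fixtures[i] + "/param/blue", level);
    }
    step = (step + 1) % fixtures.length;
}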

The ETC Ion XE 20 lighting console runs the EOS system. I connected Max to the console via OSC (over an Ethernet cable), using the Max object udpsend and sending messages like /eos/chan/1/param/blue 31. Note that the ping test message didn’t seem to work, and sometimes the console had to be restarted for a successful connection (I posted the settings I used on the Max forum).
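
For anyone wanting to test the same kind of message outside Max, here is a minimal sketch using Node.js with the node-osc package; the IP address and UDP port are placeholders and must match the OSC settings actually configured on your console.

// eos_test.js – send one OSC message to the console (hypothetical address and port)
const { Client } = require('node-osc');

const eos = new Client('10.0.0.101', 8000);   // placeholder console IP and OSC UDP port

// set the blue parameter of channel 1 to 31, as in the message quoted above
eos.send('/eos/chan/1/param/blue', 31, (err) => {
  if (err) console.error(err);
  eos.close();
});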

Max MSP sending messages to Capture (via Art-Net) and to EOS (via OSC), in addition to Javascript mapping systems.

The free student version of Capture doesn’t provide many choices of light fixtures, so the Tung Auditorium’s fixtures were not available for simulation. Therefore, to create a system that worked simultaneously with the EOS (performance console) and Capture (3D simulation), I had to do some mapping via Javascript. The fixtures I used in Capture were the Idea Color Changer 575 and the ColorWash 1200E AT. The house lights were not controlled in the performance and therefore not mapped, but in Capture I modelled them with Par 64s. In summary:

D60XTI (Auditorium Fixtures) > Idea Color Changer 575 (Capture 3D simulation fixtures)
SolaFrame Theatre (Auditorium Fixtures) > ColorWash 1200E AT (Capture 3D simulation fixtures)

Mapping was needed because different fixtures have different channels (for example, the red colour for fixture 1 is channel 3 on Capture, while the OSC message for the EOS would be /eos/chan/1/param/red 31). In addition to channel mapping, values also needed to be mapped. Capture receives values from 0 to 255, whereas the EOS receives 0 to 100. Conversions from RGB to CMY were also required. Capture’s fixtures were all CMY, but the D60XTI on EOS received RGB.
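
As a rough sketch (not the original patch code), the two conversions can be written like this; the CMY conversion shown is the simple complement of RGB, which is only an approximation.

// mapping_sketch.js – value and colour conversions described above
// Capture's fixture channels take 0-255; the EOS console takes 0-100.
function toEosLevel(v255) {
    return Math.round((v255 / 255) * 100);
}

// Capture's fixtures were CMY while the D60s received RGB, so RGB values
// get a basic complement conversion before going to Capture.
function rgbToCmy(r, g, b) {
    return { c: 255 - r, m: 255 - g, y: 255 - b };
}

// Example: a red value of 120 for fixture 1
var cmy = rgbToCmy(120, 0, 0);      // -> { c: 135, m: 255, y: 255 } for Capture (0-255)
var eosRed = toEosLevel(120);       // -> 47, sent as /eos/chan/1/param/red 47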

Control and Cues

For the Scriabin light show, a simple system of cues was set up, as most changes were controlled by a MIDI keyboard that operated as a light organ, in addition to real-time light effects triggered from the audio input (microphone). The light organ was polyphonic, meaning it could control up to 6 different colours (coming from lights in different places) at once. Aside from the stage lighting, it also controlled the Insekta sculptures.
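
As an illustration, here is a simplified sketch of how such a polyphonic light organ could look inside the js object; the pitch-class-to-colour table is hypothetical (the real mapping followed Professor Kenneth Smith’s analysis), and the actual patch also handled the audio-triggered effects and the sculptures.

// lightorgan_sketch.js – simplified polyphonic light organ for the Max js object
outlets = 1;

var colours = [                          // hypothetical RGB colour per pitch class (C, C#, D, ...)
    [255, 0, 0], [255, 80, 0], [255, 200, 0], [0, 255, 0],
    [0, 255, 200], [0, 120, 255], [0, 0, 255], [120, 0, 255],
    [200, 0, 255], [255, 0, 200], [255, 0, 80], [255, 255, 255]
];
var freeFixtures = [1, 2, 3, 4, 5, 6];   // up to six colours at once
var voices = {};                         // held note -> fixture it is using

// connect [notein] to this function: note(pitch, velocity)
function note(pitch, velocity)
{
    var fixture;
    if (velocity > 0 && freeFixtures.length > 0) {               // note on: grab a free fixture
        fixture = freeFixtures.shift();
        voices[pitch] = fixture;
        var rgb = colours[pitch % 12];
        outlet(0, fixture, rgb[0], rgb[1], rgb[2]);               // "fixture r g b" -> mapping stage
    } else if (velocity === 0 && voices[pitch] !== undefined) {   // note off: release it
        fixture = voices[pitch];
        delete voices[pitch];
        freeFixtures.push(fixture);
        outlet(0, fixture, 0, 0, 0);                              // switch that colour off
    }
}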

Max interface used in Scriabin’s Sonata No. 10.

There were 3 pieces to be designed for the Messiaen concert, and each piece required many annotated cues. Score images with cues were prepared in Photoshop, then displayed in Max via the fpic object. The preset object was used to store colours and fixture positions, while the radiogroup object indicated cues for triggering a variety of effects. The keyboard commands were set as z/x for changing score pages, left/right arrows for changing cues, and the space bar for other interactive commands.

Interface on Max for following the score and to trigger cues.

There are many more details I could mention, but I think this explains the basics of the technology used in these two concerts.

Concepts

Scriabin’s colours were translated through an analysis system invented by Professor Kenneth Smith, and the sculptures represented insects, as Sonata No. 10 is known as the Insect Sonata.

Messiaen’s pieces were also analysed, following colour indications annotated on the pieces along with other ideas related to birds (the theme of the pieces). For example, in Abîme des Oiseaux, lights on the stage floor represented feathers. In Le Merle Noir, moving heads suggested birds moving around the stage, and in Le Merle Bleu, the blue and green tones were used to represent the sea, and distinct colours were assigned to every harmonic change.

This video shows a rehearsal simulation of Messiaen’s pieces on Capture:

This video shows excerpts of the performance:

You can watch the full performances on YouTube:

Olivier Messiaen, Birds and Colours – Performing Nature
Le Merle Noir, Messiaen, for Flute and Piano
Le Merle Bleu, Messiaen, for Piano
Abîme des Oiseaux, Messiaen, for Clarinet

James Kreiling plays Late Scriabin
Sonata n.10
Click here for more information about the Insekta sculptures used in this concert.




Layers by Orchestral Tools

I recently learned about Orchestral Tools and their collections of virtual instruments and was amazed to hear the pristine quality of their samples. There are many new things in the mockup market, but it’s still a challenge to make things sound realistic. Normally, if you are building a virtual full orchestra, the moment you start mixing all the different instruments the final result doesn’t sound like a real orchestral texture. That’s because the samples were all recorded separately and they don’t blend in time and space as they should. However, if you are using recorded chords, then you can have a more realistic texture, and that’s the case with Layers.

Layers - Orchestral Tools

Layers is a free pack by Orchestral Tools that runs in their Sine Player plugin. You can select full orchestra, woodwinds, brass and strings.

I was very impressed to see that this was free, and I was expecting some limitations. Indeed, you can only have three types of chords (major, minor, sus4). Why the heck did they leave out the diminished and augmented chords?!

Anyway, it can still be useful for some small jobs or even for some compositional experiments, so I thought I could have a go at making something with it. However, there was another thing annoying me: Layers doesn’t provide the score notes for the chords. For a composer that’s the most important thing! How could I notate my experiments with these materials?

To sort this out, I used some spectrum analysis software (and my ear) and here are all the C chord voicings:

layers - orchestral tools - score sheet notes

If you play C2, you get the first chord; C3, the next, and so on. There are 6 voicings/inversions for each chord. This applies to all ensembles. If you choose strings, it’s quite clear which notes the instruments are playing. But if you choose a full orchestra or another ensemble, then when you change inversions some instruments appear in certain voicings while others disappear. So, if I wanted to notate in a score every single instrument that was used, it wouldn’t be easy to make it accurate.

I wrote a short compositional cliché to see how this could be notated. Here is the score and audio:

Making this sort of score match the audio (produced in Reaper) was quite time-consuming, because I had to go through the ‘chord codes’ (C3 = chord 1, C4 = chord 2, etc.) and transpose them to the root notes I needed. I wouldn’t write a full piece through this method. Still, considering the high quality of the recording, I think it’s worth having Layers in my set of production tools. Not to mention that the VST works fine with Max MSP, which means it is possible to make something more interactive with orchestral sounds (or even create a better method for automatically transcribing the voicings to MIDI).
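
For anyone wanting to automate that transposition step, here is a hypothetical helper; it assumes one octave per voicing and semitone transposition of the chord root, which is how I read the behaviour described above, so check it against your own copy of Layers.

// layers_transpose_sketch.js – hypothetical helper for picking the trigger note
var FIRST_VOICING_C = 36;   // assumption: MIDI note for voicing 1 of the C chord (C2 = 36)

// voicing: 1-6, rootOffset: semitones above C (0 = C, 2 = D, 5 = F, 7 = G, ...)
function layersTriggerNote(voicing, rootOffset) {
    return FIRST_VOICING_C + (voicing - 1) * 12 + rootOffset;
}

// Example: voicing 3 of an F chord
console.log(layersTriggerNote(3, 5));   // 65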

Another thing worth mentioning is that there is a control of dynamics via the Sine player. However, I couldn’t find any instructions or figure out how it works. It is super weird that there is a button for piano and forte, but both can be selected at once. Does that make it mf? No. I wish they could explain this better or have a mouse-over popup with instructions. The design is not very intuitive.

Despite all the problems, I still felt tempted to explore more of their products, such as the Berlin Symphonic Strings pack. The quality seems supreme and it’s not limited to chords, meaning you can have full lines of violins, etc. The only issue is the price: 549 euros. How can a composer afford that?! And that’s one of their cheapest options, including only strings. Unless you are a well-paid Hollywood composer, I don’t think this is viable, especially for those like me who mainly compose contemporary music.

Overall, I hope in future they will provide score sheets, explain how the dynamics work and make more free packs.

How to record audio with binaural mics (Roland CS-10EM) using a PCM recorder (Tascam DR-40X)

Hi spatial visitors! I’m reporting today the hassle I had when trying to record with the Roland CS-10EM microphone using the Tascam DR-40X PCM (WAV) recorder. Maybe this can be useful for someone who is on a similar quest.

CS-10EM and Tascam DR40X

For some years I have had this amazing Roland CS-10EM, which records sound directly from your ears, and I think it is very useful for recording the sound of multichannel installations and soundscapes. Obviously, as everyone’s head is different (HRTF), the accuracy of binaural reproduction will vary from person to person, but it’s still quite good for recreating the stereo field, and the spatial reproduction back to my own ears is perfect, so I think it’s quite a special microphone to have.

The way I used to do it before was with a simple Sony ICD-PX333 audio recorder. It worked fine, but there was no input volume control and the format was mp3 at 192kbps. As I was recently working on a more professional video recording of a multichannel installation, I decided this had to be done in PCM WAV format. I wanted to use the Tascam DR-40X, as this is my main recorder for going out in nature and doing field recordings. It allows 2 external inputs, so “why not?”, I thought!

The CS-10EM is a microphone that takes “plug-in power”, similarly to lavalier or electret mics. These mics usually require some low voltage to function and plug-in power is usually provided by some devices in a range of 1.5-9V. This is also known as “bias voltage”.

The Tascam DR-40X does not provide this kind of output voltage to power external microphones. Instead, it provides 48V (or 24V) phantom power, which is the standard for condenser mics. Probably they could not add more circuitry to the DR-40X design, and since they didn’t expect people to use it with lavalier mics or this binaural mic, they just left this feature out. Other Tascam models also lack plug-in power, except for the DR-10 line, which is designed specifically for lavaliers.

Tascam DR-10

My plan wasn’t to buy another portable recorder (and the DR-10 line seems overpriced). I prefer carrying a single portable recorder when possible and adding external microphones as needed. So what could be done?

One possibility was to design a circuit to convert the 48V/24V phantom power to something between 2 and 10V (as the CS-10EM manual specifies). Googling a bit, I found someone looking for the same application on the Arduino forum. There I saw that using a step-down voltage regulator might not work (higher-voltage regulators seem to require more current than phantom power can provide). One schematic using a transformer was posted, but I didn’t have the time to build the circuit (plus adapt it to stereo) and test it.

Another discussion on StackExchange suggested a simple voltage divider to make it work. This would require testing, soldering connectors and making some sort of adapter. I wasn’t convinced this would provide a clean output, and considering I only had a few days to do this and was super busy with other projects, I looked for another, quicker solution.
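
Just to show why the idea seemed plausible on paper, here is a back-of-envelope sketch of the divider arithmetic; it ignores the microphone’s own current draw and the balanced wiring of phantom power, so treat it as rough numbers only, not a tested design.

// divider_sketch.js – back-of-envelope check of the voltage-divider idea
var V_PHANTOM = 48;     // volts supplied by the recorder
var R_FEED = 6800;      // ohms: phantom power is fed through ~6.8k resistors per leg

// A resistor from the mic line to ground forms a divider with the feed resistor.
function unloadedDividerVolts(rToGround) {
    return V_PHANTOM * rToGround / (R_FEED + rToGround);
}

console.log(unloadedDividerVolts(1000).toFixed(1));   // ~6.2 V
console.log(unloadedDividerVolts(470).toFixed(1));    // ~3.1 V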

I found the RØDE Microphones VXLR+ adapter, which converts 48V phantom power to 5V. I don’t know what type of circuit is inside, but it’s RØDE, so I imagine it’s well made. It’s a bit overpriced (£21 on Amazon UK in 09/2021), but it seemed convenient. The issue is that I had to buy 2 of them to connect the 2 channels. The XLR end fits the Tascam fine, but to join the L and R lines with the CS-10EM a splitter/adapter cable was necessary. More specifically: a Y splitter cable, 3.5mm mono (L and R) male to 3.5mm stereo female.

VXLR+

So I wasted a few more hours looking for this specific connector. I found one on eBay UK, but it was 1.5 metres long! That’s bad for the sound quality and more wiring in the way. At home, I had a more conventional Y splitter cable, but it was 6.35mm, requiring more adapters, which compromises the sound quality and makes everything bulkier. I thought about soldering my own cable, but I don’t like working with 3.5mm sockets and my time was tight… so finally, I found the right splitter on eBay from a German shop (BestPlug). A bit expensive (12.88€ / £11 with shipping and VAT), but it looked good quality and it was only 15cm long, so I went for it.

Y Splitter 15cm (BestPlug, Germany, Ebay)

Now I can finally record with those binaural mics on the Tascam, phew! This upgrade cost me about £55 and I know I could have bought another simple PCM recorder for less, like a Philips DVT1110 (£35), but as I said, I prefer to carry a single device for practical reasons. It seems a bit silly to use 2 adapters to step down the voltage since a single one would be more power-efficient, but for now, that has been my best option.

Finally, the results of the 2x VXLR+ and the Y splitter are great. The sound quality seems perfect, with no issues to note. If you have any better (or cheaper) ideas for connecting this Tascam with the CS-10EM, please leave a comment! 🙂

Tascam DR-40X > 2x VXLR+ > 3.5mm Y Splitter > CS-10EM