BrainModular Users Forum

Automated spatial audio

General Discussion about whatever fits..
colorado1876
New member
Posts: 3

Automated spatial audio

Unread post by colorado1876 » 15 May 2024, 17:55

I hope to implement an automated spatial audio system using a microprocessor and sensors, and am curious whether Usine Hollyhock (UH) is a suitable application.

Imagine you are with a friend and each of you can choose to listen to any of 40 unique sounds (sources). After each of you selects the sounds you want to hear, you separate and walk along a wall which has 64 equally spaced speakers. As you walk, only the speaker nearest you plays the sounds you chose. If you and your friend pass each other, at the moment of passing, you both hear the sounds each of you chose. Essentially, you and your friend each have unique IDs which sensors (64 total: one per speaker) can detect, correlating the sounds you chose and playing them where you are. A microprocessor (Arduino variant) uses the sensor input to determine which speakers should play which inputs. The microprocessor sends OSC/MIDI/MQTT messages to Usine Hollyhock to control the audio routing matrix.

Attached are images depicting the proposed setup. Note that as the sensors track specific objects (people), you (red) will only hear purple and pink sounds while your friend (blue) will only hear brown and orange sounds, unless you are in the same area, in which case you would both hear each other's chosen sounds. This is a simplified example, as there could be up to 16 unique people and they could choose to hear any combination of the 40 sound sources. Also note that in addition to a 16x64 audio matrix, an input mixer is used to combine the selected sounds into 16 channels (matrix columns) and an output mixer is used to sum all of the sounds into each of the 64 outputs (matrix rows).

Audio path: Analog audio sources > ADC soundcard inputs > Input mixer > Routing matrix > Output mixer > DAC outputs > Amplifiers > Speakers
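To make the routing idea concrete, here is a minimal sketch of the matrix logic described above (the 16-channel/64-speaker dimensions come from my setup; the data structures and function names are illustrative, not any particular sensor's or Usine's API):

```python
# Sketch: derive a 64x16 routing matrix from sensed listener positions.
# Assumes 16 listener channels (matrix columns) and 64 speakers (rows);
# each listener's mixed channel plays only at the speaker nearest them.

NUM_CHANNELS = 16   # one mixed channel per listener
NUM_SPEAKERS = 64   # one speaker per wall position

def routing_matrix(listener_positions):
    """listener_positions: dict {channel_index: speaker_index or None}.
    Returns a 64x16 matrix of 0/1 gains: matrix[s][c] == 1 means
    speaker s plays listener channel c."""
    matrix = [[0] * NUM_CHANNELS for _ in range(NUM_SPEAKERS)]
    for channel, speaker in listener_positions.items():
        if speaker is not None:
            matrix[speaker][channel] = 1
    return matrix

# Two listeners sensed at the same speaker: both channels play there,
# so each hears the other's chosen sounds, as described above.
m = routing_matrix({0: 12, 1: 12})
```

The microprocessor would only need to send the changed matrix cells over OSC/MIDI/MQTT rather than the whole matrix.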

Sources: Duplex sound card (40 inputs)
Sensors: ID based (64 sensors)
Microprocessor: ESP32 (for sensors, algorithm, and sending OSC/MIDI/MQTT messages)
Outputs: Duplex sound card (64 channels/speakers)
VST host: Usine Hollyhock
Communication: OSC, MIDI, or MQTT

As Usine Hollyhock has mixing, matrices with variable audio levels, and compatibility with several communication protocols, it seems to be the ideal audio software for my application. However, to be sure, I have several questions:

1. The users select the desired inputs manually, but the audio matrix routing is automated. Can the UH audio matrix be automated so that the selected channels are controlled via OSC/MIDI/MQTT messages from a microcontroller?
2. As the outputs (speakers) are equally spaced in a line, the goal is for the sound to transition smoothly from one speaker to the adjacent one as people move. When changing channels in the audio matrix, can the transition from one speaker (attenuation) to the next (amplification) be made gradual (controllable), similar to panning laws?
3. If I added 8 subwoofers (one for every 8 speakers) using an LP filter on 8 additional channels (64 full-range + 8 subwoofer = 72 channels), would this exceed UH's channel capacity?
4. If additional VST plugins (2 or 3) were used on each of the 16 channels (vertical columns in the audio matrix), would this be problematic for UH, or is this simply dependent on the host computer's CPU/RAM?
5. Is anyone aware of any threads/tutorials on using OSC/MIDI/MQTT commands to control a UH audio matrix?
6. With duplex sound cards, can UH take input from one sound card and output to a different one? When connecting my duplex soundcards to Plogue Bidule, I can only input and output through the same sound card. I'm not sure whether this is a duplex sound card limitation or a Plogue Bidule limitation.
7. Am I missing anything or is there some reason this implementation will not work with UH?
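For question 2, what I have in mind is essentially an equal-power crossfade between adjacent speakers. A minimal sketch of such a panning law (the function name and fractional-position convention are my own, not Usine's API):

```python
import math

def crossfade_gains(position):
    """position: fractional speaker index, e.g. 12.25 means a quarter
    of the way from speaker 12 to speaker 13. Returns
    (speaker_a, gain_a, speaker_b, gain_b) using an equal-power
    (constant-energy) law, so gain_a**2 + gain_b**2 == 1."""
    a = int(math.floor(position))
    frac = position - a
    gain_a = math.cos(frac * math.pi / 2)  # fades out as you walk away
    gain_b = math.sin(frac * math.pi / 2)  # fades in at the next speaker
    return a, gain_a, a + 1, gain_b
```

Halfway between two speakers, both gains are about 0.707 (-3 dB), so the perceived loudness stays constant as a listener walks along the wall.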

I have all of the hardware necessary to test a simplified setup with fewer channels first (8 inputs/8 outputs). The only part I anticipate being challenging is controlling the audio matrix with OSC/MIDI/MQTT commands (and learning the UH interface 8) ).

Thank you for any insight!
Attachments
Automated audio matrix 1.jpg
Automated audio matrix 2.jpg
Automated audio matrix 3.jpg

rlgsbt
Member
Posts: 187
Location: Marseille

Unread post by rlgsbt » 16 May 2024, 09:37

It's a great project and Usine is perfect for that!
Quickly:
1. You can control everything via OSC/MIDI in Usine. It's also possible to use polyphony to manage your OSC/MIDI commands.
2. https://brainmodular.com/manuals/hh6/en ... e/surround
3. If you use an LP filter, you don't need extra channels. You can do 8 x "8+1" routing (the subwoofers are connected in series).
4. Yes > dependent on the host computer's CPU/RAM.
5. https://brainmodular.com/manuals/hh6/en/learn-usine/osc
6. To check, but I think duplex sound cards only work on Mac.
7. I'm especially curious to know how your ESP32 works and how it's programmed...
+++

colorado1876
New member
Posts: 3

Unread post by colorado1876 » 16 May 2024, 17:41

@rlgsbt Thank you for your feedback! It seems Usine has the functionality I hoped it would. The linked tutorials were helpful.
rlgsbt wrote: "6. To check, but I think duplex sound cards only work on Mac."
I'm actually using it on a Windows 10 PC. It's an older MOTU duplex sound card, so it may just be a limitation of the card or of Plogue Bidule. I could only route sounds in and out of the same soundcard, not to the computer's audio or to other soundcards. I'll experiment with Usine, as it would be nice to have more flexibility with the input/output channels.
rlgsbt wrote: "7. I'm especially curious to know how your ESP32 works and how it's programmed..."
It isn't programmed yet. I'm just trying to identify a possible working setup first. To get OSC working in Usine, I was going to manually send commands using Ayaya (https://github.com/hannesbraun/ayaya); once that worked, I'd set up the microprocessor to send OSC.
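For anyone curious what the ESP32 would actually put on the wire: an OSC message is just a padded address string, a type-tag string, and big-endian arguments, which is easy to build by hand. A minimal sketch in pure Python (the `/matrix/gain` address is hypothetical, not a documented Usine address):

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message with int and float arguments.
    Strings are NUL-terminated and padded to 4-byte boundaries."""
    def pad(b):
        # OSC pads with NULs to a multiple of 4 (always at least one NUL)
        return b + b"\x00" * (4 - len(b) % 4)
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        else:
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
    return pad(address.encode()) + pad(tags.encode()) + payload

# Hypothetical command: route input channel 3 to output 12 at full gain.
packet = osc_message("/matrix/gain", 3, 12, 1.0)
# This packet would be sent as a single UDP datagram to Usine's OSC port.
```

The same bytes are what Ayaya or an ESP32 OSC library would produce, so it's a handy way to sanity-check messages with a packet sniffer.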

Time to see if I can get OSC commands to control an audio matrix...

oli_lab
Member
Posts: 1261
Location: Brittany, France

Unread post by oli_lab » 17 May 2024, 02:56

Hi,
Usine can do this and much more: it can play the files itself, so you wouldn't need the 40 inputs.

I don't know the spacing between the loudspeakers, but it seems to me that if they're 5 meters apart, the wall will be 315 m long. Cable management has to be taken into account and will probably drive the technological choices you'll have to make.
You can't use one ESP32 with such lengths of wire for the sensors.
A bus structure like RS485 could handle those lengths better (up to 1200 m), so Modbus could be a candidate, and Usine does Modbus as well.
Or maybe you could think of another way of detecting the listener's position, such as a camera and computer vision software?

cheers
http://oli-lab.org

Win11 Ryzen9/32GB RAM - RME MADIFACE - SSL alpha link 4-16 - OSC capable interfaces

follow OLI_LAB adventures on Mastodon
@olivar_premier@mastodon.social

colorado1876
New member
Posts: 3

Unread post by colorado1876 » 17 May 2024, 18:14

@oli-lab

The speakers are closely spaced, only 2 ft (~0.6 m) apart, so cable management is not a concern. In this case, the ESP32 will only act as a translator for the sensors, creating and sending the OSC commands to Usine. Another computer will handle the sensors directly. I didn't want to go into those details, as that part is all sorted. My primary concern is selecting a capable software platform, and it seems Usine is it. Thanks for this amazing software!

If/when I get stuck implementing it, I may need some assistance... but I'll try to solve things myself first through self-learning.

senso
Site Admin
Posts: 4424
Location: France

Unread post by senso » 30 May 2024, 10:38

Yes, Usine is made for you!
For info, BeSpline uses Usine to develop the "Fletcher-Machine": https://bespline.com/
128 sources, 128 Speakers, compatible with almost all tracking systems.
