Since my last check-in I have been in full-on production mode, building the actual web app, and haven't stopped to update my journal, but I did remember to document the different stages so I can share my process now. Josh gave me the UI design for the site, but apart from that I had complete freedom, so the first thing I worked on was the reaction to the sound input and how that part of the visualiser works.

Having already made a first "test" version of the visualiser, I knew the basic concepts I had to follow; the new challenge was getting input from multiple devices and displaying it on a single output. As we are building it on the web, I was lucky in that I didn't have to search very long to find something useful for this. Liam suggested Socket.IO, as that's what he is using for our studio project, and it makes it easy to create web sockets, which are basically persistent connections between browsers and the server. To keep the data being sent between devices small (we figure some people might be using it on mobile data), and to make sure the visualiser wasn't erratic and sickening to look at, I knew some data processing had to happen, and it would be best if that didn't happen in the browser. So the path was going to have to look something like this: Mobile device (input) > Web server (processing) > Output device. This was an exciting venture for me, as it meant not only learning a new tool (Socket.IO) but also Node.js, which essentially lets you write a backend completely in JavaScript (JS). And lastly, it let me really practice my JS: previously I had only briefly dabbled in it, and now our entire application is built on it.

So the server takes the raw audio data the input is sending, which at this stage was 5 frequency zones between 20 and 20,000Hz plus the volume level (we dropped the volume level later on). The server averages each zone across the 5 most recent readings and returns those averages to the output. To make sure we weren't sending as much data as we would by essentially streaming live audio, while still keeping it responsive enough to live audio, the server is set to output values only every 0.25s, and the output smoothly transitions between them (I've put a rough sketch of both ends below the video).

https://vimeo.com/362181157

A video of the cross-device frequency data being visualised; you can see the average volume level at the top.
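To make that pipeline concrete, here's a minimal sketch of both ends. None of this is our actual code: the event names ('audioData', 'visualiserData'), the 50ms send rate and the zone-splitting maths are all my stand-ins. The input page grabs the microphone and boils the spectrum down to 5 zone levels:

```javascript
// Input side: a sketch of pulling 5 frequency zone levels out of the
// microphone with the Web Audio API (on mobile this needs a user tap
// before the AudioContext will start).
const socket = io(); // socket.io client, assumed served from the same host

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  setInterval(() => {
    const data = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(data);

    // Collapse the FFT bins into 5 rough zones and average each one.
    const zoneSize = Math.floor(data.length / 5);
    const bands = [];
    for (let z = 0; z < 5; z++) {
      const zone = data.slice(z * zoneSize, (z + 1) * zoneSize);
      bands.push(zone.reduce((a, b) => a + b, 0) / zone.length);
    }
    socket.emit('audioData', bands);
  }, 50); // send more often than the server outputs; it does the smoothing
});
```

And the server side keeps a rolling buffer, averages it, and only pushes to the outputs every 0.25s:

```javascript
// Server side: a minimal sketch, assuming the same made-up event names.
const server = require('http').createServer();
const io = require('socket.io')(server);

const HISTORY = 5;       // average over the 5 most recent readings
const INTERVAL_MS = 250; // only push values to the outputs every 0.25s
const recent = [];       // rolling buffer of readings (from any input)

io.on('connection', (socket) => {
  socket.on('audioData', (bands) => {
    recent.push(bands);
    if (recent.length > HISTORY) recent.shift();
  });
});

// Average each of the 5 zones across the buffer and broadcast the result;
// the output page then tweens between successive broadcasts.
setInterval(() => {
  if (recent.length === 0) return;
  const averaged = recent[0].map((_, zone) =>
    recent.reduce((sum, reading) => sum + reading[zone], 0) / recent.length
  );
  io.emit('visualiserData', averaged);
}, INTERVAL_MS);

server.listen(3000);
```

Doing the smoothing and throttling on the server is what keeps the phones' data usage down and the output steady.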

The next step was to incorporate the models Josh had built, which are designed to represent your typical party-goers. This step was relatively straightforward, since I can see the number of active connections on the server. What you can see in this video is the app randomly selecting models and render positions from lists of predetermined models and positions, then rendering as many models as there are connections (I've sketched the idea below the video). There isn't much more to say about this step.

https://vimeo.com/362181215
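As a rough illustration of what's going on in that video, here's the shape of the output page's logic. MODELS, POSITIONS and the 'connectionCount' event are made-up stand-ins, and renderModel is just a stub where the real drawing would happen:

```javascript
// A sketch of rendering one random model per connected device.
const socket = io(); // socket.io client, assumed loaded on the page

const MODELS = ['dancer', 'drinker', 'waver'];        // stand-ins for Josh's models
const POSITIONS = [[-2, 0, 0], [0, 0, 0], [2, 0, 0]]; // predetermined spots

const pickRandom = (list) => list[Math.floor(Math.random() * list.length)];

function renderModel(model, position) {
  console.log(`render ${model} at ${position}`); // real app draws the model here
}

// The server broadcasts how many devices are connected; render one
// randomly chosen model, at a randomly chosen position, per connection.
socket.on('connectionCount', (count) => {
  for (let i = 0; i < count; i++) {
    renderModel(pickRandom(MODELS), pickRandom(POSITIONS));
  }
});
```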

The next stage definitely proved to be the most challenging. The idea behind the models was that when you were on the input page you would be assigned a model, so you could look at the screen displaying the output and see "yourself" on it. This certainly proved easier said than done, as we were also looking at making any device act as an input or output on a single-page website. After looking into it and trying to think of a way it could be done, we had to scrap the site design we had and make separate pages for mobile and desktop. We settled on making the mobile site exclusively input, as the visualiser doesn't resize very well, and it was easier to take it out completely than to deal with those issues. When it came to assigning a model to each new device, I thought it would be pretty straightforward, but an issue developed where sometimes the model wouldn't render. Looking at the errors that came up, I couldn't for the life of me explain them: the error said it couldn't read the "vertices" value of the model, but when I told it to log that value, it did so no problem. Through some jigging around I managed to get it fixed, though again I am not sure how, and whatever I have done, I haven't gone back to touch it since for fear of breaking it.
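The working version of the assignment roughly follows this shape on the server. Again, the event names and MODELS list here are hypothetical stand-ins, not our actual code:

```javascript
// A sketch of per-device model assignment on the server.
const io = require('socket.io')(3000);

const MODELS = ['dancer', 'drinker', 'waver']; // pool of party-goer models
const assigned = new Map();                    // socket.id -> model name

io.on('connection', (socket) => {
  // Hand out the next unused model from the pool (null once it runs out).
  const model = MODELS[assigned.size] || null;
  assigned.set(socket.id, model);

  socket.emit('yourModel', model);               // the phone shows "you"
  io.emit('addModel', { id: socket.id, model }); // the output renders it

  socket.on('disconnect', () => {
    io.emit('removeModel', socket.id);
    assigned.delete(socket.id);
  });
});
```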

So once that was sorted, the app was pretty much complete and now we are just ironing out small issues. For starters, getting the site hosted on our domain is proving supremely difficult. Something I was unsure about was how it would handle more connected devices than there are models, as I thought it might break the whole thing, but after our test today that didn't prove to be a problem at all. It just doesn't load a model onto the mobile screen, so we need to work out something else that can go on there instead.
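For what it's worth, the mobile side of that behaviour roughly amounts to the sketch below; the event name and showModel are hypothetical stand-ins, and the empty branch is where the replacement content will eventually need to go:

```javascript
// Roughly what the mobile page does today, with hypothetical names.
const socket = io(); // socket.io client

function showModel(model) {
  console.log(`showing ${model}`); // stub for drawing "your" party-goer
}

socket.on('yourModel', (model) => {
  // More devices than models: the server sent null, so the screen just
  // stays empty for now instead of breaking.
  if (model === null) return;
  showModel(model);
});
```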

https://vimeo.com/362181322

I am really stoked with how this project has worked out, and even though I have had to do a lot of the heavy lifting as the only one programming the whole thing, I have really enjoyed learning it all and building it. It has drifted away from our initial intention of being an event visualiser, but we found that making it a party visualiser would be more interesting, as reacting to music is far more dynamic than reacting to general noise. We were also able to test it at a recent house party and it went down really well; people were intrigued by it, which is positive. It definitely showed that it worked in that setting, and someone even posted it on their Instagram, which was a win!