So after attending the showcase planning meetings we realised that we had very little freedom when it came to advertising the showcase, as we weren't allowed to do any branding around it; it is already set as "The AUT Creative Technologies Showcase". On top of that, the committee definitely tried to get a foot into the project, so we had to be careful with what we did. Then we had the prototyping session in class, where we chose to prototype the space, as that was something we had to define and it would in turn define our audience. I won't repeat what is already covered under the prototyping section of this Notion, but we settled on the area that everyone mingles in before entering the BCT awards ceremony. We also realised that at this stage there was no point in "advertising" the showcase, because everyone interested in it is already attending. So our focus shifted to wanting an interactive piece that people at the event could engage with. From this we understood that any interaction had to be passive, as people typically have food and drink in their hands. We started looking at computer vision and similar techniques to essentially analyse the space around the installation and react to it.
Josh and I had a conversation about how it would be cool if what we made could have an application outside of the showcase environment, so we could use it as an effective portfolio piece. Then we realised that instead of making something that showcases the BCT, we should make something that showcases what the three of us have learned to love here at the BCT, which for me is web development. In the same conversation we realised that although we had earlier discounted sound as an output (a crowded space doesn't need more noise added to the equation), we had never considered it as an input. Sound can be a passive input too: it can pick up the "buzz" of a space and doesn't require your full attention to interact with.
Using these ideas we came up with the "vibeiliser" (name pending), a web application that maps the "vibe" of a room. It could be used in any social setting, from Sunday brunch to your Friday night house party and even the Creative Technologies awards. Because it is built on the web (which covers my area of interest), it can be accessed by anyone, anywhere, so they can see how much the party is cranking before they arrive. We also intend to support multiple inputs (people's phones) all filtering into one output, so the web app will let you either use your phone as an input or view the output.
The first thing to try was to see what sort of data we could pick up from the audio, as volume alone doesn't give us much scope. I came across p5.js, a JavaScript library that essentially puts the program Processing (which we all dabbled in during first year) on the web. Its sound addon handles audio input and output, and one of the things it lets you get is the volume of individual frequencies across 20-20,000 Hz (the range of human hearing). Unfortunately I didn't document the first test I made, but it was just eight circles that changed size in relation to the volume of eight different frequency zones. With a house party coming up, we decided to make a more visually pleasing version and test it in one of the environments it is designed for. The video from this test can be viewed in the prototype section of Notion. Each point on the eight-pointed star relates to a 2,000 Hz zone of sound, and the background shifts from black to a dark red based on the volume.
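The underlying approach for both the circle test and the star is the same: run an FFT over the microphone input, then read the energy in each frequency band. A minimal p5.js sketch of that idea is below; the band boundaries, sizes, and drawing details are my own placeholders rather than the exact code from either prototype.

```javascript
// Minimal sketch of the frequency-band idea: eight circles, each sized by the
// energy in one slice of the 20 Hz - 20 kHz range. Requires p5.js plus the
// p5.sound addon. Band widths and drawing details are illustrative only.
let mic, fft;
const BANDS = 8;
const LOW = 20, HIGH = 20000;

function setup() {
  createCanvas(800, 400);
  mic = new p5.AudioIn();   // use the device microphone as the sound source
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}

function draw() {
  background(0);
  fft.analyze();            // must be called each frame before getEnergy()
  const step = (HIGH - LOW) / BANDS;
  for (let i = 0; i < BANDS; i++) {
    // getEnergy(lowFreq, highFreq) returns 0-255 for that frequency range
    const energy = fft.getEnergy(LOW + i * step, LOW + (i + 1) * step);
    const diameter = map(energy, 0, 255, 10, height * 0.8);
    noStroke();
    fill(255);
    circle((i + 0.5) * (width / BANDS), height / 2, diameter);
  }
}
```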
The bonus was that because it runs on the web, we just used the host's laptop and it didn't require any setting up beyond going to the link. I made a couple of observations from that night and from testing it myself. First, most sounds we make, especially when we talk, have a heavy presence in the 20-2,000 Hz range, so that peak was always quite high. Second, it works really well for music, as you can almost "see" the song. In terms of measuring the "vibe", though, it doesn't do such a good job, because it is too general. In future designs I will rejig the frequency zones so the levels are more evenly balanced.
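One way I'm thinking of rejigging the zones is to space the band edges logarithmically instead of linearly, so the low end (where voices sit) gets split across several bands rather than all piling into the first one. A quick sketch of the idea; the band count and range are just placeholders.

```javascript
// Possible rebalancing: logarithmically spaced band edges, so low frequencies
// (where speech energy clusters) are spread over more bands. The edges can
// then be fed into fft.getEnergy(edges[i], edges[i + 1]) instead of the
// equal-width zones used in the first test.
function logBandEdges(low, high, bands) {
  const edges = [];
  const ratio = Math.pow(high / low, 1 / bands);
  for (let i = 0; i <= bands; i++) {
    edges.push(low * Math.pow(ratio, i));
  }
  return edges;
}

// e.g. logBandEdges(20, 20000, 8) gives roughly
// [20, 47, 112, 267, 632, 1500, 3557, 8434, 20000]
// so the 20-2,000 Hz "talking" region now spans five bands instead of one.
```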
At this stage the current idea is to have a grid "floor" with certain points mapped to frequency zones, rising and falling based on the levels, so the result is a constantly shifting landscape that is a bit easier to follow than a rotating star. We are also going to have some 3D models appear based on how many devices are feeding audio into the visualisation (at this stage we are thinking of iced animals, to portray party animals in a way that is also child friendly). For more on this, see the prototype folder and UX/UI. For me, however, the next step is to learn node.js and figure out how to have multiple cross-device inputs feeding into one output. It will be a challenge, but that's half the fun.
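To make that next step concrete for myself, the rough shape I have in mind is a small Node.js relay: any number of phones send their current level to the server, and the single "output" page receives the combined stream. The sketch below assumes Express and socket.io as the stack, and the event names, averaging, and broadcast rate are all placeholders I'd still need to test.

```javascript
// Sketch of a multi-input relay (assumed stack: Express + socket.io).
// Phones acting as inputs emit a "level" value; every connected page
// receives a combined "levels" payload a few times a second.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.use(express.static('public')); // serves the input and output p5.js pages

const latestLevels = {}; // socket id -> most recent level from that phone

io.on('connection', (socket) => {
  // an input phone sends its current level a few times a second
  socket.on('level', (value) => {
    latestLevels[socket.id] = value;
  });

  socket.on('disconnect', () => {
    delete latestLevels[socket.id];
  });
});

// broadcast the combined picture to every connected page ~20 times a second
setInterval(() => {
  io.emit('levels', {
    inputs: Object.keys(latestLevels).length, // could drive how many party animals appear
    levels: Object.values(latestLevels),
  });
}, 50);

server.listen(3000, () => console.log('vibeiliser relay on http://localhost:3000'));
```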