Live Web midterm: Asymmetrical Poetry
I've been once again inspired by the somewhat isolating experience of connecting via screen, which has somehow distanced us even while bringing us closer together. How did something that promised to pinch the gap of the remote, to show us the other side of the world and beyond in real time, somehow float us apart from the tangibility of connection? A type of "flattening of texture," I suppose.
Yes, the thrill of finding your flock; yes, the opportunity to pursue the bodily-embedded syntax of a conversation with far-away family, friends, and partners. But I also get a feeling of "unreal-ing," perhaps personally exacerbated since virtual connection seems to be my main way of connecting while immersed in school. Funny: we have "interactive" in our name, but I often feel isolated while producing these projects that are supposed to showcase interaction.
This idea of sharing texture as an attempt toward remote intimacy is something I've been exploring a lot lately. The idea is to build a world to share: small snippets of life that can draw someone close by giving them unexpected insights into your day and life. Video has rendered people as 2D representations of themselves; text has removed that syntax entirely, as we have all experienced when a message is badly misinterpreted.
Asymmetrical Poetry is a first and very baby attempt at exploring the delight and depth of texture through audio. The project connects users via sockets in a "daisy chain" chatroom: each user is paired with the person who joined before them. Currently the users are linked anonymously, with no signal that someone else is in the room with them (something I plan to change in the future). Each user is given random bits of poetry (currently lyrics from "Atom Dance" by Björk feat. Anohni) that they are prompted to record, and the recording sends upon the release of the mouse click. While one user is recording, the user on the other side sees a different random line from the same text. If they trade lines long enough, they'll realize they've been reading the same poem to each other all along, as they begin to see lines they heard read before.
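For anyone curious, the daisy-chain pairing can be sketched in plain JavaScript. This is a hypothetical sketch, not the project's actual server code: `makeChain` and the string IDs are my own placeholders, and in the real project the IDs would come from socket.io connection events.

```javascript
// Sketch of "daisy chain" pairing: each new arrival is partnered
// with whoever joined just before them. The first person waits alone.
function makeChain() {
  const partners = {};   // id -> the partner they were chained to
  let lastJoined = null; // most recent arrival

  return {
    // Register a new user; returns their partner's id, or null
    // if they're the first in the chain.
    join(id) {
      if (lastJoined !== null) {
        partners[id] = lastJoined; // pair with the previous joiner
      }
      lastJoined = id;
      return partners[id] ?? null;
    },
    // Look up who a given user is chained to.
    partnerOf(id) {
      return partners[id] ?? null;
    },
  };
}

// Usage: the first user waits alone; each later user links backward.
const chain = makeChain();
chain.join("alice"); // → null (no one before her)
chain.join("bob");   // → "alice"
chain.join("carol"); // → "bob"
```

In the real server, `join` would be called inside socket.io's connection handler, and the returned partner id would be where each recorded audio chunk gets routed.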
It's a little difficult to properly document by myself, since I can't demonstrate the audio send on one laptop (and it's just my voice, so there's no differentiation, ha). Hopefully I'll update by recording with the help of a remote user; maybe we can simultaneously screen record and I can line the footage up appropriately! Another strange type of texture.
Here's a little two-browser-window documentation in the meantime…
Obviously there's a *lot* of room for improvement. I'm learning all the languages sort of at once right now, and concentrating on nailing down the socket stuff before making the appearance "web ready." But making the text legible and constrained to the screen is an easy first tackle, and then I'd like to nail down the UI in a way that lets instruction be delivered a little more poetically. The content itself is also something I'd like to think about more.
Here's the code (minus p5.dom/sound, SSL certificates, and node modules):
And helpful links (also in the code):
https://creative-coding.decontextualize.com/arrays-and-objects/
https://creative-coding.decontextualize.com/text-and-type/
https://www.kirupa.com/html5/picking_random_item_from_array.htm
Audio: https://www.npmjs.com/package/microphone-stream
Audio: https://stackoverflow.com/questions/24874568/live-audio-via-socket-io-1-0
Background video credit: The phase and libration of the Moon for 2011, at hourly intervals: https://svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=3810
Emily & Tushar's helpful room "segmentation" code: https://github.com/asd0999/live_web/blob/2ce002732c72594cbe81bb18c77b24a4fc92612a/2_week/gibber_chat/server.js#L93
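The random-line picking covered in the kirupa link above boils down to very little code. This is a sketch with placeholder lines, not the project's actual lyric text or source:

```javascript
// Picking a random line of poetry from an array.
// These lines are placeholders standing in for the real lyrics.
const lines = [
  "line one of the poem",
  "line two of the poem",
  "line three of the poem",
];

function randomLine(arr) {
  // Math.random() is in [0, 1), so the floored index is always valid.
  return arr[Math.floor(Math.random() * arr.length)];
}
```

Each call returns one element of the array, so both chained users can draw from the same poem while seeing different lines at any given moment.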
Big extra thanks to office hours with Shawn, extra help with Keith, and the couple of user testers I could squeeze in before presentations.