This summer I was invited by Dessislava Desseva to take part in building an interactive object for a collective exhibition in Bci London. We decided to transform the visitors of the gallery into abstract shapes using a Microsoft Kinect 2 camera, and of course my software tool of choice was TouchDesigner.
I already had some experience with that concept. On my last birthday I threw a party where I tracked my guests with a camera and projected them back onto themselves – MFM Unheard. And now I thought that TD patch would be a good fit for the project.
The first thing was to get usable content from the camera. I decided to use the Depth Image to isolate silhouettes. But the human texture had some unwanted parts at the edges and in the background. TouchDesigner makes it easy to take control of content in real time, so I ended up with a Level – Chroma Key – Luma Level TOP network. That way the silhouette was clean enough to use as a texture in 3D rendering.
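The TOP chain itself lives inside TouchDesigner, but the logic it performs boils down to keeping only pixels whose depth sits in the band where a visitor stands, and discarding everything else. Here is a minimal sketch of that idea in plain Python; the depth values and the near/far thresholds are made up for illustration, not Kinect calibration data:

```python
# Sketch of the silhouette isolation the TOP chain performs:
# keep pixels whose depth falls inside a near/far band, zero the rest.
# NEAR/FAR values are illustrative, not real Kinect settings.

NEAR, FAR = 0.5, 2.5  # metres: band where we expect a person to stand

def isolate_silhouette(depth_rows):
    """Return a binary mask: 1 where a pixel is inside the depth band."""
    return [
        [1 if NEAR <= d <= FAR else 0 for d in row]
        for row in depth_rows
    ]

depth = [
    [0.0, 1.2, 1.3, 4.0],  # 0.0 / 4.0 are background, ~1.2 is the person
    [0.0, 1.1, 1.2, 4.0],
]
mask = isolate_silhouette(depth)
```

In the actual network the Chroma Key and Luma Level TOPs then clean up the edge noise this crude threshold leaves behind.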
Next was to generate visual content from the Depth Image data. I looked at some components and decided to displace and translate the human silhouette into particles. I have a lifetime obsession with translating a concrete form into something chaotic and abstract, so this choice was pretty straightforward. For the implementation I used a patch I had tweaked a while ago, based on Vincent Houze's June NY TouchIn project shared on the forum. What it does is generate instances with random size and rotation, so I had good control over the particles and a different look for each instance, which was cool. And instancing is a GPU process, which is good for performance.
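Conceptually, the instancing setup consumes a table of per-instance transforms: one row per particle, each with its own random size and rotation. In TouchDesigner this data would feed a Geometry COMP's instancing parameters; the sketch below just builds that table in plain Python, with made-up ranges and a fixed seed so the layout is reproducible:

```python
# Sketch of the per-instance data an instancing setup consumes:
# one row per particle, with a random uniform scale and Z rotation.
# The value ranges here are illustrative, not from the actual patch.
import random

def make_instances(positions, seed=0):
    rng = random.Random(seed)  # seeded so the same layout comes back each run
    instances = []
    for x, y in positions:
        instances.append({
            "tx": x, "ty": y,                 # where the silhouette sample sits
            "scale": rng.uniform(0.5, 1.5),   # random size per instance
            "rz": rng.uniform(0.0, 360.0),    # random rotation per instance
        })
    return instances

silhouette_points = [(0.1, 0.2), (0.3, 0.2), (0.5, 0.6)]
rows = make_instances(silhouette_points)
```

The GPU then draws one copy of the source geometry per row, which is why thousands of particles stay cheap compared to simulating each one on the CPU.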
What I also did in this network was to feed different SOP geometry into the instancing, to get a more interesting and changing look. And I added the possibility for users to switch the way they visualize themselves with a Kinect gesture – waving the right hand up, just like saying "Hello".
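The gesture switch comes down to watching the right-hand joint against the head joint and cycling the geometry mode once each time the hand rises above it. A rising-edge check stops a held pose from cycling every frame. The sketch below is a standalone illustration with hypothetical mode names; in TouchDesigner the joint heights would come from the Kinect CHOP channels:

```python
# Sketch of the gesture-driven mode switch: when the right hand rises
# above the head (a "hello" wave), cycle to the next geometry form.
# Mode names and joint inputs are illustrative, not from the patch.

MODES = ["spheres", "boxes", "lines"]  # hypothetical geometry forms

class ModeSwitcher:
    def __init__(self):
        self.mode_index = 0
        self.hand_was_up = False  # edge flag: a held pose only fires once

    def update(self, hand_y, head_y):
        """Call once per frame with joint heights; returns the active mode."""
        hand_up = hand_y > head_y
        if hand_up and not self.hand_was_up:  # trigger only on the transition
            self.mode_index = (self.mode_index + 1) % len(MODES)
        self.hand_was_up = hand_up
        return MODES[self.mode_index]
```

So a visitor waving twice walks through two geometry forms, while simply keeping the hand raised changes nothing.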
So at the end of the night I had these screenshots, created by translating my Depth Image human texture into different instanced geometry forms:
All the rest of this project was done by Dessi. Her concept was to put the Kinect into the body of a cute box-shaped robot named See_MO, who can also light up his eyes via a sensor once there is a human in front of him. In the words of Dessi:
See_MO is a one-of-a-kind mecha-being capable of perceiving reality in a unique way. See_MO can sense the presence of humans and translate it into its own visual language, recreating their physical presence in a digital way.
So there goes our exhibition, proudly named HUMAN MORPHS – Interdisciplinary Synergy Show! It was huge fun to be part of it! And here is a video of what people did with their silhouettes: