
Projection Mapping

The vision that has been consistent throughout this odyssey is glowing, soft fiber structures. This vision has haunted me since 2006. Since then, I’ve looked into numerous ways to accomplish it.

I’ve worked with all sorts of hardware for previous projects, from incandescent bulbs to LEDs. While I have made other pieces that utilize that hardware (which has come a loooong way in recent years), such as Transmissions From The System and Memory Capsules, among others, it’s not the look that I am seeking. I am not looking for a glow from within the fibers. I do not want visible individual lights or hardware. I am seeking a field of light, a cloud or dispersal of light. I am after a light that has minimal visible supports (i.e., hardware, wires, etc.), and is lightweight and soft (i.e., flexible). This rules out LEDs (even the sewable and flexible ones). At the scale that I envision for this piece, using hardware would require scads of wires and connections, signifying either a lot of labor or manufacturing of some kind. Both take away from the magic and etherealness that I am after.

I am going for an organic look, similar to what sailors refer to as milky seas, caused by the luminous bacterium Vibrio harveyi living in association with microalgal blooms. The defined, mechanical nature of hardware never fit with the conceptual vision. Projection mapping is closer. With projection mapping, you can project an image onto a surface and have the image appear only on that surface. The projected image will not spill out into the surrounding space or the room. You define where the projection will go and confine it to that area. The advent of depth cameras accelerated this field. Depth cameras take a scan of a space (often using infrared light/projections), noting not only x and y coordinates, but also z coordinates. They can also do this in real time, which I use to detect where people are in the space at any given moment. With a depth camera scan, I can isolate where the graphics are to appear.
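To make that concrete, here is a minimal Processing-style sketch of the idea: keep only the pixels whose depth falls inside a chosen band, and use that as the area where graphics are allowed to appear. The getDepthFrame() call and the resolution and range numbers are placeholders, not the actual camera library or values used in the piece.

```
// Sketch of isolating a region by depth. getDepthFrame() is a placeholder for
// whatever the depth-camera library provides (e.g. raw depth in millimeters);
// swap in the real call for your hardware.

int camW = 512;      // depth image width  (depends on the camera)
int camH = 424;      // depth image height (depends on the camera)
int nearMM = 1200;   // keep only pixels between roughly 4 ft ...
int farMM  = 3500;   // ... and roughly 11.5 ft from the camera

void setup() {
  size(512, 424);
}

void draw() {
  int[] depth = getDepthFrame();   // placeholder: raw depth values in mm
  loadPixels();
  for (int i = 0; i < camW * camH; i++) {
    // White where something sits inside the depth band, black elsewhere.
    pixels[i] = (depth[i] > nearMM && depth[i] < farMM) ? color(255) : color(0);
  }
  updatePixels();
}

// Stand-in so the sketch compiles on its own: returns a fake, flat depth frame.
int[] getDepthFrame() {
  int[] d = new int[camW * camH];
  for (int i = 0; i < d.length; i++) d[i] = 2000;
  return d;
}
```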

I took a projection mapping workshop through CHiKA several years ago using Madmapper and modul8. I have also worked with VPT and used it with my students several times. I started pulling together the projection mapping with VPT and a rough setup in my studio using the knitted prototypes. After playing around with VPT for a bit, it seemed to have a few extra steps and to be quite processor heavy on my laptop. I am concerned about processor intensity, as the piece will have many processes running at once (interaction software, graphical animation, sound, network traffic display…). I switched to Madmapper.

Even though I have worked with Madmapper before, I had to sit with it for quite a bit to translate what I wanted to do into how the software works. This took a bit longer than I expected, but it was a fun exercise in being thrown into a new environment and getting things to fly.

Slowly getting there

So after a few days of translating, I finally got this image. I was excited, despite it still being far from the finished piece. Baby steps. It’s critical to have mini steps along the way to troubleshoot and to get things right.

A simple Processing sketch which changes color over time.

I threw together a super simple app to test things before plugging in the real content. I made a Processing sketch that changes color over time. A very simple animation. I was able to connect Processing with Madmapper and to projection map the output of Processing onto the fibers (image above). Next step: connecting the latest Processing graphical animation code, then connecting the blob tracking code with Max.
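A sketch along those lines can be tiny. The version below is a rough reconstruction rather than the actual test app, and it assumes the Syphon route for handing Processing frames to Madmapper (the server name and canvas size are made up):

```
// A color-over-time test sketch, shared to Madmapper via the Syphon library.
import codeanticode.syphon.*;

SyphonServer server;

void setup() {
  size(640, 480, P3D);               // Syphon needs an OpenGL renderer
  colorMode(HSB, 360, 100, 100);
  server = new SyphonServer(this, "Luciferins Test");  // server name is made up
}

void draw() {
  float h = (frameCount * 0.2f) % 360;   // slowly cycle the hue over time
  background(h, 70, 100);
  server.sendScreen();                   // hand the frame to Madmapper
}
```

In Madmapper, the sketch then shows up as a Syphon source that can be assigned to the mapped surfaces like any other input.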

At this point, I realized that I had to take the individual masks and composite them into one image to invert, since the graphics are localized. This was a fun bump to work through, considering my absorption in projection mapping over the last few days. It also feels fabulous to translate one’s ideas into actions and results. It is super exciting to see this project gel more and more. It’s been such a long time coming. But there are always problems that unveil themselves along the way.
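Roughly, that compositing step could look like this in Processing: stack the per-fiber masks into one image, then invert it. The mask file names and canvas size below are placeholders.

```
// Composite per-fiber masks into one image, then invert the result.
PGraphics composite;
PImage[] masks;

void setup() {
  size(1280, 720);
  masks = new PImage[3];
  for (int i = 0; i < masks.length; i++) {
    masks[i] = loadImage("mask" + i + ".png");   // placeholder file names
  }
  composite = createGraphics(width, height);
}

void draw() {
  composite.beginDraw();
  composite.background(0);
  composite.blendMode(ADD);          // stack the individual masks into one image
  for (PImage m : masks) {
    composite.image(m, 0, 0, width, height);
  }
  composite.endDraw();
  composite.filter(INVERT);          // flip it: the inverse marks what to knock out
  image(composite, 0, 0);
}
```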

After getting the masking completed properly, I realized that I had to tweak some aspects of the graphics and the interaction. I had to un-mirror the projection, a result of not testing this stuff on my laptop while sitting in front of the camera. I also had to scale the output content so that it fell properly on the fibers, with reference to the space. This was important so as not to stretch the content, as well as to not generate content that wasn’t being used (unnecessary processor cycles). After a bit of head scratching, I sorted all of this out.
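For the un-mirroring and scaling, a rough sketch of the transforms involved (the content size and drawContent() are stand-ins for the real animation code):

```
// Un-mirror and scale the generated content before it goes out to the projector.
int contentW = 640;    // size the graphics are generated at (assumed)
int contentH = 360;

void setup() {
  size(1280, 720);
}

void draw() {
  background(0);
  pushMatrix();
  // Flip horizontally so the projection matches what the camera sees.
  translate(width, 0);
  scale(-1, 1);
  // Scale uniformly so the content fits the output without being stretched.
  float s = min((float) width / contentW, (float) height / contentH);
  scale(s);
  drawContent();
  popMatrix();
}

void drawContent() {
  // Placeholder for the actual graphical animation.
  ellipse(contentW / 2, contentH / 2, 100, 100);
}
```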

Next up: Connecting the blob tracking code through Max/MSP with the prototype.
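The post does not spell out the plumbing between Max and Processing, but OSC is one common route; a rough receiving-end sketch using the oscP5 library might look like this (the /blob address pattern and the port number are assumptions):

```
// One possible way to receive blob coordinates from Max/MSP: OSC via oscP5.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
float blobX, blobY;   // latest tracked position, in the camera's coordinate space

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 12000);   // listen on port 12000 (assumed)
}

void draw() {
  background(0);
  ellipse(blobX, blobY, 20, 20);    // quick visual check of the incoming data
}

// Called by oscP5 whenever an OSC message arrives.
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/blob")) {
    blobX = msg.get(0).floatValue();
    blobY = msg.get(1).floatValue();
  }
}
```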

As I was playing with the blob tracking and the prototype, I realized many things about the placement of the depth camera with reference to the piece. The depth camera has a range of about 4–11.5 feet. As a result, it needs to be as close to the installation as possible (while the projector needs to be rather far away). It’s good to have a simple depth sketch running while trying to figure out the placement of the camera, so I can see what the camera sees as I move it around the space.
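A bare-bones depth viewer for that purpose could be as simple as the sketch below. As with the earlier example, getDepthFrame() is a placeholder for the real camera call, and the range numbers are approximate.

```
// Quick depth viewer for aiming the camera: near = bright, far = dark.
int camW = 512, camH = 424;          // depth resolution, depends on the camera
int nearMM = 1200, farMM = 3500;     // roughly 4 to 11.5 feet

void setup() {
  size(512, 424);
}

void draw() {
  int[] depth = getDepthFrame();
  loadPixels();
  for (int i = 0; i < camW * camH; i++) {
    float g = map(constrain(depth[i], nearMM, farMM), nearMM, farMM, 255, 0);
    pixels[i] = color(g);
  }
  updatePixels();
}

// Stand-in so the sketch runs on its own.
int[] getDepthFrame() {
  int[] d = new int[camW * camH];
  for (int i = 0; i < d.length; i++) d[i] = 2000;
  return d;
}
```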

After finding a good (?) spot for it (zip tied and stapled to the ceiling), I had to change some settings in my code to map properly to the space, given the camera’s position. Once I had everything in place, I could see how it ran with all these aspects together (fibers, graphics, projection mapping, interaction). It’s interesting and soooo important to set things up at life scale and to try things out. You realize all these little things that perhaps you overlooked before, or perhaps your understanding was *just* slightly off. All this time programming the graphical animations and the interaction, I was working with a laptop screen and a hand or body moving over that entire space. With the prototype setup, the fibers hang from the ceiling and people walk amongst them. As such, the people are anchored (so to speak) to the floor. With the blob tracking following people walking around, the blobs land on the lower half of the fibers, which places the graphics solely on the lower half. The upper 2/3 of the fibers were then in the dark with no animations, because (living 😉 ) people do not float in the air (and thus do not trigger the upper areas of the fibers to illuminate).

As a result, I needed to figure out a different location for the depth camera (i.e., have a much larger space height-wise) so I could place the depth camera far above the tops of the fiber structures and hang the structures well below the camera. This would give the depth camera a bird’s-eye view of the space, which is the approach I was envisioning as I was programming the piece. My current setup does not allow for this, as I’m hanging the fibers from the ceiling, not the walls. I’ll have to wait a bit to try this. My current studio space doesn’t have the height to allow for such a setup. *sigh*

Alternatively, with programming, I can control when and where the graphics appear. I could have the graphics spawn up the fibers from the trigger location. I’d have to do this regardless of the location of the depth camera with reference to the fibers. That’s one of the cool things about programming: you can control anything, which helps greatly in such predicaments. So I did some tweaking with my code to have the growths move up and down (versus 360 degrees around the trigger location). This also saves processor cycles. Why generate graphics in a location that isn’t going to be visualized (i.e., it would be outside of the masked areas)?
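As a rough illustration of the up/down idea (not the actual growth code), here is a toy Processing sketch where growths spawn at a trigger point and climb straight up instead of radiating outward; triggerX and triggerY stand in for the tracked blob position.

```
// Growths spawn at the trigger location and move up the fiber, not 360 degrees out.
ArrayList<PVector> growths = new ArrayList<PVector>();
float triggerX, triggerY;   // placeholder for the real tracked location

void setup() {
  size(640, 480);
  triggerX = width / 2;
  triggerY = height * 0.75f;   // people are anchored to the floor, so trigger low
}

void draw() {
  background(0);
  // Occasionally spawn a new growth at the trigger location.
  if (frameCount % 10 == 0) {
    growths.add(new PVector(triggerX + random(-5, 5), triggerY));
  }
  // Move each growth straight up the fiber and draw it.
  noStroke();
  fill(180, 255, 220);
  for (int i = growths.size() - 1; i >= 0; i--) {
    PVector g = growths.get(i);
    g.y -= 2;                           // climb upward only
    ellipse(g.x, g.y, 6, 6);
    if (g.y < 0) growths.remove(i);     // off the top: recycle
  }
}
```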

I also realized that I would need to revise the communication between the blob location and the associated location for the projection. I hadn’t anticipated that aspect, nor the scaling and communication it requires. This took quite a bit of head scratching and playing to get just right.
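The scaling part boils down to remapping between two coordinate spaces. A minimal sketch with Processing’s map(), using assumed resolutions for the camera and the projection canvas:

```
// Remap a blob position from depth-camera coordinates to the projection canvas.
int camW = 512, camH = 424;      // depth camera space (assumed)
int projW = 1280, projH = 720;   // projection canvas (assumed)

void setup() {
  size(1280, 720);
}

void draw() {
  background(0);
  // Pretend blob position in camera coordinates; swap in the tracked values.
  float camX = mouseX * camW / (float) width;
  float camY = mouseY * camH / (float) height;

  // Remap into the projection canvas before handing it to the graphics code.
  float projX = map(camX, 0, camW, 0, projW);
  float projY = map(camY, 0, camH, 0, projH);

  ellipse(projX, projY, 30, 30);
}
```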

I’m super happy to see this working at this stage. I’ll be even more excited to see it on the latest version of the fiber structures that I am working on. More on that soon.

After playing with it for some time while tweaking it, I’ve come to several realizations. The gist is that I need to either revise/tweak my tracking method and/or try a slightly different setup where the fibers are hung from the wall (i.e., well below the depth camera).

With the current setup, a couple of the fiber structures are slightly occluding each other. A different setup of the projector and the depth camera would ease this. Also, I’m currently sending a hard location point to the graphics code, telling the graphics where they need to appear. A better technique is to build further on what I currently have and generate zones that trigger the particular structures. Each fiber will govern a particular zone around it. If someone is in that zone, that particular fiber will illuminate. This makes more sense, as it will make the piece not only more responsive, but also more in line conceptually with the bacterial species that light up the oceans’ waters at night.
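A toy version of that zone idea, with made-up fiber positions and zone radius: each fiber checks whether a tracked blob falls within its radius, and only then lights up.

```
// Zone-based triggering: each fiber owns a circular zone and illuminates only
// when a tracked blob is inside it. Positions and radius are made-up values.
PVector[] fibers = {
  new PVector(200, 240),
  new PVector(320, 240),
  new PVector(440, 240)
};
float zoneRadius = 80;   // assumed zone size around each fiber

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  // Stand-in for the tracked blob position.
  PVector blob = new PVector(mouseX, mouseY);

  for (PVector f : fibers) {
    boolean active = dist(blob.x, blob.y, f.x, f.y) < zoneRadius;
    noFill();
    stroke(active ? color(180, 255, 220) : color(60));
    ellipse(f.x, f.y, zoneRadius * 2, zoneRadius * 2);   // draw the zone
    if (active) {
      noStroke();
      fill(180, 255, 220);   // this fiber would illuminate
      ellipse(f.x, f.y, 20, 20);
    }
  }
}
```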

To be continued.