A little over two years ago, I wrote about Digital light… showing the way. The basic idea was that light could be modulated to provide intelligent, personalized wayfinding. In my concept, street lighting would be used, and I included an animation of how a jogger could use it. “Talldrinks” has come up with a very similar idea; the difference is that, in his concept, each person would have their own source of digital light. He’s even built a simple prototype for jogging, using a picoprojector. Even better, he’s provided instructions on how to make one for yourself.
Fundamentally, we need to understand that a projector is nothing more than a light source: a light source that can be modulated. That makes it one good way of creating digital light, something that has been pointed out many times on this site.
Just projection mapping, you say? This is so much more than projection mapping; it’s something that will make practical differences in everyone’s lives. For some examples, you might want to look at the posts listed below. Some of the applications will seem mundane, and that’s exactly the point: digital light will be part of our everyday lives.
We need to stop equating projectors with screens — “screens are prison cells for pixels,” as Natan Linder says. Once people realize that, a huge number of opportunities open up.
It’s very encouraging to see more and more people experimenting with this. I wonder when mainstream lighting manufacturers will wake up to the real potential of digital light and turn it into practical products. Or maybe it will take some unknown startup on Kickstarter to finally get the ball rolling.
Check out the video below to see the Talldrinks prototype in action. His website has more details: http://talldrinks.com/?p=329. You can learn how to make your own on his Instructables page: http://www.instructables.com/id/Ground-Projected-Information-Display-for-night-jo/
(as is sometimes the case, our friends at the DailyDOOH also published this article in a slightly different form)
In a presentation I gave at the last Thought Leadership Summit, I analyzed what the ‘perfect pixel’ size should be for interactivity with large-area, up-close displays. I included a scenario where pixels would be on the floor, which raised a few eyebrows. I’ve been writing about ‘pixels everywhere’ for quite some time, but apparently some people think that floors are somewhere pixels ought NOT to be.
However, some researchers at the Hasso Plattner Institute in Germany think floors are perfectly good places for pixels. They’ve demonstrated an interactive floor project called GravitySpace, targeted at gaming and pictured here.
Chris Harrison of Carnegie Mellon University and Microsoft researchers have been collaborating on using the human body as an input surface. They call this approach “OmniTouch”. Harrison’s earlier project, called ‘Skinput’, had similar goals (and a more interesting name), but OmniTouch advances the idea in several ways. Read on.
If this looks familiar to you, you may be thinking of Pranav Mistry’s SixthSense project at the MIT Media Lab a while back. OmniTouch has some interesting advances over SixthSense, though.
Mistry’s SixthSense also projected an image on one’s body, but used fiducials (colored markers) on the tips of the user’s fingers. A wearable camera tracked the fiducials, and a computer deduced whether a finger was touching a projected input point.
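To give a rough idea of what that fiducial tracking involves — this is an illustrative sketch, not Mistry’s actual pipeline, and the marker color and tolerance are made-up values — locating a colored marker in a camera frame can be as simple as thresholding on color and taking the centroid of the matching pixels:

```python
import numpy as np

def find_marker(rgb, target, tol=30):
    """Return the (row, col) centroid of pixels close to `target` color, or None."""
    diff = np.abs(rgb.astype(int) - np.array(target, dtype=int))
    mask = (diff <= tol).all(axis=-1)          # pixels matching the marker color
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Toy frame: black image with a small red "fiducial" patch.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:6, 7:9] = (255, 0, 0)                  # red marker at rows 4-5, cols 7-8
print(find_marker(frame, (255, 0, 0)))         # (4.5, 7.5)
```

A real system would track the centroid frame to frame and map it into the projector’s coordinate space, but the core idea — color segmentation plus centroid — is this simple.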
Harrison’s Skinput used a picoprojector to display an image on a body surface such as a hand or forearm. Bioacoustic techniques using a specially designed armband detected taps on the skin and, with a bit of signal processing, could determine where on the skin the tap occurred. Skinput’s armband could be covered by clothing, so it was a bit less obtrusive than SixthSense’s fingertip markers.
OmniTouch enables a wide variety of surfaces to be input devices, not just body surfaces. It uses a wearable projector and camera like SixthSense, but doesn’t require SixthSense’s markers on one’s fingertips. OmniTouch uses a depth-sensing camera, similar to the Kinect but capable of shorter focus distances, which increases flexibility. So, for example, one could use a wall or a pad of paper as an interactive surface. Depth sensing allows hovering gestures to be detected as well as touch.
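To make the depth idea concrete — a hedged sketch of a common depth-camera touch technique, not the actual OmniTouch implementation, with the millimeter thresholds chosen arbitrarily — one can compare each depth pixel against a stored model of the background surface and flag pixels sitting just in front of it:

```python
import numpy as np

def detect_touch(depth_mm, background_mm, near=5, far=25):
    """Flag pixels where something sits just above the surface.

    depth_mm / background_mm: 2-D arrays of depth readings in millimeters.
    A pixel is a touch candidate when it lies between `near` and `far`
    millimeters in front of the stored background surface.
    """
    height = background_mm - depth_mm          # distance in front of the surface
    return (height >= near) & (height <= far)  # boolean touch mask

# Toy example: flat surface 800 mm away, one "fingertip" 10 mm above it.
background = np.full((4, 4), 800.0)
frame = background.copy()
frame[2, 2] = 790.0                            # fingertip 10 mm off the surface
mask = detect_touch(frame, background)
print(bool(mask[2, 2]), int(mask.sum()))       # True 1
```

This also shows why hovering falls out for free: a fingertip farther than `far` from the surface is simply classified as hovering instead of touching.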
The concept is very interesting: interactive pixels everywhere in front of you. Wearing a projector and a depth-sensing camera is clunky, but this is just a proof of concept; now we need smart companies to make it small, stylish, and usable. Check out the video below from Chris Harrison’s website.
Here are a couple of links if you want more detail: http://chrisharrison.net/projects/skinput/SkinputHarrison.pdf and http://chrisharrison.net/projects/omnitouch/omnitouch.pdf
One of my earliest posts (way back in June) was Seeing the (digital) light. In it, I mentioned that one of my early ideas was to use picoprojectors as digital light sources for interactive, responsive desk lamps and room lighting.
Other people have been thinking about responsive light, and it should be no surprise that one of those people is at the MIT Media Lab. Natan Linder has a project called LuminAR that uses a picoprojector as a digital bulb. He combines that with a camera system and a robotic arm, and cool things happen: gesture-based interaction, lighting, pixel modulation… digital light!
Watch his video below and see for yourself. It’s not pretty (it’s a proof of concept, after all), but I think it’s beautiful!
The transition to Digital Light won’t be due to one technology alone. Instead, both projection technologies and direct-view technologies (like LED, OLED, LCD, electroluminescent, e-Ink, etc.) will be used. But for Digital Light to become truly commonplace, products will have to be inexpensive, long-lasting and energy-efficient.
That’s why I am excited about this press release from the Institute of Materials Research and Engineering: http://www.imre.a-star.edu.sg/fckeditor/uploadfiles/Press%20Release_IMRE%20record%20breaking%20blue%20emitters%282%29.pdf