Archive for November, 2011
I found this on the Arduino blog recently: Matt Leggett has been having some fun with wearable pixels. He sewed an alcohol sensor, some LEDs and an Arduino processor into a jacket. The idea is that you breathe into the sensor and the LEDs light up to show how inebriated you are. Too many lit LEDs and your friends should call a taxi for you. Perhaps the Arduino could even make the call automatically. Maybe Moritz Waldemeyer or Vega Wang could incorporate this into their wearable electronics.
I think this could be done without needing to blow into a sensor. After all, too many drinks and you might not remember how! SoberSteering is developing a car steering wheel that senses blood alcohol levels through one's skin. Maybe their sensor could be built into clothing and sense alcohol levels in real time.
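For fun, here is a rough sketch of the kind of logic Leggett's jacket might use to map a sensor reading to lit LEDs. The thresholds, pin counts, and the "call a taxi" rule are entirely my own invention, not details from the project:

```python
def leds_to_light(sensor_reading, num_leds=8, max_reading=1023):
    """Map a raw analog alcohol-sensor reading (0..max_reading,
    a typical 10-bit ADC range) to a number of LEDs to light."""
    reading = max(0, min(sensor_reading, max_reading))
    return round(reading * num_leds / max_reading)

def should_call_taxi(lit_leds, num_leds=8):
    # Hypothetical rule: more than three-quarters of the LEDs lit
    # means it's time for your friends (or the Arduino) to call a cab.
    return lit_leds > num_leds * 3 // 4

lit = leds_to_light(700)
print(lit, should_call_taxi(lit))  # prints: 5 False
```

On a real Arduino this would be a loop around `analogRead()`, but the mapping idea is the same.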
I know of more than a few fellow bloggers who probably wouldn’t wear this kind of clothing. For them, pixels everywhere would mean not going anywhere, by car at least <grin>. Or, at the very least, they might stop getting served earlier in the evening.
In fact, MicroTiles might never have been invented if certain unnamed inventors had been wearing Leggett’s invention!
Chris Harrison of Carnegie Mellon University and Microsoft researchers have been collaborating on using the human body as an input surface. They call this approach “OmniTouch”. Harrison’s earlier project, Skinput, had similar goals (and a more interesting name), but OmniTouch makes some notable advances over it. Read on.
If this looks familiar to you, you may be thinking of Pranav Mistry’s SixthSense project at the MIT Media Lab a while back. OmniTouch has some interesting advances over SixthSense, though.
Mistry’s SixthSense also projected an image on one’s body, but used fiducials (colored markers) on the tips of the user’s fingers. A wearable camera tracked the fiducials, and a computer deduced whether a finger was touching a projected input point.
Harrison’s Skinput used a picoprojector to display an image on a body surface such as a hand or forearm. Bioacoustic techniques using a specially designed armband detected taps on the skin and, with a bit of signal processing, could determine where on the skin the tap occurred. Skinput’s armband could be covered by clothing, so it was a bit less obtrusive than the SixthSense fingertip markers.
OmniTouch enables a wide variety of surfaces to be input devices, not just a body surface. It uses a wearable projector and camera like SixthSense, but doesn’t require SixthSense’s markers on one’s fingertips. OmniTouch uses a depth-sensing camera, similar to Kinect, but capable of shorter focus distances, which increases flexibility. So, for example, one could use a wall or a pad of paper as an interactive surface. Depth sensing allows touch as well as hovering gestures.
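The core idea behind distinguishing a touch from a hover is conceptually simple: compare the fingertip’s depth to the depth of the surface behind it. Here is a toy version of that check; the millimeter thresholds are invented for illustration and are not the values OmniTouch actually uses:

```python
def classify_finger(finger_depth_mm, surface_depth_mm,
                    touch_threshold_mm=10, hover_threshold_mm=80):
    """Classify a fingertip relative to a surface using depth-camera
    readings. Depths are distances from the camera, so a finger in
    front of the surface reads a smaller depth than the surface."""
    gap = surface_depth_mm - finger_depth_mm
    if gap <= touch_threshold_mm:
        return "touch"   # finger effectively on the surface
    if gap <= hover_threshold_mm:
        return "hover"   # finger raised, but close enough to track
    return "none"        # finger too far away to count

print(classify_finger(495, 500))  # finger ~5 mm off the wall -> touch
```

The real system also has to find fingers in the depth image and map them into the projected image’s coordinates, but this gap test is the heart of touch-versus-hover detection.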
The concept is very interesting. Wearing a projector and a depth-sensing camera is clunky, but imagine it: interactive pixels everywhere in front of you. It’s just a proof of concept for now; we need smart companies to make it small, stylish, and usable. Check out the video below from Chris Harrison’s website.
Here are a couple of links if you want more detail: http://chrisharrison.net/projects/skinput/SkinputHarrison.pdf and http://chrisharrison.net/projects/omnitouch/omnitouch.pdf