OLED Transparent Contact Lenses
We met with a local inventor yesterday who is working on OLED technology; when his proof of concept is complete, we will do a blog video. Contact lens displays are a little further out than products like transparent, unidirectional OLED screens (think of an OLED window), but they are on the way.
And the article:
Personal contact lens displays: The transparent OLED done one better
May. 14, 2009 (2:39 pm) By: Rick Hodgin
[Image] Personal contact lens display: http://www.geek.com/wp-content/uploa...5/the_eye1.jpg
If you’ve ever dreamed of having vision like Star Trek’s Geordi La Forge, seeing things beyond what’s actually there in the real world, even zooming in on far-off objects… well then you may need therapy, but apart from that, an emerging field of science may soon produce contact lens displays that augment the real-world experience.
Imagine an integrated, autonomous system designed to augment your visual experience with information relevant to your personal surroundings and immediate circumstances. Such a system would allow you to have access to information beyond your personal abilities, by providing reminders or cues that may not have otherwise occurred to you.
For example, the visual cues could provide real-time GPS navigation: rather than constantly glancing over at your TomTom or Garmin, the directions would simply be there, with arrows superimposed over what you’re naturally seeing, even highlighting the exact path the car needs to take. Or suppose you’re at a dinner party and you approach someone you met at a business meeting last year. You can’t quite remember his name, but your heads-up display, through some form of camera-based face recognition and access to a database, pulls up not only his name but his business line, family, and recent news that may be pertinent. You see all of this in real time, but he just sees you. And when you say, “Hi, Bill! How are Margie and the kids? And did that deal with XYZ Corp ever go through?” he’ll be impressed beyond words.
The truth is he probably won’t be, because when these devices are ready for such an application it’s probable he’ll also have one. And then I suppose it will come down to the geek level of how comprehensive each other’s database is: “Hi, William H. Tucker, III. How are Margaret Eleanor and your two kids, Christian Joseph, 11 and Amy Elizabeth, 8? And did that refinancing deal with XYZ Corp last October ever go through — I believe you were speaking with Danny Thompson?” To which he replies, “You know, Rick C. Hodgin, I’ve been following your articles on Geek.com in recent weeks … what’s up with all these sidelong allegories?” Okay, point taken.
At the University of Washington (UW), and at several other research centers around the world, the technology already exists to embed tiny light sources within a contact lens. These were not enabled in the device linked below (they did not illuminate, because no external power source was hooked up), but they were physically present on the lens. Researchers believe any light the lens emitted at this stage would register on the eye and affect vision, but it would not be in focus or in any way usable.
In short, the technology necessary to create images or “data portals” as described above is many years away. In fact, the technology necessary to make this kind of image appear at all is also quite distant.
Dr. Babak Amir Parviz, an assistant professor of electrical engineering at UW, said:
Looking through a completed lens, you would see what the display is generating superimposed on the world outside. This is a very small step toward that goal, but I think it’s extremely promising. People may find all sorts of applications for it that we have not thought about. Our goal is to demonstrate the basic technology and make sure it works and that it’s safe.
In 2008, the device was named one of Time Magazine’s Best Inventions of the Year, and most believe it shows amazing promise as the field of nanotechnology continues to grow and research into these technologies increases. When tiny, flexible on-lens solar cells can draw power from the light in the room, or when some tiny power supply can be built that squeezes minute reservoirs of fluid past nano-turbine generators using the eye’s natural blinking motion, the technology will become approachable for real-world applications.
Until then, there is research like NASA’s JORDY (Joint Optical Reflective Display), a real-world device that noticeably augments vision for those afflicted with macular degeneration, glaucoma, diabetic retinopathy, and similar conditions. JORDY allows real-world zooming and adjustment of contrast, brightness, and color to compensate for the way the eye is seeing images.
As of the last report Dr. Parviz published on the technology (from January 2008), he had tested the lens only on rabbits, and only for 20 minutes at a time (see the purple image above). The rabbits showed no ill effects from “wearing” the device, and his research is continuing.
His research was funded by the National Science Foundation, and a Technology Gap Innovation Fund from the University of Washington in Seattle.
See the original article, published on EurekAlert! in January 2008.
When I met and spent the morning with Dr. Jerry Bautista to discuss Intel’s prototype Terascale project (back in late 2007), one of the potential uses he described for Terascale was this very application: enough computing power to present the user, via some kind of display, with an overlay on the real world. He imagined the device mathematically compensating for the distance between the display and the user’s eyes, correcting the image so that it exactly overlays the real world.
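That distance compensation is, at its core, perspective geometry. Here is a minimal sketch, assuming a flat display held perpendicular to the line of sight; the function name and the numbers are hypothetical illustrations, not anything from a Terascale API. It finds where a point must be drawn on the glass so that it lines up with the real-world object behind it.

```python
# Hypothetical sketch: where must a pixel be drawn on a transparent
# display so it lines up with a real-world point behind it?
# Model: the eye is at the origin, looking down the +z axis; the
# display is a flat sheet of glass at distance `display_z` metres.

def overlay_point(world_xyz, display_z):
    """Project a world point onto the display plane along the
    eye-to-point sight line (a simple pinhole projection)."""
    x, y, z = world_xyz
    if z <= display_z:
        raise ValueError("point must lie beyond the display")
    scale = display_z / z          # similar triangles
    return (x * scale, y * scale)  # (x, y) on the glass, in metres

# A sign 3 m to the right, 10 m up, 8 m away, seen through a
# display held 0.5 m from the eye:
print(overlay_point((3.0, 10.0, 8.0), 0.5))  # (0.1875, 0.625)
```

Moving the glass closer to the eye shrinks `scale`, which is exactly the distance compensation described above: the same world point maps to a different spot on the display depending on how far from the eye it is held.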
One particular item we discussed was GPS-assisted driving, along with tourist-style information. We envisioned holding up a sheet-of-glass-like transparent display and seeing buildings directly through it, with each building overlaid with its history, the businesses inside, hours of operation, distance to get there, the last time we were there, and so on. Terascale would provide enough parallel compute capability to perform a database lookup for every object within our field of vision and display the results in real time.
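The fan-out of per-object lookups can be sketched with ordinary concurrency primitives. This is a toy illustration under stated assumptions: `fake_lookup` and the in-memory `FAKE_DB` stand in for real network-backed database queries, and a small thread pool stands in for Terascale’s parallel hardware.

```python
# Hypothetical sketch of the parallel-lookup idea: fan out one
# query per object in the field of view and gather the results.
from concurrent.futures import ThreadPoolExecutor

FAKE_DB = {  # stand-in for the building/business database
    "theater": "Now showing: 7 films, next start 8:15 pm",
    "museum":  "Open 9 am - 5 pm, free on Thursdays",
    "cafe":    "Open until 11 pm, moderately busy",
}

def fake_lookup(object_id):
    """Simulates one per-object database query (in reality, a
    network round trip to some information service)."""
    return object_id, FAKE_DB.get(object_id, "no data")

def annotate_field_of_view(visible_objects):
    """Issue all lookups concurrently; the overlay only needs the
    results, not the order in which they complete."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(fake_lookup, visible_objects))

overlays = annotate_field_of_view(["theater", "cafe", "library"])
print(overlays["theater"])  # Now showing: 7 films, next start 8:15 pm
print(overlays["library"])  # no data
```

Because each lookup is independent, the work parallelizes trivially; the hard part in practice is the latency and database coverage the article goes on to mention, not the fan-out itself.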
Imagine driving past a movie theater. You hold up the device so that the theater is visible through the glass, and it automatically shows you what’s playing, show times, ticket costs, and, with real-time feedback, how crowded the theater is.
These kinds of massively parallel compute-enabled human augmentation devices are coming. The form they take will not really be the crucial issue at first, but rather getting there. With today’s power requirements, it would literally take something plugged into the wall to provide the computing power necessary to handle it all. In addition, the databases and real-time wireless Internet access are simply beyond our abilities outside of the lab today. However, tomorrow is another story, and these are all just engineering hurdles to be overcome.
Do you think when man invented the first car he envisioned the Ferrari? At some point someone took the idea and extended it and the Ferrari came into existence. The same is going to happen with The Eye.