Good luck with that
Using Intel's library (OpenCV), I now have face position recognition.
Now I'm looking to build a model that, first, recognizes one face and only one, and second, models the lips while speaking, so it can recognize phonemes and, in effect, read lips. Combined with voice recognition, this would remove the weak point of voice recognition alone.
So I will use the same tools as, or be inspired by, the Sphinx project for pronunciation models. I will record some TV news while people are speaking and try to extract vector models from those recordings. In the end, maybe it will help me build a lip-reading model.
It won't hurt to try
I found a vector model for letter recognition using this library, so a particular mouth shape at a given moment, or a letter... it's still vectors.
follow my eyes
Now read my lips... well, not yet, but with this sample code I'll attempt to build a tool that will:
first, fill an array with points; second, define a vector based on those points. In the end, with some gentle and patient model building, this should give something that looks like a result.
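The two steps above can be sketched like this (pure NumPy; the point coordinates here are made up, and in a real run they would come from the tracker):

```python
# Step 1: fill arrays with tracked (x, y) points from two frames.
# Step 2: define a single motion vector from them (here: mean displacement).
import numpy as np

pts_prev = np.array([[100, 120], [104, 118], [98, 125]], dtype=float)
pts_next = np.array([[103, 120], [107, 119], [101, 125]], dtype=float)

motion = (pts_next - pts_prev).mean(axis=0)  # average displacement vector
print(motion)  # -> [3.  0.33333333]
```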
What do you think about gesture recognition? Or about a "virtual console"? For example, just slide a finger over the dash to change volume or seek.
Good luck with your project!
I thought about this with a follow-my-eyes function, but actually following eye direction is much more difficult, as it requires a very sensitive camera or a brightly lit environment, and a car is not such a good place for such tiny details... A finger "touching" in mid-air some functions displayed on screen could be an elegant solution, and far "easier" to do.
In addition to the follow-me function, some morphological functions that can identify a particular face (OpenCV has them) could be amusing, and a pretty good anti-theft system.
Here is an attempt to make the thing follow my eyes and also detect which direction I am looking.
To understand what is happening in the video: the circles mark a region of interest the code detected, and inside each circle you will see a segment starting from the center and showing the heading of the movement that occurred.
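The heading segment inside each circle can be computed from the region's mean motion vector; here is a minimal sketch (the function name and the 20-pixel segment length are illustrative, not the actual code from the video):

```python
# Turn a region's mean motion vector into a heading angle and a short
# segment from the circle centre, suitable for drawing as an overlay.
import math

def heading_segment(center, motion, length=20.0):
    """Return (angle_degrees, end_point) for a segment from `center`
    pointing in the direction of `motion`."""
    angle = math.atan2(motion[1], motion[0])
    end = (center[0] + length * math.cos(angle),
           center[1] + length * math.sin(angle))
    return math.degrees(angle), end

angle, end = heading_segment((160, 120), (3.0, 0.0))
print(angle, end)  # -> 0.0 (180.0, 120.0), i.e. pointing right
```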
After filtering, I'm pretty sure I can get it simplified enough to be usable as a mouse cursor, or better, as a virtual touch screen. There is still work to do and it probably won't work, but it's fun.
Please note that the purpose of this video is not to be nice to watch; if I can do what I plan to do, the video shown actually won't even be visible. It is just there as feedback of what is happening inside the computer.
This time, far better code: very fluid with the real video layer and no delayed timings. It felt like playing with something alive, or manipulating mercury, but there is no direction detection, so it is more suitable for gestures. The tracking abilities of this code combined with the direction detection of the other would probably make something really usable.
Your experiments are very interesting, and I think an iPhone-like virtual interface is possible...
I've stopped my research, because it's very hard to use in the car. There are so many different light situations (even no light when you drive at night).
So it needs at least infrared illumination, and when the sun is near the horizon, that's a great challenge.
I have another experiment for you: I am trying to digitize the power line. All connected parts leave their traces there in the form of signals. A neural network processes the signals and can find out if something is "wrong". This could be an early detection of a possible malfunction. With enough data it would even be possible to identify the bad component. A second step would be doing the same with a microphone in the engine bay.
The system would have to learn normal events like turning on the lights or shifting gear, etc.
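The idea above, sketched with a much simpler stand-in for the neural network: learn the normal range of a signal statistic from known-good windows, then flag windows that fall outside it. A real setup would feed windowed spectral features to a trained network; everything here (window size, threshold, the RMS feature) is illustrative:

```python
# Learn a "normal" baseline from healthy signal windows, then flag
# windows whose RMS deviates too far from it.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 64))  # known-good signal windows
mu, sigma = normal.mean(), normal.std()

def is_anomalous(window, k=4.0):
    """Flag a window whose RMS deviates more than k standard-error units
    from the RMS expected of a normal window."""
    rms = np.sqrt(np.mean(np.square(window)))
    baseline = np.sqrt(mu**2 + sigma**2)  # expected RMS of normal signal
    return abs(rms - baseline) > k * sigma / np.sqrt(window.size)

print(is_anomalous(rng.normal(0.0, 1.0, 64)))  # healthy window
print(is_anomalous(rng.normal(0.0, 3.0, 64)))  # much louder -> flagged
```

A trained network replaces the single RMS threshold with a learned model of many features at once, which is what would let it distinguish "lights turned on" from an actual fault.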