any optics guru here ?


  • any optics guru here ?

    Regarding the eye-tracking system project, I'm looking for a way to buy a cheap right-angle prism, or to build a sort of prism out of ordinary glass/plexiglass and reflective chromed solar film... any suggestions?
    The prism would relay the eye's motion to a camera without the hassle of having a camera right in front of the eye.
    Now, I understand a prism is a high-precision part, so building one myself sounds like science fiction, but without trying I can't say.

    OK, it is for Linux, but the library is OpenCV, which runs on any flavor of computer, and the code is written in C, so it can be used with Windows or Mac too.

    In addition to all this, I'm planning to add a second camera pointed at the scene, so the eye can directly aim at functions and/or icons on the screen; no more need for a touchscreen or PowerMate.

    At the moment the code can recognize the angles (directions) the eye is looking in. I simplified this to move the mouse cursor in the direction the eye is looking, but that is not the final target; more to come if I can get a prism, or an idea for building one.

    Edit:
    While I'm here... maybe some kind of fiber optic?

  • #2
    If anyone is wondering what I'm talking about...


    • #4
      lol, so you are everywhere.
      Thank you for your nice findings.

      I'll have a look at those. What I'm after is shedding the weight of a camera mounted on the glasses frame, but maybe the prism is not the best way to do it. I was looking for a way to make the glasses as light as possible.

      Fiber optics sounds ideal, and it would be nice, but I don't know how I could use it to carry the image of the eye to the camera lens; I suppose it wouldn't be possible at all... I know, dreamer, etc. lol

      Well, back to the prism: at least a prism should allow a wider mounting angle and avoid having the camera right in front of the eye.


      • #5
        Originally posted by shotgunefx
        Some cheap seemingly high quality ones surplus
        This one is the right one; it doesn't split the light into a rainbow. Good find, thank you.


        • #7
          Hi Kraft,

          I saw your videos of moving-object tracking; impressive!
          I was thinking about what you're trying to achieve using the prism, and one question comes up:
          originally you were able to track the whole face, or the eyes in particular, and the eye movements were relative to some static point, which would be the camera stand or the desk.
          With the prism and camera mounted on glasses, the reference point moves with the glasses assembly.

          What I'm trying to say is that with such glasses on your head, you can easily trace your eye movements, but you cannot control the position of the whole head relative to the screen. That means you can look at the screen from different angles, and your relative eye position will depend on that angle.

          I think you would need another camera to track your head and recognize both its movement and its rotation relative to your screen. Once you wear the glasses with the camera and prism, you actually lose that head reference, and it will contaminate the readings.

          The best-case scenario would be goggles with monitors mounted directly on them (like that gamer's toy).

          Overall it's not an easy task, but anyway, I'm impressed with the progress of your work.

          Regarding fibre optics, I don't think it would ever happen. You cannot transfer the view directly: the eye sees essentially straight light rays, while fibre optics carries angled rays, so the two are not directly compatible. If you need any assistance on this matter, feel free to ask me; I have pretty good knowledge of it.

          Let me know what you think.
          EPIA TC 1G 256MB 60GB Linux,WindowMaker, Roadnav, Xine, XMMS, iGuidance3
          Lilliput 8", Pharos i360, WUSB11v2.6 WiFi


          • #8
            Hello, thanks for your interest in this project. Well, we have two things fighting against each other; I'll try to make my explanations clear, as my English is a bit broken.

            - On one hand, we have what you already saw: tracking the whole head, and within it attempting to track the eyes. But due to the poor camera results (we must stay in the cheap price range), I'm not sure something really usable can be done with it.

            - On the other hand, we have another solution, which would be, as you said, a static camera watching only the iris movements.

            In the first case, the overall computation, comparison, and recognition costs are huge, and we are polluted by false alarms from the environment (sometimes the background is misinterpreted, lighting changes, etc.). This first solution is the most elegant (we don't have to wear any glasses) but still not accurate enough.

            In the second case, we get far better accuracy simply because we no longer bother with the background: detection is much faster thanks to the less complex image to decode, and we get far fewer errors. The prism idea was just to open up the camera's mounting angle on the glasses a bit, so the gaze isn't interfered with by something right in front of the eye. This is also about the safety issues an object sitting in front of the eye could cause.

            I was thinking about optical fiber when I remembered that some surgeons use this kind of hardware to minimize body invasion and still get a picture of the area they are working on.
            Since we only need a picture of the moving pupil, nothing more, maybe a tiny prism fixed on the glasses and aimed only at the pupil could concentrate the image information, transmit it through the fiber, and on the other side another prism could restore the picture and feed the camera's CCD with it. The advantage would be the light weight: no need to carry the camera on the glasses frame.

            At first I was feeding a dummy device acting like a mouse FIFO, but playing with durations blindly is not usable, and that's where I hit the problem of movement relative to the environment. I searched the internet and found this



            They also offer ready-made software for Linux, but it's outdated and works only with FireWire cameras (I couldn't get it to work due to too-old libraries and v4l not being supported).


            I also saw their suggestion to build some hardware, but it's not stealthy enough for me.

            So you are absolutely right that we need some relative information to tell the program what we are watching, and indeed we need a second camera; not to watch the head, though, but to watch the scene instead: screen, road, etc. Then it's possible to calibrate the program as a virtual touchscreen: four extreme coordinates bound a perimeter within which the eye can activate something. Because we have two pieces of information, first where the eye is looking, and second the scene-relative coordinates inside the picture, we can build a mouse move out of them. This means that when the eye moves inside the defined perimeter, mouse moves are generated; outside that perimeter, eye moves are simply ignored.

            This is what I'm currently working on (getting a kind of coordination between eye moves and the real-world scene inside a kind of "sensitive" perimeter).
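The virtual-touchscreen calibration described above can be sketched roughly as follows. For simplicity this assumes the four calibration extremes form an axis-aligned rectangle in the eye image; a real calibration would probably need a full four-corner fit. The function and all parameters are hypothetical, not the project's code.

```c
/* Map a raw eye coordinate (ex, ey) into screen space using calibration
 * extremes recorded while the user looked at the screen edges.
 * Returns 1 and writes a screen position when the gaze falls inside the
 * calibrated perimeter; returns 0 (gaze ignored) when it falls outside.
 * Illustrative sketch only -- assumes an axis-aligned perimeter. */
int map_gaze_to_screen(double ex, double ey,
                       double left, double right, double top, double bottom,
                       int screen_w, int screen_h,
                       int *sx, int *sy)
{
    if (ex < left || ex > right || ey < top || ey > bottom)
        return 0;                               /* outside: ignore */
    *sx = (int)((ex - left) / (right - left) * (screen_w - 1));
    *sy = (int)((ey - top)  / (bottom - top)  * (screen_h - 1));
    return 1;
}
```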


            • #9

              First of all, your English is good enough; at least better than my French is.

              Regarding the problem, I think a good idea would be to use an infrared camera. Since our eyes cannot perceive IR, we can illuminate the eyes with it to eliminate errors.

              After I saw the first of those videos, I told myself that some filtering is needed to compensate for the micro-movements of the iris, and this can be done fairly easily by setting a sensitivity threshold. I hadn't even realized the iris makes so many movements; our brain is used to it, though; it works as an image stabilizer.
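The sensitivity threshold suggested above could be as simple as a hold-last-position filter with light smoothing; a hypothetical sketch, with names and tuning invented for illustration:

```c
#include <math.h>

/* One-pole smoothing plus a sensitivity threshold, mimicking the
 * "image stabiliser" role the brain plays for iris micro-movements.
 * alpha in (0, 1] controls smoothing strength; movements whose smoothed
 * change stays under `threshold` are discarded and the old position is
 * held. `state` is the previous filtered position, kept by the caller. */
double filter_position(double raw, double *state,
                       double alpha, double threshold)
{
    double smoothed = *state + alpha * (raw - *state);
    if (fabs(smoothed - *state) < threshold)
        return *state;              /* micro-movement: hold position */
    *state = smoothed;
    return smoothed;
}
```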

              After I saw the second video, I was really impressed: the accuracy of the readings and the tracing is exceptional. The only problem is that the accuracy is only fine once the distance to the observed object is fairly large (a few meters). Since we will trace objects on a rather small screen located rather close to the eyes, we get more noise caused by fast iris movements. I believe some of that noise can be reduced by filtering, though.

              Another thing is the objects being observed. If it is a monitor screen, our video-tracing module should know what we are looking at, so it should recognize the shape and size of the observed screen. It should be able to "lock" onto the screen and then recognize the objects within it. I think that can be done fairly easily too.

              The only thing I have no ideas about: once we point the cursor at some object (with iris movements, obviously), how do we select it (click)? Blinking both eyes is not going to work, blinking just one eye is sometimes hard to train, and I believe the iris of the blinking eye moves slightly, causing a pointing error.

              Let's talk about fibre optics too. The devices used in endoscopy are very expensive. The reason is that they use special corrective lenses to modify the rays. Here's how it works:
              first, the rays (light) pass through a fixed lens and are "bent", i.e. converted into angled rays that enter the optical fibre;
              at the end of the fibre (which is also a fixed length), the rays are converted back into straight rays and transferred to the camera, or directly to the observer's eyes.

              Below I attach a quick sketch. The rays on the right should be concentrated in one point, as on the left, for this thing to work correctly. That's why the length of the fibre has to be correlated with the lenses, its own numerical aperture, and the distances between lens and fibre. The wavelength of the transported light matters too; for best effect it should be monochromatic. I don't think it will matter much here, though; the discrepancies are not really noticeable with the naked eye.

              That would be it for today. I'll try to review the website you pointed at, and we'll talk about it more.


              • #10
                For screen detection, OpenCV contains a contour-finding algorithm; combined with the Hough-lines algorithm, whose main task is to find lines, it should help find the screen flawlessly.

                About the click: it is possible to launch external code whose only task is to count time. OpenCV can find out whether a surface is skin or not, so the moment the eye blinks we start a counter, and if the surface is still skin when the countdown expires, we accept that as a click command. I think a normal blink can be told apart from a slightly longer one.
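That blink-timer idea could be a tiny per-frame state machine. This sketch assumes the skin/eye detector yields one open/closed flag per frame; the struct and thresholds are invented for illustration:

```c
/* Count consecutive closed-eye frames; when the eye reopens, a closure
 * longer than click_frames but shorter than timeout_frames is accepted
 * as a deliberate click, while an ordinary short blink is ignored. */
typedef struct {
    int closed_frames;          /* consecutive frames with eye closed */
} blink_state;

/* Feed one frame; returns 1 on the frame where a "click" blink ends. */
int blink_update(blink_state *s, int eye_open,
                 int click_frames, int timeout_frames)
{
    if (!eye_open) {
        s->closed_frames++;     /* still closed: keep counting */
        return 0;
    }
    int was_closed = s->closed_frames;
    s->closed_frames = 0;
    return was_closed >= click_frames && was_closed < timeout_frames;
}
```

At 25 fps, for example, click_frames around 10 and timeout_frames around 25 would make a deliberate 0.4-1 s closure register as a click.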

                Yesterday I received a paper from Maxim (Dallas) showing their new product, the MAX3982 equalizer.

                They say:

                where loss, reflections, and crosstalk impair the data eye, the equalizer restores the eye (the diagram you showed, with a nice eye inside the fiber) and reduces jitter. So maybe this component could help correct poor conditions in some homebrew hardware?



                Maybe the simplest would be to find some surplus endoscopic hardware?

                lol, I looked at the prices; I guess it's best to forget the optical-fiber idea.


                • #11
                  Some news on this project: I found a way to lock the eyes' position despite head movements, which frees us from the mandatory camera hung on the glasses (the one watching the eye permanently). Now only a single camera pointed at the scene is necessary. (Have a look in the Linux thread about computer-vision stuff to see the eye-locking video.)