April 5, 2008
Class 7 Notes
We have been manipulating images; today we arrive at video tracking. What types of tracking can we do on video? Until now we never had a sequence of frames; with video, looking for change over time becomes possible. One example is the work at the Golden Gate Bridge, tracking people jumping into the water. Tracking brightness is another kind of tracking we can do with video; brightness is a subset of color. If you want to track more than one thing, you can track them separately, and you can track many colors as long as they are distinct enough.
When we track anything, there are several components between the scene and our program. We start with the object we are tracking, then the light shining on it, then the camera, which is connected to our computer. If you look at this chain of events, there are a lot of variables. Yellow is not the same yellow under a different lighting scheme. The way we see the scene with our eyes is better than what the camera sees. The next big loss of information comes from the camera. The camera is connected to the computer with a wire: if it is a composite camera, we start losing signal over the composite lines; FireWire is better, but FireWire signals are compressed. Either way we are losing information. Once the frames arrive at our computer, we have native drivers and APIs on top of them. Every one of those steps adds latency. Until now latency was not that bad, but once you start tracking and drawing in tight relation to each other, it is really important to keep the latency down. It is also important to keep the calculations as simple as possible.
You really have to think about the state of the signal at every step; it is hard to recover visual information once it is lost. The most difficult thing is to create a tracking application that works in the environment you set up.
By the time the RGB values become available to you and you find an algorithm to convert them, it might be too late. You have to solve a lot of things earlier on the production line.
If you look at our application, we can see the reflective things that are facing the camera. You can try to create a usable environment, for example by setting the exposure to manual and bringing it down. Basically, you can play with the camera's settings; this is taking advantage of the camera. That way you can create an environment that is trackable.
Tracking white light, that is, brightness, is the most rewarding tracking you can get. He encourages us to use it instead of colors and other weird things.
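To make that concrete, here is a minimal brightest-pixel tracker written as a Processing sketch. (This assumes Processing and its video library, which is my guess at the class environment; it is a sketch of the idea, not the class demo.)

```
import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();  // needed in recent versions of the video library
}

void draw() {
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0);

  // Scan every pixel and remember the brightest one.
  video.loadPixels();
  float maxBrightness = -1;
  int brightX = 0, brightY = 0;
  for (int y = 0; y < video.height; y++) {
    for (int x = 0; x < video.width; x++) {
      float b = brightness(video.pixels[y * video.width + x]);
      if (b > maxBrightness) {
        maxBrightness = b;
        brightX = x;
        brightY = y;
      }
    }
  }

  // Mark the brightest point, e.g. a flashlight or infrared LED.
  noFill();
  stroke(255, 0, 0);
  ellipse(brightX, brightY, 20, 20);
}
```

In a dark room with the exposure turned down, the brightest pixel is almost always the light you are tracking, which is why brightness tracking is so rewarding.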
Another thing is infrared. Cameras can see infrared light, and a tiny infrared LED is the most rewarding thing to track: you can use it as a bright point source. In some cases the amount of information in the surrounding environment is distracting; you can use an infrared filter to avoid this. You should get an infrared filter. Undeveloped film works, and so do old Zip cartridges, or you can get the small glass ones the military uses for about $25; they are pretty good. How they work is simple: you put the filter in front of the camera, and the camera sees only the infrared source and nothing else. Sometimes you might want to track something in front of a projection; you can still use the filter there. The infrared filter kills fluorescent light entirely, but it doesn't kill normal direct spot lights. All cameras see infrared, but black-and-white cameras see it better, and some cheap security cameras are much better still.
A hot mirror is the opposite: it reflects infrared light back but passes all the remaining visible light. You can use it when you have an infrared source in the camera's range that you don't want your audience to see.
If you look at the human eye, it has the ability to adjust over a factor of a million; we can tolerate a brightness ratio of about 1:1,000,000.
The next thing we are going to track is color. You cannot track the LEDs as they are, since they are too saturated. You can tweak the camera settings to make them trackable by lowering the exposure level.
Comparing two colors comes down to finding the distance between two points in space: we treat each color as a point in a three-dimensional RGB space. Say we are trying to see which marker is closest to this black marker. We could calculate the distance with the Pythagorean theorem, but we don't care about the exact value; all we need is which one is closer. Since we are only looking at the values relative to each other, we can get rid of the square root:
dist² = ΔR² + ΔG² + ΔB²
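In code the comparison is just a few lines. Here is a minimal Processing sketch of the squared-distance idea (the color values and the function name are made up for illustration):

```
// Squared distance between two colors in RGB space.
// No square root: we only need to know which distance is smaller.
float colorDistSq(color c1, color c2) {
  float dr = red(c1) - red(c2);
  float dg = green(c1) - green(c2);
  float db = blue(c1) - blue(c2);
  return dr * dr + dg * dg + db * db;
}

void setup() {
  color target = color(255, 200, 0);  // the color we want to track
  color a = color(240, 190, 20);
  color b = color(60, 80, 200);
  // a is much closer to target than b, so a wins the comparison
  println(colorDistSq(target, a) < colorDistSq(target, b));  // true
}
```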
The two basic things you can track are brightness and color, which are both based on RGB values. If you want to go to a higher level of abstraction you can identify more things: objects, pattern recognition. You can look for libraries that do pattern recognition for you. Face recognition is one kind of pattern recognition. It looks for where the face is, and for an interactive installation, where the face is is an important thing to know. Face recognition can also be used for face identification, but for our purposes that is not as important. You can download libraries that let you do this.
Those are things you can get from single frames of video.
We can also track change. You can identify where motion is happening. One approach is not to look for change every frame but to take a more holistic approach: a reference frame. Look at the room when it is empty and when it is full, and compare frames between those times.
Naive Track Change takes a reference snapshot frame, and when it sees a change in the current frame it shows you where. It is important to keep your camera steady in that setup. We should also be careful about autofocus: when we put a hand in front of the camera, the rest of the scene goes out of focus and gives us wrong values. Auto brightness is the same. We should go manual as much as we can so the camera's settings stay steady.
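Here is a Processing sketch of the same reference-frame idea (my reconstruction of the approach, not the actual Naive Track Change code): press any key to snapshot the empty scene, then every frame is compared against it pixel by pixel.

```
import processing.video.*;

Capture video;
PImage refFrame;       // snapshot of the empty scene
float threshold = 40;  // how different a pixel must be to count as change (tune this)

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  if (refFrame == null) {
    image(video, 0, 0);  // no reference yet; just show the live feed
    return;
  }
  video.loadPixels();
  refFrame.loadPixels();
  loadPixels();
  for (int i = 0; i < video.pixels.length; i++) {
    // a pixel that differs enough from the reference is "changed"
    float diff = abs(brightness(video.pixels[i]) - brightness(refFrame.pixels[i]));
    pixels[i] = diff > threshold ? color(255) : color(0);
  }
  updatePixels();
}

void keyPressed() {
  refFrame = video.get();  // snapshot the current frame as the reference
}
```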
The DSP chip in the camera has a feature for rough movement: Steady Shot. It allows the camera to shift the pixels by one or two to make the image look steady. So that was one type of change: we detect it by giving the system a reference frame and then measuring against that.
Looking for movement is another type of change to look for. Comparing consecutive frames will not tell us whether someone is standing still in front of the camera or not; these are two different phenomena we can use. Frame-to-frame differencing tells you how fast things are moving and what is moving in the frame. If you want to paint with your hand, you need to write an application that takes close-together areas and closes them up before drawing, because the middle of the hand comes out hollow in the difference image. Either way, the core idea is the reference frame.
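Frame-to-frame differencing is the same comparison with the previous frame as the reference. A minimal Processing sketch measuring the overall amount of motion (again my reconstruction, not the class demo):

```
import processing.video.*;

Capture video;
PImage prevFrame;  // the previous frame is our moving reference

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  if (!video.available()) return;
  video.read();
  image(video, 0, 0);
  if (prevFrame != null) {
    video.loadPixels();
    prevFrame.loadPixels();
    float motion = 0;
    for (int i = 0; i < video.pixels.length; i++) {
      motion += abs(brightness(video.pixels[i]) - brightness(prevFrame.pixels[i]));
    }
    // average change per pixel: near zero when the scene is still,
    // even if someone is standing motionless in front of the camera
    println("motion: " + motion / video.pixels.length);
  }
  prevFrame = video.get();  // this frame becomes the next reference
}
```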
New Color Track looks for occurrences that are connected to one another, so it doesn't track really small stray pixels; those show up in yellow. Hard edges are a real problem: you need a blurry image in order to track movement more easily. It is important not to have hard-edged stuff in your reference frame.
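One way to get that blur in Processing is the built-in image filter (a small helper sketch; the blur radius is a guess you would tune):

```
// Soften hard edges before differencing, so tiny camera shifts
// along an edge don't register as change.
PImage softened(PImage frame) {
  PImage out = frame.get();  // copy so the original frame stays intact
  out.filter(BLUR, 2);       // small blur; tune the radius for your setup
  return out;
}
```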
Bluescreen, background removal, difference, and masking all take advantage of a reference frame. We are going to talk about them next week.
The problem with our code is that we look for change and add it all together, so if there are two separate hands on the screen we get one big rect. What we can do instead is look for separate blobs. That can be a good development exercise. Blobbing over time is another kind of blobbing: finding a way to track the persistence of an object in the frame. When someone walks in front of the camera, in the next frame it sees the same person; you can then start to count the blobs, which tells you how many people have come into the frame, whether there is anyone in the frame, and so on.
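A minimal sketch of the blob idea, assuming we already have a boolean mask of changed pixels (for example from the reference-frame comparison above): an iterative flood fill groups connected pixels, so two separate hands come out as two blobs instead of one big rect.

```
import java.util.ArrayDeque;

// Count connected groups of "changed" pixels in a w-by-h mask.
int countBlobs(boolean[] mask, int w, int h) {
  boolean[] visited = new boolean[w * h];
  ArrayDeque<Integer> stack = new ArrayDeque<Integer>();
  int blobs = 0;
  for (int i = 0; i < w * h; i++) {
    if (!mask[i] || visited[i]) continue;
    blobs++;  // found the seed pixel of a new blob
    visited[i] = true;
    stack.push(i);
    while (!stack.isEmpty()) {
      int p = stack.pop();
      int x = p % w, y = p / w;
      // spread to the four neighbors that are also "changed"
      int[] nx = { x - 1, x + 1, x, x };
      int[] ny = { y, y, y - 1, y + 1 };
      for (int k = 0; k < 4; k++) {
        if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
        int q = ny[k] * w + nx[k];
        if (mask[q] && !visited[q]) {
          visited[q] = true;
          stack.push(q);
        }
      }
    }
  }
  return blobs;
}
```

Blobbing over time would then mean matching each blob in this frame to the nearest blob in the previous frame, which is how you get persistence.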
OpenCV is one library you can use for this kind of thing.