Hello forum,
In my project, there is a single object to be tracked. It is small (~30 pixels), has few to no features, and moves completely randomly, but it does have certain properties (contour shape, rigidity, etc.) that can be exploited.
By performing background subtraction (OpenCV's MOG) over multiple frames and exploiting these properties, I have managed to detect the object of interest against a not-so-noisy background.
I believe the technique is similar to a greedy search: the candidate detections (object + noise) become nodes over multiple frames (3 frames in my case), and the chain of nodes connected by the lowest edge weights (a score based on the object's properties) is taken to be the actual object. I have attached a picture showing an example.
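The greedy linking idea above could be sketched like this in pure Python. The cost function is a placeholder (distance plus area mismatch); the real score would use the contour-shape and rigidity properties:

```python
import math

def cost(a, b):
    """Placeholder edge weight between two detections (x, y, area):
    spatial distance plus size mismatch."""
    return math.hypot(a[0] - b[0], a[1] - b[1]) + abs(a[2] - b[2])

def greedy_track(frames):
    """Greedily follow the lowest-cost edge through per-frame candidate
    lists, trying each frame-0 candidate as a start, and keep the
    cheapest overall chain."""
    best_path, best_total = None, float("inf")
    for start in frames[0]:
        path, total = [start], 0.0
        for candidates in frames[1:]:
            nxt = min(candidates, key=lambda c: cost(path[-1], c))
            total += cost(path[-1], nxt)
            path.append(nxt)
        if total < best_total:
            best_path, best_total = path, total
    return best_path

# Toy data: one consistent chain (the object) plus random noise blobs.
frames = [
    [(10, 10, 30), (80, 15, 5)],   # frame 1
    [(14, 12, 31), (20, 90, 4)],   # frame 2
    [(18, 13, 29), (60, 50, 6)],   # frame 3
]
track = greedy_track(frames)
```

Here the consistent low-cost chain through (10, 10), (14, 12), (18, 13) wins over the noise detections.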
By running background subtraction continuously, I am able to track the desired object with fairly decent accuracy. Under occlusion the program switches to a different algorithm, with MOG still running continuously in the background to re-detect the object in case it is lost.
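The detect/track switching described above can be viewed as a small state machine. This is only an illustrative sketch; `detect_with_mog` and `track_fast` are hypothetical stand-ins for the actual detection and occlusion-handling routines:

```python
from enum import Enum, auto

class Mode(Enum):
    DETECT = auto()  # full MOG + contour-property detection
    TRACK = auto()   # cheaper tracker used once the object is locked

def step(mode, frame, detect_with_mog, track_fast):
    """Advance one frame; fall back to detection whenever tracking fails."""
    if mode is Mode.DETECT:
        obj = detect_with_mog(frame)
    else:
        obj = track_fast(frame)
    return (Mode.TRACK, obj) if obj is not None else (Mode.DETECT, None)

# Toy demo with stubbed per-frame results:
# detection succeeds on frame 2, then tracking fails on frame 4.
outcomes = [None, (10, 10), (12, 11), None]
results, mode = [], Mode.DETECT
for out in outcomes:
    mode, obj = step(mode, None, lambda f, o=out: o, lambda f, o=out: o)
    results.append((mode, obj))
```

The point of the sketch is that MOG only has to run while in `DETECT` mode, which is where the performance question below comes in.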
The problem is that the project requires multiple cameras, so the program lags and is no longer real-time.
Should the program switch to a more efficient search/tracking algorithm once the object is successfully detected, calling background subtraction only when the object is lost, or is it okay to run background subtraction 24/7? (The lag could also simply be because I am running everything on a laptop.)
Secondly, since background subtraction (MOG) needs a few frames to initialize, and I need the nodes to be established over at least 3 frames, I am worried that invoking MOG only after the object is lost may be too late, reducing my accuracy.
Thirdly, is optical flow more efficient than background subtraction?
Hope the experts here can enlighten me.
Regards,
Haziq
PS: I really like my current work so far and would like to improve/optimize it instead of switching to a different technique.