I have two calibrated cameras, and I want to measure the size of a moving object such as a car or a human. How can I use the camera calibration information to measure object size?
With a stereo-vision system, the third dimension of any object detected in the scene is measurable: each pixel in the 3D image corresponds to a calibrated real-world value (e.g. some centimeters). It is then sufficient to run a segmentation step to extract the main contours of the detected object; determining its size from their geometric properties becomes feasible.
May I know which tool you are using? If it is OpenCV, then `cv2.moments` helps measure properties of an object such as its area and centroid.
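As a minimal sketch of what `cv2.moments` computes, here are the raw moments written out in plain NumPy (the same quantities `cv2.moments(contour)` would return; the toy mask below is hypothetical):

```python
import numpy as np

def raw_moments(mask):
    """Raw image moments m00, m10, m01 of a binary mask --
    the same quantities cv2.moments() reports."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))   # area in pixels
    m10 = float(xs.sum())  # first moment in x
    m01 = float(ys.sum())  # first moment in y
    return m00, m10, m01

# Hypothetical toy mask: a 4x4 filled square inside a 10x10 image
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:6, 3:7] = 1

m00, m10, m01 = raw_moments(mask)
area = m00                       # 16.0 pixels
cx, cy = m10 / m00, m01 / m00    # centroid (4.5, 3.5)
```

To turn a pixel area into a real-world area you still need the pixel-to-mm scale discussed in the other answers.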
Basically, you can measure the object from two pictures taken at different distances (along the same line); you need to know the real distance to the object, for example via stereo vision. You cannot estimate the real size of an object from a single image if you have no information about its distance.
If the object is moving, things can get slightly complicated unless you use a stereo camera system (two cameras calibrated together). Then you can use the disparity map, i.e. the difference between what each camera sees, to estimate the distance to the camera and therefore the size of the object.
To do this, both cameras should be identical (for convenience), mounted in a binocular configuration with a fixed baseline between the two optical axes. The working distance (the range of distances from the camera system at which you can measure objects) of a stereo system depends on the distance between the cameras. Also, the depth estimate of a stereo system is much less accurate than the horizontal and vertical measurements in the image. In practice, to make such a system work reliably, you need to work in a specific controlled space. Even so, depending on the circumstances, a stereo system may not solve your problem with enough accuracy (this depends on your needs).
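For an ideal rectified stereo pair, the disparity-to-depth relation is Z = f·B/d (focal length f in pixels, baseline B, disparity d), and an object spanning s pixels at depth Z spans roughly s·Z/f in world units. A sketch with hypothetical numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Ideal rectified stereo: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def size_from_pixels(f_px, depth_m, extent_px):
    """Metric extent of an object spanning extent_px pixels at depth Z."""
    return extent_px * depth_m / f_px

# Hypothetical setup: 700 px focal length, 12 cm baseline, 14 px disparity
Z = depth_from_disparity(f_px=700.0, baseline_m=0.12, disparity_px=14.0)  # 6.0 m
height_m = size_from_pixels(f_px=700.0, depth_m=Z, extent_px=200.0)       # ~1.71 m
```

Note how a small disparity error translates into a large depth error at long range, which is why the depth axis is the least accurate one.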
Other options are to use a secondary sensor to solve the distance or size problem. Sometimes you may even be able to solve the problem with a single-camera system.
With a single camera, however, calibration gives you the camera parameters, but to estimate the size of an object you also need its distance from the camera in order to solve the projection model. The usual way to do this is to work on a plane with a known pose relative to the camera image plane, and assume all objects lie on that plane. If you also use this plane for calibration, you do not need to estimate the pose separately, and you can then easily convert between scales.
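Working on a known plane amounts to applying a homography from image pixels to metric plane coordinates. A minimal sketch, where the homography `H` is a made-up pure scaling (in practice you would estimate it during calibration, e.g. with `cv2.findHomography`):

```python
import numpy as np

def to_plane(H, px):
    """Map an image pixel to metric plane coordinates via homography H."""
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]

# Hypothetical homography: a pure scale of 0.5 mm per pixel on the plane
H = np.diag([0.5, 0.5, 1.0])

a = to_plane(H, (100.0, 40.0))
b = to_plane(H, (300.0, 40.0))
length_mm = np.linalg.norm(b - a)   # 100.0 mm for this toy H
```

Any object touching the plane (e.g. a car's footprint on the road) can be measured this way; objects above the plane will be over- or under-estimated.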
If you use the latter model, you can always do a pseudo-calibration that disregards lens distortion (so you cannot use lenses with noticeable distortion, such as fisheye lenses); you also need a plane parallel to the camera image plane. In this particular case, the vertical and horizontal pixel-to-mm scales are constant, and you can estimate them from the size of any known object in the image.
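The constant-scale case reduces to a single division; a sketch using a hypothetical reference object (an A4 sheet, 210 mm wide, measured at 420 pixels in the image):

```python
def mm_per_pixel(known_mm, known_px):
    """Constant pixel-to-mm scale, valid only for a plane
    parallel to the image plane with negligible lens distortion."""
    return known_mm / known_px

scale = mm_per_pixel(known_mm=210.0, known_px=420.0)  # 0.5 mm per pixel
car_length_mm = 3600.0 * scale                        # hypothetical 3600 px extent -> 1800 mm
```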
After determining the parameter sets of the two cameras, you need to solve an intersection problem. This can be done with the collinearity equations or the coplanarity equation; please consult any standard photogrammetry text for the relevant equations.
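One common way to solve that intersection numerically is linear (DLT) triangulation from the two projection matrices; it is algebraically equivalent to intersecting the collinearity rays. A sketch with a made-up rectified pair (identity intrinsics, 1 m baseline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: intersect the viewing rays defined
    by projection matrices P1, P2 and normalized pixel observations x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identity intrinsics, second camera shifted 1 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
x1 = X_true[:2] / X_true[2]                              # observation in camera 1
x2 = (X_true - np.array([1.0, 0.0, 0.0]))[:2] / X_true[2]  # observation in camera 2
X = triangulate(P1, P2, x1, x2)                          # recovers X_true
```

Triangulating two points on the object and taking the distance between the recovered 3D coordinates then gives the metric size directly.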