Prev Tutorial: Detection of Diamond Markers
Next Tutorial: Aruco module FAQ
The ArUco module can also be used to calibrate a camera. Camera calibration consists of obtaining the camera intrinsic parameters and distortion coefficients. These parameters remain fixed unless the camera optics are modified, so calibration only needs to be done once.
Camera calibration is usually performed using the OpenCV cv::calibrateCamera() function. This function requires some correspondences between environment points and their projection in the camera image from different viewpoints. In general, these correspondences are obtained from the corners of chessboard patterns. See the cv::calibrateCamera() function documentation or the OpenCV calibration tutorial for more detailed information.
Using the ArUco module, calibration can be performed based on ArUco marker corners or ChArUco corners. Calibrating with ArUco is much more versatile than using traditional chessboard patterns, since it allows occlusions and partial views.
As stated, calibration can be done using either marker corners or ChArUco corners. However, the ChArUco corners approach is highly recommended, since the provided corners are much more accurate than marker corners. Calibration using a standard board should only be employed in scenarios where a ChArUco board cannot be used because of some restriction.
Calibration with ChArUco Boards
To calibrate using a ChArUco board, it is necessary to detect the board from different viewpoints, in the same way that standard calibration is done with the traditional chessboard pattern. However, thanks to the benefits of ChArUco, occlusions and partial views are allowed, and not all corners need to be visible in all viewpoints.
ChArUco calibration viewpoints
The example of using cv::calibrateCamera() for cv::aruco::CharucoBoard:
    // Create the ChArUco board and its detector
    aruco::CharucoBoard board(Size(squaresX, squaresY), squareLength, markerLength, dictionary);
    aruco::CharucoDetector detector(board, charucoParams, detectorParams);

    // Collected detections, one element per captured viewpoint
    vector<Mat> allCharucoCorners, allCharucoIds;
    vector<vector<Point2f>> allImagePoints;
    vector<vector<Point3f>> allObjectPoints;
    vector<Mat> allImages;
    Size imageSize;

    while(inputVideo.grab()) {
        Mat image, imageCopy;
        inputVideo.retrieve(image);

        vector<int> markerIds;
        vector<vector<Point2f>> markerCorners;
        Mat currentCharucoCorners, currentCharucoIds;
        vector<Point3f> currentObjectPoints;
        vector<Point2f> currentImagePoints;

        // Detect the ChArUco board in the current frame
        detector.detectBoard(image, currentCharucoCorners, currentCharucoIds);

        // 'key' comes from waitKey() on the visualization window (omitted here)
        if(key == 'c' && currentCharucoCorners.total() > 3) {
            // Match detected ChArUco corners with their board object points
            board.matchImagePoints(currentCharucoCorners, currentCharucoIds, currentObjectPoints, currentImagePoints);

            if(currentImagePoints.empty() || currentObjectPoints.empty()) {
                cout << "Point matching failed, try again." << endl;
                continue;
            }

            cout << "Frame captured" << endl;

            allCharucoCorners.push_back(currentCharucoCorners);
            allCharucoIds.push_back(currentCharucoIds);
            allImagePoints.push_back(currentImagePoints);
            allObjectPoints.push_back(currentObjectPoints);
            allImages.push_back(image);
            imageSize = image.size();
        }
    }

    Mat cameraMatrix, distCoeffs;

    if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
        cameraMatrix = Mat::eye(3, 3, CV_64F);
        cameraMatrix.at<double>(0, 0) = aspectRatio;
    }

    // Calibrate camera using the collected ChArUco correspondences
    double repError = calibrateCamera(allObjectPoints, allImagePoints, imageSize, cameraMatrix, distCoeffs,
                                      noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);
The ChArUco corners and ChArUco identifiers captured on each viewpoint are stored in the vectors allCharucoCorners and allCharucoIds, one element per viewpoint.
The calibrateCamera() function will fill the cameraMatrix and distCoeffs arrays with the camera calibration parameters and will return the reprojection error obtained from the calibration. The elements in rvecs and tvecs will be filled with the estimated pose of the camera (with respect to the ChArUco board) in each of the viewpoints. Finally, the calibrationFlags parameter determines some of the options for the calibration.
A full working example is included in calibrate_camera_charuco.cpp inside the samples/cpp/tutorial_code/objectDetection folder.
The samples now take input via the command line using cv::CommandLineParser. For this file the example parameters will look like:
"camera_calib.txt" -w=5 -h=7 -sl=0.04 -ml=0.02 -d=10 -v=path/img_%02d.jpg
The camera calibration parameters in opencv/samples/cpp/tutorial_code/objectDetection/tutorial_camera_charuco.yml were obtained using the images img_00.jpg-img_03.jpg from this folder.
Calibration with ArUco Boards
As stated above, the use of ChArUco boards instead of ArUco boards is recommended for camera calibration, since ChArUco corners are more accurate than marker corners. However, in some special cases calibration based on ArUco boards may be required. As in the previous case, it requires detecting an ArUco board from different viewpoints.
ArUco calibration viewpoints
The example of using cv::calibrateCamera() for cv::aruco::GridBoard:
    // Create the ArUco grid board and the marker detector
    aruco::GridBoard gridboard(Size(markersX, markersY), markerLength, markerSeparation, dictionary);
    aruco::ArucoDetector detector(dictionary, detectorParams);

    // Collected detections, one element per captured viewpoint
    vector<vector<vector<Point2f>>> allMarkerCorners;
    vector<vector<int>> allMarkerIds;
    Size imageSize;

    while(inputVideo.grab()) {
        Mat image, imageCopy;
        inputVideo.retrieve(image);

        vector<int> markerIds;
        vector<vector<Point2f>> markerCorners, rejectedMarkers;

        // Detect the markers in the current frame
        detector.detectMarkers(image, markerCorners, markerIds, rejectedMarkers);

        // Refind strategy: use the board layout to recover rejected markers
        if(refindStrategy) {
            detector.refineDetectedMarkers(image, gridboard, markerCorners, markerIds, rejectedMarkers);
        }

        // 'key' comes from waitKey() on the visualization window (omitted here)
        if(key == 'c' && !markerIds.empty()) {
            cout << "Frame captured" << endl;
            allMarkerCorners.push_back(markerCorners);
            allMarkerIds.push_back(markerIds);
            imageSize = image.size();
        }
    }

    Mat cameraMatrix, distCoeffs;

    if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
        cameraMatrix = Mat::eye(3, 3, CV_64F);
        cameraMatrix.at<double>(0, 0) = aspectRatio;
    }

    // Match detected marker corners with board object points, per frame
    vector<Mat> processedObjectPoints, processedImagePoints;
    size_t nFrames = allMarkerCorners.size();

    for(size_t frame = 0; frame < nFrames; frame++) {
        Mat currentImgPoints, currentObjPoints;
        gridboard.matchImagePoints(allMarkerCorners[frame], allMarkerIds[frame], currentObjPoints, currentImgPoints);
        if(currentImgPoints.total() > 0 && currentObjPoints.total() > 0) {
            processedImagePoints.push_back(currentImgPoints);
            processedObjectPoints.push_back(currentObjPoints);
        }
    }

    // Calibrate camera using the collected marker correspondences
    double repError = calibrateCamera(processedObjectPoints, processedImagePoints, imageSize, cameraMatrix,
                                      distCoeffs, noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);
A full working example is included in calibrate_camera.cpp inside the samples/cpp/tutorial_code/objectDetection folder.
The samples now take input via the command line using cv::CommandLineParser. For this file the example parameters will look like:
"camera_calib.txt" -w=5 -h=7 -l=100 -s=10 -d=10 -v=path/aruco_videos_or_images