
[OpenCV] Stereo Vision Testing Based on OpenCV

2016-12-09 10:31
Reposted from: http://blog.csdn.net/loser__wang/article/details/52836042

The code is based on Zou Yuhua's stereo vision posts and on the Camera Calibration With OpenCV and Camera Calibration and 3D Reconstruction documentation, modified for my own setup.

If you just want something working quickly for an engineering project, this post should be enough. If you want to study the subject systematically, read the relevant textbooks first and use Zou Yuhua's blog as a supplement.


Preparing the Environment

Since this post is about stereo vision experiments, you need two cameras; for single-camera calibration see my previous post. Below I describe the options for getting a stereo camera pair, and I will explain the points that need special attention.


Preparing the Stereo Cameras

Option 1: buy two ordinary USB cameras. This is what I tried first, but I ran into quite a few pitfalls:

Avoid very wide-angle lenses: the heavy distortion causes serious problems later in the matching stage.

Mount the two cameras as parallel as possible. Rectification can correct for misalignment, but it is still better to start with a well-aligned pair.

Also pay attention to the baseline (the distance between the two camera axes); it should be neither too large nor too small. 5-10 cm works well.

The resolution can be higher, but in my experiments I limited it to 320x240. In fact you can calibrate at a high resolution and then match at a lower one, as sketched below.
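For example, a minimal sketch (my own addition, not from the original code) of carrying the intrinsics from a higher calibration resolution down to the matching resolution: the distortion coefficients are unchanged, while fx, fy, cx, cy scale with the image size.

// Minimal sketch (assumption: calibration was done at calibSize, matching runs at matchSize).
cv::Mat scaleCameraMatrix(const cv::Mat& K, cv::Size calibSize, cv::Size matchSize)
{
    double sx = (double)matchSize.width  / calibSize.width;
    double sy = (double)matchSize.height / calibSize.height;
    cv::Mat Ks = K.clone();            // K is the 3x3 CV_64F camera matrix
    Ks.at<double>(0, 0) *= sx;         // fx
    Ks.at<double>(1, 1) *= sy;         // fy
    Ks.at<double>(0, 2) *= sx;         // cx
    Ks.at<double>(1, 2) *= sy;         // cy
    return Ks;                         // distortion coefficients stay as they are
}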

Option 2: buy an integrated stereo camera from Taobao.

Again, check whether it is wide-angle; I found this a bit of a trap. I don't want to turn this into an advertisement, but the Taobao vendor's unit is well integrated; the widest-angle model seemed to have problems, though, so I swapped it for a less wide-angle one.


Preparing the Calibration Board

You can buy a calibration board on Taobao, but in a word: expensive. If money is no object, ignore this.
Or print one yourself, following the method in my previous post. Make a careful note of its exact dimensions.

These dimensions are counted by the number of inner corners where the black and white squares meet. For example, my board is 9x6. Remember this; a minimal check is shown below.
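As a quick sanity check (my own sketch; `image` here is a hypothetical captured frame): a board with 10x7 printed squares has 9x6 inner corners, and that inner-corner count is the pattern size OpenCV expects.

cv::Size boardSize(9, 6);                       // inner corners, not squares
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(image, boardSize, corners);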


Stereo Calibration

The full source code for all the experiments is given at the bottom; it is a bit messy, so I will go through it piece by piece. The goal of stereo calibration is to obtain each camera's intrinsic parameters, plus the extrinsic parameters relating the two cameras.
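To make the extrinsics concrete: the R (3x3) and T (3x1) returned by stereoCalibrate map a point from the left camera's coordinate frame into the right camera's frame. A minimal illustration (my own addition, not from the original code):

// Xr = R * Xl + T, with Xl expressed in the left camera frame.
cv::Vec3d leftToRight(const cv::Matx33d& R, const cv::Vec3d& T, const cv::Vec3d& Xl)
{
    return R * Xl + T;
}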


Reading from the Stereo Cameras

Here is the code for reading both cameras. I also changed the resolution of both cameras; whether this works depends on the specific camera, as some ignore the setting and your change has no effect.
#include "opencv2/opencv.hpp"
#include "opencv2/opencv.hpp"
#include "opencv2/highgui.hpp"

using namespace std;
using namespace cv;

int main(){
VideoCapture camera_l(1);
VideoCapture camera_r(0);
if (!camera_l.isOpened()) { cout << "No left camera!" << endl; return -1; }
if (!camera_r.isOpened()) { cout << "No right camera!" << endl; return -1; }
camera_l.set(CAP_PROP_FRAME_WIDTH, 320);
camera_l.set(CAP_PROP_FRAME_HEIGHT, 240);
camera_r.set(CAP_PROP_FRAME_WIDTH, 320);
camera_r.set(CAP_PROP_FRAME_HEIGHT, 240);
cv::Mat frame_l, frame_r;
while (1) {
camera_l >> frame_l;
camera_r >> frame_r;
imshow("Left Camera", frame_l);
imshow("Right Camera", frame_r);
char key = waitKey(1);
if (key == 27 || key == 'q' || key == 'Q') //Allow ESC to quit
break;
}
return 0;
}
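Since some cameras silently ignore the resolution request, a quick check I would add (my own addition, not in the original code) is to read back what the driver actually delivers:

double w = camera_l.get(cv::CAP_PROP_FRAME_WIDTH);
double h = camera_l.get(cv::CAP_PROP_FRAME_HEIGHT);
std::cout << "Left camera actual resolution: " << w << "x" << h << std::endl;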

Reading two cameras would normally not be worth discussing, but while doing other experiments later I found that the cameras sometimes fail to initialize together. If each camera reads fine on its own and they only fail when opened at the same time, there are the following workarounds.

* Zou Yuhua's approach

* Once both cameras run individually, change the initialization code: replace the two if checks with the following two lines, which keep retrying open() until each camera is available.
while (!camera_l.isOpened()) { camera_l.open(1); };
while (!camera_r.isOpened()) { camera_r.open(0); };

The result looks like this:



Notice that the right camera's image is shifted slightly to the "left" relative to the left camera's image. Think about why this happens; it is important. If your setup does not show this behavior, all of the following work will be wasted, because the matching algorithms rely on exactly this "left shift".
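For reference, the short explanation: a scene point projects further to the left in the right image, and this horizontal offset is the disparity d = x_left - x_right. For a rectified pair with focal length f (in pixels) and baseline B, depth follows Z = f * B / d, so the matchers below only search for correspondences shifted in that direction. A tiny worked example with hypothetical numbers:

double f = 350.0;      // focal length in pixels (hypothetical, taken from the camera matrix)
double B = 60.0;       // baseline in mm, within the 5-10 cm range suggested above
double d = 21.0;       // disparity in pixels, d = x_left - x_right
double Z = f * B / d;  // depth = 1000 mm, i.e. the point is about 1 m away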


Calibration

The basic flow is: read the left and right cameras, detect the chessboard corners in each image, and once a complete set of corners is detected in both images at the same time, refine them to sub-pixel accuracy and store them. After accumulating enough views (20-30), compute the calibration parameters and save them. Again, the code follows, with notes on some implementation details.

Please read through it carefully.

The ChessboardStable function at the top checks whether the chessboard is being held steady. In my experiments the stereo rig was hand-held, so there is always some shake; if we only checked whether corners were present, blurry and unstable frames could slip into the analysis and introduce large errors. Judging stability over a queue of recent detections avoids this. I crudely use a vector in place of a proper queue.

Further down, make sure boardSize and squareSize are set to match your calibration board. I printed mine on a plain A4 sheet, and each square measured 26 mm; set these values according to your own board.
#include <string>
#include <stdio.h>
#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2f> >corners_l_array, corners_r_array;
int array_index = 0;
bool ChessboardStable(vector<Point2f>corners_l, vector<Point2f>corners_r){
if (corners_l_array.size() < 10){
corners_l_array.push_back(corners_l);
corners_r_array.push_back(corners_r);
return false;
}
else{
corners_l_array[array_index % 10] = corners_l;
corners_r_array[array_index % 10] = corners_r;
array_index++;
double error = 0.0;
for (int i = 0; i < corners_l_array.size(); i++){
for (int j = 0; j < corners_l_array[i].size(); j++){
error += abs(corners_l[j].x - corners_l_array[i][j].x) + abs(corners_l[j].y - corners_l_array[i][j].y);
error += abs(corners_r[j].x - corners_r_array[i][j].x) + abs(corners_r[j].y - corners_r_array[i][j].y);
}
}
if (error < 1000)
{
corners_l_array.clear();
corners_r_array.clear();
array_index = 0;
return true;
}
else
return false;
}
}

int main(){
VideoCapture camera_l(1);
VideoCapture camera_r(0);

while (!camera_l.isOpened()) { camera_l.open(1); };
while (!camera_r.isOpened()) { camera_r.open(0); };
camera_l.set(CAP_PROP_FRAME_WIDTH, 320);
camera_l.set(CAP_PROP_FRAME_HEIGHT, 240);
camera_r.set(CAP_PROP_FRAME_WIDTH, 320);
camera_r.set(CAP_PROP_FRAME_HEIGHT, 240);

Mat frame_l, frame_r;

Size boardSize(9, 6);
const float squareSize = 26.f;  // Set this to your actual square size

vector<vector<Point2f> > imagePoints_l;
vector<vector<Point2f> > imagePoints_r;

int nimages = 0;

while (1){
camera_l >> frame_l;
camera_r >> frame_r;

bool found_l = false, found_r = false;

vector<Point2f>corners_l, corners_r;
found_l = findChessboardCorners(frame_l, boardSize, corners_l,
CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
found_r = findChessboardCorners(frame_r, boardSize, corners_r,
CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);

if (found_l && found_r &&ChessboardStable(corners_l, corners_r)) {
Mat viewGray;
cvtColor(frame_l, viewGray, COLOR_BGR2GRAY);
cornerSubPix(viewGray, corners_l, Size(11, 11),
Size(-1, -1), TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));
cvtColor(frame_r, viewGray, COLOR_BGR2GRAY);
cornerSubPix(viewGray, corners_r, Size(11, 11),
Size(-1, -1), TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));

imagePoints_l.push_back(corners_l);
imagePoints_r.push_back(corners_r);
++nimages;
frame_l += 100;
frame_r += 100;

drawChessboardCorners(frame_l, boardSize, corners_l, found_l);
drawChessboardCorners(frame_r, boardSize, corners_r, found_r);

putText(frame_l, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
putText(frame_r, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
imshow("Left Camera", frame_l);
imshow("Right Camera", frame_r);
char c = (char)waitKey(500);
if (c == 27 || c == 'q' || c == 'Q') //Allow ESC to quit
exit(-1);

if (nimages >= 30)
break;
}
else{
drawChessboardCorners(frame_l, boardSize, corners_l, found_l);
drawChessboardCorners(frame_r, boardSize, corners_r, found_r);

putText(frame_l, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
putText(frame_r, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
imshow("Left Camera", frame_l);
imshow("Right Camera", frame_r);

char key = waitKey(1);
if (key == 27)
break;
}

}
if (nimages < 20){ cout << "Not enough" << endl; return -1; }

vector<vector<Point2f> > imagePoints[2] = { imagePoints_l, imagePoints_r };
vector<vector<Point3f> > objectPoints;
objectPoints.resize(nimages);

for (int i = 0; i < nimages; i++)
{
for (int j = 0; j < boardSize.height; j++)
for (int k = 0; k < boardSize.width; k++)
objectPoints[i].push_back(Point3f(k*squareSize, j*squareSize, 0));
}

cout << "Running stereo calibration ..." << endl;

Size imageSize(320, 240);

Mat cameraMatrix[2], distCoeffs[2];
cameraMatrix[0] = initCameraMatrix2D(objectPoints, imagePoints_l, imageSize, 0);
cameraMatrix[1] = initCameraMatrix2D(objectPoints, imagePoints_r, imageSize, 0);
Mat R, T, E, F;

double rms = stereoCalibrate(objectPoints, imagePoints_l, imagePoints_r,
cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, E, F,
CALIB_FIX_ASPECT_RATIO +
CALIB_ZERO_TANGENT_DIST +
CALIB_USE_INTRINSIC_GUESS +
CALIB_SAME_FOCAL_LENGTH +
CALIB_RATIONAL_MODEL +
CALIB_FIX_K3 + CALIB_FIX_K4 + CALIB_FIX_K5,
TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-5));
cout << "done with RMS error=" << rms << endl;

double err = 0;
int npoints = 0;
vector<Vec3f> lines[2];
for (int i = 0; i < nimages; i++)
{
int npt = (int)imagePoints_l[i].size();
Mat imgpt[2];
imgpt[0] = Mat(imagePoints_l[i]);
undistortPoints(imgpt[0], imgpt[0], cameraMatrix[0], distCoeffs[0], Mat(), cameraMatrix[0]);
computeCorrespondEpilines(imgpt[0], 0 + 1, F, lines[0]);

imgpt[1] = Mat(imagePoints_r[i]);
undistortPoints(imgpt[1], imgpt[1], cameraMatrix[1], distCoeffs[1], Mat(), cameraMatrix[1]);
computeCorrespondEpilines(imgpt[1], 1 + 1, F, lines[1]);

for (int j = 0; j < npt; j++)
{
double errij = fabs(imagePoints[0][i][j].x*lines[1][j][0] +
imagePoints[0][i][j].y*lines[1][j][1] + lines[1][j][2]) +
fabs(imagePoints[1][i][j].x*lines[0][j][0] +
imagePoints[1][i][j].y*lines[0][j][1] + lines[0][j][2]);
err += errij;
}
npoints += npt;
}
cout << "average epipolar err = " << err / npoints << endl;

FileStorage fs("intrinsics.yml", FileStorage::WRITE);
if (fs.isOpened())
{
fs << "M1" << cameraMatrix[0] << "D1" << distCoeffs[0] <<
"M2" << cameraMatrix[1] << "D2" << distCoeffs[1];
fs.release();
}
else
cout << "Error: can not save the intrinsic parameters\n";

Mat R1, R2, P1, P2, Q;
Rect validRoi[2];

stereoRectify(cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 1, imageSize, &validRoi[0], &validRoi[1]);

fs.open("extrinsics.yml", FileStorage::WRITE);
if (fs.isOpened())
{
fs << "R" << R << "T" << T << "R1" << R1 << "R2" << R2 << "P1" << P1 << "P2" << P2 << "Q" << Q;
fs.release();
}
else
cout << "Error: can not save the extrinsic parameters\n";

return 0;
}

Note: while capturing calibration views, cover all orientations, positions, and scales of the board.



Once the count reaches 30, the calibration step begins.


Stereo Matching

Here is the complete code. Someone previously asked me how to remove the black borders after calibration; pay attention to this function in the code:
initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);

This computes the rectification maps. Afterwards you can compute the inscribed rectangle of the rectified image, i.e. the rectified region without black borders, at the cost of losing the edges of the original image. A minimal sketch of the cropping step is below.
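A minimal sketch of the crop (my own addition; rectifiedL/rectifiedR stand for the outputs of remap, and validRoi comes from stereoRectify as in the code further down):

cv::Rect valid = validRoi[0] & validRoi[1];     // intersection valid in both views
cv::Mat croppedL = rectifiedL(valid).clone();   // rectified left image without black borders
cv::Mat croppedR = rectifiedR(valid).clone();
// Alternatively, passing alpha = 0 instead of 1 to stereoRectify scales the rectified
// images so that only valid pixels remain, removing the black borders directly.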



Also, er, my own test results this time were not very good; the results I got in the lab earlier were much better than what I got while writing this document, so I hope you will post your own results. Judging by the horizontal lines in the figure below, the calibration still isn't great. Treat this document as a walkthrough of the workflow.





#include <string>
#include <stdio.h>
#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2f> >corners_l_array, corners_r_array;
int array_index = 0;
bool ChessboardStable(vector<Point2f>corners_l, vector<Point2f>corners_r){
if (corners_l_array.size() < 10){
corners_l_array.push_back(corners_l);
corners_r_array.push_back(corners_r);
return false;
}
else{
corners_l_array[array_index % 10] = corners_l;
corners_r_array[array_index % 10] = corners_r;
array_index++;
double error = 0.0;
for (int i = 0; i < corners_l_array.size(); i++){
for (int j = 0; j < corners_l_array[i].size(); j++){
error += abs(corners_l[j].x - corners_l_array[i][j].x) + abs(corners_l[j].y - corners_l_array[i][j].y);
error += abs(corners_r[j].x - corners_r_array[i][j].x) + abs(corners_r[j].y - corners_r_array[i][j].y);
}
}
if (error < 1000)
{
corners_l_array.clear();
corners_r_array.clear();
array_index = 0;
return true;
}
else
return false;
}
}

int main(){
cv::VideoCapture camera_l(1);
cv::VideoCapture camera_r(0);

camera_l.set(CAP_PROP_FRAME_WIDTH, 320);
camera_l.set(CAP_PROP_FRAME_HEIGHT, 240);
camera_r.set(CAP_PROP_FRAME_WIDTH, 320);
camera_r.set(CAP_PROP_FRAME_HEIGHT, 240);

if (!camera_l.isOpened()){ cout << "No left camera!" << endl; return -1; }
if (!camera_r.isOpened()){ cout << "No right camera!" << endl; return -1; }

cv::Mat frame_l, frame_r;

Size boardSize(9, 6);
const float squareSize = 26.f;  // Set this to your actual square size

vector<Mat> goodFrame_l;
vector<Mat> goodFrame_r;

vector<vector<Point2f> > imagePoints_l;
vector<vector<Point2f> > imagePoints_r;

vector<vector<Point3f> > objectPoints;

int nimages = 0;

while (1){
camera_l >> frame_l;
camera_r >> frame_r;

bool found_l = false, found_r = false;

vector<Point2f>corners_l, corners_r;
found_l = findChessboardCorners(frame_l, boardSize, corners_l,
CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
found_r = findChessboardCorners(frame_r, boardSize, corners_r,
CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);

if (found_l && found_r &&ChessboardStable(corners_l, corners_r)){
goodFrame_l.push_back(frame_l);
goodFrame_r.push_back(frame_r);

Mat viewGray;
cvtColor(frame_l, viewGray, COLOR_BGR2GRAY);
cornerSubPix(viewGray, corners_l, Size(11, 11),
Size(-1, -1), TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));
cvtColor(frame_r, viewGray, COLOR_BGR2GRAY);
cornerSubPix(viewGray, corners_r, Size(11, 11),
Size(-1, -1), TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));

imagePoints_l.push_back(corners_l);
imagePoints_r.push_back(corners_r);
++nimages;
frame_l += 100;
frame_r += 100;

drawChessboardCorners(frame_l, boardSize, corners_l, found_l);
drawChessboardCorners(frame_r, boardSize, corners_r, found_r);

putText(frame_l, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
putText(frame_r, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
imshow("Left Camera", frame_l);
imshow("Right Camera", frame_r);
char c = (char)waitKey(500);
if (c == 27 || c == 'q' || c == 'Q') //Allow ESC to quit
exit(-1);

if (nimages >= 30)
break;
}
else{
drawChessboardCorners(frame_l, boardSize, corners_l, found_l);
drawChessboardCorners(frame_r, boardSize, corners_r, found_r);

putText(frame_l, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
putText(frame_r, to_string(nimages), Point(20, 20), 1, 1, Scalar(0, 0, 255));
imshow("Left Camera", frame_l);
imshow("Right Camera", frame_r);

char key = waitKey(1);
if (key == 27)
break;
}

}
if (nimages < 20){ cout << "Not enough" << endl; return -1; }

vector<vector<Point2f> > imagePoints[2] = { imagePoints_l, imagePoints_r };

objectPoints.resize(nimages);

for (int i = 0; i < nimages; i++)
{
for (int j = 0; j < boardSize.height; j++)
for (int k = 0; k < boardSize.width; k++)
objectPoints[i].push_back(Point3f(k*squareSize, j*squareSize, 0));
}

cout << "Running stereo calibration ..." << endl;

Size imageSize(320, 240);

Mat cameraMatrix[2], distCoeffs[2];
cameraMatrix[0] = initCameraMatrix2D(objectPoints, imagePoints_l, imageSize, 0);
cameraMatrix[1] = initCameraMatrix2D(objectPoints, imagePoints_r, imageSize, 0);
Mat R, T, E, F;

double rms = stereoCalibrate(objectPoints, imagePoints_l, imagePoints_r,
cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, E, F,
CALIB_FIX_ASPECT_RATIO +
CALIB_ZERO_TANGENT_DIST +
CALIB_USE_INTRINSIC_GUESS +
CALIB_SAME_FOCAL_LENGTH +
CALIB_RATIONAL_MODEL +
CALIB_FIX_K3 + CALIB_FIX_K4 + CALIB_FIX_K5,
TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-5));
cout << "done with RMS error=" << rms << endl;

double err = 0;
int npoints = 0;
vector<Vec3f> lines[2];
for (int i = 0; i < nimages; i++)
{
int npt = (int)imagePoints_l[i].size();
Mat imgpt[2];
imgpt[0] = Mat(imagePoints_l[i]);
undistortPoints(imgpt[0], imgpt[0], cameraMatrix[0], distCoeffs[0], Mat(), cameraMatrix[0]);
computeCorrespondEpilines(imgpt[0], 0 + 1, F, lines[0]);

imgpt[1] = Mat(imagePoints_r[i]);
undistortPoints(imgpt[1], imgpt[1], cameraMatrix[1], distCoeffs[1], Mat(), cameraMatrix[1]);
computeCorrespondEpilines(imgpt[1], 1 + 1, F, lines[1]);

for (int j = 0; j < npt; j++)
{
double errij = fabs(imagePoints[0][i][j].x*lines[1][j][0] +
imagePoints[0][i][j].y*lines[1][j][1] + lines[1][j][2]) +
fabs(imagePoints[1][i][j].x*lines[0][j][0] +
imagePoints[1][i][j].y*lines[0][j][1] + lines[0][j][2]);
err += errij;
}
npoints += npt;
}
cout << "average epipolar err = " << err / npoints << endl;

FileStorage fs("intrinsics.yml", FileStorage::WRITE);
if (fs.isOpened())
{
fs << "M1" << cameraMatrix[0] << "D1" << distCoeffs[0] <<
"M2" << cameraMatrix[1] << "D2" << distCoeffs[1];
fs.release();
}
else
cout << "Error: can not save the intrinsic parameters\n";

Mat R1, R2, P1, P2, Q;
Rect validRoi[2];

stereoRectify(cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 1, imageSize, &validRoi[0], &validRoi[1]);

fs.open("extrinsics.yml", FileStorage::WRITE);
if (fs.isOpened())
{
fs << "R" << R << "T" << T << "R1" << R1 << "R2" << R2 << "P1" << P1 << "P2" << P2 << "Q" << Q;
fs.release();
}
else
cout << "Error: can not save the extrinsic parameters\n";

// OpenCV can handle left-right
// or up-down camera arrangements
bool isVerticalStereo = fabs(P2.at<double>(1, 3)) > fabs(P2.at<double>(0, 3));

// COMPUTE AND DISPLAY RECTIFICATION
Mat rmap[2][2];
// IF BY CALIBRATED (BOUGUET'S METHOD)

//Precompute maps for cv::remap()
initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2, imageSize, CV_16SC2, rmap[1][0], rmap[1][1]);

Mat canvas;
double sf;
int w, h;
if (!isVerticalStereo)
{
sf = 600. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width*sf);
h = cvRound(imageSize.height*sf);
canvas.create(h, w * 2, CV_8UC3);
}
else
{
sf = 300. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width*sf);
h = cvRound(imageSize.height*sf);
canvas.create(h * 2, w, CV_8UC3);
}

destroyAllWindows();

Mat imgLeft, imgRight;

int ndisparities = 16 * 5;   /**< Range of disparity */
int SADWindowSize = 31; /**< Size of the block window. Must be odd */
Ptr<StereoBM> sbm = StereoBM::create(ndisparities, SADWindowSize);
sbm->setMinDisparity(0);
//sbm->setNumDisparities(64);
sbm->setTextureThreshold(10);
sbm->setDisp12MaxDiff(-1);
sbm->setPreFilterCap(31);
sbm->setUniquenessRatio(25);
sbm->setSpeckleRange(32);
sbm->setSpeckleWindowSize(100);

Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, 64, 7,
10 * 7 * 7,
40 * 7 * 7,
1, 63, 10, 100, 32, StereoSGBM::MODE_SGBM);

Mat rimg, cimg;
Mat Mask;
while (1)
{
camera_l >> frame_l;
camera_r >> frame_r;

if (frame_l.empty() || frame_r.empty())
continue;

remap(frame_l, rimg, rmap[0][0], rmap[0][1], INTER_LINEAR);
rimg.copyTo(cimg);
Mat canvasPart1 = !isVerticalStereo ? canvas(Rect(w * 0, 0, w, h)) : canvas(Rect(0, h * 0, w, h));
resize(cimg, canvasPart1, canvasPart1.size(), 0, 0, INTER_AREA);
Rect vroi1(cvRound(validRoi[0].x*sf), cvRound(validRoi[0].y*sf),
cvRound(validRoi[0].width*sf), cvRound(validRoi[0].height*sf));

remap(frame_r, rimg, rmap[1][0], rmap[1][1], INTER_LINEAR);
rimg.copyTo(cimg);
Mat canvasPart2 = !isVerticalStereo ? canvas(Rect(w * 1, 0, w, h)) : canvas(Rect(0, h * 1, w, h));
resize(cimg, canvasPart2, canvasPart2.size(), 0, 0, INTER_AREA);
Rect vroi2 = Rect(cvRound(validRoi[1].x*sf), cvRound(validRoi[1].y*sf),
cvRound(validRoi[1].width*sf), cvRound(validRoi[1].height*sf));

Rect vroi = vroi1&vroi2;

imgLeft = canvasPart1(vroi).clone();
imgRight = canvasPart2(vroi).clone();

rectangle(canvasPart1, vroi1, Scalar(0, 0, 255), 3, 8);
rectangle(canvasPart2, vroi2, Scalar(0, 0, 255), 3, 8);

if (!isVerticalStereo)
for (int j = 0; j < canvas.rows; j += 32)
line(canvas, Point(0, j), Point(canvas.cols, j), Scalar(0, 255, 0), 1, 8);
else
for (int j = 0; j < canvas.cols; j += 32)
line(canvas, Point(j, 0), Point(j, canvas.rows), Scalar(0, 255, 0), 1, 8);

cvtColor(imgLeft, imgLeft, COLOR_BGR2GRAY);
cvtColor(imgRight, imgRight, COLOR_BGR2GRAY);

//-- And create the image in which we will save our disparities
Mat imgDisparity16S = Mat(imgLeft.rows, imgLeft.cols, CV_16S);
Mat imgDisparity8U = Mat(imgLeft.rows, imgLeft.cols, CV_8UC1);
Mat sgbmDisp16S = Mat(imgLeft.rows, imgLeft.cols, CV_16S);
Mat sgbmDisp8U = Mat(imgLeft.rows, imgLeft.cols, CV_8UC1);

if (imgLeft.empty() || imgRight.empty())
{
std::cout << " --(!) Error reading images " << std::endl; return -1;
}

sbm->compute(imgLeft, imgRight, imgDisparity16S);

imgDisparity16S.convertTo(imgDisparity8U, CV_8UC1, 255.0 / 1000.0);
cv::compare(imgDisparity16S, 0, Mask, CMP_GE);
applyColorMap(imgDisparity8U, imgDisparity8U, COLORMAP_HSV);
Mat disparityShow;
imgDisparity8U.copyTo(disparityShow, Mask);

sgbm->compute(imgLeft, imgRight, sgbmDisp16S);

sgbmDisp16S.convertTo(sgbmDisp8U, CV_8UC1, 255.0 / 1000.0);
cv::compare(sgbmDisp16S, 0, Mask, CMP_GE);
applyColorMap(sgbmDisp8U, sgbmDisp8U, COLORMAP_HSV);
Mat  sgbmDisparityShow;
sgbmDisp8U.copyTo(sgbmDisparityShow, Mask);

imshow("bmDisparity", disparityShow);
imshow("sgbmDisparity", sgbmDisparityShow);
imshow("rectified", canvas);
char c = (char)waitKey(1);
if (c == 27 || c == 'q' || c == 'Q')
break;
}
return 0;
}

You will probably also want code that loads the saved parameters and runs only the stereo matching, so I include it below as well.

Er, because you are not going to re-calibrate every single time you run the matcher.
#include <string>
#include <stdio.h>
#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

int main(){
cv::VideoCapture camera_l(1);
cv::VideoCapture camera_r(0);

camera_l.set(CAP_PROP_FRAME_WIDTH, 320);
camera_l.set(CAP_PROP_FRAME_HEIGHT, 240);
camera_r.set(CAP_PROP_FRAME_WIDTH, 320);
camera_r.set(CAP_PROP_FRAME_HEIGHT, 240);

if (!camera_l.isOpened()){ cout << "No left camera!" << endl; return -1; }
if (!camera_r.isOpened()){ cout << "No right camera!" << endl; return -1; }

Mat cameraMatrix[2], distCoeffs[2];

FileStorage fs("intrinsics.yml", FileStorage::READ);
if (fs.isOpened())
{
fs["M1"] >> cameraMatrix[0];
fs["D1"] >> distCoeffs[0];
fs["M2"] >> cameraMatrix[1];
fs["D2"] >> distCoeffs[1];
fs.release();
}
else
cout << "Error: can not save the intrinsic parameters\n";

Mat R, T, E, F;
Mat R1, R2, P1, P2, Q;
Rect validRoi[2];
Size imageSize(320, 240);

fs.open("extrinsics.yml", FileStorage::READ);
if (fs.isOpened())
{
fs["R"] >> R;
fs["T"] >> T;
fs["R1"] >> R1;
fs["R2"] >> R2;
fs["P1"] >> P1;
fs["P2"] >> P2;
fs["Q"] >> Q;
fs.release();
}
else
cout << "Error: can not save the extrinsic parameters\n";

stereoRectify(cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 1, imageSize, &validRoi[0], &validRoi[1]);

// OpenCV can handle left-right
// or up-down camera arrangements
bool isVerticalStereo = fabs(P2.at<double>(1, 3)) > fabs(P2.at<double>(0, 3));

// COMPUTE AND DISPLAY RECTIFICATION
Mat rmap[2][2];
// IF BY CALIBRATED (BOUGUET'S METHOD)

//Precompute maps for cv::remap()
initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, imageSize, CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2, imageSize, CV_16SC2, rmap[1][0], rmap[1][1]);

Mat canvas;
double sf;
int w, h;
if (!isVerticalStereo)
{
sf = 600. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width*sf);
h = cvRound(imageSize.height*sf);
canvas.create(h, w * 2, CV_8UC3);
}
else
{
sf = 300. / MAX(imageSize.width, imageSize.height);
w = cvRound(imageSize.width*sf);
h = cvRound(imageSize.height*sf);
canvas.create(h * 2, w, CV_8UC3);
}

cv::Mat frame_l, frame_r;
Mat imgLeft, imgRight;

int ndisparities = 16 * 5;   /**< Range of disparity */
int SADWindowSize = 31; /**< Size of the block window. Must be odd */
Ptr<StereoBM> sbm = StereoBM::create(ndisparities, SADWindowSize);
//    sbm->setMinDisparity(0);
//    sbm->setNumDisparities(64);
//    sbm->setTextureThreshold(10);
//    sbm->setDisp12MaxDiff(-1);
//    sbm->setPreFilterCap(31);
//    sbm->setUniquenessRatio(25);
//    sbm->setSpeckleRange(32);
//    sbm->setSpeckleWindowSize(100);

Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, 64, 7,
10 * 7 * 7,
40 * 7 * 7,
1, 63, 10, 100, 32, StereoSGBM::MODE_SGBM);

Mat rimg, cimg;
Mat Mask;
while (1)
{
camera_l >> frame_l;
camera_r >> frame_r;

if (frame_l.empty() || frame_r.empty())
continue;

remap(frame_l, rimg, rmap[0][0], rmap[0][1], INTER_LINEAR);
rimg.copyTo(cimg);
Mat canvasPart1 = !isVerticalStereo ? canvas(Rect(w * 0, 0, w, h)) : canvas(Rect(0, h * 0, w, h));
resize(cimg, canvasPart1, canvasPart1.size(), 0, 0, INTER_AREA);
Rect vroi1(cvRound(validRoi[0].x*sf), cvRound(validRoi[0].y*sf),
cvRound(validRoi[0].width*sf), cvRound(validRoi[0].height*sf));

remap(frame_r, rimg, rmap[1][0], rmap[1][1], INTER_LINEAR);
rimg.copyTo(cimg);
Mat canvasPart2 = !isVerticalStereo ? canvas(Rect(w * 1, 0, w, h)) : canvas(Rect(0, h * 1, w, h));
resize(cimg, canvasPart2, canvasPart2.size(), 0, 0, INTER_AREA);
Rect vroi2 = Rect(cvRound(validRoi[1].x*sf), cvRound(validRoi[1].y*sf),
cvRound(validRoi[1].width*sf), cvRound(validRoi[1].height*sf));

Rect vroi = vroi1&vroi2;

imgLeft = canvasPart1(vroi).clone();
imgRight = canvasPart2(vroi).clone();

rectangle(canvasPart1, vroi1, Scalar(0, 0, 255), 3, 8);
rectangle(canvasPart2, vroi2, Scalar(0, 0, 255), 3, 8);

if (!isVerticalStereo)
for (int j = 0; j < canvas.rows; j += 32)
line(canvas, Point(0, j), Point(canvas.cols, j), Scalar(0, 255, 0), 1, 8);
else
for (int j = 0; j < canvas.cols; j += 32)
line(canvas, Point(j, 0), Point(j, canvas.rows), Scalar(0, 255, 0), 1, 8);

cvtColor(imgLeft, imgLeft, COLOR_BGR2GRAY);
cvtColor(imgRight, imgRight, COLOR_BGR2GRAY);

//-- And create the image in which we will save our disparities
Mat imgDisparity16S = Mat(imgLeft.rows, imgLeft.cols, CV_16S);
Mat imgDisparity8U = Mat(imgLeft.rows, imgLeft.cols, CV_8UC1);
Mat sgbmDisp16S = Mat(imgLeft.rows, imgLeft.cols, CV_16S);
Mat sgbmDisp8U = Mat(imgLeft.rows, imgLeft.cols, CV_8UC1);

if (imgLeft.empty() || imgRight.empty())
{
std::cout << " --(!) Error reading images " << std::endl; return -1;
}

sbm->compute(imgLeft, imgRight, imgDisparity16S);

imgDisparity16S.convertTo(imgDisparity8U, CV_8UC1, 255.0 / 1000.0);
cv::compare(imgDisparity16S, 0, Mask, CMP_GE);
applyColorMap(imgDisparity8U, imgDisparity8U, COLORMAP_HSV);
Mat disparityShow;
imgDisparity8U.copyTo(disparityShow, Mask);

sgbm->compute(imgLeft, imgRight, sgbmDisp16S);

sgbmDisp16S.convertTo(sgbmDisp8U, CV_8UC1, 255.0 / 1000.0);
cv::compare(sgbmDisp16S, 0, Mask, CMP_GE);
applyColorMap(sgbmDisp8U, sgbmDisp8U, COLORMAP_HSV);
Mat  sgbmDisparityShow;
sgbmDisp8U.copyTo(sgbmDisparityShow, Mask);

imshow("bmDisparity", disparityShow);
imshow("sgbmDisparity", sgbmDisparityShow);
imshow("rectified", canvas);
char c = (char)waitKey(1);
if (c == 27 || c == 'q' || c == 'Q')
break;
}
return 0;
}
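If you also want metric depth rather than just a colored disparity map, a minimal sketch (my own addition) using the Q matrix loaded from extrinsics.yml could look like this, placed after sbm->compute in the loop above. Strictly speaking Q corresponds to the full rectified image, so for accurate depth you would run the matcher on the uncropped remapped images rather than the resized canvas parts.

// BM/SGBM return disparity as 16*d in CV_16S, so convert to float first.
cv::Mat disp32F, xyz;
imgDisparity16S.convertTo(disp32F, CV_32F, 1.0 / 16.0);
cv::reprojectImageTo3D(disp32F, xyz, Q, true);   // xyz is CV_32FC3; units follow squareSize (mm)
// xyz.at<cv::Vec3f>(y, x)[2] is then the depth Z at pixel (x, y).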

Thank you for having the patience to read through this document.