
How is 3-D reconstruction done? (Unresolved)

2014-04-28 16:49 · 211 views

1. Method 1: MATLAB 2014a

%% Reconstruct the 3-D Scene
% Reconstruct the 3-D world coordinates of points corresponding to each
% pixel from the disparity map.
pointCloud = reconstructScene(disparityMap, stereoParams);

% Convert from millimeters to meters.
pointCloud = pointCloud / 1000;

%% Visualize the 3-D Scene
% Plot the 3-D points by calling the plot3 function. We have to call the
% function in a loop for each color, because it can only plot points of a
% single color at a time. To minimize the number of iterations of the loop,
% reduce the number of colors in the image by calling rgb2ind.

% Reduce the number of colors in the image to 128.
[reducedColorImage, reducedColorMap] = rgb2ind(J1, 128);

% What do these coordinates mean? 840 630?
% (A figure's Position property is [left bottom width height] in pixels,
% so this line just places the figure at the corner and makes it
% 840 x 630 pixels.)
% Plot the 3D points of each color.
hFig = figure; hold on;
set(hFig, 'Position', [1 1 840 630]);
hAxes = gca;

X = pointCloud(:, :, 1);
Y = pointCloud(:, :, 2);
Z = pointCloud(:, :, 3);

for i = 1:size(reducedColorMap, 1)
    % Find the pixels of the current color.
    x = X(reducedColorImage == i-1);
    y = Y(reducedColorImage == i-1);
    z = Z(reducedColorImage == i-1);

    if isempty(x)
        continue;
    end

    % Eliminate invalid values.
    idx = isfinite(x);
    x = x(idx);
    y = y(idx);
    z = z(idx);

    % Plot points between 3 and 7 meters away from the camera.
    maxZ = 7;
    minZ = 3;
    x = x(z > minZ & z < maxZ);
    y = y(z > minZ & z < maxZ);
    z = z(z > minZ & z < maxZ);

    plot3(hAxes, x, y, z, '.', 'MarkerEdgeColor', reducedColorMap(i, :));
    hold on;
end

% Set up the view.
grid on;
cameratoolbar show;
axis vis3d
axis equal;
set(hAxes, 'YDir', 'reverse', 'ZDir', 'reverse');
camorbit(-20, 25, 'camera', [0 1 0]);
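The heavy lifting above is done by reconstructScene, which triangulates every pixel from its disparity. As a rough illustration of the underlying math for a rectified stereo pair (not the toolbox source), here is a Python sketch; all camera parameters in it are made-up example values, not the ones from this calibration:

```python
def triangulate(u, v, disparity, fx, fy, cx, cy, baseline_mm):
    """Back-project one rectified pixel (u, v) to camera coordinates in mm."""
    z = fx * baseline_mm / disparity   # depth from disparity: z = f * B / d
    x = (u - cx) * z / fx              # scale the normalized ray by the depth
    y = (v - cy) * z / fy
    return x, y, z

# Hypothetical parameters: 800 px focal length, principal point (320, 240),
# 60 mm baseline; a 12-pixel disparity then lands the point 4 m away.
x, y, z = triangulate(420, 300, 12.0, fx=800.0, fy=800.0,
                      cx=320.0, cy=240.0, baseline_mm=60.0)
print(x, y, z)  # 500.0 300.0 4000.0 -- dividing by 1000 gives meters, as above
```

Dividing by 1000 at the end mirrors the `pointCloud / 1000` millimeters-to-meters conversion in the script.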

2. Method 2: MATLAB 2013a

%% Step 7. Backprojection
% With a stereo depth map and knowledge of the intrinsic parameters of the
% camera, it is possible to backproject image pixels into 3D points [1,2].
% One way to compute the camera intrinsics is with the MATLAB Camera
% Calibration Toolbox [6] from the California Institute of Technology(R).
% Such a tool will produce an intrinsics matrix, K, of the form:
%
%   K = [focal_length_x  skew_x          camera_center_x;
%        0               focal_length_y  camera_center_y;
%        0               0               1];
%
% This relates 3D world coordinates to homogenized camera coordinates via:
%
% $$[x_{camera} \, y_{camera} \, 1]^T = K \cdot [x_{world} \, y_{world} \, z_{world}]^T$$
%
% With the intrinsics matrix, we can backproject each image pixel into a 3D
% ray that describes all the world points that could have been projected
% onto that pixel on the image. This leaves unknown the distance of that
% point to the camera. This is provided by the disparity measurements of
% the stereo depth map as:
%
% $$z_{world} = focal\_length \cdot \frac{stereo\_baseline}{disparity}$$
%
% Note that unitless pixel disparities cannot be used directly in this
% equation. Also, if the stereo baseline (the distance between the two
% cameras) is not well-known, it introduces more unknowns. Thus we
% transform this equation into the general form:
%
% $$z_{world} = a + \frac{b}{disparity}$$
%
% We solve for the two unknowns via least squares by collecting
% a few corresponding depth and disparity values from the scene and using
% them as tie points. The full technique is shown below.
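The least-squares fit for a and b can be checked by hand. A small Python sketch, using the same three tie points that appear in the code below and explicit 2x2 normal equations in place of MATLAB's backslash operator:

```python
# Fit z = a + b/d by least squares from (disparity, depth) tie points.
# Tie points match the MATLAB code: disparities in pixels, depths in meters.
known_ds = [15.0, 9.0, 2.0]
known_zs = [4.0, 4.5, 6.8]

xs = [1.0 / d for d in known_ds]   # regress z against 1/d
n = len(xs)
sxx = sum(x * x for x in xs)
sx = sum(xs)
sxz = sum(x * z for x, z in zip(xs, known_zs))
sz = sum(known_zs)

# Solve the 2x2 normal equations [sxx sx; sx n] * [b; a] = [sxz; sz].
det = sxx * n - sx * sx
b = (n * sxz - sx * sz) / det
a = (sxx * sz - sx * sxz) / det

# The fit reproduces the tie points to within roughly 0.12 m, e.g. at d = 2:
print(round(a + b / 2.0, 2))   # close to 6.8
```

With three tie points and two unknowns the system is overdetermined, so the recovered (a, b) only approximates each measurement; that residual is the price of absorbing the unknown baseline and offsets into two fitted constants.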

% Camera intrinsics matrix
% K = [409.4433  0         204.1225;
%      0         416.0865  146.4133;
%      0         0         1.0000];

% 2014-04-27: intrinsics matrix produced by the MATLAB 2014a calibration
% function -- why is the reconstruction wrong?
% (Likely because cameraParameters.IntrinsicMatrix uses the row-vector
% convention, i.e. it is the TRANSPOSE of the K shown above; transpose it
% before using it with K\.)
K = [23851.8619753065,0,0;0,23561.6461045461,0;591.522934472755,430.985873195448,1]';

% Create a sub-sampled grid for backprojection.
dec = 2;
[X,Y] = meshgrid(1:dec:size(leftI,2), 1:dec:size(leftI,1));
P = K \ [X(:)'; Y(:)'; ones(1, numel(X), 'single')];
Disp = max(0, DdynamicSubpixel(1:dec:size(leftI,1), 1:dec:size(leftI,2)));
hMedF = vision.MedianFilter('NeighborhoodSize', [5 5]);
Disp = step(hMedF, Disp); % Median filter to smooth out noise.

% Derive conversion from disparity to depth with tie points:
knownDs = [15 9 2]';    % Disparity values in pixels
knownZs = [4 4.5 6.8]'; % World z values in meters based on scene measurements.
ab = [1./knownDs ones(size(knownDs), 'single')] \ knownZs; % least squares

% Convert disparity to z (distance from camera).
ZZ = ab(1)./Disp(:)' + ab(2);
% Threshold to [0,8] meters.
ZZdisp = min(8, max(0, ZZ));
Pd = bsxfun(@times, P, ZZ);
% Remove points that are too near or too far.
bad = Pd(3,:) > 8 | Pd(3,:) < 3;
Pd = Pd(:,~bad);
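A plausible reason the 2014a intrinsics "do not reconstruct correctly" is a convention mismatch: MATLAB's calibrator reports IntrinsicMatrix in the row-vector convention ([x y 1] = [X Y Z] * IntrinsicMatrix), which is the transpose of the column-vector K that `K\[X(:)'; Y(:)'; ...]` expects. The shape of the matrix pasted above (zeros in the last column, principal point in the last row) matches the row convention. A Python check of the relationship, using made-up numbers:

```python
# Hypothetical intrinsics: focal lengths and principal point in pixels.
fx, fy, cx, cy = 800.0, 820.0, 320.0, 240.0

# MATLAB row-vector convention (what the 2014a calibrator returns).
K_row = [[fx, 0.0, 0.0],
         [0.0, fy, 0.0],
         [cx,  cy, 1.0]]

# Standard column-vector K (what the backprojection K\[u; v; 1] expects).
K_col = [[fx, 0.0, cx],
         [0.0, fy, cy],
         [0.0, 0.0, 1.0]]

# Transposing the row-convention matrix recovers the standard K.
K_t = [[K_row[j][i] for j in range(3)] for i in range(3)]
assert K_t == K_col
```

Feeding the untransposed matrix to the backslash solve scales pixels by the wrong factors and misplaces the principal point, which would explain a badly warped reconstruction.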

% In the reprojection, the walls, ceiling, and floor all appear mutually
% orthogonal, and the scene is well reconstructed. Since camera calibration
% also gives intrinsics with units, we can assign units to the
% backprojected points. The dimensions of the plot are given in meters, and
% one can verify that the sizes of objects and the scene appear correct.

% Collect quantized colors for point display.
Colors = rightI3chan(1:dec:size(rightI,1), 1:dec:size(rightI,2), :);
Colors = reshape(Colors, [size(Colors,1)*size(Colors,2) size(Colors,3)]);
Colors = Colors(~bad,:);
cfac = 20;
C8 = round(cfac*Colors);
[U,I,J] = unique(C8, 'rows');
C8 = C8/cfac;
figure(2), clf, hold on, axis equal;
for i = 1:size(U,1)
    plot3(-Pd(1,J==i), -Pd(3,J==i), -Pd(2,J==i), '.', 'Color', C8(I(i),:));
end
view(161,14), grid on;
xlabel('x (meters)'), ylabel('z (meters)'), zlabel('y (meters)');