Monday, August 4, 2008

A11 - Camera Calibration

As the title suggests, a camera calibration process was used to relate an object's real-world coordinates to its captured image's coordinates. A checkerboard was chosen as the object because its boxes serve as the grid lines of a 3-dimensional coordinate system. The object's image is shown in Figure 1. Note that each box measures 1" x 1" in the real world.

One must also note that the object is 3-dimensional, so every object point can be labelled (x, y, z). On the other hand, any point in the captured image is only 2-dimensional. Linear algebra can be used to perform the transformation (x, y, z) --> (yp, zp), where yp and zp are the image coordinates.
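This 3D-to-2D mapping is a projective (pinhole-style) camera transformation. As an illustration only (not the Scilab code used in this activity), here is a small Python sketch with a made-up 3x4 projection matrix; the matrix entries and the test point are invented:

```python
import numpy as np

# A made-up 3x4 projection matrix: the bottom-right entry is fixed to 1,
# leaving 11 free parameters (analogous to the a1..a11 solved for below).
P = np.array([[120.0,  -5.0,   3.0, 400.0],
              [  2.0, 118.0,  -4.0, 300.0],
              [  1e-3,  2e-3,  5e-4,   1.0]])

X = np.array([2.0, 4.0, 1.0, 1.0])   # object point (x, y, z) in homogeneous form
u = P @ X                            # homogeneous image coordinates
yp, zp = u[0] / u[2], u[1] / u[2]    # divide by the scale to get pixel coordinates
print(yp, zp)
```

The division by the third homogeneous coordinate is what makes the mapping projective rather than a plain linear one.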


Figure 1. Checker board

1. The first step is to choose the primary axes for both the object and the image. Figure 2 shows the x-, y-, and z-axes of the object.


Figure 2. Object's axes

For the image, the default image axes in Scilab are shown below.


Figure 3: Image's axes

In the Camera Calibration handout, it was shown that if (x, y, z) is a point in the real world and (yp, zp) is its corresponding point in the image, then the two are related by

yp = (a1*x + a2*y + a3*z + a4) / (a9*x + a10*y + a11*z + 1)
zp = (a5*x + a6*y + a7*z + a8) / (a9*x + a10*y + a11*z + 1)

where a1 to a11 are the unknown calibration parameters.


Figure 4. Object to Image relation


2. What I need are 23 points in both the object and the image coordinates. Each point is substituted into the relation above; since every point contributes two equations (one for yp, one for zp), 23 points give 46 equations in the 11 unknowns, an overdetermined system from which we can solve for a.

Locating 23 object points is easy. Every box is 1 inch on a side, so all I have to do is count boxes. The chosen points in the object are shown in Figure 5. The origin is labelled "1".


Figure 5. Object points with labels


Figure 6. Object coordinates (inches)


3. What is left to do is to locate the corresponding image coordinates. We load the image shown in Figure 5 into Scilab, and the locate function gives us the pixel coordinates of the 23 points.


//Locate 23 arbitrary points in the checkerboard image
stacksize(20000000); //enlarge the stack so the image fits in memory
im = imread("check.jpg");
imshow(im);
o = locate(1); //click once to mark the origin
xbasc; //clear the graphics window
imshow(im);
w = locate(23); //click the 23 chosen points; w holds their pixel coordinates
xbasc;


Using this, I obtained the following:


Figure 7. Image coordinates (pixels)


4. Since I want to perform the operation on a series of points, I listed these points in Scilab.

//Image coordinates in yp and zp, measured from the origin
yp = w(1,:) - o(1,1);
zp = w(2,:) - o(2,1);
//Corresponding object coordinates (inches)
x = [0 0 0 0 0 0 0 0 0 0 0 0 3 3 5 3 2 4 7 6 8 0 8];
y = [2 4 6 2 6 2 4 2 2 0 8 8 0 0 0 0 0 0 0 0 0 0 0];
z = [1 3 5 5 7 7 9 9 11 12 12 0 1 3 5 6 9 10 8 11 12 7 0];

5. I can now define the matrix Q and the vector d to be used.

//Each point contributes two rows: one for yp, one for zp
for i = 1:length(x)
    Q((2*i)-1,:) = [x(i) y(i) z(i) 1 0 0 0 0 -(yp(i)*x(i)) -(yp(i)*y(i)) -(yp(i)*z(i))];
    Q(2*i,:) = [0 0 0 0 x(i) y(i) z(i) 1 -(zp(i)*x(i)) -(zp(i)*y(i)) -(zp(i)*z(i))];
    d((2*i)-1) = yp(i);
    d(2*i) = zp(i);
end;

6. Solving for a by least squares:

a = inv(Q'*Q)*Q'*d;
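The estimation can be checked end to end in Python (NumPy) with a synthetic camera: the a_true parameters and the random object points below are invented for the check, but the row layout of Q mirrors the Scilab loop above, so a noiseless least-squares fit should recover the parameters exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground-truth parameters a1..a11 for the synthetic camera
a_true = np.array([50.0, -3.0, 2.0, 320.0,
                   1.5, 48.0, -2.5, 240.0,
                   0.001, 0.0005, 0.0008])

pts = rng.uniform(0, 12, size=(23, 3))   # 23 object points (x, y, z)

def project(a, p):
    """Apply the projective relation used in the calibration."""
    x, y, z = p
    w = a[8]*x + a[9]*y + a[10]*z + 1.0
    yp = (a[0]*x + a[1]*y + a[2]*z + a[3]) / w
    zp = (a[4]*x + a[5]*y + a[6]*z + a[7]) / w
    return yp, zp

# Build the 46x11 design matrix: two rows per point, as in the loop above
Q = np.zeros((2 * len(pts), 11))
d = np.zeros(2 * len(pts))
for i, p in enumerate(pts):
    x, y, z = p
    yp, zp = project(a_true, p)
    Q[2*i]     = [x, y, z, 1, 0, 0, 0, 0, -yp*x, -yp*y, -yp*z]
    Q[2*i + 1] = [0, 0, 0, 0, x, y, z, 1, -zp*x, -zp*y, -zp*z]
    d[2*i], d[2*i + 1] = yp, zp

# Least squares: equivalent to inv(Q'*Q)*Q'*d but numerically safer
a_est, *_ = np.linalg.lstsq(Q, d, rcond=None)
```

With real clicked points the fit is only approximate, but on noiseless synthetic data a_est matches a_true to machine precision, which confirms the matrix layout is correct.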


I then obtained the following values:
Figure 8. "a" values

7. To check whether the process above is accurate, we pick new object points and locate their image coordinates. We substitute the object coordinates into the relations above and see how much the result deviates from the located image points.



Figure 9. New test coordinates (red)

//Accuracy test
xtest = [6 6 4 0 0 0];
ytest = [0 0 0 5 5 4];
ztest = [6 9 7 10 7 5];
//Predict the image coordinates of the test points from the calibration
for j = 1:6
    yptest(j) = ((a(1)*xtest(j))+(a(2)*ytest(j))+(a(3)*ztest(j))+a(4))/((a(9)*xtest(j))+(a(10)*ytest(j))+(a(11)*ztest(j))+1);
    zptest(j) = ((a(5)*xtest(j))+(a(6)*ytest(j))+(a(7)*ztest(j))+a(8))/((a(9)*xtest(j))+(a(10)*ytest(j))+(a(11)*ztest(j))+1);
end;
yzptest = [yptest, zptest];
//Locate the same 6 points in the test image
imtest = imread("check2.jpg");
imshow(imtest);
wtest = locate(6);
//Deviation between predicted and located coordinates
dev = (yzptest - wtest');
devy = dev(:,1);
devz = dev(:,2);
wy = wtest(1,:);
wz = wtest(2,:);
//Percent deviation relative to the located coordinates
for i = 1:6
    %devy(i) = devy(i)*100/wy(i);
    %devz(i) = devz(i)*100/wz(i);
end;
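The percent-deviation step at the end can be sketched in Python; the predicted and located values below are made up purely for illustration, standing in for the model's output and the points clicked with locate().

```python
import numpy as np

# Made-up (yp, zp) pairs: model predictions vs. clicked points
predicted = np.array([[612.0, 298.0],
                      [650.5, 402.3]])
located   = np.array([[600.0, 300.0],
                      [648.0, 400.0]])

dev = predicted - located       # absolute deviation in pixels
pct = 100.0 * dev / located     # percent deviation in yp and zp
print(pct)
```

Dividing by the located coordinate makes the error scale-free, which is why a couple of pixels of clicking error shows up as only a fraction of a percent.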


And for each of these points, the percent deviation is:

% deviation in yp:

2.1570407
0.9817405
0.5051955
-0.2694823
-0.1936041
-0.0723059

% deviation in zp:

0.0671348
-0.2211321
0.1453209
-0.2033003
-0.5786263
-0.0462943


My largest deviation is 2.1570407%, which I attribute to the small graphics window and the limited precision of the locate function in Scilab. I therefore conclude that the calibration was successful.

I rate myself 9/10 for this activity: although I put a lot of effort into finishing it, I was not able to meet the deadline.


Acknowledgement:

Ed helped me in this activity. Ralph helped me in publishing this blog since I've been having internet problems this week.
