Thursday, August 28, 2008

A15 - Color Camera Processing

In photography and image processing, color balance is the global adjustment of the intensities of the colors (typically red, green, and blue). An important goal of this adjustment is to render specific colors – particularly neutral colors – correctly; hence, the general method is sometimes called gray balance, neutral balance, or white balance. Color balance changes the overall mixture of colors in an image and is used for color correction; generalized versions of color balance are used to get colors other than neutrals to also appear correct or pleasing.

Digital cameras come with white balance (WB) settings. Here are some of the white balance settings available in digital cameras:

Daylight - used when taking photos outdoors
Incandescent - used when taking photos under an incandescent bulb
Fluorescent - used when taking photos under a fluorescent bulb
Sunset - used when taking photos during sunset
Cloudy - used when taking photos during cloudy days

When you use the right WB setting on your camera, unrealistic color casts are removed, so that objects which appear white in person are rendered white in your photo.

In this activity, I captured images under the wrong WB settings and tried to correct them using Scilab.
The first method (White Patch) takes the RGB values of a white part of the image and divides each channel of the entire image by the corresponding value. The second half of the code uses the Gray World algorithm, where the reference white is instead taken as the mean of each channel over the whole image.
I used the following code to treat the images:

stacksize(20000000);
im = imread("leaves.jpg");
im1 = imread("leaves1.jpg");
//Reference white: mean RGB of the cropped white patch (leaves1.jpg)
Rw = mean(im1(:,:,1));
Gw = mean(im1(:,:,2));
Bw = mean(im1(:,:,3));
//Getting the RGB values of the image
R = im(:,:,1);
G = im(:,:,2);
B = im(:,:,3);
//Treating the image
new = zeros(im);
new(:,:,1) = R./Rw;
new(:,:,2) = G./Gw;
new(:,:,3) = B./Bw;
//Normalize to a maximum of 1 and write the new image
maxa = max(max(max(new)));
imnew = new./maxa;
imwrite(imnew,"newleaf.jpg");
//Gray World
Rwg = mean(im(:,:,1));
Gwg = mean(im(:,:,2));
Bwg = mean(im(:,:,3));
newg = zeros(im);
newg(:,:,1) = im(:,:,1)./Rwg;
newg(:,:,2) = im(:,:,2)./Gwg;
newg(:,:,3) = im(:,:,3)./Bwg;
maxb = max(max(max(newg)));
imnewg = newg./maxb;
imwrite(imnewg,"newgleaf.jpg");
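
The white-patch correction above can also be wrapped in a small helper so the same steps can be reused for every image. A minimal sketch (the function name whitebalance and the output file name are my own, not part of the original code):

//Divide each channel by the reference white, then rescale to a maximum of 1
function out = whitebalance(im, Rw, Gw, Bw)
out = zeros(im);
out(:,:,1) = im(:,:,1)./Rw;
out(:,:,2) = im(:,:,2)./Gw;
out(:,:,3) = im(:,:,3)./Bw;
out = out./max(max(max(out)));
endfunction
imwrite(whitebalance(im, Rw, Gw, Bw), "balanced.jpg");

For the Gray World version, the same function can be called with the channel means of the image itself as the reference values.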

And here are the results:
Figure 1. Photo taken with the incandescent WB setting when the fluorescent setting should have been used.
Figure 2. Treated Image of Figure 1 (White Balanced)
Figure 3. Treated Image of Figure 1 (Gray Balanced)

The other treated images appear below.

Figure 4. Photo of leaves taken under broad daylight. (L-R) The setting used was incandescent; white balanced image of the leaves; gray balanced image of the leaves.

Figure 5. Photo of objects taken under broad daylight. (L-R) The setting used was incandescent; white balanced image of the objects; gray balanced image of the objects.

References:
en.wikipedia.org/wiki/White_balance
www.cambridgeincolour.com/tutorials/white-balance.htm

Thursday, August 7, 2008

A13 - Photometric Stereo

Consider an object illuminated by a source. We assume that the intensity captured by the camera at point (x,y) is given by I(x,y). The goal of this activity is to approximate the shape of the surface of the object given these intensities. Multiple images of the surface, taken with the source placed at different known locations v, give information about the shape of the surface. We start with the following expression:

I = vg

where I contains the measured intensities, v contains the source directions, and g is to be computed. Since I and v are known, g follows by least squares:

g = inv(v'v) v'I, where v' denotes the transpose of v
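
For the four images used below, each pixel gives four intensity measurements. Stacking the four source directions as the rows of a 4x3 matrix V, this is just the relation above written out for this activity (a sketch of the per-pixel least-squares estimate):

\[
\begin{pmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \end{pmatrix} = V\,g
\quad\Longrightarrow\quad
\hat{g} = \left(V^{T}V\right)^{-1} V^{T} \begin{pmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \end{pmatrix}
\]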

What we need is the unit normal, which is g divided by its magnitude:

n = g/|g|

We then let z = f(x,y) be the equation of the surface. Since the surface normal is proportional to (-df/dx, -df/dy, 1), matching it with the unit normal n gives the following expressions:

df/dx = -nx/nz

df/dy = -ny/nz

where (nx, ny, nz) are the components of the unit normal n.

Finally, integrating the RHS of the two equations above, we get an approximation of f(x,y).
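
In discrete form, this is what the cumsum calls in the code below implement: the height at (x,y) is approximated, up to an additive constant, by cumulative sums of the two gradient images,

\[
f(x,y) \;\approx\; \sum_{x' \le x} \frac{\partial f}{\partial x}(x',y) \;+\; \sum_{y' \le y} \frac{\partial f}{\partial y}(x,y')
\]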

The code is given below:

//We first load the intensity images of the object captured by the camera under the four sources
loadmatfile('Photos.mat',['I1','I2','I3', 'I4']);
//Defining the matrices to be used
v = [0.085832 0.17365 0.98106; 0.085832 -0.17365 0.98106; 0.17365 0 0.98481; 0.16318 -0.34202 0.92542];
Im1 = I1(:);
Im2 = I2(:);
Im3 = I3(:);
Im4 = I4(:);
I = [Im1';Im2';Im3';Im4'];
g = inv(v'*v)*v'*I;
//Magnitude of g at each pixel
for i=1:size(g,2)
n1(i) = sqrt((g(1,i)^2)+(g(2,i)^2)+(g(3,i)^2));
end;
n1 = n1 + 0.000000001; //small offset to avoid division by zero
//Components of the unit normal
n(1,:) = g(1,:)./n1';
n(2,:) = g(2,:)./n1';
n(3,:) = g(3,:)./n1';
//Differentials
dfx= -n(1,:) ./(n(3,:)+0.000000001);
dfy= -n(2,:)./(n(3,:)+0.000000001);
//Reshape the gradients back into 128x128 images
Imx = matrix(dfx,128,128);
Imy = matrix(dfy,128,128);
//Integration
intx = cumsum(Imx,2);
inty = cumsum(Imy,1);
Image = intx + inty;
mesh(Image);

The result is:

April and Rica helped me in this activity.

I rate myself 10/10 for my effort in this activity.

Monday, August 4, 2008

A11 - Camera Calibration

As the title suggests, a camera calibration procedure was used to relate the object's real-world coordinates to the coordinates of its captured image. A checkerboard was chosen as the object because its boxes serve as grid lines of a 3-dimensional system. The object's image is shown in Figure 1. Note that each box has a dimension of 1" x 1" in the real world.

One must also note that we have a 3-dimensional object, so every object point can be labelled (x,y,z). On the other hand, any point in the captured image is only 2-dimensional. Linear algebra can be used to perform the transformation (x,y,z) --> (yp,zp), where (yp,zp) are the image coordinates.


Figure 1. Checker board

1. The first step is to choose the primary axes both for the object and the image. Figure 2 shows the x-,y-, and z-axis for the object.


Figure 2. Object's axes

For the image, the default image axes in Scilab are shown below.


Figure 3: Image's axes

In the Camera Calibration handout, it was shown that if (x,y,z) is a point in the real world and (yp,zp) is its corresponding point in the image, then we have the following relations:


Figure 4. Object to Image relation
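
Based on the code in steps 5 and 7 below, the relation in Figure 4 has the following form, with eleven unknown camera parameters a1, ..., a11 (written out here for reference):

\[
y_p = \frac{a_1 x + a_2 y + a_3 z + a_4}{a_9 x + a_{10} y + a_{11} z + 1},
\qquad
z_p = \frac{a_5 x + a_6 y + a_7 z + a_8}{a_9 x + a_{10} y + a_{11} z + 1}
\]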


2. What I need are 23 points in both the object and the image coordinates. Each point is substituted into the relations above, after which we can obtain the parameters a.

Locating 23 object points is easy. Every box is 1 inch long/wide, so all I have to do is count the boxes. The chosen points in the object are shown in Figure 5. The origin is labelled “1”.


Figure 5. Object points with labels


Figure 6. Object coordinates (inches)


3. What is left to do is to locate the corresponding image coordinates. We load the image shown in Figure 5 in Scilab and use the locate function to get the pixel coordinates of the 23 points.


//Locate 23 arbitrary points in the checkerboard image
stacksize(20000000);
im = imread("check.jpg");
imshow(im); o = locate(1); //to locate the origin
xbasc;
imshow(im);
w = locate(23); //to locate 23 arbitrary points in the image
xbasc;


Using this, I obtained the following:


Figure 7. Image coordinates (pixels)


4. Since I want to perform the operation on a series of points, I listed these points in Scilab.

//Image coordinates in y and z, measured relative to the origin
yp = w(1,:) - o(1,1);
zp = w(2,:) - o(2,1);
//Corresponding object coordinates
x = [0 0 0 0 0 0 0 0 0 0 0 0 3 3 5 3 2 4 7 6 8 0 8];
y = [2 4 6 2 6 2 4 2 2 0 8 8 0 0 0 0 0 0 0 0 0 0 0];
z = [1 3 5 5 7 7 9 9 11 12 12 0 1 3 5 6 9 10 8 11 12 7 0];

5. I can now define the matrix Q and the vector d to be used.

for i = 1:length(x)
//Two rows of Q and two entries of d per calibration point
Q((2*i)-1,:) = [x(i) y(i) z(i) 1 0 0 0 0 -(yp(i)*x(i)) -(yp(i)*y(i)) -(yp(i)*z(i))];
Q(2*i,:) = [0 0 0 0 x(i) y(i) z(i) 1 -(zp(i)*x(i)) -(zp(i)*y(i)) -(zp(i)*z(i))];
d((2*i)-1) = yp(i);
d(2*i) = zp(i);
end;

6. Solving for a by least squares:

a = inv(Q'*Q)*Q'*d;
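
The normal-equation form above can be sensitive to roundoff when Q is badly conditioned. As an aside (not part of the original code), the same least-squares problem can also be solved in Scilab with the backslash operator:

a = Q\d; //least-squares solution of the overdetermined system Q*a = d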


I then obtained the following values:
Figure 8. “a” values

7. To check whether the process above is accurate, we pick new object and image coordinates. We substitute the object coordinates into the relations above and see how much the result deviates from the located image points.



Figure 9. New test coordinates (red)

//Accuracy test: predict the image coordinates of six new object points
xtest = [6 6 4 0 0 0];
ytest = [0 0 0 5 5 4];
ztest = [6 9 7 10 7 5];
for j = 1:6
yptest(j) = ((a(1)*xtest(j))+(a(2)*ytest(j))+(a(3)*ztest(j))+a(4))/((a(9)*xtest(j))+(a(10)*ytest(j))+(a(11)*ztest(j))+1);
zptest(j) = ((a(5)*xtest(j))+(a(6)*ytest(j))+(a(7)*ztest(j))+a(8))/((a(9)*xtest(j))+(a(10)*ytest(j))+(a(11)*ztest(j))+1);
end;
yzptest = [yptest,zptest];
//Locate the same six points in the image and compare
imtest = imread("check2.jpg");
imshow(imtest);
wtest = locate(6);
dev = (yzptest - wtest')
devy = dev(:,1);
devz = dev(:,2);
wy = wtest(1,:);
wz = wtest(2,:);
//Percent deviation of the predicted coordinates from the located ones
for i = 1:6
pdevy(i) = devy(i)*100/wy(i)
pdevz(i) = devz(i)*100/wz(i)
end;


And for each of these points the percent deviation is given by:

% deviation in y: 2.1570407, 0.9817405, 0.5051955, -0.2694823, -0.1936041, -0.0723059

% deviation in z: 0.0671348, -0.2211321, 0.1453209, -0.2033003, -0.5786263, -0.0462943


My largest deviation is 2.1570407%, which I attribute to the small graphics window and the locate function in Scilab. Therefore I can conclude that the process is successful.

I rate myself 9/10 for this activity, because although I put a lot of effort into finishing it, I was not able to meet the deadline.


Acknowledgement:

Ed helped me in this activity. Ralph helped me in publishing this blog since I've been having internet problems this week.