REAL-TIME FLOW CONTROL OF VISCOUS FLUIDS
USING 3D IMAGE PROCESSING
by
Kabir Kanodia
A Thesis Presented to the
FACULTY OF THE USC VITERBI SCHOOL OF ENGINEERING
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF SCIENCE
(INDUSTRIAL AND SYSTEMS ENGINEERING)
December 2008
Copyright 2008 Kabir Kanodia
Table of Contents
List of Tables ................................................................................................................................... iii
List of Figures .................................................................................................................................. iv
Abstract ............................................................................................................................................ v
Chapter 1: Introduction ................................................................................................................... 1
Chapter 2: Edge-detection filters ..................................................................................................... 3
2.1 Convolution masks ................................................................................................... 8
2.2 Edge detection application .................................................................................... 16
Chapter 3: Recognition of curved objects ...................................................................................... 19
3.1 Representation of curved surfaces ........................................................................ 20
3.2 Extraction of surface curvature ............................................................................. 21
Chapter 4: System Application ....................................................................................................... 24
Chapter 5: Interface ....................................................................................................................... 27
Chapter 6: Main Highlights ............................................................................................................ 28
Chapter 7: Future Upgrades .......................................................................................................... 30
References ..................................................................................................................................... 31
Appendix: Program developed in Visual C# to capture images from a web camera and
process the edges in them to detect the lowest level that would be used as the horizontal
maximum of the material. ............................................................................................................. 32
List of Tables
Table 1: (a) Convolution Shift step 1, (b) Convolution Shift step 2, (c) Convolution
Shift step 3, (d) Convolution Shift step 4, (e) Convolution Shift step 5, (f) Convolution
Shift step 6, (g) Convolution Shift step 7…………………………………………………………………………………….9
Table 2: Index relationships in a 3x3 image region…………………………………………………………………….15
Table 3: Image processing times……………………………………………………………………………………………….26
List of Figures
Figure 1: Extrusion of material using the Contour Crafting system with focus on the
excess protruding mixture. .............................................................................................................. 1
Figure 2: (a)Image of white square on black, (b)Center profile plot. .............................................. 4
Figure 3: Left vertical edge detector mask. ..................................................................................... 4
Figure 4: (a)Original Image, (b) Left, (c) Right .................................................................................. 5
Figure 5: Sobel edge detector .......................................................................................................... 6
Figure 6: Laplacian Masks (a)3x3, (b)5x5, (c)9x9. ............................................................................ 7
Figure 7: Plot of original data set ................................................................................................... 11
Figure 8: Plot of original data set and convolution with {-1, 3, -1}. ............................................... 12
Figure 9: Convolution mask applied to image ............................................................................... 13
Figure 10: First value computed in convolution of data sets......................................................... 14
Figure 11: Edge detection process ................................................................................................. 16
Figure 12: The frame captured in 160x120 format and the resulting image after the
application of Horizontal Edge detection algorithm. The bright white pixels indicate the
points of contrasts obtained by the algorithm. The dark part represents a continuous
smooth image without any change. Notice the vertical lines on the platform do not
affect the process. ......................................................................................................................... 17
Figure 13: Transition from general edge detection to lowest horizontal edge detection,
the right corner image shows that the lowest point in the edge of the object has been
marked by the white horizontal line; this represents the level from the bottom of the
image frame and is transmitted as the control signal. .................................................................. 18
Figure 14: User interface for image selection ................................................................................ 25
Figure 15: Main user interface ....................................................................................................... 27
Abstract
The purpose of this research is to devise a system able to measure minute changes in the flow of the viscous fluids employed in Contour Crafting. We have chosen image processing as the method for measuring the flow rate. Under normal circumstances, three-dimensional image processing is a complex process requiring heavy computation, but since our requirement includes supporting a real-time scenario, we need a customized solution. Such a solution would make processing not only fast but also real-time. We divide the problem into two parts: first, detecting the edge of the protruding material, which provides the horizontal width of the material; and second, curve detection, which provides the vertical width. Together these give the basic shape of the protruding material (figure 1) and hence the approximate volume that needs to be corrected.
Chapter 1: Introduction
The Contour Crafting (CC) system is basically an automated construction process that can be used to build homes at an affordable rate and at very high speed. The system works by extruding a special mixture of concrete, gravel, and other quick-setting substances in a layer-wise fashion to build the structure. This process is done by a robotic arm that is capable of movement in three dimensions: x, y, and z. The tip of the arm is fitted with a nozzle that is the point of extrusion.
When a layer of the material has been successfully deposited and the next one begins on top of the previous layer, the excess flow causes material to build up at the sides of the structure, as shown in figure 1. It can be safely assumed that the excess material takes the form of a cylinder; the greater the quantity of material, the larger the cylinder becomes.
Figure 1: Extrusion of material using the Contour Crafting system with focus on the excess protruding mixture.
To regulate the flow of material, a system had to be devised that can sense the amount of excess material and help adjust the flow rate. This can be achieved by using image processing of the
excessive extruded material that would provide us with the approximate dimensions of
the material and thereby with the approximate value of the amount by which the flow
rate has to be reduced.
A typical image processing algorithm for a three-dimensional object is processing-heavy and slow, and would not be able to provide the required data in a real-time environment. A special method has to be designed for obtaining the required values. This can be accomplished by dividing the problem into two parts: first, determining the horizontal edge of the extruded material, and second, calculating the curvature of the material. Combining these two values, we can obtain an approximate value of the shape and the quantity.
Chapter 2: Edge-detection filters
Edge detection is an important initial step in many computer vision processes because edges carry a large share of the information in an image. Edges are composed of high-spatial-frequency information, so filters that detect edges are also high-pass filters. Consider the image of a white square on a black background. When we plot the profile of intensities horizontally across the center of this picture, we obtain a plot such as that shown in figure 2. Moving from left to right along the x axis of this plot takes us through the sharp transition from black to white at the edge of the square, across the top of the square, and then down the far side in another sharp transition, this time from white to black [8]. Such a sharp transition implies high frequency if we compare the terminology to that used in sound processing; sound is simply a one-dimensional signal, much like the profile plot taken through the center of the image. If the goal is to detect the vertical edge on the left side of the square, we can accomplish this using a high-frequency spatial convolution mask such as that given in figure 3.
Figure 2: (a) Image of white square on black, (b) Center profile plot.
-1 0 +1
-2 0 +2
-1 0 +1
Figure 3: Left vertical edge detector mask.
In regions of the image that contain identical pixel values, the sum of products will equal zero, so all homogeneous image regions will be removed. As the mask is passed over the image, when pixel values transition from low to high (as with a left vertical edge), the convolution will output its maximum value [2]. The highest values in the result image will be given to the left-side edges. An example of this filter applied to an image is shown in figure 4.
Figure 4: (a) Original Image, (b) Left, (c) Right
The areas of the image that decrease in intensity toward the right in the original image are darkened, while those edges that increase toward the left have increased in intensity [9]. If the filter is flipped so that the values in the first column are positive while those in the third become negative, then the filter will darken edges increasing from the right, as seen. Clearly, these masks are directional in nature [18]. It is not hard to visualize similar masks for top, bottom, left, and right diagonal edges. The mask given is more commonly known as one of the Sobel masks, which are distinguished by the weights 1 and 2.
We can combine the value computed from the mask in figure 3 with its horizontal counterpart using the formula

S = √(V² + H²)    (3.1)
Where S is the edge strength, V is the
vertical convolution value and H is the
horizontal value. Fig 5 shows the results of this
processing applied to the image of Fig 4(a).
These computations are possible because the
operator pair yields a pair of values that
constitute a vector, or multidimensional element.
The formula for S is the standard form for determining the magnitude of a vector, which we have called the edge strength [2].
Figure 5: Sobel edge detector
A vector also has a direction, and in this case, the direction of the edge is
computed using the formula
D = tan⁻¹(V / H)    (3.2)
Where D is the edge direction in radians, and V and H are the convolution values.
The image computed by using this formula is not useful in a direct observation sense.
This is because each pixel value represents an angle, not intensity. The image is useful in
machine vision processes that seek to compile as much information as possible about a
scene. Edge direction may be used to confirm edge linking operations or be included in
feature analysis [18].
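As a concrete illustration of equations (3.1) and (3.2), the following sketch (not part of the thesis software; the buffer layout and method name are assumed for illustration) computes the Sobel edge strength and direction at a single interior pixel of a grayscale image stored as a row-major byte array.

// Sketch: edge strength S = sqrt(V^2 + H^2) and direction D (radians) at pixel (x, y).
// "gray" is assumed to be a row-major grayscale buffer of size width*height.
// Requires: using System;
static void SobelAt(byte[] gray, int width, int x, int y,
                    out double strength, out double direction)
{
    int P(int dx, int dy) => gray[(y + dy) * width + (x + dx)];

    // Vertical-edge (left/right) mask from figure 3.
    int V = -P(-1, -1) + P(1, -1)
          - 2 * P(-1, 0) + 2 * P(1, 0)
          - P(-1, 1) + P(1, 1);

    // Horizontal counterpart of the same mask.
    int H = -P(-1, -1) - 2 * P(0, -1) - P(1, -1)
          + P(-1, 1) + 2 * P(0, 1) + P(1, 1);

    strength = Math.Sqrt((double)V * V + (double)H * H);  // equation (3.1)
    direction = Math.Atan2(V, H);                          // equation (3.2), quadrant-safe
}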
An edge operator that is nondirectional, or rotationally insensitive, is the Laplacian. A Laplacian mask is basically a second-derivative operator, that is, an operator that computes the rate of change of an edge as it varies in intensity across the image [8]. The Sobel is a first-derivative operator that computes the rate of change of an edge, or the gradient. Although the Laplacian is nondirectional, it is highly susceptible to noise. The effects of three different Laplacian masks applied to an image are given below.
Figure 6: Laplacian Masks (a) 3x3 (b) 5x5 (c) 9x9.
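For reference, one commonly used 3x3 Laplacian mask (a second-derivative operator of the kind illustrated in figure 6, though not necessarily the exact values shown there) is:

 0  -1   0
-1   4  -1
 0  -1   0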
Edge detection filters are high pass filters with the edges in an image
representing the high spatial frequencies. These filters are also called sharpening filters
as they improve the presence of lines. Another term generally used is edge
enhancement filters [11]. The distinction between edge enhancement and edge
detection is determined from the purpose for which the filter is employed [9]. These filters are characterized by negative values in the mask that compute the difference between pixel values as the convolution takes place. Hence, the derivative, or gradient, which is the rate of change, is computed.
2.1 Convolution masks
Consider two sets of numbers, the first set has only one element and can be
written as {3}. The second set has seven elements and is written as {0, 1, 2, 3, 2, 1, 0}. If
we multiply the elements of the second set with that of the first set, we get {0, 3, 6, 9, 6,
3, 0}. This procedure is defined as a vector multiplication by a scalar. Now if we modify
the first set and add two more elements we get {1, 3, 1}. If we now try to multiply the
second set by the first, we encounter a dilemma. Do we perform the multiplications so
that the result increases in dimensions, such as {0, 0, 0, 1, 3, 1, 3, 6, 3, ..} or do we select
one of the three elements of the first set as the multiplier? Assume that we do not wish to expand the dimensions of the result, yet want to include all the values of the multiplier. A solution is to align the two sets, multiply the corresponding elements, add the results, and use the sum as the new value at the aligned position. The aligned position is then changed by shifting the multiplier one step to the right and performing the process of multiplying and adding at the new position [2]. A simple example suffices to show how the process works.
In the following sequence of seven tables, the first row, A, of each table contains
the large set, {0, 1, 2, 3, 2, 1, 0} while the second row, B, contains the small one, {1, 3, 1}.
Each table in the sequence shows the small set at a different position as it is shifted with
respect to the large set [9].
The third row, A*B, shows the discrete convolution, which is the result of
multiplying each element of the small set with its corresponding element in the large
set, adding the three values, and placing the sum at the aligned position, the center of
the B set [7].
1. The center value of the B set is aligned with the first value of the A set. Convolution result A*B is 1, placed at the position corresponding to the center of the B set; missing values are assumed to be zero.

Table 1(a): Convolution Shift step 1
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1

2. Set B shifted by one position. Convolution result A*B is (0x1) + (1x3) + (2x1) = 5.

Table 1(b): Convolution Shift step 2
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5

3. Set B shifted by another position. Convolution result A*B is (1x1) + (2x3) + (3x1) = 10.

Table 1(c): Convolution Shift step 3
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5  10

4. Set B shifted by another position. Convolution result A*B is (1x2) + (3x3) + (2x1) = 13.

Table 1(d): Convolution Shift step 4
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5  10  13

5. Set B shifted by another position. Convolution result A*B is (3x1) + (2x3) + (1x1) = 10.

Table 1(e): Convolution Shift step 5
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5  10  13  10

6. Set B shifted by another position. Convolution result A*B is (2x1) + (1x3) + (0x1) = 5.

Table 1(f): Convolution Shift step 6
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5  10  13  10  5

7. Set B shifted by another position. Convolution result A*B is (1x1) + (0x3) + (0x1) = 1. (Missing values assumed to be zero.)

Table 1(g): Convolution Shift step 7
A    0  1  2  3  2  1  0
B    1  3  1
A*B  1  5  10  13  10  5  1
Step 7 reveals the final result of the discrete convolution operation, the set {1, 5, 10, 13, 10, 5, 1}. The importance of this result is the difference between the convolution result, A*B, and the values of the large set, A: the peak at the center of the data has become steeper. This is more obvious if we plot the set data, as shown in figure 7 [18].
Figure 7: Plot of original data set and convolution with {1, 3, 1}.
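The seven shift steps can be reproduced in a few lines of C#. The sketch below is illustrative only and is not taken from the thesis appendix; it convolves A = {0, 1, 2, 3, 2, 1, 0} with the mask B = {1, 3, 1}, treats values outside A as zero, and produces {1, 5, 10, 13, 10, 5, 1}.

// Discrete 1-D convolution with a 3-element mask centered on each position;
// elements outside the data set are taken as zero, as in Tables 1(a)-(g).
int[] A = { 0, 1, 2, 3, 2, 1, 0 };
int[] B = { 1, 3, 1 };
int half = B.Length / 2;
int[] result = new int[A.Length];

for (int i = 0; i < A.Length; i++)
{
    int sum = 0;
    for (int k = 0; k < B.Length; k++)
    {
        int j = i + k - half;              // position in A under mask element k
        if (j >= 0 && j < A.Length)
            sum += A[j] * B[k];
    }
    result[i] = sum;                       // yields {1, 5, 10, 13, 10, 5, 1}
}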
The small set is called a convolution mask because it is passed over a set of data and effects a change. Other terms used are template, window, and filter. Template refers to the fact that the result will have a large value at the position where the mask and the data set are equal. Window describes the action of the mask as a passage that takes a data set from one set of values to another, and filter describes the action of various masks that restrict or remove data of a specific shape or value [18].
The values of a convolution mask are not restricted to positive numbers; if the mask had been {-1, 3, -1}, the result shown in figure 8 would have been obtained. Now the mask has left the original central peak standing out while suppressing the values around it.
Figure 8: Plot of original data set and convolution with {-1, 3, -1}.
The transition from a single-dimensional set to a two-dimensional set is straightforward. Consider the following matrices.

5 8 3 4 6 2 3 7
3 2 1 1 9 5 1 0
0 9 5 3 0 4 8 3
4 2 7 2 1 9 0 6
9 7 9 8 0 4 2 4
5 2 1 8 4 1 0 9
1 8 5 4 9 2 3 8
3 7 1 2 3 4 4 6

-2 -1  0
-1  0 +1
 0 +1 +2

The 8 by 8 matrix will be considered the image and the small 3 by 3 matrix the convolution mask. As with the single-dimensional sets, the mask set is overlaid onto the larger set, the corresponding values in each set are multiplied together, and the products are summed and used as the value for the convolution at the set position corresponding to the center position of the mask [14].
The following figure
illustrates the overlay process with
grids. By convention, the
convolution usually starts in the
upper left corner of the image where the center value of the small grid (the mask) will
multiply the first value of the large grid (the image). The mask elements that overhang
the image will multiply zero; however, we can also wrap the mask around to the other
side and perform what is known as a circular convolution. The remaining values of the
mask multiply with their corresponding values in the image, and the products are
summed [2]. The result is then placed in the convolution image at the upper left
position, as shown in figure 9 using the matrix values given earlier.
Figure 9: Convolution mask applied to image
Subsequent operations require that the mask be shifted one position to the right until
all values of the first row of the convolution result have been determined. The mask is
then moved to the first value of the second row, and the process is repeated until all
rows and columns of the convolution are computed. Note that this mask has negative
values in it and is negative symmetric about the right diagonal axis. This mask performs
a shadow operation when convolved with an image because it increases the brightness
values across one side of a three-pixel band and decreases values along the other side.
The orientation is diagonally across the image, giving a shadow effect [14].
Figure 10: First value computed in convolution of data sets.
Convolution of images is not restricted to 3x3 masks. Since convolution is a local operation, in which the computations that affect pixel relationships are confined to the mask size, the overall effect of the convolution is independent of mask size as long as masks are small [2]. Also, convolution operations are easily performed using parallel processing techniques, although this approach degrades severely as the mask size approaches the image size. Programming the discrete convolution is relatively easy; the most difficult part is keeping track of the indexes. The following table shows the relationship of column (i) pixels and row (j) pixels in a three by three image region for a given center value at coordinate (i, j).
Table 2: Index relationships in a 3x3 image region

i-1, j-1    i-1, j    i-1, j+1
i,   j-1    i,   j    i,   j+1
i+1, j-1    i+1, j    i+1, j+1
This method has its own problems relating to negative edge indexes, for example when a 5x5 mask is considered. In order to avoid negative indexes, the loop ranges used in the calculation have to be selected carefully. It is convenient to make the dimensions of the mask odd so that the mask always has a center pixel. This is not required, because masks with even dimensions may be computed by selecting a pixel position in the mask to serve as the compute point [14]. The issue gains significance when discussing discrete convolution in a signal-processing or optics sense.
A final consideration in the computation of convolution masks is the scaling of
data. The data value computed by a mask may exceed the maximum value of a pixel,
which is normally 255. The image data structure used to store the convolution must
accommodate the maximum possible value or arithmetic overflow will result. A simple
alternative is to compute the value of the convolution and truncate it to 255. Also, the
computations involving the high pass filters will often result in negative values. If a byte-
sized variable is set equal to a negative result, the final pixel value will be incorrect. The
simple alternative here is to set all the negative results equal to zero.
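A minimal sketch of the two-dimensional case just described is given below; the array names and method are illustrative assumptions, not the thesis implementation. Mask elements that overhang the image multiply zero, and each result is truncated to 255 with negative values set to zero, as suggested above.

// 2-D discrete convolution of a grayscale image (img[row, col], values 0..255)
// with a 3x3 mask. Overhanging mask elements multiply zero; results are
// clamped to 0..255 to avoid overflow and negative pixel values.
static byte[,] Convolve3x3(byte[,] img, int[,] mask)
{
    int rows = img.GetLength(0), cols = img.GetLength(1);
    byte[,] outImg = new byte[rows, cols];

    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < cols; j++)
        {
            int sum = 0;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                {
                    int r = i + di, c = j + dj;
                    if (r >= 0 && r < rows && c >= 0 && c < cols)
                        sum += img[r, c] * mask[di + 1, dj + 1];
                }
            if (sum < 0) sum = 0;          // negative results set to zero
            if (sum > 255) sum = 255;      // truncate to the maximum pixel value
            outImg[i, j] = (byte)sum;
        }
    }
    return outImg;
}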
2.2 Edge detection application
Edge detection has been achieved using a horizontal pixel-change detection technique. The algorithm operates on the principle that an edge is defined by a sudden change in the color and brightness levels of the image. These changes are detected by following a sequential pattern of comparing eight pixels positioned on two alternate adjacent horizontal lines [6]. When a contrast is found, the frame is stored in a different location and the difference is amplified in the new image, which represents the point as a white colored pixel. The adjacent points on the parallel horizontal lines used to detect the edge are set to black. Thus, at the end of the process we are left with a frame with white horizontal lines on a black background.
Figure 11: Edge detection process
Figure 12: The frame captured in 160x120 format and the resulting image after the application of the horizontal edge detection algorithm. The bright white pixels indicate the points of contrast found by the algorithm; the dark part represents a continuous smooth image without any change. Notice that the vertical lines on the platform do not affect the process.
Horizontal arrays of white color pixels define the edge of an object in the stored
image [4]. The camera is placed in a strategic position which allows it to capture video
with the nearest edge appearing at the bottom of the frame. Thus, to check the level of
perturbations in the flow we need to see the level of the lowest edge [14].
A technique similar to edge detection is employed to find the lowest level. The
search begins from the lowest corner of the frame where white colored pixels are
checked for continuity with the adjacent pixels [10]. If continuity across more than 20 pixels is established, we consider that position our horizontal edge; a variance of (+/-)2 horizontal lines is allowed so that abrupt edges can also be taken into account [6]. This value is then stored in a temporary buffer, which feeds it to the serial port of the computer [8].
Figure 13: Transition from general edge detection to lowest horizontal edge detection. The right corner image shows that the lowest point on the edge of the object has been marked by the white horizontal line; this represents the level from the bottom of the image frame and is transmitted as the control signal.
The process continues indefinitely until the user initiates a hold. Visual C# has been used to design and implement the algorithm [8][5][6].
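A simplified sketch of the lowest-edge search described in this section is given below. It assumes the edge-detected frame is already available as a black image with white edge pixels (represented here as a boolean array, an assumed layout); the bottom-up scan and the 20-pixel continuity test follow the text, while the +/-2-line tolerance is omitted for brevity.

// Scan an edge-detected frame (true = white edge pixel) from the bottom row
// upward and return the row index of the lowest horizontal edge, defined as
// the first row containing a run of more than 20 consecutive edge pixels.
// Returns -1 if no such edge is found.
static int FindLowestHorizontalEdge(bool[,] edge)
{
    int rows = edge.GetLength(0), cols = edge.GetLength(1);
    for (int row = rows - 1; row >= 0; row--)
    {
        int run = 0;
        for (int col = 0; col < cols; col++)
        {
            run = edge[row, col] ? run + 1 : 0;
            if (run > 20)
                return row;   // level of the lowest edge; offset from the bottom is rows - 1 - row
        }
    }
    return -1;
}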
Chapter 3: Recognition of curved objects
This chapter addresses the problem of recognition and localization of objects with curved surfaces. Although polyhedral objects are encountered more frequently in an industrial environment, objects with curved surfaces are more prevalent in natural scenes. One simple way of dealing with curved objects is to approximate them by polyhedral objects, an approach commonly used in earlier vision systems. However, such an approach has its own shortcomings, in that the resulting description is not rich enough to capture the intrinsic properties of curved surfaces, is not stable to changes in viewpoint, and suffers from approximation errors.
Invariant surface curvatures, viz. the mean, Gaussian, and principal curvatures, are used to obtain a description of curved surfaces that is invariant to viewpoint. As in the case of polyhedral objects, multiple-object scenes with clutter and
partial occlusion are considered. This precludes the use of global shape descriptors,
which means that only local shape descriptors based on the invariant surface curvatures
can be used.
3.1 Representation of curved surfaces
The desired characteristics of features chosen for the representation of curved surfaces
are:
1. View invariance so that they can be used for matching.
2. Richness of local support, which ensures robustness to occlusion.
3. Ease of extraction so that the effort involved in segmentation is minimal.
4. Generality of representation so that a wide range of surface types can be covered.
The representation of surfaces in terms of the mean, Gaussian, and the principal
curvature exhibits the above characteristics. From surface differential geometry we
obtain the result that the mean curvature (H), the Gaussian curvature (K), the maximum principal curvature (k1), and the minimum principal curvature (k2) are invariant to rigid body motion (i.e., rotations and translations) in 3-D Euclidean space. Moreover, the unit vectors in the directions of the principal curvatures k1 and k2, denoted by t1 and t2, respectively, and the unit normal n form an orthogonal coordinate system at every non-
umbilic and non-planar point on the surface. Any two of H, K, k1, and k2 are adequate
to characterize the surface locally. The invariance of these surface curvature properties
forms the underlying basis of several surface segmentation and feature extraction
techniques.
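(For reference, these quantities are linked by the standard identities H = (k1 + k2)/2 and K = k1·k2, so that k1,2 = H ± √(H² − K); this is why any two of them suffice to determine the other two at a point.)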
3.2 Extraction of surface curvature
A reliable technique for surface curvature extraction is to approximate the digital range
surface by an analytical surface within a local window, using a least-squares surface
fitting technique. Another method is by a functional optimization technique. The
equation of the fitted analytical surface is used to compute the values of surface
curvatures and surface normals. Typical surface fitting techniques include least-squares
surface fitting using bi-variate polynomial approximations, parametric surface
approximation techniques such as B-splines, tensor product of splines under tension,
and tensor product of discrete Chebychev polynomials.
In this work the discrete bi-orthogonal Chebychev polynomial has been used.
Discrete bi-orthogonal Chebychev polynomials are used as basis functions in a local N x
N (where N is odd) window centered about the point of interest. The orthogonality of
the basis functions enables efficient computation of the coefficients of the functional
approximation. The first four orthogonal polynomials are
φ0(u) = 1                         (4.1)
φ1(u) = u                         (4.2)
φ2(u) = u² − μ2/μ0                (4.3)
φ3(u) = u³ − (μ4/μ2) u            (4.4)

where μk denotes the moment Σ u^k taken over u = −M, ..., +M, and M = (N − 1)/2.
A discrete biorthogonal basis is created from the φi's:

φij(u, v) = φi(u) φj(v)           (4.5)

The surface function estimate that minimizes the sum of squared surface-fitting error within the window is given by

f̂(u, v) = Σ(i,j = 0 to 3) aij φi(u) φj(v)        (4.6)
Where the coefficients of the functional approximation are given by

aij = Σ(u = −M to +M) Σ(v = −M to +M) f(u, v) bi(u) bj(v)        (4.7)

and bi(u) is the normalized version of the polynomial φi(u).
The estimates of the first and second order derivatives of the surface are given by:

fu  = a10 − (μ2/μ0) a12 − (μ4/μ2) a30 + (μ4/μ0) a32          (4.8)
fv  = a01 − (μ2/μ0) a21 − (μ4/μ2) a03 + (μ4/μ0) a23          (4.9)
fuu = 2 a20 − 2 (μ2/μ0) a22                                   (4.10)
fvv = 2 a02 − 2 (μ2/μ0) a22                                   (4.11)
fuv = a11 − (μ4/μ2) a31 − (μ4/μ2) a13 + (μ4/μ2)² a33          (4.12)
Estimates of the partial derivatives are used to compute the coefficients of the first and second fundamental forms of the surface.

G = [ g11  g12 ; g21  g22 ] is the first fundamental form of the surface, where

g11 = 1 + fu²                     (4.13)
g22 = 1 + fv²                     (4.14)
g21 = g12 = fu fv                 (4.15)

B = [ b11  b12 ; b21  b22 ] is the second fundamental form of the surface, where

b11 = fuu / √(1 + fu² + fv²)      (4.16)
b12 = b21 = fuv / √(1 + fu² + fv²)  (4.17)
b22 = fvv / √(1 + fu² + fv²)      (4.18)
The principal curvatures are the roots of the quadratic equation

|G| Kn² − (g11 b22 + b11 g22 − 2 g12 b12) Kn + |B| = 0,    n = 1, 2        (4.19)

where |G| and |B| denote the determinants of G and B.
These curvatures are used to approximate the dimensions of the protruding material.
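To make the use of equations (4.13)-(4.19) concrete, the sketch below computes the fundamental-form coefficients and the principal, mean, and Gaussian curvatures from already-estimated partial derivatives. It is an illustration under the assumption that those derivative estimates are available; it is not the thesis implementation.

// Sketch: principal, mean, and Gaussian curvatures from the estimated partial
// derivatives fu, fv, fuu, fuv, fvv at a point, following equations (4.13)-(4.19).
// Requires: using System;
static void SurfaceCurvatures(double fu, double fv, double fuu, double fuv, double fvv,
                              out double k1, out double k2, out double H, out double K)
{
    double w = Math.Sqrt(1 + fu * fu + fv * fv);

    // First fundamental form (4.13)-(4.15).
    double g11 = 1 + fu * fu, g22 = 1 + fv * fv, g12 = fu * fv;

    // Second fundamental form (4.16)-(4.18).
    double b11 = fuu / w, b12 = fuv / w, b22 = fvv / w;

    // Quadratic (4.19): |G| k^2 - (g11*b22 + b11*g22 - 2*g12*b12) k + |B| = 0.
    double detG = g11 * g22 - g12 * g12;
    double detB = b11 * b22 - b12 * b12;
    double mid = g11 * b22 + b11 * g22 - 2 * g12 * b12;

    double disc = Math.Sqrt(Math.Max(0.0, mid * mid - 4 * detG * detB));
    k1 = (mid + disc) / (2 * detG);   // maximum principal curvature
    k2 = (mid - disc) / (2 * detG);   // minimum principal curvature

    H = (k1 + k2) / 2;                // mean curvature
    K = k1 * k2;                      // Gaussian curvature
}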
Chapter 4: System Application
A separate camera would be employed to capture the curvature of the surface. The
camera would be placed horizontally at an inclination of 45˚ to the front planar surface
of the wall. This would provide us with an accurate view of the curvature of the excess
material. The image would be kept focused on the final perturbation, and the camera would be attached to the main robot arm, thereby tracing its path. This helps eliminate unnecessary imagery in the background, allowing us to reduce the captured image size, which in turn results in faster processing. The dimensions of the picture frame are predefined by the user; a list of optional frame sizes, ranging from 160x120 pixels to 640x480 pixels [8], is provided as shown in figure 14.
Figure 14: User interface for image selection
The lowest configuration helps ensure faster processing of images because fewer comparisons are required. Once the image is captured, edge detection algorithms are applied to it. From the various techniques available, such as Sobel, Prewitt, Kirsch, and others [8], horizontal edge detection has been chosen for this application because of its low processing time requirements and its relevance to a standard horizontal edge. The processing time obtained with a standard 160x120 image was found to be 34.58 ms, as shown in table 3.
Table 3: Image processing times

Video Format (Image Size)    Processing Time (ms)
160 x 120                     34.58
176 x 144                     43.44
320 x 240                     97.23
352 x 288                    131.46
640 x 480                    188.87
The processing times were obtained on a PC with an 800 MHz motherboard and a Core 2 Duo processor operating at 1.667 GHz; the memory was limited to 2 GB of RAM. The processing times above would improve or degrade depending on the hardware used; for the purposes of this application these values have been taken as defaults. The accuracy of the approximated shape is within roughly +/- 3%; the value depends on the scan lines taken into consideration. This accuracy would improve with a larger captured frame size and reduce with a smaller one.
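Comparable per-frame timings can be collected on other hardware with a simple measurement such as the sketch below. The thesis software uses its own Counter class from a Timing namespace; System.Diagnostics.Stopwatch, shown here, gives an equivalent measurement, and ProcessFrame is a placeholder for whatever per-frame edge-detection step is being timed.

// Sketch: timing one frame of image processing with System.Diagnostics.Stopwatch.
// Requires: using System; using System.Diagnostics;
static double TimeOneFrame(Action processFrame)
{
    Stopwatch sw = Stopwatch.StartNew();
    processFrame();                        // e.g., run horizontal edge detection on one frame
    sw.Stop();
    return sw.Elapsed.TotalMilliseconds;   // comparable to the values in table 3
}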
Chapter 5: Interface
The interface designed for the user displays the live feed from the camera, the frame saved most recently, and the frame currently being processed for edge detection. Control buttons have been provided for initiating the process. Cumulative processing times are shown so that the user knows what the delay in response is going to be and can adjust signals before action is taken. A
percentage display is provided that shows the difference level between the present
frame and past frame. A counter shows how many frames have been extracted from the
camera and the number of frames on which edge detection has been applied.
Figure 15: Main user interface
Chapter 6: Main Highlights
The main highlights of the presented thesis "Real-time flow control of viscous fluids
using 3D image processing" are:
1) The system has been designed specifically for application of flow control in Contour
Crafting.
2) Conventional 3D image processing techniques employ extraction and analysis of masses, surfaces, and lines. All of these processes are carried out by a single complex algorithm, which makes the application time-intensive.
3) A novel method that can be implemented in this specific application has been devised
that would employ edge detection along with surface curvature detection.
4) Since the problem has been divided into two parts, each part can be processed separately, making the application faster than conventional image processing algorithms.
5) The unique combination of the two algorithms gives the width and the lateral
curvature which can approximate the dimensions of the surface features of the images
captured.
6) The system would require two separate cameras to capture the images from different
angles.
7) For edge detection purposes, the camera would be placed so that it captures the edge of the surface width.
8) For curvature, the camera would be placed in a diagonal view of the surface.
9) In the case of both cameras, the images would be focused on the nearest contour constructed by the machine. This allows us to capture small images that in turn require less processing time.
10) The optimum fit for the system has been found at 160x120, which requires approximately 34.58 ms of processing per frame.
11) The variables obtained from the two processes would be used to provide the approximate dimensions of the surface and would be sent to the controller for changes in the flow of the viscous fluid.
Chapter 7: Future Upgrades
1) The detection of image variation depends upon the contrast provided by the image; this contrast is available only when a high brightness level is maintained in the image. An improvement to the system would be independence from ambient lighting levels, so that differences can be detected even in low light.
2) If the system is operational for a long duration, it starts accumulating delay. A refresh would be performed at a predefined, user-set interval; this would flush out the delay and help keep processing of the data real-time.
3) The entire system would be incorporated into firmware, giving users a plug-in-and-operate capability and removing the need for a PC. Since the application would run on dedicated firmware, processing rates would increase dramatically.
References
[1] Archer, Tom, Inside C#. Redmond, WA: Microsoft Press, 2001.
[2] Bergholm, F., "Edge focusing," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 726–741, 1987.
[3] Canny, J. F., "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. 8, pp. 679–698, Sept. 1986.
[4] Deriche, R., "Using Canny's criteria to derive a recursively implemented optimal edge detector," Int. J. Comput. Vis., vol. 1, no. 2, pp. 167–187, 1987.
[5] Gunnerson, Eric, A Programmer's Introduction to C#, 2nd ed. Berkeley, CA: Apress, 2001.
[6] Heath, M., S. Sarkar, T. Sanocki, and K. W. Bowyer, "A robust visual method for assessing the relative performance of edge-detection algorithms," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, no. 12, pp. 1338–1359, Dec. 1997.
[7] Heath, M., S. Sarkar, T. Sanocki, and K. W. Bowyer, "Comparison of edge detectors: a methodology and initial study," Comput. Vis. Image Understand., vol. 69, no. 1, pp. 38–54, 1998.
[8] Liberty, Jesse, Programming C#, 2nd ed. Sebastopol, CA: O'Reilly, 2002.
[9] Lindeberg, T., "Edge detection and ridge detection with automatic scale selection," Int. J. Comput. Vis., vol. 30, no. 2, 1998.
[10] Meyer, Y., Wavelets: Algorithms and Applications. Philadelphia, PA: SIAM, 1993.
[11] Pearson, Don, Image Processing. McGraw-Hill, Chap. 2, pp. 50–67.
[12] Pappas, Chris H. and William H. Murray, C# for Windows Programming. Upper Saddle River, NJ: Prentice Hall PTR, 2002.
[13] Ruzon, M. A. and C. Tomasi, "Edge, junction, and corner detection using color distributions," IEEE Trans. Pattern Anal. Machine Intell., vol. 23, pp. 1281–1295, Nov. 2002.
[14] Robison, William, Pure C#. Indianapolis, IN: Sams, 2002.
[15] Radke, R. J., S. Andra, O. Al-Kofahi, and B. Roysam, "Image change detection algorithms: a systematic survey," IEEE Trans. Image Processing, vol. 14, no. 3, pp. 294–307, March 2005.
[16] Stiefel, Michael and Robert J. Oberg, Application Development Using C# and .NET. Upper Saddle River, NJ: Prentice Hall PTR, 2002.
[17] Suk, Minsoo and Suchendra M. Bhandarkar, Three-Dimensional Object Recognition from Range Images, 1992, Chap. 7, p. 183.
[18] Torre, V. and T. A. Poggio, "On edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. 8, pp. 148–163, Mar. 1986.
[19] Weeks, Arthur R. and Harley R. Myler, Computer Imaging Recipes in C. Prentice Hall, 1993, p. 100.
Appendix: Program developed in Visual C# to capture images from a web camera and
process the edges in them to detect the lowest level that would be used as the
horizontal maximum of the material.
/******************************************************
Viterbi School Of Engineering
Industrial and Systems Engineering
Master of Sciences
Kabir Kanodia
7935-1252-02
*******************************************************/
// FlowControl MainForm
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Runtime.InteropServices;
using System.IO;
using System.Diagnostics;
using ImgProcessing;
using Timing;
using DShowNET;
using DShowNET.Device;
namespace CatchItV
{
/// <summary>
/// Summary description for MainForm.
/// </summary>
public class MainForm : System.Windows.Forms.Form, ISampleGrabberCB
{
bool dontSaveNext=true,firstRun=true;
private Int32 cPercent=96,minSave=96,count=0,count2=0;
string saveTime="";
private Counter counter=new Counter();
private float sec=0;
Bitmap a;
Bitmap wanted;
private System.Windows.Forms.Splitter splitter1;
private System.Windows.Forms.ToolBar toolBar;
private System.Windows.Forms.Panel videoPanel;
private System.Windows.Forms.Panel stillPanel;
private System.Windows.Forms.PictureBox pictureBox;
private System.Windows.Forms.ToolBarButton toolBarBtnTune;
private System.Windows.Forms.ToolBarButton toolBarBtnGrab;
private System.Windows.Forms.ToolBarButton toolBarBtnSave;
private System.Windows.Forms.ToolBarButton toolBarBtnSep;
private System.Windows.Forms.ImageList imgListToolBar;
private System.Windows.Forms.PictureBox pictureBox1;
private System.Windows.Forms.Label lblPercent;
private System.Windows.Forms.Timer timer1;
private System.Windows.Forms.Label lblSavePercent;
private System.Windows.Forms.Label lblSaveOrNot;
private System.Windows.Forms.PictureBox pictureBox2;
private System.Windows.Forms.Button btnStart;
private System.Windows.Forms.GroupBox groupBoxImageProcTime;
private System.Windows.Forms.Label label4;
private System.Windows.Forms.Label lblTotalTime;
private System.Windows.Forms.Label label8;
private System.Windows.Forms.Label lblTime;
private System.Windows.Forms.Label lblCount;
private System.Windows.Forms.Button btnStop;
private System.Windows.Forms.Button btnAbout;
private System.Windows.Forms.Label lblTimerValue;
private System.Windows.Forms.TrackBar trckBarTimer;
private System.Windows.Forms.TrackBar trckBarSaveValue;
private System.ComponentModel.IContainer components;
public MainForm()
{
// Required for Windows Form Designer support
InitializeComponent();
trckBarSaveValue.Value=cPercent;
lblSavePercent.Text=cPercent+"%";
timer1.Interval=trckBarTimer.Value;
lblTimerValue.Text=trckBarTimer.Value.ToString();
System.Windows.Forms.ToolBarButtonClickEventArgs ee=new
ToolBarButtonClickEventArgs(toolBarBtnGrab);
toolBar_ButtonClick(toolBarBtnGrab,ee);
}
/// <summary>
/// Clean up any resources being used.
/// </summary>
protected override void Dispose( bool disposing )
{
if( disposing )
{
CloseInterfaces();
if (components != null)
{
components.Dispose();
}
}
base.Dispose( disposing );
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.components = new
System.ComponentModel.Container();
System.Resources.ResourceManager resources = new
System.Resources.ResourceManager(typeof(MainForm));
this.toolBar = new System.Windows.Forms.ToolBar();
this.toolBarBtnTune = new
System.Windows.Forms.ToolBarButton();
this.toolBarBtnGrab = new
System.Windows.Forms.ToolBarButton();
this.toolBarBtnSep = new
System.Windows.Forms.ToolBarButton();
this.toolBarBtnSave = new
System.Windows.Forms.ToolBarButton();
this.imgListToolBar = new
System.Windows.Forms.ImageList(this.components);
this.videoPanel = new System.Windows.Forms.Panel();
this.splitter1 = new System.Windows.Forms.Splitter();
this.stillPanel = new System.Windows.Forms.Panel();
this.lblTimerValue = new
System.Windows.Forms.Label();
this.trckBarTimer = new
System.Windows.Forms.TrackBar();
this.btnStop = new System.Windows.Forms.Button();
this.btnStart = new System.Windows.Forms.Button();
this.pictureBox2 = new
System.Windows.Forms.PictureBox();
this.lblSaveOrNot = new System.Windows.Forms.Label();
this.lblSavePercent = new
System.Windows.Forms.Label();
this.trckBarSaveValue = new
System.Windows.Forms.TrackBar();
this.lblPercent = new System.Windows.Forms.Label();
this.pictureBox1 = new
System.Windows.Forms.PictureBox();
this.pictureBox = new
System.Windows.Forms.PictureBox();
this.timer1 = new
System.Windows.Forms.Timer(this.components);
this.groupBoxImageProcTime = new
System.Windows.Forms.GroupBox();
this.label4 = new System.Windows.Forms.Label();
this.lblTotalTime = new System.Windows.Forms.Label();
this.label8 = new System.Windows.Forms.Label();
this.lblTime = new System.Windows.Forms.Label();
this.lblCount = new System.Windows.Forms.Label();
this.btnAbout = new System.Windows.Forms.Button();
this.stillPanel.SuspendLayout();
((System.ComponentModel.ISupportInitialize)(this.trckBarTimer)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.trckBarSaveValue)).BeginInit();
this.groupBoxImageProcTime.SuspendLayout();
this.SuspendLayout();
//
// toolBar
//
this.toolBar.Buttons.AddRange(new
System.Windows.Forms.ToolBarButton[] {
this.toolBarBtnTune,
this.toolBarBtnGrab,
this.toolBarBtnSep,
this.toolBarBtnSave});
this.toolBar.Cursor =
System.Windows.Forms.Cursors.Hand;
this.toolBar.DropDownArrows = true;
this.toolBar.Enabled = false;
this.toolBar.ImageList = this.imgListToolBar;
this.toolBar.Location = new System.Drawing.Point(0,
0);
this.toolBar.Name = "toolBar";
this.toolBar.ShowToolTips = true;
this.toolBar.Size = new System.Drawing.Size(774, 42);
this.toolBar.TabIndex = 0;
this.toolBar.Visible = false;
this.toolBar.ButtonClick += new System.Windows.Forms.ToolBarButtonClickEventHandler(this.toolBar_ButtonClick);
//
// toolBarBtnTune
//
this.toolBarBtnTune.Enabled = false;
this.toolBarBtnTune.ImageIndex = 0;
this.toolBarBtnTune.Text = "Tune";
this.toolBarBtnTune.ToolTipText = "TV tuner dialog";
//
// toolBarBtnGrab
//
this.toolBarBtnGrab.ImageIndex = 1;
this.toolBarBtnGrab.Text = "Grab";
this.toolBarBtnGrab.ToolTipText = "Grab picture from stream";
//
// toolBarBtnSep
//
this.toolBarBtnSep.Enabled = false;
this.toolBarBtnSep.Style =
System.Windows.Forms.ToolBarButtonStyle.Separator;
//
// toolBarBtnSave
//
this.toolBarBtnSave.Enabled = false;
this.toolBarBtnSave.ImageIndex = 2;
this.toolBarBtnSave.Text = "Save";
this.toolBarBtnSave.ToolTipText = "Save image to file";
//
// imgListToolBar
//
this.imgListToolBar.ColorDepth =
System.Windows.Forms.ColorDepth.Depth32Bit;
this.imgListToolBar.ImageSize = new
System.Drawing.Size(16, 16);
this.imgListToolBar.ImageStream = ((System.Windows.Forms.ImageListStreamer)(resources.GetObject("imgListToolBar.ImageStream")));
this.imgListToolBar.TransparentColor =
System.Drawing.Color.Transparent;
//
// videoPanel
//
this.videoPanel.BackColor =
System.Drawing.Color.Black;
this.videoPanel.BorderStyle =
System.Windows.Forms.BorderStyle.Fixed3D;
this.videoPanel.Location = new
System.Drawing.Point(8, 40);
this.videoPanel.Name = "videoPanel";
this.videoPanel.Size = new System.Drawing.Size(296,
304);
this.videoPanel.TabIndex = 1;
this.videoPanel.Resize += new
System.EventHandler(this.videoPanel_Resize);
//
// splitter1
//
this.splitter1.Location = new System.Drawing.Point(0,
42);
this.splitter1.Name = "splitter1";
this.splitter1.Size = new System.Drawing.Size(5,
363);
this.splitter1.TabIndex = 2;
this.splitter1.TabStop = false;
//
// stillPanel
//
this.stillPanel.AutoScroll = true;
this.stillPanel.AutoScrollMargin = new
System.Drawing.Size(8, 8);
this.stillPanel.AutoScrollMinSize = new
System.Drawing.Size(32, 32);
this.stillPanel.BorderStyle =
System.Windows.Forms.BorderStyle.Fixed3D;
this.stillPanel.Controls.Add(this.lblTimerValue);
this.stillPanel.Controls.Add(this.trckBarTimer);
this.stillPanel.Controls.Add(this.btnStop);
this.stillPanel.Controls.Add(this.btnStart);
this.stillPanel.Controls.Add(this.pictureBox2);
this.stillPanel.Controls.Add(this.lblSaveOrNot);
this.stillPanel.Controls.Add(this.lblSavePercent);
this.stillPanel.Controls.Add(this.trckBarSaveValue);
this.stillPanel.Controls.Add(this.lblPercent);
this.stillPanel.Controls.Add(this.pictureBox1);
this.stillPanel.Controls.Add(this.pictureBox);
this.stillPanel.Location = new
System.Drawing.Point(312, 40);
this.stillPanel.Name = "stillPanel";
this.stillPanel.Size = new System.Drawing.Size(459,
302);
this.stillPanel.TabIndex = 3;
//
// lblTimerValue
//
this.lblTimerValue.Location = new
System.Drawing.Point(400, 16);
this.lblTimerValue.Name = "lblTimerValue";
this.lblTimerValue.Size = new System.Drawing.Size(40,
24);
this.lblTimerValue.TabIndex = 10;
this.lblTimerValue.Text = "400";
//
// trckBarTimer
//
this.trckBarTimer.LargeChange = 300;
this.trckBarTimer.Location = new
System.Drawing.Point(400, 48);
this.trckBarTimer.Maximum = 1000;
this.trckBarTimer.Minimum = 7;
this.trckBarTimer.Name = "trckBarTimer";
this.trckBarTimer.Orientation =
System.Windows.Forms.Orientation.Vertical;
this.trckBarTimer.Size = new System.Drawing.Size(42,
240);
this.trckBarTimer.SmallChange = 10;
this.trckBarTimer.TabIndex = 9;
this.trckBarTimer.Value = 400;
this.trckBarTimer.ValueChanged += new
System.EventHandler(this.trckBarTimer_ValueChanged);
//
// btnStop
//
this.btnStop.BackColor = System.Drawing.Color.Purple;
this.btnStop.Enabled = false;
this.btnStop.FlatStyle =
System.Windows.Forms.FlatStyle.Flat;
this.btnStop.Location = new System.Drawing.Point(264,
248);
this.btnStop.Name = "btnStop";
this.btnStop.Size = new System.Drawing.Size(48, 40);
this.btnStop.TabIndex = 8;
this.btnStop.Text = "St&op";
this.btnStop.Click += new
System.EventHandler(this.btnStop_Click);
//
// btnStart
//
this.btnStart.BackColor =
System.Drawing.Color.Purple;
this.btnStart.FlatStyle =
System.Windows.Forms.FlatStyle.Flat;
this.btnStart.Location = new
System.Drawing.Point(264, 40);
this.btnStart.Name = "btnStart";
this.btnStart.Size = new System.Drawing.Size(48, 40);
this.btnStart.TabIndex = 7;
this.btnStart.Text = "&Start";
this.btnStart.Click += new
System.EventHandler(this.btnStart_Click);
//
// pictureBox2
//
this.pictureBox2.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.pictureBox2.Location = new
System.Drawing.Point(8, 184);
this.pictureBox2.Name = "pictureBox2";
this.pictureBox2.Size = new System.Drawing.Size(240,
88);
this.pictureBox2.SizeMode =
System.Windows.Forms.PictureBoxSizeMode.StretchImage;
this.pictureBox2.TabIndex = 6;
this.pictureBox2.TabStop = false;
//
// lblSaveOrNot
//
this.lblSaveOrNot.Location = new
System.Drawing.Point(256, 208);
this.lblSaveOrNot.Name = "lblSaveOrNot";
this.lblSaveOrNot.Size = new System.Drawing.Size(80,
32);
this.lblSaveOrNot.TabIndex = 5;
//
// lblSavePercent
//
this.lblSavePercent.Location = new
System.Drawing.Point(344, 16);
this.lblSavePercent.Name = "lblSavePercent";
this.lblSavePercent.Size = new
System.Drawing.Size(40, 24);
this.lblSavePercent.TabIndex = 4;
//
// trckBarSaveValue
//
this.trckBarSaveValue.Enabled = false;
this.trckBarSaveValue.Location = new
System.Drawing.Point(344, 48);
this.trckBarSaveValue.Maximum = 100;
this.trckBarSaveValue.Name = "trckBarSaveValue";
this.trckBarSaveValue.Orientation =
System.Windows.Forms.Orientation.Vertical;
this.trckBarSaveValue.Size = new
System.Drawing.Size(42, 240);
this.trckBarSaveValue.TabIndex = 3;
this.trckBarSaveValue.ValueChanged += new
System.EventHandler(this.trckBarSaveValue_ValueChanged);
//
// lblPercent
//
this.lblPercent.Location = new
System.Drawing.Point(272, 120);
this.lblPercent.Name = "lblPercent";
this.lblPercent.Size = new System.Drawing.Size(40,
32);
this.lblPercent.TabIndex = 2;
//
// pictureBox1
//
this.pictureBox1.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.pictureBox1.Location = new
System.Drawing.Point(8, 96);
this.pictureBox1.Name = "pictureBox1";
this.pictureBox1.Size = new System.Drawing.Size(240,
88);
this.pictureBox1.SizeMode =
System.Windows.Forms.PictureBoxSizeMode.StretchImage;
this.pictureBox1.TabIndex = 1;
this.pictureBox1.TabStop = false;
//
// pictureBox
//
this.pictureBox.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.pictureBox.Location = new
System.Drawing.Point(8, 8);
this.pictureBox.Name = "pictureBox";
this.pictureBox.Size = new System.Drawing.Size(240,
88);
this.pictureBox.SizeMode =
System.Windows.Forms.PictureBoxSizeMode.StretchImage;
this.pictureBox.TabIndex = 0;
this.pictureBox.TabStop = false;
//
// timer1
//
this.timer1.Interval = 600;
this.timer1.Tick += new
System.EventHandler(this.timer1_Tick);
//
// groupBoxImageProcTime
//
this.groupBoxImageProcTime.Controls.Add(this.label4);
this.groupBoxImageProcTime.Controls.Add(this.lblTotalTime);
this.groupBoxImageProcTime.Controls.Add(this.label8);
this.groupBoxImageProcTime.Controls.Add(this.lblTime);
this.groupBoxImageProcTime.ForeColor =
System.Drawing.Color.DarkRed;
this.groupBoxImageProcTime.Location = new
System.Drawing.Point(8, 344);
this.groupBoxImageProcTime.Name =
"groupBoxImageProcTime";
this.groupBoxImageProcTime.Size = new
System.Drawing.Size(536, 56);
this.groupBoxImageProcTime.TabIndex = 17;
this.groupBoxImageProcTime.TabStop = false;
this.groupBoxImageProcTime.Text = "Image Processing Time";
//
// label4
//
this.label4.ForeColor = System.Drawing.Color.DarkRed;
this.label4.Location = new System.Drawing.Point(200,
24);
this.label4.Name = "label4";
this.label4.Size = new System.Drawing.Size(64, 24);
this.label4.TabIndex = 19;
this.label4.Text = "Total Time";
this.label4.TextAlign =
System.Drawing.ContentAlignment.MiddleLeft;
this.label4.UseMnemonic = false;
//
// lblTotalTime
//
this.lblTotalTime.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.lblTotalTime.ForeColor =
System.Drawing.Color.DarkRed;
this.lblTotalTime.Location = new
System.Drawing.Point(272, 24);
this.lblTotalTime.Name = "lblTotalTime";
this.lblTotalTime.Size = new System.Drawing.Size(240,
24);
this.lblTotalTime.TabIndex = 18;
this.lblTotalTime.TextAlign =
System.Drawing.ContentAlignment.MiddleCenter;
this.lblTotalTime.UseMnemonic = false;
//
// label8
//
this.label8.ForeColor = System.Drawing.Color.DarkRed;
this.label8.Location = new System.Drawing.Point(8,
24);
this.label8.Name = "label8";
this.label8.Size = new System.Drawing.Size(32, 24);
this.label8.TabIndex = 17;
this.label8.Text = "Time";
this.label8.TextAlign =
System.Drawing.ContentAlignment.MiddleLeft;
this.label8.UseMnemonic = false;
//
// lblTime
//
this.lblTime.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.lblTime.ForeColor =
System.Drawing.Color.DarkRed;
this.lblTime.Location = new System.Drawing.Point(48,
24);
this.lblTime.Name = "lblTime";
this.lblTime.Size = new System.Drawing.Size(136, 24);
this.lblTime.TabIndex = 16;
this.lblTime.TextAlign =
System.Drawing.ContentAlignment.MiddleCenter;
this.lblTime.UseMnemonic = false;
//
// lblCount
//
this.lblCount.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.lblCount.Font = new
System.Drawing.Font("Microsoft Sans Serif", 8.25F,
System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((System.Byte)(178)), true);
this.lblCount.ForeColor =
System.Drawing.Color.DarkRed;
this.lblCount.Location = new
System.Drawing.Point(576, 368);
this.lblCount.Name = "lblCount";
this.lblCount.Size = new System.Drawing.Size(80, 16);
this.lblCount.TabIndex = 18;
this.lblCount.TextAlign =
System.Drawing.ContentAlignment.MiddleCenter;
this.lblCount.UseMnemonic = false;
//
// btnAbout
//
this.btnAbout.BackColor =
System.Drawing.Color.Purple;
this.btnAbout.FlatStyle =
System.Windows.Forms.FlatStyle.Flat;
this.btnAbout.Location = new
System.Drawing.Point(672, 360);
this.btnAbout.Name = "btnAbout";
this.btnAbout.Size = new System.Drawing.Size(48, 40);
this.btnAbout.TabIndex = 19;
this.btnAbout.Text = "&About";
this.btnAbout.Click += new
System.EventHandler(this.btnAbout_Click);
//
// MainForm
//
this.AutoScaleBaseSize = new System.Drawing.Size(5,
13);
this.ClientSize = new System.Drawing.Size(774, 405);
this.Controls.Add(this.btnAbout);
this.Controls.Add(this.lblCount);
this.Controls.Add(this.groupBoxImageProcTime);
this.Controls.Add(this.stillPanel);
this.Controls.Add(this.splitter1);
this.Controls.Add(this.videoPanel);
this.Controls.Add(this.toolBar);
this.FormBorderStyle =
System.Windows.Forms.FormBorderStyle.Fixed3D;
this.Icon =
((System.Drawing.Icon)(resources.GetObject("$this.Icon")));
this.MaximizeBox = false;
this.Name = "MainForm";
this.StartPosition =
System.Windows.Forms.FormStartPosition.CenterScreen;
this.Text = "CatchItV";
this.Closing += new
System.ComponentModel.CancelEventHandler(this.MainForm_Closing);
this.Activated += new
System.EventHandler(this.MainForm_Activated);
this.stillPanel.ResumeLayout(false);
((System.ComponentModel.ISupportInitialize)(this.trckBarTimer)).EndInit();
((System.ComponentModel.ISupportInitialize)(this.trckBarSaveValue)).EndInit();
this.groupBoxImageProcTime.ResumeLayout(false);
this.ResumeLayout(false);
}
#endregion
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.Run(new MainForm());
}
private void MainForm_Closing(object sender,
System.ComponentModel.CancelEventArgs e)
{
this.Hide();
CloseInterfaces();
}
/// <summary>
/// Detect first form appearance, start grabber.
/// </summary>
private void MainForm_Activated(object sender, System.EventArgs
e)
{
if( firstActive )
return;
firstActive = true;
if( ! DsUtils.IsCorrectDirectXVersion() )
{
MessageBox.Show( this, "DirectX 8.1 NOT installed!",
"DirectShow.NET", MessageBoxButtons.OK, MessageBoxIcon.Stop );
this.Close(); return;
}
if( ! DsDev.GetDevicesOfCat(
FilterCategory.VideoInputDevice, out capDevices ) )
{
MessageBox.Show( this, "No video capture devices found!", "DirectShow.NET", MessageBoxButtons.OK, MessageBoxIcon.Stop );
this.Close(); return;
}
DsDevice dev = null;
if( capDevices.Count == 1 )
dev = capDevices[0] as DsDevice;
else
{
DeviceSelector selector = new DeviceSelector(
capDevices );
selector.ShowDialog( this );
dev = selector.SelectedDevice;
}
if( dev == null )
{
this.Close(); return;
}
if( ! StartupVideo( dev.Mon ) )
this.Close();
}
private void videoPanel_Resize(object sender, System.EventArgs e)
{
ResizeVideoWindow();
}
/// <summary> Handler for toolbar button clicks. </summary>
private void toolBar_ButtonClick(object sender,
System.Windows.Forms.ToolBarButtonClickEventArgs e)
{
Trace.WriteLine( "!!BTN: toolBar_ButtonClick" );
int hr;
if( sampGrabber == null )
return;
if( e.Button == toolBarBtnGrab )
{
Trace.WriteLine( "!!BTN: toolBarBtnGrab" );
if( savedArray == null )
{
int size = videoInfoHeader.BmiHeader.ImageSize;
if( (size < 1000) || (size > 16000000) )
return;
savedArray = new byte[ size + 64000 ];
}
toolBarBtnSave.Enabled = false;
Image old = pictureBox.Image;
pictureBox.Image = null;
if( old != null )
old.Dispose();
toolBarBtnGrab.Enabled = false;
captured = false;
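// SetCallback with 1 selects the BufferCB callback (0 would select SampleCB), so the
// next frame delivered by the sample grabber is copied into savedArray by BufferCB below.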
hr = sampGrabber.SetCallback( this, 1 );
}
else if( e.Button == toolBarBtnSave )
{
Trace.WriteLine( "!!BTN: toolBarBtnSave" );
SaveFileDialog sd = new SaveFileDialog();
sd.FileName = @"DsNET.bmp";
sd.Title = "Save Image as...";
sd.Filter = "Bitmap file (*.bmp)|*.bmp";
sd.FilterIndex = 1;
if( sd.ShowDialog() != DialogResult.OK )
return;
pictureBox.Image.Save( sd.FileName, ImageFormat.Bmp
); // save to new bmp file
}
else if( e.Button == toolBarBtnTune )
{
if( capGraph != null )
DsUtils.ShowTunerPinDialog( capGraph,
capFilter, this.Handle );
}
}
/// <summary> Capture event, triggered by buffer callback. </summary>
void OnCaptureDone()
{
Trace.WriteLine( "!!DLG: OnCaptureDone" );
try
{
toolBarBtnGrab.Enabled = true;
int hr;
if( sampGrabber == null )
return;
hr = sampGrabber.SetCallback( null, 0 );
int w = videoInfoHeader.BmiHeader.Width;
int h = videoInfoHeader.BmiHeader.Height;
if( ((w & 0x03) != 0) || (w < 32) || (w > 4096) || (h
< 32) || (h > 4096) )
return;
//get Image
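// The RGB24 frame arrives as a bottom-up DIB: scan0 is moved to the start of the last
// row and a negative stride is passed to Bitmap so the image is flipped right side up.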
int stride = w * 3;
GCHandle handle = GCHandle.Alloc( savedArray,
GCHandleType.Pinned );
int scan0 = (int) handle.AddrOfPinnedObject();
scan0 += (h - 1) * stride;
Bitmap b = new Bitmap( w, h, -stride,
PixelFormat.Format24bppRgb, (IntPtr) scan0 );
handle.Free();
pictureBox1.Image = b;
b=new Bitmap(pictureBox1.Image);
if(firstRun)
{
firstRun=false;
pictureBox.Image=b;
a=new Bitmap(b);
}
else
{
count++;
wanted=new Bitmap(b.Width,b.Height);
Int32 percent;
counter.Start();
ImageProcessing imageProcessing = new ImageProcessing(a, b, wanted);
imageProcessing.CompareUnsafeFaster(out percent);   // not required
percent = (Int32)(percent * 100 / (b.Width * b.Height));   // not required
counter.Stop();
//pictureBox2.Image=wanted;
if (!dontSaveNext)   // the (percent >= cPercent) && part of the condition was removed here
{
//imageProcessing.Save(Application.StartupPath+"\\wanted\\Wanted"+count+"-"+percent+"-"+System.DateTime.Now.Millisecond.ToString()+".jpg");
saveTime = count + "-" + percent + "-" + System.DateTime.Now.Millisecond.ToString() + ".jpg";
b.Save(Application.StartupPath + "\\Wanted\\first" + saveTime, System.Drawing.Imaging.ImageFormat.Jpeg);
a.Save(Application.StartupPath + "\\Wanted\\second" + saveTime, System.Drawing.Imaging.ImageFormat.Jpeg);
count2++;
// Microsoft.VisualBasic.Interaction.Beep();
dontSaveNext=true;
lblSaveOrNot.Text="don'tSaveNext";
//////
// public static bool EdgeDetectHorizontal(Bitmap b)
// {
Bitmap bmTemp = (Bitmap)b.Clone();
Bitmap c = (Bitmap)b.Clone();
Bitmap c1 = (Bitmap)b.Clone();
// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = c.LockBits(new Rectangle(0, 0,
b.Width, b.Height), ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
BitmapData bmData2 = bmTemp.LockBits(new Rectangle(0,
0, b.Width, b.Height), ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
BitmapData bmData3 = c1.LockBits(new Rectangle(0, 0,
b.Width, b.Height), ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
int stridee = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;
System.IntPtr Scan03 = bmData3.Scan0;
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
byte* p3 = (byte*)(void*)Scan03;
int nOffset = stridee - c.Width * 3;
int nWidth = c.Width * 3;
int nPixel = 0;
int nPixel1 = 255;
int x1 = 0, y2 = 0, yy = 0;
p += stridee;
p2 += stridee;
p3 += stridee;
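// Horizontal edge detection on the captured frame: the inner loop walks every byte
// (colour sample) of a scan line; for each position it sums seven samples of the same
// channel on the row below (byte offsets -9..+9, i.e. -3..+3 pixels) and subtracts the
// seven corresponding samples on the row above, approximating a vertical gradient.
// The result is clamped to [0, 255] and written into c. A run of four saturated
// responses (x1 >= 4) is taken as the material edge: its row index is stored in y2 and
// the position is marked white (nPixel1) in the third bitmap c1 via p3.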
for (int y = 1; y < c.Height - 1; ++y)
{
p += 9;
p2 += 9;
p3 += 9;
for (int x = 9; x < nWidth - 9; ++x)
{
nPixel = ((p2 + stridee - 9)[0] +
(p2 + stridee - 6)[0] +
(p2 + stridee - 3)[0] +
(p2 + stridee)[0] +
(p2 + stridee + 3)[0] +
(p2 + stridee + 6)[0] +
(p2 + stridee + 9)[0] -
(p2 - stridee - 9)[0] -
(p2 - stridee - 6)[0] -
(p2 - stridee - 3)[0] -
(p2 - stridee)[0] -
(p2 - stridee + 3)[0] -
(p2 - stridee + 6)[0] -
(p2 - stridee + 9)[0]);
if (nPixel < 0)
{ nPixel = 0; }
if (nPixel > 255)
{
nPixel = 255;
x1 = x1 + 1;
if (x1 >= 4)
{
y2 = y;
//for (int y3 = 0; y3 < nWidth - 9; ++y3)
//{
(p3 + stridee)[0] = (byte)nPixel1;
//Microsoft.VisualBasic.Interaction.Beep();
//}
x1 = 0;
}
}
(p + stridee)[0] = (byte)nPixel;
++p;
++p2;
++p3;
}
x1 = 0;
p = p + (9 + nOffset);
p2 = p2 + (9 + nOffset);
p3 = p3 + (9 + nOffset);
}
//**************************************
/* byte* p4 = (byte*)(void*)Scan02;
byte* p3 = (byte*)(void*)Scan03;
p4 += stridee;
p3 += stridee;
for (int yy = 1; yy < c1.Height - 1; ++yy)
{
p4 += 9;
p3 += 9;
for (int xx = 9; xx < nWidth - 9; ++xx)
{
if(yy == y2)
{ nPixel = 255;}
(p3 + stridee)[0] = (byte)nPixel;
++p4;
++p3;
}
//x1 = 0;
p4 = p4 + (9 + nOffset);
p3 = p3 + (9 + nOffset);
}
/* for (int y3 = y2; y3 < y2 + 2; y3++)
{
for (int xx = 0; xx < nWidth; xx++)
{
p2[0] = (byte) nPixel1;
Microsoft.VisualBasic.Interaction.Beep();
}
}*/
}
c.UnlockBits(bmData);
bmTemp.UnlockBits(bmData2);
c1.UnlockBits(bmData3);
///
c1.Save(Application.StartupPath + "\\Final\\second" +
saveTime, System.Drawing.Imaging.ImageFormat.Jpeg);
c.Save(Application.StartupPath + "\\Test\\second" +
saveTime, System.Drawing.Imaging.ImageFormat.Jpeg);
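// The gradient image (c) and the row-marked image (c1) are written to the Test and
// Final folders next to the raw frame pair, for offline inspection of the detected level.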
}
else
{
dontSaveNext=false;
lblSaveOrNot.Text="SaveNext";
}
this.lblTime.Text=counter.ToString();
sec+=counter.Seconds;
lblTotalTime.Text=sec.ToString()+" Seconds.";
counter.Clear();
lblCount.Text=count2+"/"+count;
lblPercent.Text=percent+"%";
// The code below computes the minimum save percentage,
// compensating for differences between webcams and daylight conditions.
if((percent-minSave)>5)
{
try
{
trckBarSaveValue.Value=percent+5;
minSave=percent;
}
catch{}
}
//else if(percent
try
{
minSave=(minSave+percent)/2;
trckBarSaveValue.Value=minSave+5;
}
catch{}
}
pictureBox.Image=b;
a=new Bitmap(pictureBox.Image);
savedArray=null;
}
catch
{
}
}
/// <summary> Start all the interfaces, graphs and preview window. </summary>
bool StartupVideo( UCOMIMoniker mon )
{
int hr;
try {
if( ! CreateCaptureDevice( mon ) )
return false;
if( ! GetInterfaces() )
return false;
if( ! SetupGraph() )
return false;
if( ! SetupVideoWindow() )
return false;
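// In debug builds the filter graph is registered in the Running Object Table below,
// so it can be inspected with GraphEdit while the application is running.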
#if DEBUG
DsROT.AddGraphToRot( graphBuilder, out
rotCookie ); // graphBuilder capGraph
#endif
hr = mediaCtrl.Run();
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
bool hasTuner = DsUtils.ShowTunerPinDialog( capGraph,
capFilter, this.Handle );
toolBarBtnTune.Enabled = hasTuner;
return true;
}
catch
{
return false;
}
}
/// <summary> Make the video preview window show inside videoPanel. </summary>
bool SetupVideoWindow()
{
int hr;
try {
// Set the video window to be a child of the main window
hr = videoWin.put_Owner( videoPanel.Handle );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
// Set video window style
hr = videoWin.put_WindowStyle( WS_CHILD |
WS_CLIPCHILDREN );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
// Use helper function to position video window in client rect of owner window
ResizeVideoWindow();
// Make the video window visible, now that it is properly positioned
hr = videoWin.put_Visible( DsHlp.OATRUE );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
hr = mediaEvt.SetNotifyWindow( this.Handle,
WM_GRAPHNOTIFY, IntPtr.Zero );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
return true;
}
catch
{
return false;
}
}
/// <summary> Build the capture graph for grabber. </summary>
bool SetupGraph()
{
int hr;
try {
hr = capGraph.SetFiltergraph( graphBuilder );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
hr = graphBuilder.AddFilter( capFilter, "Ds.NET Video Capture Device" );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
DsUtils.ShowCapPinDialog( capGraph, capFilter,
this.Handle );
AMMediaType media = new AMMediaType();
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatType = FormatType.VideoInfo;   // grabber expects a VIDEOINFOHEADER format
hr = sampGrabber.SetMediaType( media );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
hr = graphBuilder.AddFilter( baseGrabFlt, "Ds.NET Grabber" );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
Guid cat = PinCategory.Preview;
Guid med = MediaType.Video;
hr = capGraph.RenderStream( ref cat, ref med,
capFilter, null, null ); // baseGrabFlt
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
cat = PinCategory.Capture;
med = MediaType.Video;
hr = capGraph.RenderStream( ref cat, ref med,
capFilter, null, baseGrabFlt ); // baseGrabFlt
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
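// Two streams are rendered from the capture filter: the preview pin goes straight to
// the video window, while the capture pin is routed through the sample grabber
// (baseGrabFlt) so individual frames can be copied out for edge detection.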
media = new AMMediaType();
hr = sampGrabber.GetConnectedMediaType( media );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
if( (media.formatType != FormatType.VideoInfo) ||
(media.formatPtr == IntPtr.Zero) )
throw new NotSupportedException( "Unknown Grabber Media Format" );
videoInfoHeader = (VideoInfoHeader)
Marshal.PtrToStructure( media.formatPtr, typeof(VideoInfoHeader) );
Marshal.FreeCoTaskMem( media.formatPtr );
media.formatPtr = IntPtr.Zero;
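// The connected media type is read back so that videoInfoHeader holds the negotiated
// frame width, height and image size, which the grab handler and OnCaptureDone use later.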
hr = sampGrabber.SetBufferSamples( false );
if( hr == 0 )
hr = sampGrabber.SetOneShot( false );
if( hr == 0 )
hr = sampGrabber.SetCallback( null, 0 );
if( hr < 0 )
Marshal.ThrowExceptionForHR( hr );
return true;
}
catch
{
return false;
}
}
/// <summary> Create the used COM components and get the interfaces. </summary>
bool GetInterfaces()
{
Type comType = null;
object comObj = null;
try {
comType = Type.GetTypeFromCLSID( Clsid.FilterGraph );
if( comType == null )
throw new NotImplementedException( @"DirectShow FilterGraph not installed/registered!" );
comObj = Activator.CreateInstance( comType );
graphBuilder = (IGraphBuilder) comObj; comObj = null;
Guid clsid = Clsid.CaptureGraphBuilder2;
Guid riid = typeof(ICaptureGraphBuilder2).GUID;
comObj = DsBugWO.CreateDsInstance( ref clsid, ref
riid );
capGraph = (ICaptureGraphBuilder2) comObj; comObj =
null;
comType = Type.GetTypeFromCLSID( Clsid.SampleGrabber
);
if( comType == null )
throw new NotImplementedException( @"DirectShow SampleGrabber not installed/registered!" );
comObj = Activator.CreateInstance( comType );
sampGrabber = (ISampleGrabber) comObj; comObj = null;
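// The filter graph object also implements the control, video window and event
// interfaces, so the casts below obtain them from graphBuilder rather than creating
// separate COM objects.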
mediaCtrl = (IMediaControl) graphBuilder;
videoWin = (IVideoWindow) graphBuilder;
mediaEvt = (IMediaEventEx) graphBuilder;
baseGrabFlt = (IBaseFilter) sampGrabber;
return true;
}
catch
{
return false;
}
finally
{
if( comObj != null )
Marshal.ReleaseComObject( comObj ); comObj =
null;
}
}
/// <summary> Create the user selected capture device. </summary>
bool CreateCaptureDevice( UCOMIMoniker mon )
{
object capObj = null;
try {
Guid gbf = typeof( IBaseFilter ).GUID;
mon.BindToObject( null, null, ref gbf, out capObj );
capFilter = (IBaseFilter) capObj; capObj = null;
return true;
}
catch
{
return false;
}
finally
{
if( capObj != null )
Marshal.ReleaseComObject( capObj ); capObj =
null;
}
}
/// <summary> Do cleanup and release DirectShow. </summary>
void CloseInterfaces()
{
int hr;
try {
#if DEBUG
if( rotCookie != 0 )
DsROT.RemoveGraphFromRot( ref rotCookie
);
#endif
if( mediaCtrl != null )
{
hr = mediaCtrl.Stop();
mediaCtrl = null;
}
if( mediaEvt != null )
{
hr = mediaEvt.SetNotifyWindow( IntPtr.Zero,
WM_GRAPHNOTIFY, IntPtr.Zero );
mediaEvt = null;
}
if( videoWin != null )
{
hr = videoWin.put_Visible( DsHlp.OAFALSE );
hr = videoWin.put_Owner( IntPtr.Zero );
videoWin = null;
}
baseGrabFlt = null;
if( sampGrabber != null )
Marshal.ReleaseComObject( sampGrabber );
sampGrabber = null;
if( capGraph != null )
Marshal.ReleaseComObject( capGraph ); capGraph
= null;
if( graphBuilder != null )
Marshal.ReleaseComObject( graphBuilder );
graphBuilder = null;
if( capFilter != null )
Marshal.ReleaseComObject( capFilter );
capFilter = null;
if( capDevices != null )
{
foreach( DsDevice d in capDevices )
d.Dispose();
capDevices = null;
}
}
catch
{}
}
/// <summary> Resize preview video window to fill client area. </summary>
void ResizeVideoWindow()
{
if( videoWin != null )
{
Rectangle rc = videoPanel.ClientRectangle;
videoWin.SetWindowPosition( 0, 0, rc.Right, rc.Bottom
);
}
}
/// <summary> Override window fn to handle graph events. </summary>
protected override void WndProc( ref Message m )
{
if( m.Msg == WM_GRAPHNOTIFY )
{
if( mediaEvt != null )
OnGraphNotify();
return;
}
base.WndProc( ref m );
}
/// <summary> Graph event (WM_GRAPHNOTIFY) handler. </summary>
void OnGraphNotify()
{
DsEvCode code;
int p1, p2, hr = 0;
do
{
hr = mediaEvt.GetEvent( out code, out p1, out p2, 0
);
if( hr < 0 )
break;
hr = mediaEvt.FreeEventParams( code, p1, p2 );
}
while( hr == 0 );
}
/// <summary> Sample callback, NOT USED. </summary>
int ISampleGrabberCB.SampleCB( double SampleTime, IMediaSample
pSample )
{
Trace.WriteLine( "!!CB: ISampleGrabberCB.SampleCB" );
return 0;
}
/// <summary> Buffer callback, COULD BE FROM FOREIGN THREAD. </summary>
int ISampleGrabberCB.BufferCB( double SampleTime, IntPtr pBuffer,
int BufferLen )
{
if( captured || (savedArray == null) )
{
Trace.WriteLine( "!!CB: ISampleGrabberCB.BufferCB" );
return 0;
}
captured = true;
bufferedSize = BufferLen;
Trace.WriteLine( "!!CB: ISampleGrabberCB.BufferCB !GRAB!
size = " + BufferLen.ToString() );
if( (pBuffer != IntPtr.Zero) && (BufferLen > 1000) &&
(BufferLen <= savedArray.Length) )
Marshal.Copy( pBuffer, savedArray, 0, BufferLen );
else
Trace.WriteLine( " !!!GRAB! failed " );
this.BeginInvoke( new CaptureDone( this.OnCaptureDone ) );
return 0;
}
/// <summary> Flag to detect first Form appearance. </summary>
private bool firstActive;
/// <summary> Base filter of the actually used video devices. </summary>
private IBaseFilter capFilter;
/// <summary> Graph builder interface. </summary>
private IGraphBuilder graphBuilder;
/// <summary> Capture graph builder interface. </summary>
private ICaptureGraphBuilder2 capGraph;
private ISampleGrabber sampGrabber;
/// <summary> Control interface. </summary>
private IMediaControl mediaCtrl;
/// <summary> Event interface. </summary>
private IMediaEventEx mediaEvt;
/// <summary> Video window interface. </summary>
private IVideoWindow videoWin;
/// <summary> Grabber filter interface. </summary>
private IBaseFilter baseGrabFlt;
/// <summary> Structure describing the bitmap to grab. </summary>
private VideoInfoHeader videoInfoHeader;
private bool captured = true;
private int bufferedSize;
/// <summary> Buffer for bitmap data. </summary>
private byte[] savedArray;
/// <summary> List of installed video devices. </summary>
private ArrayList capDevices;
private const int WM_GRAPHNOTIFY = 0x00008001;   // message from graph
private const int WS_CHILD = 0x40000000;         // attributes for video window
private const int WS_CLIPCHILDREN = 0x02000000;
private const int WS_CLIPSIBLINGS = 0x04000000;
/// <summary> Event when callback has finished (ISampleGrabberCB.BufferCB). </summary>
private delegate void CaptureDone();
#if DEBUG
private int rotCookie = 0;
#endif
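// The timer below periodically simulates a click on the Grab toolbar button, so frames
// are captured and compared at the interval selected with trckBarTimer.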
private void timer1_Tick(object sender, System.EventArgs e)
{
System.Windows.Forms.ToolBarButtonClickEventArgs ee=new
ToolBarButtonClickEventArgs(toolBarBtnGrab);
toolBar_ButtonClick(toolBarBtnGrab,ee);
}
private void btnStart_Click(object sender, System.EventArgs e)
{
btnStart.Enabled=false;
btnStop.Enabled=true;
timer1.Enabled=true;
}
private void trckBarSaveValue_ValueChanged(object sender,
System.EventArgs e)
{
cPercent=trckBarSaveValue.Value;
lblSavePercent.Text=cPercent+"%";
}
private void btnStop_Click(object sender, System.EventArgs e)
{
btnStart.Enabled=true;
btnStop.Enabled=false;
timer1.Enabled=false;
firstRun=true;
}
private void btnAbout_Click(object sender, System.EventArgs e)
{
About about=new About();
about.ShowDialog(this);
}
private void trckBarTimer_ValueChanged(object sender,
System.EventArgs e)
{
timer1.Interval=trckBarTimer.Value;
lblTimerValue.Text=trckBarTimer.Value.ToString();
}
}
internal enum PlayState
{
Init, Stopped, Paused, Running
}
}
Abstract
The purpose of this research work is to devise a system able to measure minute changes in the flow of the viscous fluids employed in Contour Crafting. We have chosen image processing as the method for measuring the flow rate. Under normal circumstances, three-dimensional image processing is a complex, computationally heavy process; since our requirement includes supporting a real-time scenario, we need a customized solution that makes the processing not only fast but also real-time. We divide the problem into two parts