digital image processing full report
computer science technology

Posts: 740
Joined: Jan 2010
#1
24-01-2010, 10:40 PM



.doc   DIGITAL IMAGE PROCESSING full report.DOC (Size: 61.5 KB / Downloads: 1,976)

A
Paper Presentation
On
DIGITAL IMAGE PROCESSING

Abstract:
Over the past dozen years forensic and medical applications of technology first developed to record and transmit pictures from outer space have changed the way we see things here on earth, including Old English manuscripts. With their talents combined, an electronic camera designed for use with documents and a digital computer can now frequently enhance the legibility of formerly obscure or even invisible texts. The computer first converts the analogue image, in this case a videotape, to a digital image by dividing it into a microscopic grid and numbering each part by its relative brightness. Specific image processing programs can then radically improve the contrast, for example by stretching the range of brightness throughout the grid from black to white, emphasizing edges, and suppressing random background noise that comes from the equipment rather than the document. Applied to some of the most illegible passages in the Beowulf manuscript, this new technology indeed shows us some things we had not seen before and forces us to reconsider some established readings.
Introduction to Digital Image Processing:
• Vision allows humans to perceive and understand the world surrounding us.
• Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image.
• Giving computers the ability to see is not an easy task. We live in a three-dimensional (3D) world, but when computers try to analyze objects in 3D space, the available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information.
• To simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image understanding.
• Low-level processing uses very little knowledge about the content of images.
• High-level processing is based on knowledge, goals, and plans of how to achieve those goals. Artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image.
• This course deals almost exclusively with low-level image processing; high-level image processing is discussed in the course Image Analysis and Understanding, which is a continuation of this course.
History:
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, University of Maryland, and a few other places, with application to satellite imagery, wire photo standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated as cheaper computers became available. Digitizing means creating a film or electronic image of any picture or paper form. It is accomplished by scanning or photographing an object and turning it into a matrix of dots (a bitmap), the meaning of which is unknown to the computer, only to the human viewer. Scanned images of text may be encoded into computer data (ASCII or EBCDIC) with page recognition software (OCR).
Basic Concepts:
• A signal is a function depending on some variable with physical meaning.
• Signals can be
o One-dimensional (e.g., dependent on time),
o Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
o Three-dimensional (e.g., describing an object in space),
o Or higher dimensional.
Pattern recognition is a field within the area of machine learning. Alternatively, it can be defined as "the act of taking in raw data and taking an action based on the category of the data" [1]. As such, it is a collection of methods for supervised learning.
Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space.
Image functions:
• The image can be modeled by a continuous function of two or three variables;
• arguments are co-ordinates x, y in a plane, while if images change in time a third variable t might be added.
• The image function values correspond to the brightness at image points.
• The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.).
• The brightness integrates different optical quantities - using brightness as a basic quantity allows us to avoid the description of the very complicated process of image formation.
• The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing information about brightness points an intensity image.
• The real world, which surrounds us, is intrinsically 3D.
• The 2D intensity image is the result of a perspective projection of the 3D scene.
• When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears, as such a transformation is not one-to-one.
• Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem.
• Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision.
• The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which is dependent on a number of independent factors such as
o object surface reflectance properties (given by the surface material, microstructure and marking),
o illumination properties,
o and object surface orientation with respect to a viewer and light source.
Digital image properties:
Metric properties of digital images:
• Distance is an important example.
• The distance between two pixels in a digital image is a significant quantitative measure.
o Euclidean distance (Eq. 2.42): D_E((i,j),(h,k)) = sqrt((i-h)^2 + (j-k)^2)
o City block distance (Eq. 2.43): D_4((i,j),(h,k)) = |i-h| + |j-k|
o Chessboard distance (Eq. 2.44): D_8((i,j),(h,k)) = max(|i-h|, |j-k|)
• Pixel adjacency is another important concept in digital images.
• 4-neighborhood
• 8-neighborhood
• It will become necessary to consider important sets consisting of several adjacent pixels -- regions.
• A region is a contiguous set.
• Contiguity paradoxes of the square grid
• One possible solution to contiguity paradoxes is to treat objects using 4-neighborhood and background using 8-neighborhood (or vice versa).
• A hexagonal grid solves many problems of the square grids ... any point in the hexagonal raster has the same distance to all its six neighbors.
• The border of a region R is the set of pixels within the region that have one or more neighbors outside R ... inner borders and outer borders exist.
• An edge is a local property of a pixel and its immediate neighborhood -- it is a vector given by a magnitude and direction.
• The edge direction is perpendicular to the gradient direction, which points in the direction of image function growth.
• Border and edge ... the border is a global concept related to a region, while an edge expresses local properties of an image function.
• Crack edges ... four crack edges are attached to each pixel, defined by its relation to its 4-neighbors. The direction of a crack edge is that of increasing brightness and is a multiple of 90 degrees, while its magnitude is the absolute difference between the brightness of the relevant pair of pixels. (Fig. 2.9)
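The two pixel neighbourhoods above can be sketched in a few lines (row/column coordinate convention assumed):

```python
def neighbors_4(i, j):
    """4-neighborhood: the pixels sharing an edge with (i, j)."""
    return [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]

def neighbors_8(i, j):
    """8-neighborhood: the 4-neighbors plus the four diagonal pixels."""
    return neighbors_4(i, j) + [(i - 1, j - 1), (i - 1, j + 1),
                                (i + 1, j - 1), (i + 1, j + 1)]
```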
Topological properties of digital images
• Topological properties of images are invariant to rubber-sheet transformations. Stretching does not change the contiguity of object parts and does not change the number of holes in regions.
• One such image property is the Euler-Poincare characteristic, defined as the difference between the number of regions and the number of holes in them.
• The convex hull is used to describe topological properties of objects.
• The convex hull is the smallest region which contains the object, such that any two points of the region can be connected by a straight line, all points of which belong to the region.
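The Euler-Poincare characteristic can be illustrated with a small flood-fill sketch, using 4-adjacency for objects and 8-adjacency for the background as suggested above (all names here are illustrative):

```python
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def count_components(cells, adjacency):
    """Count connected components of a set of (i, j) grid cells by flood fill."""
    cells = set(cells)
    seen, count = set(), 0
    for start in cells:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            for di, dj in adjacency:
                nxt = (i + di, j + dj)
                if nxt in cells and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return count

def euler_characteristic(image):
    """Number of regions minus number of holes; objects use 4-adjacency,
    background uses 8-adjacency (one cure for the contiguity paradox)."""
    rows, cols = len(image), len(image[0])
    fg = {(i, j) for i in range(rows) for j in range(cols) if image[i][j]}
    # Pad the background by one pixel so everything outside is one component.
    bg = {(i, j) for i in range(-1, rows + 1)
          for j in range(-1, cols + 1)} - fg
    regions = count_components(fg, N4)
    holes = count_components(bg, N8) - 1   # minus the outer background
    return regions - holes

# A ring: one region enclosing one hole, so the characteristic is 0.
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
```

Stretching the ring (a rubber-sheet transformation) would not change this value, which is the point of a topological property.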
Uses
A scalar function may be sufficient to describe a monochromatic image, while vector functions are to represent, for example, color images consisting of three component colors.
CONCLUSION
Further, surveillance by humans depends on the quality of the human operator, and factors like operator fatigue and negligence may lead to degraded performance. These factors make an intelligent vision system a better option, as in systems that use gait signatures for recognition or in-vehicle video sensors for driver assistance.
project report tiger

Posts: 1,062
Joined: Feb 2010
#2
01-03-2010, 11:19 PM


.ppt   DITITAL IMAGE PROCESSING.ppt (Size: 252 KB / Downloads: 5,585)

DIGITAL IMAGE PROCESSING


INTRODUCTION

Pictures are the most common and convenient means of conveying or transmitting information.
A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and inter-relationships between objects. They portray spatial information that we can recognize as objects.
Human beings are good at deriving information from such images because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form.

DIGITAL IMAGE

A digital image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each K bands of imagery.
Each pixel is associated with a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene (Fig. 1).
A smaller number indicates low average radiance from the area, while a higher number indicates high radiance from the area.

COLOR COMPOSITES

When the different bands of a multispectral data set are displayed in image planes other than their own, the resulting color composite is regarded as a False Color Composite (FCC).
A color infrared composite ('standard false color composite') is displayed by placing the infrared, red and green bands in the red, green and blue frame buffer memory, respectively (Fig. 2).

IMAGE RECTIFICATION

Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to its absolute position within some defined map projection.
If left uncorrected, these geometric distortions render any data extracted from the image useless.

REASONS FOR DISTORTIONS

For instance distortions occur due to changes in platform attitude (roll, pitch and yaw), altitude, earth rotation, earth curvature, panoramic distortion and detector delay.
Rectification is a process of geometrically correcting an image so that it can be represented on a planar surface (Fig. 3).

IMAGE ENHANCEMENT TECHNIQUES

Image enhancement techniques improve the quality of an image as perceived by a human. Two such techniques are:
Spatial Filtering Technique
Contrast Stretch

Contrast

Contrast generally refers to the difference in luminance or grey level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image.

Contrast Enhancement

Contrast enhancement techniques expand the range of brightness values in an image so that the image can be efficiently displayed in a manner desired by the analyst

Linear Contrast Stretch

In this algorithm, the grey values in the original image and the modified image follow a linear relation.
A density number in the low range of the original histogram is assigned to extreme black, and a value at the high end is assigned to extreme white.
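A minimal sketch of such a stretch over an 8-bit range (using the plain min/max endpoints; real systems often clip a small percentile instead):

```python
def linear_stretch(pixels):
    """Map the input range [lo, hi] linearly onto the full 0-255 range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:               # flat image: nothing to stretch
        return [0 for _ in pixels]
    scale = 255.0 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

dark = [50, 60, 70, 80, 90, 100]   # grey values of a low-contrast image
stretched = linear_stretch(dark)
```

The narrow range 50-100 is expanded to the full 0-255 range while the ordering of grey values is preserved.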

SPATIAL FILTERING

Low-Frequency Filtering in the Spatial Domain
Image enhancements that de-emphasize or block the high spatial frequency detail are low-frequency or low-pass filters.
The simple smoothing operation will, however, blur the image, especially at the edges of objects.

High-Frequency Filtering in the Spatial Domain
High-pass filtering is applied to imagery to remove the slowly varying components and enhance the high-frequency local variations
Thus, the high-frequency filtered image will have a relatively narrow intensity histogram
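Both kinds of spatial filter can be sketched with a 3x3 mean kernel; the high-pass image is formed here as original minus smoothed, one simple variant (border pixels are left untouched for brevity):

```python
def mean_filter(img):
    """3x3 mean (low-pass) filter; the one-pixel border is left unchanged."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

def high_pass(img):
    """High-frequency content as original minus low-pass."""
    low = mean_filter(img)
    return [[img[i][j] - low[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# A vertical step edge between a dark and a bright region.
step = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
blurred = mean_filter(step)
hp = high_pass(step)
```

On the step image the low-pass output smears the edge (0, 3, 6, 9 across the transition), while the high-pass output is zero in flat areas and nonzero only near the edge, which is why its intensity histogram is narrow.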


CONCLUSIONS

So, with the above stages and techniques, a digital image can be made noise-free and it can be made available in any desired format (X-rays, photo negatives, improved images, etc.).
project topics

Posts: 2,492
Joined: Mar 2010
#3
12-04-2010, 09:04 PM

Abstract

Digital image processing techniques are generally more versatile, reliable, and accurate; they have the additional benefit of being easier to implement than their analog counterparts. Digital computers are used to process the image: the image is converted to digital form using a digitizer and then processed. Today, hardware solutions are commonly used in video processing systems. However, commercial image processing tasks are more commonly done by software running on conventional personal computers.

In this paper we have presented the stages of image processing, commonly used image processing techniques (two dimensional), Digital image editing and image editor features and some more.

Overall, image processing is a good option that deserves a careful look. The statement "Image processing has revolutionized the world we live in" fits exactly because of the diverse applications of image processing in various fields. We hope that by going through this paper one can get a brief idea of image processing.
project topics

Posts: 2,492
Joined: Mar 2010
#4
21-04-2010, 10:56 AM


.doc   Image processing.doc (Size: 570.5 KB / Downloads: 563)

IMAGE PROCESSING

PRESENTED BY:
1. T. Krishna Kanth
2. N.V. Ram Kishore
D.V.R. College of Engineering and Technology.
Kandi, Hyderabad
Andhra Pradesh.


ABSTRACT:

An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization procedure can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image is digitized, two things are most important: it should be stored or transmitted with a minimum number of bits, and it should be restored with maximum clarity. This leads to the various image processing operations.
Image processing operations are divided into three major categories: Image Compression, Image Enhancement and Restoration, and Measurement Extraction. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image, whereas Image Enhancement and Restoration deals with recovering the image.
This paper deals with Image Enhancement and Restoration, which helps restore an image with maximum clarity and enhance its quality.
The first section describes what Image Enhancement and Restoration is, the second section covers the techniques used for Image Enhancement and Restoration, and the final section describes the advantages and disadvantages of using these techniques.



A Short Introduction to Digital Image Processing
An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization procedure can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations.
Image processing operations can be roughly divided into three major categories
Image Compression
Image Enhancement and Restoration
Measurement Extraction
Image compression is familiar to most people. It involves reducing the amount of memory needed to store a digital image.
Image defects which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image.
Some examples of Image Enhancement and Measurement Extraction are given below. The examples shown all operate on 256 grey-scale images. This means that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel, and values in between represent shades of grey. These operations can be extended to operate on colour images.
The examples below represent only a few of the many techniques available for operating on images. Details about the inner workings of the operations have not been given, but some references to books containing this information are given at the end for the interested reader.
Image Enhancement and Restoration
The image at the left of Figure 1 has been corrupted by noise during the digitization process. The 'clean' image at the right of Figure 1 was obtained by applying a median filter to the image.
Figure 1. Application of the median filter
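A sketch of the median filter used for Figure 1 (3x3 window; borders are copied unchanged for brevity):

```python
from statistics import median

def median_filter(img):
    """3x3 median filter; the one-pixel border is copied unchanged."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = median(img[i + di][j + dj]
                               for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return out

noisy = [[10, 10, 10, 10],
         [10, 255, 10, 10],   # a single impulse-noise ("salt") pixel
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
clean = median_filter(noisy)
```

The impulse pixel is an outlier in every window that contains it, so the median removes it completely, whereas a mean filter would only spread it out.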
An image with poor contrast, such as the one at the left of Figure 2, can be improved by adjusting the image histogram to produce the image shown at the right of Figure 2.
Figure 2. Adjusting the image histogram to improve image contrast
The image at the top left of Figure 3 has a corrugated effect due to a fault in the acquisition process. This can be removed by doing a 2-dimensional Fast Fourier Transform on the image (top right of Figure 3), removing the bright spots (bottom left of Figure 3), and finally doing an inverse Fast Fourier Transform to return to the original image without the corrugated background (bottom right of Figure 3).
Figure 3. Application of the 2-dimensional Fast Fourier Transform
An image which has been captured in poor lighting conditions, and shows a continuous change in the background brightness across the image (top left of Figure 4) can be corrected using the following procedure. First remove the foreground objects by applying a 25 by 25 greyscale dilation operation (top right of Figure 4). Then subtract the original image from the background image (bottom left of Figure 4). Finally invert the colors and improve the contrast by adjusting the image histogram (bottom right of Figure 4)
Figure 4. Correcting for a background gradient
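The background-correction recipe above can be sketched in one dimension (a single image row; the 25 by 25 dilation becomes a length-5 running maximum, and all sizes and values here are illustrative):

```python
def grey_dilate(row, size):
    """Greyscale dilation of a 1-D signal: a running maximum over a window."""
    half, n = size // 2, len(row)
    return [max(row[max(0, k - half):min(n, k + half + 1)]) for k in range(n)]

def remove_gradient(row, size=5):
    """Estimate the bright background by dilation, then subtract the image,
    leaving dark objects as bright peaks on a roughly flat background."""
    background = grey_dilate(row, size)
    return [b - p for p, b in zip(row, background)]

# A brightness ramp (uneven lighting) with two dark objects on it.
lit = [100 + 10 * k for k in range(8)]
lit[2] -= 50
lit[6] -= 50
flat = remove_gradient(lit)
```

After subtraction the two dark objects stand out as the largest values, while the illumination ramp is largely gone.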
Image Measurement Extraction
The example below demonstrates how one could go about extracting measurements from an image. The image at the top left of Figure 5 shows some objects. The aim is to extract information about the distribution of the sizes (visible areas) of the objects. The first step involves segmenting the image to separate the objects of interest from the background. This usually involves thresholding the image, which is done by setting the values of pixels above a certain threshold value to white, and all the others to black (top right of Figure 5). Because the objects touch, thresholding at a level which includes the full surface of all the objects does not show separate objects. This problem is solved by performing a watershed separation on the image (lower left of Figure 5). The image at the lower right of Figure 5 shows the result of performing a logical AND of the two images at the left of Figure 5. This shows the effect that the watershed separation has on touching objects in the original image.
Finally, some measurements can be extracted from the image. Figure 6 is a histogram showing the distribution of the area measurements. The areas were calculated based on the assumption that the width of the image is 28 cm.
Figure 5. Thresholding an image and applying a Watershed Separation Filter
Figure 6. Histogram showing the Area Distribution of the Objects
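The thresholding step of the segmentation above, as a minimal sketch (the watershed separation is much more involved and is omitted):

```python
def threshold(img, t):
    """Pixels above t become white (255), all others black (0)."""
    return [[255 if p > t else 0 for p in row] for row in img]

grey = [[12, 200, 30],
        [180, 90, 220]]
binary = threshold(grey, 100)
```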
Basic Enhancement and Restoration Techniques
¢ Unsharp masking
¢ Noise suppression
¢ Distortion suppression
The process of image acquisition frequently leads (inadvertently) to image degradation. Due to mechanical problems, out-of-focus blur, motion, inappropriate illumination, and noise, the quality of the digitized image can be inferior to the original. The goal of enhancement is -- starting from a recorded image c[m,n] -- to produce the most visually pleasing image â[m,n]. The goal of restoration is -- starting from a recorded image c[m,n] -- to produce the best possible estimate â[m,n] of the original image a[m,n]. The goal of enhancement is beauty; the goal of restoration is truth.
The measure of success in restoration is usually an error measure between the original a[m,n] and the estimate â[m,n]: E{â[m,n], a[m,n]}. No mathematical error function is known that corresponds to human perceptual assessment of error. The mean-square error function is commonly used because:
1. It is easy to compute;
2. It is differentiable implying that a minimum can be sought;
3. It corresponds to "signal energy" in the total error, and;
4. It has nice properties vis à vis Parseval's theorem, eqs. (22) and (23).
The mean-square error is defined by:
E = (1/MN) Σ_{m=0..M-1} Σ_{n=0..N-1} (â[m,n] - a[m,n])²
In some techniques an error measure will not be necessary; in others it will be essential for evaluation and comparative purposes.
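The mean-square error between an estimate â and an original a can be computed directly:

```python
def mse(estimate, original):
    """Mean-square error E between an estimate and the original image."""
    diffs = [(e - o) ** 2
             for row_e, row_o in zip(estimate, original)
             for e, o in zip(row_e, row_o)]
    return sum(diffs) / len(diffs)

a = [[10, 20], [30, 40]]       # original a[m,n]
a_hat = [[12, 18], [30, 44]]   # estimate â[m,n]
```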
Unsharp masking
A well-known technique from photography to improve the visual quality of an image is to enhance the edges of the image. The technique is called unsharp masking. Edge enhancement means first isolating the edges in an image, amplifying them, and then adding them back into the image. Examination of Figure 33 shows that the Laplacian is a mechanism for isolating the gray level edges. This leads immediately to the technique:
â[m,n] = a[m,n] - k · (Laplacian of a)[m,n]
The term k is the amplifying term and k > 0. The effect of this technique is shown in Figure 48.
The Laplacian used to produce Figure 48 is given by eq. (120) and the amplification term k = 1.
Original Laplacian-enhanced
Figure 48: Edge enhanced compared to original
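A 1-D sketch of unsharp masking along these lines (discrete Laplacian, amplification by k, then subtraction; endpoints are left alone):

```python
def laplacian_1d(signal):
    """Discrete Laplacian s[i-1] - 2*s[i] + s[i+1]; endpoints set to 0."""
    n = len(signal)
    return [0 if i in (0, n - 1)
            else signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(n)]

def unsharp(signal, k=1.0):
    """Subtract the amplified Laplacian, as in the technique above (k = 1)."""
    return [s - k * l for s, l in zip(signal, laplacian_1d(signal))]

edge = [10, 10, 10, 50, 90, 90, 90]   # a soft ramp edge
sharp = unsharp(edge)
```

The undershoot and overshoot on either side of the ramp (-30 and 130) are what make the edge look crisper; in practice the result is clipped back to the valid grey range.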
Noise suppression
The techniques available to suppress noise can be divided into those techniques that are based on temporal information and those that are based on spatial information. By temporal information we mean that a sequence of images {ap[m,n] | p=1,2,...,P} are available that contain exactly the same objects and that differ only in the sense of independent noise realizations. If this is the case and if the noise is additive, then simple averaging of the sequence:
Temporal averaging: â[m,n] = (1/P) Σ_{p=1..P} a_p[m,n]
will produce a result where the mean value of each pixel will be unchanged. For each pixel, however, the standard deviation will decrease from σ to σ/√P.
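A quick numerical check of the √P claim with synthetic Gaussian noise (P = 16 frames of a constant scene; all numbers are illustrative):

```python
import random
random.seed(0)

def noisy_frame(value, sigma, n):
    """One frame of a constant scene plus independent Gaussian noise."""
    return [value + random.gauss(0.0, sigma) for _ in range(n)]

def average(frames):
    """Pixel-wise temporal average of a stack of frames."""
    return [sum(px) / len(frames) for px in zip(*frames)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

P, sigma, n = 16, 8.0, 4000
frames = [noisy_frame(100.0, sigma, n) for _ in range(P)]
avg = average(frames)
```

With P = 16 the noise standard deviation drops from about 8 to about 2, a factor of √16 = 4, while the mean stays at 100.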
If temporal averaging is not possible, then spatial averaging can be used to decrease the noise. This generally occurs, however, at a cost to image sharpness. Four obvious choices for spatial averaging are the smoothing algorithms that have been described in Section 9.4 - Gaussian filtering (eq. (93)), median filtering, Kuwahara filtering, and morphological smoothing (eq. ).
Within the class of linear filters, the optimal filter for restoration in the presence of noise is given by the Wiener filter. The word "optimal" is used here in the sense of minimum mean-square error (mse). Because the square root operation is monotonic increasing, the optimal filter also minimizes the root mean-square error (rms). The Wiener filter is characterized in the Fourier domain, and for additive noise that is independent of the signal it is given by:
W(u,v) = Saa(u,v) / (Saa(u,v) + Snn(u,v))
where Saa(u,v) is the power spectral density of an ensemble of random images {a[m,n]} and Snn(u,v) is the power spectral density of the random noise. If we have a single image, then Saa(u,v) = |A(u,v)|². In practice it is unlikely that the power spectral density of the uncontaminated image will be available. Because many images have a similar power spectral density that can be modeled by Table 4-T.8, that model can be used as an estimate of Saa(u,v).
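A toy 1-D Wiener filter sketch (a slow direct DFT is used for self-containment, and the clean signal is assumed known here purely so the single-image estimate Saa = |A(u)|² can be formed):

```python
import cmath, math, random
random.seed(1)

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * u * t / n)
                for t in range(n)) for u in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[u] * cmath.exp(2j * cmath.pi * u * t / n)
                for u in range(n)).real / n for t in range(n)]

def wiener_denoise(noisy, Saa, Snn):
    """Apply W(u) = Saa(u) / (Saa(u) + Snn) in the Fourier domain."""
    C = dft(noisy)
    return idft([Saa[u] / (Saa[u] + Snn) * C[u] for u in range(len(C))])

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

n, sigma = 64, 2.0
clean = [10.0 * math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
noisy = [c + random.gauss(0.0, sigma) for c in clean]
Saa = [abs(a) ** 2 for a in dft(clean)]   # single-image estimate |A(u)|^2
Snn = n * sigma ** 2                      # white-noise power per DFT bin
denoised = wiener_denoise(noisy, Saa, Snn)
```

Because the signal energy sits in two DFT bins where Saa >> Snn, the filter passes those bins almost unchanged and suppresses everything else, so the error drops well below that of the noisy input.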
A comparison of the five different techniques described above is shown in Figure 49. The Wiener filter was constructed directly from eq. because the image spectrum and the noise spectrum were known. The parameters for the other filters were determined by choosing the value (either σ or window size) that led to the minimum rms.

a) Noisy image (SNR = 20 dB): rms = 25.7; b) Wiener filter: rms = 20.2; c) Gauss filter (σ = 1.0): rms = 21.1

d) Kuwahara filter (5 x 5): rms = 22.4; e) Median filter (3 x 3): rms = 22.6; f) Morph. smoothing (3 x 3): rms = 26.2
Figure 49: Noise suppression using various filtering techniques.
The root mean-square errors (rms) associated with the various filters are shown in Figure 49. For this specific comparison, the Wiener filter generates a lower error than any of the other procedures that are examined here. The two linear procedures, Wiener filtering and Gaussian filtering, performed slightly better than the three non-linear alternatives.
Distortion suppression
The model presented above -- an image distorted solely by noise -- is not, in general, sophisticated enough to describe the true nature of distortion in a digital image. A more realistic model includes not only the noise but also a model for the distortion induced by lenses, finite apertures, possible motion of the camera and/or an object, and so forth. One frequently used model is of an image a[m,n] distorted by a linear, shift-invariant system ho[m,n] (such as a lens) and then contaminated by noise n[m,n]. Various aspects of ho[m,n] and n[m,n] have been discussed in earlier sections. The most common combination of these is the additive model:
c[m,n] = (a ⊗ ho)[m,n] + n[m,n]
where ⊗ denotes convolution.
The restoration procedure that is based on linear filtering coupled to a minimum mean-square error criterion again produces a Wiener filter:
W(u,v) = Ho*(u,v) Saa(u,v) / (|Ho(u,v)|² Saa(u,v) + Snn(u,v))
Once again Saa(u,v) is the power spectral density of an image, Snn(u,v) is the power spectral density of the noise, and Ho(u,v) = F{ho[m,n]}. Examination of this formula for some extreme cases can be useful. For those frequencies where Saa(u,v) >> Snn(u,v), where the signal spectrum dominates the noise spectrum, the Wiener filter is given by 1/Ho(u,v), the inverse filter solution. For those frequencies where Saa(u,v) << Snn(u,v), where the noise spectrum dominates the signal spectrum, the Wiener filter is proportional to Ho*(u,v), the matched filter solution. For those frequencies where Ho(u,v) = 0, the Wiener filter W(u,v) = 0 prevents overflow.
The Wiener filter is a solution to the restoration problem based upon the hypothesized use of a linear filter and the minimum mean-square (or rms) error criterion. In the example below the image a[m,n] was distorted by a bandpass filter and then white noise was added to achieve an SNR = 30 dB. The results are shown in Figure 50.

a) Distorted, noisy image b) Wiener filter c) Median filter (3 x 3)
rms = 108.4 rms = 40.9
Figure 50: Noise and distortion suppression using the Wiener filter and the median filter.
The rms after Wiener filtering but before contrast stretching was 108.4; after contrast stretching with eq. (77) the final result as shown in Figure 50b has a mean-square error of 27.8. Using a 3 x 3 median filter as shown in Figure 50c leads to a rms error of 40.9 before contrast stretching and 35.1 after contrast stretching. Although the Wiener filter gives the minimum rms error over the set of all linear filters, the non-linear median filter gives a lower rms error. The operation contrast stretching is itself a non-linear operation. The "visual quality" of the median filtering result is comparable to the Wiener filtering result. This is due in part to periodic artifacts introduced by the linear filter which are visible in Figure 50b.
CONCLUSION:
Audio streams contain extremely valuable data whose content is rich and diverse. The combination of audio and image techniques will definitely generate interesting results, and very likely improve the quality of the present analysis.
REFERENCES:
[1] Andrews, Computer Techniques in Image Processing, 1970.
[2] Article in the November issue of the journal Electronics Today.
[3] Article in the January issue of the journal Electronics For You.
[4] Andrews, Digital Image Restoration, 1977.
[5] Rafael Gonzalez and Richard Woods, Digital Image Processing, 2003.
project report helper

Posts: 2,270
Joined: Sep 2010
#5
22-09-2010, 02:49 PM

project report helper

Posts: 2,270
Joined: Sep 2010
#6
06-10-2010, 10:46 AM



.pptx   DIGITAL IMAGE PROCESSING.pptx (Size: 41.13 KB / Downloads: 152)

DIGITAL IMAGE PROCESSING


project report helper
Active In SP
**

Posts: 2,270
Joined: Sep 2010
#7
12-10-2010, 12:27 PM



Image processing

Monochrome black/white image
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical and analog image processing also are possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

for more: en.wikipedia.org/wiki/Computer_science
project report helper
Active In SP
**

Posts: 2,270
Joined: Sep 2010
#8
13-10-2010, 05:17 PM


.ppt   imageprocessing.ppt (Size: 728.5 KB / Downloads: 194)
Image Processing
presented by
Yunus..sonu (ece) Abhinav(ece)...
RAMAPPA
ENGINEERING COLLEGE

Introduction
This paper highlights information regarding image processing and the drawbacks of passwords. These drawbacks can be minimized by the use of biometrics, and applications of biometrics will be discussed.
Biometrics turns your body into your password, so it is going to be the next generation's powerful security tool…!



seminar surveyer
Active In SP
**

Posts: 3,541
Joined: Sep 2010
#9
18-10-2010, 02:23 PM


PRESENTED BY:
T.VAMSHI


.ppt   image processing.ppt (Size: 792 KB / Downloads: 180)


Introduction

“Morphing” is an interpolation technique used to create a series of intermediate objects from two objects.
“The face - morphing algorithm” automatically extracts feature points on the face and morphing is performed.
This algorithm, proposed by M. Biesel, works within a Bayesian framework to do automatic face morphing.

Pre – Processing


removing noisy backgrounds,

clipping to get a proper facial image, and

scaling the image to a reasonable size.
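The scaling step above can be sketched as a nearest-neighbour downscale, which simply keeps every n-th pixel in each direction. The function name and pixel values are illustrative assumptions:

```python
def downscale(image, factor):
    """Nearest-neighbour downscaling: keep every factor-th pixel
    along both the rows and the columns."""
    return [row[::factor] for row in image[::factor]]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(downscale(img, 2))  # [[1, 3], [9, 11]]
```

Real pre-processing pipelines usually average neighbourhoods instead of dropping pixels, but the sketch shows the basic idea of reducing an image to a reasonable size.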
project report helper
Active In SP
**

Posts: 2,270
Joined: Sep 2010
#10
20-10-2010, 12:13 PM


.ppt   DIP.ppt (Size: 580 KB / Downloads: 165)
digital image processing full report

PRESENTED BY
S.Sudeepthi
T.V.L.Anusha



DIGITAL IMAGE

A two-dimensional representation of values.
These values are called "pixels".
Pixels are stored in computer memory.


DIGITAL IMAGE PROCESSING

Processing of a digital image done using computer software.

Avoids build-up of noise and signal distortion


How can we process an Image?

Transfer image to a computer
Digitize the image
* Digitization – translating the image into a numerical code understood by the computer.
Processing can be done through software programs in a "Digital Darkroom"
The image is broken down into thousands of pixels
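The digitization step above can be sketched as sampling a continuous brightness function on a grid and quantizing each sample to an 8-bit code. The scene function below is a made-up stand-in for a real image:

```python
import math

def a(x, y):
    """Hypothetical continuous scene brightness in [0, 1] at real coordinates (x, y)."""
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

N = 4  # samples per axis
# Sample on an N x N grid, then quantize each sample to an integer code 0..255.
digital = [[round(a(i / N, j / N) * 255) for j in range(N)] for i in range(N)]
for row in digital:
    print(row)
```

The resulting grid of integers is what the computer stores and what all subsequent processing operates on.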
project report helper
Active In SP
**

Posts: 2,270
Joined: Sep 2010
#11
01-11-2010, 11:05 AM


.doc   project report.doc (Size: 464.5 KB / Downloads: 278)
image processing full report

INTRODUCTION:


Over the last two decades, we have witnessed an explosive growth in both the diversity of techniques and the range of applications of image processing. However, the area of color image processing is still not adequately covered, despite color imaging having become commonplace, with consumers choosing the convenience of color imaging over traditional grayscale imaging. With advances in image sensors, digital TV, image databases, and video and multimedia systems, and with the proliferation of color printers, color image displays, DVD devices, and especially digital cameras and image-enabled consumer electronics, color image processing appears to have become the main focus of the image-processing research community. Processing color images, or more generally multichannel images such as satellite images, color filter array images, microarray images, and color video sequences, is a nontrivial extension of classical grayscale processing. Recently, there have been many color image processing and analysis solutions, and many interesting results have been reported concerning filtering, enhancement, restoration, edge detection, analysis, compression, preservation, manipulation, and evaluation of color images. The surge of emerging applications, such as single-sensor imaging, color-based multimedia, digital rights management, art, and biomedical applications, indicates that the demand for color imaging solutions will grow considerably in the next decade [4].
seminar surveyer
Active In SP
**

Posts: 3,541
Joined: Sep 2010
#12
06-01-2011, 03:16 PM




.ppt   Image Processing.ppt (Size: 1.53 MB / Downloads: 179)

By
Alok K. Watve


Applications of image processing
Gamma ray imaging
X-ray imaging
Multimedia systems
Satellite imagery
Flaw detection and quality control
And many more…….

Fundamental Steps in digital image processing
Image acquisition
Image enhancement (gray or color images)
Wavelet and multi-resolution processing
Compression
Morphological processing
Segmentation
Representation & description
Object recognition
Image enhancement in spatial domain

Binary images
Only two colors
Gray images
A range of gray levels (not more than 256) from black to white
Color images
Contain several colors (as many as 2^24)
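The counts above follow from the bit depth: a pixel stored in b bits can take 2^b distinct values. A quick check:

```python
# Distinct values per pixel for common bit depths
for name, bits in [("binary", 1), ("grayscale", 8), ("true color (RGB)", 24)]:
    print(f"{name}: {2 ** bits} values")
# binary: 2 values
# grayscale: 256 values
# true color (RGB): 16777216 values
```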




seminar class
Active In SP
**

Posts: 5,361
Joined: Feb 2011
#13
07-03-2011, 03:43 PM

PRESENTED BY
M.VAMSI KRISHNA
S.BABAJAN


.doc   vamsi1.doc (Size: 253 KB / Downloads: 204)
ABSTRACT
In the era of multimedia and Internet, image processing is a key technology.
Image processing is any form of information processing for which the input is an image, such as photographs or frames of video; the output is not necessarily an image, but can be for instance a set of features of the image.
Image processing is of two types: analog image processing and digital image processing. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing: it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing. Moreover, the cost of analog image processing is fairly high compared to digital image processing.
An analog image can be converted to a digital image, which can then be processed more flexibly and affordably; the processes involved in this conversion, such as sampling, quantization, image acquisition, and image segmentation, are explained in this report.
Image processing has very good scope in the signal-processing aspects of imaging: imaging systems; image scanning, display, and printing; theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; digital half-toning; and color reproduction.
HISTORY
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, University of Maryland, and a few other places, with application to satellite imagery, wirephoto standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated, when cheaper computers and dedicated hardware became available. Images could then be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations.
With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.
INTRODUCTION
Digital image processing is the use of computer algorithms to perform image processing on digital images. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing — it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing.
We will restrict ourselves to two-dimensional (2D) image processing although most of the concepts and techniques that are to be described can be extended easily to three or more dimensions.
We begin with certain basic definitions. An image defined in the "real world" is considered to be a function of two real variables, for example, a(x,y) with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x,y). An image may be considered to contain sub-images sometimes referred to as regions-of-interest, ROIs, or simply regions. This concept reflects the fact that images frequently contain collections of objects each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
The amplitudes of a given image will almost always be either real numbers or integer numbers. The latter is usually a result of a quantization process that converts a continuous range (say, between 0 and 100%) to a discrete number of levels. In certain image-forming processes, however, the signal may involve photon counting which implies that the amplitude would be inherently quantized. In other image forming procedures, such as magnetic resonance imaging, the direct physical measurement yields a complex number in the form of a real magnitude and a real phase.
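The quantization just described, mapping a continuous amplitude range (say 0 to 100%) to a discrete number of levels, can be sketched as follows; the function name and the choice of 256 levels are illustrative:

```python
def quantize(value, levels=256, vmax=100.0):
    """Map a continuous amplitude in [0, vmax] to an integer level 0..levels-1."""
    level = int(value / vmax * (levels - 1) + 0.5)  # round to the nearest level
    return min(max(level, 0), levels - 1)           # clamp out-of-range inputs

print(quantize(0.0))    # 0
print(quantize(50.0))   # 128  (mid-range of 256 levels)
print(quantize(100.0))  # 255
```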
IMAGE
It is a 2D function f(x, y), where x and y are spatial coordinates and f (the amplitude of the function) is the intensity of the image at (x, y). Thus, an image is a two-dimensional function of the coordinates x and y.
DIGITAL IMAGE
If x, y, and the amplitude of f are all discrete quantities, then the image is called a digital image. A digital image is a collection of elements called pixels, where each pixel has a specific coordinate value and a particular gray level. Processing of this image using a digital computer is called digital image processing. Examples: fingerprint scanning, handwriting recognition systems, face recognition systems, and biometric scanning used for authentication in modern pen drives. The effect of digitization is shown in Figure 1.
The 2D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n] with {m=0,1,2,...,M-1} and {n=0,1,2,...,N-1} is a[m,n]. In fact, in most cases a(x, y) (which we might consider to be the physical signal that impinges on the face of a 2D sensor) is actually a function of many variables including depth (z), color (λ), and time (t). Unless otherwise stated, we will consider the case of 2D, monochromatic, static images in this report.
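The sampling just described amounts to storing the image as a matrix of gray levels. The values below are made up, and [row][column] indexing is just one common convention (texts differ on whether m indexes rows or columns):

```python
# N rows x M columns of integer gray levels (hypothetical values)
N, M = 2, 3
a = [[0, 64, 128],
     [192, 255, 32]]

assert len(a) == N and all(len(row) == M for row in a)
print(a[1][2])  # gray level of the pixel in row 1, column 2
```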




seminar class
Active In SP
**

Posts: 5,361
Joined: Feb 2011
#14
12-03-2011, 03:05 PM


.doc   IMAGE PROCESSING.doc (Size: 366.5 KB / Downloads: 158)
IMAGE PROCESSING
ABSTRACT

In this paper, the basics of capturing an image and the image processing used to modify and enhance it are discussed. There are many applications for image processing, such as surveillance, navigation, and robotics. Robotics is a very interesting field that promises future development, so it is chosen as an example to explain the various aspects involved in image processing.
The various techniques of image processing are explained briefly, and their advantages and disadvantages are listed. There are countless different routines that can be used for a variety of purposes. Most of these routines are created for specific operations and applications. However, certain fundamental techniques, such as convolution masks, can be applied to many classes of routines. We have concentrated on these techniques, which enable us to adapt, develop, and use other routines and techniques for other applications. The advances in technology have created tremendous opportunities for visual systems and image processing. There is no doubt that the trend will continue into the future.
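The convolution masks mentioned above slide a small window of weights over the image and replace each pixel with the weighted sum of its neighbourhood. The 3x3 sharpening mask below is a standard example; copying border pixels unchanged is a simplifying assumption of this sketch:

```python
def convolve3x3(image, mask):
    """Apply a 3x3 convolution mask to the interior pixels of a grayscale image.
    Border pixels are copied unchanged in this sketch."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r][c] = sum(mask[i][j] * image[r + i - 1][c + j - 1]
                            for i in range(3) for j in range(3))
    return out

# Standard 3x3 sharpening mask: boosts the centre, subtracts the 4-neighbours
sharpen = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]
img = [[1, 1, 1],
       [1, 2, 1],
       [1, 1, 1]]
print(convolve3x3(img, sharpen))  # centre becomes 5*2 - 4*1 = 6
```

Swapping the mask (e.g., all weights 1/9 for smoothing) changes the operation without changing the routine, which is why convolution is such a reusable building block.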
INTRODUCTION
Image Processing :

Image processing pertains to the alteration and analysis of pictorial information. A common case of image processing is the adjustment of the brightness and contrast controls on a television set; by doing this we enhance the image until its subjective appearance is most appealing to us. The biological system (eye, brain) receives, enhances, dissects, analyzes, and stores images at enormous rates of speed.
Basically there are two-methods for processing pictorial information. They are:
1. Optical processing
2. Electronic processing.
Optical processing uses an arrangement of optics or lenses to carry out the process. An important form of optical image processing is found in the photographic dark room.
Electronic image processing is further classified as:
1. Analog processing
2. Digital processing.
Analog processing:
A typical example of this kind is the control of the brightness and contrast of a television image. The television signal is a voltage level that varies in amplitude to represent brightness throughout the image; by electrically altering these signals, we correspondingly alter the final displayed image.
Digital image processing:
Processing of digital images by means of a digital computer is referred to as digital image processing. Digital images are composed of a finite number of elements, each of which has a particular location and value. Picture elements, image elements, and pixels are the terms used for these elements.
Digital image processing is concerned with the processing of an image. In simple words, an image is a representation of a real scene, either in black and white or in color, and either in print form or in digital form; technically, an image is a two-dimensional light intensity function. In other words, it is intensity data arranged in a two-dimensional form, and the required properties of an image can be extracted by processing it. An image is typically represented by stochastic models, for example an AR model; degradation is represented by an MA model.
Another form is the orthogonal series expansion. An image processing system is typically a non-causal system. Image processing is two-dimensional signal processing; due to the linearity property, we can operate on rows and columns separately. Image processing is increasingly being implemented through "vision systems" in robotics. Robots are designed, and meant, to be controlled by a computer or similar devices, while vision systems are the most sophisticated sensors used in robotics. They relate the function of a robot to its environment, as all other sensors do.
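The separability mentioned above (operating on rows and columns independently) means some 2D linear filters can be applied as two 1D passes. A sketch with a 3-tap moving average, where the function names, edge handling, and values are illustrative assumptions:

```python
def avg3_rows(img):
    """Three-tap moving average along each row (interior columns; edges copied)."""
    out = [row[:] for row in img]
    for r in range(len(img)):
        for c in range(1, len(img[0]) - 1):
            out[r][c] = (img[r][c - 1] + img[r][c] + img[r][c + 1]) / 3
    return out

def avg3_cols(img):
    """Three-tap moving average along each column (interior rows; edges copied)."""
    out = [row[:] for row in img]
    for c in range(len(img[0])):
        for r in range(1, len(img) - 1):
            out[r][c] = (img[r - 1][c] + img[r][c] + img[r + 1][c]) / 3
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
sep = avg3_cols(avg3_rows(img))
# The interior pixel equals the 3x3 mean of the original neighbourhood:
print(sep[1][1])  # 5.0 == sum(1..9) / 9
```

Two 1D passes cost 6 multiply-adds per pixel instead of 9 for the direct 3x3 filter, and the saving grows with mask size.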
“Vision Systems” may be used for a variety of applications, including manufacturing, navigation and surveillance.
Some of the applications of Image Processing are:
1. Robotics
2. Medical field
3. Graphics and animations
4. Satellite imaging
seminar class
Active In SP
**

Posts: 5,361
Joined: Feb 2011
#15
19-03-2011, 10:19 AM

presented by:
Ranjith & Waquas


.pptx   1112Image.pptx (Size: 529.35 KB / Downloads: 130)
Introduction to Image Processing
What is an Image?

An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows.
• There are two groups of images:
• Vector graphics (or line art)
• Bitmaps (pixel-based images)
• There are two groups of colors:
• RGB
• Fourier Transform: a Review
• Fourier Transform Basis Functions
Image Enhancements
• Image enhancement techniques:
Spatial domain methods
Frequency domain methods
• Spatial domain techniques operate directly on pixels.
• Frequency domain techniques are based on modifying the Fourier transform of an image.
Frequency Domain Filtering
• Edges and sharp transitions (e.g., noise) in an image contribute significantly to the high-frequency content of its Fourier transform.
• Low-frequency content in the Fourier transform is responsible for the general appearance of the image over smooth areas.
• Blurring (smoothing) is achieved by attenuating a range of high-frequency components of the Fourier transform.
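The frequency-domain blurring described above can be sketched in one dimension: take the DFT of a scanline, attenuate the high-frequency bins, and transform back. The signal and cutoff below are arbitrary choices for illustration:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a short illustrative signal)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# A 1D "scanline" with a sharp edge
signal = [0, 0, 0, 0, 10, 10, 10, 10]
X = dft(signal)

# Attenuate (here: zero) every bin farther than `cutoff` from DC,
# remembering that bins k and N-k hold the same frequency.
cutoff = 2
for k in range(len(X)):
    if min(k, len(X) - k) > cutoff:
        X[k] = 0

smoothed = idft(X)
print([round(v, 2) for v in smoothed])
# The sharp 0 -> 10 jump becomes a gradual transition (with some ringing).
```

The same idea extends to images with a 2D transform, and keeping (rather than removing) the high frequencies gives sharpening instead of blurring.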
Embedded Image Processing System on FPGA
Abstract
The design of an embedded image processing system (called DIPS) on an FPGA is presented. DIPS is based on the Xilinx MicroBlaze 32-bit soft processor core and is implemented on a Spartan-3.
Introduction
Today, embedded systems can be microcontroller-based, DSP-based, ASIC-based, or FPGA-based systems. Xilinx, an FPGA vendor, provides the MicroBlaze 32-bit soft processor core, which is licensed as part of the Xilinx Embedded Development Kit.
Overview of the Xilinx MicroBlaze
• The MicroBlaze soft processor is a 32-bit architecture.
• The backbone of the architecture is a single-issue, 3-stage pipeline with 32 general-purpose registers, an arithmetic logic unit (ALU), a shift unit, and two levels of interrupt.
• The MicroBlaze processor has two memory interfaces, the Local Memory Bus (LMB) and the Xilinx Cache Link (XCL), plus the Fast Simplex Link (FSL) co-processor interface.
• The Local Memory Bus provides low-latency access to local storage, such as interrupt and exception handlers.
• The Xilinx Cache Link is a high-performance point-to-point connection to an external memory controller.
• The Fast Simplex Link is a simple yet powerful point-to-point interface that connects user-developed co-processors to the MicroBlaze processor pipeline.
Image Processing Vs Computer Graphics
• There is generally a bit of confusion in recognising the difference between the fields of image processing and computer graphics.
• The two topics are almost opposites of each other: computer graphics is concerned with image synthesis, not with recognition or analysis as in image processing.
• Morphing, as used in advertisements, could be said to be the most commonly witnessed computer graphics technique.
• The input to image processing is always a real image formed via some physical phenomenon, such as scanning or filming.
Conclusions
• Imaging professionals, scientists, and engineers who use image processing as a tool can develop a deeper understanding and create custom solutions to imaging problems in their field.
• IT professionals wanting a self-study course, with easily adaptable code and completely worked-out examples, can become productive right away.
• Image processing can be done in many programming languages, such as C, C++, and Java.
• It is used in many fields, such as medicine and the web.
• The visual system of a single human being does more image processing than the entire world's supply of supercomputers.