Digital Image Processing-Seminar
Computer Science Clay
07-08-2009, 05:27 PM
Image processing is the processing of an image so as to reveal its inner details for further investigation. With the advent of digital computers, Digital Image Processing has started revolutionizing the world with its diverse applications. The field of Image Processing continues, as it has since the early 1970s, on a path of dynamic growth in terms of popular and scientific interest and the number of commercial applications.
14-05-2010, 06:41 AM
DIGITAL IMAGE PROCESSING.doc (Size: 230 KB / Downloads: 428)
DIGITAL IMAGE PROCESSING:
Image processing is the processing of an image so as to reveal its inner details for further investigation. With the advent of digital computers, Digital Image Processing has started revolutionizing the world with its diverse applications. The field of Image Processing continues, as it has since the early 1970s, on a path of dynamic growth in terms of popular and scientific interest and the number of commercial applications. The advances of the last 30 years have resulted in the routine application of image processing to problems in medicine, entertainment, law enforcement, and many other fields.
The discipline of Digital Image Processing covers a vast area of scientific and engineering knowledge. Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. It is built on a foundation of one- and two-dimensional signal processing theory and overlaps with such disciplines as Artificial Intelligence (scene understanding), information theory (image coding), statistical pattern recognition (image classification), communication theory (image coding and transmission), and microelectronics (image sensors, image processing hardware).
Image processing has revolutionized various fields. Examples include mapping internal organs in medicine using various scanning technologies (image reconstruction from projections), automatic fingerprint recognition (pattern recognition and image coding), and HDTV (video coding).
Steps in Image processing:
The main steps involved in any image processing application are as follows:
In order to process any image, the image must first be acquired. Images are generated by the combination of an illumination source and the reflection or absorption of energy from that source by the elements of the scene being imaged. The illumination may originate from a source of electromagnetic energy such as radar, infrared, or X-rays. Depending on the nature of the source, the illumination energy is either reflected from or transmitted through the object of interest. Special sensors are available for scanning the images.
Image compression addresses the problem of reducing the amount of data required to represent an image. The basis of compression lies in the removal of redundant data that is not needed for storage. Image compression also plays a major role in transmitting data over the Internet.
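As one illustration of redundancy removal (the text names no specific algorithm, so this is only a hedged sketch), run-length encoding collapses runs of identical pixel values into (value, count) pairs:

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values.

    Long runs of identical pixels -- the redundancy that
    compression exploits -- collapse to single (value, count) pairs.
    """
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Invert rle_encode: expand (value, count) pairs back to pixels."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
encoded = rle_encode(row)
print(encoded)                      # [(0, 3), (255, 2), (0, 4)]
assert rle_decode(encoded) == row   # lossless round trip
```

Nine pixel values become three pairs here; real standards such as JPEG use far more sophisticated transforms, but the principle of removing redundancy is the same.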
Enhancement, as the name indicates, brings out details of parts of the image that are obscured due to some distortion. The principal objective of image enhancement is to process the image so that the result is more suitable for the particular application than the original image.
Enhancement of the image using Filters:
Segmentation of the image subdivides an image into its constituent regions or objects. Image segmentation algorithms are generally based on one of two basic properties of intensity values: discontinuity and similarity. In the first category, the approach is to partition the image based on abrupt changes in intensity, such as the edges of the image. The second approach is to partition the image into regions that are similar according to a set of predefined criteria.
There are a wide variety of fields where Image Processing is applied. Some of them include:
analyzing geographical conditions
remote sensing.
Considering the importance of Image Processing in the field of Bio-Medicine, the proposed system OPHTHALMIC ANALYSIS AND DETECTION was developed and is explained in detail.
Objective Of the Proposed System:
The proposed system takes scanned photographic images as input, compares the image in several respects with a normal image, studies and displays the various homologous and analogous characteristics, and displays medical conclusions by studying the innermost parts of the eye.
The main aim of our project is to develop software which will take a biological image, in this case the human eye, completely analyze it, and detect certain common yet chronic diseases.
Store the standard defective eye in the database.
Scan the photographic image of the patient.
Compare the images using Digital Image Processing techniques.
Infer the disease and suggest remedial measures.
Diseases that are Diagnosed:
The proposed software system uses the general method of processing a two dimensional picture by a digital computer.
In the proposed system, the input image can be a transparency, a slide photograph, an X-ray, or a chart.
Uses the general Image Processing techniques.
Basic Technique involved:
Pseudo-coloring is used for the purpose of the comparison. As the name suggests, pseudo-coloring means false coloring: different parts of the eye are assigned colors to make the process of comparison easier.
In Turbo C the color values range from 0 to 15. Matrix addition is used during the comparison of the images. If the value of the addition exceeds 15, the final value is clamped to 15; this condition indicates a defect, because the assigned color for the value 15 differs from the color that would otherwise be obtained.
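The matrix-addition comparison described above might be sketched as follows. All names are illustrative assumptions, not the proposed system's actual code; only the 0-15 range follows the Turbo C palette mentioned in the text:

```python
MAX_COLOR = 15  # Turbo C palette indices run 0..15

def compare_pseudocolored(reference, patient):
    """Add two pseudo-colored images element-wise, clamping at 15.

    Any cell whose sum exceeds 15 saturates and is flagged as a
    possible defect, following the matrix-addition rule above.
    Returns (summed image, list of flagged (row, col) positions).
    """
    defects = []
    result = []
    for i, (ref_row, pat_row) in enumerate(zip(reference, patient)):
        out_row = []
        for j, (r, p) in enumerate(zip(ref_row, pat_row)):
            s = r + p
            if s > MAX_COLOR:
                s = MAX_COLOR          # clamp to the top palette index
                defects.append((i, j))  # saturation marks a defect
            out_row.append(s)
        result.append(out_row)
    return result, defects

ref = [[3, 7], [10, 2]]
pat = [[3, 7], [10, 2]]
summed, flagged = compare_pseudocolored(ref, pat)
print(summed)   # [[6, 14], [15, 4]]
print(flagged)  # [(1, 0)] -- the clamped cell is flagged
```

Only the cell 10 + 10 = 20 exceeds 15, so only that position is reported as defective.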
Glaucoma is caused by a number of different eye diseases, which in most cases produce increased pressure within the eye. A backup of fluid in the eye causes this elevated pressure. Over time, it causes damage to the optic nerve. Through early detection, diagnosis and treatment, the software can help to preserve the vision.
Sample image of a person's eye affected by Glaucoma
• Generally no symptoms
• Blind spots in the central or peripheral (side) vision
• Blurred vision
• Severe eye pain
Diabetic retinopathy is a complication of diabetes and a leading cause of blindness.
It occurs when diabetes damages the tiny blood vessels inside the retina, the light-sensitive tissue at the back of the eye
Sample image of a person's eye affected by Diabetic Retinopathy
A few specks of blood, or spots, "floating" in the vision are seen.
Hemorrhages tend to happen more than once, often during sleep.
A corneal ulcer is an open sore in the cornea, the clear and round front part of the eye through which light passes. Tissue loss because of inflammation produces an ulcer. The ulcer can either be located in the center of the cornea and greatly affect the vision or be placed in the periphery and not affect it so much.
Image of a person's eye affected by Cornea Ulcer
Eyes may become sensitive to bright light.
Iritis, a form of anterior uveitis, is an inflammatory disorder of the colored part of the eye (the iris).
Because iritis is an inflammation inside the eye, the condition is potentially sight-threatening.
Image of a patient's eye with IRITIS
Various processes involved:
The process of image processing i.e. comparison of the images can be done using the below mentioned four classes of algorithms. Each class of algorithms is explained considering a disease.
These processes alter the arrangement of the pixels in an image based on a geometric transformation. This algorithm compares the database image containing the standard defective eye (Glaucoma is taken here) with the image of the patient's eye. It takes the entire image, compares it using Digital Image Processing techniques, and determines whether the patient's eye is affected by the disease.
Detection of Glaucoma through Image Processing (Quadrant Process):
Image of standard defective eye. Image of the patient's eye.
These processes alter the pixel value in the image based upon the original value of the pixel, possibly its location within the image, and the values that surround it. In this algorithm the X and Y coordinates are given as input, so the processing is done in that particular area, and the results indicate whether that part of the eye is affected by the disease. This is to check the exact area where the disease is concentrated inside the human eye.
Detection of Diabetic Retinopathy through Image Processing (Area Process):
Image of standard defective eye. Image of a patient's eye with Diabetic Retinopathy.
The area inside the box marked in the image is checked for the disease, and the result displays whether the patient's eye is affected.
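The area process described above might look like the following sketch. The function and parameter names are assumptions for illustration, not the proposed system's actual code:

```python
def area_process(reference, patient, x0, y0, x1, y1, tol=0):
    """Compare only a rectangular region of two images.

    Checks pixels with y in [y0, y1) and x in [x0, x1); returns True
    if any pixel in that box differs by more than `tol`, i.e. the
    disease marker is present in that area of the eye.
    """
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(reference[y][x] - patient[y][x]) > tol:
                return True
    return False

healthy = [[0] * 4 for _ in range(4)]
patient = [[0] * 4 for _ in range(4)]
patient[2][1] = 9  # a lesion at column 1, row 2

print(area_process(healthy, patient, 0, 1, 2, 3))  # True: box covers the lesion
print(area_process(healthy, patient, 2, 0, 4, 1))  # False: box misses it
```

Restricting the comparison to the box is what lets the system report not just that the disease is present, but where in the eye it is concentrated.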
These processes alter the pixel values within an image based on the pixel values in one or more additional images. In the frame process, both the database image of the standard defective eye and the image of the patient's eye affected by the disease are separated into different frames.
Detection of Glaucoma using Frame Process
Standard defective image. Image taken from the patient's eye.
The images are compared within a particular frame, and the process detects whether the disease is present in that frame, which indicates whether it is present in that part of the eye. Whenever the program is executed, it takes a frame and compares within that frame.
These processes alter the pixel value in the image based only upon the original value of the pixel and possibly its location within the image. In the point process a single point (pixel) is checked. During execution, the point at which the database image and the sample image are to be compared is entered. The point process then checks whether the disease is present at that point.
Processing the image to detect Cornea Ulcer (Point Process)
Standard database image. Image of a person's eye affected by Cornea Ulcer.
On executing the program, it asks for the pixel to be compared. On entering the pixel position, it compares the two images at that position and checks whether the disease is present at that point. In this case, the point marked within the image is checked and the program detects whether the disease is present there.
Advantages Of the Proposed System over the Existing System:
Process of Comparison is simpler
Hardware requirements are minimized.
Point precision can be achieved showing the defective portion.
A layman can use the software.
Time consumption is minimal.
Internal image structure of the eye can be diagnosed.
Explores so far unexplored areas of the human eye.
Increase the number of diseases that can be detected.
Image Compression techniques to reduce storage.
Future of Image Processing:
Image signal processing is a fast-growing field, and little can be predicted of what will be possible 50 years from now. However, key areas toward which research is being directed include improving the traditional tools for compression, transmission, modulation, coding and encryption. Ongoing efforts should be made in developing open standards to ensure interoperability. With our appetite for media-rich and bandwidth-hungry resources increasing, and the majority of users still relying on the Plain Old Telephone System for the Internet, there is an increasing bottleneck in information delivery.
Current research projects:
The measurement of the degree of opacification in a posterior lens capsule following cataract and intraocular lens implantation surgery.
The development of a set of tools, algorithms and technologies for conflict detection and resolution in a wide range of real-time applications within railway and metro networks.
The development and implementation of a screening system based on digital fundus image for the early detection of diabetic retinopathy.
The application of neural networks to the FADS problem in nuclear medicine.
Thus the statement "Image Processing has revolutionized the world we live in" fits exactly because of the diverse applications of image processing in various fields.
23-03-2011, 09:45 AM
AIP-point-mm2.ppt (Size: 4.86 MB / Downloads: 235)
Fundamental Steps in Computer Vision
• Point Processing
What is point processing?
• Grey level mapping
• Segmentation using thresholding
• SW to do Image Processing and Analysis
• Free: rsb.info.nih.gov/ij/
• Stable (Java)
• Extremely easy to learn and use
• Comes with a C-like programming language, but we’ll only use the menus
What is point processing?
• Only one pixel in the input has an effect on the output
• For example:
– Changing the brightness, thresholding, histogram stretching
• Grey level enhancement
– Process one pixel at a time independent of all other pixels
– For example used to correct Brightness and Contrast (remote control)
• The brightness is the intensity
• Change brightness:
– To each pixel is added the value b
– f(x,y) is the input image
– g(x,y) is the (enhanced) output image
• If b>0 => brighter image
• If b<0 => less bright image
• The contrast describes the level of details we can see
• Change contrast:
• Each pixel is multiplied by a
– f(x,y) is the input image
– g(x,y) is the (enhanced) output image
• If a>1 => more contrast
• If a<1 => less contrast
Combining brightness and contrast
• A straight line
• Greylevel mapping
• X-Axis: Input Value
• Y-Axis: Output Value
• This plot: Identity
– Output equals Input: a=1 and b=0
• Apply to each pixel!
• To save time, the greylevel mapping can be written as a look-up table
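The brightness and contrast bullets above combine into the straight-line mapping g(x, y) = a · f(x, y) + b. A minimal sketch in Python (the function name and the clipping to the 8-bit range 0..255 are illustrative assumptions, not from the slides):

```python
def greylevel_map(image, a=1.0, b=0):
    """Apply the straight-line mapping g(x, y) = a * f(x, y) + b.

    a > 1 raises contrast, a < 1 lowers it; b > 0 brightens,
    b < 0 darkens. a = 1, b = 0 is the identity mapping.
    Results are clipped to the 8-bit range 0..255.
    """
    def clip(v):
        return max(0, min(255, int(round(v))))
    return [[clip(a * pixel + b) for pixel in row] for row in image]

f = [[10, 100], [200, 250]]
print(greylevel_map(f, b=50))    # brightness: [[60, 150], [250, 255]]
print(greylevel_map(f, a=2.0))   # contrast:   [[20, 200], [255, 255]]
print(greylevel_map(f))          # identity:   [[10, 100], [200, 250]]
```

Note how values that would leave the 0..255 range saturate at the ends, which is why a look-up table with 256 entries can replace the per-pixel arithmetic.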
• How to set the greylevel mapping
• Humans cannot tell the difference between greylevel values too close to each other
• So: spread out the greylevel values
• This is called histogram stretching
• Something really different…
• Until now: Image processing (manipulation)
• Image analysis: segmentation
• The task:
– Information versus noise
– Foreground (object) versus background
• Use greylevel mapping and the histogram
• When two peaks (modes) of a histogram correspond to object and noise (Show: AuPbSn40, bridge)
• Find a THRESHOLD value, T, that separates the two peaks. This process is called THRESHOLDING
– If f(x,y) > T then g(x,y) = 1, else g(x,y) = 0
– ( or reverse )
• Result: a binary image where
object pixels = 1 and noise = 0
• (Show: AuPbSn40, bridge, 2Dgel, blobs)
• Often, obtaining a bi-modal histogram is the ”sole” purpose of the image acquisition:
• How to define the Threshold?
– If we have a good setup => the Threshold is static!
– Find it during training
• But the histogram is NEVER static!!
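The thresholding rule above (if f(x, y) > T then g(x, y) = 1, else 0) can be sketched as follows; the function name and sample values are illustrative:

```python
def threshold(image, T):
    """Binary thresholding: g(x, y) = 1 if f(x, y) > T else 0.

    With a threshold T chosen between the two peaks of a bimodal
    histogram, this separates object pixels (1) from background
    and noise (0), producing a binary image.
    """
    return [[1 if pixel > T else 0 for pixel in row] for row in image]

f = [[12, 200], [180, 30]]
print(threshold(f, T=100))  # [[0, 1], [1, 0]]
```

Swapping the comparison gives the "reverse" variant mentioned in the slides, where the object maps to 0 instead of 1.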
What to remember
• Point processing
– Pixel-wise operations
– Greylevel mapping
• Setting brightness and contrast
– Histogram processing
– Segmentation: Thresholding. Bimodal histogram
29-03-2011, 02:18 PM
33 IMAGEPROCESSING.doc (Size: 369.5 KB / Downloads: 113)
In this paper, the basics of capturing an image and of image processing to modify and enhance the image are discussed. There are many applications for Image Processing, such as surveillance, navigation, and robotics. Robotics is a very interesting field that promises future development, so it is chosen as an example to explain the various aspects involved in Image Processing. The various techniques of Image Processing are explained briefly in the paper. There are countless routines created for specific operations and applications. We have concentrated on other techniques, like segmentation and morphing, which enable us to adapt, develop, and use other routines and techniques for other applications. We have also discussed how image processing is used in the field of medicine, taking examples such as bone and chest scanning. Many applications of image processing are also mentioned in the paper.
Image Processing :
Image processing pertains to the alteration and analysis of pictorial information. A common case of image processing is the adjustment of the brightness and contrast controls on a television set; by doing this we enhance the image until its subjective appearance is most appealing to us. The biological system (eye, brain) receives, enhances, dissects, analyzes and stores images at enormous rates of speed.
Basically there are two-methods for processing pictorial information. They are:
Optical processing uses an arrangement of optics or lenses to carry out the process. An important form of optical image processing is found in the photographic dark room.
Electronic image processing is further classified as:
1. Analog processing
2. Digital processing.
An example of this kind is the control of brightness and contrast of a television image. The television signal is a voltage level that varies in amplitude to represent brightness throughout the image; by electrically altering these signals, we correspondingly alter the appearance of the final displayed image.
Digital image processing:
Processing of digital images by means of a digital computer is referred to as digital image processing. Digital images are composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, or pixels.
Digital Image Processing is concerned with the processing of an image. In simple words, an image is a representation of a real scene, either in black and white or in color, and either in print form or in digital form; technically, an image is a two-dimensional light intensity function, i.e. intensity values arranged in a two-dimensional form. The required properties of an image can be extracted by processing it. An image is typically described by stochastic models; it can be represented by an AR (autoregressive) model, while degradation is represented by an MA (moving average) model.
Another form is orthogonal series expansion. An image processing system is typically a non-causal system. Image processing is two-dimensional signal processing; due to the linearity property, we can operate on rows and columns separately. Image processing is vastly being implemented through "Vision Systems" in robotics. Robots are designed, and meant, to be controlled by a computer or similar devices, while "Vision Systems" are among the most sophisticated sensors used in robotics. They relate the function of a robot to its environment as all other sensors do.
“Vision Systems” may be used for a variety of applications, including manufacturing, navigation and surveillance.
Some of the applications of Image Processing are:
1. Robotics
2. Medical Field
3. Graphics and Animations
4. Satellite Imaging
Image processing is a subclass of signal processing concerned specifically with pictures. Its aim is to improve image quality for human perception and/or computer interpretation.
Image Enhancement: To bring out detail that is obscured, or simply to highlight certain features of interest in an image.
1. Image Restoration: Improving the appearance of an image; techniques tend to be based on mathematical or probabilistic models of image degradation.
2. Color Image Processing: Gaining in importance because of the significant increase in the use of digital images over the Internet.
3. Wavelets: The foundation for representing images at various degrees of resolution. Used in image data compression and pyramidal representation (images are subdivided successively into smaller regions).
4. Compression: Reducing the storage required to save an image or the bandwidth required to transmit it, e.g. the JPEG (Joint Photographic Experts Group) image compression standard.
5. Morphological Processing: Tools for extracting image components that are useful in the representation and description of shape.
6. Image Segmentation: The computer tries to separate objects from the image background. It is one of the most difficult tasks in DIP. A rugged segmentation procedure brings the process a long way toward successful solution of an imaging problem. The output of the segmentation stage is raw pixel data, constituting either the boundary of a region or all the points in the region itself.
The following is the overall view and analysis of Image Processing.
IMAGE PROCESSING TECHNIQUES:
Image Processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. Usually, during image processing information is not extracted from the image. The intention is to remove faults, trivial information, or information that may be important, but not useful, and to improve the image.
Image processing is divided into many sub processes, including Histogram Analysis, Thresholding, Masking, Edge Detection, Segmentation, and others.
STAGES IN IMAGE PROCESSING:
1. IMAGE ACQUISITION:
An image is captured by a sensor (such as a monochrome or color TV camera) and digitized. If the output of the camera or sensor is not already in digital form, an analog-to-digital converter digitizes it.
2.RECOGNITION AND INTERPRETATION:
Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation is assigning meaning to an ensemble of recognized objects.
3. SEGMENTATION:
Segmentation is the generic name for a number of different techniques that divide the image into segments of its constituents. The purpose of segmentation is to separate the information contained in the image into smaller entities that can be used for other purposes.
4.REPRESENTATION AND DESCRIPTION:
Representation and Description transforms raw data into a form suitable for the Recognition processing.
5. KNOWLEDGE BASE:
A knowledge base details the regions of an image where the information of interest is known to be located. It helps to limit the search.
Thresholding is the process of dividing an image into different portions by picking a certain grayness level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to one of the portions depending on whether its grayness level is below or above the threshold value. Thresholding can be performed either at a single level or at multiple levels, in which case the image is processed by dividing it into "layers", each with a selected threshold.
Various techniques are available to choose an appropriate threshold ranging from simple routines for binary images to sophisticated techniques for complicated images.
Sometimes we need to decide whether neighboring pixels are somehow “connected” or related to each other. Connectivity establishes whether they have the same property, such as being of the same region, coming from the same object, having a similar texture, etc. To establish the connectivity of neighboring pixels, we first have to decide upon a connectivity path.
Like other signal processing mediums, Vision Systems contain noise. Some noise is systematic and comes from dirty lenses, faulty electronic components, bad memory chips and low resolution. Other noise is random and is caused by environmental effects or bad lighting. The net effect is a corrupted image that needs to be preprocessed to reduce or eliminate the noise. In addition, sometimes images are not of good quality, due to both hardware and software inadequacies; thus, they have to be enhanced and improved before other analysis can be performed on them.
A mask may be used for many different purposes, including filtering operations and noise reduction. Noise and edges produce higher frequencies in the spectrum of a signal. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much, thereby reducing the noise.
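The low-pass masking described above can be illustrated with a 3x3 averaging mask. This is a hedged sketch, not the paper's actual routine; the border handling and function name are assumptions:

```python
def mean_filter(image):
    """Apply a 3x3 averaging mask (a simple low-pass filter).

    Each interior pixel is replaced by the mean of its 3x3
    neighbourhood, attenuating high-frequency noise. Border
    pixels are left unchanged for brevity.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(image[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

noisy = [[10, 10, 10],
         [10, 190, 10],   # a single noise spike in the centre
         [10, 10, 10]]
print(mean_filter(noisy))  # [[10, 10, 10], [10, 30, 10], [10, 10, 10]]
```

The isolated spike of 190 is pulled down to (8 × 10 + 190) // 9 = 30, showing how the averaging mask suppresses high-frequency noise at the cost of some blurring.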
Edge Detection is a general name for a class of routines and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross sections of planes, intersections of planes, textures, lines, and colors, as well as differences in shading and textures. Some techniques are mathematically oriented, some are heuristic, and some are descriptive. All generally operate on the differences between the gray levels of pixels or groups of pixels through masks or thresholds. The final result is a line drawing or similar representation that requires much less memory to store, is much simpler to process, and saves computation and storage costs. Edge detection is also necessary in subsequent processes, such as segmentation and object recognition. Without edge detection, it may be impossible to find overlapping parts, to calculate features such as diameter and area, or to determine parts by region growing.
IMAGE DATA COMPRESSION:
Electronic images contain large amounts of information and thus require data transmission lines with large bandwidth capacity. The requirements for the temporal and spatial resolution of an image, the number of images per second, and the number of gray levels are determined by the required quality of the images. Recent data transmission and storage techniques have significantly improved image transmission capabilities, including transmission over the Internet.
REAL-TIME IMAGE PROCESSING:
In many of the techniques considered so far, the image is digitized and stored before processing. In other situations, although the image is not stored, the processing routines require long computational times before they finish. This means that, in general, there is a long lapse between the time an image is taken and the time a result is obtained. This may be acceptable in situations in which the decisions do not affect the process. However, in other situations there is a need for real-time processing, such that the results are available in real time or in a short enough time to be considered real time. Two different approaches are considered for real-time processing. One is to design dedicated hardware such that the processing is fast enough to occur in real time. The other is to try to increase the efficiency of both the software and the hardware and thereby reduce processing and computational requirements.
Here we want to present some of the applications of Image Processing in some fields where it is applied like Robotics, Medical field and common uses…
Image Processing is vastly being implemented in Vision Systems in Robotics. Robots capture the real time images using cameras and process them to fulfill the desired action.
A simple application of Vision Systems in robotics is a robot hand-eye coordination system. Consider that the robot's task is to move an object from one point to another. The robot is fitted with cameras to view the object that is to be moved. The hand of the robot and the object to be grasped are observed by the cameras fixed to the robot, and this real-time image is processed by image processing techniques to obtain the actual distance between the hand and the object. The base wheel of the robot's hand is then rotated through an angle proportional to this distance. A point on the target is obtained using the Edge Detection technique. The operation to be performed is controlled by a micro-controller connected to the ports of the fingers of the robot's hand. Using software programs, the operations to be performed are assigned keys on the keyboard; by pressing the relevant key, the hand moves appropriately.
Here the usage of sensors/cameras and Edge Detection technique are related to Image Processing and Vision Systems. By this technique the complexity of using manual sensors is minimized to a great extent and thereby sophistication is increased. Hence image processing is used here in the study of robotics.
26-04-2011, 09:42 AM
IMAGE PROCESSING SAC.doc (Size: 2.55 MB / Downloads: 127)
Our project topic is "IMAGE PROCESSING TECHNIQUES". It is a desktop-based application. This project aims at creating various effects for processing an image of any format, such as .jpg, .gif, etc. Our objective is to give a clear outlook on the various operations or effects that can be applied to an image to change its original look. We selected this topic as our project, drawing motivation from various existing picture-management software on Windows. We use Java NetBeans as supporting software for this project. The pixel-grabber facility in Java helps to break an image down to the pixel level.
Image Processing is the art and science of manipulating digital images. It stands with one foot firmly in mathematics and the other in aesthetics, and is a critical component of graphical computer systems. It is a genuinely useful standalone application of Java 2D. The 2D API introduces a straightforward image processing model to help developers manipulate image pixels. The image processing parts of Java are buried within the java.awt.image package.
System study is the first phase of software development, when the preliminary investigation is made. The importance of the system study phase lies in establishing the requirements for the system to be acquired, developed and installed. The important outcome of the preliminary investigation is produced in the study phase. System study is one of the important steps in the system development life cycle. It involves studying the ways by which we can process an image. A number of image editors are available to us, but they have a high cost. By developing our own system we can do image processing free of cost.
The life cycle of our system includes the following steps:
Recognition of need or preliminary study/survey
Development and testing
Post implementation and Maintenance
Recognition of needs and preliminary investigation is the first system activity we performed. After that, we found out how each effect takes place in an image. We identified the different functions, and information was collected. It is also essential that the analyst familiarize himself with the objectives, activities and functions of the organization in which the system is to be implemented.
2.2 FEASIBILITY STUDY
Many feasibility studies are disillusioning for both users and analysts. First, the study often presupposes that when the feasibility document is being prepared, the analyst is in a position to evaluate solutions; second, most studies tend to overlook the confusion inherent in system development. The three key considerations involved in feasibility analysis are:
• Economic feasibility
• Technical feasibility
• Behavioral feasibility
2.3 ECONOMIC FEASIBILITY
Economic feasibility is the most frequently used method for evaluating the effectiveness of a candidate system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with its costs. If benefits outweigh costs, the decision is made to design and implement the system; otherwise, further justification or alteration of the proposed system will have to be made if it is to have a chance of being approved. As we are developing a completely new system, the cost is on the higher side. The implementation costs involve the installation of new hardware and software as well as the cost of hosting the website on the Internet. Maintenance of the system is also costly. Training is also required for operating personnel who have never operated a computerized system.
In this case, benefits outweigh costs: computerization reduces the need for manual labour, which saves money as well as many hours of work. Saving time is a further benefit of the new system. The conclusion that the benefits of the system outweighed its costs completed the economic feasibility study.
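The comparison described above can be sketched as a simple calculation. Note that all figures below are hypothetical, invented only to illustrate the cost/benefit procedure; none are taken from the actual study.

```python
# Illustrative cost/benefit comparison (all figures are hypothetical).
def net_benefit(benefits, costs):
    """Return total benefits minus total costs; positive means approve."""
    return sum(benefits) - sum(costs)

# Hypothetical annual savings vs. one-time and recurring costs.
savings = [40000, 15000]       # labour savings, time savings
costs = [20000, 8000, 5000]    # hardware/software, hosting, maintenance

result = net_benefit(savings, costs)
print("Net benefit:", result)  # positive -> benefits outweigh costs
```

If the result is positive the candidate system is justified; otherwise further alteration is needed, exactly as the study describes.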
2.4 TECHNICAL FEASIBILITY
Technical feasibility centers on the existing system and the extent to which it can support the proposed addition. Many technologies already exist that can apply effects to an image, but our proposed system brings almost all of these operations together in one unit. We can choose any effect quickly and easily whenever required; otherwise we would have to select an effect, apply it to the image and, if not satisfied with the result, search for another one. In the proposed system there is no need to search for effects: all the effects are put together, so the user can select an image, apply an effect, change to another one, and so on, easily. The main feature of the proposed system is that it is more user-friendly.
2.5 BEHAVIORAL FEASIBILITY
It is also known as operational feasibility. People are inherently resistant to change, and computers have been known to facilitate change; nowadays, however, most people support computerized systems. An estimate should be made of how strong a reaction the user staff is likely to have toward the development of a new system. It is therefore understandable that the introduction of the new system required a special effort to educate and train the staff in operating it, and to raise awareness among the customers. The staff were not against the system, and the users accepted the concept.
2.6 INTRODUCTORY INVESTIGATION
Introductory investigation is done prior to the system study phase. It is intended to give an insight into the requirements of the system based on the report obtained from the feasibility study. After the feasibility study, we came across some factors which made the introduction of a new system inevitable. In a globalized world, a good information system has become a need more than a status symbol for any organization, especially a public organization like ours.
2.7 SYSTEM STUDY
System study involves studying the way the organization currently retrieves and processes data to produce information, with the goal of determining how to make it better. For this, we developed an alternative system and evaluated it in terms of cost, benefits and feasibility. We made a thorough study of all the areas we would have to improve while developing the proposed system.
2.7.1 PROPOSED SYSTEM
The proposed system, entitled “IMAGE PROCESSING TECHNIQUES”, is designed to support almost all the effects/operations that can be applied to an image. The main feature of this system is that it is very user-friendly: the user can get a look at all the effects easily, apply any of them to a single image, which is displayed below the effect names listed in the menu page, and cancel unnecessary effects easily by using the cancel button below the image.
Advantages of proposed system
• Simple and more user-friendly
• More interactive
• It avoids time delay
• High data security
smart paper boy|
Active In SP
Joined: Jun 2011
22-06-2011, 12:13 PM
IMAGE PROCESSING.doc (Size: 386.5 KB / Downloads: 64)
While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. In this paper, we'll learn how computers are turning your face into computer code so it can be compared to thousands, if not millions, of other faces. We'll also look at how facial recognition software is being used in elections, criminal investigations and to secure your personal computer.
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Facial recognition methods may vary, but they generally involve a series of steps that serve to capture, analyze and compare your face to a database of stored images.
A software company called Visionics developed facial recognition software called FaceIt. The heart of this facial recognition system is the Local Feature Analysis (LFA) algorithm, the mathematical technique the system uses to encode faces. The system maps the face and creates a faceprint, a unique numerical code for that face. Once the system has stored a faceprint, it can compare it to the thousands or millions of faceprints stored in a database. Potential applications include ATM and check-cashing security, law enforcement and security surveillance, and checking voter databases for duplicates. This biometric technology could also be used to secure your computer files, by mounting a web cam on your computer to control access to it. By combining this technology with normal password security, you get double security for your valuable data.
People have an amazing ability to recognize and remember thousands of faces.
Biometrics is considered a natural means of identification, since the ability to distinguish among individual appearances is innate in humans. Facial-scan systems range from software-only solutions that process images captured through existing closed-circuit television cameras to complete acquisition and processing systems. With facial recognition technology, a digital video camera image is used to analyze facial characteristics such as the distance between the eyes, mouth or nose. These measurements are stored in a database and compared with those of a subject standing before the camera.
Facial-scan technology is based on the standard biometric sequence: image acquisition, image processing, distinctive-characteristic location, template creation and matching. An optimal image is captured through a high-resolution camera, with moderate lighting and the user directly facing the camera. The enrollment images define the facial characteristics to be used in all future verifications, so a high-quality enrollment is essential. Challenges in the image acquisition process include distance from the user, angled acquisition and lighting. Distance from the camera reduces facial size and thus image resolution.
Your face is an important part of who you are and how people identify you. Imagine how hard it would be to recognize an individual if all faces looked the same. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic.
Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like.
Facial recognition software can be used to find criminals in a crowd, turning a mass of people into a big line up.
Facial recognition software is based on the ability to first recognize a face, which is a technological feat in itself, and then measure the various features of each face. If you look in the mirror, you can see that your face has certain distinguishable landmarks. These are the peaks and valleys that make up the different facial features. Visionics defines these landmarks as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:
1. Distance between the eyes
2. Width of the nose
3. Depth of the eye sockets
4. Cheekbones
5. Jaw line
6. Chin
These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process. In the next section, we'll look at how the system goes about detecting, capturing and storing faces.
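The idea of encoding landmark measurements as a numerical code and comparing codes can be sketched as follows. FaceIt's actual Local Feature Analysis encoding is proprietary, so everything here is a toy stand-in: the function names, the landmark names and the Euclidean-distance encoding are all invented for illustration.

```python
import math

# A toy "faceprint": pairwise distances between a few landmark points.
# Real systems use many more nodal points and a far richer encoding.
def faceprint(landmarks):
    """Encode landmarks (name -> (x, y)) as a tuple of pairwise distances."""
    names = sorted(landmarks)
    dists = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = landmarks[names[i]], landmarks[names[j]]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return tuple(dists)

def best_match(probe, database):
    """Return the database key whose faceprint is closest to the probe."""
    return min(database, key=lambda k: sum((a - b) ** 2
               for a, b in zip(probe, database[k])))
```

Given a database of stored faceprints, `best_match` plays the role of the comparison step: the probe face is matched against every stored code and the nearest one is reported.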
Besides facial recognition, biometric authentication methods also include fingerprint scan, retina scan and voice identification.
Facial recognition methods may vary, but they generally involve a series of steps that serve to capture, analyze and compare your face to a database of stored images. Here is the basic process that is used by the FaceIt system to capture and compare images:
smart paper boy|
Active In SP
Joined: Jun 2011
12-08-2011, 03:15 PM
gp 8.ppt (Size: 358.91 KB / Downloads: 60)
Introduction to Digital Image Processing
Types of image: monochrome image, gray scale image, color (24-bit) image, half-toned image.
Color image processing
Advantages and disadvantages of digital image processing.
What is image??
A two-dimensional representation of the three-dimensional world is called an image.
The missing dimension in an image is basically depth.
Most cameras used for imaging are sensitive to the visible range of the electromagnetic spectrum (380-760 nm).
There are, however, cameras capable of detecting infrared, ultraviolet light, X-rays and radio waves too.
What is image processing??
Image processing is an integral part of being human: we continually process images subconsciously, every now and then.
The human eye-brain mechanism represents the ultimate imaging system.
BASIC ELEMENTS OF IMAGE PROCESSING SYSTEM
Digital image processing is the modification of images on a computer. The components involved in this process are:
Image acquisition is the first step in any image processing system.
Its general aim is to transform an optical image (real-world data) into an array of numerical data which can later be manipulated on a computer.
Image acquisition is achieved by a suitable camera, depending on the image requirement.
All video signals are essentially in analog form; that is, electrical signals convey luminance and color as continuously variable voltages.
The cameras are interfaced to a computer, where the processing algorithms operate on data delivered by a frame grabber card.
Usually a frame grabber card is a PCB fitted to the host computer, with its analog input port matching the impedance of the incoming video signal.
Frame grabber cards usually have an A/D converter with a resolution of 8 to 12 bits.
The frame grabber has a large block of memory, the frame buffer, in which the image is stored.
Eight-bit image data are written under computer control by DMA transfer; the contents are read out at a video rate of 30 frames per second, passed through a D/A converter and displayed on a monitor.
Table representing amount of space required to store an image
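The storage figures in such a table follow directly from width x height x bits per pixel. The sketch below reconstructs the calculation for a hypothetical 512 x 512 image at the bit depths discussed later in these slides; the original table's actual dimensions are not reproduced here.

```python
# Storage required for an uncompressed W x H image at various bit depths.
def image_bytes(width, height, bits_per_pixel):
    """Raw storage in bytes for an uncompressed image."""
    return width * height * bits_per_pixel // 8

for bpp, kind in [(1, "monochrome"), (8, "gray scale"), (24, "24-bit color")]:
    size = image_bytes(512, 512, bpp)
    print(f"512 x 512 {kind}: {size} bytes")
```

For example, a 512 x 512 gray-scale image at 8 bits per pixel needs 262,144 bytes (256 KB) before any compression.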
Systems ranging from microcomputers to general-purpose large computers are used in image processing.
Processing of a digital image involves procedures that are usually expressed in algorithmic form, which is why most image processing is implemented in software.
A display device produces and shows a visual form of the numerical values stored in the computer as an image array.
Principal display devices: printers, TV monitors and CRTs.
Monochrome and color TV monitors are the principal display devices used in modern image processing systems.
These raster devices convert image data into a video frame.
Stages of transmission
The image sequence from the camera is coded into as concise a representation as possible for transmission over the channel.
NTSC, PAL and SECAM are the three major coding systems used in various parts of the world.
The USA uses NTSC, while India uses PAL.
TYPES OF IMAGES
As images are two-dimensional functions, they can be classified as follows:
Monochrome images
Gray scale images
Color (24-bit) images
Half-toned images
MONOCHROME IMAGES
In these, each pixel is stored as a single bit.
Here 0 represents black while 1 represents white.
It is a black-and-white image in the strictest sense.
These images are also called bit-mapped images.
In such images we have only black and white pixels and no other shades of grey.
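A bit-mapped image is commonly obtained by thresholding a gray-scale image: every pixel becomes either 0 (black) or 1 (white). This is a minimal sketch; the threshold value 128 is an assumption, not something specified in the slides.

```python
# Threshold a gray-scale image (values 0-255) down to a single bit per
# pixel: 0 for black, 1 for white, with no other shades of grey.
def to_monochrome(image, threshold=128):
    """image is a list of rows of 0-255 values; returns rows of 0/1."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]
```

For example, `to_monochrome([[0, 200], [127, 128]])` maps everything below 128 to black and the rest to white.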
GREY SCALE IMAGES
Here each pixel is usually stored as a byte (8 bits).
Due to this, each pixel can have a value ranging from 0 (black) to 255 (white).
GRAY SCALE IMAGE
COLOR IMAGES(24 bits)
They are based on the fact that a variety of colors can be generated by mixing the three primary colors, viz. red, green and blue, in proper proportions.
In these, each pixel is composed of an RGB value, and each component requires 8 bits for its representation.
Hence each pixel is represented by 24 bits.
A 24-bit color image supports 16,777,216 different combinations of colors.
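The 24-bit layout above can be made concrete by packing the three 8-bit channels into one integer. This is a minimal sketch of one common layout (red in the high byte); actual file formats may order the channels differently.

```python
# Pack an RGB triple into one 24-bit value and back. Each channel takes
# 8 bits, so there are 256**3 = 16,777,216 possible colors.
def pack_rgb(r, g, b):
    """Combine three 0-255 channel values into a single 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Split a 24-bit integer back into its (r, g, b) channels."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF
```

For example, `pack_rgb(255, 0, 0)` gives `0xFF0000`, pure red.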
COLOR (24-BIT) IMAGE
Half toned images
Half-toning of images emerged for the following reason:
Despite the better picture quality of gray-scale images, they suffer from the drawback that they are not easily compatible with various inkjet, laser and dot-matrix printers.
This is because these are all bi-level devices, i.e. they can show only two colors: black on a white background.
That is far from ideal when one has to reproduce the entire image properly.
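Half-toning solves this by approximating shades of grey with patterns of pure black and white dots. Below is a minimal sketch using ordered dithering with a 2x2 Bayer matrix; real half-toning pipelines often use larger matrices or error diffusion instead, so this is only one illustrative variant.

```python
# Ordered dithering with a 2x2 Bayer matrix: each grey pixel (0-255) is
# compared against a position-dependent threshold, so mid greys come out
# as a checkerboard of black and white dots - exactly what a bi-level
# printer can reproduce.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def halftone(image):
    """image is a list of rows of 0-255 values; returns rows of 0/1."""
    out = []
    for y, row in enumerate(image):
        out.append([1 if px > (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
                    else 0
                    for x, px in enumerate(row)])
    return out
```

A uniform mid-grey block (value 128) turns into a pattern with half the dots on, which the eye averages back into grey at printing resolution.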
HALF TONED IMAGE
Color image processing
The retina of the eye is covered with photoreceptor cells (rods and cones).
Rods provide us with monochromatic night vision, while cones are responsible for color vision.
There are about 6 to 7 million cones in the human eye.
The cones occur in three types, differing mainly in the photochemistry they employ to convert light into nerve impulses. The cones divide the visible portion of the electromagnetic spectrum into three bands, viz. red, green and blue; these three colors are hence called the primary colors.
A color TV camera consists of three camera tubes.
Each of the three tubes receives a filtered primary color.
The light from the scene is split into the three primary colors using three prisms.
These prisms are designed as dichroic mirrors: each passes only one band of wavelengths.
The rays from these mirrors then pass through color filters known as trimming filters, and are converted to video signals by the red, green and blue camera tubes.
Morphology is the science of form and structure.
In the strict sense it denotes a branch of biology that deals with the structure of animals and plants.
In image processing, morphology is about regions and shape.
Morphological techniques treat an image as an assembly of sets.
Morphology can hence be defined as the interaction between two sets: the image and a structuring element.
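The set-based view above can be made concrete: a binary image region is a set of pixel coordinates, and the two basic operations, dilation and erosion, combine it with a small structuring element (another set of offsets). This is a minimal sketch of the standard definitions, not any specific library's implementation.

```python
# Morphology as set interaction: a region is a set of (x, y) pixels and
# the structuring element is a set of (dx, dy) offsets.
def dilate(region, selem):
    """Grow the region: every pixel is replaced by a copy of selem."""
    return {(x + dx, y + dy) for (x, y) in region for (dx, dy) in selem}

def erode(region, selem):
    """Shrink the region: keep a pixel only if selem fits inside it."""
    return {(x, y) for (x, y) in region
            if all((x + dx, y + dy) in region for (dx, dy) in selem)}
```

Eroding a 3x3 block with a cross-shaped element, for instance, leaves only the centre pixel, since the cross fits nowhere else.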
Advantages & disadvantages of digital image processing
Advantages:
• High noise immunity
• Adjustable precision
• Ease of design (automation) and fabrication, therefore low cost
• Better reliability
• Less need for calibration and maintenance
• Ease of diagnosis and repair
• Easy to duplicate similar circuits
• Easily controllable by computer
Disadvantages:
• Lower speed
• Needs converters to communicate with the real world, therefore more expensive and less precise:
– Digital to Analog (D/A)
– Analog to Digital (A/D)