CONCEALED WEAPON DETECTION USING DIGITAL IMAGE PROCESSING: FULL REPORT
AUTHORS:
P. Nikitha Reddy, ECE-3/4, GNITS
Swathi Sharma, ECE-3/4, GNITS

ABSTRACT
In the present scenario, bomb blasts are rampant all around the world. Bombs have gone off in buses and underground stations, killing many and leaving many more injured. Bomb blasts cannot be predicted beforehand. This paper is about IMAGING FOR CONCEALED WEAPON DETECTION: the technology that aims to detect suicide bombers and concealed weapons, the sensor improvements behind it, how the imaging takes place, and the challenges involved. We also describe techniques for simultaneous noise suppression and object enhancement of video data and show some mathematical results.

The detection of weapons concealed underneath a person's clothing is very important to improving the security of the general public as well as the safety of public assets such as airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure.
INTRODUCTION:
Until now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives in places such as airports, sensitive buildings, and famous landmarks. These manual procedures do not give satisfactory results: they can screen a person only when that person is near the screening machine, and they sometimes raise false alarms. We therefore need a technology that can detect a weapon reliably by scanning. This can be achieved by imaging for concealed weapons.

The goal is the eventual deployment of automatic detection and recognition of concealed weapons. It is a technological challenge that requires innovative solutions in sensor technologies and image processing.

The problem also presents challenges in the legal arena. A number of sensors based on different phenomenologies, together with supporting image processing, are being developed to observe objects underneath people's clothing.

IMAGING SENSORS
Imaging sensors developed for CWD applications can be classified by their portability, their proximity to the subject, and whether they use active or passive illumination.
1. INFRARED IMAGER:
Infrared imagers utilize the temperature distribution information of the target to form an image. Normally they are used for a variety of night-vision applications, such as viewing vehicles and people. The underlying theory is that the infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. As a result, infrared radiation can be used to show the image of a concealed weapon only when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted infrared radiation will be spread over a larger clothing area, thus decreasing the ability to image a weapon.
2. PASSIVE MMW IMAGING SENSORS:
FIRST GENERATION:
Passive millimeter wave (MMW) sensors measure the apparent temperature through the energy that is emitted or reflected by sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is made possible by MMW sensors due to the low emissivity and high reflectivity of objects like metallic guns. In early 1995, MMW data were obtained by means of scans using a single detector that took up to 90 minutes to generate one image.
Figure 1(a) shows a visual image of a person wearing a heavy sweater that conceals two guns made of metal and ceramic. The corresponding 94-GHz radiometric image, Figure 1(b), was obtained by scanning a single detector across the object plane using a mechanical scanner. The radiometric image clearly shows both firearms.

SECOND GENERATION:
Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Terex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. This system collects up to 30 frames/s of MMW data.

CWD THROUGH IMAGE FUSION:
By fusing passive MMW image data and its corresponding infrared (IR) or electro-optical (EO) image, more complete information can be obtained.
The information can then be utilized to facilitate concealed weapon detection.
Fusion of an IR image revealing a concealed weapon with its corresponding MMW image has been shown to facilitate extraction of the concealed weapon. This is illustrated in the following example: Figure 3(a) shows an image taken with a regular CCD camera, and Figure 3(b) shows the corresponding MMW image. If either of these two images alone is presented to a human operator, it is difficult to recognize the weapon concealed underneath the rightmost person's clothing. If the fused image shown in Figure 3(c) is presented, a human operator is able to respond with higher accuracy. This demonstrates the benefit of image fusion for the CWD application, which integrates complementary information from multiple types of sensors.

IMAGE PROCESSING ARCHITECTURE:

An image processing architecture for CWD is shown in Figure 4. The input can be multisensor data (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) or MMW data alone. In the latter case, the registration and fusion blocks can be removed from Figure 4. The output can take several forms: a processed image or video sequence displayed on a screen; a cued display in which potential concealed weapon types and locations are highlighted with associated confidence measures; a yes/no/maybe indicator; or a combination of the above. The image processing procedures that have been investigated for CWD applications range from simple denoising to automatic pattern recognition.
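As a rough illustration, the architecture above can be sketched as a chain of stages. Everything in this sketch is hypothetical scaffolding: the function names and the trivial placeholder bodies are assumptions for illustration, not the actual CWD algorithms.

```python
# Illustrative skeleton of the CWD processing chain (Figure 4).
# Images are plain 2-D lists; each stage is a trivial placeholder.

def denoise(img):
    # placeholder: clamp negative sensor values to zero
    return [[max(0, p) for p in row] for row in img]

def register(float_img, ref_img):
    # placeholder: assume the sensors are already aligned
    return float_img

def fuse(a, b):
    # placeholder: pixel-wise maximum keeps the brighter response
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def detect(img, threshold):
    # placeholder: a simple yes/no indicator from a brightness threshold
    return any(p >= threshold for row in img for p in row)

def cwd_pipeline(mmw, ir, threshold=200):
    mmw, ir = denoise(mmw), denoise(ir)
    ir = register(ir, mmw)
    fused = fuse(mmw, ir)
    return fused, detect(fused, threshold)

fused, alarm = cwd_pipeline([[10, 250], [-5, 30]], [[20, 40], [60, 80]])
```

With MMW-only input, the `register` and `fuse` stages would simply be dropped, mirroring the text's remark about removing those blocks.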

WAVELET APPROACHES FOR PREPROCESSING:
Before an image or video sequence is presented to a human observer for operator-assisted weapon detection or fed into an automatic weapon detection algorithm, it is desirable to preprocess the images or video data to maximize their exploitation. The preprocessing steps considered in this section include enhancement and filtering for the removal of shadows, wrinkles, and other artifacts. When more than one sensor is used, preprocessing must also include registration and fusion procedures.

1) IMAGE DENOISING & ENHANCEMENT THROUGH WAVELETS:
Many techniques have been developed to improve the quality of MMW images. In this section, we describe a technique for simultaneous noise suppression and object enhancement of passive MMW video data and show some mathematical results.
Denoising of the video sequences can be achieved temporally or spatially. First, temporal denoising is achieved by motion-compensated filtering, which estimates the motion trajectory of each pixel and then conducts a 1-D filtering along that trajectory. This reduces the blurring that occurs when temporal filtering is performed without regard to object motion between frames. The motion trajectory of a pixel can be estimated by various algorithms, such as optical-flow methods, block-based methods, and Bayesian methods. If the motion in an image sequence is not abrupt, the search for the motion trajectory can be restricted to a small region in the subsequent frames. For additional denoising and object enhancement, the technique employs a wavelet transform method that is based on multiscale edge representation.
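The motion-compensated temporal filtering described above can be sketched as follows. This is a minimal illustration under assumed simplifications: a 1x1 "block" is matched within a +/-1-pixel search window, standing in for a real block-based or optical-flow estimator.

```python
# Sketch of motion-compensated temporal filtering: follow a pixel's
# trajectory across frames, then filter (here, average) along it.

def best_match(frame, r, c, value):
    """Find the in-bounds neighbour of (r, c) whose intensity is
    closest to `value` -- a crude stand-in for block matching."""
    H, W = len(frame), len(frame[0])
    candidates = [(rr, cc) for rr in range(r - 1, r + 2)
                  for cc in range(c - 1, c + 2)
                  if 0 <= rr < H and 0 <= cc < W]
    return min(candidates, key=lambda p: abs(frame[p[0]][p[1]] - value))

def temporal_filter(frames, r, c):
    """Average one pixel's intensity along its estimated trajectory."""
    value = frames[0][r][c]
    samples = [value]
    for frame in frames[1:]:
        r, c = best_match(frame, r, c, value)   # restricted local search
        samples.append(frame[r][c])
    return sum(samples) / len(samples)
```

Because the filter runs along the estimated trajectory rather than at a fixed pixel location, a moving bright object is smoothed without being blurred into the background.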

The frame was then spatially denoised and enhanced by the wavelet transform method. Four decomposition levels were used, and edges in the fine scales were detected using the magnitudes and angles of the gradients of the multiscale edge representation. The threshold for denoising was 15% of the maximum gradient at each scale. Note that the image of the handgun on the chest of the subject is more apparent in the enhanced frame than in the original frame. However, spurious features such as glint are also enhanced; higher-level procedures such as pattern recognition have to be used to discard these undesirable features.
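The thresholding idea can be illustrated with a single-level 2x2 Haar transform standing in for the four-level multiscale edge representation described above; the 15% threshold is applied to the detail (edge) coefficients. This is a simplified sketch, not the paper's exact method, and it assumes even image dimensions.

```python
# Sketch of wavelet-domain de-noising: transform, zero out detail
# coefficients below 15% of the peak magnitude, inverse-transform.

def haar_denoise(img, frac=0.15):
    H, W = len(img), len(img[0])          # assumes H and W are even
    blocks = []
    for r in range(0, H, 2):
        for c in range(0, W, 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            avg = (a + b + d + e) / 4.0   # approximation coefficient
            det = ((a - b + d - e) / 4.0,  # horizontal detail
                   (a + b - d - e) / 4.0,  # vertical detail
                   (a - b - d + e) / 4.0)  # diagonal detail
            blocks.append((r, c, avg, det))
    peak = max(max(abs(x) for x in det) for *_, det in blocks) or 1.0
    out = [[0.0] * W for _ in range(H)]
    for r, c, avg, (h, v, dg) in blocks:
        # keep only details at or above 15% of the peak magnitude
        h, v, dg = (x if abs(x) >= frac * peak else 0.0 for x in (h, v, dg))
        out[r][c], out[r][c + 1] = avg + h + v + dg, avg - h + v - dg
        out[r + 1][c], out[r + 1][c + 1] = avg + h - v - dg, avg - h - v + dg
    return out
```

Strong edges (large coefficients) survive the threshold, while small fluctuations are flattened toward the local average, which is the behaviour the text describes for the multiscale method.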
2) CLUTTER FILTERING:
Clutter filtering is used to remove unwanted details (shadows, wrinkles, imaging artifacts, etc.) that are not needed in the final image for human observation and that can adversely affect the performance of the automatic recognition stage. Removing them improves recognition performance, whether operator-assisted or automatic. For this purpose, morphological filters have been employed. Examples of the use of morphological filtering for noise removal are provided in the complete CWD example given in the figure; a complete description of the example is given in a later section.
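A minimal sketch of morphological filtering on a binary mask, assuming a 3x3 structuring element: opening (erosion followed by dilation) removes clutter smaller than the element while preserving larger regions, which is the behaviour exploited above.

```python
# Morphological opening on a binary mask with a 3x3 window
# (window clipped at the image border).

def erode(mask):
    H, W = len(mask), len(mask[0])
    return [[int(all(mask[rr][cc]
                     for rr in range(max(0, r - 1), min(H, r + 2))
                     for cc in range(max(0, c - 1), min(W, c + 2))))
             for c in range(W)] for r in range(H)]

def dilate(mask):
    H, W = len(mask), len(mask[0])
    return [[int(any(mask[rr][cc]
                     for rr in range(max(0, r - 1), min(H, r + 2))
                     for cc in range(max(0, c - 1), min(W, c + 2))))
             for c in range(W)] for r in range(H)]

def opening(mask):
    # erosion deletes anything thinner than the element;
    # dilation restores the surviving regions to full size
    return dilate(erode(mask))
```

An isolated one-pixel speck (e.g. a glint artifact) vanishes under opening, while a solid 3x3 region is returned intact.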

3) REGISTRATION OF MULTI-SENSOR IMAGES:
As indicated earlier, making use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is a precise alignment of images (i.e., image registration).
Very little has been reported on the registration problem for the CWD application. Here, we describe a registration approach, for images taken at the same time from different but nearly collocated (adjacent and parallel) sensors, based on the maximization of mutual information (MMI) criterion. MMI states that two images are registered when their mutual information (MI) reaches its maximum value. This can be expressed mathematically as

    a* = arg max_a I( F(x̂), R(T_a(x̂)) )

where F and R are the images to be registered. F is referred to as the floating image, whose pixel coordinates x̂ are to be mapped to new coordinates on the reference image R. The reference image R is resampled at the positions defined by the new coordinates T_a(x̂), where T denotes the transformation model and the dependence of T on its associated parameters a is indicated by the notation T_a. I is the MI similarity measure, calculated over the region of overlap of the two images; it can be computed from the joint histogram of the two images. The criterion above says that the two images F and R are registered through T_a* when a* globally optimizes the MI measure. A two-stage registration algorithm was developed for the registration of IR images and the corresponding first-generation MMW images. At the first stage, two human-silhouette extraction algorithms were applied, followed by a binary correlation to coarsely register the two images.
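The MI measure itself can be computed directly from the joint histogram, as described above. The following is a minimal sketch; the 4-bin quantization and the intensity range are arbitrary choices for illustration.

```python
# Mutual information of two equally sized images, computed from
# their joint intensity histogram: I = sum p(i,j) log(p(i,j)/(p(i)p(j))).
import math

def mutual_information(f, r, bins=4, lo=0, hi=256):
    scale = bins / (hi - lo)
    joint = [[0] * bins for _ in range(bins)]
    n = 0
    for row_f, row_r in zip(f, r):
        for pf, pr in zip(row_f, row_r):
            joint[int((pf - lo) * scale)][int((pr - lo) * scale)] += 1
            n += 1
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pij = joint[i][j] / n
            if pij > 0:
                pi = sum(joint[i]) / n                       # marginal of f
                pj = sum(joint[k][j] for k in range(bins)) / n  # marginal of r
                mi += pij * math.log(pij / (pi * pj))
    return mi
```

An optimizer would evaluate this measure for candidate parameters a of the transformation T_a and keep the a that maximizes it; MI peaks when the two images are aligned (for an image against itself it equals the image's entropy) and drops toward zero when they are unrelated.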

The purpose was to provide an initial search point close to the final solution for the second stage of the registration algorithm based on the MMI criterion.
In this manner, any local optimizer can be employed to maximize the MI measure.
One registration result obtained by this approach is illustrated through the example given in Figure 6.
4) IMAGE DECOMPOSITION:
The most straightforward approach to image fusion is to take the average of the source images, but this can produce undesirable results such as a decrease in contrast. Many of the advanced image fusion methods involve multiresolution image decomposition based on the wavelet transform. First, an image pyramid is constructed for each source image by applying the wavelet transform to it. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is to choose the maximum of the two corresponding transform values; this allows details from two or more images to be integrated into one. Finally, the composite image is obtained by taking the inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source images, and this type of method has been used to fuse IR and MMW images for the CWD application. The first fusion example is given in Figure 7, where two IR images taken by separate IR cameras from different viewing angles are considered. The advantage of image fusion in this case is clear, since a complete gun shape can be observed only in the fused image.
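The choose-maximum fusion rule described above can be sketched in a one-level Haar transform domain. For brevity this illustration uses 1-D signals; a real image pyramid applies the same rule to every subband coefficient at every level.

```python
# Max-rule fusion in a one-level Haar transform domain (1-D sketch).

def haar_fwd(x):
    avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return avg, det

def haar_inv(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def fuse_max_rule(x, y):
    ax, dx = haar_fwd(x)
    ay, dy = haar_fwd(y)
    # averages: mean of the two sources; details: pick the larger
    # magnitude, so the sharper edge wins
    avg = [(a + b) / 2 for a, b in zip(ax, ay)]
    det = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return haar_inv(avg, det)
```

Because the detail coefficients carry edges, picking the larger-magnitude coefficient preserves an edge that appears in only one source, whereas plain pixel averaging would halve its contrast, which matches the motivation given above.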


AUTOMATIC WEAPON DETECTION:
After preprocessing, the images or video sequences can be displayed for operator-assisted weapon detection or fed into a weapon detection module for automated detection. Toward this aim, several steps are required, including object extraction, shape description, and weapon recognition.

SEGMENTATION FOR OBJECT EXTRACTION
Object extraction is an important step toward automatic recognition of a weapon, regardless of whether the image fusion step is involved. It has been used successfully to extract the gun shape from the fused IR and MMW images, which could not be achieved using the original images alone. The segmented result from the fused IR and MMW image is shown in Figure 6.
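A minimal sketch of object extraction, assuming a simple global threshold followed by keeping the largest 4-connected component as the candidate weapon region; the segmentation actually used for CWD is more elaborate.

```python
# Threshold the fused image, then keep the largest 4-connected
# component of above-threshold pixels (breadth-first search).
from collections import deque

def largest_component(img, threshold):
    H, W = len(img), len(img[0])
    seen, best = set(), set()
    for r in range(H):
        for c in range(W):
            if img[r][c] >= threshold and (r, c) not in seen:
                comp, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    cr, cc = queue.popleft()
                    comp.add((cr, cc))
                    for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                        if (0 <= nr < H and 0 <= nc < W and
                                img[nr][nc] >= threshold and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                if len(comp) > len(best):
                    best = comp
    return best
```

The extracted region would then be passed to the shape-description and weapon-recognition stages mentioned above.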

CHALLENGES:

There are several challenges ahead. One critical issue is performing detection at a distance with a high probability of detection and a low probability of false alarm. Another difficulty to be surmounted is building portable multi-sensor instruments. Finally, detection systems go hand in hand with the subsequent response by the operator, so system development should take the overall context of deployment into account.

CONCLUSION:

Imaging techniques based on a combination of sensor technologies and processing will potentially play a key role in addressing the concealed weapon detection problem. Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras alone cannot provide useful information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR or MMW and EO cameras have been described. By integrating the complementary information from different sensors, a more effective CWD system is expected.

REFERENCES:
1. IEEE Signal Processing Magazine, March 2005, pp. 52-61.
2. wikipedia.org
3. N. G. Paulter, Guide to the Technologies of Concealed Weapon and Contraband Imaging and Detection, NIJ Guide 602-00, 2001.
4. imageprocessing.com



