Pile-up correction using digital signal processing
21-12-2010, 04:50 PM
PROJECt.pdf (Size: 1.06 MB / Downloads: 122)
Presented by: Surajit Ghosh, TSO, Electronics
The pileup effect has long been recognized as a limiting factor in performing gamma-ray spectrometry at high count rates. Pileup occurs when two or more events take place so close together in time that the measurement system responds as if to a single event. The pileup effect deteriorates the resolution of the energy spectrum; it can even result in the total masking of some spectral lines of interest. Degradation of the spectrum's resolution is particularly pronounced at high count rates, since there is a higher likelihood that pulses will overlap and pile up. Traditional analog and digital techniques used to minimize or remove the effect of pulse pileup are based on detecting and discarding input pulses with evident pileup distortion. Almost every analog or digital technique uses three basic steps:
1) reduction of the pileup effects;
2) detection of the beginning of the pulses; and
3) computation of the amplitudes of overlapped pulses.

Different approaches have been taken to minimizing the effects of pulse pileup, among which the following should be mentioned: 1) reduction of the pileup effect by deconvolution of the overlapped pulses; 2) improvement of the energy resolution of the spectrum (Monte Carlo technique); and 3) reduction of the pileup effect by pulse clipping (PC). Several methods for reducing the distortion in pulse-height spectra caused by pulse pileup have been developed. A deconvolution method that allows inverse filtering of the pulses to separate them has been studied. Precise detection of the beginning of a pulse is very important. The most common approach is simple thresholding, where the beginning of the pulse is detected when the signal rises above a predefined level. This method is error prone, particularly when the noise level is high. The basic idea behind pulse clipping (PC) is that shorter pulses have a lower likelihood of piling up: the duration of the pulse tail is reduced, while the amplitude and the rising edge of the pulse usually remain unchanged.

The goal of this project is to develop algorithms to separate piled-up pulses and, finally, to build hardware that corrects pileup errors. The algorithms that give very good resolution and the best results run efficiently on a computer but are hard to implement in hardware. It is a challenge to realize a real-time system that performs complex computations such as the FFT (Fast Fourier Transform), curve fitting and linear prediction, or optimizations such as PSO (Particle Swarm Optimization) and GA (Genetic Algorithm).
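As an illustration of the thresholding and pulse clipping (PC) ideas above, here is a minimal Python sketch. The decay constant, threshold and clip length are hypothetical values chosen for the example, not parameters from the project.

```python
import math

def detect_pulse_starts(signal, threshold):
    # Simple thresholding: flag a pulse start wherever the signal rises
    # from below to above the predefined level. As the text notes, this
    # is error prone when the noise level is high.
    starts = []
    for i in range(1, len(signal)):
        if signal[i] > threshold and signal[i - 1] <= threshold:
            starts.append(i)
    return starts

def clip_pulse(pulse, clip_length):
    # Pulse clipping (PC): keep the rising edge and amplitude, but
    # truncate the slow tail so overlap with later pulses is less likely.
    return pulse[:clip_length] + [0.0] * (len(pulse) - clip_length)

# Two overlapping exponential pulses (hypothetical 50-sample decay constant).
tau = 50.0
pulse1 = [math.exp(-t / tau) for t in range(200)]
pulse2 = [0.0] * 60 + [0.8 * math.exp(-t / tau) for t in range(140)]
signal = [a + b for a, b in zip(pulse1, pulse2)]

starts = detect_pulse_starts(signal, threshold=0.5)  # second pulse at sample 60
clipped = clip_pulse(pulse1, 30)                     # tail zeroed after 30 samples
```

Note that the thresholding misses the first pulse here (the record begins above threshold), which is exactly the kind of fragility the text warns about.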
DIGITAL PULSE PROCESSING:
Digital pulse processing is a signal processing technique in which detector (preamplifier output) signals are directly digitized and processed to extract quantities of interest. This approach has several significant advantages compared to traditional analog signal shaping. First, analyses can be developed which take pulse-by-pulse differences into account, as in making ballistic deficit compensations. Second, transient induced charge signals, which deposit no net charge on an electrode, can be analyzed to give, for example, information on the position of interaction within the detector. Third, dead-times from transient overload signals are greatly reduced, from tens of ms to hundreds of ns. Fourth, signals are easily captured and stored, so that more complex analyses can be deferred.
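The inverse-filtering (deconvolution) idea mentioned earlier can be sketched in a few lines of Python on digitized samples: for a pulse train with a known exponential decay constant, a one-pole recursive filter collapses each pulse to a single impulse whose height is the original amplitude, so piled-up pulses separate into distinct spikes. The decay constant and pulse positions below are hypothetical example values.

```python
import math

def deconvolve_exponential(samples, tau):
    # Inverse-filter an exponentially decaying pulse train:
    # d[n] = s[n] - exp(-1/tau) * s[n-1] cancels the known decay, so
    # each pulse collapses to an impulse at its start whose height is
    # the original pulse amplitude.
    k = math.exp(-1.0 / tau)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(samples[n] - k * samples[n - 1])
    return out

tau = 50.0
# Two piled-up pulses: amplitude 1.0 at sample 10 and 0.6 at sample 40.
s = [0.0] * 100
for start, amp in [(10, 1.0), (40, 0.6)]:
    for t in range(start, 100):
        s[t] += amp * math.exp(-(t - start) / tau)

d = deconvolve_exponential(s, tau)  # spikes of ~1.0 and ~0.6 at samples 10 and 40
```

In practice the decay constant must be known (or fitted) accurately, and noise is amplified by the inverse filter, which is why the text treats deconvolution as only one of several approaches.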
2.1 THE DIGITAL APPROACH TO SPECTROSCOPIC PULSE PROCESSING:
At the conceptual level, digital and analog signal processing share several common goals, which include: (1) to extract information of interest (pulse height, pulse shape, arrival time, etc.) from an incoming data stream; (2) to suppress non-essential information (i.e. "noise"); (3) to reduce the incoming data stream to a manageable level; and (4) to sort or present the extracted data in a manner that makes it intelligible. Each of these functions is present in all spectrometers, whether one is dealing with a single, simple gamma-ray detector used in a counting experiment or with an array of hundreds of detectors with anti-Compton shields in a large nuclear physics experiment.
COMPARISON: DIGITAL TO ANALOG
The typical modern analog spectrometer simultaneously processes the preamplifier signal using a fast channel and a slow channel. The fast channel detects pulses and provides pileup inspection, while the slow channel provides energy resolution. When a valid pulse is detected, the slow channel peak is captured, stretched and digitized by an analog-to-digital converter (ADC) and binned by a multi-channel analyzer (MCA) for inclusion in a spectrum. Topologically, a digital pulse processor (DPP) is similar. The major difference is that, after signal conditioning, the preamplifier signal is digitized immediately and all fast and slow channel operations are carried out in digital filters. This distinction provides several immediate benefits. First, all channels work with identical copies of the signal, whose fidelity can be maintained indefinitely. Second, there is no dead-time penalty incurred by the digitization step, as there is in the analog case. Third, pulse shapes are easily captured and stored for analyses, immediate or delayed, which are difficult or impossible to implement in analog circuitry. Finally, if the system is overloaded, it recovers with the speed of the signal conditioning circuit, rather than of the slow energy filtering circuit.

A second distinction is that DPPs have an enhanced ability to delay signals while accurately preserving time information between different filtering operations. A digital delay line, for example, is just a first-in-first-out (FIFO) memory, which can easily be tens of ms long with perfect signal fidelity. Time information can be preserved because all filters run synchronously on a common clock and require a fixed number of cycles to execute. For example, since the time between the firing of a fast channel discriminator and the arrival of the signal peak in the slow channel is a known number of clock cycles, "peak capture" in the DPP is simply implemented using a register gated by a counter.
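The fast-channel/slow-channel topology and the counter-gated peak capture described above can be sketched as follows. This is a hedged illustration in Python, not actual DPP firmware: the "slow" filter is a simple moving average, the "fast" channel is a one-sample difference, and the latency is the known number of clock cycles between discriminator firing and the slow-channel peak. All numeric values are hypothetical.

```python
def moving_average(signal, length):
    # "Slow" channel: running average over the last `length` samples,
    # trading time resolution for noise suppression (energy resolution).
    out = []
    acc = 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= length:
            acc -= signal[i - length]
        out.append(acc / min(i + 1, length))
    return out

def capture_peak(fast, slow, threshold, latency):
    # When the fast-channel discriminator fires, start a counter; after
    # `latency` clock cycles, latch the slow-channel output -- the
    # counter-gated register described in the text.
    captures = []
    countdown = None
    for i in range(len(fast)):
        fired = fast[i] > threshold and (i == 0 or fast[i - 1] <= threshold)
        if countdown is None and fired:
            countdown = latency
        elif countdown is not None:
            countdown -= 1
            if countdown == 0:
                captures.append(slow[i])
                countdown = None
    return captures

# A single step pulse of amplitude 2.0 arriving at sample 20.
signal = [0.0] * 20 + [2.0] * 80
fast = [0.0] + [signal[i] - signal[i - 1] for i in range(1, len(signal))]
slow = moving_average(signal, 8)

# The 8-sample slow filter settles 7 cycles after the fast channel fires,
# so the peak is latched with a fixed latency of 7 clock cycles.
captures = capture_peak(fast, slow, threshold=1.0, latency=7)
```

Because both filters run on the same sample clock, the latency is a constant, which is exactly why a counter-gated register suffices for peak capture.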
Several approaches for DPP implementation have been developed. The earliest was to stream data through a FIFO and, when a pulse was detected, to load the FIFO contents into a digital signal processor (DSP) to apply the desired filtering algorithms. This approach is capable of the highest resolution but is inherently slow due to the number of processing operations required per pulse. A second approach was to implement all digital filtering in hardwired logic, using, for example, digital multiplier chips to implement the filters. This approach is capable of very high throughput rates but the required circuits were complex, expensive and extremely power hungry.
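The first implementation approach above (streaming samples through a FIFO and handing detected pulses to a DSP for filtering) can be sketched with a software FIFO. The threshold and FIFO depth are arbitrary example values, and the "DSP" stage is reduced to simply snapshotting the buffer.

```python
from collections import deque

def stream_and_capture(samples, threshold, fifo_len):
    # Stream data through a fixed-depth FIFO; when a pulse is detected,
    # snapshot the FIFO contents (pre-trigger baseline plus the pulse
    # onset) for a DSP routine to filter offline.
    fifo = deque(maxlen=fifo_len)
    captured = []
    for x in samples:
        fifo.append(x)
        # Trigger when the newest sample crosses the threshold and the
        # FIFO already holds a full pre-trigger history.
        if x > threshold and len(fifo) == fifo_len and fifo[-2] <= threshold:
            captured.append(list(fifo))  # snapshot handed to the "DSP"
    return captured

samples = [0.0] * 20 + [2.0] * 5 + [0.0] * 20
captured = stream_and_capture(samples, threshold=1.0, fifo_len=8)
```

The per-pulse processing cost of whatever filtering follows the snapshot is what makes this approach inherently slow at high count rates, as the text notes.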
AREAS OF NEW CAPABILITY
As a result of these developments, several areas of new capability are now available. These include the following [2-3]:
1. Pulse-specific corrections to provide ballistic deficit correction and particle identification.
2. Transient signal analysis of induced charge signals to provide photon interaction location information.
3. Improved transient response to eliminate slow-filter overload responses.
4. Hierarchies of processing complexity which readily support simple-fast vs. complex-slow decision making on an event-by-event basis.
5. Complex coincidence criteria are readily supported, so that criteria for capturing and/or processing data can be immediate or delayed.
6. Digital communication enhances operating convenience by allowing remote operation, instrument self-calibration, and restoration of previous setups from files.