ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M -- SOFTWARE ENGINEERING
electronics seminars
Active In SP

Posts: 694
Joined: Nov 2009
#1
13-01-2010, 07:31 AM


ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M -- SOFTWARE ENGINEERING

An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product's required maintenance. Here, a new approach, called Estimation of Defects based on Defect Decay Model (ED3M), is presented that computes an estimate of the defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches, the technique presented here does not depend on historical data from previous projects or on any assumptions about the requirements and/or testers' productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process. This is a key advantage of the ED3M approach, as it makes it widely applicable in different testing environments. Here, the ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated data sets to explore its behavior using different models for the input data. The results are very promising; they indicate the ED3M approach provides accurate estimates with convergence as fast as or faster than well-known alternative techniques, while using only defect data as the input.
Technology to use: .NET
Use Search at http://topicideas.net/search.php wisely to get information about project topics and seminar ideas with report/source code along with pdf and ppt presentation.
seminar topics
Active In SP

Posts: 559
Joined: Mar 2010
#2
29-03-2010, 11:56 PM

Estimation of Defects Based on
Defect Decay Model: ED3M
EXISTING SYSTEM:
Several researchers have investigated the behavior of defect density based on module size. One group of researchers has found that larger modules have lower defect density. Two of the reasons provided for their findings are the smaller number of links between modules and that larger modules are developed with more care. A second group has suggested that there is an optimal module size for which the defect density is minimal; their results have shown that defect density depicts a U-shaped behavior against module size. Still others have reported that smaller modules enjoy lower defect density, exploiting the famous divide-and-conquer rule. Another line of studies has been based on the use of design metrics to predict fault-prone modules. Briand et al. have studied the degree of accuracy of capture-recapture models, proposed by biologists, to predict the number of remaining defects during inspection using actual inspection data. They have also studied the impact of the number of inspectors and the total number of defects on the accuracy of the estimators based on relevant capture-recapture models. Ostrand et al. and Bell et al. have developed a model to predict which files will contain the most faults in the next release based on the structure of each file, as well as fault and modification history from the previous release.
PROPOSED SYSTEM:
Many researchers have addressed this important problem with varying end goals and have proposed estimation techniques to compute the total number of defects. A group of researchers focuses on finding error-prone modules based on the size of the module. Briand et al. predict the number of remaining defects during inspection using actual inspection data, whereas Ostrand et al. predict which files will contain the most faults in the next release. Zhang and Mockus use data collected from previous projects to estimate the number of defects in a new project. However, these data sets are not always available or, even if they are, may lead to inaccurate estimates. For example, Zhang and Mockus use a naïve method based only on the size of the product to select similar projects while ignoring many other critical factors such as project type, complexity, etc. Another alternative that appears to produce very accurate estimates is based on the use of Bayesian Belief Networks (BBNs). However, these techniques require the use of additional information, such as expert knowledge and empirical data, that is not necessarily collected by most software development companies. Software reliability growth models (SRGMs) are also used to estimate the total number of defects to measure software reliability. Although they can be used to indicate the status of the testing process, some have slow convergence while others have limited application, as they may require more input data or initial values that are selected by experts.
Hardware and Software Requirements:
SOFTWARE REQUIREMENTS
VS .NET 2005, C#
SQL Server 2000
Windows XP
HARDWARE REQUIREMENTS
Hard disk : 80 GB
RAM : 512 MB
Processor : Pentium IV
Monitor : 17" color monitor
Estimation of Defects Based on
Defect Decay Model: ED3M
Scope of the project :
The goal of the project is to estimate the defects in a software product. The availability of this estimate allows a test manager to improve his planning, monitoring, and controlling activities; this provides a more efficient testing process. Estimators can achieve high accuracy as more and more data becomes available and the process nears completion.
Introduction :
Software metrics are crucial for characterizing the development status of a software product. Well-defined metrics can help to address many issues, such as cost, resource planning (people, equipment such as testbeds, etc.), and product release schedules. Metrics have been proposed for many phases of the software development lifecycle, including requirements, design, and testing. In this paper, the focus is on characterizing the status of the software testing effort using a single key metric: the estimated number of defects in a software product. The availability of this estimate allows a test manager to
improve his planning, monitoring, and controlling activities; this provides a more efficient testing process. Also, since, in many companies, system testing is one of the last phases (if not the last), the time to release can be better assessed; the estimated remaining defects can be used to predict the required level of customer support. Ideally, a defect estimation technique has several important characteristics. First, the technique should be accurate, as decisions based on inaccurate estimates can be time consuming and costly to correct. However, most estimators can achieve high accuracy only as more and more data become available and the process nears completion.
By that time, the estimates are of little, if any, use. Therefore, a second important characteristic is that accurate estimates need to be available as early as possible during the system testing phase. The faster the estimate converges to the actual value (i.e., the lower its latency), the more valuable the result is to a test manager. Third, the technique should be generally applicable in different software testing processes and on different kinds of software products. The inputs to the process should be commonly available and should not require extensive expertise in an underlying formalism. In this case, the same technique can be widely reused, both within and among software development companies, reducing training costs, the need for additional tool support, etc. Many researchers have addressed this important problem with varying end goals and have proposed estimation techniques to compute the total number of defects. A group of researchers focuses on finding error-prone modules based on the size of the module. Briand et al. predict the number of remaining defects during inspection using actual inspection data, whereas Ostrand et al. predict which files will contain the most faults in the next release. Zhang and Mockus use data collected from previous projects to estimate the number of defects in a new project. However, these data sets are not always available or, even if they are, may lead to inaccurate estimates. For example, Zhang and Mockus use a naïve method based only on the size of the product to select similar projects while ignoring many other critical factors such as project type, complexity, etc. Another alternative that appears to produce very accurate estimates is based on the use of Bayesian Belief Networks (BBNs).
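The convergence and latency notion above can be made concrete with a small sketch. This is illustrative and not taken from the paper: an estimator is considered converged at the first point after which every subsequent estimate stays within a given tolerance band of the true value (the 10, 20, and 30 percent bands mirror the tolerances used in the paper's simulation figures).

```python
def convergence_point(estimates, true_value, tolerance):
    """Index of the first estimate after which all later estimates stay
    within `tolerance` (a fraction, e.g. 0.10) of `true_value`;
    None if the series never settles inside the band."""
    settled = None
    for i, est in enumerate(estimates):
        if abs(est - true_value) <= tolerance * true_value:
            if settled is None:
                settled = i  # entered the band; remember where
        else:
            settled = None  # left the band; reset
    return settled

# Example: a run of estimates homing in on a true defect count of 100.
series = [150, 130, 115, 108, 104, 102, 101]
print(convergence_point(series, 100, 0.10))  # index 3 (estimate 108)
print(convergence_point(series, 100, 0.20))  # index 2 (estimate 115)
```

The lower this index, the lower the estimator's latency and the more useful the estimate is to a test manager.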
Estimation of Defects based on Defect Decay Model (ED3M) is a novel approach proposed here which has been rigorously validated using case studies, simulated data sets, and data sets from the literature. Based on this validation work, the ED3M approach has been shown to produce accurate final estimates with a convergence rate that meets or improves upon closely related, well-known techniques. The only input is the defect data; the ED3M approach is fully automated.
Although the ED3M approach has yielded promising results, there are defect prediction issues that are not addressed by it. For example, system test managers would benefit from obtaining a prediction of the defects to be found in system testing (ST) well before the testing begins, ideally in the requirements or design phase. This could be used to improve the plan for developing the test cases. The ED3M approach, which requires test defect data as the input, cannot be used for this. Alternate approaches which rely on different input data (e.g., historical project data and expert knowledge) could be selected to accomplish this. However, in general, these data are not available at most companies.
A second issue is that test managers may prefer to obtain the predictions for the number of defects on a feature-by-feature basis, rather than for the whole system. Although the ED3M approach could be used for this, the number of sample points for each feature may be too small
to allow for accurate predictions. As before, additional information could be used to achieve such estimations, but this is beyond the scope of this paper. Third, the performance of the ED3M approach is affected when the data diverge from the underlying assumption of an exponential decay behavior.
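The exponential decay assumption mentioned above can be illustrated with a small curve fit. This is a hedged sketch, not the paper's actual estimator (which is derived from estimation theory): assuming cumulative defects follow D(t) = R0 * (1 - exp(-lam * t)), the total defect count R0 can be estimated from the data observed so far. For a fixed decay rate lam, the least-squares R0 has a closed form; lam itself is found here by a coarse grid search.

```python
import math

def estimate_total_defects(times, cum_defects):
    """Fit D(t) = R0 * (1 - exp(-lam * t)) to cumulative defect counts.
    For each candidate lam, the least-squares R0 is
    sum(D_i * m_i) / sum(m_i^2) with m_i = 1 - exp(-lam * t_i).
    Returns (R0, lam). Illustrative only; not the ED3M ML estimator."""
    best = None  # (squared_error, R0, lam)
    for k in range(1, 501):
        lam = k / 100.0  # try lam in (0, 5]
        m = [1.0 - math.exp(-lam * t) for t in times]
        denom = sum(x * x for x in m)
        if denom == 0.0:
            continue
        r0 = sum(d * x for d, x in zip(cum_defects, m)) / denom
        err = sum((d - r0 * x) ** 2 for d, x in zip(cum_defects, m))
        if best is None or err < best[0]:
            best = (err, r0, lam)
    return best[1], best[2]

# Synthetic data generated with R0 = 120, lam = 0.3:
ts = [1, 2, 3, 4, 5, 6, 7, 8]
ds = [120 * (1 - math.exp(-0.3 * t)) for t in ts]
r0, lam = estimate_total_defects(ts, ds)
print(f"estimated R0 = {r0:.1f}, lam = {lam:.2f}")
```

Run with a prefix of the data, the same fit yields the running estimate that improves as testing progresses, which is the behavior the estimator's convergence analysis measures.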
Modules :
1) Login
2) Browse
3) Error Estimation
4) Error Correction
5) Report.
Module Description :
1) Login:
A valid user logs in to send data to the available network systems; if the user is not registered, control moves to the new-user creation form. This module collects the general user details and stores them in the database for future reference: Name, Password, Confirm Password, and Email address.
2) Browse :
The user selects an already created project as input. The project's (.exe) file is selected from the debug folder; from this file, the class names, method names, parameter names, and namespaces are retrieved. Refactoring and merging techniques are then applied to the selected class.
3) Error Estimation :
This module computes an estimate of the defects in an ongoing testing process. This could be used to improve the plan for developing the test cases.
Since system testing is one of the last phases (if not the last), the time to release can be better assessed; the estimated remaining defects can be used to predict the required level of customer support.
4) Error Correction :
Error correction techniques are used to improve the estimates and, consequently, reduce the convergence time. After the error has been estimated, it is corrected in the specified files at the specified line numbers.
The technique is used to correct the current estimate; the corrected value is the output.
5) Report :
In this module, a process report is generated for the specified users. The report displays the corrected file name, user name, date, and status of the project. The information is stored in the admin database.
Module I/O :
Module Input :
An already created project is given as input, and the defects in the project are estimated. The only
input is the defect data; the ED3M approach is fully automated.
Module output :
The error correction technique in the ED3M model iteratively determines a mean growth factor, which is used to compensate for the error in the estimate and correct the defect count.
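As a purely hypothetical sketch of the growth-factor idea (the post does not give ED3M's actual correction formula), one simple variant compensates the latest running estimate by the mean growth factor of the estimate history:

```python
def corrected_estimate(estimates):
    """Compensate the latest estimate using the mean growth factor of
    the estimate history. Hypothetical sketch, not the ED3M formula:
    if estimates are still climbing by a factor g per step on average,
    the next value is expected to be about g times the last one."""
    ratios = [b / a for a, b in zip(estimates, estimates[1:]) if a > 0]
    g = sum(ratios) / len(ratios)  # mean growth factor
    return estimates[-1] * g

# Estimates still growing: 50 -> 60 (x1.2), 60 -> 66 (x1.1); g = 1.15.
print(corrected_estimate([50.0, 60.0, 66.0]))  # 66 * 1.15 = 75.9
```

Projecting ahead of the raw estimate in this way is what lets a correction step reduce convergence time: the corrected value reaches the neighborhood of the final count earlier than the uncorrected series does.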
Diagrams :
The attached report contains the following diagrams: module diagram; UML diagrams (use case, class, object, collaboration, sequence, statechart, activity, and component); E-R diagram; dataflow diagram; project flow diagram; and system architecture.
Literature review :
Since the 1970s, the traditional way of predicting software reliability has been the use of software reliability growth models. They were developed in a time when software was developed using a waterfall process model. This is in line with the
fact that most software reliability growth models require a substantial amount of failure data to get any trustworthy estimate of the reliability. Software reliability growth models are normally described in the form of an equation with a number of parameters that need to be fitted to the failure data. A key problem is that the curve fitting often means that the parameters can only be estimated very late in testing, and hence their industrial value for decision-making is limited. This is particularly the case when development is done, for example, using an incremental approach or other short-turnaround approaches: a sufficient amount of failure data is simply not available. Software reliability growth models were initially developed for a quite different situation than today's. Thus, it is not a surprise that they are not really fit for today's challenges unless the problems can be circumvented. This paper addresses some of the possibilities for addressing the problems with software reliability growth models by looking at ways of estimating the parameters in software reliability growth models before entering integration or system testing.
Construction simulation tools typically provide results in the form of numerical or statistical data. However, they do not illustrate the modeled operations graphically in 3D. This poses significant difficulty in communicating the results of simulation models, especially to persons who are not trained in simulation but are domain experts. The resulting Black-Box Effect is a major impediment in verifying and validating simulation models. Decision makers often do not have the means, the training, and/or the time to verify and validate simulation models based solely on the numerical output of simulation models; they are thus always skeptical about simulation analyses and have little confidence in their results. This lack of credibility is a major deterrent hindering the widespread use of simulation as an operations planning tool in construction. This paper illustrates the use of DES in the design of a complex dynamic earthwork operation whose control logic was verified and validated using 3D animation. The model was created using Stroboscope and animated using the Dynamic Construction Visualizer.
Over the years, many defect prediction studies have been conducted. The studies consider the problem using a variety of mathematical models (e.g., Bayesian Networks, probability distributions, reliability growth models, etc.) and characteristics of the project, such as module size, file structure, etc. A useful survey and critique of these techniques is available in the literature. Several researchers have investigated the behavior of defect density based on module size. One group of researchers has found that larger modules have lower defect density. Two of the reasons provided for their findings are the smaller number of links between modules and that larger modules are developed with more care. A second group has suggested that there is an optimal module size for which the defect density is minimal; their results have shown that defect density depicts a U-shaped behavior against module size. Still others have reported that smaller modules enjoy lower defect density, exploiting the famous divide-and-conquer rule. Another line of studies has been based on the use of design metrics to predict fault-prone modules. Briand et al. have studied the degree of accuracy of capture-recapture models, proposed by biologists, to predict the number of remaining defects during inspection using actual inspection data. They have also studied the impact of the number of inspectors and the total number of defects on the accuracy of the estimators based on relevant capture-recapture models. Ostrand and Bell have developed a model to predict which files will contain the most faults in the next release based on the structure of each file, as well as fault and modification history from the previous release. Their research [5] has shown that faults are distributed in files according to the famous Pareto Principle, i.e., 80 percent of the faults are found in 20 percent of the files.
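The Pareto observation above (80 percent of faults in 20 percent of files) is straightforward to check on a per-file fault table. The fault counts below are invented for illustration:

```python
def pareto_fraction(faults_per_file, file_share=0.2):
    """Fraction of all faults found in the top `file_share` of files,
    when files are ranked by descending fault count."""
    counts = sorted(faults_per_file, reverse=True)
    top_n = max(1, round(len(counts) * file_share))
    return sum(counts[:top_n]) / sum(counts)

# Hypothetical fault counts for ten files:
faults = [40, 25, 8, 5, 4, 3, 2, 1, 1, 1]
print(f"{pareto_fraction(faults):.0%}")  # share held by the top 20% of files
```

For this made-up data the top two files hold 65 of 90 faults, about 72 percent, i.e., close to the Pareto pattern Ostrand and Bell report for real releases.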
Zhang and Mockus assume that defects discovered and fixed during development are caused by implementing new features recorded as Modification Requests (MRs). Historical data from past projects are used to collect estimates for the defect rate per feature MR, the time to repair the defect in a feature, and the delay between a feature implementation and defect repair activities. The selection criteria for past similar projects are based only on the size of the project, while disregarding many other critical characteristics. These estimates are used as input to a prediction model, based on the Poisson distribution, to predict the number of defect repair MRs.
The technique that has been presented by Zhang and Mockus relies solely on historical data from past projects and does not consider the data from the current project. Fenton et al. have used BBNs to predict the number of defects in the software. The results shown are plausible; the authors also explain causes of the results from the model. However, accuracy has been achieved at the cost of requiring expert knowledge from the Project Manager and historical data (information besides defect data) from past projects. Currently, such information is not always collected in industry. Also, expert knowledge is highly subjective and can be biased. These factors may limit the application of such models to the few companies that can cope with these requirements. This has been a key motivating factor in developing the ED3M approach. The only information ED3M needs is the defect data from the ongoing testing process; this is collected by almost all companies. Gras et al. also advocate the use and effectiveness of BBNs for defect prediction. However, they point out that the use of BBNs is not always possible, and an alternative method, Defect Profile Modeling (DPM), is proposed. Although DPM does not demand as much calibration as BBNs, it does rely on data from past projects, such as the defect
identifier, release sourced, phase sourced, release found, phase found, etc. Many reliability models have been used to predict the number of defects in a software product. The models have also been used to provide the status of the testing process based on the defect growth curve. For example, if the defect curve is growing exponentially, then more undiscovered defects are to follow and testing should continue. If the growth curve has reached saturation, then the decision regarding the fate of testing can be reviewed by managers and engineers.
Advantages :
1) Accurate estimates as early as possible during the system testing process.
2) No need for historical data or expert knowledge; only defect data is required to compute the estimates.
3) Can estimate defects even in large modules.
4) Corrects the current estimate; the corrected value is the output.
Application :
• XML-based application testing
• Java application testing
• N-tier client-server application testing
• WAP application testing


Attached Files
.doc   Estimation_of_Defects_Based_on_Defect_Decay_Model_ED3M.doc (Size: 1.29 MB / Downloads: 136)
projectsofme
Active In SP

Posts: 1,124
Joined: Jun 2010
#3
24-09-2010, 03:23 PM

For more details about ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M (SOFTWARE ENGINEERING), please follow the link:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4492790
