ONLINE FREIGHT TRACKING SYSTEM
SANJU K THOMAS ARAVIND S MAHENDRA VARMA S
The Freight Management System is an online system developed for Voyager Freight Movers Ltd. at TeraBytes Software Solutions. This system solves almost all the limitations of the conventional system, and both the customer and the company benefit equally from it: the system saves a great deal of time and effort for both.
The system comprises three modules: Administrator, Staff and Customer. Only the administrator has the right to enter all the modules; he is the only person who has full control over the system. Other users can enter only their corresponding modules, that is, the customer module is for customers and the staff module is for staff.
The administrator module is the controlling part of the system. Through this module the administrator can alter the system, and search and edit the data. Only the administrator can enter this module.
The staff module is for the staff. Every staff member has an individual login id and password to enter the system. The staff at each checkpoint have to update the list of items that arrive there on a particular day. They also fill in the registration details when a customer comes to the office for booking.
The customer module is for the customer's service. A customer can enter the system online from any place and can sign up if he is going to book a cargo online. He is given a user name and password during registration, which he uses to enter the system. A booking id is also given to him at the time of registration; he can use this to know the status of the booked item.
The project is developed using ASP.NET as the front end and MS SQL Server 2000 as the back end.
1.1 ORGANIZATION PROFILE
Terabytes is a global provider of enterprise and technology software services, started in 2000 by a group of engineering and management professionals. Terabytes is an emerging software consultancy, commanding growing resourcefulness in providing a range of advanced software solutions. It has been operating in the software development arena for over 5 years and is one of the leading software companies in Kerala, India.
Terabytes works closely with its customers in understanding their businesses and information needs, and provides solutions that help them succeed in the marketplace. Customers prefer to work with Terabytes for the continuous value addition and the long-term relationship that endures. Terabytes has a 'best of breed' approach, a judicious mix of partnerships and internal competency centers, to deliver an end-to-end solution with a short turnaround time.
Today we are pioneers in IT solutions and BPO services. Terabytes is equipped with the industry's best software engineers and updated infrastructure. The company is engaged in Computer education, Software development, Consultancy, Website Development, Multimedia and CAD services. We are also into BPO services like HR outsourcing, CAD Vectorisation, medical billing and data entry works.
OUR VALUE ADDED FEATURES ARE:
• Quality Processes Consultation
• Usability evaluation of all products and applications
• Project monitoring through MS-Project
• Competency Centers
• Knowledge Repositories
1.2 ABOUT THE SYSTEM
The freight management system is an automated version of the manual freight management system. Compared to the conventional system, this is an online system, and both the customer and the company benefit equally from it. As far as the customer is concerned, there is no need to come to the office for booking: he can enter the system from anywhere, and he can also make payments using credit cards or online banking.
In the case of the company, it can save a lot of time, money and manpower. Almost all the work is computerized, so accuracy is maintained, and maintaining a backup is very easy: it can be done within a few minutes. An additional facility for tracking and knowing the status of a parcel is added, and the customer can make payments through DD or credit card.
The system has three levels of interaction or three modules.
> Administrator level
> Staff level
> Customer level
From the name itself we know that this is the administrator's part. Only the administrator is authorized to log in to it. If any change is needed in the system, he enters this level and makes the necessary changes; he is the only person authorized to alter the details in the database and other important areas of the system. He edits the updated details, adds any new checkpoint to the checkpoint list, and adds newly found routes to the route list.
This is for the staff working at the various checkpoints or branches. Staff log in using their user id and password, and enter the details of the customer and goods while booking. Updating the arrival list and other such jobs are done by the staff, and dispatch lists are also updated by staff.
A customer can log in to the site from anywhere. He can register online, and once registration is completed he is provided with a booking id and password. Using these he can log in and know the status of the consignment, make the payment through credit card or internet banking, and select the route according to his convenience.
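As a rough illustration of the tracking flow described above (staff record arrivals at checkpoints; the customer looks up status by booking id), the sketch below models the idea in Python rather than the project's ASP.NET. Every name here (checkpoint_log, record_checkpoint, track_status, the sample ids and places) is invented for illustration and is not taken from the actual system.

```python
# In-memory stand-in for the booking database; ids and checkpoints are invented.
checkpoint_log = {}

def record_checkpoint(booking_id, checkpoint, status):
    """Staff at a checkpoint append an update for a consignment."""
    checkpoint_log.setdefault(booking_id, []).append((checkpoint, status))

def track_status(booking_id):
    """Customer lookup: return the latest (checkpoint, status) entry, or None."""
    history = checkpoint_log.get(booking_id)
    return history[-1] if history else None
```

In the real system these records would live in SQL Server tables rather than a Python dictionary, but the lookup-by-booking-id pattern is the same.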
2. PROGRAM ENVIRONMENT
2.1 HARDWARE SPECIFICATION
Processor: Intel Pentium class, 600 megahertz (MHz); higher recommended
RAM: 128 MB (minimum); 256 MB (recommended)
Free hard disk space: 370 MB (minimum install); 600 MB (recommended)
Display: VGA (minimum); Super VGA (recommended)
CD-ROM drive: 24X (minimum); 52X (recommended)
2.2 SOFTWARE SPECIFICATION
OPERATING SYSTEM WINDOWS 2000 ADVANCED SERVER
CLIENT SIDE HTML
SERVER SIDE ASP.NET
BACK END MS SQL SERVER 2000
2.2.1 ABOUT OPERATING SYSTEM
WINDOWS 2000 ADVANCED SERVER
Windows 2000 Advanced Server is the most reliable operating system Microsoft has ever produced. Reliable systems start with reliable server software, and the Microsoft Windows 2000 Server family of operating systems shares a core set of architectural features aimed at ensuring continued reliability and availability.
Symmetric multiprocessing (SMP)
SMP lets software use multiple processors on a single server in order to improve performance, a concept known as hardware scaling, or scaling up. Any idle processor can be assigned any task, and up to 8 CPUs can be added to improve performance and handle increased loads.
Clustering
Clustering provides users with constant access to important server-based resources. Windows 2000 Advanced Server provides the system services for two-node server clustering: you create two cluster nodes that appear to users as one server, and if one of the nodes in the cluster fails, the other node begins to provide service in a process known as failover. Combined with the advanced SMP and large memory support in Windows 2000 Advanced Server, clustering helps keep critical applications continuously available.
Network load balancing
Another way to improve the availability of Windows 2000 systems is through the use of network load balancing. To handle large amounts of traffic more efficiently, network load balancing routes incoming requests to one of several different machines.
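The dispatching idea behind network load balancing can be shown with a toy round-robin dispatcher. The Python sketch below is only an analogy for the routing concept; Windows 2000's NLB actually distributes traffic at the network driver level, and the server names here are made up.

```python
from itertools import cycle

# Pool of identically configured web servers (names are illustrative).
servers = ["web1", "web2", "web3"]
_rotation = cycle(servers)

def route(request):
    """Hand each incoming request to the next server in rotation."""
    return next(_rotation)
```

Each request goes to a different machine in turn, so no single server absorbs all the traffic.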
Component load balancing
The newly released Microsoft Application Center 2000 will go beyond NLBS to include Component Load Balancing. With Component Load
Balancing, Windows 2000 can balance loads among different instances of the same COM+ component running on one or more machines that are running Application Center 2000. To add flexibility to distributed Web applications, you can use Component Load Balancing in conjunction with Network Load Balancing Services.
2.2.2 LANGUAGES AND TOOLS
.NET PROGRAMMING LANGUAGES
The .NET Framework provides a set of tools that help to build code that works with the .NET Framework. Microsoft provides a set of languages that are already .NET compatible; VB.NET and ASP.NET are among them.
ASP.NET provides a powerful server side control architecture. ASP.NET builds on the programming classes of the .NET Framework, providing a Web application model with a set of controls and infrastructure that make it simple to build ASP Web applications. ASP.NET includes a set of controls that encapsulate common HTML user interface elements, such as text boxes and drop-down menus. These controls run on the Web server, however, and push their user interface as HTML to the browser. On the server, the controls expose an object-oriented programming model that brings the richness of object-oriented programming to the Web developer. ASP.NET also provides infrastructure services, such as session state management and process recycling that further reduces the amount of code a developer must write and increase application reliability. Using XML Web services features, ASP.NET developers can write their business logic and use the ASP.NET infrastructure to deliver that service via SOAP.
To get great performance and remove the active scripting dependency, ASP.NET pages are compiled into assemblies (DLLs). When a page is first requested, ASP.NET compiles the page into an assembly. The assembly contains a single generated class that derives from the System.Web.UI.Page class; it contains all the code needed to generate the page, and is instantiated by the framework to process a request each time the .aspx page is requested. However, the compilation is only ever done once for each .aspx file. All subsequent requests for the page, even after IIS has been restarted, are satisfied by instantiating the generated class and asking it to render the page.
BENEFITS OF ASP.NET
• Make code cleaner
• Improve deployment, scalability, security, and reliability
• Provide better support for different browsers and devices
• Enable a new breed of web applications
ASP.NET is designed around the concept of server controls. This stems from fundamental changes in the philosophy for creating interactive pages, in particular the increasing power of servers and the ease of building multi-server web farms.
SERVER CONTROL HIERARCHY
The server controls are logically broken down into a set of families:
• HTML Server Controls: The server equivalents of the HTML controls. They create output that is the same as the definition of the control within the page, and they use the same attributes as the standard HTML element.
• Web Form Controls: A set of controls that are the equivalents of the normal HTML <form> controls, such as a textbox, a hyperlink, and various buttons. They have a standardized set of property names that make life easier at design time, and easier for graphical page creation tools to build the page.
• List Controls: These controls provide a range of ways to build lists. These lists can also be data bound; in other words, the content of the list can come from a data source such as an Array, a Hashtable, or a range of other data sources. The range of controls provides many different options, and some include special features for formatting the output and even editing the data in the list.
• Rich Controls: Produce rich content and encapsulate complex functionality, and will output pure HTML or HTML and script.
• Validation Controls: A set of special controls designed to make it easy to check and validate the values entered into other controls on a page. They perform the validation client side, server side, or both, depending on the type of client device that requests the page.
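The checks these validation controls perform (required fields, value ranges, and so on) can be sketched language-neutrally. The Python below is an illustrative analogue of the server-side half of that validation, not the ASP.NET control implementation; both function names are invented.

```python
def validate_required(value):
    """Required-field check: reject None and blank/whitespace-only input."""
    return value is not None and str(value).strip() != ""

def validate_range(value, low, high):
    """Range check: accept only values that parse as numbers within [low, high]."""
    try:
        return low <= float(value) <= high
    except (TypeError, ValueError):
        return False
```

ASP.NET additionally emits client-side script so the same rules can run in the browser before the form is ever posted back.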
ASP.NET SECURITY OPTIONS
ASP.NET provides a range of different options for implementing security and restricting user access in a web application. All these options are configured within the web.config file located in the root folder of the application.
ASP.NET itself provides three types of authentication and authorization, though the first of these options (Windows) relies on IIS to do all the work for us.
• Windows built-in authentication: IIS performs the initial authentication through Basic, Digest, or Integrated Windows authentication. The web.config file can specify the accounts that are valid for the whole or parts of the application.
• Passport-based authentication: This option uses a centralized Web-based authentication service provided by Microsoft, which offers single sign-on (SSO) and a core profile server for member sites.
• Forms-based authentication: Unauthenticated requests are automatically redirected to an HTML form page using HTTP client-side redirection. The client browser sends the cookie with all subsequent requests, and the user can access the application while they retain this cookie.
• Default (IIS) authentication: The default impersonation can still be used, but access control is limited to that specified within IIS. Resources are accessed under the context of the ASP.NET process account, or the IUSR account if impersonation is enabled.
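The forms-based flow described above (redirect unauthenticated requests to a login form, then honour a cookie on later requests) can be sketched as a toy model. This Python sketch is not ASP.NET's implementation: the user store, ticket format, and function names are all invented, and a real system would issue signed, expiring tickets rather than a plain string.

```python
# Invented credential store and in-memory session table (illustrative only).
USERS = {"alice": "secret"}
SESSIONS = {}

def login(username, password):
    """Issue an authentication ticket (the cookie value) on valid credentials."""
    if USERS.get(username) == password:
        ticket = "ticket-" + username
        SESSIONS[ticket] = username
        return ticket
    return None

def handle_request(path, cookie=None):
    """Serve the page if the cookie holds a valid ticket, else redirect to login."""
    if cookie in SESSIONS:
        return ("200 OK", path)
    return ("302 Found", "/login.aspx")
```

The redirect-then-cookie loop is the whole mechanism: one successful login turns every later request from a 302 into a 200.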
Web clients are used to communicate with a Web server, sending requests for files and receiving information from the server. The client interprets this information and displays it in a format that can be understood by the user, so it has to be able to translate all data received from the server and send back data entered by the user. With current Web standards, the Web client also sends back information about the system it is running on, so that the server may respond accordingly if it is configured to do so.
Web servers, just like other servers, perform the prime function of serving the network they are on. The service here is receiving requests for information and delivering it over a particular protocol. Essentially, a Web server is just a file server: the user requests a file and the web server provides it, if available. It is this software that manages user requests and serves files. It may also handle security, caching and server limits, and may even manage the server or a set of servers.
The .NET Framework includes a series of classes that implement a new data access technology specifically designed for use in the .NET world. The new framework classes provide a whole lot more than just a .NET version of ADO. While data management is often assumed to relate to relational data sources such as databases, there is extended support within .NET for working with Extensible Markup Language (XML) and its associated technologies. Traditional data access with ADO revolves around one fundamental data storage object, the Recordset. The .NET data access object model is based around two fundamental objects: the DataReader and the DataSet. The main differences are that a DataReader provides forward-only and read-only access to data, while the DataSet object can hold more than one table.
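The DataReader/DataSet distinction can be mimicked with almost any database API: either stream rows one at a time over an open connection, or pull the whole result into memory and then disconnect. The sketch below uses Python's built-in sqlite3 module to illustrate the two access patterns; the table and sample data are invented, and ADO.NET itself is not involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking (id TEXT, status TEXT)")
conn.executemany("INSERT INTO booking VALUES (?, ?)",
                 [("BK1", "booked"), ("BK2", "delivered")])

# DataReader-style: forward-only iteration over a live cursor.
reader_rows = []
for row in conn.execute("SELECT id, status FROM booking ORDER BY id"):
    reader_rows.append(row)

# DataSet-style: fetch everything into memory, then work disconnected.
dataset = conn.execute("SELECT id, status FROM booking ORDER BY id").fetchall()
conn.close()
```

The streaming form uses little memory but needs the connection open; the in-memory form (like a DataSet holding tables) survives after the connection is closed.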
THE ARCHITECTURAL MODELS
[Figure: the architectural models, showing the Presentation Layer and Data Access Layer arrangements in each phase]
Phase 1: Classic
In the classic model, note how all layers are held within the application itself. This architecture would be very awkward to maintain in a large-scale environment unless extreme care was taken to fully encapsulate or modularize the code. Because Phase 1 of the Duwamish Books sample focuses on a small retail operation, this type of design is perfectly acceptable. It's easy to develop and, in the limited environment of a single retail outlet, easy to maintain. In Phase 1 we deliver the basic functionality and documentation of the code and design issues.
Phase 2: Two-tier
Phase 2 moves to a two-tier design, as we break out the data access code into its own layer. By breaking out this layer, we make multiple-user access to the data much easier to work with. The developer does not have to worry about record locking, or shared data, because all data access is encapsulated and controlled within the new tier.
Phase 3: Logical three-tier and physical three-tier
The business rules layer contains not only rules that determine what to do with data, but also how and when to do it. For an application to become scalable, it is often necessary to split the business rules layer into two separate layers: the client-side business logic, which we call workflow, and the server-side business logic. Although we describe these layers as client and server-side, the actual physical implementations can vary. Generally, workflow rules govern user input and other processes on the client, while business logic controls the manipulation and flow of data on the server. Phase 3 of the Duwamish books sample breaks out the business logic into a COM component to create a logical three-tier application. Our second step in creating a three-tier application is to provide a physical implementation of the architecture. To distribute the application across a number of computers, we implement Microsoft Transaction Server in Phase 3.5. The application becomes easier to maintain and distribute, as a change to the business rules affects a smaller component, not the entire application. This involves some fairly lengthy analysis because the business rules in Phase 1 were deliberately not encapsulated.
Phase 4: A Web-based application
Phase 4 of the Duwamish Books sample is the culmination of the migration from a desktop model to a distributed n-tier model implemented as a Web application. In Phase 4, we offer three client types aimed at different browser types. We also break out the workflow logic from the client application. This logic is now implemented through a combination of ASP script, some client-side processing (depending on the client type), and a COM component. The Workflow component converts the ADO Recordsets it receives from the Business Logic Layer component into XML data, which is, in turn, converted into HTML for presentation. Phase 4 documents the benefits, architecture, and implementation issues relating to the migration of a three-tier application to a Web-based application.
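The layering these phases build toward can be shown in miniature: a data access function at the bottom, a business rule on top of it, and a presentation function that talks only to the business layer. The Python sketch below is purely illustrative; the function names and the one-row "database" are invented.

```python
# Data access layer: the only code that touches storage.
_DB = {"BK1": "in transit"}

def fetch_status(booking_id):
    return _DB.get(booking_id)

# Business logic layer: applies rules, never renders output.
def describe_status(booking_id):
    status = fetch_status(booking_id)
    return "unknown booking" if status is None else status

# Presentation layer: formats for the client, never touches storage.
def render(booking_id):
    return "<p>Booking %s: %s</p>" % (booking_id, describe_status(booking_id))
```

Because each layer depends only on the one beneath it, the storage or the presentation can be swapped out (the point of Phases 2 through 4) without rewriting the business rules.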
IIS (Internet Information Services)
Internet Information Services (IIS) makes it easy for you to publish information on the Internet or your intranet. IIS includes a broad range of administrative features for managing Web sites and your Web server. With programmatic features like Active Server Pages (ASP), you can create and deploy scalable, flexible Web applications.
The introduction of ASP.NET transformed IIS from being a mere server of static content to being a server of dynamic content. Prior to the introduction of ASP.NET, the main function of IIS was to serve static HTML pages: when someone requested a web page from a web site using IIS, the server would fetch a static HTML file from disk or memory and send it out to the browser. The primary responsibility of IIS was to act as an efficient interface between the browser and a bunch of files sitting on the web server's hard drive. IIS was no different from other web servers in this respect; a server receives requests for particular files and responds by sending the correct file, retrieving it from the drive or memory.
An application root is the starting point of an ASP.NET application and contains resources like the global.asax file and the \bin directory. The web server itself becomes active in the process of creating the web page. IIS adds search capability, e-mail functionality, an additional level of security, as well as site analysis features, and is considered a powerful and versatile web server.
XML is a markup language for documents containing structured information. Structured information contains both content (words, pictures, etc.) and some indication of what role that content plays (for example, content in a section heading has a different meaning from content in a footnote, which means something different than content in a figure caption or content in a database table, etc.). Almost all documents have some structure.
A markup language is a mechanism to identify structures in a document; the XML specification defines a standard way to add markup to documents. The number of applications currently being developed that are based on, or make use of, XML documents is truly amazing. For our purposes, the word "document" refers not only to traditional documents, like this one, but also to the myriad of other XML "data formats". These include vector graphics, e-commerce transactions, mathematical equations, object meta-data, server APIs, and a thousand other kinds of structured information.
In order to appreciate XML, it is important to understand why it was created. XML was created so that richly structured documents could be used over the web. The only viable alternatives, HTML and SGML, are not practical for this purpose.
The British scientist Tim Berners-Lee wrote the server and client software on his computer, and distribution began among his fellow scientists. Along with his colleagues, he grappled with the protocols and ended up with URLs (Uniform Resource Locators), HTTP (Hyper Text Transfer Protocol), and HTML.
HTML is a simple text-based language that uses a series of tags to create a document that can be viewed by a browser. This versatile language allows the creation of hypertext links, also known as hyperlinks. These hyperlinks can be used to connect documents on different machines, on the same network or on a different network, or can even point to a portion of text in the same document.
Web documents are written in Hyper Text Markup Language (HTML). After a designer specifies a document's structure using HTML, the designer can apply an HTML document specification called a Document Type Definition (DTD) to the document. The HTML DTD is a formal definition of the HTML syntax based on the Standard Generalized Markup Language (SGML). HTML documents are platform independent. To create an HTML document, designers embed tags (delimiters) and possibly character entity references into a text-based document to specify operations a browser will perform on the corresponding text.
Hypertext (which usually appears highlighted or underlined in a web document) lets users hyperlink to other documents or elsewhere within the same document. A hot zone is similar to hypertext, in that a user can click on the hot zone and jump to another document or to another location within the current document. A single image can have multiple independent hot zones. Tags also tell the browser to connect the user to another file or URL when he clicks an active hyperlink.
VBScript is a lightweight programming language that provides programming functionality based on the Visual Basic programming language. VBScript is natively executed in the Internet Explorer browser and can be executed in other browsers through plug-in technologies. VBScript is also the default scripting language for IIS 3.0 or later. The use of scripting languages is interesting because the script source code is actually embedded as text within the web page.
VBScript acts as both a client-side and a server-side programming language. A client-side programming language is interpreted and executed by the browser. As a server-side programming language, VBScript performs all its work on the web server before the page is sent to the browser.
MICROSOFT .NET FRAMEWORK
Microsoft designed VB.NET from the ground up to take advantage of its new .NET Framework. The .NET Framework is a multi-language environment for building, deploying, and running XML Web services and applications.
The .NET Framework was designed with three goals in mind. First, it was intended to make Windows applications much more reliable, while also providing applications with a greater degree of security. Second, it was intended to simplify the development of Web applications and services that work not only in the traditional sense but on mobile devices as well. Lastly, the framework was designed to provide a single set of libraries that would work with multiple languages. .NET has been designed with multiple platform support as a key feature.
THE FOUR COMPONENTS OF THE .NET FRAMEWORK
• Common Language Runtime
• Programming languages (C#, VC++, VB.NET, JScript.NET)
COMMON LANGUAGE RUNTIME
One of the design goals of the .NET Framework was to unify the runtime engines so that all developers could work with one set of runtime services. The .NET Framework's solution is called the Common Language Runtime (CLR). The CLR provides capabilities such as memory management, security, and robust error handling to any language that works with the .NET Framework. The CLR enables languages to interoperate with one another: memory can be allocated by code written in one language and freed by code written in another, and similarly, errors can be raised in one language and processed in another.
The CLR provides many core services for applications. The CLR can provide these services due to the way it manages code execution.
.NET FRAMEWORK CLASS LIBRARY
The .NET Framework provides many classes that help developers re-use code. The .NET Class Libraries contain code for programming topics such as threading, file I/O, database support, XML parsing, and data structures such as stacks and queues. This entire class library is available to any programming languages that support the .NET Framework. Because all languages now support the same runtime, they can re-use any class that works with the .NET Framework. This means that any functionality available to one language will also be available to any other .NET language.
2.2.3 FRONTEND AND BACK END
FRONT END: ASP.NET
ASP.NET is a web development platform that contains tools to make development easier and more powerful. These tools include on-demand compilation (known as just-in-time compilation), additional language support, and web forms and server controls. ASP.NET supports the object-oriented languages C# and VB.NET.
ASP.NET pages are compiled into .NET classes the first time a page is requested, and the compiled code is cached for subsequent page requests, leading to a huge improvement in performance; the ASP.NET runtime will automatically detect if any changes are made to the source code. This compiled code can be written in C#, VB or JScript. ASP.NET also solves the problem of browser dependencies, since it verifies each browser's version and capabilities before sending the requested page and the output.
ASP.NET brings structure back into programming by offering a code-behind page, which separates the client-side script and HTML from the server-side code.
Using Web Forms in ASP.NET and positioning controls such as text boxes and buttons is easy, and Visual Studio .NET will create the appropriate HTML code for the target browser that is selected. For instance, to be compatible with most browsers, Visual Studio .NET will create tables, and nested tables, to obtain the desired position of the controls. If the application only needs to be compatible with the latest versions of Internet Explorer, then Visual Studio .NET will position the controls using DHTML.
Compiled code:
ASP.NET solves the problem of running interpreted script by compiling the server-side code into IL (Intermediate Language). IL code is significantly faster than interpreted script.
Early binding:
ASP.NET also uses early binding when making calls to COM components, resulting in faster performance.
Security:
ASP.NET has an enhanced security infrastructure that can be quickly configured and programmed to authenticate and authorize Web site users.
Caching:
ASP.NET contains performance enhancements, such as page and data caching.
Diagnostics:
ASP.NET offers enhanced tracing and debugging options, which will save time.
.NET Framework:
Since ASP.NET uses the .NET Framework, it also inherits features of the .NET Framework, such as:
1) Automatic memory cleanup via garbage collection.
2) Cross-Language inheritance.
3) A large object-oriented base class library.
The .NET Framework inherently supports multiple languages, so we can use whichever we feel most comfortable with. By default the CLR (Common Language Runtime) comes with Visual Basic .NET, C#, and JScript .NET, and there are a number of third-party languages that we can use, such as Perl, COBOL and many others.
Additionally, Visual Studio .NET adds support for Visual C++, and an implementation of Java (called J#) is also available for download from Microsoft. Because the language support is part of the framework, it doesn't matter which language we use.
ADO.NET is the latest in a long line of data access technologies released by Microsoft. ADO.NET differs somewhat from previous technologies, however, in that it comes as part of a whole new platform called the .NET Framework. This platform is set to revolutionize every area of development, and ADO.NET is just one aspect of that. The foundation on which the .NET Framework is built is the Common Language Runtime (CLR), the execution environment that manages .NET code at runtime. The .NET Framework needs to be installed on any machine where .NET programs will be run. In order to achieve cross-language support, all .NET programs are compiled prior to deployment into a low-level language called Intermediate Language (IL); Microsoft's implementation of this language is called Microsoft Intermediate Language (MSIL). This IL code is then just-in-time compiled into native code at run time.
ASP.NET offers significant improvements over ASP in the areas of performance, state management, scalability, deployment, security, output cache control, web form support, XML web services and infrastructure. ASP.NET can run side by side with ASP on an Internet Information Services (IIS) web server without interference.
BACKEND: SQL SERVER 2000
Microsoft® SQL Server 2000 is a relational database management and analysis system for e-commerce, line-of-business, and data warehousing solutions. SQL Server 2000, the latest version, includes support for XML and HTTP, performance and availability features to partition load and ensure uptime, and advanced management and tuning functionality to automate routine tasks and lower total cost of ownership. The SQL Server 2000 software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. SQL Server 2000 is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed applications.
SQL Server and XML support
Extensible Markup Language (XML) is a markup language used to describe the content of a set of data and how the data should be output to a device or displayed in a web page. In a relational database such as Microsoft SQL Server 2000, all operations on the tables in the database produce a result in the form of a table. Web application programmers, on the other hand, are more familiar with working with hierarchical representations of data in XML or HTML documents. SQL Server 2000 introduces support for XML: the Microsoft SQL Server 2000 relational database engine natively supports Extensible Markup Language (XML).
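The gap described here, tabular results out of the database versus the hierarchical XML that web code prefers, is what SQL Server 2000's XML support (for example, its FOR XML queries) bridges. As an illustration of the transformation itself, the Python sketch below turns a list of result rows into an XML document; the element names and sample rows are invented, and this is not SQL Server's own mechanism.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

# Hypothetical result set: (booking id, status) rows from a relational query.
rows = [("BK1", "booked"), ("BK2", "delivered")]

root = Element("bookings")
for booking_id, status in rows:
    node = SubElement(root, "booking", id=booking_id)
    node.text = status

xml_text = tostring(root, encoding="unicode")
```

Each flat row becomes a nested element, which is the shape browsers and XML tooling expect to consume.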
FEATURES OF SQL SERVER 2000
• Internet Integration.
The SQL Server 2000 database engine includes integrated XML support. It also has the scalability, availability, and security features required to operate as the data storage component of the largest Web sites. The SQL Server 2000 programming model is integrated with the Windows DNA architecture for developing Web applications, and SQL Server 2000 supports features such as English Query and the Microsoft Search Service to incorporate user-friendly queries and powerful search capabilities in Web applications.
• Scalability and Availability.
The same database engine can be used across platforms ranging from laptop computers running Microsoft Windows® 98 through large, multiprocessor servers running Microsoft Windows 2000 Datacenter Server. SQL Server 2000 Enterprise Edition supports features such as federated servers, indexed views, and large memory support that allow it to scale to the performance levels required by the largest Web sites.
• Enterprise-Level Database Features.
The SQL Server 2000 relational database engine supports the features required by demanding data processing environments. The database engine protects data integrity while minimizing the overhead of managing thousands of users concurrently modifying the database. SQL Server 2000 distributed queries allow you to reference data from multiple sources as if it were part of a SQL Server 2000 database. Replication allows you to maintain multiple copies of data while ensuring that the separate copies remain synchronized. You can replicate a set of data to multiple, mobile, disconnected users, have them work autonomously, and then merge their modifications back to the publisher.
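The merge-replication idea in that last sentence — disconnected subscribers working autonomously, then reconciling their changes with the publisher — can be sketched as a toy last-writer-wins merge. This is illustrative only; SQL Server's actual conflict resolution is richer and configurable:

```python
def merge_changes(publisher, *subscriber_changes):
    """Toy last-writer-wins merge: the publisher holds {key: (value, timestamp)};
    each disconnected subscriber submits a list of (key, value, timestamp)
    changes, and the newest write per key wins."""
    merged = dict(publisher)
    for changes in subscriber_changes:
        for key, value, ts in changes:
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return merged
```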
• Ease of installation, deployment, and use.
SQL Server 2000 includes a set of administrative and development tools that improve upon the process of installing, deploying, managing, and using SQL Server across several sites. SQL Server 2000 also supports a standards-based programming model integrated with the Windows DNA, making the use of SQL Server databases and data warehouses a seamless part of building powerful and scalable systems.
• Data warehousing.
SQL Server 2000 includes tools for extracting and analyzing summary data for online analytical processing. SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.
3. SYSTEM STUDY & ANALYSIS
3.1. EXISTING SYSTEM
The existing system is a manual one that needs a lot of paperwork, which consumes time, money and human effort. Searching is difficult when records are processed manually. Recovery of data lost by accidental damage to stored papers is not possible in the present system, and taking hard-copy backups consumes extra time and money.
The existing system was subjected to close study and the problem areas were identified. The solutions were given as a proposal, which was weighed against the existing system analytically, the best option selected, and the proposal presented to the user for endorsement.

In the current system every step is done manually. The customers have to come to the office for booking and handing over the parcel, which takes a lot of effort and time. Each transaction has to be entered manually, and since there are a lot of transactions occurring daily, this is time consuming and generates a heavy workload. The shortest path for a consignment cannot be calculated: there is no provision for tracking the route for either customers or staff, and at the time of booking it is not easy to select manually the shortest path between the source and the destination that benefits the customer. The existing system also needs more employees for the work; it is a risky job to search a record for editing or other purposes, and updating records is difficult and time consuming. Once the customer has handed over the parcel, it is difficult to know where it is until it reaches the destination, and the customer has to wait for the acknowledgment from the destination branch to know about safe delivery. Finally, the customer has to pay the service charge by cash; under the current system there is no other means of payment.
3.1.1 DRAWBACKS OF EXISTING SYSTEM
The main drawback of the existing system is its manual environment, which leads to a heavy workload and many complexities. It requires more manpower, and there is no online facility for tracking or payment.
• Manual booking leads to a lot of paperwork.
• More manpower is required.
• The customer has to come to the office for booking.
• Selection of the shortest path is difficult, which leads to wastage of time and money.
• Tracking of goods is not possible.
• Credit card or online payment is not possible; only cash payment is accepted.
3.2 PROPOSED SYSTEM
The proposed system is a solution for the problems mentioned above. Almost all of the work is automated, so the manpower and workload are considerably reduced. Since it is an online system, the customer benefits equally along with the service provider. A customer who makes use of the facilities provided can save a lot of money and time: he need not come to the office to book and hand over the parcel, he can book and register using the online system, and he can make the payment through credit cards and other online banking systems.

During booking the customer is given a booking id, which he can use to log in to the website for the current status of the cargo. For an overseas shipment this is a very useful and cost-effective way to track the details. The customer can select the route using the GUI, which cuts cost and time. On the company side there are many benefits as well: all the manual work is computerized, so a lot of manpower and time is saved. Since the conventional system involves a lot of paperwork, it is risky and error prone; the proposed system is user friendly, transactions are recorded accurately, and searching and editing records is very easy.
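The report does not say which routing algorithm the system uses; a standard choice for "shortest path between source and destination" over a network of checkpoints is Dijkstra's algorithm. A minimal sketch, with invented checkpoint names and distances:

```python
import heapq

def shortest_route(graph, source, dest):
    """Dijkstra's algorithm: graph maps each checkpoint to {neighbour: distance}.
    Returns (total_distance, path) or None if dest is unreachable."""
    queue = [(0, source, [source])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == dest:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (dist + w, nbr, path + [nbr]))
    return None

# Invented checkpoint network (distances in km).
routes = {
    "TRVM": {"KOLLAM": 70, "ALEPPY": 150},
    "KOLLAM": {"ALEPPY": 85},
    "ALEPPY": {"ERNAKULAM": 60},
}
print(shortest_route(routes, "TRVM", "ERNAKULAM"))
```

Here the direct TRVM–ALEPPY leg (150 + 60 = 210 km) beats going via KOLLAM (70 + 85 + 60 = 215 km), so the cheaper route is returned.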
ADVANTAGES OF THE PROPOSED SYSTEM
The proposed system benefits the company and the customer equally. Since it is automated, the company can save manpower and time, accuracy can be maintained, and security is an added advantage. From the customer's point of view, he can use the online facility for booking, payment and checking status.

• Automated online system
• Manpower, money and time are saved
• Accuracy in transactions
• The customer can book from anywhere, according to his convenience
• Payment can be made online
• The customer can check the status online
• Searching and editing of records is made easy
4. SYSTEM DESIGN & DEVELOPMENT
The design phase focuses on the detailed implementation of the system recommended in the feasibility study. The design phase is a transition from a user oriented document to a document oriented to the programmers or database personnel. Systems design goes through two phases of development:
• Logical Design
The DFDs drawn so far are known as logical data flow diagrams. They specify the various logical processes performed on data, i.e., the types of operations performed. A logical DFD does not specify who performs the operations, whether they are done manually or by computer, or where they are done; a physical DFD specifies these.
• Physical Design
A physical DFD is easily drawn at the fact-gathering stage. A physical DFD is a good starting point for developing a logical DFD, and it is sometimes useful for depicting the physical movement of materials.
The data flow diagram shows the logical flow of a system and defines the boundaries of the system. For a candidate system, it describes the inputs (sources), outputs (destinations), database (files) and procedures (data flows), all in a format that meets the user's requirements.
4.1 INPUT DESIGN
Input design is the process of converting user-originated inputs to a computer-based format. Input data are collected and organized into groups of similar data. Inaccurate input data is the most common cause of data processing errors, and effective input design minimizes the errors made by data entry operators. The goal of designing input data is to make data entry as easy, logical and error-free as possible. In addition to general form considerations, such as collecting only required data and grouping similar or related data, input design requires consideration of the needs of the data entry operator. In entering data, an operator needs to know the following:
• The allocated space for each field.
• The field sequence, which must match that in the source document.
• The format in which data fields are entered.
• Access Details
The access screen contains the details of accessing a file from a client or from the other clients in the network. This screen includes the provision for selecting a file from any directory.
4.2 OUTPUT DESIGN
Computer output is the most important and direct source of information to the user. Efficient, intelligible output design improves the system's relationship with the user and helps in decision making. Outputs also provide a permanent hard copy of results for later consultation.
The various types of output required by the system are:
• External output, whose destination is outside the concern and which requires special attention.
• Internal output, whose destination is within the concern and which requires careful design, because it is the user's main interface with the computer.
• Operational output, whose use is purely within the computer department.
• Interactive output, which involves the user in communicating directly with the system.
4.3 DATABASE DESIGN
Database design and management isn't very difficult. People much wiser than we have designed some very orderly and sound rules to follow and developed these rules into what is called the Normalization Process. Using this process, you can create brand new, fully functional, finely tuned databases or take current database tables, run them through these steps, and come out with well-oiled tables ready to fly. However you use these steps, they are the fundamentals of quality database design.
Functional Dependence: Before we jump into the Normalization Process, we should take a step back and clear a few things up. First, this is not specific to any one type of database: these are rules that should be followed when using any database system, whether it is Oracle, MySQL, PostgreSQL, SQL Server, etc. Let us first discuss functional dependence, which is crucial to understanding the Normalization Process. It is merely a big term for a relatively simple idea.

Definition: A column B is functionally dependent on another column A if a value of A determines a single value of B at any one time. To determine functional dependency, think of it like this: given a value for field A, can you determine the single value for B? If B relies on A, then A is said to functionally determine B.
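The definition above can be checked mechanically over sample rows: given a value of column A, there must never be two different values of column B. A small sketch (field names are borrowed from the booking tables; the data is invented):

```python
def functionally_determines(rows, a, b):
    """Column `a` functionally determines column `b` in these sample rows
    if every value of `a` maps to exactly one value of `b`."""
    seen = {}
    for row in rows:
        if row[a] in seen and seen[row[a]] != row[b]:
            return False
        seen[row[a]] = row[b]
    return True

# Invented sample rows.
sample = [
    {"BID": 1, "ORIGIN": "TRVM", "DEST": "ERNAKULAM"},
    {"BID": 2, "ORIGIN": "TRVM", "DEST": "KOLLAM"},
]
```

Here BID functionally determines ORIGIN, but ORIGIN does not determine DEST, since the same origin appears with two destinations.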
On Keys: Now that we know what functional dependence is, we can clarify keys. If you are working with databases, you probably already know what primary keys are. But can you define them?
Definition: Column A is the primary key for table T if:
1. All columns in T are functionally dependent on A.
2. No sub-collection of the columns in table T also has property 1.
This makes perfect sense. If all the fields in a table are dependent on one and only one field, then that field is the key. Occasionally property 2 is broken, and two fields are candidates for the primary key. Such keys are called candidate keys; from the candidate keys, one key is chosen and the others are called alternate keys.
First Normal Form: Now that we have clarified these concepts, let's move into the Normalization Process. The First Normal Form (1NF) is defined as a table that does not contain repeating groups.
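As an illustration of the 1NF step, a booking record holding a repeating group of items can be split into a parent record plus a child table keyed by the booking id (the field names here are illustrative, not taken from the actual schema):

```python
def to_first_normal_form(record):
    """Split the repeating ITEMS group into its own child table keyed by BID,
    so that neither table contains a repeating group (illustrative fields)."""
    parent = {k: v for k, v in record.items() if k != "ITEMS"}
    children = [{"BID": record["BID"], "ITCODE": code, "NOP": pieces}
                for code, pieces in record["ITEMS"]]
    return parent, children

# Un-normalized record: a repeating group of (item code, number of pieces).
booking = {"BID": 1, "CUSTOMER": "aravind", "ITEMS": [("IT01", 2), ("IT02", 5)]}
```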
Second Normal Form: First, let's just say this: a table is automatically in 2NF if its primary key contains only one column. That was easy, wasn't it? But if your primary key has more than one column, read on.
Third Normal Form: A table is in 3NF if it complies with 2NF (and, of course, 1NF) and the only determinants it contains are candidate keys. This does, of course, include the primary key.
Fourth Normal Form: Finally, we are up to the big one. 4NF is the father of the forms, as it takes care of any problem that may occur. Let's start this time by defining a very important term, multivalued dependency (MVD): field B is multidependent on field A if each value of A is associated with a specific list of values for B, and this list is independent of the values of any other field C.
Before you sit down to design the database, gather all the information you want to include in the database. I mean everything. Go around to each department of the company (or just write it out yourself if this is just for you) and find out what everyone wants in the database. Once you have everything, bring it back, and create one huge table.
From there, break that table down into 1NF, then 2NF, and so on. Go back over each table, make sure they all work together, and check that they are all 4NF tables. If they aren't, you can be assured the tables will suffer problems in the future.
Quality is in the design. And for those who know, this helps comply with Codd's first two rules for a truly relational database system.
4.4 CODE DESIGN
The purpose of coding is to express the program logic in the best possible way and to check it. The main reasons for coding are:
1. Unique identification. Each item in a system should be identified uniquely.
2. Cross-referencing. Diverse activities in an organization give rise to transactions in different subsystems but affect the same item.
1 / UJIC^l l\fCjJUl I uu
limine 1 i vigni Jtuc/uitg oysiurn
3. Efficient storage. A code is a concise representation that reduces data entry time and improves reliability. A code used as a key reduces the storage space required for the data, and retrieval based on a key search is faster in a computer.
Requirements of a coding scheme.
The number of digits or characters used in a code must be minimal, to reduce the storage space of the code and to improve retrieval efficiency. The code should be expandable; that is, it must allow new items to be added easily. It should also be meaningful, conveying to a user some information about the characteristics of the item to enable quick recognition and identification.
Types of codes.
1. Serial numbers. This method is concise, precise and expandable. It is, however, not meaningful.
2. Block codes. Block codes use blocks of serial numbers. This code is expandable and more meaningful than serial-number coding. It is precise but not comprehensive.
3. Group classification codes. This is an improvement on the block code and is more meaningful.
Code efficiency: It is often said that the readability of a program is much more important than the intricacies of its code. Steps that can be taken at the coding stage include:
• Use of meaningful data names
• Inclusion of commentary
• Layout of code
• Avoidance of tricks (straightforward code)
• Detection of errors
A code that is designed to detect the two common types of errors, namely single-transcription and transposition errors, is reasonably good. Such a code is called a modulus-11 code: in this scheme a set of codes is transformed into another set of codes with an error-detecting property by appending a check digit.
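A modulus-11 check digit is computed by weighting the digits 2, 3, 4, ... from the right, summing, and appending the complement of the sum modulo 11 (a result of 10 is conventionally written as 'X'). A sketch of how such a code catches the transcription and transposition errors mentioned above:

```python
def mod11_check_digit(digits):
    """Weight digits 2, 3, 4, ... from the right, sum, and return the check
    digit (11 - sum mod 11) mod 11; a result of 10 is written as 'X'."""
    total = sum(w * int(d)
                for w, d in zip(range(2, 2 + len(digits)), reversed(digits)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def is_valid(code):
    """The last character of `code` is the check digit for the rest."""
    return mod11_check_digit(code[:-1]) == code[-1]
```

Swapping two adjacent digits of a valid code changes the weighted sum, so validation fails — exactly the transposition error the text describes.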
5. SYSTEM TESTING & IMPLEMENTATION
5.1 TESTING OBJECTIVES
Software testing is a critical element of quality assurance and represents the ultimate review of specification, design and coding. Testing presents an interesting anomaly for software: during the earlier definition and development phases, the attempt was to build the software from an abstract concept into a tangible implementation.
The main objectives of testing are: -
Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet-undiscovered error, and a successful test is one that uncovers such an error. These objectives imply a dramatic change in viewpoint: they move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort.
If testing is conducted successfully it will uncover errors in the software. As a secondary benefit, testing demonstrates that the software functions appear to be working according to specification and that performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But there is one thing that testing cannot do: it can show the presence of errors, never their absence.
5.2 SYSTEM TESTING
For any software that is newly developed, primary importance is given to testing of the system. It is the last opportunity for the developer to detect possible errors in the software before handing it over to the customer. Testing is the process by which the developer generates a set of test data that gives the maximum probability of finding all types of errors that can occur in the software. The various steps of testing the system are listed below.
• Running the program to identify any errors (whether syntactic or semantic) that might have occurred while feeding the program into the system.
• Applying the screen formats to regular users to gauge the extent to which the screens are comprehensible to the user.
• Obtaining the results/responses from users and analyzing them for improvement.
• Checking data accessibility from the data server and whether any improvements are needed.
The following ideas should be part of any testing plan:
1. Preventive measures
2. Spot checks
3. Testing all parts of the program
4. Test data
5. Looking for trouble
6. Time for testing
The data was entered in all forms separately, and whenever an error occurred it was corrected immediately. A quality team deputed by the management verified all the necessary documents and tested the software while entering data at all levels.
The entire testing process can be divided into three phases
1. Unit testing
2. Integrated Testing
3. Final/System testing
1. Unit Testing
As this system is a partially GUI-based web application, the following were tested in this phase:
1. Tab Order
2. Reverse Tab Order
3. Field Length
4. Front end Validations
In our system, unit testing was handled successfully. Test data was given to each and every module in all respects, and the desired output was obtained. Each module has been tested and found to be working properly.
2. Integration Testing
Test data should be prepared carefully, since the test data alone determines the efficiency and accuracy of the system. Artificial data are prepared solely for testing. Every program validates the input data.
3. Validation Testing
In this, all the code modules were tested individually one after the other. The following were tested in all modules
1. Loop Testing
2. Boundary Value analysis
3. Equivalence Partitioning Testing
In our case, all the modules were combined and given the test data. The combined modules work successfully without any side effects on other programs, and everything was found to be working fine.
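As an illustration of the boundary value analysis and equivalence partitioning mentioned above, consider a hypothetical front-end rule that parcel weight must lie between 1 and 100 kg (limits invented purely for this example). Boundary value analysis picks values at and just beyond each edge; equivalence partitioning picks one representative per class:

```python
def weight_is_valid(kg):
    """Hypothetical validation rule: parcels must weigh 1-100 kg inclusive
    (limits invented purely for this example)."""
    return 1 <= kg <= 100

# Boundary values sit at and just beyond each edge; 50 represents the
# valid equivalence class, 0 and 101 the two invalid classes.
cases = {0: False, 1: True, 50: True, 100: True, 101: False}
for value, expected in cases.items():
    assert weight_is_valid(value) is expected
```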
4. Output Testing
This is the final step in testing, in which the entire system was tested as a whole with all forms, code, modules and class modules. This form of testing is popularly known as black-box testing or system testing.
Black-box testing methods focus on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, and errors in data structures or external database access.

Maintenance
The first maintenance activity occurs because it is unreasonable to assume that software testing will uncover all the errors in a large software system. The process that includes the diagnosis and correction of one or more errors is called corrective maintenance.
The second activity that contributes to a definition of maintenance occurs because rapid change is encountered in every aspect of computing. Adaptive maintenance therefore modifies software so that it properly interfaces with its changing environment.
The third activity involves recommendations for new capabilities, modifications to existing functions and general enhancements when the software is used. To satisfy such requests, perfective maintenance is performed.
The fourth maintenance activity occurs when software is changed to improve future maintainability or reliability; this is called preventive maintenance.
A computer system is secure if neither its ability to attain its objectives nor its ability to survive can be adversely affected by unwanted events. Computer-based security is a combination of many assets or resources designed to perform some function or to provide some service.
In this system, which is web based, several measures have been taken to provide security. Loss of confidentiality is reduced to a great extent. The facility to impose strict authorization is completely vested in the hands of the system administrator, who has full authority to add users to and delete users from the system. Only valid users can enter the system: they have to provide a valid user id and password to prove that they are valid users, and if either of these is wrong, access to the system is denied. A forced change of password can be imposed after a period specified by the system administrator, and passwords are made to contain alphanumeric characters. Preventive measures can also be taken against unauthorized persons trying to enter the system: for example, after four consecutive failed attempts, provision should be made for the process to terminate and exit from the program.
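The four-failed-attempts lockout described above can be sketched as follows (the in-memory user store and the return codes are illustrative; a real implementation would also hash passwords and persist the counters):

```python
class LoginGuard:
    """Lockout rule: after four consecutive failed attempts the account
    is locked (user store and return codes are illustrative)."""
    MAX_FAILURES = 4

    def __init__(self, users):
        self.users = users      # username -> password
        self.failures = {}      # username -> consecutive failure count

    def attempt(self, username, password):
        if self.failures.get(username, 0) >= self.MAX_FAILURES:
            return "locked"
        if self.users.get(username) == password:
            self.failures[username] = 0   # success resets the counter
            return "ok"
        self.failures[username] = self.failures.get(username, 0) + 1
        return ("locked" if self.failures[username] >= self.MAX_FAILURES
                else "denied")
```

Note that a successful login resets the counter, so only *consecutive* failures trigger the lock, matching the rule stated above.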
Education and User Training
The purpose of training is to ensure that all personnel who are associated with the system possess the necessary knowledge and skills. The end users must know in detail what their roles will be, how they can use the system, and what the system will or will not do. Before the training program begins, materials are prepared; the reference manuals are based mainly on the system specification. Both the system operators and the users need training.
Software maintenance is the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment. Maintenance covers a wide range of activities, including correcting coding and design errors, updating documentation and test data, and upgrading hardware and software. Maintenance is always necessary to keep the software usable and useful; hardware also requires periodic maintenance to keep the system up to standard. Software maintenance activities can be classified into:
• Corrective Maintenance
• Adaptive Maintenance
• Perfective Maintenance
Corrective maintenance removes software faults. Corrective maintenance should be the overriding priority of the software maintenance team.
Perfective maintenance involves recommendations for new capabilities, modifications to existing functions and general enhancements when the software is used. To satisfy such requests, perfective maintenance is performed.
Adaptive maintenance modifies the software to keep it up to date with its environment. Adaptive maintenance may be needed because of changes in the user requirements, changes in the target platform, or changes in external interfaces. Minor adaptive changes may be handled by the normal maintenance process; major adaptive changes should be carried out as a separate development project.
The quality of an information system depends on its design, development, testing and implementation. One aspect of system quality is its reliability: a system is reliable if it does not produce failures. Although it is virtually impossible to develop software that can be proven to be error free, software developers strive to prevent the occurrence of errors, using methods and techniques that include error detection, error correction and error tolerance. Both strategies are useful for keeping the system operating and preventing failures. Unlike hardware, where there can be manufacturing and equipment failures, software failures are the result of design errors that were introduced when the specifications were formulated and the software was written.
In error avoidance, developers and programmers make every attempt to prevent errors from occurring at all. The methods and techniques used in the analysis and design phases are aimed at meeting this objective, and the emphasis on early and careful identification of user requirements is another way this objective is pursued. Still, analysts must assume that it is impossible to fully achieve it.
Error Detection and Correction
This method uses design features that detect errors and make the necessary changes to correct either the error while the program is in use, or its effect on the user, so that a failure does not occur. Even though an error may not manifest itself for several years after the system is installed, the error is there from the day of development; the failure occurs later.
6. ANNEXURE A
6.1 DATAFLOW DIAGRAM
Level 0: (diagram not reproduced)

6.2 TABLE DESIGN
FIELD NAME   DATA TYPE  DESCRIPTION
BID          NUMERIC    BOOKING ID
CURRENTCHK   VARCHAR    CURRENT CHECKPOINT
NEXTCHK      VARCHAR    NEXT CHECKPOINT
REMARK       VARCHAR    REMARK
DELIVERED    VARCHAR    DELIVERED
DT           VARCHAR    DATE
FIELD NAME   DATA TYPE  DESCRIPTION
ORIGIN       VARCHAR    ORIGIN
DEST         VARCHAR    DESTINATION
TRANSIT      VARCHAR    TRANSIT
TRANSPORT    VARCHAR    TRANSPORT
DELIVERY     VARCHAR    DELIVERY
AMOUNT       VARCHAR    AMOUNT
APPTIME      VARCHAR    APPROXIMATE TIME
NAME         VARCHAR    ROUTE NAME
CHECK        VARCHAR    CHECKPOINT
MINRATE      VARCHAR    MINIMUM RATE
DISTANCE     VARCHAR    DISTANCE
RID          NUMERIC    ROUTE ID
FIELD NAME   DATA TYPE  DESCRIPTION
UNAME        VARCHAR    USERNAME
PASS         VARCHAR    PASSWORD
LNAME        CHAR       LAST NAME
PLACE        VARCHAR    LOCATION
ADDRESS      TEXT       ADDRESS
PIN          TEXT       PIN CODE
DOB          VARCHAR    DATE OF BIRTH
E-MAIL       VARCHAR    E-MAIL
PHONE        TEXT       PHONE
FAX          TEXT       FAX
ID           NUMERIC    USER ID
FIELD NAME   DATA TYPE  DESCRIPTION
RNAME        VARCHAR    RECEIVER NAME
ADDR         VARCHAR    RECEIVER ADDRESS
PHONE        VARCHAR    PHONE
E-MAIL       VARCHAR    E-MAIL
FAX          VARCHAR    FAX
PMODE        VARCHAR    PAYMENT MODE
EXDATE       VARCHAR    EXPECTED DATE
ITCODE       VARCHAR    ITEM CODE
ITNAME       VARCHAR    ITEM NAME
NOP          VARCHAR    NUMBER OF PIECES
WEIGHT       VARCHAR    WEIGHT
TRATE        VARCHAR    TOTAL RATE
ROUTE        VARCHAR    ROUTE
ID           NUMERIC    ID
BID          VARCHAR    BOOKING ID
ORIGIN       VARCHAR    ORIGIN
FIELD NAME   DATA TYPE  DESCRIPTION
CNAME        VARCHAR    COMPANY NAME
CADDRESS     TEXT       COMPANY ADDRESS
PIN          TEXT       PIN CODE
PHONE        TEXT       PHONE
FAX          TEXT       FAX
EMAIL        VARCHAR    EMAIL
ID           NUMERIC    COMPANY ID
FIELD NAME   DATA TYPE  DESCRIPTION
NAME         VARCHAR    CHECKPOINT NAME
PLACE        VARCHAR    PLACE
ADDRESS      VARCHAR    CHECKPOINT ADDRESS
PIN          VARCHAR    PIN CODE
PHONE        VARCHAR    PHONE
E-MAIL       VARCHAR    E-MAIL
FAX          VARCHAR    FAX
LOGIN        VARCHAR    LOGIN
PASS         VARCHAR    PASSWORD
LAT          VARCHAR    LATITUDE
LON          VARCHAR    LONGITUDE
ID           NUMERIC    CHECKPOINT ID
FIELD NAME   DATA TYPE  DESCRIPTION
NAME         VARCHAR    BANNED PRODUCT
REMARKS      VARCHAR    REMARKS
ID           NUMERIC    PRODUCT ID
FIELD NAME   DATA TYPE  DESCRIPTION
UNAME        VARCHAR    USER NAME
PASS         CHAR       PASSWORD
ID           NUMERIC    USER ID
[Screenshot: home page, with menu items Home, Route Analyser, New Booking, Service Info, and Booking Status.]

Voyager: Established in 1995, Voyager Freight Company Pvt Ltd. is today a diversified transport company, with offices and agents strategically located around the world. With a managerial base in Cochin and Trivandrum, Voyager is organised on a functional basis.

FREIGHT FORWARDING: Voyager has a strong global network of partners and agents that support its freight forwarding services.

AIR FREIGHT: Voyager offers safe, cost-effective, customer-driven operations to a diverse range of clients.

SHIPS AGENCY: Voyager was initially set up as a ship's agency and, since its inception in 1977, has built a solid reputation providing a wide range of services.
[Screenshots of the application follow: the Customer Registration form (user name, password, address, date of booking, e-mail, phone); the administrator menu (Home, Check Point Entry, Route Entry, Banned Products, Company Details, Report, Account Setting) with account-setting fields for user name and password change; the Check Point Entry form (booking number, source, destination, route, expected and actual arrival dates, remarks); the Banned Products entry and view screens (product name, e.g. "liquor", with reason such as "prohibited legally"); the Company Details entry and view screens (company name, address, phone, fax, e-mail); the Route Entry form (origin, destination, transit mode, transport type, rate in Rs., approximate time in days, route name, minimum rate, total distance in km, checkpoint distances); the Route Analyser screen showing available routes and the checkpoints in a route; the booking status lookup by booking id; and the booking report (source, booking id, name, booking date).]
We can conclude that this project enables easy decision making with the support of correct and precise information, and accurate and timely reports, since there is less chance for errors in transactions. The new system can lead to an increase in transactions (more billing and more booking) as computerization speeds up operations, and hence to an increase in revenue, while preventing loss of revenue. Management authorities can collect reports from the software at any point of time; they need not worry about delays in reports from finance and other departments. Computers can handle large volumes of data without any frustration.

As the software is totally integrated, there is no question of increased data entry problems. And as the software comes with a very flexible user interface, users will feel very comfortable with the new environment.
12. SCOPE FOR FUTURE ENHANCEMENTS
The current application has been developed in accordance with the requirements provided by the organization. Regarding future enhancement, the application can be expanded further in line with the changing scenario of web-based applications, which need frequent changes as the environment changes and the organization expands. Since technology and user needs change frequently over short intervals, the application can be upgraded to meet requirements that may arise in the near or far future. As new needs arise, more and more features can be included by adding them as separate modules and integrating them with the existing system.
The .NET technology itself is based on OOP concepts, whose main advantage is modularity; this helps in adding future needs as add-on modules that work with the main system, which can be done effortlessly instead of rewriting or modifying the entire application. So the scope for future enhancement is absolutely clear with the concept that is incorporated in the technology.