Microsoft Windows Distributed Internet Application Architecture
ABSTRACT
Microsoft Windows Distributed interNet Applications Architecture (Windows DNA) is the application development model for the Windows platform. Windows DNA specifies how to: develop robust, scalable, distributed applications using the Windows platform; extend existing data and external applications to support the Internet; and support a wide range of client devices maximizing the reach of an application. Developers are free from the burden of building or assembling the required infrastructure for distributed applications and can focus on delivering business solutions.
Windows DNA addresses requirements at all tiers of modern distributed applications: presentation, business logic, and data. Like the familiar PC environment, Windows DNA enables developers to build tightly integrated applications by accessing a rich set of application services in the Windows platform using a wide range of familiar tools. These services are exposed in a unified way through the Component Object Model (COM). Windows DNA provides customers with a roadmap for creating successful solutions that build on their existing computing investments and will take them into the future. Using Windows DNA, any developer will be able to build or extend existing applications to combine the power and richness of the PC, the robustness of client/server computing, and the universal reach and global communications capabilities of the Internet.

INTRODUCTION
For some time now, both small and large companies have been building robust applications for personal computers that continue to be ever more powerful and available at increasingly lower costs. While these applications are used by millions of users each day, new forces are having a profound effect on the way software developers build applications today and the platforms on which they develop and deploy them.
The increased presence of Internet technologies is enabling global sharing of information, not only by small and large businesses but by individuals as well. The Internet has sparked a new creativity in many, resulting in many new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are creating ever-increasing demand for an application platform that enables developers to build and rapidly deploy highly adaptive applications in order to gain strategic advantage.
These new Internet applications may need to handle literally millions of users, a scale difficult to imagine just a few years ago. As a result, applications must be able to deal with user volumes of this scale, be reliable enough to operate 24 hours a day, and be flexible enough to meet changing business needs. The application platform that underlies these types of applications must also provide a coherent application model along with a set of infrastructure and prebuilt services for enabling development and management of these new applications.
Introducing Windows DNA: Framework for a New Generation of Computing Solutions
Today, the convergence of Internet and Windows computing technologies promises exciting new opportunities for savvy businesses: to create a new generation of computing solutions that dramatically improve the responsiveness of the organization, to more effectively use the Internet and the Web to reach customers directly, and to better connect people to information any time or any place. When a technology system delivers these results, it is called a Digital Nervous System. A Digital Nervous System relies on connected PCs and integrated software to make the flow of information rapid and accurate. It helps everyone act faster and make more informed decisions. It prepares companies to react to unplanned events. It lets people focus on business, not technology.
Creating a true Digital Nervous System takes commitment, time, and imagination. It is not something every company will have the determination to do. But those who do will have a distinct advantage over those who don't. In creating a Digital Nervous System, organizations face many challenges: How can they take advantage of new Internet technologies while preserving existing investments in people, applications, and data? How can they build modern, scalable computing solutions that are dynamic and flexible to change? How can they lower the overall cost of computing while making complex computing environments work?
Understanding the Microsoft Windows DNA Architecture
Microsoft President Steve Ballmer caught the attention of industry observers today by introducing Windows DNA for Manufacturing, a technical architecture designed to bring software integration to manufacturing environments. Earlier this month, a new Windows DNA Lab opened near Washington, D.C. -- the third such facility in the United States to spring up as a resource for companies building solutions on Windows DNA.
Clearly, Windows DNA is gaining a strong following. But as with any new industry trend, it raises an obvious question: What exactly does this architecture have to offer? More important, what does it mean to the people it's designed to affect? Jigish Avalani, group manager of Windows DNA marketing at Microsoft, explains that Windows DNA refers to the Windows Distributed interNet Application architecture, launched by Microsoft in fall of 1997.
"Windows DNA is essentially a 'blueprint' that enables corporate developers and independent software vendors (ISVs) to design and build distributed business applications using technologies that are inherent to the Windows platform," Avalani says. "It consists of a conceptual model and a series of guidelines to help developers make the right choices when creating new software applications."
Applications based on Windows DNA will be deployed primarily by businesses, from small companies to large enterprise organizations. Consumers are likely to use many of the applications built to take advantage of Windows DNA, such as electronic commerce Web sites and online banking applications.
A major force driving the need for Windows DNA is the Internet, which has dramatically changed the computing landscape. Five years ago, the process of developing programs used by one person on one computer was relatively straightforward. By contrast, some of today's most powerful applications support thousands of simultaneous users, need to run 24 hours a day, and must be accessible from a wide variety of devices -- from handheld computers to high-performance workstations. To meet these demanding requirements, application developers need adequate planning tools and guidance on how to incorporate the appropriate technologies. The Windows DNA architecture addresses this need.
Microsoft Windows DNA
Microsoft Windows Distributed interNet Applications Architecture (Windows DNA) is Microsoft's framework for building a new generation of highly adaptable business solutions that enable companies to fully exploit the benefits of the Digital Nervous System. Windows DNA is the first application architecture to fully embrace and integrate the Internet, client/server, and PC models of computing for a new class of distributed computing solutions. Using the Windows DNA model, customers can build modern, scalable, multitier business applications that can be delivered over any network. Windows DNA applications can improve the flow of information within and without the organization, are dynamic and flexible to change as business needs evolve, and can be easily integrated with existing systems and data. Because Windows DNA applications leverage deeply integrated Windows platform services that work together, organizations can focus on delivering business solutions rather than on being systems integrators. See Figure 1.

Figure 1. Windows DNA tools and system services
Guiding Principles of Windows DNA
The Microsoft application platform consists of a multitier distributed application model called Windows DNA (Figure 1) and a comprehensive set of infrastructure and application services. Windows DNA unifies the best of the services available on personal computers, application servers, and mainframes today; the benefits inherent in client/server computing; and the best of Internet technologies around a common, component-based application architecture.
The following principles guided Microsoft in developing the Windows DNA architecture:
• Web computing without compromise.
Organizations want to create solutions that fully exploit the global reach and "on demand" communication capabilities of the Internet, while empowering end users with the flexibility and control of today's PC applications. In short, they want to take advantage of the Internet without compromising their ability to exploit advances in PC technology.
• Interoperability.
Organizations want the new applications they build to work with their existing applications and to extend those applications with new functionality. They require solutions that adhere to open protocols and standards so that other vendor solutions can be integrated. They reject approaches that force them to rewrite the legions of applications still in active use today and the thousands still under development.
• True integration.
In order for organizations to successfully deploy truly scalable and manageable distributed applications, key capabilities such as security, management, transaction monitoring, component services, and directory services need to be developed, tested, and delivered as integral features of the underlying platform. In many other platforms, these critical services are provided as piecemeal, non-integrated offerings often from different vendors, which forces IT professionals to function as system integrators.
• Lower cost of ownership.
Organizations want to provide their customers with applications that are easier to deploy and manage, and easier to change and evolve over time. They require solutions that do not involve intensive effort and massive resources to deploy into a working environment, and that reduce their cost of ownership both on the desktop and server administration side.
• Faster time to market.
Organizations want to be able to achieve all of the above while meeting tight application delivery schedules, using mainstream development tools, and without need for massive re-education or a "paradigm shift" in the way they build software. Expose services and functionality through the underlying "plumbing" to reduce the amount of code developers must write.
• Reduced complexity.
Integrate key services directly into the operating system and expose them in a unified way through COM. Reduce the need for information technology (IT) professionals to function as system integrators so they can focus on solving the business problem.

• Language, tool, and hardware independence.
Provide a language-neutral component model so developers can use task-appropriate tools. Build on the PC model of computing, wherein customers can deploy solutions on widely available hardware.

MICROSOFT WINDOWS DISTRIBUTED
INTERNET APPLICATIONS
Windows DNA: Building Windows Applications for the Internet Age
Windows DNA Technologies
The heart of Windows DNA is the integration of Web and client/server application development models through the Component Object Model (COM). Windows DNA services are exposed in a unified way through COM for applications to use. These services include component management, Dynamic HTML, Web browser and server, scripting, transactions, message queuing, security, directory, database and data access, systems management, and user interface.
Windows DNA fully embraces an open approach to Web computing. It builds on the many important standards efforts approved by bodies such as the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). Adhering to open protocols and published interfaces makes it easy to integrate other vendor solutions and provides broad interoperability with existing systems.
Because Windows DNA is based on COM and open Internet standards, developers can use any language or tool to create compatible applications. COM provides a modern, language-independent object model that provides a standard way for applications to interoperate at all tiers of the architecture. Through COM, developers can extend any part of the application via pluggable software components that can be written in C++, Visual Basic, Java, or other languages. Because of this open approach, Windows DNA supports a broad range of development tools today, including tools from Microsoft, Borland, Powersoft, and many other vendors.

Microsoft developed the Windows Distributed interNet Application Architecture (Windows DNA) as a way to fully integrate the Web with the n-tier model of development. Windows DNA defines a framework for delivering solutions that meet the demanding requirements of corporate computing, the Internet, intranets, and global electronic commerce, while reducing overall development and deployment costs.
Windows DNA architecture employs standard Windows-based services to address the requirements of each tier in the multitier solution: user interface and navigation, business logic, and data storage. The services used in Windows DNA, which are integrated through the Component Object Model (COM), include:
• Dynamic HTML (DHTML)
• Active Server Pages (ASP)
• COM components
• Component Services
• Active Directory Services
• Windows® security services
• Microsoft® Message Queuing
• Microsoft Data Access Components
Microsoft built Windows DNA using open protocols and public interfaces, making it easy for organizations to integrate third-party products. In addition, by supporting industry-defined standards for Internet computing, Windows DNA will make it easier for developers to respond to technology changes. Some of the technologies recently added to Windows DNA are outlined in the sections below and illustrated in the following diagram.

Figure 2. Technologies added to Windows DNA
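The multitier separation described above (user interface, business logic, data storage) can be illustrated with a deliberately minimal sketch: the presentation function talks only to the business-logic function, which is the only tier that touches the data store. All names and data below are invented for illustration; in a real Windows DNA solution these tiers would be ASP pages, COM components, and a database rather than Python functions.

```python
# Minimal sketch of the three Windows DNA tiers in one process.
# Data tier: a stand-in for the central database.
_customers = {1: {"name": "Ada", "balance": 120.0}}


def data_get_customer(customer_id):
    """Data tier: the single point of entry into the data store."""
    return _customers[customer_id]


def business_get_statement(customer_id):
    """Business-logic tier: applies a business rule to raw data."""
    customer = data_get_customer(customer_id)
    status = "in credit" if customer["balance"] >= 0 else "overdrawn"
    return {"name": customer["name"], "status": status}


def present_statement(customer_id):
    """Presentation tier: formats the result for display only."""
    stmt = business_get_statement(customer_id)
    return f"{stmt['name']}: {stmt['status']}"


print(present_statement(1))  # Ada: in credit
```

Because each tier speaks only to the tier beneath it, the business rule (here, the credit check) can change without touching either the presentation code or the data store.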
Development Technologies
Microsoft Windows Distributed interNet Application (Windows DNA) Architecture is a dynamic set of technologies that you can use to build Web applications. Microsoft has added several key aspects to the architecture with Windows 2000.
This section contains:
• Component Services
• Dynamic HTML (DHTML)
• Windows Script Components
• Extensible Markup Language (XML)
• Active Directory Service Interfaces

Component Services:
New with Windows 2000, Component Services provides a number of services that make component and Web application development easier. These services include:
Queued Components
Queued Components allow you to create components that can execute immediately if the client and server are connected. They provide an easy way to invoke and execute components asynchronously. In the event that the client and server are not connected, the component can hold execution until a connection is made. Queued Components assist the developer by using method calls similar to those calls used in component development, thus diminishing the need for an in-depth knowledge of marshaling.
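The behavior described above, calls held while disconnected and replayed once a connection is made, can be sketched as a simple record-and-replay proxy. This is an illustration of the pattern only, not the real COM+ Queued Components API; all class and method names here are invented.

```python
# Sketch of the Queued Components idea: method calls are recorded
# while the client is disconnected and replayed on reconnection.
from collections import deque


class QueuedProxy:
    """Records calls for a component and replays them when connected."""

    def __init__(self, target):
        self._target = target
        self._pending = deque()
        self.connected = False

    def call(self, method, *args):
        if self.connected:
            return getattr(self._target, method)(*args)
        # Not connected: hold the call, as a queued component holds
        # execution until a connection is made.
        self._pending.append((method, args))

    def reconnect(self):
        """Flush every queued call in order once the link is restored."""
        self.connected = True
        while self._pending:
            method, args = self._pending.popleft()
            getattr(self._target, method)(*args)


class OrderComponent:
    def __init__(self):
        self.orders = []

    def place_order(self, item):
        self.orders.append(item)


server = OrderComponent()
proxy = QueuedProxy(server)
proxy.call("place_order", "widget")  # queued: client is offline
proxy.reconnect()                    # replayed on reconnect
print(server.orders)                 # ['widget']
```

The client code calls methods the same way whether or not the server is reachable, which is the point the text makes: the developer needs no in-depth knowledge of the underlying marshaling.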
Component Services Events
Component Services Events lets publishers and subscribers loosely connect to data sources so that these sources can be developed, deployed, and executed separately. The publisher does not need to know the number and location of the subscribers, and the subscriber uses an intermediate broker to find a publisher and manage the subscription to it. The event system simplifies component and Web application development by allowing both publisher and subscriber identities to be persistent: publishers' and subscribers' identities can be manipulated without being known to each other.
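The loose coupling this describes is the classic publish/subscribe pattern: an intermediate broker fans events out, so the publisher never holds a reference to any subscriber. The sketch below shows only that pattern; the names are invented and do not correspond to the real Component Services Events interfaces.

```python
# Publish/subscribe via a broker: publishers and subscribers are
# developed and connected independently of each other.
from collections import defaultdict


class EventBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # The publisher needs no knowledge of how many subscribers
        # exist or where they live; the broker fans the event out.
        for handler in self._subscribers[event_name]:
            handler(payload)


broker = EventBroker()
received = []
broker.subscribe("price_changed", received.append)
broker.publish("price_changed", {"sku": "A1", "price": 9.99})
print(received)  # [{'sku': 'A1', 'price': 9.99}]
```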
Dynamic HTML:
Dynamic HTML (DHTML), which Microsoft introduced with Internet Explorer 4.0, allows you to create much richer HTML that responds to events on the client. By upgrading your HTML pages to take advantage of DHTML, you will not only enhance the user experience, you will also build Web applications that use server resources more efficiently.
DHTML controls the appearance of HTML pages by setting properties in the document object model (DOM), a model which Microsoft has proposed to the World Wide Web Consortium (W3C) as a standard. DHTML exposes an event model that allows you to change DOM properties dynamically.
Windows Script Components:
Windows Script Components provide you with an easy way to create Component Object Model (COM) components using scripting languages such as Microsoft Visual Basic Scripting Edition (VBScript) and other languages compatible with the ECMA 262 language specification (such as Microsoft JScript 2.0 and JavaScript 1.1). You can use script components as COM components in applications such as Internet Information Services (IIS), Microsoft Windows Scripting Host (WSH), and any other application that can support COM components.
Script component technology is made up of the following:
• The script component run-time (Scrobj.dll).
• Interface handlers, which are components that extend the script component run-time. An interface handler is a compiled component (generally written in C++) that implements specific COM interfaces. When you install the script component run-time, you will receive the Automation interface handler, which makes it possible to call your script component from an .asp file.
• Your script component file (an .sct file). In your script component, you specify which interface handler you want to use. Your script component also defines the methods that can be called from an .asp file to accomplish the intended functionality.
Script components are an excellent technology for developing prototypes of COM components. Script components, like any other COM component, can be registered with Component Services if you intend for them to participate in transactions, or if you want to take advantage of the Component Services run-time environment. Because they are COM components, script components can access the ASP built-in objects.


XML:
Extensible Markup Language (XML), like HTML, allows you to apply markup, in the form of tags, to a document. However, unlike HTML, XML is designed to be used as a generalized markup language. In other words, markup applied to an XML document can be used to convey not only display and formatting information as with HTML, but semantic and organizational structure for the document. This flexibility makes XML extremely powerful, and the possible range of applications is impressive.
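A short example makes the contrast with HTML concrete: the tag names in the document below describe what the data is (an order, a customer, an item), not how to render it, and a program can recover that semantic structure directly. The document content is invented for illustration.

```python
# Parsing semantic XML markup with the standard library: tag and
# attribute names carry meaning, not display instructions.
import xml.etree.ElementTree as ET

doc = """
<order id="1001">
  <customer>Contoso Ltd.</customer>
  <item sku="A1" quantity="3"/>
</order>
"""

root = ET.fromstring(doc)
print(root.find("customer").text)         # Contoso Ltd.
print(root.find("item").get("quantity"))  # 3
```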
Active Directory Service Interfaces:
Microsoft Active Directory Service Interfaces (ADSI) is a COM-based directory service model that allows ADSI-compliant client applications to access a wide variety of distinct directory protocols, including Windows Directory Services, LDAP, and NDS, while using a single, standard set of interfaces. ADSI shields the client application from the implementation and operational details of the underlying data store or protocol.
An application called an ADSI provider makes itself available to ADSI client applications. The data exposed by the provider is organized in a custom namespace, defined by the provider. In addition to implementing the interfaces defined by ADSI, the provider also can implement the ADSI schema. The schema is used to provide metadata about the namespace structure and objects that are provided by the ADSI provider.
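The essence of the provider model described above is one client-facing interface with many interchangeable implementations, so client code never sees the details of the underlying store. The sketch below shows only that shielding pattern; the class names, paths, and data are invented and are not the real ADSI interfaces.

```python
# One uniform directory interface, multiple provider implementations:
# the client is shielded from each store's details.


class DirectoryProvider:
    """Common interface every provider implements."""

    def get_object(self, path):
        raise NotImplementedError


class LdapProvider(DirectoryProvider):
    def __init__(self):
        self._store = {"cn=alice": {"mail": "alice@example.com"}}

    def get_object(self, path):
        return self._store[path]


class MetabaseProvider(DirectoryProvider):
    def __init__(self):
        self._store = {"/w3svc/1": {"ServerComment": "Default Web Site"}}

    def get_object(self, path):
        return self._store[path]


def lookup(provider, path):
    # Client code is written once against the common interface and
    # works unchanged with any provider.
    return provider.get_object(path)


print(lookup(LdapProvider(), "cn=alice")["mail"])
print(lookup(MetabaseProvider(), "/w3svc/1")["ServerComment"])
```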
ADSI and IIS
IIS currently stores most Internet site configuration information in a custom data store called the IIS metabase. IIS exposes a low-level DCOM interface that allows applications to gain access to, and manipulate, the metabase. To make it easy to access the metabase, IIS also includes an ADSI provider that wraps most of the functionality provided by the DCOM interface, and exposes it to any ADSI-compliant client applications.
COM: The Cornerstone of Windows DNA
Avalani notes that Windows DNA is based on a programming model called COM (Component Object Model). The COM model has come into widespread use since its introduction by Microsoft and is an integral part of many Microsoft applications and technologies, including Internet Explorer and the Office suite of applications.
Unlike traditional software development, which required each application to be built from scratch, COM allows developers to create complex applications using a series of small software objects. Much like cars or houses are built with standardized "parts," COM lets developers make portions of their applications using components. For example, Avalani says, a component might be a tax calculation engine or the business rules for a price list. A growing number of third-party vendors sell COM components.
This approach speeds up the development process by allowing several teams to work on separate parts at the same time. Developers can also reuse components from one project to the next, and they can easily swap out or update a particular component without affecting other portions of the application. COM also offers the advantage of programming language independence. That means developers can create COM components using the tools and languages they're familiar with, such as Visual Basic, C, C++, and Java.
An easy way to look at it is that COM serves as the glue between the tiers of the architecture, allowing Windows DNA applications to communicate in a highly distributed environment.

DNA - An Architecture for Distributed Applications
DNA stands for Distributed interNet Application architecture, and it is the model Microsoft promotes for development of applications that will be accessible by widely separated clients. It can, however, also be a confusing array of terms and technologies.
To combat this confusion, BengalCore recently wrote an explanation of the different sections and development components that make up the Microsoft DNA, as part of a course on new technologies.
The following picture shows the different pieces within the DNA architecture, and how they work together.
Server machine
Placing your business objects on the server increases your control over the entire application and over configuration issues. It also improves the security of the system and reduces the client-side software footprint.

1. Central Database
By keeping all data in a central location, you open up opportunities for data sharing between clients and for central reporting. Business objects need only a central point-of-entry into the data store.
2. C++ COM DLLs
An easy way to port legacy code to a distributed application is to "wrap" that code with a COM interface. That code is then accessible to all other system components.
3. VB COM DLLs
Visual Basic provides easy control of databases and easy access to the automation methods of Excel, Access, and SourceSafe.
4. IIS Web Server
Microsoft's web server product. It comes with the NT Server operating system and provides support for Active Server Pages, ISAPI, and custom embedded controls.
5. Active Server Pages
ASP is a scripting environment supported by IIS, combining HTML, VBScript, and COM. These scripts run on the web server and are then converted to HTML for the client response. ASP provides default components for interaction with the server or with a database, and pages can also embed custom business-object components for your application.
Client machine
DNA expands the client base of your application to anyone capable of running the Internet Explorer 4.0 browser, making crossing machine boundaries easier.
6. ActiveX controls
These are visual components that are embedded into a web page and downloaded to the client machine. They can provide custom display or input beyond that which is available in the standard set of controls (buttons, text fields, and lists). An example ActiveX control would be a custom chart or grid.
7. Internet Explorer 4.0
The 4.0 version of Internet Explorer supports several new features that allow the browser to become the framework for your application's Graphical User Interface (GUI). By using the browser as the presentation layer, all of the networking to the server is handled for you.
8. Dynamic HTML
This extension to the HTML standard provides precise placement of objects on the screen, data binding, effects, and dynamic modification capabilities.
9. Custom graphics
Graphics and presentation are the final piece to this puzzle. A consistent GUI provides customers with a pleasing means of interfacing with your application.
Given the proper underlying infrastructure, the multitier model of presentation, business logic, and data can physically distribute processing over many computers. However, the core abstractions that have worked for single- and two-tier models in the past (high-level programming languages, database management systems, and graphical user interfaces) do not fully address the requirements of multitier application development. A different level of abstraction is needed to develop scalable, manageable, and maintainable multiuser applications, and at Microsoft we believe this abstraction is cooperating components.
Cooperating Components
Microsoft's Windows DNA strategy rests on Microsoft's vision of cooperating components that are built based on the binary standard called the Component Object Model (COM). COM is the most widely used component software model in the world, available on more than 150 million desktops and servers today. It provides the richest set of integrated services, the widest choice of easy-to-use tools, and the largest set of available applications. In addition, it provides the only currently viable market for reusable, off-the-shelf client and server components.
COM enables software developers to build applications from binary software components that can be deployed at any tier of the application model. These components provide support for packaging, partitioning, and distributed application functionality. COM enables applications to be developed with components by encapsulating any type of code or application functionality, such as a user interface control or line of business object. A component may have one or more interfaces; each exposes a set of methods and properties that can be queried and set by other components and applications. For example, a customer component might expose various properties such as name, address, and telephone number.
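The customer-component example in the paragraph above can be sketched as a class whose interface exposes properties and a method that other components query and call. Python properties stand in here for COM interface members; this is an analogy to illustrate the shape of a component, not real COM, and the data is invented.

```python
# Sketch of a 'customer component' exposing an interface of
# properties (name, address, telephone) and a callable method.


class Customer:
    def __init__(self, name, address, telephone):
        self._name = name
        self._address = address
        self._telephone = telephone

    # Read-only properties other components can query.
    @property
    def name(self):
        return self._name

    @property
    def address(self):
        return self._address

    @property
    def telephone(self):
        return self._telephone

    def mailing_label(self):
        # A method exposed through the component's interface.
        return f"{self._name}\n{self._address}"


c = Customer("Contoso Ltd.", "1 Main St", "555-0100")
print(c.name)            # Contoso Ltd.
print(c.mailing_label())
```

Callers depend only on the interface (the property and method names), so the component's internal representation can change without breaking the applications that use it, which is the reuse argument the text makes for COM.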
With the Microsoft Windows DNA model, components take away the complexity of building multitier applications. Applications based on components and the Windows DNA model rely on a common set of infrastructure and networking services provided in the Windows application platform. The Microsoft Windows NT security service, for example, provides access control to the Internet Information Server (IIS), as well as transaction and message queuing services. Other common services include systems management, directory services, networking, and hardware support.
Client Environments and Presentation Tier
Today, many application developers using cooperating components target the development of their applications to the Windows platform to take full advantage of the rich user interface Windows has to offer. Likewise, customers have come to expect a rich, highly functional user interface from their applications. The extended reach of information and services to customers that the Internet has enabled has created a new challenge for the application developer. The application developer today must develop a user interface that is distributable, available on Windows and non-Windows platforms, and supports a wide range of client environments, from handheld wireless devices to high-end workstations. Yet, applications must be rich with features to stay competitive and maintain the functionality that customers have come to expect.
As depicted in Figure 3, Windows DNA offers a broad range of presentation options, giving the application developer the choice when developing the best solution. Windows DNA permits the developer to choose the appropriate Windows components and Internet technologies to support the richest possible interface and range of client environments, from handheld wireless devices to high-end workstations.
To maintain broad reach to a wide range of client environments while achieving the greatest compatibility with all browsers, application developers will generally use standard HTML in developing their browser-neutral applications. Microsoft tools and application services support the current generation of standard HTML.

Figure 3. Windows DNA presentation approaches
The compromise in using static HTML is the reduced functionality and richness in the application's user interface that customers have come to expect. This is acceptable for applications that require broad reach and browser neutrality.
There is a class of applications that don't have a browser-neutrality requirement. The reality is that many corporations standardize on a single browser. In addition, application developers who want to provide more functionality in their application than they can achieve with standard HTML write code to determine the browser being used. These browser-enhanced applications are written to take advantage of the technologies inherent in the browser to gain maximum functionality and richness. With technologies like Dynamic HTML (DHTML) and scripting, application developers can create functional Web-based interfaces for data entry or reporting without using custom controls or applets.
DHTML is based on the W3C-standard Document Object Model, which makes all Web-page elements programmable objects. Think of DHTML as "programmable" HTML. Contents of the HTML document, including style and positioning information, can be modified dynamically by script code embedded in the page. Thus, scripts can change the style, content, and structure of a Web page without having to refresh it from the Web server. Because the client does not have to repeatedly return to the Web server for changes in display, network performance improves. Unlike Java applets or Microsoft ActiveX controls, DHTML has no dependencies on the underlying virtual machine or operating system. For clients without DHTML support, the content appears in a gracefully degraded form.
There are times when DHTML plus scripting is not enough. Segments of applications need to leverage the operating system and underlying machine on which they are hosted, while still maintaining an active connection to the Internet for data or additional services. It is in those instances that application developers can take advantage of the robust components and Internet services provided by Windows to build Internet-reliant applications. Unlike page-based applications that run within the context of a browser, an Internet-reliant application is a full-fledged Windows executable that has full access to the broad range of services provided by the Windows client. These applications generally use a combination of HTML, DHTML, scripting, and ActiveX controls to provide rich integration with the client system as well as full connectivity to remote services on the Internet.
Applications written using the Microsoft Win32 application programming interface (API) offer the most functionality with reach limited to the application platforms that support the Win32 API. Through the use of cooperating components, developers today can have access to Internet technologies in the Windows application platform from within a Win32-based application. Applications written to the Win32 API that take advantage of system features and leverage Internet connectivity are called Internet-enhanced applications. Some common examples are the Microsoft Office 97 and Microsoft Visual Studio 98 development systems. These applications support unified browsing by embedding hyperlinks from within the application, host the browser for the display of documentation written in DHTML, and provide the capability to download updates to the products over the Internet seamlessly.

Application Services

Figure 4. Application services
The business logic tier is the heart of the application, where the application-specific processing and business rules are maintained. Business logic placed in components bridges the client environments and the data tier. The Windows DNA application platform has been developed through years of innovation in supporting high-volume, transactional, large-scale application deployments, and it provides a powerful run-time environment for hosting business logic components. As depicted in Figure 4, the application platform for developing Windows DNA applications includes Web services, messaging services, and component services.
Web Services
Integrated with Microsoft's application platform is a high-performance gateway to the presentation tier. Microsoft's Internet Information Server (IIS) enables the development of Web-based business applications that can be extended over the Internet or deployed over corporate intranets. With IIS, Microsoft introduced a new paradigm to the Internet: transactional applications. Transactions are the plumbing that makes it possible to run real business applications with rapid development, easy scalability, and reliability.
Active Server Pages (ASP), a component of IIS, is the language-neutral, compile-free, server-side scripting environment that is used to create and run dynamic, interactive Web server applications. By combining DHTML, scripting, and components, ASP enables application developers to create dynamic, interactive Web content and powerful Web-based applications.
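The following sketch illustrates the server-side step ASP performs: script executes on the server, and the browser receives only the resulting HTML. The page and data here are invented for illustration; a real ASP page would interleave markup with `<%= ... %>` script blocks in VBScript or JScript.

```javascript
// Server-side rendering in miniature: the function plays the role of
// an ASP page, turning data into the HTML the browser will receive.
// (Invented data; a real page would use <%= ... %> script blocks.)
function renderPriceList(items) {
  var rows = items.map(function (item) {
    return "<tr><td>" + item.name + "</td><td>" + item.price + "</td></tr>";
  });
  return "<table>" + rows.join("") + "</table>";
}

var html = renderPriceList([{ name: "Widget", price: "9.95" }]);
```

Because the work happens before the response is sent, the client needs no scripting support at all to see the dynamic content.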
With the trend toward distributed computing in enterprise environments, it is important to have flexible and reliable communication among applications. Businesses often require independent applications that are running on different systems to communicate with each other and exchange messages even though the applications may not be running at the same time. Applications built using a combination of ASP scripts communicating with cooperating components can interoperate with existing systems, applications, and data.
Component Services
Windows DNA is based on a programming model called COM (Component Object Model). The COM model has come into widespread use since its introduction by Microsoft and it is an integral part of many Microsoft applications and technologies, including Internet Explorer and the Office suite of applications.
Unlike traditional software development, which required each application to be built from scratch, COM allows developers to create complex applications using a series of small software objects. Much like cars or houses are built with standardized "parts," COM lets developers make portions of their applications using components. For example, a component might be a tax calculation engine or the business rules for a price list. A growing number of third-party vendors sell COM components.
This approach speeds up the development process by allowing several teams to work on separate parts at the same time. Developers can also reuse components from one project to the next, and they can easily swap out or update a particular component without affecting other portions of the application. COM also offers the advantage of programming-language independence, which means developers can create COM components using the tools and languages they are familiar with, such as Visual Basic, C, C++, and Java.
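The component idea can be shown in miniature: the caller programs against a small "interface" (here, a single computeTax method), so one tax engine can be swapped for another without touching the rest of the application. The names below are illustrative, not a real COM API.

```javascript
// Two interchangeable "components" exposing the same computeTax
// interface. The engine names are invented for illustration.
function FlatTaxEngine(rate) {
  this.computeTax = function (amount) { return amount * rate; };
}

function ExemptTaxEngine() {
  this.computeTax = function () { return 0; };
}

// The caller knows only the interface, not which engine it received,
// so engines can be swapped without changing this code.
function invoiceTotal(subtotal, taxEngine) {
  return subtotal + taxEngine.computeTax(subtotal);
}

var taxed = invoiceTotal(100, new FlatTaxEngine(0.25)); // 125
var exempt = invoiceTotal(100, new ExemptTaxEngine());  // 100
```

In COM the "interface" is a binary contract, which is what makes the swap possible across languages as well as across vendors.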
In the early 1990s, the underlying concept that facilitated interoperability was componentization; the underlying technology that enabled interoperability was COM. As it turns out, componentization is not only a great way to achieve interoperability, but a great way to design and develop software in general. So, in the mid-1990s Microsoft broadened COM's applicability beyond the desktop application to also include distributed applications by introducing Microsoft Transaction Server (MTS). MTS was an extension to the COM programming model that provided services for the development, deployment, and management of component-based distributed applications. MTS was a foundation of application platform services that facilitated the development of distributed applications for the Windows platform in a much simpler, more cost-effective manner than other alternatives.
COM+ is the next evolutionary step of COM and MTS. The unification of the programming models inherent in COM and MTS makes it easier to develop distributed applications by eliminating the tedious nuances of developing, debugging, deploying, and maintaining an application that relies on COM for certain services and MTS for others. The benefit to the application developer is that distributed applications become faster, easier, and ultimately cheaper to develop, because less code is required to leverage the underlying system services.
To continue to broaden COM and the services offered today in MTS 2.0, COM+ consists of enhancements to existing services as well as new services to the application platform. They include:
• Bring your own transaction. COM components are able to participate in transactions managed by non-COM+ transaction processing (TP) environments that support the Transaction Internet Protocol (TIP).
• Expanded security. Support for both role-based security and process-access-permissions security. In the role-based model, access to various parts of an application is granted or denied based on the logical group, or role, to which the caller has been assigned (for example, administrator, full-time employee, or part-time employee). COM+ expands on the current implementation of role-based security by including method-level security for both custom and IDispatch(Ex)-based interfaces.
• Centralized administration. The Component Services Explorer, a replacement for today's MTS Explorer and DCOMCNFG, presents a unified administrative model, making it easier to deploy, manage, and monitor n-tier applications by eliminating the overhead of using numerous individual administration tools.
• In-memory database. The In-Memory Database maintains durable and transient state information in a consistent manner. It is an in-memory, fully transactional database system designed to provide extremely fast access to data on the machine on which it resides.
• Queued components. Asynchronous, deferred execution for times when cooperating components are disconnected, in addition to the session-based, synchronous client/server programming model in which the client maintains a logical connection to the server.
• Event notification. For times when a loosely coupled event notification mechanism is desirable, COM+ Events is a unicast/multicast, publish/subscribe event mechanism that allows multiple clients to "subscribe" to events that are "published" by various servers. This is in addition to the existing event notification framework delivered with connection points.
• Load balancing. Load balancing allows component-based applications to distribute their workload across an application cluster in a client-transparent manner.
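As a minimal illustration of the publish/subscribe model behind COM+ Events, the sketch below lets multiple subscribers receive events published under a name, with publisher and subscribers never referencing each other directly. This is conceptual only; real COM+ events are registered and configured through the COM+ catalog, not hand-coded like this.

```javascript
// Publishers and subscribers are coupled only by the event name.
var subscriptions = {};

function subscribe(eventName, handler) {
  (subscriptions[eventName] = subscriptions[eventName] || []).push(handler);
}

function publish(eventName, payload) {
  (subscriptions[eventName] || []).forEach(function (handler) {
    handler(payload);
  });
}

// Two independent subscribers receive the same published event.
var received = [];
subscribe("OrderPlaced", function (e) { received.push("billing:" + e.id); });
subscribe("OrderPlaced", function (e) { received.push("shipping:" + e.id); });
publish("OrderPlaced", { id: 7 });
```

The loose coupling is the point: the publisher does not know how many subscribers exist, or whether any do.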
Messaging Services
Microsoft Message Queue Server (MSMQ) provides loosely coupled, reliable network communication services based on a message-queuing model. MSMQ makes it easy to integrate applications by implementing a push-style business-event delivery environment between them, and to build reliable applications that work over unreliable but cost-effective networks. Its simple COM-based programming model lets developers focus on business logic rather than on sophisticated communications programming. MSMQ also offers seamless interoperability with other message-queuing products, such as IBM's MQSeries, through products available from Microsoft's independent software vendor (ISV) partners.
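The message-queuing model can be captured in miniature: the sender enqueues a message and continues, and the receiver drains the queue later, so the two applications need not be connected, or even running, at the same time. This is conceptual only; real MSMQ queues are durable and are accessed through MSMQ's COM objects.

```javascript
// Messages wait in the queue until the receiver is ready, so sender
// and receiver need not run at the same time. (Conceptual only; real
// MSMQ queues are durable and accessed through COM objects.)
var orderQueue = [];

function send(queue, message) {
  queue.push(message);
}

function receive(queue) {
  return queue.length > 0 ? queue.shift() : null;
}

send(orderQueue, { orderId: 1 });
send(orderQueue, { orderId: 2 });
// ... later, possibly after the sender has disconnected ...
var first = receive(orderQueue);
```

Delivery order is preserved, and unconsumed messages simply remain queued until a receiver asks for them.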

Extending to the Mainframe Transaction-Processing World
Using Microsoft's COM Transaction Integrator (COM TI), application developers can extend and expose Customer Information Control System (CICS) and Information Management System (IMS) transaction programs through the use of COM components. COM TI consists of a set of development tools and run-time services that automatically "wrap" IBM mainframe transaction and business logic as COM components that run in a Windows DNA environment. All COM TI processing is done on a Windows NT Server; host communication is accomplished through SNA Server. COM TI does not require the mainframe to run any executable code or be programmed in any special way.
Universal Data Access
Universal Data Access is Microsoft's strategy for providing access to information across the enterprise. Today, companies building database solutions face a number of challenges as they seek to gain maximum business advantage from the data and information distributed throughout their corporations. Universal Data Access provides high-performance access to a variety of information sources, including relational and nonrelational data, and an easy-to-use programming interface that is tool and language independent.
Universal Data Access does not require expensive and time-consuming movement of data into a single data store, nor does it require commitment to a single vendor's products. Universal Data Access is based on open industry specifications with broad industry support, and works with all major established database platforms.
As depicted in Figure 5, the Universal Data Access framework operates at two levels. At the systems level, OLE DB defines a component-based architecture specified as a set of COM-based interfaces that encapsulate various database management system services. The OLE DB architecture does not constrain the nature of the data source; as a result, Microsoft and ISVs have introduced providers for a wide variety of data sources, including indexed sequential files, groupware products, and desktop products. At the application level, ActiveX Data Objects (ADO) provides a high-level interface that enables developers to access data from any programming language.

Figure 5. Data access

Top Windows DNA Performance Mistakes and How to Prevent Them
Microsoft Windows DNA is Microsoft's platform for building a new generation of effective and versatile business applications for the Web. Through the COM+ programming model, Windows DNA incorporates a number of familiar technologies, including Microsoft Windows 2000, Microsoft Visual Studio, and Microsoft SQL Server, allowing for the construction of a secure, stable, and scalable business infrastructure that can readily integrate diverse systems and applications.
At the core of Windows DNA is the capability of building n-tier applications, which include one or more middle tiers between the client and the server. An important element of contemporary software architecture, n-tier applications provide clear advantages over typical client/server implementations, especially in the level of scalability they can provide. They are essential for the increasing levels of cross-platform interactivity required by today's, and tomorrow's, business Web sites.
Producing a good n-tier application often entails a series of judgments in planning and implementing the final product. When those decisions are poorly made, development teams can encounter time-consuming, and often difficult to solve, performance problems after the application has been installed and deployed. Fortunately, many of these problems can be anticipated and prevented. This article shows you how to find and eliminate them early in the development process. The mistakes that follow were identified by Microsoft Consulting Services (MCS) consultants worldwide. We've assembled some useful solutions; while following them may not prevent all of the problems you'll encounter, it will significantly reduce performance degradation.
Misunderstanding the Relationship between Performance and Scalability
Performance and scalability are not the same, but neither are they at odds. For example, an application may process information at an incredibly fast rate as long as the number of users sending it information is less than 100. When that application reaches the point at which 10,000 users are simultaneously providing input, the performance may degrade substantially, because scalability wasn't high enough in the list of considerations during the development cycle. On the other hand, that same high-performance application may be partially rewritten in a subsequent iteration and have no problem handling 100,000 customers at one time. By then, however, a substantial number of customers may have migrated to a product someone else got right the first time.
Sometimes applications seek scalability in terms of number of concurrent users strictly through performance, with the idea being that the faster a server application runs, the more users can be supported on a single server. The problem with this approach is that increasing the number of simultaneous users may create a bottleneck that will actually reduce the level of performance as the load increases. One cause of this kind of behavior is caching state and data in the middle tier. By avoiding such caching in the design phase of the development process, countless hours of backtracking and rewriting code can be avoided. The ideal is to find a point of balance that provides acceptable performance in a scalable implementation of a particular application. Finding this point always involves trade-offs.
Let's look at some of the basic concepts involved in scalability. Throughput refers to the amount of work (number of transactions) an application can perform in a measured period of time, and it is often expressed in transactions per second (tps). Scalability refers to how nearly linearly throughput changes when resources are increased or decreased; it is what allows an application to support anywhere from a handful to thousands of users simply by adding or subtracting resources as necessary to "scale" the application. Finally, transaction time refers to the time needed to acquire the necessary resources plus the time the transaction spends actually using them.
The point to note here is that scalability increases as throughput increases; that is, the higher the throughput growth per resource, the greater the scalability. Clearly, application developers must concentrate their efforts on increasing throughput growth if they are to increase scalability. Of course, the obvious question then becomes, how exactly does one go about increasing throughput? The answer to that question may sound reasonably simple: Reduce the overall growth of transaction times. But just how easy might that be?
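A small worked example makes the definitions concrete (the numbers are invented):

```javascript
// Worked example of the definitions above (the numbers are invented).
// Throughput: transactions completed per unit time, here in tps.
function throughput(transactions, seconds) {
  return transactions / seconds;
}

var oneServer = throughput(12000, 60);   // 200 tps on one server
var fourServers = throughput(42000, 60); // 700 tps on four servers

// Perfectly linear scaling would multiply throughput by 4; the
// measured growth, as a fraction of linear, indicates scalability:
var scalingFactor = (fourServers / oneServer) / 4; // 0.875 of linear
```

Here quadrupling the resources only multiplied throughput by 3.5, so the application scales at 87.5% of linear; the closer that fraction stays to 1 as resources grow, the more scalable the design.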
Transaction times can be extended by a variety of factors. Acquiring the necessary resources can be slowed by such factors as network latency, disk access speed, database locking scheme, and resource contention. Added to that are elements that can affect resource usage time, such as network latency, user input, and sheer volume of work. Windows DNA application developers should concentrate on keeping resource acquisition and resource usage times as low as possible. Frank Redmond lists the following ways to manage some of these factors:
• Avoid involving user interaction as part of a transaction.
• Avoid network interaction as part of a transaction.
• Acquire resources late and release them early.
• Make more resources available, or use MTS to pool resources that are in short supply or expensive to create.
• Use MTS to share resources between users, because it is usually more expensive to create a new resource than to reuse an existing one.
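The third and fourth points can be sketched together: a small pool reuses an expensive resource, and each unit of work acquires it as late as possible and releases it as early as possible. MTS provides pooling as a service; the hand-rolled version below exists only to show the effect.

```javascript
// Hand-rolled sketch of "acquire late, release early" plus pooling.
// Because each unit of work releases before the next acquires, one
// resource serves every call and nothing new is ever created.
var pool = [];
var created = 0;

function acquire() {
  if (pool.length > 0) return pool.pop(); // reuse an idle resource
  created += 1;                           // create only when none are idle
  return { id: created };
}

function release(resource) {
  pool.push(resource);
}

function doWork() {
  var conn = acquire(); // acquired as late as possible
  // ... brief use of conn here ...
  release(conn);        // released as early as possible
}

doWork();
doWork();
doWork();
```

Three units of work complete with a single resource creation, which is exactly the saving pooling is meant to deliver when creation is expensive.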
Eliminating the confusion that exists about the relationship of performance and scalability, in this context, primarily means remembering that running a high-performance application is not the only consideration for gaining an acceptable level of performance in a Windows DNA application. It must be scalable so that the largest number of simultaneous users can be logged on without compromising throughput to an unacceptable level.
Mistakes in the Middle Tier
The next three performance mistakes relate directly to middle tiers in the Windows DNA application. A middle tier in an n-tier application is necessarily complex because of the role it plays in the overall application. The specific tasks it performs can be separated into three general categories that are essential to Windows DNA applications.
The first task involves receiving input from the presentation tier. This input can be done programmatically or may come directly from a user. It may include information about (or a request for) almost anything. Second, a middle tier is responsible for interacting with the data services to perform the business operations that the application was designed to automate. For example, this might include sorting and combining information from different mailing lists to target a specific audience that was never previously considered to be a cohesive group. Finally, a middle tier returns processed information to the presentation tier so it can be used however the program or user sees fit. Within these three areas, performance can degrade significantly when developers use programming practices that are either little understood or mistakenly embraced as the "right" thing to do. These performance-compromising mistakes are explained more fully in the following sections.
Instantiating Deep and Complex Object Hierarchies in a Middle Tier
Development teams sometimes adopt complex sets of classes as part of a quest for object-oriented purity. In other cases, large numbers of simple objects are instantiated to model complex interactions through delegation. The practice of instantiating large numbers of objects and causing each to interact with the data store (to populate and persist itself) yields less scalable applications. Simply put, designing for the middle tier is not the same as designing for a traditional object-oriented fat-client application. The memory allocation associated with creating objects can be costly, and freeing memory can be even more so, because of the general practice of attempting to coalesce adjacent free blocks into larger blocks.
Performing Data-Centric Work in a Middle Tier
Developers sometimes fall into the trap of including data-centered tasks with the business-services work in a middle tier instead of in the data-services tier where they belong. Rules are frequently too rigid to account for all cases, but it would be very unusual to find a justification for breaking this one. If data-centered tasks are included in a middle tier, your Windows DNA application is likely to perform more poorly than it otherwise would.
For example, it would be a mistake to retrieve multiple data sets from different tables and then join, sort, or search the data in middle-tier objects. The database is designed to handle this kind of activity and removing it to a middle tier is almost certainly a bad practice. True, there may be circumstances where doing so is called for because of the nature of the data store, but as much of this as possible should happen in the database before the dataset is returned to the middle tier.
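The anti-pattern can be shown in miniature, alongside the preferred alternative of a single joined query. The tables, columns, and data below are invented for illustration.

```javascript
// The anti-pattern in miniature: fetch two result sets, then join them
// in middle-tier code. The data and names below are invented.
var customers = [{ id: 1, name: "Contoso" }];
var orders = [{ customerId: 1, total: 250 }];

function joinInMiddleTier(customers, orders) {
  return orders.map(function (o) {
    var match = customers.filter(function (c) {
      return c.id === o.customerId;
    })[0];
    return { name: match.name, total: o.total };
  });
}

var joined = joinInMiddleTier(customers, orders);

// Preferred: one round trip that lets the database do the join, e.g.
// via a stored procedure or a single joined query such as:
var preferredQuery =
  "SELECT c.name, o.total FROM Customers c " +
  "JOIN Orders o ON o.customerId = c.id";
```

The middle-tier version works on toy data, but it re-implements, without indexes or a query optimizer, exactly the work the database engine is built to do.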
Maintaining a Stateful Middle Tier
Most developers have heard that maintaining state in MTS/COM+ components is a bad thing to do, but a surprising number of projects still attempt it, sometimes in the hope that performance won't degrade to a significant degree. On the other hand, it is sometimes said that you can't use stateful components in MTS, and that, of course, is not true. However, to achieve scalability and performance, you shouldn't use stateful components.
Specific issues exist with using the Session and Application objects to store state of any kind. Not the least of these issues is the current inability to scale such objects across multiple servers. This becomes especially problematic (even in single-server deployments) when one attempts to cache object instances, such as database connections in Session or Application objects.
Let's say you're using Microsoft's ActiveX Data Objects (ADO). If you store your ADO objects in Session variables, you'll introduce both scalability limitations and threading considerations. By storing a connection object this way, you lose the benefit of connection pooling. In terms of performance, doing this ensures that the connection object will only serve the user for which a given session is created, and the connection will not be released to the pool until the end of the session. Beyond that length of time, you must also take into account the default timeout assigned to a session variable. Session resources for each user are consumed for 20 minutes of idle time before being released. You can reduce this length of time either manually or programmatically, but you risk creating additional difficulties.
Instead of storing the object in a Session variable, you need to, in effect, create it and destroy it in every applicable ASP page. Thanks to MTS and COM+, this doesn't involve the overhead normally associated with object creation and destruction. Using a technique known as interception, the MTS run time inserts a transparent layer called a context wrapper between a base client and an object in the MTS run-time environment. The MTS run time is then able to monitor the connection and take control on a call-by-call basis. MTS requires the use of the SafeRef method, but this is not necessary in COM+ (Windows 2000 and later), because contexts have replaced the MTS concept of context wrappers.
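The two patterns can be contrasted with an invented pool and connection object (not real ADO): caching a connection in the Session pins it to one user until the session times out, while per-page creation returns it to the pool as soon as the page finishes.

```javascript
// Invented pool and connection objects (not real ADO), contrasting the
// Session-cached connection with per-page create-and-release.
var pool = { idle: 3 };

function openConnection() {
  pool.idle -= 1;
  return { open: true };
}

function closeConnection(conn) {
  conn.open = false;
  pool.idle += 1;
}

// Anti-pattern: the connection lives in Session("conn") and is pinned
// to one user until the session times out (20 idle minutes by default).
var session = { conn: openConnection() };

// Preferred: each ASP page opens late and closes early, so the
// connection returns to the pool as soon as the page finishes.
function pageRequest() {
  var conn = openConnection();
  // ... run this page's queries ...
  closeConnection(conn);
}

pageRequest();
```

After the page request completes, only the session-cached connection is still withheld from the pool; multiply that by thousands of idle sessions and the scalability cost becomes clear.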
Poorly Tuning Your Database
Even with the ever-increasing processor power available for database servers, poorly tuned indexes and queries can bring an otherwise robust system to its knees. It is quite common to see developers coding stored procedures or queries without consulting the database administrator (DBA), or even running a project with no DBA involvement at all. Similarly, the table design (including the data types of keys, the degree of normalization or denormalization, and certainly the index structure) plays a critical role in the performance of the overall system. Some of these factors can be designed in, but others, such as the index and normalization strategies, should be implemented as a best guess and then carefully tuned through load testing. In general, not using the database efficiently (making multiple queries when one stored procedure call would do, getting data one row at a time, or explicitly locking data when it isn't necessary) is a ready source of problems. These can be solved by bringing them to developers' attention before code is written.
Making Poor Algorithm Choices
With the schedule pressures that beset many projects, developers often implement the first algorithm that comes to mind to solve a particular problem. In addition to introducing common bugs (such as initializing variables inside a loop), developers may fail to take into account the load characteristics of the algorithms they choose. Careful planning during the early stages of the development process can prevent this problem and, as in most software engineering situations, is usually the least costly solution. If poor choices are inadvertently made, discovering the problem early through selective testing can minimize the damage.
Failing to discover poor performance until late in a project's development cycle is rarely an affordable mistake.
FEATURES AND ADVANTAGES OF WINDOWS DNA
• DNA helps in designing and building multi-tier client/server applications. It provides a structured approach to creating applications whose components are clearly separated into distinct functional groups, with common communication protocols linking those groups. This yields faster, less error-prone design and development, and interchangeability of components.
• DNA provides client transparency. The front end (or client) is independent of the back end of the application: it needs no knowledge of what the back end does or how it does it. As long as it follows the DNA protocol and processing guidelines, it can be almost anything, from a standard Web browser to a specially developed application written in almost any programming language.
• DNA applications provide full transactional processing support. In applications of any real complexity, multiple operations are performed at different levels of the application and at different times. To guarantee the integrity of the results, there needs to be control over each set of operations as a whole, as well as monitoring of every individual step. DNA, and the associated software plumbing components, can accomplish this almost transparently and seamlessly.
• DNA can be used to create fault-tolerant applications. No network can ever be guaranteed to give continuous, fast performance, so a distributed application needs to be able to cope with network delays and software failures while protecting data integrity and providing high availability and reliability.
• DNA is ideal for distributed applications. Once an application becomes distributed, that is, divided into separate parts linked by a network, the problem of communication between the parts arises. Earlier, it was necessary to create custom formats and techniques for passing information between each part of the application, leading to longer design and implementation periods, an increased number of bugs, and poor interoperability between applications. By standardizing the communication protocols and interfaces, DNA boosts development speed and application reliability.
The DNA methodology covers many existing technologies to help design and implement robust, distributed applications. It visualizes this whole application as a series of tiers, with the client at the top and the data store at the bottom. The core of DNA is the use of business objects in a middle tier of the application.
Also, in DNA, business objects are implemented as software components. These components can be accessed by the client interface application or by another component, and can themselves call on other components, data stores, etc. Componentization of business rules brings many benefits, such as easier maintenance, encapsulation of the rules, protection of intellectual copyright, etc.
Hence, DNA is an approach to design that can speed up overall development time, while creating more reliable and fault tolerant applications that are easily distributable over a whole variety of networks.
To run these applications, Windows DNA relies on a rich set of integrated services supplied by the Windows platform. These services are infrastructure technologies that would be required for any scalable, distributed application: for instance, transaction processing, security, directory services, and systems management.
By providing a stable base of common services, Windows DNA relieves developers from the burden of creating their own infrastructure and allows them to focus instead on delivering business solutions. Developers save time, reduce costs, get their applications to market more quickly and equip companies for responding proactively to changing business conditions. These benefits are especially compelling in today's competitive business climate.
There are several more good reasons why companies should base their applications on Windows DNA. Because the architecture is built on open protocols and industry standards, solutions from other vendors integrate easily into the environment. This helps ensure interoperability with mission-critical business applications, such as corporate databases and enterprise resource planning systems. An open approach also facilitates compatibility with existing computing systems, which means that companies can continue to take advantage of their legacy systems rather than replacing them.
The benefits of distributed computing applications that embrace Internet technologies are many. For individuals, it means the freedom to communicate or access information at any time and from any place. For businesses, it means making more informed decisions, better understanding their customers, and responding quickly as their business needs evolve. For software developers, the challenge has been how to build these solutions. Windows DNA offers them a cohesive and proven application architecture for distributed and Internet-based computing solutions.

Conclusion
The Windows DNA architecture and the Windows NT platform offer many distinct advantages to customers and their ISV partners. Key benefits include:
• Providing a comprehensive and integrated platform for distributed applications, freeing developers from the burden of building the required infrastructure or assembling it piecemeal.
• Interoperating easily with existing enterprise applications and legacy systems to extend current investments.
• Making it faster and easier to build distributed applications by providing a pervasive component model, extensive prebuilt application services, and a wide choice of programming languages and tools.
Windows DNA applications have proven themselves in a wide range of circumstances, and the value they represent in the modern distributed computing environment has been thoroughly demonstrated. They have, however, also shown themselves to require careful planning and thorough testing throughout the development process. Avoiding the kinds of mistakes noted in this article should reduce the amount of resources required to produce the kind of Windows DNA application you want. Performance and load testing is unavoidable. Do it in a manner that simulates real-world conditions for your particular application, and you'll be rewarded with an n-tier application that works and works well.

References
1. microsoft.com
2. bengalcore.com
3. msdn.com

CONTENTS
• INTRODUCTION
• MICROSOFT WINDOWS DISTRIBUTED INTERNET APPLICATION
• TOP WINDOWS DNA PERFORMANCE MISTAKES AND HOW TO PREVENT THEM
• FEATURES AND ADVANTAGES OF WINDOWS DNA
• CONCLUSION
• REFERENCES


ACKNOWLEDGMENT

I express my sincere thanks to Prof. M.N Agnisarman Namboothiri (Head of the Department, Computer Science and Engineering, MESCE) and Mr. Sminesh (staff in charge) for their kind co-operation in presenting the seminar.
I also extend my sincere thanks to all other members of the faculty of Computer Science and Engineering Department and my friends for their co-operation and encouragement.
Fahmida Mohammed
INTRODUCTION


The increased presence of Internet technologies is enabling global sharing of information, not only by small and large businesses but by individuals as well. The Internet has sparked new creativity in many, resulting in new businesses popping up overnight, running 24 hours a day, seven days a week. Competition and the increased pace of change are creating ever-increasing demand for an application platform that lets developers build and rapidly deploy highly adaptive applications in order to gain strategic advantage.
Introducing Windows DNA: Framework for a New Generation of Computing Solutions
Windows DNA refers to the Windows Distributed interNet Application architecture, launched by Microsoft. Windows DNA is essentially a "blueprint" that enables corporate developers and independent software vendors (ISVs) to design and build distributed business applications using technologies inherent to the Windows platform; it consists of a conceptual model and a series of guidelines to help developers make the right choices when creating new software applications. Applications based on Windows DNA will be deployed primarily by businesses, from small companies to large enterprise organizations. Consumers are likely to use many of the applications built to take advantage of Windows DNA, such as electronic commerce Web sites and online banking applications.
A major force driving the need for Windows DNA is the Internet, which has dramatically changed the computing landscape. Five years ago, the process of developing programs used by one person on one computer was relatively straightforward. By contrast, some of today's most powerful applications support thousands of simultaneous users, need to run 24 hours a day, and must be accessible from a wide variety of devices, from handheld computers to high-performance workstations. To meet these demanding requirements, application developers need adequate planning tools and guidance on how to incorporate the appropriate technologies. The Windows DNA architecture addresses this need.
Microsoft Windows Distributed interNet Applications Architecture (Windows DNA) is Microsoft's framework for building a new generation of highly adaptable business solutions that enable companies to fully exploit the benefits of the Digital Nervous System. Windows DNA is the first application architecture to fully embrace and integrate the Internet, client/server, and PC models of computing for a new class of distributed computing solutions. Using the Windows DNA model, customers can build modern, scalable, multitier business applications that can be delivered over any network. Windows DNA applications can improve the flow of information within and outside the organization, are flexible enough to change as business needs evolve, and can be easily integrated with existing systems and data. Because Windows DNA applications leverage deeply integrated Windows platform services that work together, organizations can focus on delivering business solutions rather than on being systems integrators.
Guiding Principles of Windows DNA:
Web computing without compromise. Organizations want to create solutions that fully exploit the global reach and "on demand" communication capabilities of the Internet, while empowering end users with the flexibility and control of today's PC applications. In short, they want to take advantage of the Internet without compromising their ability to exploit advances in PC technology.

Interoperability. Organizations want the new applications they build to work with their existing applications and to extend those applications with new functionality. They require solutions that adhere to open protocols and standards so that other vendor solutions can be integrated. They reject approaches that force them to rewrite the legions of applications still in active use today and the thousands still under development.
True integration. In order for organizations to successfully deploy truly scalable and manageable distributed applications, key capabilities such as security, management, transaction monitoring, component services, and directory services need to be developed, tested, and delivered as integral features of the underlying platform. On many other platforms, these critical services are provided as piecemeal, non-integrated offerings, often from different vendors, which forces IT professionals to function as system integrators.
Lower cost of ownership. Organizations want to provide their customers with applications that are easier to deploy and manage, and easier to change and evolve over time. They require solutions that do not involve intensive effort and massive resources to deploy into a working environment, and that reduce their cost of ownership on both the desktop and server administration sides.
Faster time to market. Organizations want to be able to achieve all of the above while meeting tight application delivery schedules, using mainstream development tools, and without the need for massive re-education or a "paradigm shift" in the way they build software. The platform should expose services and functionality through the underlying "plumbing" to reduce the amount of code developers must write.
Reduced complexity. Integrate key services directly into the operating system and expose them in a unified way through COM components. Reduce the need for information technology (IT) professionals to function as system integrators so they can focus on solving the business problem.





Windows DNA Architecture
Windows DNA architecture employs standard Windows-based services to address the requirements of each tier in the multitiered solution: user interface and navigation, business logic, and data storage. The services used in Windows DNA, which are integrated through the Component Object Model (COM), include:
1. Dynamic HTML (DHTML)
2. Active Server Pages (ASP)
3. COM components
4. Component Services
5. Active Directory Services
6. Windows security services
7. Microsoft Message Queuing
8. Microsoft Data Access Components

The most commonly used standards in the Web Services world are:
1. XML Schema: For message data typing and structuring. It allows the sending and receiving parties to define a common vocabulary so that messages can be interchanged and understood.
2. WSDL: For associating messages and message exchange patterns (logic interface) with service names and network addresses (endpoints acting as physical interface).
3. WS-Addressing: For including endpoint addressing and reference properties associated with endpoints. Many of the other extended specifications require WS-Addressing support for defining endpoints and reference properties in communication patterns.
4. WS-Policy: For associating quality of service requirements with a WSDL definition. WS-Policy is a framework that includes policy declarations for various aspects of security, transactions, and reliability.
5. WS-Security: For providing message integrity, authentication and confidentiality, security token exchange, message session security, security policy expression, and security for a federation of services within a system.
6. WS-Metadata Exchange: For querying and discovering metadata associated with a Web service, including the ability to fetch a WSDL file and associated WS-Policy definitions.
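To make the roles of these standards concrete, here is a minimal, hypothetical WSDL 1.1 fragment. All names (CustomerService, GetCustomer, the example.org namespace) are illustrative, and the binding section is elided; it only sketches how messages, the logical interface (portType), and the physical endpoint (service/port) fit together:

```xml
<definitions name="CustomerService"
             targetNamespace="http://example.org/customer"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.org/customer"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <!-- Abstract message: the data being exchanged, typed via XML Schema -->
  <message name="GetCustomerRequest">
    <part name="customerId" type="xsd:string"/>
  </message>
  <!-- Logical interface: operations and their message exchange pattern -->
  <portType name="CustomerPortType">
    <operation name="GetCustomer">
      <input message="tns:GetCustomerRequest"/>
    </operation>
  </portType>
  <!-- Physical endpoint: service name bound to a network address
       (the tns:CustomerBinding definition is omitted for brevity) -->
  <service name="CustomerService">
    <port name="CustomerPort" binding="tns:CustomerBinding">
      <soap:address location="http://example.org/customer"/>
    </port>
  </service>
</definitions>
```

Specifications such as WS-Addressing and WS-Policy then attach endpoint references and quality-of-service requirements to definitions like this one.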
The challenge was to design a system that would allow the actual presentation layer to run on distributed servers but deliver the data required to produce that presentation using Web services technology. Because of this design, the system is inherently scalable at the front end. There is no single point of failure for the Web presentation layer. In fact, since most customers use hosting services that employ multiple front-end servers with failover and round-robin DNS, we get the benefit of their existing hosts' redundancy and scalability.

This architecture allowed us to focus on the next layer of the distributed architecture. Many architects use the term "business services layer" to describe the layer of services that sits behind the presentation services in a 3-tier system.


Microsoft has added several key aspects to the architecture with Windows 2000. This section contains:
1. Component Services
2. Dynamic HTML (DHTML)
3. Windows Script Components
4. XML (Extensible Markup Language)
5. Active Directory Service Interfaces and IIS
COM allows developers to create complex applications using a series of small software objects.
COM also offers the advantage of programming language independence. That means developers can create COM components using the tools and languages they're familiar with, such as Visual Basic, C, C++ and Java. An easy way to look at it is that COM serves as the glue between the tiers of the architecture, allowing Windows DNA applications to communicate in a highly distributed environment.

DNA - Architecture for Distributed Applications
The following picture shows the different pieces within the DNA architecture, and how they work together:
Server machine:- Placing your business objects on the server increases your control over the entire application, and over configuration issues. It also increases security aspects of the system, and reduces the client-side software footprint.

Central Database:- By keeping all data in a central location, you open up opportunities for data sharing between clients and for central reporting. Business objects need only a central point of entry into the data store.

Active Server Pages:- ASP is a server-side scripting environment supported by IIS, combining HTML, script (typically VBScript), and COM components. The scripts run on the web server, and their output is returned to the client as HTML. ASP provides default components for interacting with the server or with a database.

Dynamic HTML:-This extension to the HTML standard provides precise placement of objects on the screen, data binding, effects, and dynamic modification capabilities.

Custom Graphics:-Graphics and presentation are the final piece to this puzzle. A consistent GUI provides customers with a pleasing means of interfacing with your application.

Cooperating Components:-Microsoft's Windows DNA strategy rests on Microsoft's vision of cooperating components that are built based on the binary standard called the Component Object Model (COM). COM is the most widely used component software model in the world, available on more than 150 million desktops and servers today. It provides the richest set of integrated services, the widest choice of easy-to-use tools, and the largest set of available applications. In addition, it provides the only currently viable market for reusable, off-the-shelf client and server components.
COM enables software developers to build applications from binary software components that can be deployed at any tier of the application model. These components provide support for packaging, partitioning, and distributed application functionality. COM enables applications to be developed with components by encapsulating any type of code or application functionality, such as a user interface
control or line of business object. A component may have one or more interfaces; each exposes a set of methods and properties that can be queried and set by other components and applications. For example, a customer component might expose various properties such as name, address, and telephone number.
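COM itself is a language-independent binary standard, so it cannot be shown directly here; the following Python sketch is only a portable analogue of the idea described above: a component exposes one or more interfaces whose properties and methods clients program against. The Customer names mirror the example in the text and are otherwise hypothetical.

```python
from abc import ABC, abstractmethod

# Portable analogue of a COM interface: an abstract contract of
# properties that any implementing component must expose.
class ICustomer(ABC):
    @property
    @abstractmethod
    def name(self): ...

    @property
    @abstractmethod
    def telephone(self): ...

# A concrete component implementing the interface. In COM this would
# be a binary object reached through QueryInterface; here it is a class.
class CustomerComponent(ICustomer):
    def __init__(self, name, telephone):
        self._name, self._telephone = name, telephone

    @property
    def name(self):
        return self._name

    @property
    def telephone(self):
        return self._telephone

# Client code depends only on the interface, never the concrete class,
# so the component behind it can be replaced or redeployed at any tier.
def describe(customer: ICustomer) -> str:
    return f"{customer.name} ({customer.telephone})"
```

The point of the sketch is the separation: `describe` would keep working against any other component that honors the `ICustomer` contract.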

Client Environments and Presentation Tier:-Today, many application developers using cooperating components target the development of their applications to the Windows platform to take full advantage of the rich user interface Windows has to offer. Likewise, customers have come to expect a rich, highly functional user interface from their applications. The extended reach of information and services to customers that the Internet has enabled has created a new challenge for the application developer. The application developer today must develop a user interface that is distributable, available on Windows and non-Windows platforms, and supports a wide range of client environments, from handheld wireless devices to high-end workstations. Yet, applications must be rich with features to stay competitive and maintain the functionality that customers have come to expect.


The business logic tier is the heart of the application, where the application-specific processing and business rules are maintained. Business logic placed in components bridges the client environments and the data tier. The Windows DNA application platform has been developed through years of innovation in supporting high-volume, transactional, large-scale application deployments, and provides a powerful run-time environment for hosting business logic components. The application platform for developing Windows DNA applications includes Web services, messaging services, and component services.

Web Services
Integrated with Microsoft's application platform is a high-performance gateway to the presentation tier. Microsoft's Internet Information Server (IIS) enables the development of Web-based business applications that can be extended over the Internet or deployed over corporate intranets. With IIS, Microsoft introduced a new paradigm to the Internet: transactional applications. Transactions are the plumbing that makes it possible to run real business applications with rapid development, easy scalability, and reliability.
Microsoft broadened COM's applicability beyond the desktop application to also include distributed applications by introducing Microsoft Transaction Server (MTS). MTS was an extension to the COM programming model that provided services for the development, deployment, and management of component-based distributed applications. MTS was a foundation of application platform services that facilitated the development of distributed applications for the Windows platform in a much simpler, more cost-effective manner than other alternatives. COM+ is the next evolutionary step of COM and MTS. The unification of the programming models inherent in COM and MTS services makes it easier to develop distributed applications by eliminating the tedious nuances associated with developing, debugging, deploying, and maintaining an application that relies on COM for certain services and MTS for others. The benefit to the application developer is faster, easier, and ultimately cheaper development of distributed applications, achieved by reducing the amount of code required to leverage underlying system services.
To continue to broaden COM and the services offered today in MTS 2.0, COM+ consists of enhancements to existing services as well as new services to the application platform. They include:

• Bring your own transaction. COM components are able to participate in transactions managed by non-COM+ transaction processing (TP) environments that support the Transaction Internet Protocol (TIP).
• Expanded security. Support for both role-based security and process-access-permissions security. In the role-based security model, access to various parts of an application is granted or denied based on the logical group, or role, that the caller has been assigned to (for example, administrator, full-time employee, or part-time employee). COM+ expands on the current implementation of role-based security by including method-level security for both custom and IDispatch(Ex)-based interfaces.
• Centralized administration. The Component Services Explorer, a replacement for today's MTS Explorer and DCOMCNFG, presents a unified administrative model, making it easier to deploy, manage, and monitor multitiered applications by eliminating the overhead of using numerous individual administration tools.
• Queued components. For asynchronous, deferred execution when cooperating components are disconnected. This is in addition to the session-based, synchronous client/server programming model, in which the client maintains a logical connection to the server.
• Event notification. For times when a loosely coupled event notification mechanism is desirable, COM+ Events is a unicast/multicast, publish/subscribe event mechanism that allows multiple clients to "subscribe" to events that are "published" by various servers. This is in addition to the existing event notification framework delivered with connection points.
• Load balancing. Load balancing allows component-based applications to distribute their workload across an application cluster in a client-transparent manner.
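The publish/subscribe pattern behind COM+ Events can be sketched in a few lines of portable Python. This is an analogue of the mechanism, not the COM+ API itself; event names and payloads are hypothetical:

```python
# Minimal publish/subscribe sketch in the spirit of COM+ Events:
# subscribers register interest by event name, and one publish call
# fans out to every current subscriber. The publisher and subscribers
# never hold direct references to each other (loose coupling).
class EventSystem:
    def __init__(self):
        self._subscribers = {}   # event name -> list of handler callables

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        # The publisher does not know who, or how many, are listening.
        for handler in self._subscribers.get(event, []):
            handler(payload)
```

A component "publishes" by calling `publish("order.created", data)`, and any number of independently deployed subscribers receive the notification, which is the contrast the text draws with tightly coupled connection points.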
Microsoft Message Queue Server (MSMQ) provides loosely coupled and reliable network communications services based on a messaging queuing model. MSMQ makes it easy to integrate applications by implementing a push-style business event delivery environment between applications, and to build reliable applications that work over unreliable but cost-effective networks. MSMQ also offers seamless interoperability with other message queuing products, such as IBM's MQSeries, through products available from Microsoft's independent software vendor (ISV) partners.
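The store-and-forward model MSMQ provides can be illustrated with a portable Python sketch (an analogue using the standard library, not the MSMQ API): the sender returns immediately after enqueuing, and the receiver drains the queue whenever it is available, so the two never need to be connected at the same time.

```python
import queue

# Store-and-forward messaging sketch in the spirit of MSMQ: delivery is
# deferred until the receiver chooses to read, which is what lets
# applications cooperate over unreliable or intermittent networks.
class MessageQueue:
    def __init__(self):
        self._q = queue.Queue()

    def send(self, message):
        self._q.put(message)          # returns at once; no receiver required

    def receive_all(self):
        # Drain everything that accumulated while the receiver was away.
        messages = []
        while not self._q.empty():
            messages.append(self._q.get())
        return messages
```

Messages survive in the queue between sends and reads, preserving order, which mirrors the push-style business event delivery the text describes.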

WINDOWS DNA Universal Data Access
Universal Data Access is Microsoft's strategy for providing access to information across the enterprise. Today, companies building database solutions face a number of challenges as they seek to gain maximum business advantage from the data and information distributed throughout their corporations. Universal Data Access provides high-performance access to a variety of information sources, including relational and nonrelational data, and an easy-to-use programming interface that is tool and language independent.

The foundation for developing enterprise data interoperability solutions on the Microsoft® Windows® platform is Microsoft Windows Distributed interNet Applications Architecture (Windows DNA). Windows DNA, which is based on the widely used Component Object Model (COM), specifies how to do the following:
• Develop robust, scalable, distributed applications using the Windows platform.
• Extend existing data and external applications to support Internet operations.
• Support a wide range of client devices maximizing the reach of an application.

Figure 1. Microsoft Windows DNA Architecture
Interoperability and reuse are key attributes of Windows DNA. Unlike traditional software development, which required each application to be built from scratch, the Component Object Model (COM) enables developers to create complex applications using a series of small software objects (COM components). For example, a component might be a credit card authorization procedure or the business rules for calculating shipping costs. The COM programming model speeds up the development process by enabling multiple development teams to work on different parts of an application simultaneously.
COM also offers the advantage of programming language independence. This means that Windows developers can create COM components using tools and languages with which they are familiar, such as Microsoft Visual Basic® and Visual C++®. For non-Windows programmers, including mainframe COBOL developers and Web publishers, COM components can be accessed from simple scripting languages, such as VBScript and JScript® development software. Windows DNA simplifies development by providing access to a wide range of services and products developed using a consistent object model: COM.
One example of the services available is what we call COM services for interoperability. COM services for interoperability include the network, data, application, and management services that are part of existing Microsoft products, such as Microsoft SNA Server. COM services for interoperability provide a common approach to system integration using the wide range of COM components available today.
An Interoperability Framework
Microsoft has defined a four-layer framework for interoperability based on industry standards for Network, Data, Applications, and Management (NDAM for short). Microsoft provides access to interoperability components in each of these four categories. This document focuses on the Data interoperability layer, providing an overview of the wide range of COM components available for accessing multiple data stores across an enterprise environment.

Figure 2. Microsoft Interoperability Framework
Enterprises run their daily operations relying on multiple data sources, including database servers, legacy flat-file records, e-mail correspondence, personal productivity documents (spreadsheets, reports, presentations), and Web-based information publishing servers. Typically, applications, end-users, and decision-makers access these data sources by employing a variety of nonstandard interfaces. Data interoperability standards offer the transparent and seamless ability to access and modify data throughout the enterprise. Microsoft's data interoperability strategy is called Universal Data Access. Universal Data Access uses COM objects to provide one consistent programming model for access to any type of data, regardless of where that data may be found in the enterprise.
An easy-to-use programming architecture that is both tool and language independent, Universal Data Access provides COM objects for high-performance access to a variety of relational (SQL) and nonrelational information sources. The technologies that make up the Universal Data Access strategy enable you to integrate diverse data sources, create easy-to-maintain solutions, and use your choice of best-of-breed tools, applications, and platform services.
To leverage existing investments, Universal Data Access does not require expensive and time-consuming movement of data into a single data store, nor does it require commitment to a single vendor's data products. Universal Data Access is based on open industry specifications with broad industry support, and works with all major established database platforms.

Figure 3. Universal Data Access Architecture
The Microsoft Data Access Components (MDAC) are the key technologies that enable Universal Data Access. Data-driven client/server applications deployed over the Web or a LAN can use these components to easily integrate information from a variety of sources, both relational and nonrelational. These technologies include Open Database Connectivity (ODBC), OLE DB, and Microsoft ActiveX® Data Objects (ADO).
ODBC
Open Database Connectivity is an industry standard and a component of Microsoft Windows Open Services Architecture (WOSA). The ODBC interface makes it possible for applications to access relational data stored in almost any database management system (DBMS).
Microsoft's ODBC industry-standard data access interface continues to provide a unified way to access relational data as part of the OLE DB specification. ODBC is a widely accepted application programming interface (API) for database access. It is based on the Call-Level Interface (CLI) specifications from X/Open and ISO/IEC for database APIs and uses Structured Query Language (SQL) as its database access language. Microsoft has implemented a number of ODBC drivers to access diverse data stores. ODBC is widely supported by Microsoft, third party application development products, and end-user productivity applications.
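ODBC itself is a C call-level interface reached through a driver manager and a DBMS-specific driver, so it cannot be exercised here directly. The Python sketch below uses the standard library's sqlite3 module as a stand-in DBMS to show the pattern a call-level interface gives applications: connect, execute SQL, fetch results, with the storage engine hidden behind the API.

```python
import sqlite3

# Stand-in for ODBC-style data access: the application sees only a
# uniform connect / execute / fetch surface, not the engine beneath it.
# With real ODBC, the DSN would select a driver; here it is a sqlite3
# database path (":memory:" for a throwaway in-memory database).
def run_query(dsn, sql, params=()):
    conn = sqlite3.connect(dsn)
    try:
        cur = conn.execute(sql, params)   # parameterized SQL, CLI-style
        return cur.fetchall()
    finally:
        conn.close()
```

Swapping the database behind `run_query` would not change the calling code, which is exactly the portability a standardized call-level interface is meant to provide.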

Figure 4. ODBC Architecture
Microsoft offers a number of ODBC drivers as part of the Microsoft Data Access Components, which ships as a feature of many popular Microsoft products, including Microsoft SQL Server, Microsoft Office, the Microsoft BackOffice® family of products, Microsoft SNA Server, and Microsoft Visual Studio®. The following ODBC drivers are included in MDAC version 2.1:
• Microsoft ODBC Driver for SQL Server.
• Microsoft ODBC Driver for Oracle.
• Microsoft ODBC Driver for Microsoft Visual FoxPro®.
• Microsoft ODBC Driver for Access (Jet engine).
Additionally, Microsoft SNA Server 4.0 with Service Pack 2 ships with a Microsoft ODBC Driver for DB2.
A number of third-party ISVs offer ODBC drivers for many data sources.
In addition, OLE DB includes a bridge to ODBC to enable continued support for the broad range of ODBC relational database drivers available today. The Microsoft OLE DB Provider for ODBC leverages existing ODBC drivers, ensuring immediate access to databases for which an ODBC driver exists, but for which an OLE DB provider has not yet been written.
Note JDBC is a technology for accessing SQL database data from a Java client program. Microsoft offers a JDBC-to-ODBC bridge that allows Java programmers to access back-end data sources using available ODBC drivers. The Microsoft JDBC-ODBC bridge is part of the core set of classes that come with the Microsoft Virtual Machine (Microsoft VM) for Java.
OLE DB
OLE DB is a strategic system-level programming interface to data across the organization. OLE DB is an open specification designed to build on the success of ODBC by providing an open standard for accessing all kinds of data. Whereas ODBC was created to access relational databases, OLE DB is designed for relational and nonrelational information sources.

Figure 5. OLE DB Components
OLE DB encapsulates various database management system functions that enable the creation of software components implementing such services. OLE DB components consist of data providers, which contain and expose data; data consumers, which use data; and service components, which gather and sort data (such as query processors and cursor engines). OLE DB interfaces are designed to help diverse components integrate smoothly so that OLE DB component vendors can bring high-quality OLE DB products to market quickly.
OLE DB data providers
OLE DB data providers implement a set of core OLE DB interfaces that offer basic functionality. This basic functionality enables other OLE DB data providers, service components, and consumer applications to interoperate in a standard, predictable manner. The MDAC Software Development Kit (SDK) includes a set of OLE DB conformance tests that OLE DB component vendors, as well as end-user consumer developers, can run to ensure a standard level of compatibility. In addition, data providers can implement extended functionality as appropriate for a particular data source.
OLE DB data consumers
OLE DB data consumers can be any software programs that require access to a broad range of data, including development tools, personal productivity applications, database products, or OLE DB service components. A major set of OLE DB data consumers are ActiveX Data Objects (ADO), which provide a means to develop flexible and efficient data interoperability solutions using such high-level programming languages as Visual Basic.
OLE DB service components
OLE DB service components implement functionality not natively supported by some simple OLE DB data providers. For example, some basic OLE DB providers do not support rich sorting, filtering, and finding on their data sources. OLE DB service components, such as the Microsoft Cursor Service for OLE DB and the Microsoft Data Shaping Service for OLE DB, can seamlessly integrate with these basic OLE DB data providers to complete the functionality desired or expected by a given OLE DB consumer application. Universal Data Access allows for the development of generic OLE DB consumer applications that access many data sources in a single, uniform manner. Enterprise developers can write COM components that perform a specific function against nonspecific data sources. Such a component might run against a VSAM data set today and run against a SQL Server table tomorrow. This allows enterprises to migrate from one data store to another as efficiencies allow or business needs require.
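The "generic consumer against nonspecific data sources" idea above can be sketched in portable Python (an analogue of the OLE DB provider/consumer split, not the OLE DB interfaces themselves; the provider classes are hypothetical stand-ins for a VSAM data set and a SQL table):

```python
# Provider/consumer sketch in the spirit of OLE DB: consumers are
# written against an abstract provider contract, so the same consumer
# logic runs unchanged when the data store behind it is swapped.
class Provider:
    def rows(self):
        raise NotImplementedError

class ListProvider(Provider):           # stand-in for, say, a VSAM data set
    def __init__(self, data):
        self._data = data
    def rows(self):
        return list(self._data)

class DictProvider(Provider):           # stand-in for a SQL table
    def __init__(self, table):
        self._table = table
    def rows(self):
        return [tuple(r.values()) for r in self._table]

# Generic consumer: depends only on the Provider contract.
def count_rows(provider: Provider) -> int:
    return len(provider.rows())
```

Migrating from one store to another then means supplying a different conforming provider, while `count_rows` and any other generic consumers stay untouched.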
Resource pooling is another popular function provided by OLE DB service components. When running under Microsoft Transaction Server, Microsoft Internet Information Server, or on standalone basis, OLE DB and ADO applications can make use of OLE DB resource pooling, supported by the OLE DB core services, to enable reuse of OLE DB Data Source proxy objects. Typically, the OLE DB session start-up, from instantiation of the OLE DB data source object (DSO) to creating the underlying network connection to the data source, is the most expensive part of a given transaction or unit of work. This is a critical issue when developing Web-based or multi-tier applications. Maintaining connections to the database in the resource state of the middle-tier component can create scalability issues. Creating a new connection on every page of a Web application is too slow. The solution is OLE DB resource pooling, which enables better scalability and offers better performance.
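The economics of resource pooling described above can be made concrete with a small portable sketch (an analogue, not the OLE DB pooling service): connection setup is the expensive step, so released connections are kept idle and handed back out rather than rebuilt on every request.

```python
# Resource-pooling sketch: acquire() reuses an idle connection when one
# exists, and only falls back to the expensive factory when the pool is
# empty. `created` counts how many real connections were ever built.
class ConnectionPool:
    def __init__(self, factory):
        self._factory = factory
        self._idle = []
        self.created = 0

    def acquire(self):
        if self._idle:
            return self._idle.pop()   # reuse: skips costly session start-up
        self.created += 1
        return self._factory()

    def release(self, conn):
        self._idle.append(conn)       # keep for the next caller
```

In a Web application, this is why per-page connection creation can be avoided: each page acquires from the pool, uses the connection briefly, and releases it, so a handful of real connections serve many requests.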
ADO
Microsoft ActiveX Data Objects (ADO) are a strategic application-level programming interface to data and information wrapped around OLE DB. ADO provides consistent, high-performance access to data and supports a variety of development needs, including the creation of front-end database clients and middle-tier business objects that use applications, tools, languages, or Internet browsers. ADO is designed to be the one data interface needed for single and multi-tier client/server and Web-based data-driven solution development. Its primary benefits are ease of use, high speed, low memory overhead, and a small disk footprint.
ADO provides an easy-to-use programming interface to OLE DB, because it uses a familiar metaphor: the COM automation interface, accessible from all leading Rapid Application Development (RAD) tools, database tools, and languages (including scripting languages).

Figure 6. ADO Object Model
ADO uses a flatter and more flexible object model than any previous object-oriented data access technology. Any of the five top-level objects can exist independent of the other four. Unlike DAO, which required constructing a hierarchical chain of objects before accessing data, ADO can be used to access data with just a couple of lines of code.
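The "flat" object model can be illustrated with a hypothetical Python sketch (an analogue, not the ADO COM objects): the top-level Recordset is created and opened directly with a connection string, with no chain of parent objects to construct first.

```python
# Flat object model in miniature: a top-level Recordset that can be
# opened directly from a SQL command and a connection string. In a
# hierarchical model (like DAO), you would first have to build an
# engine, workspace, and database object chain to get here.
class Recordset:
    def __init__(self):
        self.rows, self.source = [], None

    def open(self, sql, connection_string):
        # A real provider would connect and execute; this sketch only
        # records the request and fabricates one illustrative row.
        self.source = connection_string
        self.rows = [(sql,)]
        return self

# "A couple of lines of code" to get at data, in the flat style:
rs = Recordset().open("SELECT * FROM Customers", "Provider=Example")
```

The design choice is convenience: because each top-level object stands alone, scripting clients can reach data with minimal ceremony.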
The secret of ADO's strength is the fact that it can connect to any OLE DB provider and still expose the same programming model, regardless of the specific features offered by a particular provider. However, because each provider is unique in its implementation, how your application interacts with ADO may vary slightly when run against different data providers. Some common differences include ADO connection strings, command execution syntax, and supported data types.

Microsoft offers a number of technologies that facilitate interoperability with data stored on non-Windows systems. Developers can create solutions that take advantage of a large installed base of information sources while working in a familiar environment. The following is a collection of some of the Microsoft technologies available today that provide this capability.
Microsoft OLE DB Provider for DB2
The Microsoft OLE DB Provider for DB2 allows application developers to use familiar object-oriented programming techniques to access DB2 databases over SNA LU6.2 and TCP/IP networks, without requiring knowledge of SNA APPC programming.
Implemented using the open protocol of the Distributed Relational Database Architecture (DRDA), the Microsoft OLE DB Provider for DB2 supports access to remote DB2 data using industry-standard Structured Query Language (SQL) statements. Because the provider is part of Microsoft's universal data access strategy, which is based on OLE DB and ADO, it can interoperate with OLE DB-aware tools and applications in Microsoft Visual Studio 6.0, Microsoft SQL Server 7.0, and Microsoft Office 2000 Developer Edition. These generic consumer applications rely on this OLE DB provider's compatibility as verified using the OLE DB conformance tests. A key requirement is that the provider support the IDBSchemaRowset object. IDBSchemaRowset provides the consumer with a means to query the data source's metadata that describes the target table. Using this schema information, the generic consumer can intelligently process and display the result sets of queries.

Figure 7. OLE DB Provider for DB2 connecting to DB2 for MVS
At run time, generic consumers use IDBSchemaRowset information to make choices on how to behave with a back-end data source. Other information available to generic consumers at run time includes IDBInfo data source-supported keywords and literal information. Typically, consumer applications are designed to modify their behavior based on an expected set of values returned in IDBSchemaRowset and IDBInfo based on well-defined rules in the OLE DB specification. Additionally, the OLE DB Provider for DB2 publishes the data types, including precision and scale limits, in the form of standard OLE DB IDBSchemaRowset DBSCHEMA_PROVIDER_TYPES.
Another use of IDBSchemaRowset data is to populate a list of tables and columns available in the data source's current collection. The OLE DB Provider for DB2 maps IDBSchemaRowset to DB2 system table information. For example, DBSCHEMA_TABLES are provided using information stored in DB2 for OS/390 SYSIBM.SYSTABLES and DB2 for OS/400 QSYS2.SYSTABLES tables. DBSCHEMA_COLUMNS are mapped to DB2 for OS/390 SYSIBM.SYSCOLUMNS and DB2 for OS/400 QSYS2.SYSCOLUMNS information. The OLE DB provider queries these DB2 system tables at run time, returning the data on calls to IDBSchemaRowset TABLES and COLUMNS.
Two examples of this usage:
• The Visual Studio Data Designer, which allows developers to preview DB2 tables and drag these tables and columns into the query designer.
• The SQL Server Data Transformation Services, which offers the end user an intuitive wizard for picking tables for bulk movement of data between DB2 and any other OLE DB or ODBC data source.
OLE DB providers implement a default read-only, forward-only server cursor that is mapped to the data source's cursor engine whenever possible. In the case of the OLE DB Provider for DB2, the server cursor offered is a forward-only updateable cursor. A server cursor offers the developer the most efficient means to traverse tables on the data source. Some generic consumers may expect and request a scrollable cursor when, as in the case of the OLE DB Provider for DB2, only a forward-only cursor is offered by the provider. In these cases, the generic consumer can request a client-side cursor that is implemented on behalf of the provider by the Microsoft Cursor Service for OLE DB. In ADO, a developer can request the Client Cursor Engine (CCE) simply by setting the Recordset's CursorLocation property to adUseClient before opening it.
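A hedged Visual Basic sketch of that request (the connection string and table name are placeholders, not a working configuration):

```vb
' Hedged sketch: ask the Microsoft Cursor Service for OLE DB to supply a
' client-side scrollable cursor on top of the provider's forward-only cursor.
Dim cn As New ADODB.Connection
Dim rs As New ADODB.Recordset

cn.Open "Provider=DB2OLEDB;..."      ' placeholder DB2 connection string
rs.CursorLocation = adUseClient      ' engage the ADO Client Cursor Engine
rs.Open "SELECT * FROM SAMPLE.STAFF", cn, adOpenStatic, adLockReadOnly

rs.MoveLast                          ' scrolling now works, client-side
Debug.Print rs.RecordCount
```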
Distributed Query Processor
The Distributed Query Processor (DQP) feature of Microsoft SQL Server 7.0 enables application developers to develop heterogeneous queries that join tables in disparate databases. To access the remote data sources, a user must create a Linked Server definition. The Linked Server encapsulates all of the network, security, and data source-specific information required to connect DQP to the remote data. Linked servers rely on underlying OLE DB providers or ODBC drivers for the actual physical connection to the target data source. Once a linked server has been defined, it can always be referred to with a single logical name as part of a SQL Server dynamic SQL statement or stored procedure. At run time, a linked server resource, such as a remote DB2 table, is treated like a local SQL Server table.
When a client application executes a distributed query using a linked server, SQL Server analyzes the command and sends any requests for linked server data via OLE DB to that data source. If security credentials were specified when the linked server was created, those credentials are passed to the linked server. Otherwise, SQL Server will pass the credentials of the current SQL Server login. When SQL Server receives the returned data, it is processed in conjunction with other portions of the query.
DQP can concurrently access multiple heterogeneous sources on local and remote computers. Additionally, it supports queries against both relational and nonrelational data by using OLE DB interfaces implemented in OLE DB providers. Using DQP, SQL Server administrators and end user developers can create linked server queries that run against multiple data sources with little or no modifications required. For example, a Visual Basic developer can create a single linked server query that selects and inserts data stored in DB2 today, and then can change the linked server name and run the same query against Microsoft SQL Server tomorrow. In this way, DQP provides the developer with an isolation level against changes in the storage engine.
Further, DQP is an efficient tool with which to join information from multiple tables spanning multiple data sources. For example, let's say you are a regional sales manager for a large retail company with subsidiaries located in several countries. Because of mergers and acquisitions, some regional offices store their data in different databases from those of the corporate headquarters. The United Kingdom subsidiary stores its data in DB2; the Australian subsidiary stores its data in Microsoft Access; the Spanish subsidiary stores its data in Microsoft Excel; and the United States subsidiary stores its data in Microsoft SQL Server. You want a report that lists, on a quarterly basis for the last three years, the subsidiaries and the sales locations with the highest quarterly sales figures. Joining the required data tables can be accomplished in real time by using a single distributed query, running on Microsoft SQL Server.
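As a hedged sketch of how the DB2-based subsidiary could be attached and queried (the logical server name, catalog, and table names below are invented placeholders):

```sql
-- Hypothetical linked server over the OLE DB Provider for DB2
EXEC sp_addlinkedserver
    @server = 'UK_DB2',            -- placeholder logical name
    @srvproduct = 'Microsoft OLE DB Provider for DB2',
    @provider = 'DB2OLEDB',
    @datasrc = 'UKHOST';           -- placeholder data source

-- A four-part name joins the remote DB2 table with a local SQL Server table
SELECT s.SalesQuarter, s.Amount, l.LocationName
FROM UK_DB2.SALESDB.DBO.QUARTERLY_SALES AS s
JOIN dbo.Locations AS l ON l.LocationID = s.LocationID;
```

Because the linked server is referenced only by its logical name, the same query could later be pointed at a different back end by redefining the linked server, as the text describes.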
Why Visual Studio 6.0?
The Microsoft Visual Studio 6.0 development system is a comprehensive suite of industry-leading development tools for building business applications for Windows 2000 Server, including client/server, multitier, and Web-based solutions. Visual Studio 6.0 includes key enterprise and team development features designed to help developers rapidly build scalable distributed applications that can be easily integrated with existing enterprise systems and applications.
Building on a Successful Base
COM+ builds on the proven success of COM.
• COM is in use on 200 million systems worldwide.
• COM supports a vibrant component marketplace. The demand for third-party components based on COM has been estimated to be $410 million this year, with a projected 65 percent compound annual growth rate, and it is expected to grow to approximately $3 billion by 2001 (source: Giga Information Group). This base of available components allows developers to choose from a wide variety of components to assemble applications and solutions, which has revolutionized development on Windows platforms.
• COM supports thousands of available applications, including the highest-volume applications in the industry.
• Major system vendors, such as Hewlett-Packard Co., Digital Equipment Corp., and Siemens Nixdorf Information Systems Inc., have announced plans to ship COM on UNIX and non-UNIX systems within the year, and additional vendor commitments are expected to follow. In addition, Software AG has ported COM to many operating systems, including Solaris and MVS.
• COM consists of a well-defined, mature, and stable specification, as well as a reference implementation, which has been widely tested and adopted worldwide as a de facto standard.
• COM is supported by the largest number of development tools available for any component or object model on the market.
• COM+ enables the creation of the next generation of component-based applications, making it even easier to build and use components and providing richer, extensible services.
New Features in COM+
COM+ is an evolutionary extension to the Component Object Model (COM), the most widely used component technology in the world. COM+ makes it even easier for developers to create software components in any language, using any development tool. COM+ builds on the same factors that have made today's COM the choice of developers worldwide, including the following:
• The richest integrated services, including transactions, security, message queuing, and database access, to support the broadest range of application scenarios.
• The widest choice of tools from multiple vendors using multiple development languages.
• The largest customer base for customizable applications and reusable components.
• Proven interoperability with users' and developers' existing investments.

Host Integration Server 2000 solves the problem of integrating the Microsoft Windows operating system with other non-Windows enterprise systems running on platforms such as IBM mainframes, AS/400, and UNIX. By using the powerful and comprehensive bidirectional integration services of Host Integration Server 2000, developers are freed from platform boundaries and can build highly scalable, distributed applications that incorporate existing processes and data without requiring any recoding or "wrapping" of existing code. This allows businesses to quickly build new business-critical Windows DNA 2000 applications while preserving investment in best-of-breed and custom in-house developed solutions. Host Integration Server 2000 includes three levels of integration that let developers make the most of existing computing resources.

Each Enterprise Application Integration (EAI) project is unique and can include requirements for one or all of the following types of integration provided by Host Integration Server 2000:
• Network and Security Integration
Host Integration Server 2000 provides comprehensive managed host access, seamlessly connecting legacy host systems with client/server and Web networks. Utilizing the Windows 2000 Active Directory™ service, Host Integration Server integrates host-based security.
• Data Integration
Host Integration Server 2000 provides complete, secure access to enterprise data through object-oriented and programmatic access to relational DB2 data and flat-file data on mainframes, AS/400, UNIX, Windows 2000, and Windows NT® Server.
• Application Integration
Utilizing COM Transaction Integrator (COMTI), developers can build distributed applications that integrate Microsoft Transaction Server (MTS) with IBM host CICS and IMS transactions. Instead of learning the intricacies of host code, Web developers can use COMTI to wrap CICS and IMS transactions and expose them as COM objects available for use in distributed Web applications.
Host Integration Server 2000 also makes the most of Windows DNA 2000 core "plumbing code," such as security, transaction support, and queuing, freeing developers to create new functionality. The same core services reduce complexity for system administrators. Tight integration with Windows 2000 makes managing host and application access easier, less expensive, and more secure.
Introducing Microsoft Site Server 3.0
Microsoft® Site Server 3.0, a member of the Microsoft BackOffice® server family, provides a powerful alternative to the traditional methods of intranet development and maintenance. Site Server 3.0 is a powerful intranet server, optimized for the Microsoft Windows NT® Server operating system with Internet Information Server, that makes publishing and finding information easier and faster. By deploying Site Server 3.0, businesses can use the intranet to efficiently gather the collective expertise of the organization from wherever it resides (Web sites, databases, file servers, e-mail) and deliver it to facilitate the sharing of knowledge and to improve business productivity.
Site Server 3.0 includes a unique set of features that are designed to work together to optimize information sharing across the organization:
• Content Management provides a structured publishing process for multiple content authors to submit, tag, and edit content through a drag-and-drop Web interface. Site editors can then approve, edit, and enforce uniform guidelines for content.
• Content Deployment enables administrators to deploy content securely and robustly across multiple distributed servers.
• Search enables users to perform full-text and property searches across various stores and formats, including HTTP, file systems, Exchange files, and databases.
• Personalization & Membership provides easy authoring and targeting of personalized information based on user profiles and behaviors, and the ability to present search results in highly customizable user views and personalized Web pages.
• Push enables businesses to create delivery channels for Microsoft Internet Explorer 4.0. Channel agents, based on Microsoft Active Channel™ Server, enable intranet developers and administrators to create channels from databases and file systems as well as Search and Index Server. The Active Channel Multicaster saves valuable network bandwidth by using multicast technology to deliver channels.
• Knowledge Manager is a centralized Web-based application that integrates the Site Server knowledge-management features to enable users to easily browse, search, share, and subscribe to relevant information.
• Analysis transforms raw hits recorded in server log files into valuable information about the requests, visits, and users that interact with an intranet. This allows businesses to measure the effectiveness of an intranet. Analysis also captures content and site structure to identify issues, such as pages with long load times and out-of-date content.
A BizTalk-based document is an XML file that deploys the tags from a certain vocabulary and follows the rules that the organization has defined for that type of document. A BizTalk-based document is actually exchanged by two BizTalk servers across a network. In Figure 1 you can see the overall BizTalk architecture, illustrating the role of XML in exchanging data between commercial partners. Both parties continue to manage documents in their own native formats on their own platforms, but data moves back and forth, despite architectural differences, thanks to XML.

Figure 1 BizTalk Architecture
BizTalk is central to Windows DNA 2000, and will be one of the key tools that help build e-commerce solutions. More often than not, today's e-commerce solutions require integration with existing information systems and data residing on host machines.

MICROSOFT WINDOWS DISTRIBUTED INTERNET APPLICATIONS
Windows DNA: Building Windows Applications for the Internet Age
Windows DNA Technologies

• Windows DNA services are exposed in a unified way through COM for applications to use. These services include component management, Dynamic HTML, Web browser and server, scripting, transactions, message queuing, security, directory, database and data access, systems management, and user interface.

• Adhering to open protocols and published interfaces makes it easy to integrate other vendor solutions and provides broad interoperability with existing systems.

• Because Windows DNA is based on COM and open Internet standards, developers can use any language or tool to create compatible applications.

• Microsoft developed the Windows Distributed interNet Application Architecture (Windows DNA) as a way to fully integrate the Web with the n-tier model of development. Windows DNA defines a framework for delivering solutions that meet the demanding requirements of corporate computing, the Internet, intranets, and global electronic commerce, while reducing overall development and deployment costs.

FEATURES AND ADVANTAGES OF WINDOWS DNA


• DNA helps to design and build multi-tier client/server applications.
• DNA provides client transparency.
• DNA applications provide full transactional-processing support.
• DNA can be used to create applications that are fault tolerant.
• DNA is ideal for distributed applications.

The DNA methodology covers many existing technologies to help design and implement robust, distributed applications. It visualizes this whole application as a series of tiers, with the client at the top and the data store at the bottom. The core of DNA is the use of business objects in a middle tier of the application. Also, in DNA, business objects are implemented as software components. These components can be accessed by the client interface application or by another component, and can themselves call on other components, data stores, etc. Componentization of business rules brings many benefits, such as easier maintenance, encapsulation of the rules, protection of intellectual copyright, etc. Hence, DNA is an approach to design that can speed up overall development time, while creating more reliable and fault tolerant applications that are easily distributable over a whole variety of networks.

There are several more good reasons why companies should base their applications on Windows DNA. Because the architecture is built on open protocols and industry standards, solutions from other vendors integrate easily into the environment. This helps ensure interoperability with mission-critical business applications, such as corporate databases and enterprise resource planning systems. An open approach also facilitates compatibility with existing computing systems, which means that companies can continue to take advantage of their legacy systems rather than replacing them.









TOP WINDOWS DNA PERFORMANCE MISTAKES

Throughput refers to the amount of work (number of transactions) an application can perform in a measured period of time and is often calculated in transactions per second (tps). Scalability refers to the amount of change in linear throughput that occurs when resources are either increased or decreased. It is what allows an application to support anywhere from a handful to thousands of users, by simply adding or subtracting resources as necessary to "scale" the application. Finally, transaction time refers to the amount of time needed to acquire the necessary resources, plus the amount of time the transaction takes actually using these resources.

Producing a good n-tier application often entails a series of judgments in planning and implementing the final product. When those decisions are poorly made, development teams can encounter time-consuming (and often difficult to solve) performance problems after the application has been installed and implemented. Fortunately, many of these problems can be anticipated and prevented. This article shows you how to find and eliminate them early in the development process. The mistakes that follow were identified by Microsoft Consulting Services (MCS) consultants worldwide.

Misunderstanding the Relationship between Performance and Scalability
Performance and scalability are not the same, but neither are they at odds. For example, an application may process information at an incredibly fast rate as long as the number of users sending it information is less than 100. When that application reaches the point at which 10,000 users are simultaneously providing input, the performance may degrade substantially, because scalability wasn't high enough in the list of considerations during the development cycle. On the other hand, that same high-performance application may be partially rewritten in a subsequent iteration and have no problem handling 100,000 customers at one time. By then, however, a substantial number of customers may have migrated to a product someone else got right the first time.

Sometimes applications seek scalability in terms of number of concurrent users strictly through performance, with the idea being that the faster a server application runs, the more users can be supported on a single server. The problem with this approach is that increasing the number of simultaneous users may create a bottleneck that will actually reduce the level of performance as the load increases. One cause of this kind of behavior is caching state and data in the middle tier. By avoiding such caching in the design phase of the development process, countless hours of backtracking and rewriting code can be avoided.

Acquiring the necessary resources can be slowed by such factors as network latency, disk access speed, database locking scheme, and resource contention. Added to that are elements that can affect resource usage time, such as network latency, user input, and sheer volume of work. Windows DNA application developers should concentrate on keeping resource acquisition and resource usage times as low as possible.

Here are some ways to manage these factors:
• Avoid involving user interaction as part of a transaction.
• Avoid network interaction as part of a transaction.
• Acquire resources late and release them early.
• Make more resources available. Otherwise, use MTS to pool resources that are in short supply or are expensive to create.
• Use MTS to share resources between users, because it is usually more expensive to create a new resource than to reuse an existing one.
The application must also be scalable, so that the largest number of simultaneous users can be logged on without compromising throughput to an unacceptable level.

A middle tier in an n-tier application is necessarily complex because of the role it plays in the overall application. The specific tasks it performs can be separated into three general categories that are essential to Windows DNA applications. The first task involves receiving input from the presentation tier. This input can be done programmatically or may come directly from a user. It may include information about (or a request for) almost anything. Second, a middle tier is responsible for interacting with the data services to perform the business operations that the application was designed to automate. For example, this might include sorting and combining information from different mailing lists to target a specific audience that was never previously considered to be a cohesive group. Finally, a middle tier returns processed information to the presentation tier so it can be used however the program or user sees fit. Within these three areas, performance can degrade significantly when developers use programming practices that are either little understood or mistakenly embraced as the "right" thing to do.

Performing Data-Centric Work in a Middle Tier
Developers sometimes fall into the trap of including data-centered tasks with the business-services work in a middle tier instead of the data-services tier where they belong. Rules are frequently too rigid to account for all cases, but it would be very unusual to find a justification for breaking this one. If data-centered tasks are included in the middle tier, your Windows DNA application is likely to perform more poorly than it would otherwise. For example, it would be a mistake to retrieve multiple data sets from different tables and then join, sort, or search the data in middle-tier objects. The database is designed to handle this kind of activity, and moving it to a middle tier is almost certainly a bad practice.
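To make the example concrete (a hedged sketch; the table and column names are invented), the join, filter, and sort belong in a single SQL statement executed by the database engine, not in middle-tier code:

```sql
-- Hypothetical schema: let the database join, filter, and sort the rows,
-- instead of fetching both tables and combining them in middle-tier objects.
SELECT c.CustomerName, o.OrderDate, o.Total
FROM Customers AS c
JOIN Orders    AS o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '1999-01-01'
ORDER BY o.Total DESC;
```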

Even with the ever-increasing processor power available for database servers, poorly tuned indexes and queries can bring an otherwise robust system to its knees. It is quite common to see developers coding stored procedures or queries without consulting the database administrator (DBA), or even running a project with no DBA involvement.






CONCLUSION

The Windows DNA architecture and the Windows NT platform offer many distinct advantages to customers and their ISV partners. Key benefits include:
• Providing a comprehensive and integrated platform for distributed applications, freeing developers from the burden of building the required infrastructure or assembling it using a piecemeal approach.
• Interoperating easily with existing enterprise applications and legacy systems to extend current investments.
• Making it faster and easier to build distributed applications by providing a pervasive component model, extensive prebuilt application services, and wide support for programming languages and tools.

Windows DNA applications have proven themselves in a wide range of circumstances, and the value they represent in the modern distributed computing environment has been thoroughly demonstrated. They have, however, also shown themselves to require careful planning and thorough testing throughout the development process. Avoiding the kinds of mistakes noted in this article should reduce the amount of resources required to produce the kind of Windows DNA application you want. Performance and load testing is unavoidable. Do it in a manner that simulates real-world conditions for your particular application and you'll be rewarded with an n-tier application that works, and works well.

FUTURE SCOPE

Microsoft was compelled to release .NET. First and foremost, .NET is a framework that covers all the layers of software development above the operating system. It provides the richest level of integration among presentation technologies, component technologies, and data technologies ever seen on a Microsoft (or any other) platform. Second, the entire architecture has been created to make it as easy to develop Internet applications as it is to develop for the desktop. The .NET architecture also provides a wrapping of COM technologies.
More than 200,000 developers are working on .NET in India, and 64 percent of the developer community currently uses .NET.
INDIGO: The Future Technology for Building Distributed Applications
Indigo is Microsoft's unified programming model for building service-oriented applications with managed code. It enables developers to build secure, reliable, transacted Web services that integrate across platforms and interoperate with existing investments. Indigo combines and extends the capabilities of existing Microsoft distributed-systems technologies, including Enterprise Services, System.Messaging, and .NET Remoting, spanning cross-process, cross-machine, cross-subnet, and cross-intranet topologies, protocols, and security models. Indigo is built on and extends the .NET Framework 2.0, is part of the Windows release code-named Longhorn, and will also be made available on Windows XP and Windows Server 2003.

REFERENCES
• For more information on Universal Data Access, see microsoft.com/data/.
• For more on the COM specification, see the COM Web site at microsoft.com/com/.
• For additional information, see Q218590 INF: "Configuring Data Sources for the Microsoft OLE DB Provider (DB2)" at support.microsoft.com/search/default.asp.
• For more information on OLE DB, see microsoft.com/data/oledb/.
• For more information on ADO, see microsoft.com/data/ado/.
• For more information on Java, see microsoft.com/java/.
• For more information on ODBC, see microsoft.com/data/odbc/.
• For more information on these third-party offerings, see microsoft.com/sql/.