CLUSTER COMPUTING - FULL REPORT
Posted: 31-01-2010
ABSTRACT
A computer cluster is a group of loosely coupled computers that work together so closely that, in many respects, they can be viewed as a single computer. Clusters are commonly connected through fast local area networks. They are usually deployed to improve speed and/or reliability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or reliability. Cluster computing has emerged from the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, and the development of standard software tools for high-performance distributed computing. Clusters have evolved to support applications ranging from e-commerce to high-performance database applications. Clustering has been available since the 1980s, when it was used in DEC's VMS systems. IBM's Sysplex is a cluster approach for mainframe systems. Microsoft, Sun Microsystems, and other leading hardware and software companies offer clustering packages that are said to offer scalability as well as availability. Cluster computing can also be used as a relatively low-cost form of parallel processing for scientific and other applications that lend themselves to parallel operations.
INTRODUCTION
Computing is an evolutionary process. Five generations of development history, with each generation improving on the previous one's technology, architecture, software, applications, and representative systems, make that clear. As part of this evolution, computing requirements driven by applications have always outpaced the available technology, so system designers have always needed to seek faster, more cost-effective computer systems. Parallel and distributed computing provides the best solution by offering computing power that greatly exceeds the technological limitations of single-processor systems. Unfortunately, although the parallel and distributed computing concept has been with us for over three decades, the high cost of multiprocessor systems has blocked commercial success so far. Today, a wide range of applications are hungry for higher computing power, and even though single-processor PCs and workstations can now provide extremely fast processing, the even faster execution that multiple processors can achieve by working concurrently is still needed. Now, finally, costs are falling as well. Networked clusters of commodity PCs and workstations using off-the-shelf processors and communication platforms such as Myrinet, Fast Ethernet, and Gigabit Ethernet are becoming increasingly cost-effective and popular. This concept, known as cluster computing, will surely continue to flourish: clusters can provide enormous computing power that a pool of users can share, or that can be used collectively to solve a single application. In addition, clusters do not incur a very high cost, a factor that led to the sad demise of massively parallel machines.
Clusters, built using commodity off-the-shelf (COTS) hardware components and free or commonly used software, are playing a major role in solving large-scale science, engineering, and commercial applications. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, the development of standard software tools for high-performance distributed computing, and the increasing need for computing power in computational science and commercial applications.
CLUSTER HISTORY
The first commodity clustering product was ARCnet, developed by Datapoint in 1977. ARCnet was not a commercial success, and clustering did not really take off until DEC released its VAXcluster product in the 1980s for the VAX/VMS operating system. The ARCnet and VAXcluster products not only supported parallel computing but also shared file systems and peripheral devices; they were intended to provide the advantages of parallel processing while maintaining data reliability and uniqueness. VAXcluster, now VMScluster, is still available on OpenVMS systems from HP running on Alpha and Itanium systems. The history of cluster computing is intimately tied to the evolution of networking technology: as networking has become cheaper and faster, cluster computers have become significantly more attractive.
How can applications be run faster? There are three ways to improve performance:
• Work harder
• Work smarter
• Get help
In computing terms, these correspond to using faster hardware, using more efficient algorithms and techniques, and applying parallel processing. In the current era of computing, rapid technical advances, including the recent advances in VLSI technology and software technology, together with grand challenge applications, have become the main driving forces behind parallel computing.
CLUSTERS
Extraordinary technological improvements over the past few years in areas such as microprocessors, memory, buses, networks, and software have made it possible to assemble groups of inexpensive personal computers and/or workstations into a cost-effective system that functions in concert and possesses tremendous processing power. Cluster computing is not new, but in company with other technical capabilities, particularly in the area of networking, this class of machines is becoming a high-performance platform for parallel and distributed applications. Scalable computing clusters, ranging from a cluster of (homogeneous or heterogeneous) PCs or workstations to SMPs (Symmetric Multiprocessors), are rapidly becoming the standard platforms for high-performance and large-scale computing. A cluster is a group of independent computer systems and thus forms a loosely coupled multiprocessor system, as shown in the figure.
However, the cluster computing concept also poses three pressing research challenges. First, a cluster should be a single computing resource and provide a single system image, in contrast to a distributed system whose nodes serve only as individual resources. Second, it must provide scalability by letting the system scale up or down; the scaled-up system should provide more functionality or better performance, and the system's total computing power should increase proportionally to the increase in resources. The main motivation for a scalable system is to provide a flexible, cost-effective information-processing tool. Third, the supporting operating system and communication mechanism must be efficient enough to remove performance bottlenecks. The concept of Beowulf clusters originated at the Center of Excellence in Space Data and Information Sciences (CESDIS), located at the NASA Goddard Space Flight Center in Maryland. The goal of building a Beowulf cluster is to create a cost-effective parallel computing system from commodity components that satisfies specific computational requirements of the earth and space sciences community. The first Beowulf cluster was built from 16 Intel DX4 processors connected by channel-bonded 10 Mbps Ethernet, and it ran the Linux operating system. It was an instant success, demonstrating the concept of using a commodity cluster as an alternative
choice for high-performance computing (HPC). After the success of the first Beowulf cluster, several more were built by CESDIS using several generations and families of processors and networks. Beowulf is a concept of clustering commodity computers to form a parallel, virtual supercomputer. It is easy to build a unique Beowulf cluster from the components that you consider most appropriate for your applications. Such a system can provide a cost-effective way to gain features and benefits (fast and reliable services) that have historically been found only on more expensive proprietary shared-memory systems. The typical architecture of a cluster is shown in Figure 3. As the figure illustrates, numerous design choices exist for building a Beowulf cluster.
WHY CLUSTERS?
The question may arise: why are clusters designed and built when perfectly good commercial supercomputers are available on the market? The answer is that the latter are expensive, while clusters are surprisingly powerful. The supercomputer has come to play a larger role in business applications; in areas from data mining to fault-tolerant performance, clustering technology has become increasingly important. Commercial products have their place, and there are perfectly good reasons to buy a commercially produced supercomputer, if it is within our budget and our applications can keep the machine busy all the time. We will also need a data center to keep it in, and then there is the budget to keep up with the maintenance and upgrades required to keep our investment up to par. However, many who need to harness supercomputing power do not buy supercomputers because they cannot afford them; it is also impossible to upgrade them. Clusters, on the other hand, are a cheap and easy way to take off-the-shelf components and combine them into a single supercomputer. In some areas of research, clusters are actually faster than commercial supercomputers. Clusters also have the distinct advantage of being simple to build using components available from hundreds of sources. We do not even have to use new equipment to build a cluster.
Price/Performance
The most obvious benefit of clusters, and the most compelling reason for the growth in their use, is that they have significantly reduced the cost of processing power. One indication of this phenomenon is the Gordon Bell Award for Price/Performance Achievement in Supercomputing, which in many of the last several years has been awarded to Beowulf-type clusters. One of the most recent entries, the Avalon cluster at Los Alamos National Laboratory, "demonstrates price/performance an order of magnitude superior to commercial machines of equivalent performance." This reduction in the cost of entry to high-performance computing (HPC) has been due to the commoditization of both hardware and software, particularly over the last 10 years; the prices of all computer components have dropped dramatically in that time. The components critical to the development of low-cost clusters are:
1. Processors: commodity processors are now capable of computational power previously reserved for supercomputers; witness Apple Computer's ad campaign touting the G4 Macintosh as a supercomputer.
2. Memory: the memory used by these processors has dropped in cost right along with the processors.
3. Networking components: the most recent group of products to experience commoditization and dramatic cost decreases is networking hardware. High-speed networks can now be assembled with these products for a fraction of the cost necessary only a few years ago.
4. Motherboards, buses, and other subsystems: all of these have become commodity products, allowing the assembly of affordable computers from off-the-shelf components.
COMPARING OLD AND NEW
Today, open standards-based HPC systems are being used to solve problems ranging from high-end, floating-point-intensive scientific and engineering problems to data-intensive tasks in industry. Some of the reasons why HPC clusters outperform RISC-based systems include:
Collaboration
Scientists can collaborate in real time across dispersed locations, bridging isolated islands of scientific research and discovery, when HPC clusters are based on open-source and building-block technology.
Scalability
HPC clusters can grow in overall capacity because processors and nodes can be added as demand increases.
Availability
Because single points of failure can be eliminated, if any one system component goes down, the system as a whole (or the solution, in the case of multiple systems) stays highly available.
Ease of technology refresh
Processor, memory, disk, or operating system (OS) technology can be easily updated, and new processors and nodes can be added or upgraded as needed.
Affordable service and support
Compared to proprietary systems, the total cost of ownership can be much lower. This includes service, support, and training.
Vendor lock-in
The age-old problem of proprietary vs. open systems that use industry-accepted standards is eliminated.
System manageability
The installation, configuration, and monitoring of key elements of proprietary systems is usually accomplished with proprietary technologies, complicating system management. The servers of an HPC cluster, by contrast, can be easily managed from a single point using readily available network infrastructure and enterprise management software.
Reusability of components
Commercial components can be reused, preserving the investment. For example, older nodes can be redeployed as file/print servers, web servers, or other infrastructure servers.
Disaster recovery
Large SMPs are monolithic entities located in one facility. HPC systems can be collocated or geographically dispersed to make them less susceptible to disaster.
LOGICAL VIEW OF CLUSTER
A Beowulf cluster uses a multi-computer architecture, as depicted in the figure. It features a parallel computing system that usually consists of one or more master nodes and one or more compute nodes, or cluster nodes, interconnected via widely available network interconnects. All of the nodes in a typical Beowulf cluster are commodity systems (PCs, workstations, or servers) running commodity software such as Linux.
The master node acts as a server for Network File System (NFS) and as a gateway to the outside world. As an NFS server, the master node provides user file space and other common system software to the compute nodes via NFS. As a gateway, the master node allows users to gain access through it to the compute nodes. Usually, the master node is the only machine that is also connected to the outside world using a second network interface card (NIC). The sole task of the compute nodes is to execute parallel jobs. In most cases, therefore, the compute nodes do not have keyboards, mice, video cards, or monitors. All access to the client nodes is
provided via remote connections from the master node. Because compute nodes do not need to access machines outside the cluster, nor do machines outside the cluster need to access compute nodes directly, compute nodes commonly use private IP addresses, such as the 10.0.0.0/8 or 192.168.0.0/16 address ranges. From a user's perspective, a Beowulf cluster appears as a Massively Parallel Processor (MPP) system. The most common way of using the system is to access the master node either directly or through Telnet or remote login from a personal workstation. Once on the master node, users can prepare and compile their parallel applications, and also spawn jobs on a desired number of compute nodes in the cluster. Applications must be written in a parallel style and use the message-passing programming model. Jobs of a parallel application are spawned on compute nodes, which work collaboratively until the application finishes. During execution, compute nodes use standard message-passing middleware, such as the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM), to exchange information.
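The master/compute-node pattern described above, in which a master spawns jobs on workers and the parties exchange information by message passing, can be sketched in miniature with Python's multiprocessing module standing in for MPI or PVM. This is an illustrative analogy only, not MPI code; all function and variable names here are invented for the example.

```python
# Toy emulation of the master/compute-node message-passing pattern.
# Queues play the role of the cluster interconnect; each Process plays
# the role of a compute node.
from multiprocessing import Process, Queue

def compute_node(rank, tasks, results):
    """Worker: receive work items, send back partial results."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: no more work
            break
        results.put((rank, item * item))

def master(n_nodes=4, n_jobs=8):
    tasks, results = Queue(), Queue()
    nodes = [Process(target=compute_node, args=(r, tasks, results))
             for r in range(n_nodes)]
    for p in nodes:
        p.start()
    for job in range(n_jobs):         # "spawn jobs" on the compute nodes
        tasks.put(job)
    for _ in nodes:                   # one shutdown sentinel per node
        tasks.put(None)
    partials = [results.get() for _ in range(n_jobs)]
    for p in nodes:
        p.join()
    return sum(value for _, value in partials)

if __name__ == "__main__":
    print(master())                   # sum of squares of 0..7
```

In a real Beowulf cluster the same structure appears as MPI ranks: rank 0 acts as the master and the remaining ranks as compute nodes, with `MPI_Send`/`MPI_Recv` in place of the queues.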
ARCHITECTURE
A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource.
A node is:
• a single or multiprocessor system with memory, I/O facilities, and an operating system.
A cluster:
• generally consists of two or more computers (nodes) connected together
• in a single cabinet, or physically separated and connected via a LAN
• appears as a single system to users and applications
• provides a cost-effective way to gain features and benefits.
Three principal features usually provided by cluster computing are availability, scalability, and simplification. Availability is provided by the cluster of computers operating as a single system and continuing to provide services even when one of the individual computers is lost due to a hardware failure or other reason. Scalability is provided by the inherent ability of the overall system to allow new components, such as computers, to be added as the overall system's load increases. The simplification comes from the ability of the cluster to allow administrators to manage the entire group as a single system. This greatly simplifies the
management of groups of systems and their applications. The goal of cluster computing is to facilitate sharing a computing load over several systems without either the users of the system or the administrators needing to know that more than one system is involved. The Windows NT Server Edition of the Windows operating system is an example of a base operating system that has been modified to include an architecture that facilitates the establishment of a cluster computing environment. Cluster computing has been employed for over fifteen years, but it is the recent demand for higher availability in small businesses that has caused an explosion in this field. Electronic databases and electronic malls have become essential to the daily operation of small businesses, and access to this critical information has created a large demand for the principal features of cluster computing.
There are some key concepts that must be understood when forming a cluster computing resource. Nodes or systems are the individual members of a cluster. They can be computers, servers, and other such hardware, although each node generally has memory and processing capabilities. If one node becomes unavailable, the other nodes can carry the demand load so that applications or services are always available. There must be at least two nodes to compose a cluster structure; otherwise they are just called servers. The collection of software on each node that manages all cluster-specific activity is called the cluster service. The cluster service manages all of the resources, the canonical items in the system, and sees them as identical opaque objects. Resources can be physical hardware devices, like disk drives and network cards, or logical items, like logical disk volumes, TCP/IP addresses, applications, and databases.
When a resource is providing its service on a specific node, it is said to be on-line. A collection of resources to be managed as a single unit is called a group. Groups contain all of the resources necessary to run a specific application and, if need be, to connect client systems to the service provided by the application. Groups allow administrators to combine resources into larger logical units so that they can be managed as a unit; this, of course, means that all operations performed on a group affect all resources contained within that group. Normally, the development of a cluster computing system occurs in phases. The first phase involves establishing the underpinnings in the base operating system and building the foundation of the cluster components; it should focus on providing enhanced availability to key applications using storage that is accessible to two nodes. The following stages occur as demand increases and should allow much larger clusters to be formed. These larger clusters should have a true distribution of applications, higher-performance interconnects, widely distributed storage for easy accessibility, and load balancing. Cluster computing will become even more prevalent in the future because of the growing needs and demands of businesses as well as the spread of the Internet.
Clustering Concepts
Clusters are in fact quite simple. They are a bunch of computers tied together with a network working on a large problem that has been broken down into smaller pieces. There are a number of different strategies we can use to tie them together. There are also a number of different software packages that can be used to make the software side of things work.
Parallelism
The name of the game in high-performance computing is parallelism. It is the quality that allows something to be done in parts that work independently, rather than as a task with so many interlocking dependencies that it cannot be broken down further. Parallelism operates at two levels: hardware parallelism and software parallelism.
Hardware Parallelism
On one level, hardware parallelism deals with the CPU of an individual system and how we can squeeze performance out of sub-components of the CPU that can speed up our code. At another level, there is the parallelism that is gained by having multiple systems working on a computational problem in a distributed fashion. These forms are known as 'fine-grained' for parallelism inside the CPU, or among multiple CPUs in the same system, and 'coarse-grained' for parallelism of a collection of separate systems acting in concert.
CPU-Level Parallelism
A computer's CPU is commonly pictured as a device that operates on one instruction after another in a straight line, always completing one step or instruction before a new one is started. But modern CPU architectures have an inherent ability to do more than one thing at once: the logic of the CPU chip divides the CPU into multiple execution units, and systems with multiple execution units allow the CPU to attempt to process more than one instruction at a time. Two hardware features of modern CPUs support multiple execution units: the cache, a small memory inside the CPU, and the pipeline, a small area of memory inside the CPU where instructions that are next in line to be executed are stored. Both the cache and the pipeline allow impressive increases in CPU performance.
System-Level Parallelism
It is the parallelism of multiple nodes coordinating to work on a problem in parallel that gives the cluster its power. There are other levels at which even more parallelism can be introduced into the system. For example, if we decide that each node in our cluster will be a multi-CPU system, we will be introducing a fundamental degree of parallel processing at the node level. Having more than one network interface on each node introduces communication channels that may be used in parallel to communicate with other nodes in the cluster. Finally, if we use multiple disk drive controllers in each node, we create parallel data paths that can be used to increase the performance of the I/O subsystem.
Software Parallelism
Software parallelism is the ability to find well-defined areas in a problem we want to solve that can be broken down into self-contained parts. These parts are the program elements that can be distributed and give us the speedup that we want to get out of a high-performance computing system. Before we can run a program on a parallel cluster, we have to ensure that the problem we are trying to solve is amenable to being done in a parallel fashion. Almost any problem that is composed of smaller sub-problems that can be quantified can be broken down and run on a node of a cluster.
System-Level Middleware
System-level middleware offers a Single System Image (SSI) and high-availability infrastructure for processes, memory, storage, I/O, and networking. The single system image illusion can be implemented using hardware or software infrastructure. This unit focuses on SSI at the operating system or subsystem level.
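The decomposition idea behind software parallelism, splitting a problem into self-contained parts whose results are later combined, can be sketched as follows. This is an illustrative example, not from the report; the function names and the choice of summing squares are invented, and the worker pool stands in for a cluster's compute nodes.

```python
# Hedged sketch of software parallelism: a problem (summing squares over
# a range) decomposed into self-contained sub-problems that independent
# workers solve, with the partial results combined at the end.
from multiprocessing import Pool

def sub_problem(bounds):
    """Each piece is fully self-contained: it needs only its own bounds."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, pieces=4):
    step = n // pieces
    # Split [0, n) into contiguous chunks, one per worker.
    chunks = [(k * step, n if k == pieces - 1 else (k + 1) * step)
              for k in range(pieces)]
    with Pool(pieces) as pool:        # distribute pieces across workers
        partials = pool.map(sub_problem, chunks)
    return sum(partials)              # combine the partial results
```

Because each chunk carries everything it needs, the pieces could just as well run on separate cluster nodes; the only interlocking dependency is the final combination step.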
A modular architecture for SSI allows services provided by lower-level layers to be used for the implementation of higher-level services. This unit discusses design issues, architecture, and representative systems for job/resource management, network RAM, software RAID, single I/O space, and virtual networking. A number of operating systems have proposed SSI solutions, including MOSIX, UnixWare, and Solaris-MC. It is important to discuss one or more such systems, as they help students to understand architecture and implementation issues.
Message Passing Primitives
Although new high-performance protocols are available for cluster computing, some instructors may want to provide students with a brief introduction to message-passing programs using the BSD Sockets interface to the Transmission Control Protocol/Internet Protocol (TCP/IP) before introducing more complicated parallel programming with distributed-memory programming tools. If students have already had a course in data communications or computer networks, then this unit may be skipped. Students should have access to a networked computer lab with the Sockets libraries enabled; Sockets usually come installed on Linux workstations.
Parallel Programming Using MPI
An introduction to distributed-memory programming using a standard tool such as the Message Passing Interface (MPI) [23] is basic to cluster computing. Current versions of MPI generally assume that programs will be written in C, C++, or Fortran; however, Java-based versions of MPI are becoming available.
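A minimal example of the Sockets-based message passing introduced above: one endpoint listens, the other connects, and a single message is exchanged over TCP/IP. The sketch uses Python's socket module (a thin wrapper over the BSD Sockets API) with a server thread standing in for a second node; the uppercasing reply and all names are invented for the example.

```python
# One round of message passing over BSD-style sockets on the loopback
# interface. In a cluster, the two endpoints would be separate nodes.
import socket
import threading

def echo_server(server_sock):
    """'Remote node': accept one connection, receive a message, reply."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)        # receive one message
        conn.sendall(data.upper())    # send a transformed reply

def send_message(msg: bytes) -> bytes:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(msg)
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply
```

MPI's `MPI_Send`/`MPI_Recv` pairs build the same send/receive pattern into a higher-level, rank-addressed interface, which is why Sockets make a useful stepping stone before MPI.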
Application-Level Middleware
Application-level middleware is the layer of software between the operating system and applications. Middleware provides various services required by an application to function correctly. A course in cluster programming can include some coverage of middleware tools such as CORBA, Remote Procedure Call, Java Remote Method Invocation (RMI), or Jini. Sun Microsystems has produced a number of Java-based technologies that can become units in a cluster programming course, including the Java Development Kit (JDK) product family, which consists of the essential tools and APIs for all developers writing in the Java programming language, through to APIs for telephony (JTAPI), database connectivity (JDBC), 2D and 3D graphics, security, and electronic commerce. These technologies enable Java to interoperate with many other devices, technologies, and software standards.
Single System Image
A single system image is the illusion, created by software or hardware, that presents a collection of resources as one more powerful resource. SSI makes the cluster appear like a single machine to the user, to applications, and to the network. A cluster without an SSI is not a cluster. Every SSI has a boundary, and SSI support can exist at different levels within a system, with one level able to be built on another.
Single System Image Benefits
• Provides a simple, straightforward view of all system resources and activities from any node of the cluster
• Frees the end user from having to know where an application will run
• Frees the operator from having to know where a resource is located
• Lets the user work with familiar interfaces and commands, and allows administrators to manage the entire cluster as a single entity
• Reduces the risk of operator errors, with the result that end users see improved reliability and higher availability of the system
• Allows centralized or decentralized system management and control, reducing the need for skilled administrators in system administration
• Presents multiple, cooperating components of an application to the administrator as a single application
• Greatly simplifies system management
• Provides location-independent message communication
• Helps track the locations of all resources, so that system operators no longer need to be concerned with their physical location
• Provides transparent process migration and load balancing across nodes
• Improves system response time and performance
High speed networks
The network is the most critical part of a cluster. Its capabilities and performance directly influence the applicability of the whole system for HPC. Cluster interconnects range from Local/Wide Area Networks (LAN/WAN), like Fast Ethernet and ATM, to System Area Networks (SAN), like Myrinet and Memory Channel.
Example: Fast Ethernet
• 100 Mbps over UTP or fiber-optic cable
• MAC protocol: CSMA/CD
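To put the 100 Mbps figure in perspective, a quick back-of-the-envelope calculation shows how long a bulk data transfer takes at different link speeds. This is an idealized upper bound that ignores protocol overhead and contention; the function name is ours, not from any networking library.

```python
# Idealized transfer time: size in bytes converted to bits, divided by
# the raw link rate. Real throughput is lower due to protocol overhead.
def transfer_time_seconds(size_bytes: int, link_mbps: float) -> float:
    bits = size_bytes * 8
    return bits / (link_mbps * 1_000_000)

# Moving a 1 GB (10^9-byte) dataset between nodes:
fast_ethernet = transfer_time_seconds(10**9, 100)    # 80.0 s at 100 Mbps
gigabit = transfer_time_seconds(10**9, 1000)         # 8.0 s at 1 Gbps
```

The tenfold gap illustrates why, as the report notes elsewhere, interconnect choice directly limits how fine-grained the parallelism in a cluster application can be.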
COMPONENTS OF CLUSTER COMPUTER
1. Multiple high-performance computers
   a. PCs
   b. Workstations
   c. SMPs (CLUMPS)
2. State-of-the-art operating systems
   a. Linux (Beowulf)
   b. Microsoft NT (Illinois HPVM)
   c. Sun Solaris (Berkeley NOW)
   d. HP-UX (Illinois PANDA)
   e. OS gluing layers (Berkeley GLUnix)
3. High-performance networks/switches
   a. Ethernet (10 Mbps)
   b. Fast Ethernet (100 Mbps)
   c. Gigabit Ethernet (1 Gbps)
   d. Myrinet (1.2 Gbps)
   e. Digital Memory Channel
   f. FDDI
4. Network interface cards
   a. Myrinet NIC
   b. User-level access support
5. Fast communication protocols and services
   a. Active Messages (Berkeley)
   b. Fast Messages (Illinois)
   c. U-net (Cornell)
   d. XTP (Virginia)
6. Cluster middleware
   a. Single System Image (SSI)
   b. System Availability (SA) infrastructure
7. Hardware
   a. DEC Memory Channel, DSM (Alewife, DASH), SMP techniques
8. Operating system kernel/gluing layers
   a. Solaris MC, UnixWare, GLUnix
9. Applications and subsystems
   a. Applications (system management and electronic forms)
   b. Runtime systems (software DSM, PFS, etc.)
   c. Resource management and scheduling software (RMS)
10. Parallel programming environments and tools
   a. Threads (PCs, SMPs, NOW, etc.)
   b. MPI
   c. PVM
   d. Software DSMs (Shmem)
   e. Compilers
   f. RAD (rapid application development) tools
   g. Debuggers
   h. Performance analysis tools
   i. Visualization tools
11. Applications
   a. Sequential
   b. Parallel/distributed (cluster-aware applications)
CLUSTER CLASSIFICATIONS
Clusters are classified into several categories based on: (1) application target, (2) node ownership, (3) node hardware, (4) node operating system, and (5) node configuration.
Clusters based on application target:
• High Performance (HP) clusters
• High Availability (HA) clusters
Clusters based on node ownership:
• Dedicated clusters
• Non-dedicated clusters
Clusters based on node hardware:
• Clusters of PCs (CoPs)
• Clusters of Workstations (COWs)
• Clusters of SMPs (CLUMPs)
Clusters based on node operating system:
• Linux clusters (e.g., Beowulf)
• Solaris clusters (e.g., Berkeley NOW)
• Digital VMS clusters
• HP-UX clusters
• Microsoft Wolfpack clusters
Clusters based on node configuration:
• Homogeneous clusters: all nodes have similar architectures and run the same OS
• Heterogeneous clusters: nodes have different architectures and run different OSs
ISSUES TO BE CONSIDERED
Cluster Networking
If you are mixing hardware that has different networking technologies, there will be large differences in the speed with which data is accessed and in how individual nodes can communicate. If it is within your budget, make sure that all of the machines you want to include in your cluster have similar networking capabilities and, if at all possible, network adapters from the same manufacturer.
Cluster Software
You will have to build versions of the clustering software for each kind of system you include in your cluster.
Programming
Our code will have to be written to support the lowest common denominator for data types supported by the least powerful node in our cluster. With mixed machines, the more powerful machines will have attributes that the less powerful machines cannot match.
Timing
This is the most problematic aspect of heterogeneous clusters. Since these machines have different performance profiles, our code will execute at
different rates on the different kinds of nodes. This can cause serious bottlenecks if a process on one node is waiting for the results of a calculation on a slower node. The second kind of heterogeneous cluster is made from different machines in the same architectural family: e.g., a collection of Intel boxes where the machines are of different generations, or machines of the same generation from different manufacturers.
Network Selection
There are a number of different kinds of network topologies, including buses, cubes of various degrees, and grids/meshes. These network topologies are implemented using one or more network interface cards, or NICs, installed in the head node and compute nodes of our cluster.
Speed Selection
No matter what topology you choose for your cluster, you will want the fastest network that your budget allows. Fortunately, the availability of high-speed computers has also forced the development of high-speed networking systems. Examples are 10 Mbit Ethernet, 100 Mbit Ethernet, gigabit networking, channel bonding, etc.
FUTURE TRENDS - GRID COMPUTING
As computer networks become cheaper and faster, a new computing paradigm, called the Grid, has evolved. The Grid is a large system of computing resources that performs tasks and provides users with a single point of access, commonly based on a World Wide Web interface, to these distributed resources. Users treat the Grid as a single computational resource. Resource management software, frequently referred to as middleware, accepts jobs submitted by users and schedules them for execution on appropriate systems in the Grid, based on resource management policies. Users can submit thousands of jobs at a time without being concerned about where they run. The Grid may scale from single systems to supercomputer-class compute farms that utilize thousands of processors. Depending on the type of application, the interconnection between the Grid parts can be performed using dedicated high-speed networks or the Internet. By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, the Grid promises to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed groups to work together in ways that were previously impossible. Examples of new applications that benefit from Grid technology include the coupling of advanced scientific instrumentation or desktop computers with remote supercomputers; collaborative design of complex systems via high-bandwidth access to shared resources; ultra-large virtual supercomputers constructed to solve problems too large to fit on any single computer; and rapid, large-scale parametric studies. Grid technology is currently under intensive development. Major Grid projects include NASA's Information Power Grid, two NSF Grid projects (the NCSA Alliance's Virtual Machine Room and NPACI), the European DataGrid Project, and the ASCI Distributed Resource Management project. The first Grid tools are also already available for developers. The Globus Toolkit [20] is one such example and includes a set of services and software libraries to support Grids and Grid applications.
CONCLUSION
Clusters are promising:
• They solve the parallel processing paradox.
• They offer incremental growth and match funding patterns.
• New trends in hardware and software technologies are likely to make clusters more promising and to fill the SSI gap.
• Cluster-based supercomputers (Linux-based clusters) can be seen everywhere!
REFERENCE
buyya.com
beowulf.org
clustercomp.org
sgi.com
thu.edu.tw/~sci/journal/v4/000407.pdf
dgs.monash.edu.au/~rajkumar/cluster
cfi.lu.lv/teor/pdf/LASC_short.pdf
webopedia.com
howstuffworks.com
project report tiger
Active In SP
**

Posts: 1,062
Joined: Feb 2010
#2
11-02-2010, 08:21 PM


.doc   CLUSTERCOMPUTING.doc (Size: 178 KB / Downloads: 332)

ABSTRACT
Not all difficult problems require access to a single shared memory resource. Some problems can easily be broken into many smaller independent parts. Computer scientists often refer to this class of problems as "embarrassingly parallel" or as capacity problems. Many of the computers that we typically employ on a day-to-day basis for word processing or for game playing are very well equipped to solve the smaller components of capacity problems. In practice, clusters are usually composed of many commodity computers, linked together by a high-speed dedicated network.
What distinguishes this configuration from the heavy hitting, top dollar supercomputers is that each node within a cluster is an independent system, with its own operating system, private memory, and, in some cases, its own file system. Because the processors on one node cannot directly access the memory on the other nodes, programs or software run on clusters usually employ a procedure called "message passing" to get data and execution code from one node to another. Compared to the shared memory systems of supercomputers, passing messages is very slow.
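Message passing can be sketched with two operating-system processes standing in for cluster nodes: each has private memory, so data must be explicitly sent and received. This is a minimal standard-library sketch; a real cluster would use MPI or PVM over the network.

```python
# Sketch: message passing between two "nodes", modeled as operating-system
# processes with private memory, since cluster nodes cannot read each
# other's RAM. A real cluster would use MPI or PVM over the network.
from multiprocessing import Pipe, Process

def worker(conn):
    data = conn.recv()                    # explicit receive: no shared memory
    conn.send(sum(x * x for x in data))   # send the partial result back
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])         # ship the data to the other "node"
    print(parent_end.recv())              # -> 30
    p.join()
```

Every byte the worker sees had to cross the pipe, which is exactly why message passing is slower than the shared memory of a supercomputer.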
However, with cluster computing, the subparts of the larger problem usually run on a single processor for a long period of time without reference to the other subparts, which means that the slow communication among nodes is not a major problem. Experts in the field often refer to these types of problems as CPU-bound. Cluster computing has become a major part of many research programs because the price-to-performance ratio of commodity clusters is very good. Also, because the nodes in a cluster are clones, there is no single point of failure, which enhances the reliability of the cluster. Of course, these benefits can only be realized if the problems you are attempting to solve can be easily parallelized. Increasingly, computer clusters are being combined with large shared memory systems, such as the ones found in supercomputing architectures. By doing so, scientists who work on problems that have both capability and capacity elements can take advantage of the inherent strengths of both designs.
1. INTRODUCTION
Parallel computing has seen many changes since the days of the highly expensive and proprietary supercomputers. Changes and improvements in performance have also been seen in the area of mainframe computing for many environments. But these compute environments may not be the most cost effective and flexible solution for a problem.
Over the past decade, cluster technologies have been developed that allow multiple low-cost computers to work in a coordinated fashion to process applications. The economics, performance, and flexibility of compute clusters make cluster computing an attractive alternative to centralized computing models and the attendant cost, inflexibility, and scalability issues inherent in those models.
Many enterprises are now looking at clusters of high-performance, low cost computers to provide increased application performance, high availability, and ease of scaling within the data center. Interest in and deployment of computer clusters has largely been driven by the increase in the performance of off-the-shelf commodity computers, high-speed, low-latency network switches and the maturity of the software components.
Application performance continues to be of significant concern for various entities including governments, military, education, scientific and now enterprise organizations. This document provides a review of cluster computing, the various types of clusters and their associated applications. This document is a high-level informational document; it does not provide details about various cluster implementations and applications.
2. CLUSTER COMPUTING
Cluster computing is best characterized as the integration of a number of off-the-shelf commodity computers and resources integrated through hardware, networks, and software to behave as a single computer. Initially, the terms cluster computing and high performance computing were viewed as one and the same. However, the technologies available today have redefined the term cluster computing to extend beyond parallel computing to incorporate load-balancing clusters (for example, web clusters) and high availability clusters. Clusters may also be deployed to address load balancing, parallel processing, systems management, and scalability.
Today, clusters are made up of commodity computers usually restricted to a single switch or group of interconnected switches operating at Layer 2 and within a single virtual local-area network (VLAN). Each compute node (computer) may have different characteristics such as single processor or symmetric multiprocessor design, and access to various types of storage devices. The underlying network is a dedicated network made up of high-speed, low-latency switches that may be a single switch or a hierarchy of multiple switches. A growing range of possibilities exists for cluster interconnection technology. Different variables will determine the network hardware for the cluster: price-per-port, bandwidth, latency, and throughput are key variables. The choice of network technology depends on a number of factors, including price, performance, and compatibility with other cluster hardware and system software, as well as the communication characteristics of the applications that will use the cluster.
Clusters are not commodities in themselves, although they may be based on commodity hardware. A number of decisions need to be made (for example, what type of hardware the nodes run on, which interconnect to use, and which type of switching architecture to build on) before assembling a cluster range. Each decision will affect the others, and some will probably be dictated by the
intended use of the cluster. Selecting the right cluster elements involves an understanding of the application and the necessary resources that include, but are not limited to, storage, throughput, latency, and number of nodes.
When considering a cluster implementation, there are some basic questions that can help determine the cluster attributes such that technology options can be evaluated:
1. Will the application be primarily processing a single dataset?
2. Will the application be passing data around, or will it generate real-time information?
3. Is the application 32- or 64-bit?
The answers to these questions will influence the type of CPU, memory architecture, storage, cluster interconnect, and cluster network design. Cluster applications are often CPU-bound so that interconnect and storage bandwidth are not limiting factors, although this is not always the case.
3. CLUSTER BENEFITS
The main benefits of clusters are scalability, availability, and performance. For scalability, a cluster uses the combined processing power of compute nodes to run cluster-enabled applications such as a parallel database server at a higher performance than a single machine can provide. Scaling the cluster's processing power is achieved by simply adding additional nodes to the cluster.
Availability within the cluster is assured as nodes within the cluster provide backup to each other in the event of a failure. In high-availability clusters, if a node is taken out of service or fails, the load is transferred to another node (or nodes) within the cluster. To the user, this operation is transparent as the applications and data running are also available on the failover nodes.
An additional benefit comes with the existence of a single system image and the ease of manageability of the cluster. From the user's perspective, the cluster appears as a single resource providing services and applications. The user does not know or care whether this resource is a single server or a cluster, or which node within the cluster is providing the services.
These benefits map to needs of today's enterprise business, education, military and scientific community infrastructures. In summary, clusters provide:
• Scalable capacity for compute, data, and transaction intensive applications, including support of mixed workloads
• Horizontal and vertical scalability without downtime
• Ability to handle unexpected peaks in workload
• Central system management of a single systems image
• 24 x 7 availability
4. TYPES OF CLUSTERS
There are several types of clusters, each with specific design goals and functionality. These clusters range from distributed or parallel clusters for computation intensive or data intensive applications that are used for protein, seismic, or nuclear modeling to simple load-balanced clusters.
4.1 High Availability or Failover Clusters
These clusters are designed to provide uninterrupted availability of data or services (typically web services) to the end-user community. The purpose of these clusters is to ensure that a single instance of an application is only ever running on one cluster member at a time; if and when that cluster member is no longer available, the application will fail over to another cluster member. With a high-availability cluster, nodes can be taken out of service for maintenance or repairs. Additionally, if a node fails, the service can be restored without affecting the availability of the services provided by the cluster (see Figure 1). While the application will still be available, there will be a performance drop due to the missing node.
High-availability cluster implementations are best for mission-critical applications or databases, mail, file and print, web, or application servers.
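The failover decision rests on a heartbeat exchanged between the nodes. A minimal sketch of that logic for a two-node cluster, where the node names and timeout value are illustrative assumptions:

```python
# Sketch of heartbeat-driven failover in a two-node high-availability
# cluster. The node names and timeout value are illustrative assumptions.
HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before a node is presumed dead

def choose_active_node(last_heartbeat, now, preferred="node1", standby="node2"):
    """Run the service on the preferred node while its heartbeat is fresh;
    fail over to the standby once the heartbeat goes stale."""
    if now - last_heartbeat[preferred] <= HEARTBEAT_TIMEOUT:
        return preferred
    return standby

beats = {"node1": 100.0, "node2": 100.0}      # last heartbeat timestamps
print(choose_active_node(beats, now=101.0))   # node1 healthy -> node1
print(choose_active_node(beats, now=110.0))   # node1 silent  -> node2
```

Because the application and its data are also available on the standby (via shared storage), this switch is transparent to the user.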
Figure 1: A two-node high-availability cluster. Under normal operation, Node 1 runs Application A and Node 2 runs Application B, connected by a heartbeat link and shared storage. After failover, Node 2 runs both Application A and Application B.
Unlike distributed or parallel processing clusters, high-availability clusters seamlessly and transparently integrate existing standalone, cluster-unaware applications into a single virtual machine, allowing the network to effortlessly grow to meet increased business demands.
4.2 Cluster-Aware and Cluster-Unaware Applications
Cluster-aware applications are designed specifically for use in clustered environment. They know about the existence of other nodes and are able to communicate with them. Clustered database is one example of such application. Instances of clustered database run in different nodes and have to notify other instances if they need to lock or modify some data.
Cluster-unaware applications do not know if they are running in a cluster or on a single node. The existence of a cluster is completely transparent to such applications, and some additional software is usually needed to set up the cluster. A web server is a typical cluster-unaware application. All servers in the cluster have the same content, and the client does not care which server provides the requested content.
4.3 Load Balancing Cluster
This type of cluster distributes incoming requests for resources or content among multiple nodes running the same programs or having the same content (see Figure 2). Every node in the cluster is able to handle requests for the same content or application. If a node fails, requests are redistributed between the remaining available nodes. This type of distribution is typically seen in a web-hosting environment.
Both the high availability and load-balancing cluster technologies can be combined to increase the reliability, availability, and scalability of application and data resources that are widely deployed for web, mail, news, or FTP services.
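The distribution policy itself can be very simple. A round-robin sketch (server names are illustrative) that also shows requests being redistributed when a node fails:

```python
# A minimal round-robin distribution policy for a load-balancing cluster;
# the server names are illustrative. Failed nodes are dropped so the
# remaining nodes absorb their share of the requests.
import itertools

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def pick(self):
        """Return the node that should handle the next request."""
        return next(self._cycle)

    def remove(self, node):
        """Drop a failed node and rebuild the rotation."""
        self.nodes.remove(node)
        self._cycle = itertools.cycle(self.nodes)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.pick() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
lb.remove("web2")                     # simulate a node failure
print([lb.pick() for _ in range(4)])  # ['web1', 'web3', 'web1', 'web3']
```

Production balancers add health checks and weighting, but the core rotation is this simple.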
4.4 Parallel/Distributed Processing Clusters
Traditionally, parallel processing was performed by multiple processors in a specially designed parallel computer. These are systems in which multiple processors share a single memory and bus interface within a single computer. With the advent of high speed, low-latency switching technology, computers can be interconnected to form a parallel-processing cluster. These types of cluster increase availability, performance, and scalability for applications, particularly computationally or data intensive tasks.
A parallel cluster is a system that uses a number of nodes to simultaneously solve a specific computational or data-mining task. Unlike the load balancing or high-availability cluster that distributes requests/tasks to nodes where a node processes the entire request, a parallel environment will divide the request into multiple sub-tasks that are distributed to multiple nodes within the cluster for processing. Parallel clusters are typically used for CPU-intensive analytical applications, such as mathematical computation, scientific analysis (weather forecasting, seismic analysis, etc.), and financial data analysis.
One of the more common cluster types is the Beowulf class of clusters. A Beowulf cluster can be defined as a number of systems whose collective processing capabilities are simultaneously applied to a specific technical, scientific, or business application. Each individual computer is referred to as a "node", and each node communicates with other nodes within a cluster across standard Ethernet technologies (10/100 Mbps, GigE, or 10GbE). Other high-speed interconnects such as Myrinet, Infiniband, or Quadrics may also be used.
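The divide-and-combine pattern a parallel cluster applies to a single request can be sketched with standard-library worker processes standing in for compute nodes (a real cluster dispatches the sub-tasks over the interconnect with MPI or similar):

```python
# Sketch: one request divided into sub-tasks that run in parallel and are
# then combined, as a parallel cluster would do across nodes. Local worker
# processes stand in for compute nodes here.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    chunks = [data[i::n_workers] for i in range(n_workers)]  # split the request
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))            # combine results

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # -> 332833500
```

The sub-tasks never communicate with each other, which is what makes this kind of workload a good fit for a cluster rather than a shared-memory machine.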
5. CLUSTER COMPONENTS
The basic building blocks of clusters are broken down into multiple categories: the cluster nodes, cluster operating system, network switching hardware and the node/switch interconnect. Significant advances have been accomplished over the past five years to improve the performance of both the compute nodes as well as the underlying switching infrastructure.
5.1 Cluster Nodes
Node technology has migrated from conventional tower cases to single rack-unit multiprocessor systems and blade servers that provide a much higher processor density within a decreased area. Processor speeds and server architectures have increased in performance, and solutions now provide options for either 32-bit or 64-bit processor systems. Additionally, memory performance, hard-disk access speeds, and storage capacities have also increased. It is interesting to note that even though performance is growing exponentially in some cases, the cost of these technologies has dropped considerably.
As shown in Figure 3 below, node participation in the cluster falls into one of two responsibilities: the master (or head) node and the compute (or slave) nodes. The master node is the unique server in the cluster. It is responsible for running the file system and serves as the key system for the clustering middleware, which routes processes and duties and monitors the health and status of each slave node. A compute (or slave) node provides the cluster with computing and data storage capability. These nodes are derived from fully operational, standalone computers that are typically marketed as desktop or server systems and, as such, are off-the-shelf commodity systems.
Figure 3: Master node and computing nodes in a cluster.
5.2 Cluster Network
Commodity cluster solutions are viable today due to a number of factors such as the high performance commodity servers and the availability of high speed, low-latency network switch technologies that provide the inter-nodal communications. Commodity clusters typically incorporate one or more dedicated switches to support communication between the cluster nodes. The speed and type of node interconnects vary based on the requirements of the application and organization.
With today's low costs per-port for Gigabit Ethernet switches, adoption of 10-Gigabit Ethernet and the standardization of 10/100/1000 network interfaces on the node hardware, Ethernet continues to be a leading interconnect technology for many clusters. In addition to Ethernet, alternative network or interconnect technologies include Myrinet, Quadrics, and Infiniband that support bandwidths above 1Gbps and end-to-end message latencies below 10 microseconds (uSec).
5.3 Network Characterization
There are two primary characteristics establishing the operational properties of a network: bandwidth and delay. Bandwidth is measured in millions of bits per second (Mbps) and/or billions of bits per-second (Gbps). Peak bandwidth is the maximum amount of data that can be transferred in a single unit of time through a single connection. Bi-section bandwidth is the total peak bandwidth that can be passed across a single switch.
Latency is measured in microseconds (uSec) or milliseconds (mSec) and is the time it takes to move a single packet of information in one port and out of another. For parallel clusters, latency is measured as the time it takes for a message to be passed from one processor to another that includes the latency of the interconnecting switch or switches. The actual latencies observed will vary widely even on a single switch depending on characteristics such as packet size, switch architecture (centralized versus
distributed), queuing, buffer depths and allocations, and protocol processing at the nodes.
5.4 Ethernet, Fast Ethernet, Gigabit Ethernet and 10-Gigabit Ethernet
Ethernet is the most widely used interconnect technology for local area networking (LAN). Ethernet as a technology supports speeds varying from 10Mbps to 10 Gbps and it is successfully deployed and operational within many high-performance cluster computing environments.
6. CLUSTER APPLICATIONS
Parallel applications exhibit a wide range of communication behaviors and impose various requirements on the underlying network. These may be unique to a specific application, or an application category depending on the requirements of the computational processes.
Some problems require the high bandwidth and low-latency capabilities of today's low-latency, high throughput switches using 10GbE, Infiniband or Myrinet. Other application classes perform effectively on commodity clusters and will not push the bounds of the bandwidth and resources of these same switches. Many applications and the messaging algorithms used fall in between these two ends of the spectrum.
Currently, there are three primary categories of applications that use parallel clusters: compute intensive, data or input/output (I/O) intensive, and transaction intensive. Each of these has its own set of characteristics and associated network requirements, each has a different impact on the network, and each is impacted differently by the architectural characteristics of the underlying network. The following subsections describe each application type.
6.1 Compute Intensive Applications
Compute intensive is a term that applies to any computer application that demands a lot of computation cycles (for example, scientific applications such as meteorological prediction). These types of applications are very sensitive to end-to-end message latency, because the processors must wait either for instruction messages or for results data to be transmitted between nodes. In general, the more time spent idle waiting for an instruction or for results data, the longer it takes to complete the application.
Some compute-intensive applications may also be graphic intensive. Graphic intensive is a term that applies to any application that demands a lot of computational cycles where the end result is the delivery of significant information for the development of graphical output such as ray-tracing applications. These types of applications are also sensitive to end-to-end message latency. The longer the processors have to wait for instruction messages or the longer it takes to send resulting data, the longer it takes to present the graphical representation of the resulting data.
6.2 Data or I/O Intensive Applications
Data intensive is a term that applies to any application that has high demands of attached storage facilities. Performance of many of these applications is impacted by the quality of the I/O mechanisms supported by current cluster architectures, the bandwidth available for network attached storage, and, in some cases, the performance of the underlying network components at both Layer 2 and 3. Data-intensive applications can be found in the area of data mining, image processing, and genome and protein science applications. The movement to parallel I/O systems continues to occur to improve the I/O performance for many of these applications.
6.3 Transaction Intensive Applications
Transaction intensive is a term that applies to any application that has a high-level of interactive transactions between an application resource and the cluster resources. Many financial, banking, human resource, and web-based applications fall into this category.
7. PERFORMANCE IMPACTS AND CAREABOUTS
There are three main careabouts for cluster applications: message latency, CPU utilization, and throughput. Each of these plays an important part in improving or impeding application performance. This section describes each of these issues and their associated impact on application performance.
8. MESSAGE LATENCY
Message latency is defined as the time it takes to send a zero-length message from one processor to another (measured in microseconds). For some application types, the lower the latency, the better. Message latency is made up of the aggregate latency incurred at each element within the cluster network, including within the cluster nodes themselves. Although network latency is often the focus, the protocol-processing latency of the message passing interface (MPI) and TCP processes within the host itself is typically larger.
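Latency is conventionally measured with a ping-pong test: bounce a zero-length message back and forth many times and halve the average round-trip time. A sketch of the pattern, where a queue between threads stands in for the network link (a real benchmark would time MPI send/receive pairs between two nodes):

```python
# Sketch of the classic ping-pong latency measurement: bounce a zero-length
# message back and forth many times and halve the average round trip. A
# queue between threads stands in for the network here.
import queue
import threading
import time

def echo(inbox, outbox, rounds):
    for _ in range(rounds):
        outbox.put(inbox.get())  # bounce every message straight back

def measure_latency(rounds=1000):
    to_peer, from_peer = queue.Queue(), queue.Queue()
    peer = threading.Thread(target=echo, args=(to_peer, from_peer, rounds))
    peer.start()
    start = time.perf_counter()
    for _ in range(rounds):
        to_peer.put(b"")   # zero-length payload
        from_peer.get()
    elapsed = time.perf_counter() - start
    peer.join()
    return elapsed / rounds / 2  # one-way latency in seconds

print(f"one-way latency: {measure_latency() * 1e6:.1f} uSec")
```

Averaging over many rounds matters because a single round trip is dominated by timer resolution and scheduling noise.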
The throughput of today's cluster nodes is impacted by protocol processing, both for TCP/IP and for MPI. To maintain cluster stability, node synchronization, and data sharing, the cluster uses message passing technologies such as Parallel Virtual Machine (PVM) or MPI.
TCP/IP stack processing is a CPU-intensive task that limits performance within high-speed networks. As CPU performance has increased and new techniques such as TCP offload engines (TOE) have been introduced, PCs are now able to drive bandwidth levels higher, to a point where traffic levels reach near the theoretical maximum for TCP/IP on Gigabit Ethernet and near bus speeds for PCI-X based systems when using 10 Gigabit Ethernet. These high-bandwidth capabilities will continue to grow as processor speeds increase and more vendors build network adapters to the PCI-Express specification.
Host stack latency has been addressed somewhat through the implementation of TOE, and further development of combined TOE and Remote Direct Memory Access (RDMA) technologies is underway that will significantly reduce protocol processing in the host.
9. CPU UTILIZATION
One important consideration for many enterprises is to use compute resources as efficiently as possible. As an increasing number of enterprises move toward real-time and business-intelligence analysis, using compute resources efficiently is an important metric; in many cases, however, the compute resource is underutilized. The more CPU cycles committed to application processing, the less time it takes to run the application. Unfortunately, although this is a design goal, it is not attainable, as both the application and the protocols compete for CPU cycles.
As the cluster node processes the application, the CPU is dedicated to the application and protocol processing does not occur. For this to change, the protocol process must interrupt a uniprocessor machine or request a spin lock for a multiprocessor machine. As the request is granted, CPU cycles are then applied to the protocol process. As more cycles are applied to protocol processing, application processing is suspended. In many environments, the value of the cluster is based on the run-time of the application. The shorter the time to run, the more floating-point operations and/or millions of instructions per-second occur, and, therefore, the lower the cost of running a specific application or job.
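The effect can be quantified with a simple illustrative calculation: every cycle diverted to protocol processing stretches the application's wall-clock run time, and with it the cost of the job. All figures here (cycle counts, clock rate, overhead share) are hypothetical:

```python
# Illustration of the trade-off described above: CPU cycles diverted to
# protocol processing directly stretch application run time. All figures
# (cycle counts, clock rate, overhead share) are hypothetical.

def run_time(app_cycles, cpu_hz, protocol_share):
    """Wall time when a fraction of CPU cycles goes to protocol processing."""
    usable_hz = cpu_hz * (1.0 - protocol_share)
    return app_cycles / usable_hz

base = run_time(3e12, 3e9, 0.0)     # no protocol overhead: 1000 s
loaded = run_time(3e12, 3e9, 0.25)  # 25% of cycles lost to TCP/MPI: ~1333 s
print(base, loaded)
```

This is why techniques such as TOE and RDMA, which move protocol work off the application CPUs, translate directly into shorter (and cheaper) runs.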
Figure 4: CPU utilization, contrasting CPUs fully devoted to application processing with CPUs where protocol processing (via a spin lock) puts the application into a wait state.
The example shows that when there is virtually no network or protocol processing going on, CPU 0 and 1 of each node are 100% devoted to application processing.
It also shows what happens when network traffic levels increase significantly. As this happens, the CPU spends cycles processing the MPI and TCP protocol stacks, including moving data to and from the wire, which reduces or suspends application processing. With the increase in protocol processing, note that the utilization percentages of CPU 0 and 1 are dramatically reduced, in some cases to 0.
10. THROUGHPUT
Data throughput begins with a calculation of a theoretical maximum throughput and concludes with effective throughput. The effective throughput available between nodes will always be less than the theoretical maximum. Throughput for cluster nodes is based on many factors, including the following:
• Total number of nodes running
• Switch architectures
• Forwarding methodologies
• Queuing methodologies
• Buffering depth and allocations
• Noise and errors on the cable plant
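As a concrete illustration of theoretical versus effective throughput, per-frame Ethernet framing overhead (preamble, header, FCS, inter-frame gap) and the 40 bytes of TCP/IP headers cap what an application can see on a Gigabit Ethernet link. A back-of-the-envelope sketch, assuming a standard 1500-byte MTU, IPv4, and no TCP options or jumbo frames:

```python
# Back-of-the-envelope effective TCP throughput on Ethernet: per-frame
# framing overhead plus TCP/IP headers reduce the theoretical maximum.
# Assumes standard 1500-byte MTU, IPv4, and no TCP options or jumbo frames.

def effective_tcp_throughput(link_bps, mtu=1500):
    framing = 8 + 14 + 4 + 12   # preamble, Ethernet header, FCS, inter-frame gap
    headers = 20 + 20           # IPv4 header + TCP header
    wire_bytes = mtu + framing  # bytes on the wire per frame
    payload = mtu - headers     # application bytes per frame
    return link_bps * payload / wire_bytes

gig_e = effective_tcp_throughput(1_000_000_000)
print(f"{gig_e / 1e6:.0f} Mbps effective on a 1000 Mbps link")  # ~949 Mbps
```

Switch queuing, congestion, and errors listed above push the observed figure below even this protocol-imposed ceiling.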
As previously noted, parallel applications exhibit a wide range of communication behaviors and impose various requirements on the underlying network. These behaviors may be unique to individual applications and the requirements for inter-processor/inter-nodal communication. The methods used by the application programmer, as far as the passing of messages using MPI, vary based on the application requirements.
There are both simple and complex collective routines. As more scatter/gather, allgather, and all-to-all routines are used, multiple head-of-line blocking instances may occur within the switch, even within non-blocking switch architectures. Additionally, the buffer architecture of the underlying network, specifically the depth and allocation of ingress and egress buffers, becomes key to throughput levels.
If buffers fill, congestion management routines may be invoked. In the switch, this means that pause frames will be sent resulting in the sending node discontinuing sending traffic until the congestion subsides. In the case of TCP, the congestion avoidance algorithms come into effect.
11. SLOW START
In the original implementation of TCP, as soon as a connection was established between two devices, each could send segments as fast as it liked as long as there was room in the other device's receive window. In a busy network, the sudden appearance of a large amount of new traffic could exacerbate any existing congestion.
To alleviate this problem, modern TCP devices are restrained in the rate at which they initially send segments. Each sender is at first restricted to sending only an amount of data equal to one "full-sized" segment that is equal to the MSS value for the connection.
Each time an acknowledgment is received, the amount of data the device can send is increased by the size of another full-sized segment. Thus, the device "starts slow" in terms of how much data it can send, with the amount it sends increasing until either the full window size is reached or congestion is detected on the link. In the latter case, the congestion avoidance feature, described below, is used.
12. CONGESTION AVOIDANCE
When potential congestion is detected on a TCP link, a device responds by throttling back the rate at which it sends segments. A special algorithm is used that allows the device to drop the rate at which segments are sent quickly when congestion occurs. The device then uses the Slow Start algorithm, described above, to gradually increase the transmission rate back up again to try to maximize throughput without congestion occurring again.
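The interplay of the two algorithms can be sketched as a simple simulation of the congestion window: exponential doubling during slow start, then linear growth once the window reaches the slow-start threshold (units are segments; the starting values are illustrative):

```python
# Sketch of TCP congestion window growth: exponential doubling during slow
# start, then linear growth of one segment per round trip once the window
# reaches the slow-start threshold (ssthresh). Units are segments; the
# starting values are illustrative.

def cwnd_after_rtts(rtts, ssthresh=16, start=1):
    """Congestion window size (in segments) after a given number of RTTs."""
    cwnd = start
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: double every RTT
        else:
            cwnd += 1   # congestion avoidance: one extra segment per RTT
    return cwnd

print([cwnd_after_rtts(n) for n in range(8)])  # [1, 2, 4, 8, 16, 17, 18, 19]
```

When congestion is detected, real TCP also cuts the window and resets ssthresh, which is the "drop the rate quickly" behavior described above; that reaction is omitted here for brevity.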
In the event of packet drops, TCP retransmission algorithms will engage. Retransmission timeouts can reach delays of up to 200 milliseconds, thereby significantly impacting throughput.
13. SUMMARY
High-performance cluster computing is enabling a new class of computationally intensive applications that are solving problems that were previously cost prohibitive for many enterprises. The use of commodity computers collaborating to resolve highly complex, computationally intensive tasks has broad application across several industry verticals such as chemistry or biology, quantum physics, petroleum exploration, crash test simulation, CG rendering, and financial risk analysis. However, cluster computing pushes the limits of server architectures, computing, and network performance.
Due to the economics of cluster computing and the flexibility and high performance offered, cluster computing has made its way into the mainstream enterprise data centers using clusters of various sizes.
As clusters become more popular and pervasive, careful consideration of the application requirements, and of what they translate to in terms of network characteristics, becomes critical to the design and delivery of an optimally performing, reliable solution.
Knowledge of how the application uses the cluster nodes, and how the characteristics of the application impact and are impacted by the underlying network, is critically important. The selection of the node interconnects and of the underlying cluster network switching technologies is as critical as the selection of the cluster nodes and operating system.
A scalable and modular networking solution is critical, not only to provide incremental connectivity but also to provide incremental bandwidth options as the cluster grows. The ability to use advanced technologies within the same networking platform, such as 10 Gigabit Ethernet, provides new connectivity options and increases bandwidth while protecting the existing investment.
The technologies associated with cluster computing, including host protocol stack-processing and interconnect technologies, are rapidly evolving to meet the demands of current, new, and emerging applications. Much progress has been made in the development of low-latency switches, protocols, and standards that efficiently and effectively use network hardware components.
1. Introduction
2. Cluster Computing
3. Cluster Benefits
4. Types of Clusters
4.1 High Availability or Failover Clusters
4.2 Cluster-Aware and Cluster-Unaware Applications
4.3 Load Balancing Clusters
4.4 Parallel/Distributed Processing Clusters
5. Cluster Components
5.1 Cluster Nodes
5.2 Cluster Network
5.3 Network Characterization
5.4 Ethernet, Fast Ethernet
6. Cluster Applications
6.1 Compute Intensive Applications
6.2 Data or I/O Intensive Applications
6.3 Transaction Intensive Applications
7. Performance Impacts and Careabouts
8. Message Latency
9. CPU Utilization
10. Throughput
11. Slow Start
12. Congestion Avoidance
13. Summary
seminar surveyer (#4, 08-10-2010, 10:16 AM)
Submitted by:
KUMAR KAUSHIK


.pdf   CLUSTER%20COMPUTING.pdf (Size: 844.37 KB / Downloads: 611)


Abstract

A computer cluster is a group of linked computers working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. The major objective of a cluster is to utilize a group of processing nodes cooperatively so as to complete the assigned job in a minimum amount of time. The main strategy for achieving this objective is transferring excess load from busy nodes to idle nodes. This seminar covers the concepts of cluster computing and the principles involved in it.


Introduction


Parallel computing has seen many changes since the days of the highly expensive and proprietary supercomputers. Changes and improvements in performance have also been seen in mainframe computing for many environments. But these compute environments may not be the most cost-effective and flexible solution for a problem. Over the past decade, cluster technologies have been developed that allow multiple low-cost computers to work in a coordinated fashion to process applications.
The economics, performance, and flexibility of compute clusters make cluster computing an attractive alternative to centralized computing models and the attendant cost, inflexibility, and scalability issues inherent in those models.
Many enterprises are now looking at clusters of high-performance, low-cost computers to provide increased application performance, high availability, and ease of scaling within the data center. Interest in and deployment of computer clusters has largely been driven by the increase in the performance of off-the-shelf commodity computers, high-speed, low-latency network switches, and the maturity of the software components. Application performance continues to be of significant concern for various entities, including government, military, education, scientific, and now enterprise organizations. This document provides a review of cluster computing, the various types of clusters, and their associated applications. It is a high-level informational document; it does not provide details about various cluster implementations and applications.
seminar surveyer (#5, 16-10-2010, 11:33 AM)
Presented by:
Puripanda Venkata Ajay
V.V.S.P.Murthy


.ppt   cluster2.ppt (Size: 836.5 KB / Downloads: 240)




project report helper (#6, 28-10-2010, 10:07 AM)
.doc   REPORT SEM.doc (Size: 228 KB / Downloads: 158)

cluster computing full report
INTRODUCTION
A computer cluster is a group of linked computers working together closely, thus in many respects forming a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, are playing a major role in solving large-scale science, engineering, and commercial applications. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, the development of standard software tools for high-performance distributed computing, and the increasing need for computing power in computational science and commercial applications.
seminar class (#11, 26-02-2011, 03:35 PM)
.doc   12885391-Cluster-Computing.doc (Size: 88 KB / Downloads: 120)
What are clusters?
A cluster is a type of parallel or distributed processing system consisting of a collection of interconnected stand-alone computers working cooperatively together as a single, integrated computing resource.
This cluster of computers shares common network characteristics, such as the same namespace, and is available to other computers on the network as a single resource. The computers are linked by high-speed network interfaces, and the actual binding together of all the individual computers in the cluster is performed by the operating system and the software used.
MOTIVATION FOR CLUSTERING
High cost of ‘traditional’ High Performance Computing.
Clustering using commercial off-the-shelf (COTS) components is far cheaper than buying specialized machines for computing. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, and the development of standard software tools for high-performance distributed computing.
Increased need for High Performance Computing
As processing power becomes available, applications that require enormous amounts of processing, such as weather modeling, are becoming more commonplace, requiring the high-performance computing provided by clusters.
Thus the viable alternative is “Building Your Own Cluster”, which is what cluster computing is all about.
Components of a Cluster
The main components of a cluster are the personal computers and the interconnection network. The computers can be built from commercial off-the-shelf (COTS) components and are available economically.
The interconnection network can be an ATM (Asynchronous Transfer Mode) ring, which guarantees a fast and effective connection, or a Fast Ethernet connection, which is commonly available now. Gigabit Ethernet, which provides speeds up to 1000 Mbps, and Myrinet, a commercial interconnection network with high speed and reduced latency, are also viable options.
But for high-end scientific clustering, there are a variety of network interface cards designed specifically for clustering.
Those include Myricom's Myrinet, Giganet's cLAN, and the IEEE 1596 standard Scalable Coherent Interface (SCI). These cards not only provide high bandwidth between the nodes of the cluster but also reduce latency (the time it takes to send messages). Low latency is crucial for exchanging state information between the nodes to keep their operations synchronized.
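Why latency matters so much for small synchronization messages can be seen with a simple back-of-the-envelope model: total transfer time is roughly a fixed per-message latency plus the serialization time, size divided by bandwidth. The Myrinet figures below echo the 5-18 microsecond latency and ~1.28 Gbps quoted later in this thread; the Ethernet latency value is an illustrative assumption, not a measured number:

```python
# Hedged transfer-time model: time ≈ latency + message_size / bandwidth.

def transfer_time_us(size_bytes, latency_us, bandwidth_gbps):
    """One-way transfer time in microseconds for a message of size_bytes."""
    serialization_us = size_bytes * 8 / (bandwidth_gbps * 1000)  # bits / Gbps -> us
    return latency_us + serialization_us

# For a small 64-byte synchronization message, latency dominates:
small_myrinet = transfer_time_us(64, latency_us=10, bandwidth_gbps=1.28)
small_ethernet = transfer_time_us(64, latency_us=100, bandwidth_gbps=1.0)  # assumed latency
print(round(small_myrinet, 1), round(small_ethernet, 1))  # 10.4 100.5
```

For tiny messages the serialization term is well under a microsecond, so cutting the fixed latency, which is exactly what these specialized NICs do, gives nearly the entire speedup.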
INTERCONNECTION NETWORKS
Myricom

Myricom offers cards and switches that interconnect at speeds of up to 1.28 Gbps in each direction. The cards come in two different forms, copper-based and optical. The copper version for LANs can communicate at full speed at a distance of 10 feet but can operate at half that speed at distances of up to 60 feet. Myrinet on fiber can operate at full speed up to 6.25 miles on single-mode fiber, or about 340 feet on multimode fiber. Myrinet offers only direct point-to-point, hub-based, or switch-based network configurations, but it is not limited in the number of switch fabrics that can be connected together; adding switch fabrics simply increases the latency between nodes. The average latency between two directly connected nodes is 5 to 18 microseconds, an order of magnitude or more faster than Ethernet.
Giganet
Giganet is the first vendor of Virtual Interface (VI) architecture cards for the Linux platform, in their cLAN cards and switches. The VI architecture is a platform-neutral software and hardware system that Intel has been promoting to create clusters. It uses its own network communications protocol rather than IP to exchange data directly between the servers, and it is not intended to be a WAN routable system. The future of VI now lies in the ongoing work of the System I/O Group, which in itself is a merger of the Next-Generation I/O group led by Intel, and the Future I/O Group led by IBM and Compaq. Giganet's products can currently offer 1 Gbps unidirectional communications between the nodes at minimum latencies of 7 microseconds.
seminar class (#12, 23-03-2011, 10:44 AM)
.doc   cluster 573.doc (Size: 106 KB / Downloads: 74)
What is a Beowulf Cluster?
It's a kind of high-performance massively parallel computer built primarily out of commodity hardware components, running a free-software operating system like Linux or Free BSD, interconnected by a private high-speed network.
Clusters classified according to their use
Types of clustering

The three most common types of clusters are high-performance scientific clusters, load-balancing clusters, and high-availability clusters.
Scientific clusters
The first type typically involves developing parallel programming applications for a cluster to solve complex scientific problems. That is the essence of parallel computing, although it does not use specialized parallel supercomputers that internally consist of tens to tens of thousands of separate processors. Instead, it uses commodity systems such as a group of single- or dual-processor PCs linked via high-speed connections and communicating over a common messaging layer to run those parallel applications. Thus, every so often you hear about another cheap Linux supercomputer coming out. That is actually a cluster of computers with the equivalent processing power of a real supercomputer, and a decent cluster configuration usually runs over $100,000, which may seem high for the average person but is still cheap compared with a multimillion-dollar dedicated supercomputer.
Load-balancing clusters
In a load-balancing cluster, the incoming load exceeds what a single node can handle, and thus the traffic needs to be sent to network server applications running on other nodes. The distribution can also be optimized according to the different resources available on each node or the particular environment of the network.
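The traffic distribution just described can be sketched with the simplest possible policy, round-robin dispatch. This is a hedged illustration; real load balancers also weigh node capacity and current load, and the names used here are invented for the example:

```python
# Minimal round-robin load-balancing sketch: each incoming request is
# handed to the next cluster node in turn.
from itertools import cycle

def round_robin_dispatch(requests, nodes):
    """Assign each request to the next node in rotation."""
    assignment = {}
    node_cycle = cycle(nodes)
    for req in requests:
        assignment[req] = next(node_cycle)
    return assignment

print(round_robin_dispatch(["r1", "r2", "r3", "r4"], ["nodeA", "nodeB"]))
# {'r1': 'nodeA', 'r2': 'nodeB', 'r3': 'nodeA', 'r4': 'nodeB'}
```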
High-availability clusters
High-availability clusters exist to keep the overall services of the cluster available as much as possible, taking into account redundancy of both the computing hardware and software. When the primary node in a high-availability cluster fails, it is replaced by a secondary node that has been waiting for that moment. That secondary node is usually a mirror image of the primary node, so that when it does replace the primary, it can completely take over its identity and thus keep the system environment consistent from the user's point of view.
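The failover mechanism behind this is typically heartbeat-based: the secondary watches for periodic heartbeats from the primary and promotes itself when they stop. The sketch below is a hedged, single-process illustration of that logic; the class name, method names, and timeout value are all assumptions made for the example:

```python
# Hedged failover sketch: the secondary promotes itself to the active role
# when the primary's heartbeats lapse beyond a timeout.
import time

class FailoverPair:
    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.active = "primary"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called periodically by the primary while it is healthy.
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        # Called by the secondary; it promotes itself if heartbeats lapsed.
        now = time.monotonic() if now is None else now
        if self.active == "primary" and now - self.last_heartbeat > self.timeout_s:
            self.active = "secondary"  # secondary assumes the primary's identity
        return self.active

pair = FailoverPair(timeout_s=3.0)
pair.heartbeat()
print(pair.check())                              # 'primary' while heartbeats are fresh
print(pair.check(now=pair.last_heartbeat + 5))   # 'secondary' after the timeout
```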
With each of those three basic types of clusters, hybrids often occur. You can find a high-availability cluster that also load-balances users across its nodes while still attempting to maintain a degree of high availability. Similarly, you can find a parallel cluster that also performs load balancing between the nodes, separately from what was programmed into the application. Although the clustering system itself is independent of the software or hardware in use, hardware connections play a pivotal role in running the system efficiently.
Cluster Classification according to Architecture
Clusters can be basically classified into two

o Close Clusters
o Open Clusters
Close Clusters
They hide most of the cluster behind the gateway node. Consequently they need fewer IP addresses and provide better security. They are well suited to computing tasks.
Open Clusters
All nodes can be seen from outside; hence they need more IP addresses and raise more security concerns. But they are more flexible and are used for server tasks.
Beowulf Cluster
Basically, the Beowulf architecture is a multi-computer architecture used for parallel computation applications. Beowulf clusters are therefore meant primarily for processor-intensive, number-crunching applications, and definitely not for storage applications. A Beowulf cluster consists of a server computer that controls the functioning of many client nodes connected together by Ethernet or another network comprising switches or hubs. One good feature of Beowulf is that all the system's components are available off the shelf; no special hardware is required to implement it. It also uses commodity software - most often Linux - and other commonly available components like Parallel Virtual Machine (PVM) and Message Passing Interface (MPI).
Besides serving all the client nodes in the Beowulf cluster, the server node also acts as a gateway to external users and passes files to the Beowulf system. The server is also used to drive the console of the system from where the various parameters and configuration can be monitored. In some cases, especially in very large Beowulf configurations, there is sometimes more than one server node with other specialized nodes that perform tasks like monitoring stations and additional consoles. In disk-less configurations, very often, the individual client nodes do not even know their own addresses until the server node informs them.
The major difference between the Beowulf clustering system and the more commonly implemented Cluster of Workstations (CoW) is the fact that Beowulf systems tend to appear as an entire unit to the external world and not as individual workstations. In most cases, the individual workstations do not even have a keyboard, mouse or monitor and are accessed only by remote login or through a console terminal. In fact, a Beowulf node can be conceptualized as a CPU+memory package that can be plugged into the Beowulf system - much like would be done with a motherboard.
It's important to realize that Beowulf is not a specific set of components or a networking topology or even a specialized kernel. Instead, it's simply a technology for clustering together Linux computers to form a parallel, virtual supercomputer.
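The server/client division of labour described above follows the classic master/worker pattern: the server node scatters chunks of a job and gathers the partial results. On a real Beowulf cluster this would be done across machines with MPI or PVM; the sketch below is a hedged stand-in that uses Python's multiprocessing pool on one machine to play the role of the client nodes, with invented function names:

```python
# Hedged master/worker sketch: the 'server node' splits a job into chunks
# and the 'client nodes' (here, worker processes) compute them in parallel.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one 'client node': sum its assigned slice."""
    return sum(chunk)

def master_sum(data, n_workers=4):
    """The 'server node' scatters chunks and gathers the partial results."""
    chunks = [data[i::n_workers] for i in range(n_workers)]  # scatter
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)             # compute
    return sum(partials)                                     # gather

if __name__ == "__main__":
    print(master_sum(list(range(1000))))  # 499500
```

The same scatter/compute/gather shape carries over directly to MPI programs, where `pool.map` is replaced by explicit message passing between the server and client nodes.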
Technicalities in the design of Cluster
Homogeneous and Heterogeneous Clusters.

A cluster can be made either of homogeneous machines, which have the same hardware and software configurations, or of heterogeneous machines with different configurations. Heterogeneous clusters face the problems of differing performance profiles and software configuration management.
