WINDOWS AND LINUX CLUSTER full report
project report tiger
26-02-2010, 11:39 PM
.doc   WINDOWS AND LINNUX CLUSTER PROJECT REPORT.doc (Size: 365.5 KB / Downloads: 100)

WINDOWS AND LINUX CLUSTER PROJECT REPORT
Submitted by
AKHIL K, JINSO JOSE, ROMERO A P, RONY THAMPI, SUMESH PARAKKAT
ABSTRACT
A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The most common applications of clustering are high availability and high performance.
The aim of the project is to build a high-performance cluster on both Windows and Linux platforms. Windows Compute Cluster Server 2003 is a high-performance computing solution that uses clustered commodity x64 servers built with a combination of the Microsoft Windows Server 2003 Compute Cluster Edition operating system and the Microsoft Compute Cluster Pack. Linux is a highly stable operating system (OS) that is used for many high-availability tasks, such as web and database servers. Linux clusters have also been built to serve as high-performance computation facilities.
CHAPTER 1
INTRODUCTION
1.1 ABOUT THE TOPIC
A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Clusters are mainly categorized as high-availability (HA) clusters, load-balancing clusters, and grid clusters.
High-availability clusters (also known as failover clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy.
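As a rough illustration (not part of the original report), the failover behaviour of a two-node HA cluster can be sketched in a few lines of Python. The node names, timeout value, and heartbeat mechanism here are hypothetical stand-ins for real cluster software:

```python
# Minimal sketch of two-node failover: the standby takes over when the
# primary's heartbeat goes stale. Node names and timings are illustrative.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before failover

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def beat(self):
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now):
        return (now - self.last_heartbeat) < HEARTBEAT_TIMEOUT

def active_node(primary, standby, now):
    """Return whichever node should serve requests right now."""
    return primary if primary.is_alive(now) else standby

primary = Node("node1")
standby = Node("node2")

# Primary is healthy: it serves.
print(active_node(primary, standby, time.monotonic()).name)

# Simulate a stale heartbeat: the standby takes over.
primary.last_heartbeat -= 10
print(active_node(primary, standby, time.monotonic()).name)
```

Real HA software layers fencing and service migration on top of this basic heartbeat idea.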
Load-balancing clusters operate by having all workload come through one or more load-balancing front ends, which then distribute it to a collection of back-end servers. Examples of job-scheduling and cluster-management software used with such clusters include Platform LSF HPC, Sun Grid Engine, Moab Cluster Suite, and Maui Cluster Scheduler.
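The front-end distribution idea can be sketched in Python with a simple round-robin dispatcher; the back-end names are hypothetical, and real load balancers add health checks and weighting on top of this:

```python
# Sketch of a load-balancing front end handing requests to back-end
# nodes round-robin. Back-end names are illustrative only.
from itertools import cycle

class LoadBalancer:
    def __init__(self, backends):
        self._backends = cycle(backends)  # endless round-robin iterator

    def dispatch(self, request):
        backend = next(self._backends)
        return backend, request

lb = LoadBalancer(["backend1", "backend2", "backend3"])
for i in range(4):
    backend, req = lb.dispatch(f"request-{i}")
    print(backend, req)  # wraps back to backend1 on the 4th request
```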
Grid clusters are a technology closely related to cluster computing. The key differences (by definitions which distinguish the two at all) between grids and traditional clusters are that grids connect collections of computers which do not fully trust each other, or which are geographically dispersed. Grids are thus more like a computing utility than like a single computer. In addition, grids typically support more heterogeneous collections than are commonly supported in clusters.
Figure 1.1 General cluster architecture
1.2 ABOUT THE PROJECT
High-performance computing is now within reach for many businesses by clustering industry-standard servers. These clusters can range from a few nodes to hundreds of nodes. In the past, wiring, provisioning, configuring, monitoring, and managing these nodes and providing appropriate, secure user access was a complex undertaking, often requiring dedicated support and administration resources. We build clusters using both Windows and Linux.
a. Microsoft Windows Compute Cluster Server 2003
Microsoft Windows Compute Cluster Server 2003 simplifies installation, configuration, and management, reducing the cost of compute clusters and making them accessible to a broader audience. Windows Compute Cluster Server 2003 is a high-performance computing solution that uses clustered commodity x64 servers that are built with a combination of the Microsoft Windows Server 2003 Compute Cluster Edition operating system and the Microsoft Compute Cluster Pack. The base operating system incorporates traditional Windows system management features for remote deployment and
cluster management. The Compute Cluster Pack contains the services, interfaces, and supporting software needed to create and configure the cluster nodes, as well as the utilities and management infrastructure. Individuals tasked with Windows Compute Cluster Server 2003 administration and management have the advantage of working within a familiar Windows environment, which helps enable users to quickly and easily adapt to the management interface.
Windows Compute Cluster Server 2003 is a significant step forward in reducing the barriers to deployment for organizations and individuals who want to take advantage of the power of a compute clustering solution.
• Integrated software stack. Windows Compute Cluster Server 2003 provides an integrated software stack that includes operating system, job scheduler, message passing interface (MPI) layer, and the leading applications for each target vertical.
• Better integration with IT infrastructure. Windows Compute Cluster Server 2003 integrates seamlessly with your current network infrastructure (for example, Active Directory®), enabling you to leverage existing organizational skills and technology.
• Familiar development environment. Developers can leverage existing Windows-based skills and experience to develop applications for Windows Compute Cluster Server 2003. Microsoft Visual Studio® is the most widely used integrated development environment (IDE) in the industry, and Visual Studio 2005 includes support for developing HPC applications, such as parallel compiling and debugging. Third-party hardware and software vendors provide additional compiler and math library options for developers seeking an optimized solution for existing hardware. Windows Compute Cluster Server 2003 supports the use of MPI with Microsoft's MPI stack, or the use of stacks from other vendors.
Figure 1.2 Windows cluster overview (a head node running Windows Server 2003 R2 Standard x64 Edition with the Compute Cluster Pack and SQL Server 2005 Standard Edition x64; a service node running Windows Server 2003 Enterprise Edition x86 with Automated Deployment Services, compute node images, DNS & DHCP, and Active Directory; and compute nodes running Windows Server 2003 Compute Cluster Edition with the Compute Cluster Pack, connected by public and private Gigabit Ethernet networks and an InfiniBand MPI network, with Internet access via the public network)
b. Diskless Linux Cluster for high-performance computations
Linux is a highly stable operating system (OS) that is used for many high-availability tasks, such as web and database servers. Linux clusters have also been built to serve as high-performance computation facilities. These clusters are commonly referred to as Beowulf clusters, although the term is not well defined; in general it denotes any group of Linux boxes in proximity (where the metric used to define proximity is itself subject to variation). The following aims are pursued:
• OS is Linux
• Diskless boot
• No hardware changes to nodes (no EPROMs)
• Homogeneous nodes
CHAPTER 2 REQUIREMENT ANALYSIS
2.1 INTRODUCTION
Requirement analysis is the process of gathering and interpreting facts, diagnosing problems and using the information to recommend improvements on the system. It is a problem solving activity that requires intensive communication between the system users and system developers.
Requirement analysis is an important phase of any system development process. The system is studied to the minutest detail and analyzed. The system analyst plays the role of an interrogator and delves deep into the working of the present system. The system is viewed as a whole and the inputs to the system are identified. The outputs from the organization are traced through the various processes that the inputs pass through in the organization.
A detailed study of these processes must be made using techniques such as interviews and questionnaires. The data collected from these sources must be scrutinized to arrive at a conclusion. The conclusion is an understanding of how the system functions. This system is called the existing system. The existing system is then subjected to close study, and the problem areas are identified. The designer now functions as a problem solver and tries to sort out the difficulties that the enterprise faces. The solutions are given as a proposal.
The proposal is then weighed with the existing system analytically and the best one is selected. The proposal is presented to the user for an endorsement by the user. The proposal is reviewed on user request and suitable changes are made. This loop ends as soon as the user is satisfied with the proposal.
2.2 PLAN YOUR CLUSTER
This step-by-step guide provides basic instructions on how to deploy a Windows compute cluster. Your cluster planning should cover the types of nodes that are required for a cluster, and the networks that you will use to connect the nodes. Although the instructions in this guide are based on one specific deployment, you should also consider your environment and the number and types of hardware you have available.
Your cluster requires three types of nodes:
Head node. A head node mediates all access to the cluster resources and acts as a single point for cluster deployment, management, and job scheduling. There is only one head node per cluster.
Service node. A service node provides standard network services, such as directory, DNS, and DHCP services, and also maintains and deploys compute node images to new hardware in the cluster. Only one service node is needed for the cluster, although you can have more than one service node for different roles in the cluster; for example, the image deployment service can be moved to a separate node.
Compute node. A compute node provides computational resources for the cluster. Compute nodes are provided jobs and are managed by the head node.
2.3 HARDWARE REQUIREMENT ANALYSIS
Hardware requirements for computers running Windows Compute Cluster Server 2003 are similar to those for Windows Server 2003, Standard x64 Edition. The hardware for each type of node is described below.
Hardware requirements for computers running a Linux cluster:
• Server: x86 CPU of at least 450 MHz, at least 256 MB of memory, two or more Fast Ethernet network interface cards (NICs), and at least 10 GB of hard disk space.
• Client: x86 CPU of at least 200 MHz, at least 128 MB of memory, and one Fast Ethernet network interface card.
• Fast Ethernet switch: An Ethernet hub is NOT acceptable; it is too slow for network booting and NFS. A Fast Ethernet switch reduces the collision domain and gives a much smoother deployment. The switch should have enough ports for all clients and the server.
1. Head Node: Head nodes generally provide one or more of the following functions: user node and control node, management node, installation node, and storage node.
2. Compute Node: The compute nodes form the heart of the cluster. The user, control, management, and storage nodes are all designed in support of the compute nodes. It is on the compute nodes that all computations are actually performed. These will be logically grouped, depending on the needs of the job and as defined by the job scheduler.
3. Network: Networking in clusters usually needs high bandwidth, high speed, and low latency. The most demanding communication occurs between the compute nodes. This section presents the protocol and technologies used to build a cluster solution.
• Fast Ethernet: Fast Ethernet and TCP/IP are the two standards most widely used for networking. They are simple and cost-effective, and the technology is always improving. Fast Ethernet works well with a 200 MHz Intel Pentium Pro based machine used as a Beowulf node, and can operate at either half-duplex or full-duplex. With full-duplex, data can be sent and received at the same time. Full-duplex transmission is deployed either between the ports on two switches, between a computer and a switch port, or between two computers. Full-duplex requires a switch; a hub will not work.
• Gigabit Ethernet: Gigabit Ethernet uses a modified version of the American National Standards Institute (ANSI) X3T11 Fibre Channel standard physical layer (FC-0) to achieve 1 gigabit per second of raw bandwidth. Gigabit Ethernet supports multimode and single-mode optical fiber and short-haul copper cabling. Fiber is ideal for connectivity between switches and servers and can reach a greater distance than copper. Cluster topologies involve at least one and possibly three different networks: public, private, and Message Passing Interface (MPI).
• Public network: An organizational network connected to the head node and, optionally, the cluster compute nodes. The public network is often the business or organizational network most users log on to to perform their work. All intra-cluster management and deployment traffic is carried on the public network unless a private network (and optionally an MPI network) also connects the cluster nodes.
• Private network: A dedicated network that carries intra-cluster communication between nodes. This network, if it exists, carries management, deployment, and MPI traffic if no MPI network exists.
• MPI network: A dedicated network, preferably high-bandwidth and low-latency, that carries parallel MPI application communication between cluster nodes. This network, if it exists, is usually the highest-bandwidth network of the three listed here.
4. Network Topology:
Windows Compute Cluster Server 2003 supports five different network topologies with one to three network interface cards (NICs) on each node. The five topologies supported are:
• Three NICs on each node. One NIC is connected to the public (corporate) network; one to a private, dedicated cluster management network; and one to a high-speed, dedicated Message Passing Interface (MPI) network.
• Three NICs on the head node and two on each of the compute nodes. The head node provides network address translation (NAT) between the compute nodes and the public network, with each compute node having a connection to the private network and a connection to a high-speed network such as the MPI network.
• Two NICs on each node. One NIC is connected to the public (corporate) network, and one is connected to the private, dedicated cluster network.
• Two NICs on the head node, and one on each of the compute nodes. The head node provides NAT between the compute nodes and the public network.
• A single NIC on each node, with all network traffic sharing the public network. In this limited networking scenario, RIS deployment of compute nodes is not supported, and each compute node must be manually installed and activated.
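The five topologies differ only in how many NICs each node type carries. As an illustration (the short labels below are informal, not official Microsoft topology names), a small lookup table makes it easy to estimate the NICs needed for a given cluster size:

```python
# NIC counts per node type for the five supported topologies described
# above. Dictionary keys are informal labels, not official names.
TOPOLOGIES = {
    "public+private+mpi": {"head": 3, "compute": 3},
    "nat+private+mpi":    {"head": 3, "compute": 2},
    "public+private":     {"head": 2, "compute": 2},
    "nat+private":        {"head": 2, "compute": 1},
    "public-only":        {"head": 1, "compute": 1},
}

def nics_required(topology, heads=1, computes=4):
    """Total NICs needed for a cluster under a given topology."""
    t = TOPOLOGIES[topology]
    return heads * t["head"] + computes * t["compute"]

print(nics_required("nat+private", computes=8))  # 1*2 + 8*1 = 10
```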
The Microsoft Message Passing Interface (MS-MPI) is a high-speed networking interface that runs over Gigabit Ethernet, InfiniBand, or any network that provides a WinSock Direct-enabled driver. MS-MPI is based on and compatible with the Argonne National Labs MPICH2 implementation of MPI2.
2.4 SOFTWARE REQUIREMENT ANALYSIS
In order to build an operational cluster, you will need several software components, including an operating system, drivers, libraries, compilers, and management tools.
• Windows Cluster:
The head and compute nodes for Windows Compute Cluster Server 2003 can run any of several operating systems; the software required for each node type is listed below, along with notes on where to obtain it.
Microsoft SQL Server™ 2005 Standard Edition x64: By default, the Compute Cluster Pack will install MSDE on the head node for data and node tracking purposes. Because MSDE is limited to eight concurrent connections, SQL Server 2005 Standard Edition is recommended for clusters with more than 64 compute nodes.
ADS version 1.1: ADS requires 32-bit versions of Windows Server 2003 Enterprise Edition for image management and deployment. Future Microsoft imaging technology (Windows Deployment Services, available in the next release of Windows Server, code name "Longhorn") will support 64-bit software.
MMC 3.0: MMC 3.0 is required for the administration node, which may or may not be the head node. It is automatically installed by the Compute Cluster Pack on the computer that is used to administer the cluster.
.NET Framework 2.0: The .NET Framework is automatically installed by the Compute Cluster Pack.
WinPE: You will need a copy of Windows Preinstallation Environment for Windows Server 2003 SP1. If you need to add your Gigabit Ethernet drivers to the WinPE image, you will need to obtain a copy of the Windows Server 2003 SP1 OEM Preinstallation Kit (OPK), which contains the programs needed to update the WinPE image for your hardware.
Sysprep.exe: Sysprep.exe is used to help prepare the compute node image prior to deployment. Sysprep is included as part of Windows Server 2003 Compute Cluster Edition. Note: You must use the x64 version of Sysprep in order to capture and deploy your images.
• Linux Cluster:
Debian GNU/Linux operating system: Debian is a Linux distribution which is licensed under the GPL. The latest Debian, Etch, is used for the installation.
Diskless Remote Boot in Linux (DRBL): DRBL is an NFS/NIS server providing a diskless or systemless environment for client machines. Installation is possible on a machine with a UNIX-like operating system. It uses distributed hardware resources and makes it possible for clients to fully access local hardware, thus making it feasible to use machines with less power. It can be used for cloning machines, providing network installation of Linux distributions, booting machines via PXE, and installing a Linux distribution via an installation script.
Sun's open source N1 Grid Engine 6: Sun Grid Engine (SGE), or GRD (Global Resource Director), is an open source batch-queuing system supported by Sun Microsystems. SGE is typically used on a computer farm or computer cluster and is responsible for accepting, scheduling, dispatching, and managing the remote execution of large numbers of standalone, parallel, or interactive user jobs. It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses.
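The accept-schedule-dispatch cycle of a batch-queuing system can be illustrated with a toy Python queue. This is only a sketch of the idea, not the SGE scheduling algorithm; jobs request processor slots and are dispatched FIFO as slots free up:

```python
# Toy batch queue in the spirit of SGE: jobs request processor slots
# and run FIFO as slots become available. A sketch, not real SGE.
from collections import deque

class BatchQueue:
    def __init__(self, total_slots):
        self.free_slots = total_slots
        self.pending = deque()   # accepted but not yet dispatched
        self.running = []

    def submit(self, job_id, slots):
        self.pending.append((job_id, slots))
        self.schedule()

    def schedule(self):
        # Dispatch pending jobs in order while slots suffice.
        while self.pending and self.pending[0][1] <= self.free_slots:
            job_id, slots = self.pending.popleft()
            self.free_slots -= slots
            self.running.append(job_id)

    def finish(self, job_id, slots):
        self.running.remove(job_id)
        self.free_slots += slots
        self.schedule()          # freed slots may admit waiting jobs

q = BatchQueue(total_slots=4)
q.submit("job1", 2)
q.submit("job2", 3)              # must wait: only 2 slots free
print(q.running, list(q.pending))
q.finish("job1", 2)              # job2 is now dispatched
print(q.running)
```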
CHAPTER 3 SYSTEM DESIGN
System design is the solution to the creation of a new system. This phase is composed of several steps and focuses on the detailed implementation of the feasible system. Its emphasis is on translating performance specifications into design specifications. System design has two phases of development: logical design and physical design.
During the logical design phase the analyst describes inputs (sources), outputs (destinations), databases (data stores), and procedures (data flows), all in a format that meets the user's requirements. The analyst also specifies the user needs at a level that virtually determines the information flow into and out of the system and the data resources. Here the logical design is done through data flow diagrams and database design.
The logical design is followed by physical design, or coding. Physical design produces the working system by defining the design specifications, which tell the programmers exactly what the candidate system must do. The programmers write the necessary programs that accept input from the user, perform the necessary processing on the accepted data, and produce the required report on hard copy or display it on the screen.
3.1. Windows Cluster
Windows Compute Cluster Server 2003 is a cluster of servers that includes a single head node and one or more compute nodes (see Figure 1). The head node controls and mediates all access to the cluster resources and is the single point of management, deployment, and job scheduling for the compute cluster. Windows Compute Cluster Server 2003 uses the existing corporate Active Directory infrastructure for security, account management, and overall operations management using tools such as Microsoft Operations Manager 2005 and Microsoft Systems Management Server 2003.
The Windows Compute Cluster Server 2003 installation involves installing the operating system on the head node, joining it to an existing Active Directory domain, and then installing the Compute Cluster Pack. If you'll be using RIS to automatically deploy compute nodes, RIS will be installed and configured as part of the To Do List after installation is complete.
When Compute Cluster Pack installation is complete, it will display a To Do List page that shows you the steps necessary to complete configuration of your compute cluster. These steps include defining the network topology, configuring RIS using the Configure RIS wizard, adding compute nodes to the cluster, and configuring cluster users and administrators.
Management and Deployment
One of the biggest issues that customers face today in adopting HPC solutions is the management and deployment of clusters and nodes. This problem has traditionally been a departmental or corporate-level problem, with a dedicated information technology (IT) professional staff to manage and deploy nodes, and users submitting batch jobs and competing for limited resources. The design goals for Windows Compute Cluster Server 2003 were to:
¢ Provide an appliance-like setup.
¢ Give clear, prescriptive guidance.
¢ Provide authentication and authorization mechanisms.
¢ Build a scriptable solution.
Windows Compute Cluster Server 2003 leverages Active Directory and MMC 3.0 to provide a simple and familiar interface for managing and administering the cluster. Integrating with Active Directory enables easy, role-based cluster management, with Cluster Admin and Cluster User roles. The new Compute Cluster Administrator has five major pages:
Start Page. Primarily a monitoring page, this page displays the number of nodes and their status, the number of processors in use and available, and job information, including the number of jobs and their status.
To Do List. This page is used to configure and administer the cluster, including networking, RIS, adding and removing nodes, and cluster security.
Node Management. This page displays information about nodes and jobs in the cluster and allows node tasks, such as approving a node, pausing or resuming a node, or rebooting a node.
Remote Desktop Sessions. This page is used to create and close remote desktop sessions on the compute nodes.
Performance Monitor. This page displays performance monitoring data from PerfMon, including processor time and job and processor statistics per node.
In addition to the Compute Cluster Administrator, there is a Compute Cluster Manager that is used for job submission and job management, and a Command Line Interface (CLI) that provides a command-line alternative for administering the cluster and managing jobs.
Setup and deployment are greatly simplified with Windows Compute Cluster Server 2003. Initial installation of the head node takes advantage of wizards to simplify and identify the necessary steps, while the use of RIS makes adding a compute node as simple as plugging it into the network and turning it on.
a. MPI
The Microsoft Message Passing Interface (MS-MPI) is a version of the Argonne National Labs open source MPI2 implementation that is widely used by existing HPC clusters. MS-MPI is compatible with the MPICH2 Reference Implementation and other MPI implementations and supports a full-featured API of more than 160 function calls.
The MS-MPI in Windows Compute Cluster Server 2003 leverages the WinSock Direct protocol for best performance and CPU efficiency. MS-MPI can utilize any Ethernet interconnect that is supported on Windows Server 2003 as well as low-latency and high-bandwidth interconnects, such as InfiniBand or Myrinet, through Winsock Direct drivers provided by the hardware manufacturers. Gigabit Ethernet provides a high-speed and cost-effective interconnect fabric, while InfiniBand is ideal for latency-sensitive and high-bandwidth applications.
MS-MPI includes support (bindings) for the C, Fortran77. and Fortran90 programming languages, and the latest release of Microsoft Visual Studio® includes a parallel debugger that works with MS-MPI. Developers can launch their MPI applications on multiple compute nodes from within the Visual Studio environment, and then Visual Studio will automatically connect the processes on each node, enabling the developer to individually pause and examine program variables on each node.
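MPI itself is a C/Fortran API, but the rank-based programming model it exposes can be sketched in plain Python without an MPI runtime: each of N processes computes the slice of the problem keyed by its rank, and a reduction combines the partial results. This is only a stand-in for what MPI_Comm_rank and MPI_Reduce do in a real MS-MPI program:

```python
# Rank-style work partitioning as in an MPI program: each "rank" sums
# its strided slice of the data, and the partial sums are combined as
# MPI_Reduce(..., MPI_SUM) would. A plain-Python sketch, not real MPI.
def local_sum(data, rank, size):
    """The slice of work rank `rank` would do out of `size` ranks."""
    return sum(data[rank::size])

def mpi_style_reduce(data, size):
    partials = [local_sum(data, r, size) for r in range(size)]
    return sum(partials)  # the reduced value seen at the root rank

data = list(range(100))
assert mpi_style_reduce(data, size=4) == sum(data)
print(mpi_style_reduce(data, size=4))  # 4950
```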
b. Scheduler
Windows Compute Cluster Server 2003 includes both a command-line job scheduler and the Compute Cluster Manager, which let users schedule jobs, allocate the resources needed for a job, and change the tasks and properties associated with a job.
The CLI supports a variety of languages, including Perl, Fortran, C/C++, C#, and Java. Jobs can comprise a single task or multiple tasks and can specify the number of processors required and whether those processors are needed exclusively or can be shared with other jobs and tasks.
The important distinguishing features of the scheduler include:
Error Recovery. This feature provides automatic retry of failed tasks and jobs and automatic routing around unresponsive nodes. Automatic detection of nodes that become responsive again is also provided.
Automated Cleanup. Each process associated with a job or task is tracked and proactively shut down on all compute nodes at the conclusion of the job or task, preventing "run away" processes on the compute nodes.
Security. Each job or task runs in the context of the submitting user and maintains security throughout the process.
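The error-recovery behaviour described above (retry plus routing around unresponsive nodes) can be sketched in Python. The node names and the health-check callable here are hypothetical; a real scheduler tracks node state from heartbeats rather than taking it as a parameter:

```python
# Sketch of scheduler error recovery: a task is retried on successive
# nodes, skipping unresponsive ones, up to an attempt limit.
def execute_with_retry(task, nodes, try_run, max_attempts=3):
    """Run `task`; try_run(node, task) -> True on success."""
    attempts = 0
    for node in nodes:
        if attempts >= max_attempts:
            break
        attempts += 1
        if try_run(node, task):
            return node           # task completed on this node
    raise RuntimeError(f"{task} failed after {attempts} attempts")

# node1 is unresponsive; the task is rerouted to node2.
unresponsive = {"node1"}
result = execute_with_retry(
    "task-A", ["node1", "node2", "node3"],
    try_run=lambda node, task: node not in unresponsive)
print(result)  # node2
```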
Security
With HPC clusters being adopted by a broad range of mainstream users for mission-critical applications, security and integration with the existing infrastructure is essential. Windows Compute Cluster Server 2003 leverages Active Directory to enable role-based security for all cluster jobs and administration. The scheduler runs each job under the context and credentials of the submitting user, not a super user, and all credentials are stored with the job and deleted at the completion of the job. This behavior enables the compute jobs to access network resources, such as file or database servers, in the context of the user and enables systems administrators to apply and audit security policies using the existing and familiar mechanisms in Active Directory.
All job management communications are done over encrypted and authenticated channels, and the credentials are known only to the node manager for the duration of the job. The compute process itself sees only a logon token, not the actual credentials, further isolating credentials and protecting their integrity.
Windows Compute Cluster Server 2003 helps provide end-to-end security over secure and encrypted channels throughout the job process when using MS-MPI. As the node manager schedules and assigns the job and tasks are spawned, the job always runs in the context of the scheduling user. This is an important addition to the MS-MPI implementation that is not part of the reference MPICH2 implementation.
3.2. Linux Cluster
According to the aims outlined above, there are some constraints with respect to the hardware usable for a Linux cluster. You need a main board that supports network boot via PXE, which is implemented through a Managed Boot Agent (MBA). This should not be a problem for most main boards these days. At the time the hardware was selected (autumn 2002) there was no cheap main board available with an onboard network interface card (NIC) supporting PXE, so a PCI NIC with PXE support had to be added. 3Com and Intel cards should be usable.
Creating a Linux cluster doesn't require a huge amount of resources, either in hardware or in time, but there is a bare minimum of hardware that you need to get started:
A moderately powerful machine (at least a Pentium 4 with 1 GB of RAM) to act as the master node. This really needs to be a dedicated machine, as this will allow users to submit jobs whether the cluster is currently running or not.
The master node obviously needs a network card, but if you want to give the master or the nodes internet access, or are planning to have more than about 40 nodes, you will need two or three network cards. Even high-end Gigabit ethernet cards are cheap these days, and the way your nodes will communicate with the master can make or break a successful cluster.
Network infrastructure to connect all of the nodes together. A 100Mbit network is the absolute minimum, particularly if you will have more than 10 nodes. Gigabit ethernet is recommended.
One or more node PCs which support PXE (network) booting. If the BIOS doesn't have an option to boot from the network card, or you don't see any messages about network booting when you start the PCs, then you need to install a card which supports PXE. These nodes don't need a hard drive, but if they have one (and you have some spare space) it can be used for a swap partition (virtual memory in Windows-speak). It helps if the nodes contain similar hardware (such as is found in most offices or university computer rooms), as this reduces the amount of configuration required (almost nobody likes to rebuild the Linux kernel 20 times).
a. Server configuration
The server is to be installed with a Linux operating system (OS). After installation of the OS, the following services have to be installed and configured:
DHCP
TFTP
PXE
NFS
A standard Debian distribution, downloaded from the Internet, was installed on the server. During installation it was ensured that the DHCP and TFTP software packages were installed.
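For PXE network boot, the DHCP server must hand out not only an IP address but also the TFTP server address and the boot-file name. A minimal ISC dhcpd.conf fragment might look like the following; the subnet, address range, and boot-loader filename are example values and must be adjusted to the actual network and boot loader used:

```
# Example ISC DHCP configuration for PXE-booting diskless clients.
# Subnet, range, and boot filename are illustrative values only.
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    next-server 192.168.1.1;        # TFTP server holding the boot image
    filename "pxelinux.0";          # PXE boot loader served via TFTP
}
```

The client firmware fetches the named boot loader over TFTP and then mounts its root filesystem over NFS.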
CHAPTER 4 INSTALLATION
4.1. WINDOWS CLUSTER
To install, configure, and tune a high-performance compute cluster, complete the following steps:
1. Install and configure the service node.
2. Install and configure ADS on the service node.
3. Install and configure the head node.
4. Install the Compute Cluster Pack.
5. Define the cluster topology.
6. Create the compute node image.
7. Capture and deploy image to compute nodes.
8. Configure and manage the cluster.
9. Deploy the client utilities to cluster users.
Step 1: Install and Configure the Service Node
The service node provides all the back-end network services for the cluster, including authentication, name services, and image deployment. It uses standard Windows technology and services to manage your network infrastructure. The service node has two Gigabit Ethernet network adapters and no MPI adapters. One adapter connects to the public network; the other connects to the private network dedicated to the cluster.
There are five tasks that are required for installation and configuration:
1. Install and configure the base operating system.
2. Install Active Directory, Domain Name Services (DNS), and DHCP.
3. Configure DNS.
4. Configure DHCP.
5. Enable Remote Desktop for the cluster.
1. Install and configure the base operating system. Follow the normal setup procedure for Windows Server 2003 R2 Enterprise Edition, with the exceptions as noted in the following procedure.
To install and configure the base operating system
Boot the computer to the Windows Server 2003 R2 Enterprise Edition CD.
1. Accept the license agreement.
2. On the Partition List screen, create two partitions: one partition of 30 GB, and a second using the remainder of the space on the hard drive. Select the 30 GB partition as the install partition, and then press ENTER.
3. On the Format Partition screen, accept the default of NTFS, and then press ENTER. Proceed with the remainder of the text-mode setup. The computer then reboots into graphical setup mode.
4. On the Licensing Modes page, select the option for which you are licensed, and then configure the number of concurrent connections if needed. Click Next.
5. On the Computer Name and Administrator Password page, type a name for the service node (for example, SERVICENODE). Type your local administrator password twice, and then press ENTER.
6. On the Networking Settings page, select Custom settings, and then click Next.
7. On the Networking Components page for your private adapter, select Internet Protocol (TCP/IP), and then click Properties. On the Internet Protocol (TCP/IP) Properties page, select Use the following IP address. Configure the adapter with a static nonroutable address, such as 10.0.0.1, and a 24-bit subnet mask (255.0.0.0). Select Use the following DNS server addresses, and then configure the adapter to use 127.0.0.1. Click OK. and then click Next.
8. Repeat the previous step for the public adapter. Configure the adapter to acquire its address by using DHCP from the public network. If you prefer, you can assign it a static address if you have one already reserved. Configure the public-adapter to use 127.0.0.1 for DNS queries. Click OK. and then click Next.
9. On the Workgroup or Computer Domain page, accept the default of No and the default of WORKGROUP, and then click Next. The computer will copy files, and then reboot.
10. Log in to the server as administrator. Click Start, click Run. type diskmgmt.msc, and then click OK. The Disk Management console starts.
1.1. Right-click the second partition on your drive, and then click Format. In the Format dialog box, select Quick Format, and then click OK. When the format process is finished, close the Disk Management console.
2. Install Active Directory, DNS, and DHCP. Windows Server 2003 provides a wizard to configure your server as a typical first server in a domain. The wizard configures your server as a root domain controller, installs and configures DNS, and then installs and configures DHCP.
To install Active Directory, DNS, and DHCP
1. Log in to your service node as Administrator. If the Manage Your Server page is not visible, click Start, and then click Manage Your Server.
2. Click Add or remove a role. The Configure Your Server Wizard starts. Click Next.
3. On the Configuration Options page, select Typical configuration for a first server, and then click Next.
4. On the Active Directory Domain Name page, type the domain name that will be used for your cluster and append the ".local" suffix (for example, HPCCluster.local). Click Next.
5. On the NetBIOS Domain Name page, accept the default NetBIOS name (for example, HPCCLUSTER) and click Next. On the Summary of Selections page, click Next. If the Configure Your Server Wizard prompts you to close any open programs, click OK.
6. On the NAT Internet Connection page, make sure the public adapter is selected. Deselect Enable security on the selected interface, and then click Next. If you have more than two network adapters in your computer, the Network Selection page appears. Select the private LAN adapter, and then click Next. Click Finish. After the files are copied, the server reboots.
7. After the server reboots, log on as Administrator. Review the actions listed in the Configure Your Server Wizard, and then click Next. Click Finish.
3. Configure DNS. DNS is required for the cluster and will be used by people who want to use the cluster. It is linked to Active Directory and manages the node names that are in use. DNS must be configured so that name resolution will function properly on your cluster. The following task helps to configure your DNS settings for your private and public networks.
To configure DNS
1. Click Start, and then click Manage Your Server. In the DNS Server section, click Manage this DNS server. You can also start the DNS Management console by clicking Start, clicking Administrative Tools, and then clicking DNS.
2. Right-click your server, and then click Properties.
3. Click the Interfaces tab. Select Only the following IP addresses. Select the public interface, and then click Remove. Only the private interface should be listed. If it is not, type the IP address of the private interface, and then click Add. This ensures that your service node will provide DNS services only to the private network and not to addresses on the rest of your network. Click Apply.
4. Click the Forwarders tab. If the public interface is using DHCP, confirm that the forwarder IP list has the IP address for a DNS server in your domain. If not, or if you are using a static IP address, type the IP address for a DNS server on your public network, and then click Add. This ensures that if the service node cannot resolve name queries, the request will be forwarded to another name server on your network. Click OK.
5. In the DNS Management console, select Reverse Lookup Zones. Right-click Reverse Lookup Zones, and then click New Zone. The New Zone Wizard starts. Click Next.
6. On the Zone Type page, select Primary zone, and then select Store the zone in Active Directory. Click Next.
7. On the Active Directory Zone Replication Scope page, select To all domain controllers in the Active Directory domain. Click Next.
8. On the Reverse Lookup Zone Name page, select Network ID, and then type the first three octets of your private network's IP address (for example, 10.0.0). A reverse name lookup is automatically created for you. Click Next.
9. On the Dynamic Update page, select Allow only secure dynamic updates. Click Next.
10. On the Completing the New Zone Wizard page, click Finish. The new reverse lookup zone is added to the DNS Management console. Close the DNS Management console.
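The zone name that the wizard generates from the network ID follows the standard in-addr.arpa convention: the octets are reversed and the suffix is appended. A minimal sketch of that mapping (the function name is illustrative):

```python
# Sketch of how a reverse lookup zone name is derived from a network ID:
# reverse the octets and append ".in-addr.arpa".
def reverse_zone(network_id: str) -> str:
    """Build a reverse lookup zone name from a network ID such as '10.0.0'."""
    octets = network_id.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_zone("10.0.0"))  # 0.0.10.in-addr.arpa
```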
4. Configure DHCP. Your cluster requires automated IP addressing services to keep node traffic to a minimum. Active Directory and DHCP work together so that network addressing and resource allocation will function smoothly on your cluster. DHCP has already been configured for your cluster network. However, if you want finer control over the number of IP addresses available and the information provided to DHCP clients, you must delete the current DHCP scope and create a new one, using settings that reflect your cluster deployment.
To configure DHCP
1. Click Start, and then click Manage Your Server. In the DHCP Server section, click Manage this DHCP server. You can also start the DHCP Management console by clicking Start, clicking Administrative Tools, and then clicking DHCP.
2. Right-click the scope name (for example, Scope [10.0.0.0] Scope1), and then click Deactivate. When prompted, click Yes. Right-click the scope again, and then click Delete. When prompted, click Yes. The old scope is deleted.
3. Right-click your server name and then click New Scope. The New Scope Wizard starts. Click Next.
4. On the Scope Name page, type a name for your scope (for example, "HPC Cluster") and a description for your scope. Click Next.
5. On the IP Address Range page, type the start and end ranges for your cluster. For example, the start address would be the same address used for the private adapter: 10.0.0.1. The end address depends on how many nodes you plan to have in your cluster. For up to 250 nodes, the end address would be 10.0.0.254. For 250 to 500 nodes, the end address would be 10.0.1.254. For the subnet mask, you can either increase the length to 16, or type in a subnet mask of 255.255.0.0. Click Next.
6. On the Add Exclusions page, you define a range of addresses that will not be handed to computers at boot time. The exclusion range should be large enough to include all devices that use static IP addresses. For this example, type the start address of 10.0.0.1 and an end address of 10.0.0.9. Click Add, and then click Next.
7. On the Lease Duration page, accept the defaults, and then click Next.
8. On the Configure DHCP Options page, select Yes, I want to configure these options now, and then click Next.
9. On the Router (Default Gateway) page, type the private network adapter address (for example, 10.0.0.1), and then click Add. Click Next.
10. On the Domain Name and DNS Servers page, in the Parent domain text box, type your domain name (for example, HPCCluster.local). In the Server name text box, type the server name (for example, SERVICENODE). In the IP address fields, type the private network adapter address (for example, 10.0.0.1). Click Add, and then click Next.
11. On the WINS Servers page, click Next.
12. On the Activate Scope page, select Yes, I want to activate this scope now, and then click Next.
13. On the Completing the New Scope Wizard page, click Finish. Close the DHCP Management console.
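As a sanity check on the scope configured above, the arithmetic works out as follows; this short sketch (names are illustrative) counts the leasable addresses left after the static exclusion range is removed:

```python
import ipaddress

def scope_size(start: str, end: str) -> int:
    """Number of addresses in an inclusive DHCP range."""
    return int(ipaddress.IPv4Address(end)) - int(ipaddress.IPv4Address(start)) + 1

# The example scope from the text: 10.0.0.1-10.0.0.254, minus the
# exclusion range 10.0.0.1-10.0.0.9 reserved for statically addressed devices.
leasable = scope_size("10.0.0.1", "10.0.0.254") - scope_size("10.0.0.1", "10.0.0.9")
print(leasable)  # 245 addresses left for compute nodes
```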
5. Enable Remote Desktop for the cluster. You can enable Remote Desktop for nodes on your cluster so that you can log on remotely and manage services by using the node's desktop.
To disable Windows Firewall and enable Remote Management for the domain
1. Click Start, click Administrative Tools, and then click Active Directory Users and Computers.
2. Right-click your domain (for example, hpccluster.local), click New, and then click Organizational Unit.
3. Type the name of your new OU (for example, Cluster Servers), and then click OK. A new OU is created in your domain.
4. Right-click your OU, and then click Properties. The OU Properties dialog appears. Click the Group Policy tab. Click New. Type the name for your new Group Policy (for example, Enable Remote Desktop), and then press ENTER.
5. Click Edit. The Group Policy Object Editor opens. Browse to Computer Configuration \ Administrative Templates \ Windows Components \ Terminal Services.
6. Double-click Allow users to connect remotely using Terminal Services. Click Enabled and then click OK. Close the Group Policy Object Editor.
7. On the OU Properties page, on the Group Policy tab, select your new Group Policy, and then click Options. Click No Override, and then click OK. You have created a new Group Policy for your OU that enables Remote Desktop. Click OK.
Step 2: Install and Configure ADS on the Service Node
ADS is used to install compute node images on new hardware with little or no input from the cluster administrator. This automated procedure makes it easy to set up and install new nodes on the cluster, or to replace failed nodes with new ones. To install and configure ADS, perform the following procedures:
1. Copy and update the WinPE binaries.
2. Copy and edit the script files.
3. Install and configure ADS.
4. Share the ADS certificate.
5. Import ADS templates.
6. Add devices to ADS.
1. Copy and update the WinPE binaries. The WinPE binaries provide a simple operating system for the ADS Deployment Agent and the scripting engine to run scripts against the node. Because the WinPE binaries are based on the installation files that are found on the Windows Server 2003 CD, the driver cabinet files may not include the drivers for your Gigabit Ethernet adapters. If your adapter is not recognized during installation and configuration of your compute node image, you will need to update the WinPE binaries with the necessary adapter drivers and information files.
To copy and update the WinPE binaries
1. Create a C:\WinPE folder on your service node. Copy the WinPE binaries to C:\WinPE.
2. To update your WinPE binaries with the drivers and information files for your adapter, create a C:\Drivers folder on your service node. Copy the .sys, .inf, and .cat files for your driver to C:\Drivers.
3. Click Start, click Run, type cmd, and then click OK. A command prompt window opens.
4. Change directories to C:\WinPE\.
5. Type drvinst.exe /inf:c:\drivers\<filename>.inf c:\WinPE, where <filename> is the file name for your driver's .inf file, and then press ENTER. Your WinPE binaries are now updated with the drivers for your Gigabit Ethernet adapter.
2. Copy and edit the script files. The following procedure copies the deployment scripts, task sequences, and Sysprep files to the service node and customizes them for your cluster.
To copy and edit the script files
1. Create the folder C:\HPC-CCS. Create three new folders within the HPC-CCS folder: C:\HPC-CCS\Scripts, C:\HPC-CCS\Sequences, and C:\HPC-CCS\Sysprep. Create the folder C:\HPC-CCS\Sysprep\I386.
2. Copy the files AddADSDevices.vbs, ChangeIPforIB.vbs, and AddComputeNodes.csv (or the name of your input file) into C:\HPC-CCS\Scripts. Copy Capture-CCS-image-with-winpe.xml and Deploy-CCS-image-with-winpe.xml into C:\HPC-CCS\Sequences. Copy sysprep.inf into C:\HPC-CCS\Sysprep.
3. Insert the Windows Server 2003 Compute Cluster Edition CD into the CD drive. Browse to the CD folder \Support\Tools. Double-click Deploy.cab. Copy the files sysprep.exe and setupcl.exe to the C:\HPC-CCS\Sysprep\I386 folder. You must use the 64-bit versions of these files or the image capture script will not work.
4. Use the chart in Table 4 to edit the file AddComputeNodes.csv (or the name of your input file) and use the values for your company, your administrator password information, your product key, MAC addresses, and MachineOU values. The easiest way to work with this file, especially for entering the MAC addresses, is to import it into Excel as a comma-delimited file, add the necessary values, and then export the data as a comma-separated value file.
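Because hand-editing the CSV is error-prone, a script can generate it instead. Table 4 is not reproduced in this report, so the column names and values below are illustrative assumptions only, not the real layout:

```python
import csv
import io

# Hypothetical column layout -- the real AddComputeNodes.csv columns come
# from Table 4 in the guide; these names and values are illustrative only.
fields = ["ComputerName", "MACAddress", "AdminPassword", "ProductKey", "MachineOU"]
rows = [
    ["NODE001", "00-0D-56-00-00-01", "P@ssw0rd",
     "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",
     "OU=Cluster Servers,DC=HPCCluster,DC=local"],
    ["NODE002", "00-0D-56-00-00-02", "P@ssw0rd",
     "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",
     "OU=Cluster Servers,DC=HPCCluster,DC=local"],
]

# Write to an in-memory buffer; in practice this would be a file on disk.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(fields)
writer.writerows(rows)
print(buf.getvalue())
```

The csv module quotes the MachineOU field automatically because it contains commas, which is one less thing to get wrong than when editing by hand.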
3. Install and configure ADS. You can download the ADS binaries from Microsoft, and then either copy them to your service node or burn them onto a CD.
To install and configure ADS
1. Browse to the CD or the folder containing the ADS binaries and then run ADSSetup.exe.
2. A Welcome page appears. Click Install Microsoft SQL Server Desktop Engine SP4 (Windows). The setup program automatically installs the MSDE software.
3. On the Welcome page, click Install Automated Deployment Services. The Automated Deployment Services Setup Wizard starts. Click Next.
4. On the License Agreement page, select I accept the terms of the license agreement, and then click Next.
5. On the Setup Type page, select Full installation, and then click Next.
6. The Installing PXE warning dialog appears. Click OK, and then click Next.
7. On the Configure the ADS Controller page, make sure that Use Microsoft SQL Server Desktop Engine (Windows) is selected, and that Create a new ADS database is selected. Click Next.
8. On the Network Boot Service Settings page, make sure that Use this path is selected. Insert the Windows Server 2003 R2 Enterprise Edition x86 CD into the drive. Browse to the CD drive, or type the drive containing the CD, and then click Next.
9. On the Windows PE Repository page, select Location of Windows PE. Browse to the folder containing the WinPE binaries (for example, C:\WinPE). In the Repository name text box, type a name for your repository (for example, NodeImages). Click Next.
10. On the Image Location page, type the path to the folder where the images will be stored. These must be on the second partition that you created on your server (for example, E:\Images). The folder will be created and shared automatically. Click Next.
11. If the ADS Setup Wizard detects more than one network adapter in your computer, the Network Settings for ADS Services page is displayed. In the Bind to this IP address drop-down list, select the IP address that the ADS services will use to distribute images on the private network, and then click Next.
12. On the Installation Confirmation page, click Install.
13. On the Completing the Automated Deployment Services Setup Wizard page, click Finish. Close the Automated Deployment Services Welcome dialog box.
14. To open the ADS Management console, click Start, click All Programs, click Microsoft ADS, and then click ADS Management.
15. Expand the Automated Deployment Services node, and then select Services. In the center pane, right-click Controller Services, and then click Properties. On the Controller Service Properties page, select the Service tab, and then change Global job template to boot-to-winpe. For the Device Identifier, select MAC Address. For the WinPE Repository Name, type NodeImages or the repository name that you created earlier. Click Apply, and then click OK.
16. In the ADS Management console, right-click Image Distribution Service, and then click Properties. Select the Service tab, and ensure that Multicast image deployment is selected. Click OK.
4. Share the ADS certificate. ADS creates a computer certificate when it is installed. This certificate is used to identify all computers in the cluster. The certificate must be shared so that the compute node image can import the certificate and then use it during the configuration process.
To share the ADS certificate
1. Click Start, click Administrative Tools, and then click Server Management. The Server Management console opens.
2. Click Shared Folders, and then click New File Share. The Share a Folder Wizard starts. Click Next.
3. On the Folder Path page, click Browse, and then browse to C:\ Program Files\ Microsoft ADS\ Certificate. Click Next.
4. On the Name, Description, and Settings page, accept the defaults, and then click Next.
5. On the Permissions page, accept the defaults, and then click Finish. Click Close, and then close the Server Management console. The ADS certificate is shared on your network.
5. Import ADS templates. ADS includes several templates that are useful when managing your nodes, including reboot-to-winpe and reboot-to-hd. The templates are not installed by default; you must add them to ADS using a batch file. You also need to add the compute cluster templates to ADS so that you can capture and deploy the compute node image on your network.
To import ADS templates
1. Open Windows Explorer and browse to C:\ Program Files\ Microsoft ADS\ Samples\ Sequences.
2. Double-click create-templates.bat. The script file automatically installs the templates in ADS. Close Windows Explorer.
3. Click Start, click All Programs, click Microsoft ADS, and then click ADS Management. The ADS Management console opens.
4. Browse to Job Templates. Right-click Job Templates, and then click New Job Template. The New Job Template Wizard starts. Click Next.
5. On the Template Type page, select An entirely new template, and then click Next.
6. On the Name and Description page, type a name for the compute node capture template (for example, Capture Compute Node). Type a description (for example, Run within Windows Server CCE), and then click Next.
7. On the Command Type page, select Task sequence, and then click Next.
8. On the Script or Executable Program page, browse to C:\hpc-ccs\sequences. Select All files from the Files of type drop-down list. Select Capture-CCS-image-with-winpe.xml, and then click Open. Click Next.
9. On the Device Destination page, select None, and then click Next. Click Finish. Your capture template is added to ADS.
10. Repeat steps 4 through 9. In step 6, use Deploy Compute Node and Run from WinPE as the name and description. In step 8, select the file Deploy-CCS-image-with-winpe.xml. When finished, you have added the deployment template to ADS.
6. Add devices to ADS. The following procedure populates the ADS server with the compute node entries listed in your input file.
To add devices to ADS
1. Populate the ADS server with ADS devices. Click Start, click Run, type cmd.exe, and then click OK. Change the directory to C:\HPC-CCS\Scripts.
2. Type AddADSDevices.vbs AddComputeNodes-Sample.csv (use the name of your input file instead of the sample file name). The script will echo the nodes as they are added to the ADS server. When the script is finished, close the command window.
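Before running AddADSDevices.vbs, it can help to verify that every MAC address in the input file is well-formed, since a malformed entry would create an unusable device record. A hypothetical pre-flight check, assuming hyphen- or colon-separated MAC formats:

```python
import re

# Matches the common "aa-bb-cc-dd-ee-ff" and "aa:bb:cc:dd:ee:ff" MAC forms.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[-:]){5}[0-9A-Fa-f]{2}$")

def bad_macs(macs):
    """Return the entries that do not look like MAC addresses.

    Hypothetical helper for checking the MAC column of the input CSV
    before handing the file to AddADSDevices.vbs.
    """
    return [m for m in macs if not MAC_RE.match(m)]

print(bad_macs(["00-0D-56-00-00-01", "00:0D:56:00:00:02", "not-a-mac"]))
# ['not-a-mac']
```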
If your company uses a proxy server to connect to the Internet, you should configure your server so that it can receive system and application updates from Microsoft.
1. To configure your proxy server settings, open Internet Explorer. Click Tools, and then click Internet Options.
2. Click the Connections tab, and then click LAN Settings.
3. On the Local Area Network (LAN) Settings page, select Use a proxy server for your LAN. Enter the URL or IP address for your proxy server.
4. If you need to configure secure HTTP settings, click Advanced, and then enter the URL and port information as needed.
5. Click OK three times, and then close Internet Explorer.
When you have finished configuring your server, click Start, click All Programs, and then click Windows Update. This will ensure that your server is up-to-date with service packs and software updates that may be needed to improve performance and security.
Step 3: Install and Configure the Head Node
The head node is responsible for managing the compute cluster nodes, performing job control, and acting as the gateway for submitted and completed jobs. It requires SQL Server 2005 Standard Edition as part of the underlying service and support structure. You should consider using three hard drives for your head node: one for the operating system, one for the SQL Server database, and one for the SQL Server transaction logs. This will provide reduced drive contention, better overall throughput, and some transactional redundancy should the database drive fail.
In some cases, enabling hyperthreading on the head node will also result in improved performance for heavily-loaded SQL Server applications.
There are two tasks that are required for installing and configuring your head node:
1. Install and configure the base operating system.
2. Install and configure SQL Server 2005 Standard Edition.
*> To install and configure the base operating system
1. On the head node computer, boot to the Windows Server 2003 R2 Standard Edition x64 CD.
2. Accept the license agreement.
3. On the Partition List screen, create two partitions: one partition of 30 GB, and a second that uses the remainder of the space on the hard drive. Select the 30 GB partition as the install partition, and then press ENTER.
4. On the Format Partition screen, accept the default of NTFS, and then press ENTER. Proceed with the remainder of the text-mode setup. The computer then reboots into graphical setup mode.
5. On the Licensing Modes page, select the option for which you are licensed, and then configure the number of concurrent connections, if needed. Click Next.
6. On the Computer Name and Administrator Password page, type a name for the head node (for example, HEADNODE). Type your local administrator password twice, and then press ENTER.
7. On the Networking Settings page, select Typical settings, and then click Next. This will automatically assign addresses to your public and private adapters. If you want to use static IP addresses for either interface, select Custom Settings, and then click Next. Follow the steps that you used to configure your service node adapter settings.
8. On the Workgroup or Computer Domain page, select Yes, make this computer a member of a domain. Type the name of your cluster domain (for example, HPCCluster.local), and then click Next. When prompted, type the name and the password for an account that has permission to add computers to the domain (typically, the Administrator account), and then click OK. Note: If your network adapter drivers are not included on the Windows Server 2003 CD, then you will not be able to join a domain at this time. Instead, make the computer a member of a workgroup, complete the rest of setup, install your network adapters, and then join your head node to the domain.
When you have configured the base operating system, you can install SQL Server 2005 Standard Edition on your head node.
To install and configure SQL Server 2005 Standard Edition
1. Log on to your server as Administrator. Insert the SQL Server 2005 Standard Edition x64 CD into the head node. If setup does not start automatically, browse to the CD drive and then run setup.exe.
2. On the End User License Agreement page, select I accept the licensing terms and conditions, and then click Next.
3. On the Installing Prerequisites page, click Install. When the installations are complete, click Next. The Welcome to the Microsoft SQL Server Installation Wizard starts. Click Next.
4. On the System Configuration Check page, the installation program displays a report with potential installation problems. You do not need to install IIS or address any IIS-related warnings because IIS is not used in this deployment. Click Next.
5. On the Registration Information page, complete the Name and Company fields with the appropriate information, and then click Next.
6. On the Components to Install page, select all check boxes, and then click Next.
7. On the Instance Name page, select Named instance, and then type COMPUTECLUSTER in the text box. The instance must have this name, or Windows Compute Cluster Server will not work. Click Next.
8. On the Service Account page, select Use the built-in System account, and then select Local system in the drop-down list. In the Start services at the end of setup section, select all options except SQL Server Agent, and then click Next.
9. On the Authentication Mode page, select Windows Authentication Mode. Click Next.
10. On the Collation Settings page, select SQL collations, and then select Dictionary order case-insensitive for use with 1252 Character Set from the drop-down list. Click Next.
11. On the Error and Usage Report Settings page, click Next.
12. On the Ready to Install page, click Install. When the Setup Progress page appears, click Next.
13. On the Completing Microsoft SQL Server 2005 Setup page, click Finish.
14. Open the Disk Management console. Click Start, click Run, type diskmgmt.msc, and then click OK.
15. Right-click the second partition on your drive, and then click Format. In the Format dialog box, select Quick Format, and then click OK. When the format process finishes, close the Disk Management console.
If your company uses a proxy server to connect to the Internet, you should configure your head node so that it can receive system and application updates from Microsoft.
1. To configure your proxy server settings, open Internet Explorer. Click Tools, and then click Internet Options.
2. Click the Connections tab, and then click LAN Settings.
3. On the Local Area Network (LAN) Settings page, select Use a proxy server for your LAN. Enter the URL or IP address for your proxy server.
4. If you need to configure secure HTTP settings, click Advanced, and then enter the URL and port information as needed.
5. Click OK three times, and then close Internet Explorer.
When you have finished configuring your server, click Start, click All Programs, and then click Windows Update. This will ensure that your server is up-to-date with service packs and software updates that may be needed to improve performance and security. You should elect to install Microsoft Update from the Windows Update page. This service provides service packs and updates for all Microsoft applications, including SQL Server. Follow the instructions on the Windows Update page to install the Microsoft Update service.
Step 4: Install the Compute Cluster Pack
When the head node has been configured, you can install the Compute Cluster Pack, which contains the services, interfaces, and supporting software that is needed to create and configure cluster nodes. It also includes utilities and management infrastructure for your cluster.
To install the Compute Cluster Pack
1. Insert the Compute Cluster Pack CD into the head node. The Microsoft Compute Cluster Pack Installation Wizard appears. Click Next.
2. On the Microsoft Software License Terms page, select I accept the terms in the license agreement, and then click Next.
3. On the Select Installation Type page, select Create a new compute cluster with this server as the head node. Do not use the head node as a compute node. Click Next.
4. On the Select Installation Location page, accept the default. Click Next.
5. On the Install Required Components page, a list of required components for the installation appears. Each component that has been installed will appear with a check next to it. Select a component without a check, and then click Install.
6. Repeat the previous step for all uninstalled components. When all of the required components have been installed, click Next. The Microsoft Compute Cluster Pack Installation Wizard completes. Click Finish.
Step 5: Define the Cluster Topology
After the Compute Cluster Pack installation for the head node is complete, a Cluster Deployment Tasks window appears with a To Do List. In this procedure, you will configure the cluster to use a network topology that consists of a single private network for the compute nodes and a public interface from the head node to the rest of the network.
To define the cluster topology
1. On the To Do List page, in the Networking section, click Configure Cluster Network Topology. The Configure Cluster Network Topology Wizard starts. Click Next.
2. On the Select Setup Type page, select Compute nodes isolated on private network from the drop-down list. A graphic appears that shows you a representation of your network. You can learn more about the different network topologies by clicking the Learn more about this setup link. When you have reviewed the information, click Next.
3. On the Configure Public Network page, select the correct public (external) network adapter from the drop-down list. This network will be used for communicating between the cluster and the rest of your network. Click Next.
4. On the Configure Private Network page, select the correct private (internal) adapter from the drop-down list. This network will be used for cluster management and node deployment. Click Next.
5. On the Enable NAT Using ICS page, select Disable Internet Connection Sharing for this cluster. Click Next.
6. Review the summary page to ensure that you have chosen an appropriate network configuration, and then click Finish. Click Close.
Step 6: Create the Compute Node Image
You can now create a compute node image. This is the compute node image that will be captured and deployed to each of the compute nodes. There are three tasks that are required to create the compute node image:
1. Install and configure the base operating system.
2. Install and configure the ADS agent and Compute Cluster Pack.
3. Update the image and prepare it for deployment.
To install and configure the base operating system
1. Start the node that you want to use to create your compute node image. Insert the Microsoft Windows Server 2003 Compute Cluster Edition CD into the CD drive. Text-mode setup launches automatically.
2. Accept the license agreement.
3. On the Partition List screen, create one partition of 16 GB. Select the 16 GB partition as the install partition, and then press ENTER.
4. On the Format Partition screen, accept the default of NTFS, and then press ENTER. Proceed with the remainder of the text-mode setup. The computer then reboots into graphical setup mode.
5. On the Licensing Modes page, select the option for which you are licensed, and then configure the number of concurrent connections, if needed. Click Next.
6. On the Computer Name and Administrator Password page, type a name for the compute node that has not been added to ADS (for example, NODE000). Type your local administrator password twice, and then press ENTER.
7. On the Networking Settings page, select Typical settings, and then click Next. This will automatically assign addresses to your public and private adapters. The adapter information for the deployed nodes will be automatically created when the image is deployed to a node.
8. On the Workgroup or Computer Domain page, select Yes, make this computer a member of a domain. Type the name of your cluster domain (for example, HPCCluster), and then click Next. When prompted, type the name and the password for an account that has permission to add computers to the domain (for example, hpccluster\administrator), and then click OK. The computer will copy files, and then reboot. Note: If your network adapter drivers are not included on the Windows Server 2003 Compute Cluster Edition CD, then you will not be able to join a domain at this time. Instead, make the computer a member of a workgroup, complete the rest of setup, install your network adapters, and then join your compute node to the domain.
9. Log on to the node as administrator.
10. Copy the QFE files to your compute node. Run each executable and follow the instructions for installing the quick fix files on your server.
11. Open Regedit. Click Start, click Run, type regedit, and then click OK.
12. Browse to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. Right-click in the right pane. Click New, and then click DWORD value. Type SynAttackProtect (case sensitive), and then press ENTER.
13. Double-click the new key that you just created. Confirm that the value data is zero, and then click OK.
14. Right-click in the right pane. Click New, and then click DWORD value. Type TcpMaxDataRetransmissions (case sensitive), and then press ENTER.
15. Double-click the new key that you just created. In the Value data text box, type 20. Ensure that Base is set to Hexadecimal, and then click OK.
16. Close Regedit.
17. Disable any network interfaces that will not be used by the cluster, or that do not have physical network connectivity.
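
The registry changes described in steps 11 through 16 can also be applied in one operation by importing a registry file, which is convenient when preparing many nodes. The fragment below is a sketch that mirrors the values given in the steps above (SynAttackProtect set to 0, and TcpMaxDataRetransmissions set to 20 hexadecimal, i.e. dword:00000020); save it as, for example, cluster-tcpip.reg and double-click it, or run regedit /s cluster-tcpip.reg from a command prompt.

```
Windows Registry Editor Version 5.00

; Disable SYN attack protection so that bursts of MPI connections
; from cluster nodes are not throttled (value 0, per step 13)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SynAttackProtect"=dword:00000000
; Raise TCP retransmission attempts (0x20, per step 15)
"TcpMaxDataRetransmissions"=dword:00000020
```

The file name cluster-tcpip.reg is only an example; any name with a .reg extension works.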
When you have configured the base operating system, you can then install and configure the ADS Agent and the Compute Cluster Pack on your image.
To install and configure the ADS Agent and the Compute Cluster Pack: