Tuesday, August 28, 2007
A computer cluster is a group of tightly coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Cluster categorizations
High-availability (HA) clusters
High-availability clusters (also known as failover clusters) are implemented primarily to improve the availability of the services the cluster provides. They operate by having redundant nodes, which are used to provide service when system components fail. The most common size for an HA cluster is two nodes, the minimum required for redundancy. HA cluster implementations attempt to manage the redundancy inherent in a cluster to eliminate single points of failure. There are many commercial HA cluster implementations for many operating systems. The Linux-HA project is one commonly used free software HA package for Linux.
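To illustrate the failover idea, here is a minimal sketch in Python of a standby node watching a primary over a heartbeat connection and claiming a floating service IP when the primary stops responding. The peer address, port, thresholds, interface name, and floating IP are all hypothetical, and real HA packages such as Linux-HA also handle fencing, split-brain protection, and much more; this shows only the core loop.

    import socket
    import subprocess
    import time

    PEER = ("10.0.0.1", 9999)  # hypothetical heartbeat address of the primary
    INTERVAL = 2               # seconds between heartbeat checks
    MAX_MISSES = 3             # consecutive misses before declaring failure

    def peer_alive() -> bool:
        """Try to open a TCP connection to the primary's heartbeat port."""
        try:
            with socket.create_connection(PEER, timeout=1):
                return True
        except OSError:
            return False

    def take_over() -> None:
        """Claim the service's floating IP (hypothetical address and
        interface) so clients are redirected to this standby node."""
        subprocess.run(["ip", "addr", "add", "10.0.0.100/24", "dev", "eth0"],
                       check=True)

    misses = 0
    while True:
        misses = 0 if peer_alive() else misses + 1
        if misses >= MAX_MISSES:
            take_over()
            break
        time.sleep(INTERVAL)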
Load-balancing clusters
Load-balancing clusters operate by having all workload come through one or more load-balancing front ends, which distribute it across a collection of back-end servers. Although they are implemented primarily for improved performance, they commonly include high-availability features as well. Such a cluster of computers is sometimes referred to as a server farm. Many commercial and open-source workload managers and load balancers are available, including Platform LSF HPC, Sun Grid Engine, Moab Cluster Suite, and Maui Cluster Scheduler. The Linux Virtual Server project provides one commonly used free software package for Linux.
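The simplest policy a front end can apply is round-robin scheduling, handing each new request to the next back end in a fixed rotation. The sketch below shows just that policy in Python; the back-end addresses are hypothetical, and a production balancer such as Linux Virtual Server does the forwarding in the kernel with health checks and weighting on top.

    import itertools

    # Hypothetical pool of back-end servers behind the front end.
    BACKENDS = ["10.0.1.1", "10.0.1.2", "10.0.1.3"]

    # cycle() yields the back ends in round-robin order forever.
    _pool = itertools.cycle(BACKENDS)

    def pick_backend() -> str:
        """Return the back end that should receive the next request."""
        return next(_pool)

    # Ten incoming requests are spread evenly across the three servers.
    for request_id in range(10):
        print(f"request {request_id} -> {pick_backend()}")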
High-performance computing (HPC) clusters
High-performance computing (HPC) clusters are implemented primarily to increase performance by splitting a computational task across many different nodes in the cluster, and are most commonly used in scientific computing. Such clusters commonly run custom programs designed to exploit the parallelism available on HPC clusters. HPC clusters are optimized for workloads in which jobs or processes running on the separate cluster nodes must communicate actively during the computation, for example when intermediate results from one node's calculations affect future calculations on other nodes.
One of the most popular HPC implementations is a cluster with nodes running Linux as the OS and free software to implement the parallelism. This configuration is often referred to as a Beowulf cluster.
Microsoft offers Windows Compute Cluster Server as a high-performance computing platform to compete with Linux.
Many programs running on HPC clusters use libraries such as MPI, which are specially designed for writing scientific applications for HPC computers.
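As a concrete example of that communication pattern, here is a minimal sketch using mpi4py, the Python bindings for MPI. Each process sums its own slice of a range, then allreduce combines the partial sums across every node. The problem setup is invented for illustration and assumes the range divides evenly across the ranks.

    from mpi4py import MPI  # Python bindings for the MPI standard

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's id within the job
    size = comm.Get_size()  # total number of cooperating processes

    # Each process sums its own slice of 0..999.
    n = 1000
    chunk = n // size
    local = sum(range(rank * chunk, (rank + 1) * chunk))

    # allreduce combines every process's partial result, the kind of
    # active inter-node communication described above.
    total = comm.allreduce(local, op=MPI.SUM)

    if rank == 0:
        print(f"sum of 0..{n - 1} = {total}")

Launched with, say, mpiexec -n 4 python sum.py, the four processes each compute a quarter of the sum and exchange results over the cluster's interconnect.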