Cray CX1000

From Wikipedia, the free encyclopedia
The Cray CX1000 is a family of high-performance computers manufactured by Cray Inc., consisting of two groups of systems. The first group is intended for scale-up symmetric multiprocessing (SMP) and consists of the CX1000-SM and CX1000-SC nodes. The second group is intended for scale-out cluster computing and consists of the CX1000 Blade Enclosure and the CX1000-HN, CX1000-C and CX1000-G nodes.

The CX1000 line sits between Cray's entry-level CX1 Personal Supercomputer range and its high-end XT-series supercomputers.

CX1000 scale-up symmetric multiprocessing nodes

[Image: Cray CX1000-SC Server Node, with the characteristic "building-block" L-shape clearly visible.]
[Image: Angled front view of the Cray CX1000-SM Server Node.]

The CX1000-SM and CX1000-SC nodes can be used for cluster computing, but they are designed for scale-up symmetric multiprocessing (SMP). When used for cluster computing, the CX1000-SM node is intended to be the master (service) node, although it can instead serve as a compute node; conversely, the CX1000-SC node is intended to be a compute node, but can instead act as the master (service) node. Either or both node types can be deployed in an HPC cluster. When used for SMP, the CX1000-SM and CX1000-SC nodes are connected by a cache-coherency interconnect, called the Drawer Interconnect Switch in Cray literature, which is a built-in subassembly of the nodes rather than a standalone device. The Drawer Interconnect Switch uses Intel QuickPath Interconnect technology.

CX1000 scale-out cluster computing nodes

[Image: CX1000 Blade Enclosure populated with eighteen CX1000-C Compute Nodes. The Local Control Panel is the rectangular object with a blue screen and the Cray logo below it; the two shorter blades just below the Local Control Panel are the Fan Blades.]

The CX1000 scale-out cluster computing group of systems consists of the CX1000 Blade Enclosure, the CX1000-C Compute Node, the CX1000-G GPU Node and the CX1000-HN Management Node. Unlike the CX1000-SM and CX1000-SC nodes, these nodes cannot be used for scale-up SMP, as they were designed without cache-coherency capability. The CX1000-C and CX1000-G nodes both have blade form factors, while the CX1000-HN is a 2U rackmount server. The CX1000-HN is intended to act as the head (service) node in an HPC cluster built from CX1000-C and/or CX1000-G compute nodes.
