OSCAR allows users, regardless of their experience level with a *nix environment, to install a Beowulf-type high-performance computing cluster. It also contains everything needed to administer and program this type of HPC cluster. OSCAR's flexible package management system has a rich set of pre-packaged applications and utilities, which means you can get up and running without laboriously installing and configuring complex cluster administration and communication packages. It also lets administrators create customized packages for any kind of distributed application or utility, and distribute those packages from an online package repository, either on or off site.
Rocks adds a vastly expanded solutions layer (Rocks HPC, Rocks Cloud, Rocks Rolls) and enterprise-class support, which transforms the leading open source cluster distribution into a production-ready cluster operating environment suitable for data centers of all shapes and sizes. Clustercorp also partners with the industry's leading workload management providers to offer Rocks MOAB, Rocks LSF, and Rocks SGE. Purchase turnkey Rocks clusters from a long list of reliable hardware partners including HP, Dell, Cray, Silicon Mechanics, and more.
ScaleMP, a maker of virtualization and aggregation software that allows a cluster of x64 servers to look like a big, bad, symmetric multiprocessing (SMP) shared-memory system to operating systems and selected classes of applications, is going downstream to target SMBs and upstream to chase cloud infrastructure providers.
Ian Miller joined Cray in February 2008 and currently heads up Cray's Productivity Solutions group, home of the recently introduced Cray CX1 deskside supercomputer, an Intel Cluster Ready product. Mr. Miller also leads Cray's corporate marketing organization. Prior to joining Cray, he served as Vice President of PolyServe Software at HP, and as Vice President of Worldwide Sales for PolyServe prior to its acquisition by HP. Before joining PolyServe, Mr. Miller was Vice President of Worldwide Sales for IBM's high-end xSeries servers, where he worked for both the xSeries and pSeries organizations with a particular focus on marketing and sales for high-end Intel-based systems. Prior to IBM, he was Vice President of Global Marketing for Sequent Computer Systems, and Vice President, Asia Pacific. Miller has also worked for Software AG as Senior Vice President, Asia Pacific, and for Unisys in many capacities, ending as General Manager for Asia South. Mr. Miller is a graduate of London University.
InfiniBand is a switched fabric communications link primarily used in high-performance computing. Its features include quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices. InfiniBand forms a superset of the Virtual Interface Architecture.
Building and Promoting a Linux-based Operating System to Support Virtual Organizations for Next Generation Grids (2006-2010). The emergence of Grids enables the sharing of a wide range of resources to solve large-scale computational and data-intensive problems in science, engineering and commerce. While much has been done to build Grid middleware on top of existing operating systems, little has been done to extend the underlying operating systems to enable and facilitate Grid computing, for example by embedding important functionality directly into the operating system kernel.
The Ohio Supercomputer Center provides supercomputing, research and educational resources to a diverse state and national community, including education, academic research, industry and state government. At the Ohio Supercomputer Center, our duty is to empower our clients, partner strategically to develop new research and business opportunities, and lead Ohio's knowledge economy.
"For a while now, IBM has had multiple and competing tools for managing AIX and Linux clusters for its supercomputer customers and yet another set of tools that were used for other HPC setups with a slightly more commercial bent to them. But Big Blue has now cleaned house, killing off its closed-source Cluster Systems Management (CSM) tool and tapping its own open source Extreme Cluster Administration Toolkit (known as xCAT) as its replacement."
PelicanHPC is a distribution of GNU/Linux that runs as a "live CD" (or it can be put on a USB device, or it can be used as a virtualized OS). If the ISO image file is burnt to a CD, the resulting CD can be used to boot a computer. The computer on which PelicanHPC is booted is referred to as the "frontend node". It is the computer with which the user interacts. Once PelicanHPC is running, a script - "pelican_setup" - may be run. This script configures the frontend node as a netboot server. After this has been done, other computers can boot copies of PelicanHPC over the network. These other computers are referred to as "compute nodes". PelicanHPC configures the cluster made up of the frontend node and the compute nodes so that MPI-based parallel computing may be done.
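As a quick smoke test once the compute nodes have booted, any MPI "hello world" will do. Below is a minimal sketch using Python's mpi4py bindings; mpi4py itself and the exact launch command are assumptions here rather than something PelicanHPC documents, so adapt it to whatever MPI tooling the image actually ships.

    # hello_mpi.py -- minimal MPI sanity check (assumes mpi4py is installed on the image)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD            # communicator spanning every launched process
    rank = comm.Get_rank()           # this process's id within the communicator
    size = comm.Get_size()           # total number of MPI processes
    node = MPI.Get_processor_name()  # hostname of the node running this rank

    print("Hello from rank %d of %d on %s" % (rank, size, node))

Launched with something like "mpirun -np 8 -hostfile <nodes-file> python hello_mpi.py" (PelicanHPC generates its own list of node names; the path varies), one line per rank should come back, spread across the frontend and compute nodes.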
Philip, a new supercomputer named after chemistry professor Philip W. West, one of LSU's first Boyd Professors (a Boyd Professorship is the highest and most prestigious academic rank LSU can confer on a professor), is a 37-compute-node cluster with a peak performance of 3.5 TFlops, running the Red Hat Enterprise Linux 5 operating system. Each node contains two of Intel's latest quad-core Nehalem Xeon 64-bit processors operating at a core frequency of 2.93 GHz. Philip was delivered to LSU in May 2009 and will be open for general use by LSU users.
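The quoted peak figure is easy to sanity-check from the node count and clock rate. The sketch below assumes 4 double-precision floating-point operations per core per cycle, the usual figure for Nehalem's SSE units, which is not a number taken from the announcement.

    # Rough theoretical-peak check for Philip.
    nodes            = 37
    sockets_per_node = 2
    cores_per_socket = 4
    clock_ghz        = 2.93
    flops_per_cycle  = 4      # assumed DP flops/core/cycle for Nehalem (SSE)

    peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
    print("Peak: %.2f TFlops" % (peak_gflops / 1000.0))   # ~3.47 TFlops, matching the quoted 3.5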
I was doing some work and thought, "Wouldn't it be nice to have my own cluster?" I'm guessing not many people have that kind of revelation, and probably fewer decide they should go ahead and act on it. I wanted a cheap, small, easy-to-pack, light, quiet, low-power cluster that I could sit on my desk and not even have to think about.
Last week I moderated a webinar entitled Optimizing Performance for HPC: Part 2 - Interconnect with InfiniBand. It was a great presentation with a lot of practical information and good questions. If you missed it, it will be available for a few months, so you still have a chance to check it out. As part of the webinar, Vallard Benincosa of IBM mentioned that the speed of light was becoming an issue in network design. In engineering terms, that is referred to as a hard limit.
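To see why, consider propagation delay alone. A signal in copper or fiber travels at roughly two thirds of the speed of light, so cable length by itself becomes a visible fraction of a modern interconnect's latency budget. The sketch below uses an assumed 0.66c velocity factor and an assumed ~100 ns switch hop purely for comparison; neither figure comes from the webinar.

    # Propagation delay vs. cable length (illustrative, assumed values only).
    C_M_PER_S       = 3.0e8   # speed of light in vacuum, m/s
    VELOCITY_FACTOR = 0.66    # assumed signal speed in copper/fiber as a fraction of c

    for length_m in (1, 10, 100):
        delay_ns = length_m / (C_M_PER_S * VELOCITY_FACTOR) * 1e9
        print("%3d m cable: %6.1f ns one-way" % (length_m, delay_ns))
    # ~5 ns at 1 m, ~50 ns at 10 m, ~505 ns at 100 m -- at the long end the wire alone
    # costs several assumed ~100 ns switch hops, and no amount of engineering removes it.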
Traditionally, large scale-up servers used cache-coherent buses for inter-processor communications. These proprietary buses and servers are very costly and power-hungry. Today's powerful x86 servers replace proprietary scale-up architectures with low-cost machines connected through high-speed, low-latency cluster interconnects. This article takes an in-depth look at the cost and power benefits of this clustered approach compared to scale-up architectures, and explains how Ethernet can be tunneled through a PCI Express (PCIe) fabric to provide a very high-performance, low-cost cluster interconnect suitable for storage I/O.
Linux Magazine HPC Editor Douglas Eadline recently had a chance to discuss the current state of HPC clusters with Beowulf pioneer Don Becker, Founder and Chief Technical Officer of Scyld Software (now part of Penguin Computing). For those who may have come to the HPC party late, Don was a co-founder of the original Beowulf project, which is the cornerstone of commodity-based high-performance cluster computing. Don's work in parallel and distributed computing began in 1983 at MIT's Real Time Systems group. He is known throughout the international community of operating system developers for his contributions to networking software and as the driving force behind beowulf.org.
In late 2004, Google surprised the world of computing with the release of the paper "MapReduce: Simplified Data Processing on Large Clusters." That paper ushered in a new model for data processing across clusters of machines that had the benefit of being simple to understand and incredibly flexible. Once you adopt a MapReduce way of thinking, dozens of previously difficult or long-running tasks suddenly start to seem approachable, if you have sufficient hardware.
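The model really is that small: a job is a map function that emits key/value pairs and a reduce function that folds together the values collected for each key. The toy, single-machine word count below illustrates the programming model only; it is not Google's implementation or any real framework's API.

    # Toy MapReduce-style word count -- the programming model in miniature.
    from collections import defaultdict

    def map_phase(documents):
        """Emit a (word, 1) pair for every word in every document."""
        for doc in documents:
            for word in doc.split():
                yield word.lower(), 1

    def reduce_phase(pairs):
        """Group the pairs by key and sum the values for each key."""
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(reduce_phase(map_phase(docs)))   # {'the': 3, 'quick': 1, 'brown': 1, ...}

In a real MapReduce run those two functions are essentially all the programmer writes; the framework handles splitting the input, shuffling intermediate pairs by key, and running the reducers in parallel across the cluster.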
openMosix is a Linux kernel extension for single-system image clustering. This kernel extension turns a network of ordinary computers into a supercomputer for Linux applications.