The San Diego Supercomputer Center has taken a significant step forward for scientific computing by developing the first High-Performance Computing (HPC) system of its kind to use flash memory. Commonly found in consumer electronics such as digital cameras and cell phones, flash is generally faster than a traditional hard drive because it has no moving parts; a conventional drive stores information on spinning magnetic platters that must be mechanically accessed.
DOE says that its supercomputers are already running simulations with datasets on the terabyte scale, and that soon they will be chewing through datasets in the petabyte range. For instance, a climate model spanning past, present, and future at Lawrence Livermore National Laboratory currently comprises 35 terabytes of data and is used by more than 2,500 researchers worldwide. An updated (and presumably finer-grained) climate model is expected to produce a dataset of roughly 650 terabytes, and the distributed archive of datasets related to this model is expected to reach somewhere between 6 and 10 petabytes. Moving such datasets across the ESnet network will require considerably more bandwidth, and better protocols, than Gigabit Ethernet provides.
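To see why Gigabit Ethernet falls short, a back-of-the-envelope calculation helps. The sketch below assumes ideal, sustained throughput with no protocol overhead (real transfers are slower), and uses decimal terabytes; the function name is illustrative.

```python
# Back-of-the-envelope: how long to move a large climate dataset over
# links of different speeds? Assumes ideal sustained throughput with no
# protocol overhead -- real transfers take longer.

def transfer_days(dataset_terabytes: float, link_gbps: float) -> float:
    """Days needed to move the dataset at the given sustained link rate."""
    bits = dataset_terabytes * 1e12 * 8   # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9)    # link rate in gigabits/second
    return seconds / 86400

for gbps in (1, 10, 100):
    print(f"650 TB over {gbps:>3} Gb/s: {transfer_days(650, gbps):6.1f} days")
# 1 Gb/s takes roughly two months; 10 Gb/s takes about six days
```

Even under these optimistic assumptions, the 650-terabyte dataset ties up a fully dedicated gigabit link for about two months, which is why faster links and better bulk-transfer protocols matter.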
Virtual Machines and Types of Service for TeraGrid Computing

Foundational capabilities we provide in TeraGrid, such as "roaming" access and a "coordinated" software environment, open new possibilities for more specialized services and allow the TeraGrid, as a system, to respond to supply and demand. For example, a resource provider might elect to increase the "price" of a queue in order to improve turnaround time by reducing demand, or decrease the price to increase demand (and thus utilization).
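The supply-and-demand idea can be sketched as a simple feedback rule. Everything here is a hypothetical illustration, not an actual TeraGrid policy: the function name, the target backlog of 20 jobs, and the 10% step size are all assumptions.

```python
# Hypothetical sketch of queue pricing driven by demand: a resource
# provider nudges a queue's "price" (allocation units per CPU-hour)
# up when the backlog is long and down when it is short. The target
# backlog and 10% step are illustrative assumptions.

def adjust_price(price: float, queued_jobs: int, target: int = 20) -> float:
    """Raise the price 10% when demand exceeds the target backlog,
    lower it 10% when demand falls short, leave it unchanged otherwise."""
    if queued_jobs > target:
        return price * 1.10   # discourage submissions, improve turnaround
    if queued_jobs < target:
        return price * 0.90   # attract more jobs, raise utilization
    return price

price = 1.0
for backlog in (35, 30, 12, 5):          # observed queue depths over time
    price = adjust_price(price, backlog)
print(f"final price: {price:.4f}")       # prices drift as demand shifts
```

A real policy would of course damp oscillations and bound the price, but the sketch captures the mechanism described above: price becomes the lever that trades turnaround time against utilization.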
"High-performance computing is transforming physics research," said Ralph Roskies, co-scientific director of the Pittsburgh Supercomputing Center (PSC), during a presentation on Friday, March 20, at the American Physical Society Meeting, held in Pittsburgh, March 16-20. "The Impact of NSF's TeraGrid on Physics Research" was the topic of his talk, which led off a panel of physicists who have made major strides in their work through the TeraGrid, the National Science Foundation's cyberinfrastructure program. "These world-class facilities," said Roskies, "on a much larger scale than ever before, present major new opportunities for physics researchers to carry out computations that would have been infeasible just a few years ago."
TeraGrid ‘09 is just barely in our rear view, but planning for TeraGrid ‘10 has already begun! The 2010 TeraGrid Conference will be held Aug. 2-5 in Pittsburgh. Co-chairs of the conference are Richard Moore (San Diego Supercomputer Center) and Daniel S. Katz (Argonne National Laboratory/University of Chicago). You can help us make the next conference even more successful by giving us your feedback on TeraGrid ‘09. Just fill out this short online evaluation form to let us know what we can improve at TeraGrid ‘10. You can also follow TeraGrid ‘10 on Facebook. Stay tuned for more details!
<blockquote>Paul Avery, a recognized leader in advanced grid and networking for science, delivered the first keynote address at the recent TeraGrid '09 conference in Arlington, Va. A professor of physics at the University of Florida, Avery is co-principal investigator and founding member of the Open Science Grid (OSG). Avery talked about the history of OSG, some of the projects that leverage its resources, and OSG's relationship with TeraGrid.</blockquote>
<blockquote>Before he even took the podium, Ed Seidel was one of the buzz makers at the TeraGrid '09 conference. The day before his keynote, it was announced that he was stepping in as acting assistant director of the National Science Foundation's math and physical sciences directorate. For his talk at the conference, however, Seidel focused on the issues and efforts within his home at NSF, the Office of Cyberinfrastructure.</blockquote>
There was a new energy at this year's TeraGrid '09 conference thanks to an outstanding turnout for the student program. Thanks to support from the National Science Foundation (NSF), more than 100 high school, undergraduate and graduate students were able to participate in the conference.
OGF is an open community committed to driving the rapid evolution and adoption of applied distributed computing, which is critical to developing new, innovative, and scalable applications and infrastructures that are essential to p
One of the ways TeraGrid benefits researchers is by providing centralized accounting services, making it easier for them to use different sets of computational resources. This requires synchronized account and allocation data among the TeraGrid resource providers.
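The core idea can be illustrated with a toy ledger. This is not the actual TeraGrid accounting implementation; the class, project ID, and service-unit (SU) figures below are made up to show why every site must agree on a shared balance.

```python
# Illustrative sketch (not the real TeraGrid accounting system): a user's
# allocation is debited at whichever site runs the job, so all sites must
# charge against one synchronized balance rather than local copies.

from dataclasses import dataclass, field

@dataclass
class CentralLedger:
    """Single authoritative record of allocation balances, in service units."""
    balances: dict = field(default_factory=dict)

    def grant(self, project: str, service_units: float) -> None:
        """Credit an allocation award to a project."""
        self.balances[project] = self.balances.get(project, 0.0) + service_units

    def charge(self, project: str, site: str, service_units: float) -> float:
        """Debit a job run at any site against the shared balance."""
        remaining = self.balances.get(project, 0.0) - service_units
        if remaining < 0:
            raise ValueError(f"{project} over allocation (charge from {site})")
        self.balances[project] = remaining
        return remaining

ledger = CentralLedger()
ledger.grant("TG-ABC123", 1000.0)           # hypothetical project allocation
ledger.charge("TG-ABC123", "SDSC", 400.0)   # job run at one site...
left = ledger.charge("TG-ABC123", "NCSA", 250.0)  # ...then at another
print(f"remaining: {left} SUs")             # prints "remaining: 350.0 SUs"
```

Because both charges hit the same ledger, the project cannot accidentally overspend by running at two sites that each believe the full balance is available.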
The Data Capacitor is a high-speed, high-bandwidth storage system for research computing that serves all IU campuses and NSF TeraGrid users. At peak performance, the Data Capacitor delivers an aggregate transfer rate of 14.5 gigabytes per second. The Dat
Science Gateways signal a paradigm shift from traditional high performance computing use. Gateways enable entire communities of users associated with a common scientific goal to use national resources through a common interface. Science gateways are enabl