Abstract
The CMS collaboration has a long-term need to perform large-scale simulation
efforts, in which physics events are generated and their manifestations in the
CMS detector are simulated. The simulated data are then reconstructed and
analyzed by physicists to support detector design and the design of the
real-time event-filtering algorithms that will be used when CMS is running.
Until 2002, the distribution of tasks among the regional centers was handled
mainly through manual operations, although some tools for data transfer and
centralized book-keeping had been developed. In 2002 the first prototypes of
CMS distributed production based on grid middleware were deployed,
demonstrating that grid tools can be used for real data-production tasks. In
this work we present the plans of the CMS experiment for building a production
and analysis environment based on grid technologies in time for the next big
Data Challenge, foreseen for the beginning of 2004.