Scope of the project
The goal of this project is to provide a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods. Depending on the Persistency Model, dCache provides methods for exchanging data with backend (tertiary) storage systems, as well as space management, pool attraction, dataset replication, hot-spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, the cache simulates unlimited direct-access storage space. Data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. Besides HEP-specific protocols, data in dCache can be accessed via NFSv4.1 (pNFS) as well as through WebDAV.
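Since WebDAV is an HTTP extension, a directory listing can be requested with nothing more than a standard HTTP client. The sketch below builds a WebDAV PROPFIND request using only the Python standard library; the host, port, and path are placeholders, as a real deployment exposes its own WebDAV door (the port and namespace path are site-specific).

```python
# Minimal sketch: list a dCache directory over WebDAV with a PROPFIND
# request. Endpoint details below are hypothetical placeholders.
from urllib import request

def build_propfind(url, depth="1"):
    """Build a WebDAV PROPFIND request asking for all properties."""
    body = (b'<?xml version="1.0" encoding="utf-8"?>'
            b'<propfind xmlns="DAV:"><allprop/></propfind>')
    req = request.Request(url, data=body, method="PROPFIND")
    req.add_header("Depth", depth)  # "1" = the directory's immediate children
    req.add_header("Content-Type", "application/xml")
    return req

# Hypothetical endpoint -- substitute your site's WebDAV door:
req = build_propfind("https://dcache.example.org:2880/data/")
# urllib.request.urlopen(req) would then return a 207 Multi-Status
# XML document describing each entry in the directory.
```

Because the request is plain HTTP, the same listing works with any WebDAV-capable client (curl, davfs2, browser plugins) without dCache-specific tooling.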
dCache is looking for you!
We are looking for a web developer who can build a web-based user interface.
Please find the details here.
9th International dCache Workshop Amsterdam
dCache.org and the dCache German support group jointly invite you to this year's dCache workshop, taking place at SURF Sara, Amsterdam. We thank SURF Sara for its kind support.
The agenda will follow soon. We are also still trying to make the workshop as affordable as possible and will soon provide you with the registration details. As always, we welcome your recommendations and talks. Please write to email@example.com if you would like to propose a topic or talk. We are excited to see you all in Amsterdam. Please bring your wetsuit ;)
Please find the details on the Indico page: 9th International dCache Workshop
Live map of dCache installations around the world
Info - Contact
Documentation: Publications, Presentations, and more
dCache, the Book
More on mailing lists
More: The dCache wiki
dCache is a joint venture between the Deutsches Elektronen-Synchrotron (DESY), the Fermi National Accelerator Laboratory (FNAL), and the Nordic Data Grid Facility (NDGF). Since the end of 2001, our full production release has been in use at an increasing number of sites worldwide, delivering terabytes of data from hundreds of distributed server nodes.