The dCache July news
Dear friends of dCache (and others),
before you all head into your well-deserved holidays, I would like to keep you
updated on recent activities at dCache.org. The topics have been
collected while talking to people during the July GDB and the subsequent
post-STEP'09 meeting. Please let me know if I missed something.
So, enjoy your last summer before LHC data taking, and take care,
Patrick and the dCache team
New dCache Release Policy
dCache.org is introducing an important change in its release policy. Like
many other software projects, we are moving away from feature-driven
releases towards time-based releases. This
essentially means that we will tell you more precisely when the
next release will be available, but no longer exactly which features
will be made available within that version. Pros and cons of this
approach are nicely covered by an oral presentation
and by the PhD thesis of
former Debian project lead Martin Michlmayr. The main advantages for the
dCache project are that
- we can ease the synchronization process between dCache and
our distributors, that
- sites can plan well ahead for system upgrades and that
- releases are not indefinitely postponed by waiting for
promised features, which may be delayed for whatever reason.
On that topic, dCache is preparing for a "Golden Release"
(1.9.5). As you may already guess, this Golden Release is going
to be supported throughout the entire first LHC run period and, as
usual, no new features will be added to this release to ensure
stability. However, we will of course continue to publish new releases
with new features. It is up to each site to decide whether to stay
with the Golden Release or to follow the feature branch. Maybe
our friends from WLCG have an opinion here as well.
Release dates and features
The 1.9.3 release has been
out since July 1. As usual, NDGF already deployed this version ahead of
time. No problems have been reported. Please make yourself familiar
with the release notes. You will find that they provide a very
useful matrix on head node and pool node release compatibility.
1.9.3 is the first dCache version allowing file system Access Control
Lists (ACLs). On that topic, the release notes provide more
insights as well. Presentations
have been given on gPlazma and ACLs during the dCache workshop in Aachen.
The release date for 1.9.4 has been scheduled for the week of
July 20th, at which point we will also announce the date on which the
Golden Release will be published. 1.9.4 is a more technical
release, containing changes not visible to the user or system
administrator. It might contain the Tape System Protection
feature (see below), but this is not entirely clear yet (see the chapter
on the dCache release policy). BTW: the 1.9.0 release branch is no
longer supported.
Tape System Protection
Initially, dCache has been designed to be a disk cache in front of a
Tape Storage System, moving files onto the tape backend and restoring
them when needed. Those operations are handled transparently to the
user. The downside of this approach is that a simple read of a file
not being on disk automatically triggers a tape operation. As tape
operations are expensive and may interfere with storing RAW data
coming from the Tier 0, this feature had to be reviewed. As a
result, it has been agreed with the experiments that no non-production
user should be allowed to trigger such a tape operation. dCache is now
implementing a first version of such a protective mechanism. A dCache
system administrator may specify a set of DNs/FQANs which are allowed
to trigger tape read accesses for files not being available on disk.
Users requesting tape-only files who are not on that white list
will receive a permission error, and no tape operation is launched.
Further details will be provided as soon as available. The feature
might be in 1.9.4 but will certainly be part of the Golden Release.
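To give a rough idea of how such a white list might look, here is a
hypothetical configuration fragment. The file name, location, and exact
syntax are assumptions for illustration only; the final implementation
may well differ, so please wait for the official documentation.

```
# Hypothetical stage-protection white list -- syntax is illustrative only.
# Each line pairs a DN pattern with an optional FQAN pattern; matching
# users may trigger tape restores, everybody else receives a permission
# error for files that are only on tape.
"/DC=org/DC=example/OU=robots/CN=prod-stager"
".*" "/atlas/Role=production"
".*" "/cms/Role=production"
```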
The Chimera file system and the PNFS-to-Chimera migration
Chimera is the next-generation dCache file system engine, replacing
PNFS. It has been built to overcome PNFS scalability issues, which some
large sites have already encountered in the past and possibly will in the
future. Besides performance and scalability, Chimera addresses
operational and maintenance issues. Because, unlike PNFS, Chimera
stores its file system meta information in a regular SQL database,
system administrators may run standard SQL queries to get detailed
information on the file system content or status (e.g. quota, tape
backend etc.). Chimera is a must in case you want to use file
system ACL inheritance and in case you would like to test the NFS 4.1
functionality in dCache.
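As a sketch of what such a standard SQL query could look like, consider
the following. The table and column names (t_inodes, isize) are
assumptions about the Chimera schema and should be checked against your
installation before use.

```sql
-- Hypothetical query against the Chimera database: count all files and
-- sum their sizes, e.g. as a rough "space used" report.
-- Table and column names are assumptions, not verified.
SELECT count(*) AS files, sum(isize) AS bytes_used
FROM t_inodes;
```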
At the beginning of 2009, the first Tier II migrated to Chimera, followed
by the NDGF Tier I (end of March) and more Tier IIs subsequently. No
Chimera-related problem has been reported since. Please find a
presentation of the NDGF Tier I Chimera upgrade by Gerd and Mattias, and
a tutorial given during the Aachen dCache workshop, in our dCache
documentation area.
Instructions on how to migrate to Chimera are provided in our wiki.
Larger sites (e.g. Tier Is) may consider talking to dCache.org in case
they would like to upgrade to Chimera prior to the LHC run start. We
could make sure that experts are available.
Copy Module versus Migration Module
Occasionally, large quantities of files need to be moved (copied)
between pools, either to drain pool hardware or to manually optimize
dCache accesses. In the past, this has been done using the Copy
Module. This component has been replaced by the Migration Module,
which is a subcomponent of a pool cell and is accessible via the
command line interface. Typing 'help migration' in the pool CLI
will provide more details. A chapter in The Book is in
preparation and will be available very soon.
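As an illustration, a pool-draining session might look roughly like the
following. The command names and options are assumptions based on the
pool CLI; 'help migration' on your installed version is the authoritative
reference.

```
# Hypothetical admin-shell session -- option syntax is illustrative only.
cd pool1_01                            # enter the pool cell to be drained
migration move -target=pgroup default  # move all replicas to pools in the
                                       # 'default' pool group
migration info 1                       # follow the progress of migration job 1
```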
ROOT and dCap
The ROOT framework provides a handler for the dCap protocol (the
TDCacheFile class). As dCap and ROOT follow different
development/deployment cycles, it unfortunately happened that some ROOT
versions did not work properly with certain dCap versions. In particular,
the famous vector read feature, although implemented by both ROOT and
dCap, couldn't be used at all. With the most recent releases (ROOT 5.22,
or better 5.24, and dCap 1.9.x) this problem is solved.
Using vector read, ROOT is able to read a set of non-contiguous
portions of a ROOT data file within a single transaction, improving
performance significantly. Vector read doesn't rely on
tuning of the dCap read-ahead buffer, as it makes use of the
internal ROOT file structure. If vector read is NOT used,
it is advisable to optimize the dCap read-ahead buffer
according to your file access profile. The default is set to 8K in
TDCacheFile. It can be changed by setting the DCACHE_RA_BUFFER
environment variable. If you are using a customized ROOT version, which
might have read ahead disabled, you may set
'DCACHE_RAHEAD=true' to override this behavior.
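For example, a job wrapper script could tune these settings before
starting ROOT. The 1 MB buffer value below is purely an illustration,
not a recommendation; pick a value matching your access profile.

```shell
# Tune the dCap read-ahead buffer (value in bytes; the default is 8K).
# 1048576 (1 MB) is only an example value.
export DCACHE_RA_BUFFER=1048576

# Re-enable read ahead in case your ROOT build has it disabled.
export DCACHE_RAHEAD=true

echo "RA buffer: $DCACHE_RA_BUFFER, read ahead: $DCACHE_RAHEAD"
```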
NFS 4.1
NFS 4.1 is the most recent member of the Network File System family.
Unlike its predecessors, it supports managing highly distributed
data. It recently became a standard and has been adopted by various
prominent OS and storage solution vendors. With 1.9.3, dCache provides
an interface to the NFS 4.1 protocol, not only for file name space
operations but for data I/O operations as well. This allows dCache to
be mounted by NFS4.1 enabled clients, on which users can browse the
file system name space and directly access data files as from any other
mounted file system. In case you feel a strong desire to help us
test NFS 4.1, please find more information in our wiki.
Patrick Fuhrmann, DESY, Notkestrasse 85, 22607 Hamburg,