
SpaceManager configuration


SRM SpaceManager and Link Groups

SpaceManager makes reservations against the free space available in link groups. The total free space of a link group is the sum of the available space of all its links. The available space of each link is the sum of the free space of all pools assigned to that link. Therefore, for space reservation to work correctly, it is essential that each pool belongs to one and only one link, and that each link belongs to only one link group. Link groups are assigned several parameters that determine what kind of space the link group corresponds to and who can make reservations against this space.
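Whether this precondition holds on an existing installation can be checked from the admin interface. A minimal sketch using PoolManager's psu inspection commands (command names as in recent dCache releases; verify against your version):

(local) admin > cd PoolManager
(PoolManager) admin > psu ls pool
(PoolManager) admin > psu ls pgroup
(PoolManager) admin > psu ls link
(PoolManager) admin > psu ls linkGroup

Comparing the output of these commands shows which pools are assigned to which links and which links are collected into which link groups.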


Making a Space Reservation

Now that the SRM SpaceManager is activated you can make a space reservation. As mentioned above you need link groups to make a space reservation.
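As a reminder, activation normally amounts to enabling space reservation and running a SpaceManager cell. A hedged sketch, assuming the service is hosted in a domain named srmDomain (the domain name and service layout depend on your installation):

# in /etc/dcache/dcache.conf
dcache.enable.space-reservation=true

# in the layout file (domain name is illustrative)
[srmDomain]
[srmDomain/srm]
[srmDomain/spacemanager]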


Prerequisites for Space Reservations

Log in to the admin interface and cd to the cell SrmSpaceManager.

[user] $ ssh -p 22224 -l admin admin.example.org
(local) admin > cd SrmSpaceManager

Type ls link groups to get information about link groups.

(SrmSpaceManager) admin > ls link groups

The lack of output tells you that there are no link groups. As there are no link groups, no space can be reserved.


The Link Groups

For a general introduction about link groups see the section called “Link Groups”.

Example:

In this example we will create a link group for the VO desy. In order to do so we need to have a pool, a pool group and a link. Moreover, we define unit groups named any-store, world-net and any-protocol. (See the section called “Types of Units”.)

Define a pool in your layout file, create the pool directory and restart the poolDomain.

[poolDomain]
[poolDomain/pool]
path=/srv/dcache/spacemanager-pool
name=spacemanager-pool
[root] # mkdir -p /srv/dcache/spacemanager-pool
[root] # /usr/bin/dcache restart

In the admin interface, cd to the PoolManager and create a pool group, a link and a link group.

(SrmSpaceManager) admin > ..
(local) admin > cd PoolManager
(PoolManager) admin > psu create pgroup spacemanager_poolGroup
(PoolManager) admin > psu addto pgroup spacemanager_poolGroup spacemanager-pool
(PoolManager) admin > psu removefrom pgroup default spacemanager-pool
(PoolManager) admin > psu create link spacemanager_WriteLink any-store world-net any-protocol
(PoolManager) admin > psu set link spacemanager_WriteLink -readpref=10 -writepref=10 -cachepref=0 -p2ppref=-1
(PoolManager) admin > psu add link spacemanager_WriteLink  spacemanager_poolGroup
(PoolManager) admin > psu create linkGroup spacemanager_WriteLinkGroup
(PoolManager) admin > psu set linkGroup custodialAllowed spacemanager_WriteLinkGroup true
(PoolManager) admin > psu set linkGroup replicaAllowed spacemanager_WriteLinkGroup true
(PoolManager) admin > psu set linkGroup nearlineAllowed spacemanager_WriteLinkGroup true
(PoolManager) admin > psu set linkGroup onlineAllowed spacemanager_WriteLinkGroup true
(PoolManager) admin > psu addto linkGroup spacemanager_WriteLinkGroup spacemanager_WriteLink
(PoolManager) admin > save
(PoolManager) admin > ..
	

Check whether the link group is available. Note that this can take several minutes to propagate to spacemanager.

(local) admin > cd SrmSpaceManager
(SrmSpaceManager) admin > ls link groups
FLAGS CNT RESVD        AVAIL         FREE             UPDATED NAME
--rc:no 0     0 + 7278624768 = 7278624768 2011-11-28 12:12:51 spacemanager_WriteLinkGroup
    

The link group spacemanager_WriteLinkGroup was created. Here the flags indicate first the status (- indicates that neither the expired [e] nor the released flags [r] are set), followed by the type of reservations allowed in the link group (here replica [r], custodial [c], nearline [n] and online [o] files; output [o] files are not allowed - see help ls link groups for details on the format). No space reservations have been created, as indicated by the count field. Since no space reservation has been created, no space in the link group is reserved.


The SpaceManagerLinkGroupAuthorizationFile

Now you need to edit the LinkGroupAuthorization.conf file. This file contains a list of the link groups and all the VOs and the VO Roles that are permitted to make reservations in a given link group.

Specify the location of the LinkGroupAuthorization.conf file in the /etc/dcache/dcache.conf file.

spacemanager.authz.link-group-file-name=/path/to/LinkGroupAuthorization.conf

The file LinkGroupAuthorization.conf has the following syntax:

LinkGroup <NameOfLinkGroup> followed by the list of the Fully Qualified Attribute Names (FQANs). Each FQAN is on a separate line, followed by an empty line, which is used as a record separator, or by the end of the file.

FQAN is usually a string of the form <VO>/Role=<VORole>. Both <VO> and <VORole> can be set to *, in this case all VOs or VO Roles will be allowed to make reservations in this link group. Any line that starts with # is a comment and may appear anywhere.

Rather than an FQAN, a mapped user name can be used. This allows clients or protocols that do not provide VOMS attributes to make use of space reservations.

#SpaceManagerLinkGroupAuthorizationFile

LinkGroup <NameOfLinkGroup>
/<VO>/Role=<VORole>
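Since a mapped user name can be used in place of an FQAN (see above), a single link group entry may mix both forms. A hypothetical sketch (the link group and the user name alice are examples only):

#SpaceManagerLinkGroupAuthorizationFile

LinkGroup example-LinkGroup
/desy/Role=production
alice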

Note

You do not need to restart the srm or dCache after changing the LinkGroupAuthorization.conf file. The changes will be applied automatically after a few minutes.

Use update link groups to be sure that the LinkGroupAuthorization.conf file and the link groups have been updated.

(SrmSpaceManager) admin > update link groups
Update started.

Example:

In the example above you created the link group spacemanager_WriteLinkGroup. Now you want to allow members of the VO desy with the role production to make a space reservation in this link group.

#SpaceManagerLinkGroupAuthorizationFile
# this is comment and is ignored

LinkGroup spacemanager_WriteLinkGroup
#
/desy/Role=production

Example:

In this more general example for a SpaceManagerLinkGroupAuthorizationFile members of the VO desy with role test are authorized to make a space reservation in a link group called desy-test-LinkGroup. Moreover, all members of the VO desy are authorized to make a reservation in the link group called desy-anyone-LinkGroup and anyone is authorized to make a space reservation in the link group called default-LinkGroup.

#SpaceManagerLinkGroupAuthorizationFile
# this is a comment and is ignored

LinkGroup desy-test-LinkGroup
/desy/Role=test

LinkGroup desy-anyone-LinkGroup
/desy/Role=*

LinkGroup default-LinkGroup
# allow anyone :-)
*/Role=*


Making and Releasing a Space Reservation as dCache Administrator


Making a Space Reservation

Example:

Now you can make a space reservation for the VO desy.

(SrmSpaceManager) admin > reserve space -owner=/desy/Role=production -desc=DESY_TEST -lifetime=10000 -lg=spacemanager_WriteLinkGroup 5MB
110000 voGroup:/desy voRole:production retentionPolicy:CUSTODIAL accessLatency:NEARLINE linkGroupId:0 size:5000000 created:Fri Dec 09 12:43:48 CET 2011 lifetime:10000000ms expiration:Fri Dec 09 15:30:28 CET 2011 description:DESY_TEST state:RESERVED used:0 allocated:0

The space token of the reservation is 110000.

Check the status of the reservation by

(SrmSpaceManager) admin > ls spaces -e -h
 TOKEN RETENTION LATENCY FILES ALLO   USED   FREE   SIZE             EXPIRES DESCRIPTION
110000 CUSTODIAL NEARLINE    0   0B +   0B + 5.0M = 5.0M 2011-12-09 12:43:48 DESY_TEST

(SrmSpaceManager) admin > ls link groups -h
FLAGS CNT RESVD   AVAIL   FREE             UPDATED NAME
--rc:no 1  5.0M +  7.3G = 7.3G 2011-11-28 12:12:51 spacemanager_WriteLinkGroup

Here the -h option indicates that approximate, but human readable, byte sizes are to be used, and -e indicates that ephemeral (time limited) reservations should be displayed too (by default time limited reservations are not displayed as they are often implicit reservations). As can be seen, 5 MB are now reserved in the link group, although with approximate byte sizes, 5 MB do not make a visible difference in the 7.3 GB total size.

You can now copy a file into that space token.

[user] $ srmcp file:////bin/sh srm://<dcache.example.org>:8443/data/world-writable/space-token-test-file -space_token=110000

Now you can check via the Webadmin Interface or the Web Interface that the file has been copied to the pool spacemanager-pool.

There are several parameters to be specified for a space reservation.

(SrmSpaceManager) admin > reserve space [-al=online|nearline] [-desc=<string>] -lg=<name>
[-lifetime=<seconds>] [-owner=<user>|<fqan>] [-rp=output|replica|custodial] <size>

[-owner=<user>|<fqan>]

The owner of the space is identified by either a mapped user name or an FQAN. The owner must be authorized to reserve space in the link group in which the space is to be created. Besides the dCache admin, only the owner can release the space. Anybody can, however, write into the space (although the link group may only allow certain storage groups and thus restrict which file system paths can be written to the space reservation, which in turn limits who can upload files to it).

[-al=<AccessLatency>]

AccessLatency needs to match one of the access latencies allowed for the link group.

[-rp=<RetentionPolicy>]

RetentionPolicy needs to match one of the retention policies allowed for the link group.

[-desc=<Description>]

You can choose a value to describe your space reservation.

-lg=<LinkGroupName>

Which link group to create the reservation in.

<size>

The size of the space reservation should be specified in bytes, optionally using a byte unit suffix using either SI or IEEE prefixes.

[-lifetime=<lifetime>]

The lifetime of the space reservation should be specified in seconds. If no lifetime is specified, the space reservation will not expire automatically.
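Putting these options together, a reservation that sets every parameter explicitly might look like the following (values are illustrative; the link group is the one created earlier):

(SrmSpaceManager) admin > reserve space -owner=/desy/Role=production -al=nearline -rp=custodial -desc=DESY_PROD -lifetime=86400 -lg=spacemanager_WriteLinkGroup 10GB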


Releasing a Space Reservation

If a space reservation is not needed anymore it can be released with

(SrmSpaceManager) admin > release space <spaceTokenId>

Example:

(SrmSpaceManager) admin > reserve space -owner=/desy -desc=DESY_TEST -lifetime=600 5000000
110042 voGroup:/desy voRole:production retentionPolicy:CUSTODIAL accessLatency:NEARLINE linkGroupId:0 size:5000000 created:Thu Dec 15 12:00:35 CET 2011 lifetime:600000ms expiration:Thu Dec 15 12:10:35 CET 2011 description:DESY_TEST state:RESERVED used:0 allocated:0
(SrmSpaceManager) admin > release space 110042
110042 voGroup:/desy voRole:production retentionPolicy:CUSTODIAL accessLatency:NEARLINE linkGroupId:0 size:5000000 created:Thu Dec 15 12:00:35 CET 2011 lifetime:600000ms expiration:Thu Dec 15 12:10:35 CET 2011 description:DESY_TEST state:RELEASED used:0 allocated:0

You can see that the value for state has changed from RESERVED to RELEASED.


Making and Releasing a Space Reservation as a User

If so authorized, a user can make a space reservation through the SRM protocol. A user is authorized to do so using the LinkGroupAuthorization.conf file.


VO based Authorization Prerequisites

In order to be able to take advantage of the virtual organization (VO) infrastructure, VO based authorization and VO based access control to the space in dCache, certain things need to be in place:

- The user needs to be registered with the VO.
- The user needs to use voms-proxy-init to create a VO proxy (see the example below).
- dCache needs to use gPlazma with modules that extract VO attributes from the user's proxy.

Only if these three conditions are satisfied will the VO based authorization of the SpaceManager work.
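For example, a VO proxy carrying the production role of the VO desy could be created along these lines (the VO name and role are illustrative and must be known to your VOMS server):

[user] $ voms-proxy-init -voms desy:/desy/Role=production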


VO based Access Control Configuration

As mentioned above, access control for the dCache space reservation functionality is currently performed at the level of link groups. Access to making reservations in each link group is controlled by the SpaceManagerLinkGroupAuthorizationFile.

This file contains a list of the link groups and all the VOs and the VO Roles that are permitted to make reservations in a given link group.

When an SRM space reservation request is executed, its parameters, such as reservation size, lifetime, access latency and retention policy, as well as the user's VO membership information, are forwarded to the SRM SpaceManager.

Once a space reservation is created, no further access control is performed: any user can store files in this space reservation, provided he or she knows the exact space token.


Making and Releasing a Space Reservation

A user who is given the rights in the SpaceManagerLinkGroupAuthorizationFile can make a space reservation by

[user] $ srm-reserve-space -retention_policy=<RetentionPolicy> -lifetime=<lifetimeInSecs> -desired_size=<sizeInBytes> -guaranteed_size=<sizeInBytes>  srm://<example.org>:8443
Space token =SpaceTokenId

and release it by

[user] $ srm-release-space srm://<example.org>:8443 -space_token=SpaceTokenId

Note

Please note that it is obligatory to specify the retention policy while it is optional to specify the access latency.

Example:

[user] $ srm-reserve-space -retention_policy=REPLICA -lifetime=300 -desired_size=5500000 -guaranteed_size=5500000  srm://srm.example.org:8443
Space token =110044

The space reservation can be released by:

[user] $ srm-release-space srm://srm.example.org:8443 -space_token=110044


Space Reservation without VOMS certificate

If a client uses a regular grid proxy, created with grid-proxy-init, rather than a VO proxy created with voms-proxy-init, when communicating with the SRM server in dCache, then the VO attributes cannot be extracted from its credentials. In this case the name of the user is extracted from the Distinguished Name (DN) using DN-to-user-name mapping. For the purposes of the space reservation, the name of the user as mapped by gPlazma is used as its VO group name, and the VO role is left empty. The entry in the SpaceManagerLinkGroupAuthorizationFile should be:

#LinkGroupAuthorizationFile
#
<userName>
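As a hypothetical complete example, if gPlazma maps the DN of such a proxy to the user name alice, the following entry would allow alice to reserve space in the link group created earlier (user name and link group are illustrative):

#SpaceManagerLinkGroupAuthorizationFile
# gPlazma maps the client's DN to the user name alice

LinkGroup spacemanager_WriteLinkGroup
alice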


Space Reservation for non SRM Transfers

Edit the file /etc/dcache/dcache.conf to enable space reservation for non SRM transfers.

spacemanager.enable.reserve-space-for-non-srm-transfers=true

If the spacemanager is enabled, spacemanager.enable.reserve-space-for-non-srm-transfers is set to true, the transfer request comes from a door, and no prior space reservation was made for this file, then the SpaceManager will try to reserve space before satisfying the request.

Possible values are true or false and the default value is false.

This is analogous to implicit space reservations performed by the srm, except that these reservations are created by the spacemanager itself. Since an SRM client uses a non-SRM protocol for the actual upload, setting the above option to true while disabling implicit space reservations in the srm will still allow files to be uploaded to a link group even when no space token is provided. Such a configuration should however be avoided: if the srm does not create the reservation itself, it has no way of communicating access latency, retention policy, file size, or lifetime to the spacemanager.
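For reference, a hedged configuration sketch for a site that lets the spacemanager reserve space for door transfers while keeping implicit SRM reservations enabled (all property names as documented in this chapter):

dcache.enable.space-reservation=true
spacemanager.enable.reserve-space-for-non-srm-transfers=true
srm.enable.space-reservation.implicit=true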


SRM configuration for experts

There are a few parameters in /usr/share/dcache/defaults/*.properties that you might find useful for nontrivial SRM deployment.


dcache.enable.space-reservation

dcache.enable.space-reservation controls whether space management is activated in SRM.

Possible values are true and false. Default is true.

Usage example:

dcache.enable.space-reservation=true


srm.enable.space-reservation.implicit

srm.enable.space-reservation.implicit tells whether space should be reserved for SRM Version 1 transfers and for SRM Version 2 transfers that have no space token specified.

Possible values are true and false. This is enabled by default. It has no effect unless dcache.enable.space-reservation is set to true.

Usage example:

srm.enable.space-reservation.implicit=true


dcache.enable.overwrite

dcache.enable.overwrite tells the SRM and GridFTP servers whether overwriting is allowed. If it is enabled on the SRM node, it should be enabled on all GridFTP nodes as well.

Possible values are true and false. Default is false.

Usage example:

dcache.enable.overwrite=true


srm.enable.overwrite-by-default

Set srm.enable.overwrite-by-default to true if you want overwriting to be enabled for the SRM v1.1 interface as well as for the SRM v2.2 interface when the client does not specify the desired overwrite mode. This option is considered only if dcache.enable.overwrite is set to true.

Possible values are true and false. Default is false.

Usage example:

srm.enable.overwrite-by-default=false 


srm.db.host

srm.db.host tells SRM which database host to connect to.

Default value is localhost.

Usage example:

srm.db.host=database-host.example.org


spaceManagerDatabaseHost

spaceManagerDatabaseHost tells SpaceManager which database host to connect to.

Default value is localhost.

Usage example:

spaceManagerDatabaseHost=database-host.example.org


pinmanager.db.host

pinmanager.db.host tells PinManager which database host to connect to.

Default value is localhost.

Usage example:

pinmanager.db.host=database-host.example.org


srm.db.name

srm.db.name tells SRM which database to connect to.

Default value is srm.

Usage example:

srm.db.name=srm


srm.db.user

srm.db.user tells SRM which database user name to use when connecting to the database. Do not change unless you know what you are doing.

Default value is dcache.

Usage example:

srm.db.user=dcache


srm.db.password

srm.db.password tells SRM which database password to use when connecting to the database. The default is an empty value (no password).

Usage example:

srm.db.password=NotVerySecret


srm.db.password.file

srm.db.password.file tells SRM which database password file to use when connecting to the database. Do not change unless you know what you are doing. It is recommended that the MD5 authentication method be used. To learn about the file format, please see http://www.postgresql.org/docs/8.1/static/libpq-pgpass.html. To learn more about authentication methods, please visit http://www.postgresql.org/docs/8.1/static/encryption-options.html and read the "Encrypting Passwords Across A Network" section.

This option is not set by default.

Usage example:

srm.db.password.file=/root/.pgpass
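For reference, a .pgpass file consists of lines of the form hostname:port:database:username:password. A hypothetical entry matching the defaults documented here (host localhost, database srm, user dcache) would be:

localhost:5432:srm:dcache:NotVerySecret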


srm.request.enable.history-database

srm.request.enable.history-database enables logging of the transition history of SRM requests in the database. The request transitions can be examined through the command line interface. Activating this option may increase database activity, so if the PostgreSQL load generated by SRM is excessive, disable it.

Possible values are true and false. Default is false.

Usage example:

srm.request.enable.history-database=true


transfermanagers.enable.log-to-database

transfermanagers.enable.log-to-database tells SRM to store information about remote (copy, srmCopy) transfer details in the database. Activating this option may increase database activity, so if the PostgreSQL load generated by SRM is excessive, disable it.

Possible values are true and false. Default is false.

Usage example:

transfermanagers.enable.log-to-database=false


srmVersion

srmVersion is not used by SRM itself; it is reported to be used by some publishing scripts.

Default is version1.


srm.root

srm.root tells SRM what the root of all SRM paths is in pnfs. SRM will prepend this path to all local SURL paths passed to it by an SRM client. So if srm.root is set to /pnfs/fnal.gov/THISISTHEPNFSSRMPATH and someone requests the read of srm://srm.example.org:8443/file1, SRM will translate the SURL path /file1 into /pnfs/fnal.gov/THISISTHEPNFSSRMPATH/file1. Setting this variable to something other than / is the equivalent of performing a Unix chroot for all SRM operations.

Default value is /.

Usage example:

srm.root="/pnfs/fnal.gov/data/experiment"


srm.limits.parallel-streams

srm.limits.parallel-streams specifies the number of parallel streams that SRM will use when performing third party transfers between this system and remote GSI-FTP servers, in response to the SRM v1.1 copy or SRM v2.2 srmCopy function. It has no effect on srmPrepareToPut and srmPrepareToGet command results or on the parameters of GridFTP transfers driven by SRM clients.

Default value is 10.

Usage example:

srm.limits.parallel-streams=20


srm.limits.transfer-buffer.size

srm.limits.transfer-buffer.size specifies the number of bytes to use for the in-memory buffers used when performing third party transfers between this system and remote GSI-FTP servers, in response to the SRM v1.1 copy or SRM v2.2 srmCopy function. It has no effect on srmPrepareToPut and srmPrepareToGet command results or on the parameters of GridFTP transfers driven by SRM clients.

Default value is 1048576.

Usage example:

srm.limits.transfer-buffer.size=1048576


srm.limits.transfer-tcp-buffer.size

srm.limits.transfer-tcp-buffer.size specifies the number of bytes to use for the TCP buffers used when performing third party transfers between this system and remote GSI-FTP servers, in response to the SRM v1.1 copy or SRM v2.2 srmCopy function. It has no effect on srmPrepareToPut and srmPrepareToGet command results or on the parameters of GridFTP transfers driven by SRM clients.

Default value is 1048576.

Usage example:

srm.limits.transfer-tcp-buffer.size=1048576


srm.service.gplazma.cache.timeout

srm.service.gplazma.cache.timeout specifies the duration that authorizations will be cached. Caching decreases the volume of messages to the gPlazma cell or other authorization mechanism. To turn off caching, set the value to 0.

Default value is 120.

Usage example:

srm.service.gplazma.cache.timeout=60


srm.limits.request.bring-online.lifetime, srm.limits.request.put.lifetime and srm.limits.request.copy.lifetime

srm.limits.request.bring-online.lifetime, srm.limits.request.put.lifetime and srm.limits.request.copy.lifetime specify the lifetimes of srmPrepareToGet (srmBringOnline), srmPrepareToPut and srmCopy requests in milliseconds. If the system is unable to fulfill the requests before the request lifetimes expire, the requests are automatically garbage collected.

Default value is 14400000 (4 hours).

Usage example:

srm.limits.request.bring-online.lifetime=14400000
srm.limits.request.put.lifetime=14400000
srm.limits.request.copy.lifetime=14400000


srm.limits.request.scheduler.ready.max, srm.limits.request.put.scheduler.ready.max, srm.limits.request.scheduler.ready-queue.size and srm.limits.request.put.scheduler.ready-queue.size

srm.limits.request.scheduler.ready.max and srm.limits.request.put.scheduler.ready.max specify the maximum number of files for which transfer URLs will be computed and given to users in response to SRM get (srmPrepareToGet) and put (srmPrepareToPut) requests. The rest of the files that are ready to be transferred are put on the ready queues, whose maximum lengths are controlled by the srm.limits.request.scheduler.ready-queue.size and srm.limits.request.put.scheduler.ready-queue.size parameters. These parameters should be set according to the capacity of the system, and are usually greater than the maximum number of GridFTP transfers that the GridFTP doors of this dCache instance can sustain.

Usage example:

srm.limits.request.scheduler.ready-queue.size=10000
srm.limits.request.scheduler.ready.max=2000
srm.limits.request.put.scheduler.ready-queue.size=10000
srm.limits.request.put.scheduler.ready.max=1000


srm.limits.request.copy.scheduler.thread.pool.size and transfermanagers.limits.external-transfers

srm.limits.request.copy.scheduler.thread.pool.size specifies how many srmCopy file copies to execute in parallel. Once SRM has contacted the remote SRM system and obtained a transfer URL (usually a GSI-FTP URL), it contacts a Copy Manager module (usually RemoteGSIFTPTransferManager) and asks it to perform a GridFTP transfer between the remote GridFTP server and a dCache pool. The maximum number of simultaneous transfers that RemoteGSIFTPTransferManager will support is transfermanagers.limits.external-transfers; therefore it is important that transfermanagers.limits.external-transfers is greater than or equal to srm.limits.request.copy.scheduler.thread.pool.size.

Usage example:

srm.limits.request.copy.scheduler.thread.pool.size=250
transfermanagers.limits.external-transfers=260


srm.enable.custom-get-host-by-address

srm.enable.custom-get-host-by-address enables a BNL-developed procedure for host-by-IP resolution when the standard InetAddress method fails.

Usage example:

srm.enable.custom-get-host-by-address=true


srm.enable.recursive-directory-creation

srm.enable.recursive-directory-creation allows or disallows automatic creation of directories via SRM. Set this to true or false.

Automatic directory creation is allowed by default.

Usage example:

srm.enable.recursive-directory-creation=true


hostCertificateRefreshPeriod

This option allows you to control how often the SRM door will reload the server’s host certificate from the filesystem. For the specified period, the host certificate will be kept in memory. This speeds up the rate at which the door can handle requests, but also causes it to be unaware of changes to the host certificate (for instance in the case of renewal).

By changing this parameter you can control how long the host certificate is cached by the door and consequently how fast the door will be able to detect and reload a renewed host certificate.

Please note that the value of this parameter has to be specified in seconds.

Usage example:

hostCertificateRefreshPeriod=86400


trustAnchorRefreshPeriod

The trustAnchorRefreshPeriod option is similar to hostCertificateRefreshPeriod. It applies to the set of CA certificates trusted by the SRM door for signing end-entity certificates (along with some metadata, these form so called trust anchors). The trust anchors are needed to make a decision about the trustworthiness of a certificate in X.509 client authentication. The GSI security protocol used by SRM builds upon X.509 client authentication.

By changing this parameter you can control how long the set of trust anchors remains cached by the door. Conversely, it also influences how often the door reloads the set of trusted certificates.

Please note that the value of this parameter has to be specified in seconds.

Tip

Trust-anchors usually change more often than the host certificate. Thus, it might be sensible to set the refresh period of the trust anchors lower than the refresh period of the host certificate.

Usage example:

trustAnchorRefreshPeriod=3600