Resources
| Max used physical non-swap i386 memory size | 2000 |
| Max used physical non-swap x86_64 memory size | 4000 |
| Max size of scratch space used by jobs | 20000 |
| Max time of job execution | 6000 |
| Job wall clock time limit | 7200 |
| number cores | min: | pref: | max: |
| number of ram | min: 0 | pref: | max: |
| scratch space values | min: | pref: | max: |
Cloud Resources
| CPU Core | 0 |
| VM Ram | 0 |
| Storage Size | 0 |
Other requirements
Further recommendations from LHCb for sites:
The amount of memory in the field "Max used physical non-swap x86_64 memory size" of the Resources section is understood as the virtual memory required per single process of an LHCb payload. An LHCb payload usually consists of one "worker process", which consumes the majority of the memory, and several wrapper processes. The wrapper processes together account for a further 1 GB of virtual memory, which must be added to the value in "Max used physical non-swap x86_64 memory size" if the virtual memory of the whole process tree is monitored.
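As a rough worked example of this accounting (a sketch, not an official formula: it takes the 4000 MB value from the Resources table, treats 1 GB as 1000 MB, and assumes the whole process tree is being monitored):

```python
# Hypothetical sketch of the memory accounting described above.
WORKER_LIMIT_MB = 4000      # "Max used physical non-swap x86_64 memory size"
WRAPPER_OVERHEAD_MB = 1000  # total virtual memory of all wrapper processes (1 GB)

def process_tree_limit_mb(worker_limit_mb: int = WORKER_LIMIT_MB) -> int:
    """Effective memory requirement when the whole process tree is monitored."""
    return worker_limit_mb + WRAPPER_OVERHEAD_MB

print(process_tree_limit_mb())  # 5000
```

So a site monitoring the full process tree would need to allow roughly 5000 MB of virtual memory per payload rather than 4000 MB.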
The amount of space in the field "Max size of scratch space used by jobs" is to be interpreted as follows: 5 GB are needed for the local software installation, and the remaining amount is split evenly, 50 % each, between downloaded input files and produced output files. T2 sites providing only Monte Carlo simulation need only provide the scratch space for the local software installation.
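The split above can be sketched numerically (an illustration only, assuming the 20000 MB value from the Resources table and treating 1 GB as 1000 MB):

```python
# Hypothetical breakdown of the scratch space described above.
TOTAL_SCRATCH_MB = 20000  # "Max size of scratch space used by jobs"
SOFTWARE_MB = 5000        # 5 GB for the local software installation

remaining_mb = TOTAL_SCRATCH_MB - SOFTWARE_MB
input_mb = remaining_mb // 2   # 50 % for downloaded input files
output_mb = remaining_mb // 2  # 50 % for produced output files

print(input_mb, output_mb)  # 7500 7500
```

A Monte-Carlo-only T2 site would, under this reading, need only the 5000 MB software portion.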
The CPU limits are understood to be expressed in kSI2k.minutes.
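Because the limit is benchmark-normalized, the corresponding real time on a node depends on that node's SI2k power. A hypothetical conversion (the node power of 2 kSI2k is an assumption for illustration, not a value from this document):

```python
CPU_LIMIT_KSI2K_MIN = 6000  # "Max time of job execution", in kSI2k.minutes

def cpu_budget_minutes(node_power_ksi2k: float) -> float:
    """CPU-time budget in real minutes on a node of the given benchmark power."""
    return CPU_LIMIT_KSI2K_MIN / node_power_ksi2k

print(cpu_budget_minutes(2.0))  # 3000.0
```

A faster node (higher kSI2k rating) thus exhausts the normalized limit in fewer real minutes.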
The shared software area shall be provided via CVMFS. LHCb uses the mount point /cvmfs/lhcb.cern.ch on the worker nodes.
A reasonable number of slots per disk server should be provisioned, proportional to the maximum number of concurrent jobs at the site.
OS and machine capabilities should be advertised in the BDII as described in https://wiki.egi.eu/wiki/HOWTO05 and https://wiki.egi.eu/wiki/HOWTO06 .
Clusters running different OSes should be separated via different CEs.
Non-T1 sites providing CVMFS, direct CREAM submission, and the requested amount of local scratch space will be considered as candidates for additional workloads (e.g. data reprocessing campaigns).