Services and Infrastructures


The Infrastructure and Services Department is responsible for the Data Center Facilities and for the Storage and Computing Services that PIC hosts. IT professionals working in this department also provide technical support and develop new services for PIC's users, in order to adapt our services to user needs.

Computing

PIC is a throughput-oriented site: latency is not so important, but aggregate throughput is the critical point. This means it matters more to us to finish as many jobs as possible, even if individual jobs take much longer to complete. Thus, instead of obtaining our computing power from a huge supercomputer with low-latency interconnects, our approach is to replicate many small commodity units that, working together, achieve our goal. Nevertheless, we renew our hardware often to offer the latest and fastest processors on the market.

The hardware solution is currently based on HP blades and Dell and Intel servers with Intel Xeon processors: we currently have (as of July 2016) 325 nodes with 5,980 cores available. 36 of these worker nodes (WNs) are immersed in the CarnotJet liquid cooling system from Green Revolution Cooling.

Apart from the computing nodes, PIC offers all the resources required for grid computing: user interfaces, computing elements and publication elements.

Storage – Tape

The tape infrastructure at PIC is provisioned through a Sun StorageTek SL8500 library, providing around 6,650 tape slots, which are expected to cover PIC's tape needs in the coming years. Enstore, the mass storage system developed at Fermilab, manages 14 PB of tape storage and provides distributed access to a total of 4.3 million files. The supported technologies are LTO-4, LTO-5, T10KC and T10KD, holding 16%, 16%, 61% and 6% of the total data respectively, on around 5,750 tape cartridges.

The used/installed cartridges per technology were 2003/2620 (LTO-4), 565/1420 (LTO-5) and 1330/1712 (T10K T2). There were 5,752 used slots in the library, out of a total of 6,632. A total of 29 tape drives are installed to read and write the data (12 IBM LTO-4, 4 HP LTO-5, 8 T10KC and 5 T10KD). Aggregate read/write throughput has achieved hourly average rates peaking at 3.5 GB/s.
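As a quick cross-check of the figures above, the slot occupancy and the per-technology cartridge usage can be derived directly from the quoted counts (all numbers below are copied from the text, not measured):

```python
# Cross-check of the tape library figures quoted above.
used_per_tech = {"LTO-4": 2003, "LTO-5": 565, "T10K T2": 1330}
installed_per_tech = {"LTO-4": 2620, "LTO-5": 1420, "T10K T2": 1712}

# Library slot occupancy: 5752 used slots out of 6632 in total.
occupancy = 5752 / 6632
print(f"slot occupancy: {occupancy:.1%}")  # ~86.7%

# Fraction of installed cartridges in use, per technology.
for tech, used in used_per_tech.items():
    share = used / installed_per_tech[tech]
    print(f"{tech}: {share:.1%} of installed cartridges in use")
```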

 

Storage – Disk

PIC's disk storage is based on dCache, an open-source project that aggregates all of our disk servers (around 60) and serves them as a single filesystem namespace. It supports a large set of standard protocols (WebDAV, FTP, SRM, NFSv4.1, XRootD, …) for accessing the data repository and its namespace.
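To illustrate one of these standard protocols, a WebDAV client lists a directory with a PROPFIND request. The sketch below builds such a request with Python's standard library; the endpoint, port and path are made-up placeholders, not PIC's actual service:

```python
# Minimal sketch of a WebDAV directory listing request, as a dCache
# client might issue it. Host, port and path are hypothetical examples.
from urllib.request import Request

def propfind_request(base_url: str, path: str, depth: int = 1) -> Request:
    """Build a WebDAV PROPFIND request to list a directory's contents."""
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    req = Request(url, method="PROPFIND")
    req.add_header("Depth", str(depth))  # 1 = immediate children only
    return req

req = propfind_request("https://dcache.example.org:2880", "/pnfs/example/data")
```

Sending the request (e.g. with `urllib.request.urlopen`, plus the site's authentication) would return an XML multistatus body describing each entry.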

We currently manage ~7 PB of data, providing 5.1% of the disk space for the CMS and ATLAS experiments and 6.5% for the LHCb experiment. PIC also operates the disk storage for the IFAE ATLAS Tier-2 and Tier-3, which amounts to 25% of the federated Spanish ATLAS Tier-2.

Apart from the LHC projects, PIC acts as the Tier-0 of the MAGIC and PAU experiments, is one of ten science data centers of the EUCLID consortium, and is ramping up data acquisition for the CTA telescope prototype.

 

Network

PIC is connected to other centers through a 10 Gbps physical link shared by the following services:

· LHCOPN (LHC Optical Network): Connection with CERN and the rest of LHC Tier1 centers using a private network
· LHCONE (LHC Open Network Environment): Connection with the LHC Tier2 centers using a private network
· Internet connection for users and other experiments that transfer data to PIC

All of these connections are provided through the Anella Científica, the organization that also connects the center with RedIRIS.
PIC is part of the HEPiX IPv6 Working Group, which tests the compatibility of the Grid applications used in WLCG.
The objective of this working group is to evaluate the performance of the software used by high-energy physics experiments in an IPv6 environment.
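A basic building block of such compatibility testing is checking which IP families a service resolves to. This is a generic sketch using Python's standard library, not a tool from the working group; the example host is arbitrary:

```python
# Report which IP address families a host resolves to, as a first
# dual-stack (IPv4/IPv6) compatibility check.
import socket

def address_families(host: str) -> set:
    """Return the set of IP families the host's DNS entries cover."""
    families = set()
    for family, *_ in socket.getaddrinfo(host, None):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# e.g. address_families("www.cern.ch") on a dual-stack machine
# should include "IPv6" if the host publishes an AAAA record.
```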

Regarding the Local Area Network (LAN), PIC has a low-latency network with multiple 10 Gbps-connected machines, allowing us to interconnect the Storage and Batch systems and move large amounts of data between them.

 

Particle Physics



The office coordinates and provides support to LHC activities (ATLAS, CMS, LHCb), Neutrino Physics experiments, VIP (Voxel Imaging PET), and many other (small) projects.

The enormous amounts of data produced by the LHC experiments require specialized, worldwide-distributed computing resources for data reconstruction, simulation and analysis. PIC operates a WLCG Tier-1 center that supports the ATLAS, CMS and LHCb experiments, as well as a fraction of the Spanish ATLAS Tier-2 center. A Tier-3 facility for ATLAS is also provided, which helps the local physicists with their data analysis. Around 80% of the deployed PIC resources are exploited by the LHC activities. The PIC team holds several positions of responsibility within WLCG and the LHC experiments, actively contributing to the core computing areas. The group supports the IFAE research group in the T2K neutrino experiment, as well as the exploitation of Grid resources for design studies of PET devices (the VIP project). Technical support is provided to integrate experiment workflows into the Grid when necessary.


 

Astrophysics and Cosmology


The Astrophysics and Cosmology group works in collaboration with several groups within the scientific community dedicated to astronomical research. The major experiments in the area of astrophysics are CTA and MAGIC, which investigate high-energy gamma-ray sources through secondary Cherenkov radiation. DES, PAU and Euclid are cosmological surveys searching for evidence of dark energy and dark matter through optical and near-infrared imaging. MICE supports cosmological surveys by simulating the dynamics of dark matter structures at very large scales and deriving galaxy catalogs from the resulting dark matter maps.

The data management for these collaborations covers the whole pipeline, from data acquisition through analysis to data publication. The scientists and engineers of the group are involved in different technical and scientific aspects of the exploitation of massive data sets coming from simulations and from ground-based or satellite telescopes.

 

Collaborations


WLCG – Worldwide LHC Computing Grid


WLCG Site

dCache


dCache Site

LTUG – Large Tape User Group


LTUG Site

MAGIC – Major Atmospheric Gamma Imaging Cherenkov


Magic at PIC
Magic Site


CTA – Cherenkov Telescope Array


CTA Site

EUCLID – A space mission to map the Dark Universe


EUCLID Site

PAU – Physics of the accelerating Universe


PAU Survey Site

DES – Dark Energy Survey


DES Site



Helix Nebula Initiative


Helix Nebula Site

T2K – Tokai to Kamioka neutrino experiment


T2K Site

VIP – Voxel Imaging PET


VIP Site

UAB Research Park


Research Park Site



Fermilab


Enstore System