Overview of dCache Systems at BNL
Iris Wu, Hiro Ito, Jane Liu
dCache Workshop 2018, DESY, Hamburg, Germany
dCache instances
• ATLAS dCache: largest ATLAS T1 site; core server 3.0.11, NFS 3.0.38, pool 3.0.43; 17.5 PB disk space; TAPE backend
• Belle II dCache: Belle II T1 site; version 3.0.11; 1.7 PB disk space; TAPE backend
• Simon's dCache: version 2.1.6; 0.26 PB disk space; in the progress of an upgrade
• PHENIX dCache: PHENIX experiment T0 site; TAPE backend

Services: SRM, GFTP, Admin, door, pool, HSM, xrootd, Chimera

Puppet management
• Efficient
• Automation
• Common modules: autofs, certificates, ldap, LVM, mdadm
• Pool configuration kept in Hiera, for example:

    atlasdcache::pool::data:
      dc016:
        1:
          diskspace: 51938544204217
          lan: 1000
          wan: 10
          p2p: 20
          pp: 16
          rh: 1000
          st: 2
          checksum: ADLER32
          type: disk
          tags: CDCE
          disk: data
          device: /dev/md0
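As a minimal sketch (not part of the slides), the Hiera pool data above can be sanity-checked with a few lines of Python; the file name dc016.yaml is an assumption, the layout is the one shown on the slide.

    import yaml  # PyYAML

    # Hypothetical Hiera file holding the pool data shown above.
    with open("dc016.yaml") as f:
        hiera = yaml.safe_load(f)

    pools = hiera.get("atlasdcache::pool::data", {})
    for host, host_pools in pools.items():
        for index, cfg in host_pools.items():
            size_tb = cfg["diskspace"] / 1e12
            print(f"{host} pool {index}: {size_tb:.1f} TB on {cfg['device']} "
                  f"(type={cfg['type']}, checksum={cfg['checksum']})")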
US ATLAS Tier1 dCache @BNL
[Architecture diagram] Admin nodes (primary and secondary), Resilience Management, disk pool nodes (read pools and write pools), tape pool nodes, disk cache, and data transfer nodes. Access is provided via NFS 4.1/pNFS, HTTP/WebDav, GFTP, and Globus Online.
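To illustrate the HTTP/WebDav access path named in the diagram, here is a hedged Python sketch; the door host, port, file path, and credential locations are assumptions, not BNL's real endpoints.

    import requests

    # Hypothetical WebDAV door URL and /pnfs path.
    url = "https://dcache-door.example.bnl.gov:2880/pnfs/usatlas.bnl.gov/users/alice/test.root"

    # X.509 grid proxy used as both certificate and key; paths are assumptions.
    resp = requests.get(
        url,
        cert=("/tmp/x509up_u1000", "/tmp/x509up_u1000"),
        verify="/etc/grid-security/certificates",
        stream=True,
    )
    resp.raise_for_status()

    with open("test.root", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)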
Protocols

NFS 4.1
● Mounted in the Linux farm
● US ATLAS Tier3 facility: /pnfs/usatlas.bnl.gov/users/

XRootD
● Read is in production
● Write is tested through GSI authentication
● Support of XRootD third-party transfer?
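A minimal sketch of the two read paths above: ordinary POSIX I/O on the NFS 4.1 mount, and a copy through the xrootd door. The file name and the xrootd door host are assumptions; the /pnfs prefix is the one from the slide, and xrdcp ships with the standard XRootD client tools.

    import subprocess

    # Reading through the NFS 4.1 mount (file name is hypothetical).
    nfs_path = "/pnfs/usatlas.bnl.gov/users/alice/example.root"
    with open(nfs_path, "rb") as f:
        header = f.read(1024)          # plain POSIX I/O over pNFS
    print(f"read {len(header)} bytes via NFS 4.1")

    # Reading the same file through the xrootd door (door host is assumed).
    subprocess.run(
        ["xrdcp",
         "root://dcxrd.example.bnl.gov//pnfs/usatlas.bnl.gov/users/alice/example.root",
         "/tmp/example.root"],
        check=True,
    )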
Protocols: GridFTP options

Globus GridFTP Server + dCache NFS 4.1
• Pros
  • It is native Globus: the server is always supported by Globus.
  • NFS 4.1 is a standard.
• Cons
  • The performance is not as good as the dCache GridFTP option.
  • Stability issues have been reported.

dCache native GridFTP server
• Pros
  • It is native dCache!!!
  • Performance and stability are great.
• Cons
  • Can the dCache developers keep supporting it?
  • Can it be officially supported by Globus?
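Either flavour of GridFTP door is driven the same way from the client side. A hedged sketch using globus-url-copy (part of the standard GridFTP client tools); the door host and destination path are made up for illustration.

    import subprocess

    # Hypothetical source file and GridFTP door endpoint.
    src = "file:///data/run123/output.root"
    dst = "gsiftp://dcgftp.example.bnl.gov:2811/pnfs/usatlas.bnl.gov/users/alice/output.root"

    # -p 4: four parallel TCP streams; -vb: report throughput while copying.
    subprocess.run(["globus-url-copy", "-vb", "-p", "4", src, dst], check=True)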
BNLBox (cloud storage / ownCloud)
• Pros
  • Easiest way to transfer data into and out of dCache
  • Automatic sync
• Cons
  • SFTP can be problematic.
  • Can dCache support it natively, like CERNBox with EOS?

[Diagram] ownCloud → External Storage via SFTP → dCache NFS 4.1 → dCache
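A minimal sketch of the SFTP leg in the diagram above, assuming a gateway host that exposes the dCache NFS 4.1 mount over SFTP; the host name, user, key path, and file names are all hypothetical, and paramiko is used only as one common SFTP client library.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Hypothetical SFTP gateway in front of the dCache NFS 4.1 mount.
    client.connect("bnlbox-sftp.example.bnl.gov", username="alice",
                   key_filename="/home/alice/.ssh/id_rsa")

    sftp = client.open_sftp()
    # Upload lands on the /pnfs namespace exported by the gateway.
    sftp.put("/home/alice/results.tar",
             "/pnfs/usatlas.bnl.gov/users/alice/results.tar")
    sftp.close()
    client.close()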
Customization - Deletion
• Namespace: files are removed from the namespace by clients.
• Pool: tune the Cleaner to remove files physically from the pools efficiently and on time, favoring the pools with the most deleted files by querying t_location_trash; this avoids Dark Data.
• Database: remove the corresponding records from srmspacefile in the SRM database.
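A hedged sketch of the kind of query behind "favor pools with the most deleted files": the table name t_location_trash comes from the slide, while the connection parameters and the column name ilocation are assumptions.

    import psycopg2

    # Hypothetical namespace database host and credentials.
    conn = psycopg2.connect(dbname="chimera", user="dcache",
                            host="nsdb.example.bnl.gov")
    with conn, conn.cursor() as cur:
        # Pools with the most pending physical deletions first.
        cur.execute("""
            SELECT ilocation, count(*) AS pending
            FROM t_location_trash
            GROUP BY ilocation
            ORDER BY pending DESC
            LIMIT 10
        """)
        for location, pending in cur.fetchall():
            print(f"{pending:8d} pending deletions on {location}")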
Customization - schema change

dcache.srmspacefile, before:
  id, vogroup, vorole, spacereservation, sizeinbytes, creationtime, pnfsid, state

dcache.srmspacefile, after (chimerapath column added):
  id, vogroup, vorole, spacereservation, sizeinbytes, creationtime, pnfsid, state, chimerapath
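A minimal sketch of applying that schema change; the table srmspacefile and the column name chimerapath are taken from the slide, while the connection details and the column type are assumptions.

    import psycopg2

    # Hypothetical SRM database host and credentials.
    conn = psycopg2.connect(dbname="dcache", user="srmdcache",
                            host="srmdb.example.bnl.gov")
    with conn, conn.cursor() as cur:
        # Column type is an assumption; IF NOT EXISTS keeps the change idempotent.
        cur.execute(
            "ALTER TABLE srmspacefile ADD COLUMN IF NOT EXISTS chimerapath VARCHAR"
        )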
Future operation - splitting read pools for the tape area
• Disk space crunch! Various experiments and users have more data than the spinning disk available to them, and it is expected to grow further.
• Tape storage is still cheaper than spinning disks (or SSDs).
• Would like to use the archive storage (aka TAPE), a limited resource, as effectively as reasonably possible.
• TAPE requires a particular access pattern to get the optimum performance (see the staging sketch below).
• The callback option for the future? Non-blocking pulling of CHIMERAID-to-namespace mappings (ChimeraDB).

[Diagram] Newly written files → write pools → HPSS disk cache → HPSS tape drives; files staged back from tape go through dedicated tape stage pools, kept separate from the (finite) read pools.
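To make the access-pattern point concrete, here is a hedged, purely illustrative sketch of a staging client that groups bring-online requests by tape volume so each tape is mounted once; the volume lookup and the submit_stage stub are hypothetical and stand in for whatever HSM interface is actually used.

    from collections import defaultdict

    def stage_in_tape_order(requests, volume_of):
        """Group stage requests by tape volume so each tape is mounted once.

        requests:  iterable of pnfs paths to stage
        volume_of: callable mapping a path to its tape volume label
                   (in practice this would come from HSM metadata)
        """
        by_volume = defaultdict(list)
        for path in requests:
            by_volume[volume_of(path)].append(path)

        for volume, paths in by_volume.items():
            print(f"staging {len(paths)} files from tape {volume}")
            for path in paths:
                submit_stage(path)   # one mount, many recalls

    def submit_stage(path):
        # Placeholder: a real client would issue an SRM bring-online or a
        # dCache stage request here; left as a stub in this sketch.
        pass

    if __name__ == "__main__":
        demo = {"/pnfs/a/f1": "VOL001", "/pnfs/a/f2": "VOL002", "/pnfs/a/f3": "VOL001"}
        stage_in_tape_order(demo.keys(), volume_of=demo.get)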
Future operation - Ceph pools
• Pros
  • Separate the pool service hosts from the actual storage.
  • Remove the association of files in Ceph pools with t_location.
  • Erasure coding allows resiliency without a duplicate copy.
• Questions
  • Performance of partial reads??? (see the librados sketch below)
  • Scalability of erasure coding???

[Diagram] Regular storage: Pool Host A ... Pool Host X, each with its own disks. Ceph: Pool Host A ... Pool Host X serving pools A0..AN ... X0..XN on top of Ceph librados pools.
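The partial-read question can be probed directly with the librados Python bindings. A minimal sketch, assuming a made-up pool name and object name and the usual /etc/ceph/ceph.conf client configuration.

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Hypothetical RADOS pool backing a dCache Ceph pool.
        ioctx = cluster.open_ioctx("dcache-ec-pool")
        try:
            # Partial read: fetch 4 KiB starting 1 MiB into the object,
            # the access pattern whose performance the slide asks about.
            chunk = ioctx.read("pnfsid-0000ABCD", length=4096, offset=1 << 20)
            print(f"read {len(chunk)} bytes")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()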
Needs help from developers
● Continue the support for Ceph pools
● Continue the support for Globus Online
● Support XRootD third-party transfer
● The issue with Resilience Management