Ideas for a HEP computing model


1 Ideas for a HEP computing model
A forward looking DM
Fabrizio Furano (CERN IT/GT)

2 The problem
HEP experiments are very large data producers, and the HEP community is an aggressive data consumer:
- Analyses rely on statistics over complex data
- There is both scheduled (production) processing and user-driven, unscheduled analysis
- Performance, usability and stability are the primary factors
- The infrastructure must be able to guarantee access and functionality, and the software must use the infrastructure well

3 Structured files
At present, most computations are organized around very efficient "structured files", the ROOT format being the most famous. They contain homogeneous data, ready to be analyzed at the various phases of HEP computing. Centrally-managed data processing rounds create the "bases of data" to be accessed by the community.
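A minimal sketch, in plain ROOT/C++, of what analysis over such a structured file looks like; the file name, tree name ("Events") and branch ("pt") are hypothetical:

```cpp
// Minimal sketch: reading one branch of a structured ROOT file.
// File name, tree name ("Events") and branch ("pt") are hypothetical.
#include "TFile.h"
#include "TTree.h"
#include "TH1F.h"

void read_structured()
{
   TFile *f = TFile::Open("skim.root");   // one "structured file"
   TTree *t = nullptr;
   f->GetObject("Events", t);             // homogeneous event data

   float pt = 0;
   t->SetBranchAddress("pt", &pt);

   TH1F h("h_pt", "p_{T};GeV;events", 100, 0, 100);
   for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);                     // per-branch, byte-level I/O
      h.Fill(pt);
   }
   f->Close();
}
```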

4 Distributed data
"One site is not enough to host everything and provide sufficient access capabilities to all the users." In the pre-Grid days, BaBar users used to "manually choose" the site where they wanted to do analysis (i.e. the one hosting their favorite skims) by logging into it. We assume that a worldwide distributed storage is necessary for SuperB, as in many modern experiments, so we are starting to look at which technologies could be available and easy to adopt in the next few years.

5 The minimalistic workflow
In general, a user will:
- Decide which analyses to perform for his new research
- Develop the code which performs them (typically a high-level macro or a "simple" plugin for some experiment-based software framework)
- Ask a system about the data requirements, i.e. which files contain the needed information (typically an experiment-based metadata repository and/or file catalogue)
- Ask another system to process his analysis (typically the GRID (WLCG), batch farms or PROOF-like systems; this will likely also become his own computer, as hardware performance increases and the software uses it efficiently)
- Collect the results

6 Semi-automated approach
One can:
- Choose carefully where to send a processing job, e.g. to the place which best matches the needed data set
- Use tools to create local replicas of the needed data files, in the right places
- Eventually also use tools to push new data files to the "official" repositories
If overdone, this can be quite time- and resource-consuming. Any variation is possible, e.g. pre-populating everything before sending jobs, guessing the "perfect match" between job needs and storage content.

7 Direct approach
We might not want to be limited to local access, i.e. to pre-arranging all the data close to the computation (really a very tough problem). The technologies allow more freedom if:
- Everything is available read/write through URLs (proto://host/path)
- The analysis tools support URLs to random-access files
- The I/O is performed at byte level, directly in the file, without copying it
- The analysis tools exploit advanced I/O features, e.g. ROOT TTree + TTreeCache (see the sketch below)
To some extent this can also work via WAN, and we will see more and more real-world examples of it. If overdone, the WAN could become a limiting factor, but less and less so: bandwidth is increasing very fast. HEP data files are big and quite static, so it is better to exploit locality if possible; use or create locality as an optimization.
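A minimal sketch of this direct approach with ROOT, assuming a hypothetical xrootd endpoint and hypothetical tree/branch names; TTreeCache is what turns many small remote reads into a few large vectored ones:

```cpp
// Minimal sketch of direct remote access with ROOT. Endpoint, tree and
// branch names are hypothetical. No local copy of the file is made.
#include "TFile.h"
#include "TTree.h"

void direct_access()
{
   // Byte-level access through a URL.
   TFile *f = TFile::Open("root://storage.example.org//store/skim.root");
   TTree *t = nullptr;
   f->GetObject("Events", t);

   t->SetCacheSize(30 * 1024 * 1024);   // 30 MB TTreeCache
   t->AddBranchToCache("*", kTRUE);     // prefetch the used branches

   float pt = 0;
   t->SetBranchAddress("pt", &pt);
   for (Long64_t i = 0; i < t->GetEntries(); ++i)
      t->GetEntry(i);                   // reads are served from the cache
   f->Close();
}
```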

8 Outsource?
Cloud services, outsourcing. If this is seen as a possible future need, it is desirable to organise things so that it remains possible with low effort. This could have an impact on some choices, e.g. the data protocols that are supported/used: it is more difficult to ask a hosting provider for complex custom systems, and easier to ask for a "normal" service, i.e. a Web server.

9 Where is file X
The historical approach was to implement a "catalogue" using a DBMS. This "catalogue" knows where the files are, or are supposed to be. It can give a sort of "illusion" of a worldwide file system; it must be VERY well encapsulated, however (see the toy sketch below). It would be nicer if this functionality were inside the file system/data access technology itself, with no need for complex systems, workarounds or glue code.
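A toy sketch of the catalogue idea, with hypothetical LFNs and replica URLs; a real catalogue would live in a DBMS behind a service:

```cpp
// Toy sketch of a file catalogue: a logical file name (LFN) maps to the
// URLs that are supposed to hold a replica. All names are hypothetical.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
   std::map<std::string, std::vector<std::string>> catalogue = {
      {"/hep/skimA/run42.root",
       {"root://siteA.example.org//data/skimA/run42.root",
        "root://siteC.example.org//pool7/skimA/run42.root"}},
   };

   // "Where is file X?" -- the illusion of a worldwide file system.
   for (const auto &url : catalogue["/hep/skimA/run42.root"])
      std::cout << "replica: " << url << '\n';

   // Caveat from the slide: the catalogue knows where files are *supposed*
   // to be; storage can drift out of sync, hence the encapsulation need.
}
```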

10 The protocol Babel
Some are attached to this idea, i.e. using:
- A protocol (+ its tools) to transfer files
- A protocol (+ its tools) to access them in some sites
- A protocol (+ its tools) to access them in other sites
- A protocol (+ its tools) to access system X, then Y, then Z
- A protocol (+ its tools) to access <put your thing here>
The impression is that these were often workarounds for protocols and tools that were not adequate to the (distributed) HEP use cases, or poorly implemented. This surely raises the cost of development and operation. The Web has use cases quite similar to those of HEP, and it uses only one protocol; this should be a good inspiration for the evolution of a new computing model.

11 Current and future evolutions

12 The survivors
Likely, in a few years we will have just a few major alternatives for HEP data processing:
- XROOTD: ramping up now in the LHC experiments. EOS is an xrootd-based system, tailored to the needs of a big storage farm for HEP (currently only at CERN)
- HTTP/WebDAV: it moves the world; slightly less efficient, but with highly robust products that support it. For now it misses the client implementations of some features that are needed for HEP
- NFS4.1: a relatively new entry, as a refurbishment of a much older one. For some it is a promising path

13 Xrootd and Scalla
In 2002 there was the need for a data access system providing, basically:
- Compliance with the HEP requirements
- "Indefinite" scaling possibility (up to ~200K servers)
- Maniacally efficient use of the hardware, accommodating thousands of clients per server
- Great interoperability/customization possibilities
In the default configuration it implements a non-transactional distributed file system: efficient through LAN/WAN, not linked to a particular data format, and particularly optimized for HEP workloads, thus matching the ROOT requirements very well.

14 Xrootd and Scalla (contd.)
- Build efficient local storage clusters, with virtually no limit to scalability
- Aggregate storage clusters into WAN federations, and access remote data efficiently through the WAN
- Push/pull files SE-to-SE without data movers or external systems (see the sketch below)
- Build proxies which can cache a whole repository, increasing data access performance (or decreasing WAN traffic) through a decent 'hit rate'
- Build hybrid proxies, caching an official repository while storing local data locally
Keep an eye on the development of EOS at CERN.
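A hedged sketch of an SE-to-SE copy driven entirely from a client, using ROOT's static TFile::Cp between two hypothetical endpoints; the xrootd suite's xrdcp tool plays the same role from the shell:

```cpp
// Hedged sketch: copying a file between two storage endpoints without any
// external data-mover service. The client reads from the source URL and
// writes to the destination URL. Both endpoints are hypothetical.
#include "TFile.h"

void push_pull()
{
   TFile::Cp("root://siteA.example.org//data/skimA/run42.root",
             "root://siteB.example.org//data/skimA/run42.root");
}
```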

15 HTTP/WebDAV
Wikipedia: "A set of methods based on HTTP that facilitates collaboration between users in editing and managing documents and files stored on WWW servers." We know it because we can browse a repository, or even mount it, with the favourite tools of our desktop/laptop: clients are available in all modern operating systems, supported by various browsers and file managers. It was not quite born for HEP, but a well-performed R&D activity could get it there. Most of the features are already present; one could start with it immediately (maybe with suboptimal results for a full-featured HEP model). It is easy to outsource such a service if it is based on standard industry components.
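ROOT can already read a remote file over plain HTTP (via byte-range requests), which is part of what makes HTTP a credible candidate; a minimal sketch with a hypothetical host and file name:

```cpp
// Minimal sketch: ROOT reading a file over plain HTTP. The server only
// needs to support byte-range requests; host and file are hypothetical.
#include "TFile.h"
#include "TTree.h"

void http_access()
{
   TFile *f = TFile::Open("http://storage.example.org/store/skim.root");
   TTree *t = nullptr;
   f->GetObject("Events", t);
   t->Print();   // metadata only; entries are fetched on demand
   f->Close();
}
```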

16 HTTP/WebDAV (contd.)
In case it is chosen as a future data access scheme for HEP, some work has to be done to give a client the functionality needed for HEP (e.g. auth/authz, redirections, sessions, multistreaming). The implementation in ROOT is a private one, very basic (but it works quite well). Not very easy, but a very promising project to work on.
Courtesy of Alex Alvarez Ayllon (CERN IT/GT)

17 NFS4.1
A more modern version of the NFS protocol, adding features, including performance- and scalability-oriented ones. It supports POSIX semantics and parallel access (pNFS), and it is supported by well-known products (e.g. dCache and DPM). "Contains the protocol elements to support WAN data access" is one of its claims; no HEP-related tests of this are known to me. It is a distributed file system whose client is available in the more recent Linux kernels: in a few years it will be a normal thing to have (see the sketch below). It will certainly be adequate for the needs of a single data center, provided that there is 'glue code' connecting the site to the others. Several HPC gurus (e.g. at Google) are suspicious of pure POSIX data access, which is supposed to put limits on scaling.
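A minimal sketch of what this buys: once the kernel NFS4.1 client has mounted the remote storage, data access is plain POSIX I/O (the mount point and file name are hypothetical):

```cpp
// Minimal sketch: with NFS4.1, remote storage appears as a normal mount,
// so access is plain POSIX I/O. Mount point and file are hypothetical.
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main()
{
   // The application neither knows nor cares that /nfs4 is remote/pNFS.
   int fd = open("/nfs4/store/skim.root", O_RDONLY);
   if (fd < 0) { perror("open"); return 1; }

   char buf[4096];
   ssize_t n = pread(fd, buf, sizeof buf, 0);   // read the first 4 KB
   std::printf("read %zd bytes\n", n);
   close(fd);
}
```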

18 Federations
Suppose that we can easily aggregate remote storage sites and provide an efficient entry point which "knows them all natively":
- We could use it to access data directly between sites: a job willing to access data will not know that the data is at a friend site, it will just access it
- We could use it as a building block for a self-referring federation: if site A is asked for file X, A will fetch X from some other 'friend' site through the unique entry point; A itself is a potential source, accessible through the aggregating entry point (see the sketch below)
The XROOTD framework supports this natively, but it is a quite general idea that can be applied to other systems.
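A toy sketch of the fetch-on-miss logic described above, with all names hypothetical; in XROOTD this behaviour is built in, not hand-written:

```cpp
// Toy sketch of a self-referring federation at "site A": on a miss, the
// site locates the file through the federation entry point and pulls it.
#include <iostream>
#include <map>
#include <optional>
#include <string>

std::map<std::string, std::string> local_store;   // toy local storage

std::optional<std::string> local_lookup(const std::string &lfn)
{
   auto it = local_store.find(lfn);
   if (it == local_store.end()) return std::nullopt;
   return it->second;
}

// The entry point "knows all sites natively" (toy answer here).
std::string federation_locate(const std::string &lfn)
{
   return "root://siteB.example.org/" + lfn;
}

// Pull a copy from the chosen source into local storage (simulated).
std::string fetch(const std::string &lfn, const std::string &src)
{
   std::cout << "fetching " << src << '\n';
   return local_store[lfn] = "/pool" + lfn;
}

// The client never learns where X really came from.
std::string serve(const std::string &lfn)
{
   if (auto path = local_lookup(lfn)) return *path;  // hit: serve locally
   return fetch(lfn, federation_locate(lfn));        // miss: friend site
}

int main()
{
   std::cout << serve("/hep/skimA/run42.root") << '\n';  // miss, then fetch
   std::cout << serve("/hep/skimA/run42.root") << '\n';  // now a local hit
}
```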

19 An idea for an analysis facility
It contains:
- Institutional data: proxied
- Personal user data: local only
Data is accessible through:
- A FUSE mount point, if useful, plus a simple data management app
- Native access from ROOT (more efficient)
- The WAN: one can travel and still see the same repository from the (connected) laptop, which maybe also has some persistent caching (e.g. one of the new things in ROOT)
It is federable with friend sites with the least amount of glue code/systems. Examples of this exist in the ALICE computing model and in some CMS sites; surely this concept will ramp up and become more popular.

20 Conclusion
"The next level in Storage + Data Access/Data Management": a Web-like functional level, tailored to the particular HEP requirements. Very good examples exist right now, and many others are coming. Interoperability and performance are the key factors.

21 Thank you
Questions?

22 The Data Management
Files and datasets are stored in Storage Elements, hosted by sites; the decision on placement is often taken when they are produced. Processing jobs are very data-greedy (rates up to the MB/s scale now). The GRID machinery (eventually together with some service of the experiment) decides where to run a job; the service can also be human-based (!). Matching the locations of the data with the available computing resources is known as the "GRID Data Management Problem" (see the toy sketch below).
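A toy sketch of this matching problem, greedily sending each job to the site holding most of its datasets; the site contents and job needs are hypothetical (they mirror the example on the next slide), and a real broker also weighs queues, network cost and priorities:

```cpp
// Toy sketch of the "GRID Data Management Problem": send each job to the
// site already holding most of the datasets it needs. Data hypothetical.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

int main()
{
   std::map<std::string, std::set<char>> site_content = {
      {"siteA", {'G', 'A', 'B'}},
      {"siteB", {'D', 'E'}},
      {"siteC", {'F', 'B', 'E'}}};
   std::vector<std::set<char>> jobs = {{'G', 'A'}, {'D', 'E'}, {'A', 'E'}};

   for (const auto &need : jobs) {
      std::string best;
      size_t best_hits = 0;
      for (const auto &[site, held] : site_content) {
         size_t hits = 0;
         for (char d : need) hits += held.count(d);   // datasets already there
         if (hits > best_hits) { best_hits = hits; best = site; }
      }
      std::cout << "job -> " << best << " (" << best_hits << "/"
                << need.size() << " datasets local)\n";
   }
}
```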

23 An example
[Diagram: distributed sites A-G with pre-filled storage and computing farms, and data-greedy jobs needing the datasets G+A, D+E, A+E, F+B and G+E respectively.]

24 WAN data access
In WANs each client/server response comes much later, e.g. 180 ms later. Even with well-tuned WANs, one needs apps and tools built with WANs in mind. Bulk transfer apps are easy (gridftp, xrdcp, fdt, etc.), but there are more interesting use cases, and much more benefit to be gained: WAN direct access for analysis has been a reality for quite some time.
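A rough illustration of why the latency dominates (the request counts here are assumptions for the sake of the example): at 180 ms round-trip time, 10,000 synchronous small reads cost 10,000 x 0.18 s = 1800 s, i.e. half an hour of pure waiting, while a cache such as TTreeCache that coalesces them into ~100 vectored requests pays only 100 x 0.18 s = 18 s of latency.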

