Xrootd explained
A general introduction
Fabrizio Furano, CERN IT/GS
CERN Fellow seminars, September 2008
CERN IT Department, CH-1211 Genève 23, Switzerland

The historical problem: data access

Physics experiments rely on rare events and statistics: a huge amount of data is needed to collect a significant number of events.
- The typical data store can reach 5-10 PB… now.
- Millions of files, thousands of concurrent clients:
  - Each one opening many files (about … in ALICE, up to 1000 in GLAST).
  - Each one keeping many files open.
  - The transaction rate is very high: O(10^3) file opens/sec per cluster is not uncommon (average, not peak).
  - Traffic sources: local GRID site, local batch system, WAN.
- Need scalable, high-performance data access, with no imposed limits on performance, size or connectivity.
  - Do we like clusters made of 5000 servers? We MUST be able to do it.
  - Need a way to avoid worker-node (WN) under-utilization.

What is Scalla/XRootd?

- The evolution of the BaBar-initiated xrootd project (SLAC + INFN Padova).
- Data access designed with HEP requirements in mind, but a completely generic platform.
- Scalla stands for Structured Cluster Architecture for Low Latency Access:
  - Low-latency access to data via xrootd servers.
  - POSIX-style byte-level random access.
  - By default, arbitrary data organized as files.
  - Hierarchical, directory-like name space.
  - The protocol includes high-performance features.
  - Exponentially scalable and self-organizing.
- Tools and methods to cluster, harmonize, connect, …
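A quick illustration of the POSIX-style access model described above: any file in a cluster is addressed by a root:// URL, which a client tool or an application can copy or open directly. The hostname and path below are hypothetical placeholders, not taken from the talk.

    # Copy a file out of a (hypothetical) xrootd cluster with xrdcp
    xrdcp root://redirector.example.org//store/run123/events.root /tmp/events.root

    # Applications (e.g. ROOT) normally skip the copy and open the root:// URL
    # itself, doing byte-level random reads over the xrootd protocol.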

The XRootd "protocol"

More or less, Scalla/Xrootd brings:
- An architectural approach to data access, which you can use to design your own scalable systems.
- A data access protocol (the xrootd protocol).
- A clustering protocol.
- A set of software pieces: daemons, plugins, interfaces, etc.

When someone tests Xrootd, they are:
- Designing a system (even a small one).
- Running the tools/daemons from this distribution; the xrootd protocol itself is just one (quite immaterial) piece.

There are no restrictions at all on the served file types: ROOT, MPEG, TXT, MP3, AVI, …

The Xrootd "protocol"

- The XRootD protocol is a good one, but it doesn't do any magic:
  - It does not multiply your resources.
  - It does not overcome bottlenecks.
- One of the aims of the project is still software quality, in the carefully crafted pieces of software that come with the distribution.
- What makes the difference with Scalla/XRootD is:
  - Implementation details (performance + robustness); bad performance can hurt robustness, and vice versa.
  - The Scalla software architecture (scalability + performance + robustness); you need a clean design into which to insert it.
  - Sane deployments (the simpler, the better, à la Google!).

xrootd plugin architecture

(Layered diagram of the server stack)
- Protocol driver (XRD)
- Protocol, 1 of n (xrootd)
- File system layer (ofs, sfs, alice, etc.)
- Authentication (gsi, krb5, etc.) and authorization (name based)
- Clustering (cmsd)
- Storage system (oss, drm/srm, etc.), with lfn2pfn prefix encoding
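As a rough, untested sketch of how these layers map onto configuration directives (library names and paths are illustrative placeholders, not a recommended setup):

    # Protocol layer: the xrootd protocol loaded into the XRD protocol driver
    xrd.protocol xrootd libXrdXrootd.so

    # File system layer (ofs); swapping this library changes the behaviour
    xrootd.fslib libXrdOfs.so

    # Storage system layer (oss); a plugin could replace it, e.g. an MSS-aware one
    # ofs.osslib libMyCustomOss.so
    oss.localroot /data/xrd

    # Authentication framework; individual protocols are configured separately
    xrootd.seclib libXrdSec.so

    # Name space exported to clients
    all.export /store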

Different usages

- The default set of plugins provides scalable file server functionality:
  - Its main historical function.
  - To be used in common data management schemes.
- Example: the ROOT framework bundles it as it is, and provides one more plugin, XrdProofdProtocol, plus several other ROOT-side classes.
  - This is the heart of PROOF, the Parallel ROOT Facility.
  - A completely different task, obtained by loading a different plugin (with a different configuration, of course): massive low-latency parallel computing of independent items ('events' in physics…), using the core characteristics of the xrootd framework.

File serving

- A very open platform for file serving: it can be used in many, many ways, even crazy ones.
  - An example? Xrootd-over-NFS-over-XFS data serving. By the way, do your best to avoid this kind of thing!
  - In general, the really best option is local disks on redundant, cheap servers, plus (if you want) some form of data redundancy/MSS. Additional complexity can impact performance and robustness.
- The Scalla/Xrootd code base is only one, and always up to date (… and http://savannah.cern.ch/projects/xrootd).
  - Many sites historically set up everything manually from a CVS snapshot, or wrote new plugins to accommodate their requirements: careful manual configuration (e.g. BNL-STAR).
  - Many others rely on a standardized setup (e.g. several ALICE sites, including the CERN Castor2 cluster up to now); more about this later.
  - Others take it from the ROOT bundle (e.g. for PROOF), which itself comes from a CVS snapshot: again, careful and sometimes very delicate manual configuration.

Most famous basic features

- No weird configuration requirements: setup complexity scales with the complexity of the requirements. No strange software dependencies. Highly customizable.
- Fault tolerance.
- High, scalable transaction rate:
  - Open many files per second; double the system and double the rate.
  - NO databases for filesystem-like functions! Would you put one in front of your laptop's file system? How long would the boot take?
  - No known limitations in size or total global throughput of the repository.
- Very low CPU usage on servers.
- Happy with many clients per server: thousands. But check their bandwidth consumption against the disk/network performance!
- WAN friendly (client + protocol + server): enables efficient remote POSIX-like direct data access through the WAN.
- WAN friendly (server clusters): huge WAN-wide repositories can be set up by aggregating remote clusters, or by making them cooperate.

Basic working principle

(Diagram: clients talk to a small 2-level cluster; every node runs an xrootd/cmsd pair. Such a cluster can hold up to 64 servers and behaves in a P2P-like way.)

Simple LAN clusters

(Diagrams of two topologies, every node running an xrootd/cmsd pair)
- Simple cluster: up to 64 data servers, 1-2 manager redirectors.
- Advanced cluster: up to 4096 data servers (3 levels) or 262K data servers (4 levels).
- Everything can have hot spares.
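A minimal sketch of how such a cluster is usually described: one configuration file shared by all nodes, with the role selected per host. The hostnames, port and paths below are hypothetical, not a verified production setup.

    # shared config for a small 2-level cluster (illustrative values only)
    all.export /data

    if redirector.example.org
      all.role manager          # this host is the redirector
    else
      all.role server           # everybody else is a plain data server
    fi

    # every node subscribes to the same manager
    all.manager redirector.example.org:3121

    # where data servers keep the files on their local disks
    oss.localroot /localdisk/xrootd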

Client and server

Both sides are very carefully crafted and heavily multithreaded.
- Server side: lightweight, promotes speed and scalability.
  - High level of internal parallelism + stateless.
  - Exploits OS features (e.g. async I/O, polling, selecting, sendfile).
  - Many, many speed- and scalability-oriented features.
  - Supports thousands of client connections per server.
  - No interactions with complicated things to do simple tasks.
  - Can easily be connected to MSS devices.
- Client side: handles the state of the communication.
  - Reconstructs everything to present it as a simple interface.
  - Fast data path.
  - Network pipeline coordination + read/write latency hiding.
  - Connection multiplexing (just one connection per process-server pair, no matter the number of file opens).
  - Intelligent server cluster crawling.
- Both server and client exploit multi-core CPUs natively.

Fault tolerance: what do we mean?

- Server side:
  - If servers go down, the overall functionality can be fully preserved (redundancy, MSS staging of replicas, …).
  - "Can" means that weird deployments can give it up, e.g. storing in a DB the physical endpoint server address for each file (saves ~1 ms per file open but gives up the fault tolerance).
- Client side (+ protocol):
  - The client crawls the server metacluster looking for data.
  - The application never notices errors; they are totally transparent, until they become fatal, i.e. when it becomes really impossible to reach a working endpoint to resume the activity.
- Typical tests (try it!):
  - Disconnect/reconnect network cables while processing.
  - Kill/restart any server.
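A minimal way to see the client-side recovery in action, assuming a small test cluster like the ones sketched earlier (hostnames and paths are placeholders):

    # On a client: start a long read through the redirector
    xrdcp root://redirector.example.org//data/bigfile.root /dev/null &

    # On one of the data servers currently serving the file: kill the daemons
    killall -9 xrootd cmsd

    # The client pauses, goes back to the redirector and resumes from another
    # server that can provide the file; it fails only if no endpoint is left.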

Available authentication protocols

- Password-based (pwd): either system or dedicated password file; a user account is not needed.
- GSI (gsi): handles GSI proxy certificates; VOMS support still to be inserted (Andreas, Gerri); no need for the Globus libraries (and super-fast!).
- Kerberos IV, V (krb4, krb5): ticket forwarding supported for krb5.
- Security tokens (used by ALICE): emphasis on ease of setup and performance.

Courtesy of Gerardo Ganis (CERN PH-SFT)
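For illustration, protocols like these are switched on in the server configuration; the exact options differ per protocol and version, so treat the lines below as a hypothetical sketch rather than a working security setup.

    # load the security framework and enable two authentication protocols
    xrootd.seclib libXrdSec.so
    sec.protocol krb5
    sec.protocol gsi

    # name-based authorization on top of authentication (placeholder ACL file)
    ofs.authorize
    acc.authdb /etc/xrootd/auth_db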

A nice example: the "many" paradigm

- Creating big clusters scales linearly in throughput and size, while keeping latency very low.
- We like the idea of a disk-based cache: the bigger (and faster), the better.
- So, why not use the disk of every WN in a dedicated farm?
  - 500 GB x 1000 WNs = 500 TB.
  - The additional CPU usage is quite low anyway.
  - And no SAN-like bottlenecks, just pure internal bus + disk speed.
- This can be used to set up a huge cache in front of an MSS: no need to buy a faster MSS, just lower the miss rate and raise the cache performance!
- Adopted at BNL for STAR (up to 6-7 PB online):
  - See Pavel Jakl's (excellent) thesis work.
  - They also optimize MSS access to nearly double the staging performance.
  - Quite similar to the PROOF approach to storage, but only for storage: PROOF is very different on the computing side.

WAN direct access: motivation

- WAN bandwidth is growing with time, but read/write analysis applications typically do not exploit it: they just copy files around, useful and not.
  - USING the WAN bandwidth is just a little more technologically difficult, but the technology can be encapsulated and used transparently.
  - The very high level of performance and robustness of direct data access cannot even be compared with what you get from managing a universe of replicas.
  - Example: some parts of the ALICE computing model rely on direct WAN access with XrdClient + XRootD. Not one glitch in production for a year, just power cuts.
- Efficient WAN read access needs to know in advance the chunks to read:
  - Statistics-based (e.g. the embedded sequential read-ahead).
  - Exact prefetching (the ROOT framework does it for ROOT files).
- Efficient WAN write access… just works. A VERY new feature, in beta testing as we speak.
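As an aside, the read-ahead and client cache mentioned here are tunable on the client side. The sketch below assumes the XrdClient/TXNetFile client of that era and the .rootrc setting names ROOT used for it; the values are purely illustrative.

    # ~/.rootrc sketch: client-side tuning for WAN reads via TXNetFile/XrdClient
    XNet.ReadAheadSize:   1048576       # sequential read-ahead, in bytes
    XNet.ReadCacheSize:   104857600     # client-side read cache, in bytes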

WAN direct access: hiding latency

(Diagram comparing, as bars of data-access vs. data-processing time, three strategies: pre-transferring the data "locally", plain remote access, and remote access with latency hiding. The last one is labelled "Interesting! Efficient, practical".)

Cluster globalization

- Up to now, xrootd clusters could be populated:
  - With xrdcp from an external machine.
  - By writing to the backend store (e.g. CASTOR/DPM/HPSS etc.).
- E.g. FTD in ALICE now uses the first method. It "works", but:
  - Load and resource problems: all the external traffic of the site goes through one machine close to the destination cluster.
  - If a file is missing or lost (because of a disk and/or catalogue screw-up), the job fails and manual intervention is needed.
  - With 10^7 online files, finding the source of a problem can be VERY tricky.

Virtual Mass Storage System: what's that?

- Basic idea:
  - A request for a missing file arrives at cluster X.
  - X assumes that the file ought to be there, and tries to get it from the collaborating clusters, starting with the fastest one.
  - The MPS (MSS interface) layer can do that in a very robust way.
  - Note that X itself is part of the game, and it is composed of many servers.
- In practice, each cluster treats the set of ALL the others as one very big online MSS. This is much easier than it seems.
- Slowly going into production for ALICE Tier-2s.
- NOTE: it is up to the computing model to decide where to send the jobs (which then read the files).

The Virtual MSS in ALICE

(Diagram: the site clusters, e.g. GSI, Prague, NIHAM, CERN and any other, each an xrootd/cmsd cluster, all subscribe to the ALICE global redirector, configured with "all.role meta manager" and "all.manager meta alirdr.cern.ch:1312".)
- Local clients work normally.
- Missing a file? Ask the global meta-manager and get it from any other collaborating cluster.
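Putting the pieces together, a rough sketch of the two sides of this setup. Only the two directives quoted in the slide come from the talk; the site-side lines and the staging script are hypothetical placeholders.

    # --- ALICE global redirector (directives as quoted above) ---
    all.role meta manager
    all.manager meta alirdr.cern.ch:1312

    # --- a site cluster subscribing to the global redirector (illustrative) ---
    all.role manager
    all.manager meta alirdr.cern.ch:1312

    # data servers: a missing file is fetched from the "virtual MSS", i.e. from
    # the other collaborating clusters, via a hypothetical staging script
    # built around xrdcp
    oss.stagecmd /opt/xrootd/bin/vmss_stage.sh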

The ALICE::CERN::SE July trick

- All of the ALICE conditions data is in ALICE::CERN::SE:
  - 5 machines running pure xrootd, and all the jobs access it, from everywhere.
  - There was the need to refurbish that old cluster (2 years old) with completely new hardware.
  - A VERY critical and VERY stable service: stop it and every job stops.
- This is a particular way to use the same pieces of the vMSS in order to phase out an old SE, moving its content to the new one:
  - It can be many TBs.
  - rsync cannot sync 3 servers into 5, nor fix the organization of the files.
- Advantages:
  - Files are spread evenly, so load balancing is effective.
  - The most-used files are typically fetched first.
  - No service downtime: the jobs did not stop. Server downtime was only 10-20 minutes.
  - The client-side fault tolerance made the jobs retry without trouble.

The ALICE::CERN::SE July trick

(Diagram: the new SE, starting empty, and the old CERN::ALICE::SE, full, both appear as xrootd/cmsd clusters under the ALICE global redirector, alongside the Grid Shuttle and other clients.)

Virtual MSS

- The mechanism is there, fully "boxed": the new setup does everything that is needed.
- A side effect: pointing an application to the "area" global redirector gives a complete, load-balanced, low-latency view of the whole repository.
  - An application using the "smart" WAN mode can just run. Probably a full-scale production/analysis won't do this now, but what about a small debug analysis on a laptop?
  - After all, HEP sometimes just copies everything, useful and not.
- I cannot say that in some years we will not have a more powerful WAN infrastructure, and using it to copy more useless data just looks ugly.
  - If a web browser can do it, why not a HEP application? It looks just a little more difficult. Better if used with a clear design in mind.

Scalla/Xrootd simple setup

- The xrd-installer setup has been refurbished to include all the items discussed here (originally developed by A. Peters and D. Feichtinger).
- Usage: same as before.
  - Download the installer script with wget.
  - Run it: xrd-installer --install --prefix <installation prefix>
- Then there are 2 small files containing the parameters:
  - One for the token authorization library (if used).
  - One for the rest (about 10 parameters).
  - Geeks can also use the internal xrootd config file template, and have access to really everything; this is needed only for very particular things.
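For concreteness, the two steps look roughly like this; the download URL (not spelled out in the slide) and the installation prefix are placeholders.

    # fetch the installer script (placeholder URL) and make it executable
    wget http://<setup-site>/xrd-installer
    chmod +x xrd-installer

    # install the bundle under a chosen prefix (placeholder path)
    ./xrd-installer --install --prefix /opt/xrootd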

Setup: important parameters

- VMSS_SOURCE: where this cluster tries to fetch files from when they are absent.
- LOCALPATHPFX: the prefix of the namespace which has to be made "facultative" (not needed by everybody).
- LOCALROOT: the local path (relative to the mounted disks) where all the data is put/kept by the xrootd server.
- OSSCACHE: if the server has more than one disk to store data on, list the mount points here.
- MANAGERHOST: the redirector's host name.
- SERVERONREDIRECTOR: is this machine both a server and a redirector?
- OFSLIB: the authorization plugin library to be used in xrootd.
- METAMGRHOST, METAMGRPORT: host and port number of the meta-manager (global redirector).
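A hypothetical filled-in parameter file for an ALICE-style site could look like the sketch below; every value is an illustrative placeholder except the global redirector address, which is the one quoted earlier in this talk.

    # sketch of the installer parameter file (illustrative values)
    VMSS_SOURCE="root://alirdr.cern.ch:1312/"
    LOCALPATHPFX="/alice/"
    LOCALROOT="/data/xrootd"
    OSSCACHE="/data1 /data2"
    MANAGERHOST="redirector.example.org"
    SERVERONREDIRECTOR="0"
    OFSLIB="libTokenAuthzOfs.so"     # token authorization plugin, if used
    METAMGRHOST="alirdr.cern.ch"
    METAMGRPORT="1312"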

Setup: start and stop

- xrd.sh: check the status of the daemons.
- xrd.sh -c: check the status of the daemons and start the ones which are currently not running (and make sure they are checked every 5 minutes); also do the log rotation.
- xrd.sh -f: force a restart of all daemons.
- xrd.sh -k: stop all daemons.
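One way to get the 5-minute periodic check is a cron entry like the hypothetical one below (the installer may already set up something equivalent; the install path is a placeholder).

    # check/restart the daemons every 5 minutes and rotate the logs
    */5 * * * * /opt/xrootd/scripts/xrd.sh -c > /dev/null 2>&1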

Conclusion

- We spoke about:
  - Base features and philosophy: maximum performance and robustness at the least cost.
  - The basic working principle.
  - WAN access and WAN-wide clusters; the WAN could even be a generic ADSL line (which has quite a high latency).
    - We like the idea of being able to access data efficiently and without pain, no matter from where, and this is quite difficult to achieve with typical distributed file systems.
  - WAN-wide clusters and inter-cluster cooperation.
  - Setup.
- A lot of work going on:
  - Assistance, integration, debugging, deployments.
  - New fixes/features.
  - Documentation to update on the website (!).

Acknowledgements

Old and new software collaborators:
- Andy Hanushevsky, Fabrizio Furano (client side), Alvise Dorigo
- ROOT: Fons Rademakers, Gerri Ganis (security), Bertrand Bellenot (MS Windows porting)
- Derek Feichtinger, Andreas Peters, Guenter Kickinger
- STAR/BNL: Pavel Jakl, Jerome Lauret
- Cornell: Gregory Sharp
- SLAC: Jacek Becla, Tofigh Azemoon, Wilko Kroeger, Bill Weeks
- Peter Elmer

Operational collaborators:
- BNL, CERN, CNAF, FZK, INFN, IN2P3, RAL, SLAC