1 ATLAS-VALENCIA COORDINATION MEETING, 9th May 2008. José SALT. TIER2 Report. Overview: 1.- Present Status of the ES-ATLAS-T2; 2.- Computing-Analysis issues.

2 Resources: Present Status and Growing Profile. Ramp-up of the Tier-2 resources (after the LHC rescheduling); numbers are cumulative. Evolution of ALL ATLAS T-2 resources according to the estimates made by the ATLAS CB (October 2006): strong increase of resources. The Spanish ATLAS T-2 assumes a contribution of 5% to the whole effort. Present resources of the Spanish ATLAS T-2 (May'08). (Table: CPU (kSI2k) and Disk (TB) for IFAE, UAM, IFIC and TOTAL.)

3 New acquisitions are in progress to reach the pledged resources. (Table: present and foreseen/pledged 2008 CPU (kSI2k) and Disk (TB) for IFAE, UAM, IFIC and the whole T-2.)
IFAE:
–CPU: equipment already at PIC
–Disk: 2 SUN Thumpers
IFIC:
–CPU & Disk in purchase phase
UAM:
–CPU & Disk in purchase phase
Accounting values are normalized according to WLCG recommendations.

4 Human Resources of the Spanish ATLAS TIER-2 ((*) means paid by the Tier-2 project).
UAM:
–José del Peso (PL)
–Juanjo Pardo (*)
–Luís Muñoz
–Pablo Fernández (*T.MEC)
IFAE:
–Andreu Pacheco (PL)
–Jordi Nadal (*)
–Carlos Borrego (*) (<-)
–Marc Campos
–Hegoi Garaitogandia (NIKHEF)
IFIC:
–José Salt (PL and Coordinator)
–Javier Sánchez (Tier-2 Operation Coordination)
–Santiago González (Tier-3)
–Alvaro Fernández (EGEE)
–Mohammed Kaci (*T.MEC)
–Luis March (*)
–Farida Fassi (FR Tier-1)
–Gabriel Amorós (EGEE)
–Alejandro Lamas (*)
–Roger Vives (Ph.D. student)
–Elena Oliver (Ph.D. student)
–Miguel Villaplana (Ph.D. student)
Total: 14 FTEs.

5 Upgrade of the IFIC Computing Room (new area):
–Increase the surface from 90 m² to 150 m²: done
–Upgrade the UPS from 50 kVA to 250 kVA: to be delivered
–Install 70 lines of 16 A (3 lines/rack): pending
–Increase the power available to the building (20 kV electrical transformer, diesel generator, low-voltage distribution, ...): public tendering in progress
–Change the air conditioning (air supply through the technical floor): waiting for the economic proposal
–New racks: buying process
–Redistribution of all machines located at the Computer Centre: pending
Execution is in progress and it will be ready by Summer 2008.

6 End Users Interaction Scheme (diagram relating the TIER-2 and TIER-3 activities): Infrastructure Maintenance, Production of Simulated Data (Public), Production of Simulated Data (Private), User Support, Data Management, Distributed Analysis, Interactive Analysis, Local Batch Facility.

7 MC Production. MC Production in 2008: since January 2008 ATLAS has migrated to the PanDA executor; the production efficiency has been positively affected (the average efficiency was around 50% and is now higher). The T2-ES contribution was 1.2% in the first quarter of 2008 (essentially due to 2 downtimes of PIC). MC Production in 2006 and 2007: the production in T2-ES follows the same trend as LCG/EGEE (good performance of the ES-ATLAS-T2); the ES-ATLAS-T2 average contribution to the total data production in LCG/EGEE was 2.8% over that period (taking into account that 250 centres/institutes participate, 10 of them Tier-1s).

8 Activity:
–To organize the ATLAS User Support in GRID Computing
–Site User Support started at the end of 2006
–To solve questions, doubts and getting-started issues
–To provide periodic tutorials / hands-on sessions on software tools for analysis, simulation, etc.
–Maintenance of the related Twiki
Since 17-April-08: transfer of responsibility for the ATLAS Grid Computing user support at IFIC (Farida Fassi, Mohammed Kaci, Elena Oliver, Miguel Villaplana).

9 Data Movement Management (DMM). Distributed data management exercises at the Tier-2, with cosmic-ray data taking (Tier-1 => Tier-2):
1) End Sept. 2007: M4 exercise (ESD & CBNT)
2) End Oct. – beginning of Nov. 2007: M5 exercise (ESD & CBNT)
3) 12 Feb. 2008: FDR-1 (fdr08_run1, AOD)
4) 25 Feb. 2008: CCRC08 (ccrc08_t0, AOD)
(Plots: data transfer (GB/day) and number of completed file transfers per day for the CCRC08, M5 and IFIC-test periods; DDM activity at the Tier-2 in February 2008.)

10 Data Movement Management (DMM). Data stored at the Spanish ATLAS Tier-2 (plots of data at IFIC, UAM and IFAE). (Table: total data (TB), AODs (TB) and AOD contribution per site: IFAE, UAM, IFICDISK.) More info:

11 Datasets requested at IFIC:
–Salvador Marti (silicio): misal1_csc PythiaZmumu.digit.RDO.v _tid
–Salvador Marti (silicio): misal1_csc pythia_minbias.digit.RDO.v _tid
–Salvador Marti (silicio): misal1_csc T1_McAtNlo_Jimmy.digit.RDO.v _tid
–Salvador Marti (silicio): misal1_csc T1_McAtNlo_Jimmy.digit.RDO.v _tid
–Susana Cabrera (Tier-3): trig1_misal1_mc T1_McAtNlo_Jimmy.recon.AOD.v _tid
–Salvador Marti (silicio): misal1_csc PythiaH130zz4l.digit.RDO.v _tid
–Susana Cabrera (Tier-3): trig1_misal1_csc J0_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_csc J1_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_mc J2_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_mc J3_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_csc J4_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_mc J5_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_mc J6_pythia_jetjet.recon.AOD.v _tid, trig1_misal1_csc J7_pythia_jetjet.recon.AOD.v _tid
–Susana Cabrera (Tier-3): user.top.TopView121302_MuidTau1p3p.trig1_misal1_mc { TTbar_FullHad_McAtNlo_Jimmyv, AcerMC_Wtv, AcerMC_schanv, AcerMC_tchanv, AlpgenJimmyWenuNp2_pt20_filt3jetv, AlpgenJimmyWenuNp3_pt20_filt3jetv, AlpgenJimmyWenuNp4_pt20_filt3jetv, AlpgenJimmyWenuNp5_pt20_filt3jetv, AlpgenJimmyWmunuNp2_pt20_filt3jetv, AlpgenJimmyWmunuNp3_pt20_filt3jetv, AlpgenJimmyWmunuNp4_pt20_filt3jetv }

12 Datasets requested at IFIC (continued):
–user.top.TopView121302_MuidTau1p3p.trig1_misal1_mc { AlpgenJimmyWmunuNp5_pt20_filt3jetv, AlpgenJimmyWtaunuNp2_pt20_filt3jetv, AlpgenJimmyWtaunuNp3_pt20_filt3jetv, AlpgenJimmyWtaunuNp4_pt20_filt3jetv, AlpgenJimmyWtaunuNp5_pt20_filt3jetv }
–user.top.TopView121302_MuidTau1p3p.trig1_misal1_csc { AcerMCbbWv, WZ_Herwigv }
–trig1_misal1_mc { AlpgenJimmyZeeNp2LooseCut.recon.AOD.v, AlpgenJimmyZeeNp3LooseCut.recon.AOD.v, AlpgenJimmyZeeNp4LooseCut.recon.AOD.v, AlpgenJimmyZeeNp5LooseCut.recon.AOD.v, AlpgenJimmyZmumuNp2LooseCut.recon.AOD.v, AlpgenJimmyZmumuNp3LooseCut.recon.AOD.v, AlpgenJimmyZmumuNp4LooseCut.recon.AOD.v, AlpgenJimmyZmumuNp5LooseCut.recon.AOD.v, AlpgenJimmyZtautauNp2LooseCut.recon.AOD.v, AlpgenJimmyZtautauNp3LooseCut.recon.AOD.v, AlpgenJimmyZtautauNp4LooseCut.recon.AOD.v, AlpgenJimmyZtautauNp5LooseCut.recon.AOD.v }
–Susana Cabrera (Tier-3): trig1_misal1_mc { AlpgenJimmyWbbNp1.recon.AOD.v, AlpgenJimmyWbbNp2.recon.AOD.v, AlpgenJimmyWbbNp3.recon.AOD.v, AcerMC_Wt.merge.AOD.v _tid, AcerMC_schan.merge.AOD.v, AcerMC_tchan.merge.AOD.v }
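Requests like those listed above were typically served by replicating or downloading the datasets with the DQ2 end-user tools. The sketch below is hypothetical: it assumes the dq2-ls and dq2-get command-line clients are installed and a valid grid proxy already exists, the dataset pattern is a placeholder, and option behaviour varied between client versions, so treat it as an illustration rather than a verified recipe.

```python
# Hypothetical sketch: assumes the DQ2 end-user tools (dq2-ls, dq2-get) are in PATH
# and a grid proxy has been created beforehand; the dataset pattern is a placeholder.
import os
import subprocess
import sys

def run(cmd):
    """Run a command, echo it, and abort on failure."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit("command failed: " + result.stderr.strip())
    return result.stdout

def fetch_datasets(pattern, destination):
    """Find datasets matching a pattern and download them to local storage."""
    os.makedirs(destination, exist_ok=True)
    os.chdir(destination)                 # dq2-get writes into the current directory
    names = [l.strip() for l in run(["dq2-ls", pattern]).splitlines() if l.strip()]
    for dataset in names:
        run(["dq2-get", dataset])

if __name__ == "__main__":
    # Placeholder pattern, standing in for one of the requests listed above.
    fetch_datasets("user.top.TopView*AcerMC_Wt*", "/tmp/requested_datasets")
```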

13 Networking and Data Transfer. The network is provided by the Spanish NREN, RedIRIS:
–Connection at 1 Gbps to the University backbone
–10 Gbps among the RedIRIS POPs in Valencia, Madrid and Catalunya
ATLAS collaboration:
–More than 9 PetaBytes (> 10 million files) transferred among Tiers in the last 6 months
–The ATLAS link requirement between the Tier-1 and the Tier-2s is 50 MBytes/s (400 Mbps) in a real data-taking scenario
Data transfer between the Spanish Tier-1 and Tier-2: we reached 250 Mbps using gridftp from T1 to T2 (CCRC'08). Data transfer from CASTOR (IFIC) for a TICAL private production: we reached 720 Mbps (plateau) over about 20 hours (4th March 08). High rates are possible.
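The bandwidth figures above are easy to cross-check with a unit conversion; the generic sketch below (an illustration, not code used by the Tier-2) converts the 50 MBytes/s requirement into Mbps and estimates the volume moved by a roughly 20-hour transfer sustained at 720 Mbps.

```python
# Generic back-of-the-envelope check of the numbers quoted on this slide.
def mbytes_per_s_to_mbps(mbytes_per_s):
    """Convert MBytes/s to Mbit/s (1 byte = 8 bits)."""
    return mbytes_per_s * 8

def volume_tb(rate_mbps, hours):
    """Data volume in TB moved at a sustained rate for a given time."""
    bits = rate_mbps * 1e6 * hours * 3600
    return bits / 8 / 1e12

if __name__ == "__main__":
    print(f"T1->T2 requirement: {mbytes_per_s_to_mbps(50):.0f} Mbps")      # 400 Mbps
    print(f"~20 h at 720 Mbps : {volume_tb(720, 20):.1f} TB transferred")  # ~6.5 TB
```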

14 Analysis Use Case #1: AANT ntuple generation from a given dataset to analyse events with semileptonic channels for a top-polarization study (Roger):
–Challenge: to process a very large number of events
–Distributed Analysis is needed (see the splitting sketch after this slide)
–GANGA and TopView are used
–Several sites in the game: IFIC (mainly), Lyon, FZK, CYF
–Good performance of IFIC
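The slide's exact event count is not preserved in this transcript, so the numbers below are placeholders; the sketch only illustrates the kind of estimate behind the splitting step: how many grid subjobs a sample of a given size turns into and how they spread over the sites named above.

```python
# Illustration only: event counts and per-subjob size are assumptions,
# not the figures quoted in the talk.
import math

def plan_subjobs(total_events, events_per_subjob, sites):
    """Split an analysis into grid subjobs and distribute them round-robin over sites."""
    n_subjobs = math.ceil(total_events / events_per_subjob)
    plan = {site: 0 for site in sites}
    for i in range(n_subjobs):
        plan[sites[i % len(sites)]] += 1
    return plan

if __name__ == "__main__":
    sites = ["IFIC", "Lyon", "FZK", "CYF"]   # sites named on the slide
    print(plan_subjobs(total_events=2_000_000, events_per_subjob=50_000, sites=sites))
```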

15 Analysis Use Case #2: analysis of AODs for the process Z_H -> t tbar (Little Higgs); physics analysis done by Miguel and Elena:
–4 ROOT files x 5000 events generated locally
–30 AOD files x 250 reconstruction events on the GRID (20000 evts)
–The whole process takes 5 days per AOD file
–Strategy: test with a small number of events; once problems are fixed, jobs are sent to Tier-2s of 12 countries
–We save time: 400 days running locally vs. 14 days using the GRID (see the estimate sketched after this slide)
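A minimal sketch of the wall-time estimate behind the comparison above: the per-file time is the slide's 5 days, while the number of concurrently running grid jobs is an assumption, so the output illustrates the scaling rather than reproducing the exact 400-day / 14-day figures.

```python
# Rough sequential-vs-grid estimate; the concurrency level is an assumption.
import math

def local_days(n_files, days_per_file):
    """Files processed one after another on a single local machine."""
    return n_files * days_per_file

def grid_days(n_files, days_per_file, concurrent_jobs):
    """Files processed in parallel waves of grid jobs."""
    waves = math.ceil(n_files / concurrent_jobs)
    return waves * days_per_file

if __name__ == "__main__":
    print("local:", local_days(30, 5), "days")                     # 150 days
    print("grid :", grid_days(30, 5, concurrent_jobs=15), "days")  # 10 days
```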

16 Last grid use (WLCG Workshop).

17 Tier-2 Availability & Reliability Report. Monthly reports for January-08, February-08 and March-08:
–Very good percentages; well placed in the ranking (about 60 Tier-2s in the list)
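For context, the availability and reliability percentages in these monthly reports are time ratios derived from the site functional tests; a simplified sketch of the usual WLCG-style definitions (ignoring periods of unknown test status) is given below, with an invented example month.

```python
# Simplified WLCG-style site metrics; periods of unknown test status are ignored here.
def availability(hours_up, hours_total):
    """Fraction of all time during which the site passed the functional tests."""
    return hours_up / hours_total

def reliability(hours_up, hours_total, hours_scheduled_down):
    """Like availability, but scheduled downtime is not held against the site."""
    return hours_up / (hours_total - hours_scheduled_down)

if __name__ == "__main__":
    # Example month (720 h) with 36 h of failures and 24 h of scheduled downtime.
    print(f"availability = {availability(660, 720):.1%}")     # 91.7%
    print(f"reliability  = {reliability(660, 720, 24):.1%}")  # 94.8%
```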

18 FDR & CCRC’ GB/s out of CERN Weekly Throughput

19 ADC Operations Shifts. There are 3 people from the Spanish Tier-2 (out of 25) contributing to the ATLAS Distributed Computing operations shifts:
–L. March (IFIC), J. Nadal (IFAE) and M. Campos (IFAE)
Shifts involve different activities:
–MC production, DDM services, Distributed Analysis and Tier-0
These coordinated shifts started at the end of January 2008, around one month ago, and have been official shifts for ATLAS computing since then.

20 ADC Operations Shifts. 2 shifters/day: one from the USA and one from Europe. Each shift covers 15 hours/day, spanning the USA/Canada and Europe time zones (shifts in Europe: from 9 am to 5 pm). Coverage will be increased to 24 hours/day with shifters from Asia and the Far East. More info:

21 Tier-1 re-processing. L. March has accepted to be the liaison between the ATLAS offline software commissioning and the Tier-1s for the re-processing of cosmic-ray data. This started during the last ATLAS Software Week (15-29 Feb. 2008). Work and responsibilities are under study. More info:

22 Computing-Analysis Issues.
–To identify the datasets and datastreams: updates due to the 10 TeV running; Early Physics (at CERN)
–User space on the GRID: under discussion in the ATLAS Computing community; the T-2 is a Collaboration infrastructure, so user disk space will be constrained by other concurrent activities (MC production); to put in place IFIC local resources, discussing the funding model from the different ATLAS projects (IFIC)
–To extract a few analysis 'workflows' in order not to spend too much time supporting the users
–To identify, among the different analyses, which level of priority can be assigned to the resources

23 Computing-Analysis Issues.
User Support:
–To provide problem-solution statistics
–To establish limits on the kinds of problems to solve
Computing-Physics Analysis Board:
–Necessary to follow up the open issues and action lists in the inter-meeting period
–Composition: 2-3 Computing/Analysis members
–Establish coordination with the UAM and IFAE groups

24 BACKUP SLIDES

25 Data Movement Management (DMM). Data Management is one of the main activities inside the Spanish federated ATLAS Tier-2. AOD physics data are going to be stored at these (geographically distributed) sites.
Data Management main activities:
–Data monitoring at the federated Tier-2
–Cleaning of old and unnecessary data
–Inconsistency checks and clean-up (catalogue entries and/or file-size mismatches); see the sketch after this slide
–Data replication to the Tier-2 for Tier-2 users (using the DQ2 client and subscriptions)
Coordination with the Tier-1:
–Bi-weekly Tier-1/Tier-2 coordination meetings where Tier-1/Tier-2 data management issues are discussed
–There is a collaboration project between Tier-1 and Tier-2 to monitor some ATLAS/Grid services at both sites
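As referenced in the list above, a minimal, hypothetical sketch of the catalogue-versus-storage consistency check follows. It assumes two pre-made dumps (one from the catalogue, one from the storage system), each listing file name and size; this is an illustration of the idea, not the actual IFIC tooling.

```python
# Hypothetical consistency check between a catalogue dump and a storage dump.
# Each dump is a text file with lines "<logical_file_name> <size_in_bytes>".
def load_dump(path):
    entries = {}
    with open(path) as f:
        for line in f:
            name, size = line.split()
            entries[name] = int(size)
    return entries

def compare(catalogue, storage):
    lost = sorted(set(catalogue) - set(storage))   # catalogued but missing on disk
    dark = sorted(set(storage) - set(catalogue))   # on disk but not catalogued
    size_mismatch = sorted(
        name for name in set(catalogue) & set(storage)
        if catalogue[name] != storage[name]
    )
    return lost, dark, size_mismatch

if __name__ == "__main__":
    lost, dark, bad = compare(load_dump("catalogue.txt"), load_dump("storage.txt"))
    print(f"lost files: {len(lost)}, dark data: {len(dark)}, size mismatches: {len(bad)}")
```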

26 PROJECT MANAGEMENT (PM). Main objective: to achieve a more federative structure.
Organization & management of the project is the responsibility of the subproject leaders (SLs):
–The SLs constitute the Project Management Board: A. Pacheco (IFAE), J. del Peso (UAM) and J. Salt (IFIC)
–The PMB chairman is the Coordinator of the Project: J. Salt (Tier-2 Coordinator)
–PMB meeting every 4-5 weeks
Tier-2 Operation Meetings: virtual, biweekly.
Presential Tier-2 meetings: every 6 months
–February'06: Valencia
–October'06: Madrid
–May'07: Barcelona
–November'07: Valencia, with a previous T1-T2 Coordination meeting
–May'08: foreseen in Madrid

27 Distributed Analysis System (DAS). Spanish physicists are doing physics analysis using GANGA on data distributed around the world. This tool has been installed in our Tier-2 infrastructure. GANGA is an easy-to-use front-end for job definition, management and submission. Users interact via:
–its own Python shell (command line)
–a Graphical User Interface (GUI)
In our case the jobs are sent to the LCG/EGEE Grid flavour. We are doing performance tests of:
–the LCG Resource Broker (RB)
–the gLite Workload Management System (WMS), the new RB from EGEE
A minimal example of the command-line usage is sketched after this slide.
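As mentioned above, a minimal job-definition sketch for the GANGA Python shell follows. It is written against the GANGA/Athena plugin interface as commonly used around 2008 (Job, Athena, DQ2Dataset, DQ2JobSplitter, LCG backend); class and attribute names changed between GANGA releases, and the options file and dataset below are placeholders, so treat it as an assumption-laden sketch rather than a verified recipe.

```python
# To be typed inside the GANGA Python shell (with the ATLAS plugins configured);
# class/attribute names may differ between GANGA releases, and the job options
# file and dataset name are placeholders.
j = Job()
j.name = 'aod-analysis-test'

j.application = Athena()                      # run an Athena analysis job
j.application.option_file = 'MyAnalysis_jobOptions.py'
j.application.prepare()                       # pack the user area for the grid

j.inputdata = DQ2Dataset()                    # input taken from a DQ2 dataset
j.inputdata.dataset = 'trig1_misal1_mc.SomeSample.recon.AOD'   # placeholder name

j.splitter = DQ2JobSplitter()                 # one subjob per group of input files
j.splitter.numsubjobs = 20

j.backend = LCG()                             # submit through the LCG/EGEE grid
j.submit()

# Afterwards: jobs() lists the jobs, j.status shows progress, j.peek() inspects output.
```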

28 Data Management More info:

29 Data Movement Management (DMM). Results from FDR-1 (12 Feb. 2008). (Table: Tier-2 | datasets | total files in datasets; IFAE 2652, IFICDISK 4487, UAM 2652.) More info: