GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 1 Networking in Under 30 Minutes ! Richard Hughes-Jones, University of Manchester

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 2
E-science core project MB-NG
- Project to investigate and pilot:
  - End-to-end traffic engineering and management over multiple administrative domains – MPLS in the core, DiffServ at the edges.
  - Managed bandwidth and Quality-of-Service provision. (Robin T)
  - High performance, high bandwidth data transfers. (Richard HJ)
  - Demonstrate end-to-end network services to CERN using the Dante EU-DataGrid link and to the US using DataTAG.
- Partners: Cisco, CLRC, Manchester, UCL, UKERNA, plus Lancaster and Southampton (IPv6)
- Status:
  - Project is running with people in post at Manchester and UCL.
  - Project tasks have been defined and detailed planning is in progress.
  - Kit list for the routers given to Cisco.
  - Test PC ordered.
  - UKERNA organising core network and access links – SJ4 10 Gbit upgrade.
  - Strong links with GGF.

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 3
MB – NG SuperJANET4 Development Network (22 Mar 02)
[Diagram: Gigabit Ethernet access, 2.5 Gbit POS access links and 2.5 Gbit POS core, MPLS administrative domains, dark fibre (SSE) and WorldCom POS. Sites at UCL, Manchester (MCC/MAN), RAL and ULCC connect through OSM-4GE-WAN-GBIC, OSM-1OC48-POS-SS and OC48/POS-SR/LR-SC modules to SJ4 Development C-PoPs at London, Warrington, Leeds and Reading, alongside the SuperJANET4 Production Network.]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 4
Defining Background Traffic
- Regular traffic – constant size packets, regularly spaced in time
- Poisson traffic – constant size, exponential spacing to form transient queues (see the sketch after this slide)
- IETF traffic mix – different sizes, with a different probability of each size being sent
- Play back of real traffic patterns generated from packet headers pre-recorded from suitable points of the production network. This might include:
  - Video conference traffic -> play back – rude/crude tools
  - UCL real conference playback tool
  - General traffic captured at the edge of a site, e.g. Manchester
  - Do tests with a generator to see what gets dropped; 0.5 Gbit typical peak at UCL
- Web-bursty traffic – web mirror – wget
- Need to be able to reproduce traffic statistically
  - In general UDP is best to understand the net
  - Consider UDP and TCP flows
- Need ToS / QoS to be set
- How to control:
  - Start / stop
  - Measure load as a function of time – links and queues
  - Start and end numbers
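To illustrate the Poisson background-traffic item above, here is a minimal sketch of a generator sending constant-size UDP packets with exponentially distributed inter-packet gaps. It is not one of the project tools (rude/crude, wget mirrors, etc.); the host, port, rate and packet size are placeholder assumptions.

```python
#!/usr/bin/env python
# Minimal sketch: Poisson background traffic - constant-size UDP packets,
# exponentially distributed inter-packet spacing (forms transient queues).
# Host, port, rate and packet size are illustrative placeholders.
import socket
import random
import time

def poisson_udp(host="192.168.0.1", port=5001, pkt_size=1400,
                mean_rate_pps=1000, duration_s=10):
    """Send pkt_size-byte UDP datagrams at an average of mean_rate_pps
    packets per second, with exponential (Poisson) spacing."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * pkt_size
    end = time.time() + duration_s
    sent = 0
    while time.time() < end:
        sock.sendto(payload, (host, port))
        sent += 1
        time.sleep(random.expovariate(mean_rate_pps))  # exponential gap, mean 1/rate
    sock.close()
    return sent

if __name__ == "__main__":
    print("packets sent:", poisson_udp())
```

Swapping the exponential gap for a constant one gives the "regular traffic" case from the same list.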

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 5
Defining the Measurements (10)
- UDP round-trip latency vs packet size & histograms (see the sketch after this slide)
  - Sum of dt/dl transfer rates
  - Router & stack performance
  - Indication of network load & switching / transport effects – spikes
- UDP 1-way delay
- UDP throughput vs packet size and transmit delay
  - Throughput behaviour
  - Offered vs achieved throughput
- UDP packet loss vs transmit rate and burst size
  - Loss rate; packet loss distribution as a function of time
  - Buffer sizes in the path & detect packet re-ordering
- UDP inter-frame jitter as a function of packet transmit spacing
  - Indication of network load
  - Behaviour of end-system NICs
- TCP round-trip latency vs message size & histograms
  - Sum of dt/dl transfer rates
  - Stack / protocol performance – detect packet-size dependencies
- TCP throughput vs message size and transmit delay
  - Throughput behaviour cf. UDP
  - Packet loss distribution as a function of time + re-transmit rate
- TCP throughput vs window size / TCP tuning
- TCP throughput vs number of streams
  - Stream throughput – benefits & effect on the network
  - Packet loss distribution as a function of time + re-transmit rate
- TCP protocol behaviour – tcptrace
Align metrics with GGF/IETF
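As a flavour of the first measurement in the list, UDP round-trip latency as a function of packet size, here is a hedged sketch in the spirit of UDPmon but not the actual tool. It assumes a UDP echo responder reflects each datagram back to the sender; the host and port are placeholders.

```python
#!/usr/bin/env python
# Sketch: UDP round-trip latency vs packet size.
# Assumes a UDP echo service reflects each datagram back to the sender;
# host and port are illustrative placeholders, not project values.
import socket
import time

def udp_rtt(host="192.168.0.1", port=5001, sizes=(64, 256, 512, 1024, 1472),
            repeats=100, timeout_s=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    results = {}
    for size in sizes:
        payload = b"\x55" * size
        rtts = []
        for _ in range(repeats):
            t0 = time.time()
            sock.sendto(payload, (host, port))
            try:
                sock.recvfrom(size + 64)                 # wait for the echo
                rtts.append((time.time() - t0) * 1e6)    # RTT in microseconds
            except socket.timeout:
                pass                                     # count as lost
        if rtts:
            results[size] = sum(rtts) / len(rtts)        # mean; keep rtts for histograms
    sock.close()
    return results

if __name__ == "__main__":
    for size, rtt in sorted(udp_rtt().items()):
        print("%5d bytes  mean RTT %.1f us" % (size, rtt))
```

Plotting mean RTT against packet size gives the dt/dl slope quoted on later slides; the per-packet RTT lists give the histograms.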

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 6
Defining the Measurements (11)
- TCP round-trip latency vs message size
  - Sum of dt/dl transfer rates
  - (Router performance)
  - Stack / protocol performance
  - Detect packet-size dependencies
- TCP round-trip histograms
  - Indication of network load
  - (Switching / transport effects – spikes)
  - Stack / protocol performance
- TCP throughput vs message size and transmit delay
  - Throughput behaviour cf. UDP
  - Offered vs achieved throughput
  - Packet loss distribution as a function of time + re-transmit rate
  - Loss as a function of packet rate, e.g. keep the data rate the same but change the packet size – multi-streams
- TCP throughput vs window size / TCP tuning (see the sketch after this slide)
- TCP throughput vs number of streams
  - Stream throughput – benefits
  - Packet loss distribution as a function of time + re-transmit rate
  - Effect on the network
- TCP protocol behaviour – tcptrace
  - What are the "burst" lengths?
  - Effect of routers / end-system NICs
- All this for WRED / weighted fair queueing: keep the data rate constant and change the packet size – check how well the routers do the queueing
Align metrics with GGF/IETF
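To illustrate the "TCP throughput vs window size" item, a minimal memory-to-memory sender that sets the socket send buffer (which bounds the TCP window) before streaming data. In the project such tests were done with iperf / TCPstream rather than this sketch; the receiver host, port and sizes are assumptions.

```python
#!/usr/bin/env python
# Sketch: TCP memory-to-memory throughput for a given socket buffer size.
# The socket buffer bounds the TCP window, so sweeping buf_size maps out
# "throughput vs window size". Host, port and sizes are placeholders;
# the receiver is assumed to accept the connection and discard the data.
import socket
import time

def tcp_throughput(host="192.168.0.1", port=5002, buf_size=256 * 1024,
                   total_bytes=100 * 1024 * 1024):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_size)
    sock.connect((host, port))
    chunk = b"\x00" * 65536
    sent = 0
    t0 = time.time()
    while sent < total_bytes:
        sock.sendall(chunk)
        sent += len(chunk)
    elapsed = time.time() - t0
    sock.close()
    return 8.0 * sent / elapsed / 1e6          # achieved rate in Mbit/s

if __name__ == "__main__":
    for window_kb in (64, 128, 256, 512, 1024):
        mbps = tcp_throughput(buf_size=window_kb * 1024)
        print("%4d kB window: %.0f Mbit/s" % (window_kb, mbps))
```

Note that Linux also caps SO_SNDBUF at the wmem_max sysctl, which is part of the "TCP tuning" referred to above.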

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 7
The EU DataTAG project
- EU Transatlantic Grid project.
- Status: well under way – people in post, link expected Jul 02
- Partners: CERN / PPARC / INFN / UvA; IN2P3 sub-contractor
- US partners: Caltech, ESnet, Abilene, PPDG, iVDGL …
- The main foci are:
  - Grid network research, including:
    - Provisioning (CERN)
    - Investigations of high performance data transport (PPARC)
    - End-to-end inter-domain QoS + bandwidth / network resource reservation
    - Bulk data transfer and monitoring (UvA)
  - Interoperability between Grids in Europe and the US: PPDG, GriPhyN, DTF, iVDGL (USA)

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 8
DataTAG Possible Configuration – multi-platform, multi-vendor
[Diagram: 2.5 Gbit PoS lambda between CERN (Geneva) and StarLight (Chicago). Each end has Cisco, Juniper and Alcatel routers, a Gigabit switch and a light switch (Cisco 6509 at StarLight), connecting to GEANT, UK SuperJANET4, IT GARR-B and NL SURFnet on the European side and to Abilene, ESnet, SLAC and Fermi on the US side.]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 9
The SuperMicro P4DP6 Motherboard
- Dual Xeon Prestonia (2 cpu/die)
- 400 MHz front-side bus
- Intel® E7500 chipset
- 6 PCI-X slots
- 4 independent PCI buses
- Can select:
  - 64 bit 66 MHz PCI
  - 100 MHz PCI-X
  - 133 MHz PCI-X
- Mbit Ethernet
- Adaptec AIC-7899W dual-channel SCSI
- UDMA/100 bus master / EIDE channels
  - data transfer rates of 100 MB/sec burst
- Collaboration:
  - Boston Ltd. (Watford) – SuperMicro motherboards, CPUs, Intel GE NICs
  - Brunel University – Peter Van Santen
  - University of Manchester – Richard Hughes-Jones

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 10
Latency & Throughput: Intel Pro/1000 on P4DP6
- Motherboard: SuperMicro P4DP6; chipset: Intel E7500 (Plumas)
- CPU: Dual Xeon Prestonia (2 cpu/die) 2.2 GHz; Slot 4: PCI, 64 bit, 66 MHz
- RedHat 7.2 kernel
- Latency high but smooth – indicates interrupt coalescence
- Slope us/byte; expect: PCI + GigE 0.008 + PCI us/byte
- Max throughput 950 Mbit/s
- Some throughput drop for packets > 1000 bytes
(tests_Boston.ppt)
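The expected per-byte slope can be checked directly from the raw link speeds quoted on this slide (GigE at 1 Gbit/s, PCI at 64 bit / 66 MHz); a small worked sketch of that arithmetic, as an illustration only:

```python
#!/usr/bin/env python
# Expected transfer-time slope (us per byte) from raw link speeds.
# GigE: 1 Gbit/s; PCI: 64 bit x 66 MHz (values taken from the slide above).

gige_bytes_per_s = 1e9 / 8          # 125 MB/s
pci_bytes_per_s = 8 * 66e6          # 64-bit bus at 66 MHz, about 528 MB/s

gige_slope = 1e6 / gige_bytes_per_s # 0.008 us/byte, as quoted on the slide
pci_slope = 1e6 / pci_bytes_per_s   # about 0.0019 us/byte

# One-way packet path through the end systems: PCI (send) + GigE + PCI (receive)
print("GigE slope           %.4f us/byte" % gige_slope)
print("PCI  slope           %.4f us/byte" % pci_slope)
print("PCI + GigE + PCI     %.4f us/byte" % (2 * pci_slope + gige_slope))
```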

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 11
New External Connectivity
- 6 * 155 Mbit links
- 2.5 Gbit line installed
- IP commodity peer in London
- Research traffic over the 2.5 Gbit line
- Peer in Hudson St.
- 622 Mbit to ESnet
- 622 Mbit to Abilene

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 12
Connectivity to Europe: Geant
- Start mid November 2001
- UKERNA switched off TEN Dec 2001

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 13
Connectivity to Europe
[ICFAMON plot from DL to CERN for 18th Feb to 3rd Mar 2002]
- UK Dante access link: 2.5 Gbit POS
- Remember 19th Oct to 1st Nov: Mbit access link overloaded, sustained rate 130 Mbit

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 14
Monitoring: US Traffic
[UKERNA traffic data, kbit/s. Blue: traffic from the US; maroon: traffic to the US. 7-day periods, 1-hour averages; weekend/weekday panels before and after the change.]
- 14 Jan 2002: (800 Mbit/s) peak, 86% of the total 930 Mbit
- 17 Jan 2002: peering altered 22 Jan
- 22 Jan 2002: weekday peak 175 Mbit/s

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 15
Monitoring: US Traffic
[UKERNA traffic data, kbit/s. Blue: traffic from the US; maroon: traffic to the US. 10-minute averages and 7-day, 1-hour averages.]
- 7 Dec 2001: (900 kbit/s)
- 29 Jan 2002: (175 kbit/s); peak is 88% of the total BW of 930 Mbit

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 16
DataGrid Network Monitoring
- Several tools in use – plugged into a coherent structure:
  - PingER, RIPE one-way times, iperf, UDPmon, rTPL, GridFTP, and the NWS prediction engine
- Continuous tests for the last few months to selected sites:
  - DL, Man, RL, UCL, CERN, Lyon, Bologna, SARA, NBI, SLAC …
- The aims of monitoring for the Grid:
  - to inform Grid applications, via the middleware, of the current status of the network – input for the resource broker and scheduling
  - to identify fault conditions in the operation of the Grid
  - to understand the instantaneous, day-to-day, and month-by-month behaviour of the network – provide advice on configuration etc.
- Network information published in an LDAP schema – new self-defining
- Cost function in development – collaboration with WP1 & WP2
- Will be used by UK GridPP and e-science sites and non-HEP WPs
- Links to the US

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 17
Network Monitoring Architecture (Robin Tasker)
[Diagram: local network monitoring tools (PingER with the RIPE TTB, iperf, rTPL, NWS, GridFTP, etc.) feed a store & analysis of data. A monitor process pushes metrics, and a backend LDAP script fetches metrics, into a local LDAP server. Access to current and historic data, metrics and metric forecasts is via the Web (the WP7 NM pages); Grid applications access the monitoring metrics and the location of monitoring data via the LDAP schema.]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 18
Network Monitoring Components
[Diagram: each measurement tool (Ping, Netmon, UDPmon, iPerf, RIPE) is driven by a cron script under a control file and produces plots, tables and raw LDAP entries. Clients – the WEB display, analysis, the Grid broker and predictions – reach the data through a web interface, a scheduler and tool front-ends.]
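As a flavour of one of these cron-driven components, a minimal sketch that runs ping against a list of targets, parses the Linux summary line and appends a timestamped record for later publication. The target hosts, output path and parsing are assumptions; the real WP7 scripts and the LDAP publishing step are not reproduced here.

```python
#!/usr/bin/env python
# Sketch of a cron-driven monitoring probe: run ping to each target,
# parse the Linux "rtt min/avg/max/mdev" summary line and append a
# timestamped record to a raw metrics file. Targets and the output
# path are illustrative placeholders, not the WP7 configuration.
import subprocess
import time

TARGETS = ["cern.example.org", "sara.example.org"]   # placeholder hosts
RAW_FILE = "/var/tmp/ping_metrics.dat"                # placeholder path

def ping_avg_rtt(host, count=10):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "rtt min/avg/max" in line:
            # e.g. "rtt min/avg/max/mdev = 11.1/12.3/15.0/0.9 ms"
            return float(line.split("=")[1].split("/")[1])
    return None                                       # host unreachable

if __name__ == "__main__":
    now = int(time.time())
    with open(RAW_FILE, "a") as f:
        for host in TARGETS:
            f.write("%d %s %s\n" % (now, host, ping_avg_rtt(host)))
```

A separate backend script, as in the architecture slide, would then load these raw records into the local LDAP server and build the plots and tables.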

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 19 Network Monitoring

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 20 Network Monitoring: Ping

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 21 Network Monitoring: Iperf (TCP)

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 22
High Performance UDP Man – RAL, Gigabit interface
- Latency 5.2 ms
- Slope us/byte; for 2 PCs expect: PCI + GigE 0.008 + PCI, total us/byte; 7 routers and extra links (3 GigE, G PoS, Mbit), 0.012 total
- Structure seen: period 88 bytes, variation 150 – 190 us
- Max throughput 550 Mbit/s
- Some throughput drop for packets < 20 us spacing
- Manc: 64 bit 66 MHz PCI, RedHat 7.1 kernel, NIC: NetGear
- RAL: RedHat 7.1 kernel, NIC: Intel Pro/1000

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 23
High Performance UDP Man – UvA, Gigabit interface
- Latency ms
- Slope us/byte; for 2 PCs expect: PCI + GigE 0.008 + PCI, total us/byte; n routers, extra links ??
- No structure seen
- Throughput 825 Mbit/s at 1400 bytes; 725 Mbit/s at 1200 bytes
- Some throughput drop for packets < 20 us spacing
- Manc: motherboard SuperMicro 370DLE, chipset ServerWorks III LE, CPU PIII 800 MHz, PCI 64 bit 66 MHz, RedHat 7.1 kernel, NIC: NetGear
- UvA: RedHat 7.1 kernel?, NIC: NetGear?
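The throughput-vs-spacing behaviour on the last two slides comes from UDPmon-style tests: send a burst of UDP frames at a fixed inter-frame spacing and see what arrives. A minimal two-ended sketch of that idea follows; it is not UDPmon itself, and the host, port, frame count and spacing values are placeholders.

```python
#!/usr/bin/env python
# Sketch of a UDPmon-style test: the sender emits a burst of fixed-size UDP
# frames at a chosen inter-frame spacing; the receiver counts what arrives
# and works out achieved throughput and loss. Host/port/sizes are placeholders.
import socket
import time

def send_burst(host="192.168.0.1", port=5001, n_frames=300,
               frame_size=1400, spacing_us=20):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * frame_size
    gap = spacing_us / 1e6
    for i in range(n_frames):
        # first 4 bytes carry a sequence number for re-ordering checks
        sock.sendto(i.to_bytes(4, "big") + payload[4:], (host, port))
        time.sleep(gap)                       # crude pacing; real tools busy-wait
    sock.close()

def receive_burst(port=5001, n_expected=300, frame_size=1400, timeout_s=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout_s)
    got, first, last = 0, None, None
    try:
        while got < n_expected:
            sock.recvfrom(frame_size)
            now = time.time()
            first = first or now
            last = now
            got += 1
    except socket.timeout:
        pass
    sock.close()
    if got > 1:
        mbps = 8.0 * got * frame_size / (last - first) / 1e6
        print("received %d/%d frames, %.0f Mbit/s" % (got, n_expected, mbps))
```

Sweeping spacing_us maps achieved rate and loss against the offered rate, which is how the "drop for packets < 20 us spacing" observations above were made.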

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 24
iperf TCP & UDP throughput MAN–SARA, from 20 Oct 01
[Plots: iperf TCP throughput (Mbit/s) UCL – SARA with a byte buffer, with forecast; UDPmon throughput (Mbit/s) Man – SARA, 300 * 1400-byte frames]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 25
iperf & PingER UK–Bologna, from 20 Oct 01
[Plots: iperf throughput UCL – Bologna with a byte buffer, forecast in green; PingER rtt (ms) DL – Bologna with 1000-byte packets, with forecast]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 26
iperf throughput UCL–SARA, from 1 Nov 01 – Geant operational
[Plots: Geant enabled, routing stable. Iperf throughput (Mbit/s) UCL – SARA with a byte buffer; UDPmon loss and throughput (Mbit/s) MAN – SARA]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 27
Iperf mem–mem vs file copy disk-to-disk (Les Cottrell, SLAC)
[Plot: iperf TCP (Mbit/s) against file copy disk-to-disk rates. Fast Ethernet and OC3 cases are disk limited; over 60 Mbit/s, iperf >> file copy]

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 28
Don't Forget Involvement with:
- GGF
- US Grids: PPDG, iVDGL
- UKQCD, UKDMC (dark matter), MINOS
- AstroGRID
- AccessGRID
- E-science centres
- Optical "lambda switching" projects
- Collaborations with UKERNA, Dante, Terena …

GridPP Collaboration Meeting May 2002 R. Hughes-Jones Manchester 29
More Information – Some URLs
- PPNCG home page with Stop Press:  and
- DataGrid WP7 Networking:
- DataGrid WP7 EDG Monitoring:
- IEPM PingER home site:
- IEPM-BW site: