
1 - Flow Stats Module -- Control
John DeHart and James Moscola (Original FastPath Design)
August 2008

2 - Flow Stats Module – John DeHart and James Moscola
SPP V1 LC Egress with 1x10Gb/s Tx
[Block diagram: packets arrive from the SWITCH via MSF/RBUF (Rx1, Rx2) and pass through Key Extract, Lookup (TCAM), and Hdr Format; a Port Splitter feeds queue managers QM0–QM3 over scratch rings. Flow Stats1 and Flow Stats2 sit between the QMs and the 1x10G Tx1/Tx2 MEs (MSF/TBUF out to the RTM). Flow Stats uses SRAM1–SRAM3 and an SRAM freelist, and sends archive records to the XScale over an SRAM ring; the XScale also services the NAT miss and NAT packet-return scratch rings. A Stats ME (1 ME) is attached to SRAM3.]

3 - Flow Stats Module – John DeHart and James Moscola
SPP V1 LC Egress with 10x1Gb/s Tx
[Same block diagram as the previous slide, except the 1x10G Tx pair is replaced by 5x1G Tx1 (ports P0–P4) and 5x1G Tx2 (ports P5–P9).]

4 - Flow Stats Module – John DeHart and James Moscola
Overview of Flow Stats
Main functions
» Uniquely identify flows based on a 6-tuple
   Hash the header values to get an index into a table of records
» Maintain packet and byte counts for each flow
   Compare the packet header with the header values in the record, and increment the counts if they match
   Otherwise, follow the hash chain until the correct record is found
» Send flow information to the XScale for archiving every five minutes
Secondary functions
» Maintain the hash table
   Identify and remove flows that are no longer active
   Ø Invalid flows are removed so memory can be reused

5 - Flow Stats Module – John DeHart and James Moscola
Design Considerations
Efficiently maintaining a hash table with chained collisions
» Efficiently inserting and deleting records
» Efficiently reading hash table records
Synchronization issues
» Multiple threads modifying the hash table and its chains

6 - Flow Stats Module – John DeHart and James Moscola
Flow Record
[Record layout, LW0–LW7 (fields marked * are members of the 6-tuple): V (1b), Next Record Number (17b), Reserved (14b); Source Address* (32b); Destination Address* (32b); SrcPort* (16b), DestPort* (16b); Protocol* (8b), TCP Flags (6b), Slice ID (VLAN)* (12b), Reserved (6b); Start Timestamp (16b), End Timestamp (16b); Packet Counter (32b); Byte Counter (32b)]
Total Record Size = 8 32-bit words
» V is the valid bit
   Only needed at the head of a chain
   '1' for a valid record, '0' for an invalid record
» Start timestamp (16 bits) is set when the record starts counting the flow
   Reset to zero when the record is archived
» End timestamp (16 bits) is set each time a packet is seen for the given flow
» Packet and byte counters are incremented for each packet on the given flow
   Reset to zero when the record is archived
» Next Record Number is the next record in the hash chain
   0x1FFFF if the record is the tail
   Address of next record = (next_record_num * record_size) + collision_table_base_addr
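The 8-word record above can be modeled as a bit-packed structure. The sketch below (Python used for illustration; field names and the exact assignment of fields to longwords are assumptions, since the slide does not pin down the word order) shows the field widths from this slide packed into eight 32-bit words:

```python
# Hypothetical packing of the 8-LW flow record described above.
# Field widths come from the slide; the longword layout is an assumed example.

def pack_flow_record(valid, next_rec, slice_id, src, dst, sport, dport,
                     proto, tcp_flags, start_ts, end_ts, pkts, bytes_):
    """Pack flow fields into eight 32-bit words (one possible layout)."""
    lw0 = ((valid & 0x1) << 31) | ((next_rec & 0x1FFFF) << 14)  # V | next | 14b rsvd
    lw1 = ((proto & 0xFF) << 24) | ((tcp_flags & 0x3F) << 18) \
          | ((slice_id & 0xFFF) << 6)                           # proto | flags | slice | 6b rsvd
    lw2 = src & 0xFFFFFFFF                                      # source address
    lw3 = dst & 0xFFFFFFFF                                      # destination address
    lw4 = ((sport & 0xFFFF) << 16) | (dport & 0xFFFF)           # src/dst ports
    lw5 = ((start_ts & 0xFFFF) << 16) | (end_ts & 0xFFFF)       # 16b timestamps
    lw6 = pkts & 0xFFFFFFFF                                     # packet counter
    lw7 = bytes_ & 0xFFFFFFFF                                   # byte counter
    return [lw0, lw1, lw2, lw3, lw4, lw5, lw6, lw7]
```

Whatever the actual order, the widths sum to exactly 256 bits, which is why one record fits a single 8-word SRAM read.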

7 - Flow Stats Module – John DeHart and James Moscola
Timestamp Details
The timestamp on the XScale is 64 bits
Storing 64-bit start and end timestamps would make each flow record too large for a single SRAM read
Instead, store only the 16 bits of each timestamp needed to represent a five-minute interval
» Clock frequency = 1.4 GHz
» Timestamp increments every 16 clock cycles
» Use bits 41:26 for the 16-bit timestamps
   (2^26 * 16 cycles) / 1.4 GHz = 0.767 seconds
   (2^41 * 16 cycles) / 1.4 GHz = 25131.69 seconds (418 minutes)
» Time interval that can be represented using these bits: 0.767 seconds through 418 minutes
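The bit selection and the two numbers above can be checked directly. A minimal sketch (Python for illustration; on the ME this is just a shift and mask):

```python
CLOCK_HZ = 1.4e9          # 1.4 GHz ME clock (from the slide)
CYCLES_PER_TICK = 16      # timestamp increments every 16 cycles

def ts16(timestamp64):
    """Extract bits 41:26 of the 64-bit timestamp as the 16-bit flow timestamp."""
    return (timestamp64 >> 26) & 0xFFFF

# One unit of the 16-bit timestamp (bit 26 of the full counter):
resolution_s = (2**26 * CYCLES_PER_TICK) / CLOCK_HZ   # ~0.767 s
# Period of the top retained bit (bit 41):
span_s = (2**41 * CYCLES_PER_TICK) / CLOCK_HZ         # ~25131.69 s (~418 min)
```

So two packets of the same flow within a five-minute archive cycle always get comparable 16-bit timestamps, at ~0.767 s granularity.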

8 - Flow Stats Module – John DeHart and James Moscola
Hash Table Memory
Allocating 4 MBytes in SRAM Channel 3 for the hash table
» Supports ~130K records
» Memory divided 75% for the main table and 25% for the collision table
» Memory required = Main_table_size + Collision_table_size
   = 0.75 * (#records * #bytes/record) + 0.25 * (#records * #bytes/record)
   = ~98K records + ~32K records
   = ~3 MBytes + ~1 MByte
Space for the main table and collision table can be adjusted to tune performance
» A larger main table means fewer collisions, but adequate space is still needed for the collision table
[Diagram: main table ~75%, collision table ~25%]
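The sizing arithmetic above works out as follows (a simple check of the slide's numbers):

```python
# Sizing arithmetic for the 4 MB hash table (values match the slide).
RECORD_BYTES = 8 * 4                          # 8 32-bit words per record
TABLE_BYTES = 4 * 1024 * 1024                 # 4 MB in SRAM Channel 3

total_records = TABLE_BYTES // RECORD_BYTES   # 131072 (~130K records)
main_records = int(0.75 * total_records)      # 98304  (~98K, 3 MB)
collision_records = total_records - main_records  # 32768 (~32K, 1 MB)
```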

9 - Flow Stats Module – John DeHart and James Moscola
Inserting Into Hash Table
The IXP has 3 different hash functions (48-bit, 64-bit, 128-bit)
» The 64-bit hash function is sufficient and takes less time than the 128-bit hash function
   Source Address and Protocol are not included in the hash input: HASH(D.Addr, S.Port, D.Port)
The result of the hash is used to address the main hash table
» Since we want ~100K records in the main table, the hash result is folded to get as close to 100K entries as possible by adding a 16-bit and a 15-bit chunk of the result:
   hash_result(15:0) + hash_result(30:16) = record_number
» Records in the main table are the heads of chains
» If the slot at the head of the chain is empty (valid_bit=0), store the record there
» If the slot at the head of the chain is occupied, compare the 6-tuple
   If the 6-tuple matches:
   Ø If packet_count == 0 (an existing flow has a packet_count of 0 when its previous packets have just been archived):
      – Increment packet_counter for the record
      – Add the size of the current packet to byte_counter
      – Set the start and end timestamps
   Ø If packet_count > 0:
      – Increment packet_counter for the record
      – Add the size of the current packet to byte_counter
      – Set the end timestamp
   If the 6-tuple doesn't match, a collision has occurred and the record must be stored in the collision table
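The folding of the hash result and the update-on-match rules can be sketched as follows (an illustrative model, not the ME microcode; the function and dictionary names are made up):

```python
def record_number(hash64):
    """Fold a 64-bit hash result into a main-table index by adding
    the 16-bit chunk hash_result(15:0) and the 15-bit chunk hash_result(30:16)."""
    return (hash64 & 0xFFFF) + ((hash64 >> 16) & 0x7FFF)

# Maximum index: 0xFFFF + 0x7FFF = 98302, which fits the ~98K-entry main table.

def update_matching_record(rec, pkt_len, now):
    """Apply this slide's rules to a record whose 6-tuple matched the packet."""
    if rec["pkts"] == 0:
        rec["start_ts"] = now    # flow was just archived: restart the start time
    rec["pkts"] += 1             # increment packet_counter
    rec["bytes"] += pkt_len      # add current packet size to byte_counter
    rec["end_ts"] = now          # end timestamp is set on every packet
```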

10 - Flow Stats Module – John DeHart and James Moscola
Hash Collisions
Hash collisions are chained in a linked list
» The head of the list is in the main table
» The remainder of the list is in the collision table
An SRAM ring maintains the list of free slots in the collision table
» Slots are numbered from 0 to #_Collision_Table_Slots
   Same numbering as next_record_number
   To convert to a memory address: (slot_num * record_size) + collision_table_base_addr
» When a collision occurs, a pointer to an open slot in the collision table is retrieved from the SRAM ring
» When a record is removed from the collision table, a pointer to the invalidated slot is returned to the SRAM ring
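The slot-number arithmetic and the free-slot ring can be modeled as below (Python deque standing in for the SRAM ring; the base address is taken from slide 20, and the slot count is the ~32K figure from slide 8, so treat both as this deck's working values):

```python
from collections import deque

COLLISION_TABLE_BASE = 0xC0500000   # LCE_FS_COLLISION_TABLE_BASE (slide 20)
RECORD_SIZE = 32                    # bytes per record
NUM_SLOTS = 32768                   # ~32K collision slots (slide 8; slide 20 uses 32384)

def slot_to_addr(slot_num):
    """Convert a collision-table slot number (as carried on the SRAM ring
    and in next_record_number) to an SRAM byte address."""
    return slot_num * RECORD_SIZE + COLLISION_TABLE_BASE

free_ring = deque(range(NUM_SLOTS))  # loaded at init with every slot number

def alloc_slot():
    return free_ring.popleft()       # on collision: take an open slot

def free_slot(slot_num):
    free_ring.append(slot_num)       # on removal: give the slot back
```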

11 - Flow Stats Module – John DeHart and James Moscola
Archiving Hash Table Records
Send all valid records in the hash table to the XScale for archiving every 5 minutes
For each record in the main table (i.e., the start of a chain)...
» For each record in the hash chain...
   If the record is valid...
   Ø If packet count > 0:
      – Send the record to the XScale via the SRAM ring
      – Set the packet count to 0
      – Set the byte count to 0
      – Leave the record in the table
   Ø If packet count == 0:
      – The flow has already been archived
      – No packet has arrived on the flow in 5 minutes
      – The record is no longer valid
      – Delete the record from the hash table to free memory
[Info sent to the XScale for each flow every 5 minutes, LW0–LW9: Source Address (32b); Destination Address (32b); SrcPort (16b), DestPort (16b); Protocol (8b), TCP Flags (6b), Slice ID (VLAN) (12b), Reserved (6b); Start Timestamp_high (32b); Start Timestamp_low (32b); End Timestamp_high (32b); End Timestamp_low (32b); Packet Counter (32b); Byte Counter (32b)]
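The archive pass over one chain can be sketched like this (a simplified model of the ME loop; records are dictionaries and `send_to_xscale` stands in for the SRAM ring write):

```python
def archive_chain(chain, send_to_xscale):
    """Archive one hash chain per the rules above: send active records and
    zero their counters; drop records idle since the last 5-minute cycle.
    Returns the records that stay in the table."""
    survivors = []
    for rec in chain:
        if not rec["valid"]:
            continue
        if rec["pkts"] > 0:
            send_to_xscale(dict(rec))   # archive via the SRAM ring
            rec["pkts"] = 0             # counters restart for the next cycle
            rec["bytes"] = 0
            survivors.append(rec)       # record stays in the table
        # pkts == 0: already archived, no packets in 5 minutes -> deleted
    return survivors
```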

12 - Flow Stats Module – John DeHart and James Moscola
Deleting Records from Hash Table
While archiving records
» If the packet count is zero, remove the record from the hash table
   The record has already been archived, and no packets have arrived in the last five minutes
To remove a record
» If ((record == head) && (record == tail))
   Ø valid_bit = 0
» Else if ((record == head) && (record != tail))
   Replace the record with record.next
   Free the slot of the moved record
» Else if (record != head)
   Set the previous record's next pointer to record.next
   Free the slot of the deleted record
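The three removal cases can be sketched on a small model where the head lives in a `main` table, collision records live in `coll` keyed by slot number, and `freed` collects slots returned to the freelist (all names are illustrative):

```python
TAIL = 0x1FFFF   # next_record_number value marking the end of a chain

def remove_record(main, coll, head_idx, rec, prev, freed):
    """Remove `rec` from the chain headed at main[head_idx].
    prev is None when rec is the head; collision records carry a 'slot' key."""
    is_head = prev is None
    is_tail = rec["next"] == TAIL
    if is_head and is_tail:
        rec["valid"] = False                 # lone head: just clear the valid bit
    elif is_head:
        nxt = rec["next"]
        main[head_idx] = dict(coll[nxt])     # replace head with record.next
        freed.append(nxt)                    # free the moved record's slot
    else:
        prev["next"] = rec["next"]           # splice the record out of the chain
        freed.append(rec["slot"])            # free the deleted record's slot
```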

13 - Flow Stats Module – John DeHart and James Moscola
Memory Synchronization Issues
Multiple threads read/write the same blocks of memory
Only one ME is allowed to modify the structure of the hash table
» i.e., inserting and deleting nodes
Global registers indicate that the structure of the hash table is being modified
» Eight global lock registers (1 per thread) indicate which chain in the hash table is being modified
» When a thread wants to insert/delete a record from the hash table:
   Store a pointer to the head of the hash chain in the thread's dedicated global lock register
   Ø If another thread is processing a packet that hashed to the same hash chain, wait for its lock register to clear and restart processing the packet
   Ø Otherwise, continue processing the packet normally
   Clear the global lock register when done with inserts/deletes
   Ø A value of 0xFFFFFFFF indicates that the lock is clear
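The lock-register scheme can be modeled as below. Note this Python version only models the check; on the ME the scheme is safe because threads on one ME run non-preemptively between context swaps (function names are made up):

```python
CLEAR = 0xFFFFFFFF                # lock-register value meaning "no lock held"
lock_regs = [CLEAR] * 8           # one global lock register per thread

def try_lock_chain(thread_id, chain_head_addr):
    """Publish the chain head this thread wants to modify, then check
    whether any other thread already holds that same chain."""
    lock_regs[thread_id] = chain_head_addr
    for t, val in enumerate(lock_regs):
        if t != thread_id and val == chain_head_addr:
            lock_regs[thread_id] = CLEAR
            return False          # chain busy: restart packet processing
    return True                   # safe to insert/delete on this chain

def unlock_chain(thread_id):
    lock_regs[thread_id] = CLEAR  # done with inserts/deletes
```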

14 - Flow Stats Module – John DeHart and James Moscola
Flow Stats Execution
ME 1
» Init - Configure the hash function
» 8 threads
   Read the packet header
   Hash the packet header
   Send the header and hash result to ME2 for processing
ME 2 (thread numbers may need adjusting)
» Init - Load the SRAM ring with addresses for each slot in the collision table
» Init - Set TIMESTAMP to 0
» 7 threads (ctx 1-7)
   Insert records into the hash table
   Increment counters for records
» 1 thread (ctx 0)
   Archive and delete hash table records

15 - Flow Stats Module – John DeHart and James Moscola
Diagram of Flow Stats Execution (ME1)
[Flowchart, per packet: get buffer handle from QM → read buffer descriptor (SRAM) → read packet header (DRAM) → build hash key → compute hash → send packet info to ME2 → send buffer handle to TX; the steps are annotated with cycle estimates of 150, 300, ~50, 100, 60, ~570, and 300 cycles]

16 - Flow Stats Module – John DeHart and James Moscola
Diagram of Flow Stats Execution (ME2) -- Incrementing Counters
» Adds records to the hash chain, but doesn't remove them
[Flowchart: get packet info from ME1 (60 cycles) → set register to lock the chain → read hash table record (SRAM, 150 cycles) → compare record to header (~10 cycles). On a valid match: if count == 0, write START/END time and new counts, otherwise write END time and new counts (150 cycles), then clear the lock register. On no match: if not at the tail, read the next record in the chain (150 cycles) and repeat; at the tail, get a record slot from the freelist (150 cycles), insert the new record, and clear the lock register. Best case: ~360 cycles; worst case: ~520 + 160x cycles, where x is the number of chain records traversed]

17 - Flow Stats Module – John DeHart and James Moscola
Diagram of Flow Stats Execution (ME2) -- Archiving Records
» Removes records from the hash chain, but doesn't add them
» Processing of archiving records occurs every five minutes
[Flowchart: wait until 5 minutes have elapsed (read current time) → read the next record from the main table → set register to lock the chain → for each record in the chain: if valid and count > 0, send the record to the XScale and reset its counters and timestamps; if count == 0, delete it -- at the head of the list, either set the valid bit to zero (if also the tail) or read record.next, replace the record with record.next, and return the record.next slot to the freelist; elsewhere, write next_ptr to the previous list item and return the record slot to the freelist → clear the lock register → continue until all chains and all records are done]

18 - Flow Stats Module – John DeHart and James Moscola
Return from Swap
When returning from each CTX switch, always check the global lock registers
» If any of the global locks contains the address of the hash chain that the current thread is trying to modify, then the hash chain is locked and the current thread must restart processing of the current packet
» If none of the global locks contains the address of the hash chain that the current thread is trying to modify, then the current thread can continue processing that packet as usual
[Flowchart: check global lock values → match current chain? Yes: restart processing the packet; No: continue processing the packet]

19 - Flow Stats Module – John DeHart and James Moscola
SPP V1 LC Egress with 1x10Gb/s Tx -- Ring Data Formats
[Diagram: Flow Stats1 passes per-packet info to Flow Stats2 over the NN ring, with fields: Buffer Handle (24b), Rsv (3b), Port (4b), V (1b, valid bit); Source Address (32b); Destination Address (32b); SrcPort (16b), DestPort (16b); Protocol (8b), TCP Flags (6b), Slice ID (VLAN) (12b), Rsvd (3b); Hash Result (17b), Rsv (2b); Packet Length (16b). Flow Stats2 sends archive records to the XScale via an SRAM ring in the 10-LW format of slide 11 (addresses, ports, Protocol, TCP Flags, Slice ID, 64-bit start/end timestamps split high/low, packet and byte counters), and draws collision-table slots from the SRAM freelist.]

20 - Flow Stats Module – John DeHart and James Moscola
Flow Statistics Module
Scratch rings
» QM_TO_FS_RING_1: 0x2400 - 0x27FF  // for receiving from QM
» QM_TO_FS_RING_2: 0x2800 - 0x2BFF  // for receiving from QM
» FS1_TO_FS2_RING: 0x2C00 - 0x2FFF  // for sending data from FS1 to FS2
» FS_TO_TX_RING_1: 0x3000 - 0x33FF  // for sending data to TX1
» FS_TO_TX_RING_2: 0x3400 - 0x37FF  // for sending data to TX2
SRAM rings
» FS2_FREELIST: 0x???? - 0x????  // stores list of open slots in the collision table
» FS2_TO_XSCALE: 0x???? - 0x????  // for sending record information to the XScale for archiving
LC Egress SRAM Channel 3 info for Flow Stats
» HASH_CHAIN_TAIL = 0x1FFFF  // indicates the end of a hash chain
» ARCHIVE_DELAY = 0x0188  // 5 minutes (392 ticks of the 0.767 s timestamp unit)
» RECORD_SIZE = 8 * 4 = 32  // 8 32-bit words/record * 4 bytes/word
» TOTAL_NUM_RECORDS = 130688  // MAX with a 4 MB table is ~130K records
» NUM_HASH_TABLE_RECORDS = 98304  // NUM_HASH_TABLE_RECORDS <= TOTAL_NUM_RECORDS (mod 32 = 0)
» NUM_COLLISION_TABLE_RECORDS = TOTAL_NUM_RECORDS - NUM_HASH_TABLE_RECORDS = 32384
» LCE_FS_HASH_TABLE_BASE = SRAM_CHANNEL_3_BASE_ADDR + 0x200000 = 0xC0200000
» LCE_FS_HASH_TABLE_SIZE = 0x400000
» LCE_FS_COLLISION_TABLE_BASE = HASH_TABLE_BASE + (RECORD_SIZE * NUM_HASH_TABLE_RECORDS) = 0xC0500000
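The memory-map constants above are internally consistent, which can be cross-checked with a few lines (SRAM_CHANNEL_3_BASE_ADDR = 0xC0000000 is implied by the slide's own arithmetic):

```python
# Cross-check of the SRAM Channel 3 memory-map constants from this slide.
SRAM_CHANNEL_3_BASE_ADDR = 0xC0000000     # implied by 0xC0200000 - 0x200000
RECORD_SIZE = 8 * 4
TOTAL_NUM_RECORDS = 130688
NUM_HASH_TABLE_RECORDS = 98304
NUM_COLLISION_TABLE_RECORDS = TOTAL_NUM_RECORDS - NUM_HASH_TABLE_RECORDS
LCE_FS_HASH_TABLE_BASE = SRAM_CHANNEL_3_BASE_ADDR + 0x200000
LCE_FS_HASH_TABLE_SIZE = 0x400000
LCE_FS_COLLISION_TABLE_BASE = (LCE_FS_HASH_TABLE_BASE
                               + RECORD_SIZE * NUM_HASH_TABLE_RECORDS)
```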

21 - Flow Stats Module – John DeHart and James Moscola
Overview of Flow Stats
2 MEs in the FastPath to collect flow data for each packet
» Byte counter per flow
» Packet counter per flow
» Archive data to the XScale via an SRAM ring every 5 minutes
XScale control daemon(s) to process the data
» Receive flow information from the MEs
» Reformat it into PlanetFlow format
» Maintain databases for PlanetLab archiving and for identifying internal flows (pre-NAT translation) when an external flow (post-NAT) has a complaint lodged against it

22 - Flow Stats Module – John DeHart and James Moscola
SPP V1 LC Egress with 10x1Gb/s Tx
[Same block diagram as slide 3]

23 - Flow Stats Module – John DeHart and James Moscola
Flow Record
[Same 8-LW record layout as slide 6: V (1b), Next Record Number (17b), Reserved (14b); Source Address (32b); Destination Address (32b); SrcPort (16b), DestPort (16b); Protocol (8b), TCP Flags (6b), Slice ID (VLAN) (12b), Reserved (6b); Start Timestamp (16b), End Timestamp (16b); Packet Counter (32b); Byte Counter (32b)]
Total Record Size = 8 32-bit words
» V is the valid bit
   Only needed at the head of a chain
   '1' for a valid record, '0' for an invalid record
» Start timestamp (16 bits) is set when the record starts counting the flow
   Reset to zero when the record is archived
» End timestamp (16 bits) is set each time a packet is seen for the given flow
» Packet and byte counters are incremented for each packet on the given flow
   Reset to zero when the record is archived
» For TCP flows, the TCP Flags are OR'ed in from each packet
» Next Record Number is the next record in the hash chain
   0x1FFFF if the record is the tail
   Address of next record = (next_record_num * record_size) + collision_table_base_addr

24 - Flow Stats Module – John DeHart and James Moscola
Archiving Hash Table Records
[Repeat of slide 11: send all valid records to the XScale every 5 minutes. Records with packet count > 0 are sent via the SRAM ring, their packet and byte counts are zeroed, and they stay in the table; records with packet count == 0 have already been archived, have seen no packets in 5 minutes, and are deleted to free memory. The 10-LW archive record format is the same as on slide 11.]

25 - Flow Stats Module – John DeHart and James Moscola
Overview of Flow Stats Control
Main functions
» Collection of flow information for the PlanetLab node
   Used when a complaint is lodged about a misbehaving flow
   Must be able to identify the flow and the slice that produced it
» Aggregation of flow information from:
   Multiple GPEs
   Multiple NPEs
» Correlation with NAT records to identify the internal flow and the external flow
   The external flow is what the complaint will be about
   The internal flow is what the involved PlanetLab researcher will know about

26 - Flow Stats Module – John DeHart and James Moscola
Overview of PlanetFlow
PlanetFlow
» Unprivileged slice
   Flow Collector:
   Ø Ulogd (fprobe-ulog)
      – Netlink socket
      – Uses VSys for privileged operations
      – Every 5 minutes dumps its cache to the DB
   DB:
   Ø On the PlanetLab node
   Ø 5-minute records
   Ø Flows spanning 5-minute intervals are aggregated daily
Central Archive
» At Princeton?
» Updated periodically by using rsync to retrieve new DB entries from ALL PlanetLab nodes

27 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow
[Block diagram: the FlowStats MEs (HF, LK, FS2) send flow records over an SRAM ring to the SCD on the egress XScale; NAT scratch rings connect the ingress XScale. On the CP, NATd receives NAT records and FSd receives flow records, feeding the external PlanetFlow DB (Ext PF DB). Each GPE keeps its own PF DB, merged by a dbAccumulator; the Central Archive pulls records via rsync. The Central Archive record and Ext PF DB record formats are indicated on the slide.]

28 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow
[Same diagram as slide 27]

29 - Flow Stats Module – John DeHart and James Moscola
Translations Needed
NPE flow records:
» VLAN to SliceID
   Comes from the SRM
» IXP timestamp to wall-clock time
   The SCD records the wall-clock time at which it started the IXP
   How do we manage time slip between the clocks?
GPE flow records:
» NAT port translations
   The Src Port from the GPE record becomes the SPP Orig Src Port
   The Src Port from the natd translation record becomes the Src Port
   Ø natd provides port translation updates

30 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow Databases
[Diagram: NAT records and flow records from the GPE PF DB and the CP feed the external PF DB (Ext PF DB), which in turn feeds the Central Archive]

31 - Flow Stats Module – John DeHart and James Moscola
Merging of DBs
NPE flows
» No NAT
» Go directly into the Ext PF DB
   SPP Orig Src Port == SrcPort
» Do they need SliceID translation?
   We use the VLAN, but this probably needs to be the PlanetLab version of a Slice ID
   The SRM will provide a VLAN to SliceID translation
   Ø Where and when?
GPE configured flows
» No NAT
» Go directly into the Ext PF DB
   SPP Orig Src Port == SrcPort
GPE NAT flows
» Find the corresponding NAT record and extract the translated SrcPort
   Insert the record into the Ext PF DB with the original SrcPort moved to SPP Orig Src Port
   Set Src Port to the translated SrcPort
CP traffic?

32 - Flow Stats Module – John DeHart and James Moscola
Overview of PlanetFlow
[Repeat of slide 26, with parts of the slide crossed out ("XX")]

33 - Flow Stats Module – John DeHart and James Moscola
PlanetFlow Raw Data
[Annotated hex dump of NetFlow data. The NetFlow Header (at the beginning of the file, repeating every 30 flow records) holds: Version, Count, Uptime, Unix Secs, Unix nSecs, Flow Sequence, Engine Type (unused), Engine Id (unused), Pad16 (unused). Each NetFlow Flow Record holds: SA, DA (e.g., 128.252.153.205 / 128.252.153.211), IPv4 NextHop (unused), In SNMP and Out SNMP (if_nametoindex), Pkt Count, Byte Count, First Switched (flow creation time), Last Switched (time of last pkt), Src Port (e.g., 52254, 52255), Dst Port (e.g., 443), Pad, TCP flags, Proto, Src Tos, Src As (unused), Dst As (unused), XID (SliceID)]

34 - Flow Stats Module – John DeHart and James Moscola
SPP/PlanetFlow Raw Data
[Same annotated NetFlow hex dump as the previous slide, with the SPP-specific changes: the header's Engine Type and Engine Id fields become SPP Engine Type and SPP Engine Id (xx, yy); in each flow record, the unused Src As field becomes SPP Orig Src Port (zzzz); Uptime, First Switched, and Last Switched are in msecs]

35 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes
Time:
» Keeping time in sync among the various machines:
   The Flow Stats ME timestamps with IXP clock ticks
   Ø Something has to convert this to a Unix time
   The GPE(s) timestamp with Unix gettimeofday()
   The CP collects flow records and aggregates based on time
   Proposal:
   Ø The XScale, GPE(s), and CP will use ntp to keep their Unix times in sync
   Ø At the beginning of each reporting cycle, the Flow Stats ME should send a timestamp record just to allow the XScale and CP to keep the time in sync
   Ø OR: the XScale can read the IXP clock tick and report it to the CP along with the XScale's Unix time
» What are the times recorded in the Header and Flow Records?
   Header
   Ø Uptime (msecs): msecs since a base start time
   Ø Time since the Unix Epoch (January 1, 1970):
      – Unix secs
      – Unix nSecs
   Ø Uptime and Unix (secs, nSecs) represent the SAME time
      – So that the flow times can be calculated from them
   Flow Record
   Ø First Switched (flow creation time): msecs since a base start time
   Ø Last Switched (time the last packet in the flow was seen): msecs since the base start time

36 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes (continued)
NetFlow Header
» Filled in AFTER 30 flow records are filled in OR a timeout occurs (10 minutes)
» The COUNT field tells how many flow records are valid
   The file or data packet is ALWAYS padded out to a size that would hold 30 flow records
» Flow Sequence: running total of the number of flow records emitted
Flow Header and Flow Records
» Emitted in chunks of 30 flow records plus a Flow Header
   Emitted either by writing to a file or by sending over a socket to a mirror site
   Padded out to a size that would hold 30 flow records
» A flow is emitted when it has been inactive for at least a minute or when it has been active for at least 5 minutes
fprobe-ulog threads:
» emit_thread
» scan_thread
» cap_thread
» unpending_thread
Flow lists
» flows[]: hashed array of flows, with buckets chained off the head of each list
   These are flows that have been reported over the netlink socket
» flows_emit: linked list of flows ready to be emitted
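The 30-record chunking and padding rules above can be sketched as follows (a simplified model of the emit path; the dictionary fields stand in for the real NetFlow header bytes):

```python
RECORDS_PER_CHUNK = 30   # a chunk always reserves space for 30 flow records

def emit_chunk(flow_records, flow_sequence):
    """Build one emission chunk: a header plus up to 30 flow records,
    padded out to 30 slots; COUNT says how many slots are valid.
    Returns (header, padded_slots, updated_flow_sequence)."""
    count = len(flow_records)
    assert count <= RECORDS_PER_CHUNK
    header = {
        "count": count,                  # number of VALID records in this chunk
        "flow_sequence": flow_sequence,  # running total of records emitted so far
    }
    padding = [None] * (RECORDS_PER_CHUNK - count)   # always pad to full size
    return header, flow_records + padding, flow_sequence + count
```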

37 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes (continued)
VLANs and SliceIDs
» The NPE and LC use VLANs to differentiate slices
» Flow records must record slice IDs
   The SRM will provide the VLAN to SliceID translation
» The GPE(s) do not differentiate slices by VLAN
   All flows from a GPE will use the same VLAN
   The GPE keeps flow records locally using the Slice ID
   The Flow Stats ME could ignore GPE flow packets if it were told the default GPE VLAN
   Ø Otherwise, one of the fs daemons could drop the flow records for the GPE flows that the Flow Stats ME reports
Slice ID:
» What exactly is it?
» Is the XID recorded by PlanetFlow actually the slice ID, or is it the VServer ID?

38 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes (continued)
NAT port translations
» GPE flow records are the ones that need the NAT port translation data
» GPE flow records will come across from the GPE(s) to the CP via rsync or similar
» natd will report NAT port translations, with timestamps, to the fs daemon
» The fs daemon will have to maintain NAT port translations (with their timestamps) for possible later correlation with GPE flow records
The GPE(s) will all use the same default VLAN
» The SRM will send this VLAN to scd so it can write it to SRAM for the fs ME to read
   The fs ME will then filter out GPE flow records
SRM -> fsd messaging
» srm will push out VLAN -> SliceID translation creation and deletion messages
   srm will wait ~10 minutes before re-using a VLAN
   srm will send the delete-VLAN message after waiting the 10 minutes
   fsd should not have to keep any history of VLAN/SliceID translations
   Ø It should get the creation message before it receives any flow records for the VLAN
   Ø It should get the last flow record before it gets the deletion
» fsd will also be able to query the SRM for the current translation
   This facilitates a restart of fsd while the SRM maintains current state

39 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes (continued)
rsync of flow record files from the GPE(s) to the CP
» A particular run of rsync may get a file that is still being written to by fprobe-ulog on the GPE
   A subsequent rsync may get the file again with additional records in it
» Sample rsync command:
   rsync --timeout 15 -avzu -e "ssh -i /vservers/plc1/etc/planetlab/root_ssh_key.rsa " root@drn02:/vservers/pl_netflow/pf /root/pf
   This will report the files that have been copied over

40 - Flow Stats Module – John DeHart and James Moscola
Issues and Notes (continued)
Sample fprobe-ulog command:
» /sbin/fprobe-ulog -M -e 3600 -d 3600 -E 60 -T 168 -f pf2 -q 1000 -s 30 -D 250000
» Started from /etc/rc.d/rc[2345].d/S56fprobe-ulog
   All linked to /etc/init.d/fprobe-ulog
GPE flow record collection daemon: fprobe-ulog
» Scan thread
   Collects flow records into a linked list
» Emit thread
   Periodically writes flow records out to a file
   Ø Every 600 seconds -- ten minutes!
» The daemon can also send flow records to a remote collector!
   So we could have the GPEs emit their flow records directly to the flow stats daemon on the CP
   Sample command:
   Ø /sbin/fprobe-ulog -M -e 3600 -d 3600 -E 60 -T 168 -f pf2 -q 1000 -s 30 -D 250000 : [/[<local][/<type]] ...
   Ø There can be multiple remote host specifications
   Ø Where
      – remote: remote host to send to
      – port: destination port to send to
      – local: local hostname to use
      – type: m for mirror-site, r for rotate-site
      – Send to all mirror-sites; rotate through rotate-sites

41 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow
[Same diagram as slide 27, with the daemons named: scd on the egress XScale; natd, fsd, and srm on the CP; fprobe on the GPE; the Central Archive pulls records via rsync]

42 - Flow Stats Module – John DeHart and James Moscola
Plan/Design
The Flow Stats daemon, fsd, runs on the CP
» Collects flow records from the GPE(s) and NPE(s) and writes them into a series of PlanetFlow2 files named pf2.#, where # is (0-162)
   The current file is closed after N minutes, # is incremented, and a new file is opened and started
   Ø This mimics what fprobe-ulog does now on the GPE(s)
   These files are then collected periodically by PLC for use and archiving
   Ø I don't think there is any explicit indication that PLC has picked up the files, but the timing must be such that we know it is done before we roll over the file names and overwrite an old file
» Gets NAT data from natd
   Keep records of this with timestamps so we can correlate with flow records coming from the GPE(s)
   Ø Check with Mart on how this will work
» Gets VLAN to SliceID data from srm
   srm will send start-translation and stop-translation messages, with a 10-minute wait period when stopping a translation, to make sure we are done with flow records for that slice
   Ø The FS ME archives records every 5 minutes. Slices are long-lived (right?), so this should not be a problem
   fsd can also request a translation from srm
   Ø This is in case fsd has to be restarted while srm and the other daemons continue running

43 - Flow Stats Module – John DeHart and James Moscola
Plan/Design (continued)
fsd gathers records from the GPE(s) and NPE(s)
» Gathers flow records from the GPE(s) via socket(s) from fprobe-ulog on the GPE(s)
   They come across as one data packet with up to 30 flow records
   Ø The packet is padded out to a full 30 flow records, with the Count in the header indicating how many are valid
   Update the NetFlow header to indicate that this is an SPP, and which SPP node it is, using the Engine Type and Engine ID fields
   Update with NAT data and write immediately out to the current pf2 file, keeping its NetFlow header
» Gathers flow records from the NPE(s) via a socket from scd on the XScale
   They come across one flow record at a time
   Ø No NetFlow header
   Create a NetFlow header
   Ø With appropriate Uptime and UnixTime (secs, nsecs)
   Ø With the SPP Engine Type and SPP Engine ID
   Ø Modify the flow record times to be msecs correlated with Uptime
   Update the NPE flow record with the SliceID from srm
   Collect NPE records for a period of time, or until we get 30, and then write them out to the current pf2 file with a NetFlow header

44 - Flow Stats Module – John DeHart and James Moscola
Plan/Design (continued)
FS ME and scd
» Use a command field in records coming across from the FS ME to scd
» Use one command to set the current time
   When the FS ME is starting an archive cycle, it first sends a timestamp command
   When scd gets this timestamp command, it associates it with a gettimeofday() time and sends the FS ME time and the gettimeofday() time to fsd on the CP, so fsd can associate ME times with Unix times
» Use another command to indicate flow records
   Flow records can be sent directly on to fsd on the CP

45 - Flow Stats Module – John DeHart and James Moscola
End

46 - Flow Stats Module – John DeHart and James Moscola
OLD STUFF

47 - Flow Stats Module – John DeHart and James Moscola
PlanetFlow Raw Data
[Annotated hex dump of the same NetFlow data as slide 33, but with each 16-bit word byte-swapped. Header fields: Version, Count, Uptime, Unix Secs, Unix nSecs, Flow Sequence, Eng. Type (unused), Engine Id (unused), Pad16 (unused). Flow record fields: SA, DA, IPv4 NextHop (unused), In/Out SNMP (if_nametoindex), Pkt Count, Byte Count, First Switched, Last Switched, Src Port, Dst Port, Pad, TCP flags, Proto, Src Tos, Src As (unused), Dst As (unused), XID (SliceID)]

48 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow Databases
[Same diagram as slide 30, with an additional internal PF DB (Int PF DB) on the CP alongside the Ext PF DB]

49 - Flow Stats Module – John DeHart and James Moscola
SPP PlanetFlow
[Same diagram as slide 27, with an additional Int PF DB and Int PF DB record format on the CP]

50 - Flow Stats Module – John DeHart and James Moscola
Merging of DBs
NPE flows
» No NAT
» Go directly into the Ext PF DB and into the Int PF DB
   Internal SrcPort == SrcPort
» Do they need SliceID translation?
   We use the VLAN, but this probably needs to be the PlanetLab version of a Slice ID
   The SRM will provide a VLAN to SliceID translation
   Ø Where and when?
GPE configured flows
» No NAT
» Go directly into the Ext PF DB and into the Int PF DB
   Internal SrcPort == SrcPort
GPE NAT flows
» Find the corresponding NAT record and extract the translated SrcPort
» Insert the record with the translated SrcPort into the Ext PF DB
» Insert the record with the internal SrcPort into the Int PF DB
CP traffic?

