Institute of Applied Astronomy
OPERATING EXPERIENCE AND PROSPECTS OF THE DATA TRANSFER AND REGISTRATION SYSTEM FOR RT-13
I. A. Bezrukov, A. I. Salnikov, V. A. Yakovlev - IAA RAS, Russian Federation
A. V. Vylegzhanin - Ioffe Physical-Technical Institute RAS, Russian Federation

Objectives and structure of the equipment

Since mid-2015 IAA RAS has operated a buffering and data transfer system (SBPD, hereafter DTRS) for observations on the two-element radio interferometer formed by the IAA RAS observatories "Badary" and "Zelenchukskaya" and the correlation processing center (CPC) in St. Petersburg, based on antennas with a mirror diameter of 13.2 m (RT-13). Regular observations on the two-element radio interferometer provide rapid Universal Time corrections for the GLONASS satellite navigation system.

The main objectives of the new system are:
- registration of 8 data streams from the broadband signal conversion system (ESPTT) at 2 Gbit/s per channel;
- rapid transfer of large volumes of data to the CPC RAS over channels with a bandwidth of up to 10 Gbit/s, simultaneously with the registration of observational data;
- storage of up to 20 TB of observational data on the disk subsystem at the observatories and up to 40 TB at the CPC RAS.

The hardware platform of the DTRS at the observatories is a Dell PowerEdge R720 server with two Intel Xeon processors, 128 GB of RAM and two Dell PowerVault MD1220 disk arrays. The server is also equipped with four dual-port Intel X520 network cards which, together with the server's other ports, provide ten 10GbE network interfaces: eight are used to record data from the ESPTT and two to transmit data to the CPC. The DTRS is connected to the ESPTT through a Cisco Nexus switch with 10GbE ports. At the CPC RAS the hardware platform is a Dell PowerEdge R730 server with two Intel Xeon processors, 128 GB of RAM and two Dell PowerVault MD1420 disk arrays (Fig. 1). The DTRS runs the FreeBSD 10.1 operating system with the ZFS file system; netmap is used as the specialized framework for processing the high-speed data streams.
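As a sanity check on these figures (an illustration, not part of the poster): one ESPTT channel at 2 Gbit/s corresponds to 250 MB/s of sustained disk writing, and all eight channels together give the 16 Gbit/s aggregate rate that recurs throughout the poster.

```shell
#!/bin/sh
# Per-channel and aggregate data rates implied by the system objectives.
# 2 Gbit/s per channel, 8 channels; decimal units (1 Gbit = 1000 Mbit).
CHANNELS=8
GBIT_PER_CH=2
mbps_per_ch=$((GBIT_PER_CH * 1000 / 8))       # Gbit/s -> MB/s (divide by 8 bits)
aggregate_gbit=$((CHANNELS * GBIT_PER_CH))
echo "per channel: ${mbps_per_ch} MB/s"
echo "aggregate:   ${aggregate_gbit} Gbit/s"
```

The 250 MB/s figure is the per-channel write-speed threshold that the disk benchmarks below are judged against.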
Fig. 1. General view of the DTRS at the "Zelenchukskaya" observatory (left and center) and at the CPC RAS (right)

Disk performance benchmarking

To compare the performance of different drive configurations, preliminary tests were conducted that simulate data recording in the observation mode. For this purpose a 10 GB block of random data (white noise) was created in RAM, after which the block was written 60 times, in eight threads at 20-second intervals, to eight ZFS partitions. Recording was performed with the standard dd utility, and the write speed was measured with the zpool iostat command. The following drive types were tested:
- SAS (10,000 rpm);
- MLC SSD;
- Near-Line SAS (7,200 rpm);
- SATA (7,200 rpm).

Table 1 shows the test results for stripe sets (RAID0 analogue) and RAIDZ (RAID5 analogue) on the two Intel Xeon processor models used; bold values marked write speeds sufficient to record the data stream of one ESPTT channel.

Table 1. Write speed, MBps (stripe with 2/3/4 disks; two values per RAIDZ configuration, one for each processor model; —: value not recoverable from the source)

Disk type           Stripe 2d   Stripe 3d   Stripe 4d   RAIDZ 3d    RAIDZ 5d
SAS 10K, 600 GB     188         262         316         140 / 209   117 / 143
NL-SAS 7.2K, 1 TB   133         200         241          93 / 185    87 / 130
SSD 480 GB          274         338         403         189 / 236   141 / 157
SATA 7.2K, 1 TB     160         192         —           111 / 167   131 / —

Based on these tests, the DTRS disk subsystem was built on SAS drives.

Operating experience

Operating experience with the DTRS and storage systems at IAA RAS with different drive types (SAS, NL-SAS, SATA) shows that the selected drive types are reliable and provide the required write/read speeds with simultaneous data transfer at the observatories as well as at the CPC RAS. In two years of continuous operation of the DTRS and storage systems there was not a single failure among the drives of any type used (SAS, NL-SAS and SATA). During this period over 700 hours of VLBI sessions were carried out on the RT-13 radio telescopes. The total amount of data recorded at the two observatories ("Badary" and "Zelenchukskaya") and transmitted to the CPC RAS was on the order of 1.7 PB.
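A scaled-down sketch of this test procedure, assuming the pool is mounted at a placeholder path; the sizes and counts here are deliberately small, not the original values (10 GB written 60 times at 20-second intervals):

```shell
#!/bin/sh
# Sketch of the Table 1 benchmark: a block of random data ("white noise")
# is generated once, then written in eight parallel threads to eight ZFS
# datasets, with a pause between passes.  POOL is a stand-in for a real
# ZFS pool mount point; all sizes are scaled-down placeholders.
POOL=${POOL:-$(mktemp -d)}
SIZE_MB=${SIZE_MB:-8}        # original: 10240 (10 GB)
REPEATS=${REPEATS:-2}        # original: 60
PAUSE=${PAUSE:-0}            # original: 20 (seconds between passes)

for ch in 0 1 2 3 4 5 6 7; do mkdir -p "$POOL/s$ch"; done

# Generate the test block once (ideally in RAM, e.g. a tmpfs mount).
dd if=/dev/urandom of="$POOL/noise.bin" bs=1M count="$SIZE_MB" 2>/dev/null

i=0
while [ "$i" -lt "$REPEATS" ]; do
    for ch in 0 1 2 3 4 5 6 7; do
        # One dd writer per channel, all eight running in parallel.
        dd if="$POOL/noise.bin" of="$POOL/s$ch/block_$i" bs=1M 2>/dev/null &
    done
    wait                     # let all eight writers finish the pass
    sleep "$PAUSE"
    i=$((i + 1))
done
echo "wrote $REPEATS passes of ${SIZE_MB} MB to 8 datasets under $POOL"
```

On a real pool the per-vdev write throughput is watched from another terminal with zpool iostat, as the poster describes.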
It should be emphasized that at the observatories the priority is recording the data stream at speeds up to 16 Gbit/s without disk-subsystem errors or data loss; at the CPC RAS the priority is the read rate from the storage system into the correlator. Taking into account the need to develop data-stream buffering systems for speeds of 32 Gbit/s and higher, and to increase disk capacity to hundreds of terabytes for long observing sessions, it is advisable to evaluate disk-subsystem performance more thoroughly. Where possible, the evaluation methodology should cover the most important parameters of the ZFS file system, the hardware and software, and the size of the recorded data block. When choosing a drive type, its cost must also be taken into account.

The sequence of operations in the new methodology:
- A test file of ⅔ of the total RAM size is created in the server's RAM using a random number generator; the file size is chosen to eliminate the influence of the file-system cache on the write measurements.
- Testing is performed as a series of sequential writes and reads of the test file on different disk-array configurations. The gdd utility is used as the load tool; the write speed can be measured with the iostat utility.
- Disk I/O performance is measured for the following array configurations: Stripe with 1, 2 and 4 disks; RAIDZ with 3 and 5 disks.
- Each array is tested with different sizes of the recorded data block: 512 bytes (dd default); 4096 bytes (Advanced Format HDD sector size); 5152 bytes (VDIF packet size); 8224 bytes (maximum VDIF packet size); 128 KB (default ZFS record size).
- Write speed is compared with and without the flag that waits for completion of writing the data to the disk array (gdd conv=fdatasync).

The experiment compared the following drive types: SAS 10K, 600 GB; SATA 7.2K, 6 TB (4K Advanced Format); SSD 480 GB.
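The steps above can be sketched for one measurement point as follows. The target path and file size are placeholders (the poster uses ⅔ of RAM); "gdd" is GNU dd as packaged on FreeBSD, and plain dd provides the same conv=fdatasync flag on Linux:

```shell
#!/bin/sh
# One measurement point of the revised methodology: sequentially write a
# random test file to the array with each tested block size, once without
# and once with conv=fdatasync (wait for data to reach the disks before
# dd exits).  TARGET and FILE_MB are scaled-down placeholders.
DD=${DD:-dd}                         # poster used gdd (GNU dd) on FreeBSD
TARGET=${TARGET:-$(mktemp -d)}       # stand-in for the disk-array mount
TESTFILE="$TARGET/testfile"
FILE_MB=${FILE_MB:-8}                # original: 2/3 of total RAM

# Random source data, created once (the poster generated it in RAM).
$DD if=/dev/urandom of="$TESTFILE" bs=1M count="$FILE_MB" 2>/dev/null

for bs in 512 4096 5152 8224 131072; do   # the five tested block sizes
    # Cached write: speed without waiting for the disks.
    $DD if="$TESTFILE" of="$TARGET/out_$bs"  bs="$bs" 2>/dev/null
    # Synced write: conv=fdatasync forces the data out before dd returns.
    $DD if="$TESTFILE" of="$TARGET/sync_$bs" bs="$bs" conv=fdatasync 2>/dev/null
done
echo "wrote test files for 5 block sizes under $TARGET"
```

In a real run dd's transfer summary (or iostat sampling, as in the poster) supplies the throughput figure for each block size.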
Results of the experimental studies

The studies were carried out on disk-subsystem hardware similar to the equipment at the IAA RAS observatories (a Dell R720 server with an MD1220 array of 2.5" form-factor drives, and a SuperMicro array with 3.5" form-factor drives). The drives used in the experiments had about one year of prior operation. Disk-subsystem performance was measured for writing only, with a test duration of one hour. The measurement results are shown in Table 2; the 5152- and 8224-byte rows correspond to the data-block sizes used for data registration in international VLBI observations.

Table 2. Average write speed, MBps, for different pool configurations and data-block sizes (—: value not recoverable from the source)

Pool        Block size   SSD 480 GB   SAS 600 GB (1 yr used)   SATA 6 TB
1d Stripe   4096         194          100                      197
            5152         195           97                      200
            8224         —             96                      196
            128K         —             88                      202
2d Stripe   4096         359          201                      337
            5152         358          204                      370
            8224         363          185                      353
            128K         —            174                      381
4d Stripe   4096         441          377                      365
            5152         505          384                      412
            8224         647          371                      537
            128K         700          356                      576
3d RAIDZ    4096         491          249                      302
            5152         532          238                      292
            8224         525          383                      555
            128K         —            220                      418
5d RAIDZ    4096         490          396                      459
            5152         560          380                      513
            8224         734          663                      830
            128K         —            340                      771

For comparison, the disk-subsystem performance of the DTRS at the observatories was also evaluated during observations with simultaneous recording and transmission of data. The DTRS disk subsystems installed on the RT-13 at the IAA RAS observatories (SAS disks in 4d Stripe and 3d RAIDZ pools) were evaluated in VLBI observations (mode: test duration of one hour; data recording for 10 s with 20 s pauses between recordings to the pool). Read and write speeds in the pools were measured with the standard FreeBSD utilities iostat and vmstat. The results are shown in Table 3.

Table 3. Disk-subsystem performance during observations (working system, average speed)

Pool        Read, MBps   Write, MBps
4d Stripe   35           104
3d RAIDZ    22           130

Analysis of the test results obtained with the revised methodology shows that modern SATA drives of up to 6 TB in the 3.5" form factor allow recording the data output of each of the eight ESPTT channels at 2 Gbit/s.
High-speed data transfer protocols

One of the most important characteristics of the DTRS is the promptness of data delivery to the processing center: the system must not only record the data stream from the ESPTT but simultaneously transmit these data to the correlation processing facility. Solving the problem of rapid transmission of large volumes of observational data requires an advanced high-speed protocol, and it is important that transmission causes no losses or registration errors while data are being recorded from the ESPTT at 16 Gbit/s. The Tsunami-UDP and UDT protocols were selected for evaluation; both are commonly used in Russia as well as by foreign organizations. The main criteria for comparing the protocols were the promptness of observational data delivery to the CPC and the absence of data loss and registration errors in the 16 Gbit/s band at the observatories. Both protocols use a client-server scheme and transmit data in UDP packets. A significant drawback of the Tsunami-UDP protocol is that it loads a CPU core almost completely, regardless of the channel capacity. Experimental transmissions of VLBI session data from the "Badary" and "Zelenchukskaya" observatories to the CPC in St. Petersburg showed that using Tsunami-UDP leads to data loss in the buffering system when registering data from the ESPTT at 16 Gbit/s with simultaneous transmission to the CPC. It should also be noted that software support for the Tsunami-UDP protocol is currently almost non-existent, and its compatibility with new versions of Unix operating systems has not been verified.
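The multi-stream delivery scheme can be sketched generically as below. TRANSFER_CMD is a placeholder for the UDT-based transfer client the poster describes (any program invoked as "cmd <src> <dst>" can stand in; cp is the default so the sketch is runnable); the directory names and file names are hypothetical:

```shell
#!/bin/sh
# Sketch of multi-stream delivery: a session's files are handed to up to
# N parallel transfer streams, each an independent client process.
# TRANSFER_CMD, SRC_DIR and DST_DIR are placeholders, not the real tools
# or paths used at the observatories.
STREAMS=${STREAMS:-4}               # poster: 4 during recording, 8 after
TRANSFER_CMD=${TRANSFER_CMD:-cp}
SRC_DIR=${SRC_DIR:-$(mktemp -d)}    # stand-in for the session directory
DST_DIR=${DST_DIR:-$(mktemp -d)}    # stand-in for the CPC destination

# Create a few stand-in "scan" files if the source directory is empty.
[ -n "$(ls -A "$SRC_DIR")" ] || for i in 1 2 3 4 5 6; do
    printf 'scan %s payload\n' "$i" > "$SRC_DIR/scan_$i.vdif"
done

n=0
for f in "$SRC_DIR"/*; do
    $TRANSFER_CMD "$f" "$DST_DIR/$(basename "$f")" &  # one stream per file
    n=$((n + 1))
    if [ "$n" -ge "$STREAMS" ]; then
        wait        # crude cap: drain the current batch of streams
        n=0
    fi
done
wait
echo "delivered $(ls "$DST_DIR" | wc -l) files using up to $STREAMS streams"
```

Running several independent streams is what lets a UDP-based protocol fill a broadband channel that a single stream cannot saturate.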
In contrast to Tsunami-UDP, the UDT protocol is supported by the FreeBSD developers and is included in the base FreeBSD binary packages. For broadband channels above 1 Gbit/s it is essential to be able to use the full channel bandwidth, which can be achieved with several data transmission streams, and the UDT implementation provides this capability. The software package (server) also includes an algorithm that adapts to the channel capacity, which is essential when data are transmitted during simultaneous recording. Observational data are transferred from the DTRS at the observatories to the CPC over broadband Internet connections. The experiments used channels with a bandwidth of 2 Gbit/s at each observatory and 4 Gbit/s at the CPC (St. Petersburg). VLBI session data were transferred with the UDT software package in four or eight streams: four streams for transmission simultaneous with recording at 16 Gbit/s, and eight streams after completion of the VLBI session. Figs. 3-5 show, as an example, the observational data traffic of six one-hour VLBI sessions on the two-element RT-13 radio interferometer over the broadband connections. The data-rate graphs were built with the standard MRTG package with five-minute averaging. Analysis of the experimental results showed that multi-stream data transmission over a broadband communication channel provides the required promptness of data delivery from the observatories to the CPC.

Fig. 3. Observational data traffic of six one-hour VLBI sessions received at the CPC RAS (data rate in Mbit/s vs. time of day)

Conclusion

The results of these studies indicate that modern SATA drives can be used in the registration and data transfer system in certain pool configurations, reducing its cost while significantly increasing storage capacity, since the maximum volume of modern SATA disks is larger than that of SAS or SSD drives.
Using the UDT protocol with multi-stream data transmission to the CPC simultaneously with registration at the observatories will enable delivery of data to the CPC almost in real time. Testing of the new registration and data transmission system, with multi-stream UDT transfer simultaneous with registration and a SATA-based disk subsystem, is planned on the RT-13 radio telescope at the IAA RAS observatory "Svetloe", where this work starts in 2016.

13th European VLBI Network Symposium and Users Meeting, Saint-Petersburg, IAA RAS, 2016, September