Integrating DPDK/SPDK with storage applications
Vishnu Itta, Mayank Patel (Containerized Storage for Containers)
Agenda
- Problem statement
- Current state of legacy storage applications and their issues
- Possible solutions
- Integrating DPDK-related libraries with legacy applications
- A client/server model approach to using SPDK
- Future work
Problem Statement
State of legacy storage applications
- Depend heavily on locks for synchronization
- Heavy use of system calls and context switches
- Depend on the kernel for scheduling
- Frequent memory allocations
- Lock contention and heavy page faults due to large memory requirements
- Data copied multiple times between userspace and the device

This leads to:
- Poor scaling with current hardware trends
- Excessive context switches
- High IO latencies
Possible solutions
- Lockless data structures
- Reactor pattern
- Use of HUGE pages
- Integration with the userspace NVMe driver from SPDK

This leads to:
- Scalability with the number of CPUs
- Fewer context switches
- Better performance
Integrating DPDK-related libraries with storage applications
Linking DPDK with an application
- Shared library: CONFIG_RTE_BUILD_SHARED_LIB=y
- Static library: CONFIG_RTE_BUILD_SHARED_LIB=n
- Use the rte.ext*.mk Makefiles from dpdk/mk:
  - For building applications: rte.extapp.mk
  - For building shared libraries: rte.extlib.mk, rte.extshared.mk
  - For building object files: rte.extobj.mk
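The external-app Makefiles above can be used roughly as follows. A minimal sketch, assuming a DPDK tree built at /opt/dpdk with the default Linux target (APP name, paths, and RTE_TARGET are illustrative):

```makefile
# Illustrative external-application Makefile using dpdk/mk
APP = legacy_storage_app
SRCS-y := main.c

RTE_SDK ?= /opt/dpdk
RTE_TARGET ?= x86_64-native-linuxapp-gcc

include $(RTE_SDK)/mk/rte.vars.mk
include $(RTE_SDK)/mk/rte.extapp.mk
```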
Integration of DPDK libraries with application libraries
- DPDK libraries register themselves via constructor functions
- Force linking of "unused" libraries into the shared library:
  LDFLAGS += -Wl,--no-as-needed -lrte_mempool -Wl,--as-needed
- Call rte_eal_init() from the legacy application to initialize the EAL
Memory allocation with the rte_eal library
- Memory is allocated from HUGE pages
- No address-translation overhead in the kernel (far fewer TLB misses)
- Issues with this approach in legacy multi-threaded applications:
  - Spinlock contention with multiple threads
  - Requires dedicating CPU cores to threads
Cache-based allocation with the mempool library
- Allocator of fixed-size objects
- Ring-based handler for managing object blocks
- Each get/put request is a CAS operation
Ring library to synchronize thread operations
- One thread processes all operations
- Messages pass between the core thread and the other threads through a RING buffer
- A single CAS operation per enqueue/dequeue
- Issue: requires a dedicated CPU core for the thread

But is there a way to integrate SPDK with the current state of legacy applications without much redesign?
Approach of client/server model to use SPDK
vhost-user + virtio
- Fast movement of data between processes (zero copy)
- Based on shared memory with lockless ring buffers
- Processes exchange data without involving the kernel

[Diagram: Legacy App with a vhost-user client talking to a vhost-user server backed by SPDK, sharing a table with data pointers plus "avail" and "used" ring buffers]
vhost-user client implementation
- A minimalistic library has been implemented for prototyping
- It is meant to be embedded into a legacy application, replacing its read/write calls
- Provides a simple API for read and write operations, in sync and async variants
- The "storage device" to open is a UNIX domain socket with a listening SPDK vhost server, instead of a traditional block device
Results
- Our baseline is SPDK with its userspace NVMe driver
- The vhost-user library can achieve around 90% of native SPDK's performance if tuned properly
- Said differently: the overhead of the virtio library can be as low as 10%
- For comparison, SPDK with libaio (simulating what could be achieved with a traditional IO interface) achieves 80%
Future work
- While the results may not look compelling yet, this is not the end
- SPDK scales nicely with the number of CPUs; if vhost-user can scale too, the gap between traditional IO interfaces and vhost-user could widen further
- This requires more work on the vhost-user library (adding support for multiple vrings, so that IOs can be issued to SPDK in parallel)
- The concern remains whether legacy apps can fully utilize this potential without a major rewrite
QUESTIONS?