dCache: Basic Tuning
dCache Workshop, July, Fermilab
Abhishek Singh Rana (rana@fnal.gov), UC San Diego
Acknowledgments

FNAL - Jon Bakken, Timur Perelmutov, Alex Kulyavtsev, Robert Kennedy
DESY - Michael Ernst, Patrick Fuhrmann
UCSD - Terrence Martin
A few throttled dCache parameters - 1

Configuration parameters throttled for high throughput:

Parameter                                  Cell       UCSD T2    FNAL T1 (approx)
Parallel streams                           SRM        200        20
Maximum transfers                          CopyMgr    10000      150
Maximum transfers                          Xfer Mgrs  -          180, 3
Maximum by same owner (Get, Put, Copy)     SRM        1000       100
Thread queue size                          SRM        -          -
A few throttled dCache parameters - 2

Configuration parameters throttled for high throughput:

Parameter                                  Cell       UCSD T2    FNAL T1 (approx)
Thread pool size (Get, Put, Copy)          SRM        300        250
Maximum waiting requests                   SRM        10000      1000
Ready queue size (Get, Put)                SRM        -          -
Maximum ready requests                     SRM        100        -
Maximum logins                             GFTP       500        -
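Several of these throttles correspond to variables in the dCacheSetup file. A minimal sketch, assuming the dCache 1.x parameter names for the Get request type (Put and Copy analogues are named correspondingly; verify against your release, and values here echo the UCSD column where the slides give one):

    # dCacheSetup -- illustrative values, parameter names per dCache 1.x
    parallelStreams=200                  # SRM parallel streams
    srmGetReqThreadQueueSize=10000       # request thread queue size (assumed value)
    srmGetReqThreadPoolSize=300          # request thread pool size
    srmGetReqMaxWaitingRequests=10000    # maximum waiting requests
    srmGetReqReadyQueueSize=10000        # ready queue size (assumed value)
    srmGetReqMaxReadyRequests=100        # maximum ready requests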
A few throttled dCache parameters - 3

Configuration parameters throttled for high throughput:

Parameter                                    Cell       UCSD T2      FNAL T1 (approx)
Maximum streams per client                   GFTP       200          20
WAN queue (applicable to GFTP on WAN)        Pools      16           2, 5
Default queue (applicable to dCap on LAN)    Pools      500          500, 600, 20
Cost factors (Space : Load)                  Pool Mgr   10:1 - 1:10  1:5
SRM: GFTP Door Selection
SRM now builds an ordered list of GFTP doors and chooses a random door among the top N. Two additional parameters in the SRM batch file control this:

login-broker-update-period - time between updates from the login broker. Default: 30 seconds.
num-doors-in-rand-selection - top N doors among which to select at random. Default: 3. If this is set equal to the total number of GFTP doors, door selection becomes completely random.
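A sketch of how the two options might be passed to the SRM cell in srm.batch (exact placement depends on the dCache release):

    # options on the SRM cell in srm.batch (illustrative)
    -login-broker-update-period=30     # seconds between login-broker updates
    -num-doors-in-rand-selection=3     # choose randomly among the top 3 doors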
SRM: Request Scheduling
Please see Timur's presentation for exact details.

States: Pending, Ready, TQueued, Running, Done or AsyncWait.

Throttles involved: request thread queue, request thread pool, maximum waiting requests, ready queue size, maximum ready requests.
GFTP: kernel TCP buffer tuning
Limited by system memory. In /etc/sysctl.conf:

    # increase Linux TCP buffer limits
    net.core.rmem_max =
    net.core.wmem_max =
    net.core.rmem_default = 8192
    net.core.wmem_default = 32768
    # increase Linux autotuning TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem =
    net.ipv4.tcp_wmem =
    #net.ipv4.tcp_mem =
    net.ipv4.tcp_mem =
    net.ipv4.tcp_window_scaling = 1
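The maximum and autotuning limits are site-specific. A commonly used starting point for high-bandwidth WAN transfers (illustrative values only, assuming roughly 16 MB ceilings; size them to your memory and bandwidth-delay product):

    # /etc/sysctl.conf -- example values, adjust per site
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min, default, max (bytes)
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_window_scaling = 1

Apply with "sysctl -p" after editing.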
PoolManager: Cost Module
Performance/Load Cost: the load of a pool is determined by comparing the current number of active and waiting transfers to the maximum number of concurrent transfers allowed. An average is taken over each mover type that has a non-zero maximum.

Space Cost: depends either on the free space in the pool or on the age of the least recently used file that would have to be deleted.

The total cost is a linear combination of the two, weighted by the space and CPU cost factors: total cost = spacecostfactor x space cost + cpucostfactor x performance cost.
PoolManager: Cost Module
Example scenarios:

Give space cost more weight when some pools are almost full.
Give load cost more weight when most pools have free space available and some pools are expected to have a lot of transfer activity.
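The weights are set on the PoolManager. A hedged sketch, assuming the classic dCache "set pool decision" command in PoolManager.conf or the admin interface:

    # favor space cost while some pools are nearly full
    set pool decision -spacecostfactor=3.0 -cpucostfactor=1.0
    # favor load cost when space is plentiful but transfers are heavy
    set pool decision -spacecostfactor=1.0 -cpucostfactor=5.0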
Pools: Multiple Mover Queues
If requests are queued, slow dCap jobs can clog up the queue and keep fast GridFTP requests from getting through, even though the pool just sits waiting for the dCap jobs to request more data. This could be fixed temporarily by raising the maximum number of active requests, but too many concurrent GridFTP requests would then put a very high load on the pool host. For a multi-mover-queue setup, the pools have to be told to start several queues and the doors have to be configured to use one of them. Using a separate queue for the movers, depending on the door that initiates them, cleanly separates requests of different protocols, as in the sketch below.
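A sketch of a two-queue setup, assuming the dCache 1.x dCacheSetup variables poolIoQueue, gsiftpIoQueue, and dcapIoQueue (verify the names against your release):

    # dCacheSetup -- declare named mover queues on the pools...
    poolIoQueue=wan,lan
    # ...and point each door type at one of them
    gsiftpIoQueue=wan
    dcapIoQueue=lan

Per-queue limits can then be set per pool in the admin interface, e.g. "mover set max active 16 -queue=wan".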
SRMCP Usage: Typical Numbers for Good Transfers
Number of SRMCP clients at once: 9
Number of files per SRMCP client: 5-10
Number of streams per file: 10
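Putting these numbers together, each of the nine concurrent clients might be invoked like this sketch (hypothetical hostnames and paths; -streams_num is the srmcp option for streams per file):

    srmcp -streams_num=10 \
      srm://source.example.org:8443/pnfs/example.org/data/file1 \
      file:////local/scratch/file1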