1
TS7700 Performance and Capacity Daily and Hourly charts
This PowerPoint presentation graphs many of the TS7700 statistics that you can see in the VEHSTATS reports. Use the charts to see where anomalies or trends are occurring. Then, if needed, you can investigate in more detail in the 15-minute interval VEHSTATS reports or in the Hourly data tab of the spreadsheet used to produce this presentation. To modify the charts here, you must modify the charts in the Excel spreadsheet; all of the charts are linked in from the spreadsheet.
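If you prefer to inspect the raw numbers programmatically rather than in Excel, a minimal sketch along these lines reads the hourly tab directly; the workbook file name and sheet name here are assumptions, not anything fixed by VEHSTATS.

```python
# Minimal sketch: read the hourly data behind these charts so an anomaly
# spotted in a chart can be checked against the raw numbers.
# "ts7700_vehstats.xlsx" and "Hourly data" are assumed names; adjust them
# to match your own spreadsheet.
import pandas as pd

hourly = pd.read_excel("ts7700_vehstats.xlsx", sheet_name="Hourly data")
print(hourly.head())       # first few hourly intervals
print(hourly.describe())   # quick summary of each statistic
```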
2
Short description This presentation contains information and graphs about the performance and capacity of a TS7700 virtualization engine. Only the statistics for one cluster can be displayed in a single presentation, even if the cluster is in a multi-cluster grid. If you want to see the views for other clusters in the grid, import the data for each of those clusters into the spreadsheet and generate a separate PowerPoint presentation for each one.
3
Agenda This presentation contains the following sections:
Overview
Data transfer
Virtual mounts
Virtual mount times
Virtual Drive and Physical Drive usage
Physical mounts
Physical mount times
Data compression ratios
Blocksizes
Tape Volume Cache performance
Throttling
Multi cluster configuration (Grid)
Import/Export Usage
Capacities: Active Volumes and GB stored
Capacities: Cartridges used
Pools (Common Scratch Pool and up to 4 Storage Pools)
In Screen Show mode, you can simply click on a section name to go to the beginning of that section. If not in Screen Show mode, right click on the section name and then click "Open Hyperlink" in the pop-up box.
4
Preliminary notes In each section:
First, the daily values for the period of analysis are shown. Second, the hourly values for a selected day are shown. The hour on the axis is the end-of-hour time. To further differentiate them from the daily charts, the hourly charts have headers in this dark blue color.
5
Overview In my opinion, the overall average mount time and the overall cache miss % are good indicators of the performance of the TS7700. Hopefully, the cache miss % can be kept low and therefore the average virtual mount time will also be small. This slide also shows the maximum mounts and data transfers per day.
6
Overview for the selected day
These are statistics for the single day selected.
7
Host Data transfers The following charts show
Overview of data transfer: GB read, GB write, Total data transfers
Max. and avg. throughput: Max. read throughput, Max. write throughput, Avg. throughput
Header slide for charts about the Host data transfer to and from the TS7700. Return to Agenda
8
Overview of data transfer
With the stacked bar, you can see what proportion of the Daily Total GB is represented by Read GB and Write GB respectively. This chart is more of a capacity planning chart as it gives you an idea of how much workload (measured in host GB) is being done per day. Comment:
9
All data transfer Comment:
A different view for the same data where you can visually see the magnitude of the Write GB and Read GB as well as the Total GB Comment:
10
Host Read How much data is read from the TS7700 per day. Comment:
11
Host Write How much data is written to the TS7700 per day. This is measured in Host GB. Comment:
12
Maximum and average throughput
MaxQtr_Rd_MB_s, MaxQtr_Wr_MB_s, and Max_Qtr_MB_s are the maximum read MB/sec, write MB/sec, and total MB/sec, respectively, over any 15-minute interval recorded by the TS7740 on that day. Comment:
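For illustration, here is a minimal sketch of how these daily figures can be reproduced from quarter-hour samples; the DataFrame layout and column names are assumptions, not the actual VEHSTATS field layout.

```python
# Minimal sketch: daily maximum and average throughput derived from
# 15-minute samples. Column names are illustrative stand-ins.
import pandas as pd

quarters = pd.DataFrame({
    "timestamp": pd.date_range("2009-01-16", periods=96, freq="15min"),
    "rd_mb_s": [40.0] * 96,   # read MB/sec in each 15-minute interval
    "wr_mb_s": [80.0] * 96,   # write MB/sec in each 15-minute interval
})
quarters["total_mb_s"] = quarters["rd_mb_s"] + quarters["wr_mb_s"]

daily = quarters.set_index("timestamp").resample("D").agg(
    {"rd_mb_s": "max",               # MaxQtr_Rd_MB_s
     "wr_mb_s": "max",               # MaxQtr_Wr_MB_s
     "total_mb_s": ["max", "mean"]}  # Max_Qtr_MB_s and the daily average
)
print(daily)
```

The same resample with an hourly frequency reproduces the hourly charts later in this section.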
13
Maximum read throughput
This indicates peak read MB/sec per day, measured over a 15 minute interval. Comment:
14
Maximum write throughput
This indicates peak write MB/sec per day, measured over a 15 minute interval. Comment:
15
Average throughput Comment:
Average total MB/sec for the complete day. Since it’s an average for the whole day, it doesn’t really tell you about the peaks. Comment:
16
Cache Data Flows Comment: 16
The MaxQtr_Rd_MB_s and MaxQtr_Wr_MB_s and Max_Qtr_MB_s are the Maximum Read MB/sec and Write MB/sec and Total MB/sec respectively for a 15 minute interval recorded by the TS7740 on that day. Comment: 16
17
A different view for the same data where you can visually see the magnitude of the Write GB and Read GB as well as the Total GB in each hour of the day Comment:
18
Max_Rd_MB_s, Max_Wr_MB_s, and Max_MB_s are the maximum read MB/sec, write MB/sec, and total MB/sec, respectively, over any 15-minute interval recorded by the TS7740 in the hour. The average is measured over the four 15-minute intervals in an hour. Comment:
19
Maximum read MB/sec in an hour, measured over a 15 minute interval.
Comment:
20
Maximum write MB/sec in an hour, measured over a 15 minute interval.
Comment:
21
Average total MB/sec for an hour.
Comment:
22
Hourly Cache Data Flows
Average total MB/sec for an hour. Comment: 22
23
Virtual mounts The following charts show
Overview of virtual mounts by type
Scratch mounts
Read hit mounts
Read miss mounts
% mount misses
Header page for charts about virtual mounts. A virtual mount can be a scratch mount for output, a read hit mount where the volume is still in the cache, or a read miss mount where the data is no longer in cache and needs to be recalled from the stacked tape cartridge. Return to Agenda
24
Virtual mounts Comment:
In most TS7740 Virtualization Engines, you would expect the Read Misses to be a relatively small proportion of the total mounts. With the stacked bar, you can see what proportion of the Daily Total mounts is represented by Scratch Mounts, Read Hits, and Read Misses respectively Comment:
25
Virtual mounts Comment: 25
A different view of the same data where you can visually see the magnitude of each mount type Comment: 25
26
Virtual mounts Look at Daily Scratch Mounts only Comment:
27
Virtual Mounts Look at Daily Read Hit Mounts only Comment:
28
Virtual Mounts Look at Daily Read Miss Mounts only Comment:
29
Virtual Mounts Comment:
What percent of the Total Daily Virtual Mounts are Cache Misses? This will vary depending upon the workload. Typically, weekly or monthly workloads will have higher cache misses than daily workloads. Archive/retrieval workloads (e.g. HSM ML2) will also typically have higher cache misses. Comment:
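As a rough illustration of how this percentage is formed (the variable names and counts are made up, not VEHSTATS fields):

```python
# Minimal sketch: daily cache miss percentage from the three mount types.
scratch_mounts = 1200
read_hit_mounts = 750
read_miss_mounts = 50

total_mounts = scratch_mounts + read_hit_mounts + read_miss_mounts
cache_miss_pct = 100.0 * read_miss_mounts / total_mounts
print(f"Cache miss %: {cache_miss_pct:.1f}")   # 2.5% of all virtual mounts
```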
30
Virtual mounts Comment:
Virtual mounts by hour. The quantity of the virtual mounts can vary greatly from hour to hour depending upon the workloads being run in that hour. Comment:
31
Virtual mounts Look at Hourly Scratch Mounts only Comment:
32
Virtual Mounts Look at Hourly Read Hit Mounts only Comment:
33
Virtual Mounts Look at Hourly Read Miss Mounts only Comment:
34
Virtual Mounts Comment:
What percent of the Total Hourly Virtual Mounts are Cache Misses? This could vary greatly from hour to hour due to the workloads being run during that hour. Comment:
35
Virtual mount times The following charts show the average virtual mount times for
Overall virtual mounts
Scratch mounts
Read hits
Read misses
Header page for charts about the virtual mount times. How long does it take to satisfy a virtual mount of each type? Return to Agenda
36
Average virtual mount times
The average virtual mount time may have some correlation to the number of virtual mounts. At high levels of virtual mounts, there may be queuing for drives or cartridges for read misses. Comment: 36
37
Average virtual mount time vs. Cache Miss Percent
The average virtual mount time will usually have a high correlation to the Cache Miss %. As the Cache Miss % goes up, a higher proportion of the virtual mounts will be longer read miss mounts and the average will increase. Comment: 37
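To see why the two track each other, the overall average can be treated as a weighted mix of fast mounts (scratch and read hits, a few seconds) and slow read-miss mounts (recall from tape); the times below are only illustrative, taken from the typical ranges quoted on the following slides.

```python
# Minimal sketch: overall average mount time as a weighted mix of fast
# and slow mounts. The times are illustrative, not measured values.
fast_mount_secs = 3.0      # typical scratch / read-hit mount time
miss_mount_secs = 80.0     # typical read-miss (recall) mount time

for miss_pct in (1, 5, 10, 20):
    avg = ((100 - miss_pct) * fast_mount_secs + miss_pct * miss_mount_secs) / 100
    print(f"{miss_pct:>2}% cache misses -> average mount time {avg:.1f} s")
```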
38
Average mount times for scratch mounts
Scratch mount times should usually be 2 to 3 seconds, possibly longer in grids. Comment:
39
Average mount times for read hits
Read hit mount times should usually be 2 to 3 seconds, possibly longer in grids. Comment:
40
Average mount times for read misses
Read miss mount times will usually be 60 to 100 seconds and can be even larger at high numbers of read misses and where there is contention for the backend drives. Comment:
41
Average hourly virtual mount times
Comment:
42
Average mount times for scratch mounts
Comment:
43
Average mount times for read hits
Comment: 43
44
Average mount times for read misses
Comment:
45
Drive usage The following charts show average and maximum usage of
Virtual drives
Physical drives
Header page for charts about virtual drives and physical drives. A TS7700 cluster can have up to 256 virtual drives and as many as 16 physical drives. Return to Agenda
46
Usage of virtual drives
The TS7740 has 256 virtual drives, so make sure the number in use is not approaching that maximum. Max is the highest number of virtual drives seen in use in any 15-minute interval during the day. Comment:
47
Usage of physical drives
A TS7740 can have up to 16 backend physical drives, but your system may have fewer. The Max Physical Mounted is usually the number of drives installed. If the average is near the Max most of the time, it may be time to add more drives or get another TS7740 subsystem. Comment:
48
Usage of virtual drives
Comment:
49
Usage of physical drives
Comment:
50
Physical mounts The following charts show all physical mounts and how they are split up into
Recall mounts (staging)
Migration mounts (writing to tape)
Reclamation mounts
Header page for physical mounts. Physical mounts can be for migration to tape, recall from tape, or the reclamation process. Generally, the recall mounts will be the largest proportion of the physical mounts. Return to Agenda
51
All physical mounts Comment:
Most physical mounts will be for recalls unless there are very few recalls. Usually reclamation mounts are quite small. Comment:
52
Recall mounts Comment:
53
Recall mounts vs. Read Misses
You would expect most Read Miss mounts to result in a Physical Recall Mount. However, for some situations, the cartridge may already be mounted and the TS7700 only needs to move to the new location to retrieve the virtual volume. To the extent that this happens, Recall Mounts may be less than Read Misses. It is also possible for Read Miss mounts in one hour to have the Recall mount in a subsequent hour. Comment: 53
54
Migration mounts Comment:
These are mounts to stack and write data to a tape. With large cartridges, a single mount can last many hours. If you use many storage pools, this number might be higher as the TS7700 may have to switch between writing to one pool and then to another pool, causing more mounts. Comment:
55
Reclamation mounts Comment:
These mounts (both for input and output) occur when the TS7700 is trying to consolidate the data that is still active onto fewer cartridges. Comment:
56
Hourly physical mounts
Comment:
57
Hourly Recall mounts Comment:
58
Hourly Migration mounts
Comment:
59
Hourly Reclamation mounts
Comment:
60
Physical mount times The following charts show
Maximum physical mount time
Average physical mount time
Header page for charts about physical mount times. Primarily look at the average physical mount times, as any pause or problem during the day could cause the maximum physical mount time to be quite large. Return to Agenda
61
Daily Maximum physical mount time
Maximum physical mount times could be quite large if there was a pause of the library or large queue of mounts to be satisfied. I generally don’t find this statistic very useful. Comment:
62
Daily Average physical mount time
For 3592 drives, expect physical mount times to be in the 30 to 80 seconds range. Higher times may indicate a large number of recall mounts where there are not enough physical drives at times and therefore there is queuing for a drive (or possibly queuing for the robot) to become available. Comment:
63
Daily Average physical mount time
For 3592 drives, expect physical mount times to be in the 40 to 80 seconds range. Higher times may indicate a large number of recall mounts where there are not enough physical drives at times and therefore there is queuing for a drive (or possibly queuing for the robot) to become available. Slide added 01/16/2009. Comment: 63
64
Physical mount time by hour
Comment:
65
Physical mount time by hour
For 3592 drives, expect physical mount times to be in the 40 to 80 seconds range. Higher times may indicate a large number of recall mounts where there are not enough physical drives at times and therefore there is queuing for a drive (or possibly queuing for the robot) to become available. Comment:
66
Physical mount time by hour
For 3592 drives, expect physical mount times to be in the 40 to 80 seconds range. Higher times may indicate a large number of recall mounts where there are not enough physical drives at times and therefore there is queuing for a drive (or possibly queuing for the robot) to become available. Slide added 01/16/2009. Comment: 66
67
Data Compression Ratios
The following charts show
Compression ratios (Total, read, write)
Write compression ratio
Read compression ratio
Header page for charts about compression ratios. Generally, the write compression ratio is probably a better indicator of the compression ratios that you are currently achieving. Return to Agenda 67
68
Daily Compression Ratios
How well is the data compressing as it moves from the channel to the cache? This metric tells you. Comment: 68
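As a rough illustration, the ratio is simply host (channel) bytes divided by the compressed bytes stored in the cache; the numbers and variable names below are made up.

```python
# Minimal sketch: write compression ratio = host GB written / compressed GB
# landed in the tape volume cache. Sample numbers only.
host_write_gb = 5400.0        # GB written by the host over the day
compressed_write_gb = 2000.0  # GB stored in cache after compression

write_compression_ratio = host_write_gb / compressed_write_gb
print(f"Write compression ratio: {write_compression_ratio:.2f}:1")  # 2.70:1
```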
69
Daily Compression Ratios
Usually, the write compression ratio is what I use for the overall compression. Comment: 69
70
Daily Compression Ratios
71
Hourly Compression Ratios
Hourly chart Comment: 71
72
Hourly Compression Ratios
Hourly chart Comment: 72
73
Hourly Compression Ratios
74
Blocksize Performance depends on the block size used. The bigger, the better. The following chart shows information about the block sizes used. The block size ranges are:
Less than or equal to 2KB
Less than or equal to 4KB
Less than or equal to 8KB
Less than or equal to 16KB
Less than or equal to 32KB
Less than or equal to 64KB
Greater than 64KB
The chart shows the percent of total blocks for each block size range. The majority of block sizes should be in the less than or equal to 32K range (standard QSAM) or the less than or equal to 64K range (DFDSS and FDR). Return to Agenda
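As a rough illustration of how the percent-of-total-blocks figure is derived (the counts are made-up sample data; the ranges mirror the list above):

```python
# Minimal sketch: percent of total blocks in each block-size range.
block_counts = {
    "<=2KB": 1000,
    "<=4KB": 2000,
    "<=8KB": 3000,
    "<=16KB": 4000,
    "<=32KB": 60000,   # the majority is expected here (standard QSAM)
    "<=64KB": 28000,   # or here (DFDSS and FDR)
    ">64KB": 2000,
}
total = sum(block_counts.values())
for size_range, count in block_counts.items():
    print(f"{size_range:>7}: {100.0 * count / total:5.1f} % of blocks")
```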
75
Daily % Blocks Transferred
Comment:
76
Hourly % Blocks Transferred
Comment:
77
Tape Volume Cache Performance
Tape Volume Cache Management may be used to influence the use of the Tape Volume Cache (TVC). This can be done with the Storage Class attribute. This parameter selects Preference Groups (PG0 or PG1). With no usage of IART, all volumes default to PG1.
PG0: Preferential early removal of volumes from cache, largest first. IART >= 100
PG1: Removal of volumes after PG0, least recently used first. IART < 100
The following charts show:
Number of virtual volumes in cache (PG0 and PG1)
Amount of GB (compressed) in cache (PG0 and PG1)
Duration in cache in hours, separated for PG0 and PG1
Number of virtual volumes removed from cache in the last 4 hours and the last 48 hours, separated for PG0 and PG1
Header page for the Tape Volume Cache charts. Return to Agenda
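A small sketch of the IART rule stated above; the function name is mine, only the threshold comes from the slide.

```python
# Minimal sketch of the preference-group rule described above:
# IART >= 100 -> PG0 (preferential early removal, largest first),
# IART <  100 -> PG1 (removed after PG0, least recently used first).
def preference_group(iart: int) -> str:
    return "PG0" if iart >= 100 else "PG1"

for iart in (0, 50, 100, 9999):
    print(f"IART={iart:>4} -> {preference_group(iart)}")
```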
78
Virtual Volumes in Cache
Usually, you would expect many more PG1 volumes to be in cache, as PG0 volumes are normally removed from the cache as soon as they are pre-migrated (and replicated to other clusters if in a grid). Comment:
79
Data in Cache Usually, you would expect much more PG1 data to be in cache as PG0 data is normally removed from the cache rather quickly. The stacked bar here should be approximately 90% of your actual cache size unless you have predominantly PG0 data. Comment:
80
Residency Time in Cache
Residency time for PG1 data should be the largest with relatively small times for PG0 data. Residency time tends to vary with the amount of PG1 data being written into the cache. Comment:
81
Residency Time in Cache
Residency time for PG1 data should be the largest with relatively small times for PG0 data. Residency time tends to vary with the amount of PG1 data being written into the cache. Comment:
82
Virtual volumes purged
This is the number of virtual volumes that were removed from the cache in the last 4 hours. This is not the pre-migration workload. I’m not sure how I would use this statistic. Comment:
83
Virtual volumes purged
This is the number of virtual volumes that were removed from the cache in the last 48 hours. This is not the pre-migration workload. I’m not sure how I would use this statistic. Comment:
84
Virtual volumes in Cache
Usually, you would expect many more PG1 volumes to be in cache, as PG0 volumes are normally removed from the cache as soon as they are pre-migrated (and copied to other clusters if in a grid). Comment:
85
Data in Cache by hour Comment:
Usually, you would expect much more PG1 data to be in cache as PG0 data is normally removed from the cache rather quickly. The stacked bar here should be approximately 90% of your actual cache size unless you have predominantly PG0 data. Comment:
86
Residency Time in Cache (4 Hr.)
Residency time for PG1 data should be the largest with relatively small times for PG0 data. Residency time tends to vary with the amount of PG1 data being written into the cache. Comment:
87
Residency Time in Cache (PG1)
Comment:
88
Virtual volumes purged - PG0
This is the number of PG0 virtual volumes that were removed from the cache in the last 4 and last 48 hours. This is not the pre-migration workload. I’m not sure how I would use this statistic. Comment:
89
Virtual volumes purged – PG1
This is the number of PG1 virtual volumes that were removed from the cache in the last 4 and the last 48 hours. This is not the pre-migration workload. I’m not sure how I would use this statistic. Comment:
90
Throttling Throttling will occur if the amount of data exceeds certain thresholds. This is only a simplified definition; the redbook SG , Chapter 8, Performance and monitoring, contains a detailed description of throttling. The following charts show if there is
Write throttling (data arriving faster than can be migrated to tape)
Copy throttling (data arriving faster from other clusters than can be migrated to tape)
GB to Migrate queue (1.5 only)
These values are the maximum for the day. Zero is the best case. Header page for the Write Throttling and Copy Throttling charts. A White Paper entitled "Understanding, Monitoring, and Tuning the TS7700 Performance", published in April 2009, has very detailed information about the throttling controls. Return to Agenda
91
Write/Copy throttling
If you are getting throttling, you should understand why. Usually a very high peak write workload is hitting the TS7740 for multiple consecutive hours of the day. Write throttling could also occur if there are not enough backend drives to handle the pre-migration of those high write throughputs. Copy throttling controls the input of data from other clusters. It is often turned on at the same time Write Throttling is turned on. Deferred Copy Throttling slows down copies to other clusters to allow more peak Host input. Comment:
92
Write Data vs. Write throttling %
Slide added 01/16/2009. Throttling could be related to high write transfers during a period. This chart lets you compare the write throughput against the write throttling % to see if there is a correlation, although this might be more evident on the hourly charts. Comment: 92
93
Write Data vs. Copy throttling %
Slide added 01/16/ Copy Throttling could be related to high data transfers during a period. This slide lets you compare the Write throughput against the Copy throttling % to determine if there is a correlation, although this might be more evident on the hourly charts. Comment: 93
94
Data to Migrate Comment: 94
Slide added 05/11. During the day, what was the maximum amount of data that had built up in the cache but still needed to be pre-migrated? Write throttling can begin if this builds to more than 1000 GB of data. This statistic is only available beginning with the 1.5 version of the microcode. Comment: 94
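A minimal sketch of that rule of thumb; the 1000 GB figure comes from the note above, while the hourly backlog numbers and names are invented for illustration.

```python
# Minimal sketch: flag intervals where the premigration backlog exceeded
# the ~1000 GB level at which write throttling can begin.
WRITE_THROTTLE_GB = 1000

gb_to_migrate_by_hour = {8: 250, 9: 700, 10: 1250, 11: 1800, 12: 900}

for hour, backlog_gb in gb_to_migrate_by_hour.items():
    if backlog_gb > WRITE_THROTTLE_GB:
        print(f"{hour:02d}:00  backlog {backlog_gb} GB -> write throttling likely")
```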
95
Write/Copy throttling hourly
If you are getting throttling, you should understand why. Usually a very high peak write workload is hitting the TS7740 for multiple consecutive hours of the day. Write throttling could also occur if there are not enough backend drives to handle the high write throughputs. Comment:
96
Write Data vs. Write throttling %
Slide added 01/16/2009. Throttling could be related to high write transfers during a period. This chart lets you compare the write throughput against the write throttling % to see if there is a correlation, although this might be more evident on the hourly charts. Comment: 96
97
Write Data vs. Copy throttling %
Slide added 01/16/2009. Throttling could be related to high write transfers during a period. This slide lets you compare the Write throughput against the Copy throttling % to determine if there is a correlation, although this might be more evident on the hourly charts. Comment: 97
98
Data to Migrate Comment: 98
Slide added 05/11. During each hour, what was the maximum amount of data that had built up in the cache but still needed to be pre-migrated? Write throttling can begin if this builds to more than 1000 GB of data. This statistic is only available beginning with the 1.5 version of the microcode. Comment: 98
99
Multi cluster Grid Queues
If the TS7700 is a member of a multi cluster GRID configuration, the following charts contain these statistics (point-in-time: end of interval and/or maximum):
Virtual volumes remaining to be received by this cluster
GiB remaining to be received by this cluster
GiB remaining to be copied to another cluster
Deferred Copy minutes and Immediate Copy minutes
GiB and MiB/sec are compressed numbers. GiB = 1024 * 1024 * 1024 bytes. Header page for gridded TS7700s. You may want to delete this set of slides if your TS7700 is not in a grid. Return to Agenda
100
Virtual Volumes to be Received
How many volumes are queued up waiting to be copied FROM another cluster. This queue is actually in the cluster that is to receive the data. These are point-in-time statistics. Max is the highest amount at the end of any 15 minutes during the day. Comment:
101
Data to be Received Comment:
What amount of GB is queuing up that needs to be copied FROM another cluster. This queue is actually in the cluster that is to receive the data. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the day. Comment:
102
Maximum Data to Copy Comment: 102
What amount of GB is queuing up that needs to be copied TO another cluster. This statistic is only available beginning with the 1.5 version of the microcode These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the day. Comment: 102
103
Deferred Copy Minutes Comment: 103
How many minutes have the Deferred Copies queue gotten behind. Usually dependent on write workload and grid bandwidth and grid latency. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the day. Comment: 103
104
Immediate Copy Minutes
How many minutes has the Immediate Copies queue gotten behind. Hopefully not much unless grid bandwidth is insufficient or grid latency is too high. These are point-in-time statistics. Max is the highest amount at the end of any 15 minutes during the day. Comment: 104
105
Virtual Volumes to Receive by hour
How many volumes are queued up waiting to be copied FROM another cluster. This queue is actually in the cluster that is to receive the data. These are point-in-time statistics. Max is the highest amount at the end of any 15 minutes during the hour. Comment:
106
Data to Receive by hour Comment:
What amount of MB is queuing up that needs to be copied FROM another cluster. This queue is actually in the cluster that is to receive the data. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the hour. Comment:
107
Data to Copy by hour Comment: 107
What amount of GB is queuing up that needs to be copied TO another cluster. This statistic is only available beginning with the 1.5 version of the microcode. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the hour. Comment: 107
108
Deferred Copy Minutes Comment: 108
How many minutes have the Deferred Copies queue gotten behind. Usually dependent on write workload and grid bandwidth and grid latency. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the hour. Comment: 108
109
Immediate Copy Minutes
How many minutes have the Immediate Copies queue gotten behind. Hopefully not much unless there is insufficient grid bandwidth or too much latency in the grid. These are point-in-time statistics. Max is highest amount at the end of any 15 minutes during the hour. Comment: 109
110
Multi cluster Grid Transfers
If the TS7700 is a member of a multi cluster GRID configuration, the following charts contain these statistics (for the interval: day or hour):
GiB copied from this cluster during the interval
Max. MiB/sec copied from this cluster
Avg. MiB/sec copied from this cluster
Remote Read MiB from other clusters
Remote Write MiB to other clusters
GiB and MiB/sec are compressed numbers. MiB = 1024 * 1024 bytes. Header page for gridded TS7700s. You may want to delete this set of slides if your TS7700 is not in a grid. Return to Agenda 110
111
Grid Activities Comment: 111
How much data per day is being copied to other clusters in the Grid. These are in compressed GB. Comment: 111
112
Grid Activities Comment: 112
What is the maximum hourly MB/sec (compressed numbers) throughput being copied to other clusters in the grid. Comment: 112
113
Grid Activities Comment: 113
What is the average (over 24 hours) MB/sec (compressed numbers) throughput being copied to other clusters in the grid. Comment: 113
114
Grid Activities Comment: 114
How much data is being remotely read (i.e. read from another cluster because this cluster does not have a copy of the volume). How much data is remotely written to another cluster. Management class may specify ND to make this happen in 2-way grid. Comment: 114
115
Grid Activity
116
Hourly MiB copied Comment: 116
How much data per hour is being copied to other clusters in the Grid. These are compressed MiB numbers. Comment: 116
117
Max. MiB/sec copied Comment: 117
What is the maximum 15 minute MiB/sec (compressed numbers) throughput in the hour being copied to other clusters in the grid. Comment: 117
118
Avg. Hourly MiB/sec copied
What is the average (over this hour) MB/sec (compressed numbers) throughput being copied to other clusters in the grid. This is the average of the 4 15-minute intervals captured. Comment: 118
119
Remote Data Transfer Comment: 119
How much data (per hour) is being remotely read (i.e. read from another cluster because this cluster does not have a copy of the volume). How much data (per hour) is remotely written to another cluster. Management class may specify ND for CL0 to make this happen in 2-way grid. Comment: 119
120
Import/Export The following charts show the number of
Imported and exported physical cartridges per period
Imported and exported virtual volumes per period
Imported and exported GB (compressed) per period
The following charts will have zero values if the Import/Export function is not in use. If you are using TS7740 Copy Export for DR, these charts tell you the cartridges, virtual volumes, and GB of data exported or imported each day. If you are not using Import/Export, you may want to delete this set of charts. Return to Agenda
121
Import/Export Comment:
How many 3592 cartridges per day are being exported from the TS7700 or imported back into the TS7700 Comment:
122
Import/Export Comment:
How many virtual volumes per day are being exported from the TS7700 or imported back into the TS7700 Comment:
123
Import/Export Comment:
How much GB per day is being exported from the TS7700 or imported back into the TS7700 Comment:
124
Import/Export Comment:
How many 3592 cartridges per hour are being exported from the TS7700 or imported back into the TS7700 Comment:
125
Import/Export Comment:
How many virtual volumes per hour are being exported from the TS7700 or imported back into the TS7700 Comment:
126
Import/Export Comment:
How much MB per hour are being exported from the TS7700 or imported back into the TS7700. Comment:
127
Capacities: Active data
The following charts show
The number of Active logical volumes
The amount of Active GB of data
The amount of TVC occupied by PG0 and PG1 (new in version 2.1)
The GiB numbers are GiB after compression. Header page for charts about the amount of active data stored in the TS7700. These are primarily capacity indicators showing the number of virtual volumes in use and the Gigabytes (compressed amount) of tape data stored in the TS7700. Return to Agenda
128
Daily Active logical volumes
How many logical volumes are being managed? If you are using dual copy pools or copy export pools, a virtual volume may be counted twice for this metric. Comment:
129
Daily Active Data Comment:
How much data is active in the TS7740 system? Remember, some of this data may be for dual copies of same data. Comment:
130
TVC occupied by PG0 and PG1
131
Active Logical Volumes
How many logical volumes are being managed? If you are using dual copy pools or copy export pools, a virtual volume may be counted twice for this metric. Comment:
132
Active Data How much data is active in the TS7740 system? Remember, some of this data may be for dual copies of same data. Comment:
133
Capacities: Cartridges used
First there will be an overview of the maximum and minimum number of cartridges per type over the whole period, followed by charts about the usage of each type of cartridge, private and scratch. Capacities vary with the model of the 3592 drive used:
Media Type - Native Capacity
JA Data - 640 GB (TS1130), 500 GB (TS1120), 300 GB (J1A)
JB Extended Data - 1.6 TB (TS1140), 1 TB (TS1130), 700 GB (TS1120)
JC Advanced Data - 7 TB (TS1150), 4 TB (TS1140)
JD Advanced Data - 10 TB (TS1150)
JJ Economy - 128 GB (TS1130), 100 GB (TS1120), 60 GB (J1A)
JK Advanced Economy - 900 GB (TS1150), 500 GB (TS1140)
JL Advanced Economy - 2 TB (TS1150)
JR Economy WORM - 128 GB (TS1130), 100 GB (TS1120), 60 GB (J1A)
JW WORM - 640 GB (TS1130), 500 GB (TS1120), 300 GB (J1A)
JX Extended WORM - 1.6 TB (TS1140), 1 TB (TS1130), 700 GB (TS1120)
JY Advanced WORM - 7 TB (TS1150), 4 TB (TS1140)
JZ Advanced WORM - 10 TB (TS1150)
If all values of a chart are zero, that cartridge type isn't in use. Only daily charts will be shown for cartridges. If you are using the Common Scratch Pool, then the number of scratch cartridges in Pools 1-32 should be relatively small. If you have dedicated scratch cartridges in a pool, these show up in the Pool 1-32 information and not in the Common Scratch Pool. Return to Agenda
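As a rough capacity-planning illustration, the native capacities above can be applied to cartridge counts; the counts below are invented, and the per-cartridge capacities assume one particular drive generation (TS1130).

```python
# Minimal sketch: estimate raw backend capacity from cartridge counts,
# using the TS1130 native capacities from the table above.
native_capacity_gb = {"JA": 640, "JB": 1000, "JJ": 128}   # per cartridge (TS1130)
cartridge_counts = {"JA": 300, "JB": 120, "JJ": 40}        # made-up counts

total_gb = sum(native_capacity_gb[m] * n for m, n in cartridge_counts.items())
print(f"Native backend capacity: {total_gb / 1000:.1f} TB (before compression)")
```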
134
Overview about cartridges
Cartridges are either assigned to the Common Scratch Pool or to one of the 32 data storage pools. If you are borrowing from the CSP, the number of scratch cartridges in the 32 data storage pools will be small, usually 2 per active pool.
135
3592 Cartridges These are cartridges assigned to any of the 32 storage pools but not the Common Scratch Pool. Comment:
136
3592 Cartridges How many JB cartridges overall. Comment: 136
137
3592 Cartridges How many JA cartridges overall. Comment:
138
3592 Cartridges How many JJ cartridges overall. Comment:
139
CSP and Storage Pools The following charts show pools:
Common Scratch Pool
Up to 4 Storage Pools (default is Pools 01-04)
Only daily charts will be shown for pools. The daily charts will show the values at the end of the day. This is the header slide for charts about the Common Scratch Pool and for up to 4 Storage Pools. In the ORDERxxx member, you may choose up to 4 pools that you want to show. The default ORDERxxx chooses Pools 01, 02, 03, 04. Return to Agenda
140
Common Scratch Pool The values for the Common Scratch Pool (CSP) will show the number of scratch volumes in the CSP at the end of the period. Only three types of cartridges will be shown, depending on what was specified in the list ORDERV12 (for VEHSTATS), for example:
3592 JA cartridges
3592 JJ cartridges
3592 JB cartridges
In most cases, the storage pools will borrow cartridges from the CSP and return cartridges to the Common Scratch Pool when no longer used. Information about the Common Scratch Pool
141
Common Scratch Pool Daily
How many cartridges of each type are in the Common Scratch Pool at the end of each day. Comment:
142
Storage Pools The following charts display information for up to 4 storage pools:
Number of Active virtual volumes
Amount of Active GiB (compressed)
GiB Written to Pool & GiB Read from Pool
Number of Scratch and Private cartridges used by the Pool
If a Pool borrows from the Common Scratch Pool, usually there will be only 2 scratch cartridges. The default is storage pools 01, 02, 03, 04, but you can choose which 4 you want to display in the ORDERxxx member used by VEHSTATS. GiB Written is the TS7740 Pre-Migration backend workload. GiB Read is the TS7740 Recall from Tape backend workload. These are not valid for a TS7720, so you could delete these slides if you only have TS7720s.
143
Active Virtual Volumes by Pool
Active virtual volumes at the end of the day for this storage pool. Active virtual volumes do include volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment:
144
Active Virtual Volumes by Pool
Active virtual volumes at the end of the day for this storage pool. Active virtual volumes do include volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 144
145
Active Virtual Volumes by Pool
Active virtual volumes at the end of the day for this storage pool. Active virtual volumes do include volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 145
146
Active Virtual Volumes by Pool
Active virtual volumes at the end of the day for this storage pool. Active virtual volumes do include volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 146
147
Active Data by Pool Comment:
Active GB (compressed GB) of data at the end of the day for this storage pool. Active GiB does include the capacity of volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment:
148
Active Data by Pool Comment: 148
Active GB (compressed GB) of data at the end of the day for this storage pool. Active GiB does include the capacity of volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 148
149
Active Data by Pool Comment: 149
Active GB (compressed GB) of data at the end of the day for this storage pool. Active GiB does include the capacity of volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 149
150
Active Data by Pool Comment: 150
Active GB (compressed GB) of data at the end of the day for this storage pool. Active GiB does include the capacity of volumes that have been expired by your TMS but have not yet reached the TS7700 Expire Time grace period. Comment: 150
151
Read/Write Data by Pool
How much data is written to this pool by day and how much is read from the pool by day. These are compressed GiB numbers. GiB Write is the TS7740 Premigration backend workload. GiB Read is the TS7740 Recall backend workload. Comment: 151
152
Read/Write Data by Pool
How much data is written to this pool by day and how much is read from the pool by day. These are compressed GiB numbers. GiB Write is the TS7740 Premigration backend workload. GiB Read is the TS7740 Recall backend workload. Comment: 152
153
Read/Write Data by Pool
How much data is written to this pool by day and how much is read from the pool by day. These are compressed GiB numbers. GiB Write is the TS7740 Premigration backend workload. GiB Read is the TS7740 Recall backend workload. Comment: 153
154
Read/Write Data by Pool
How much data is written to this pool by day and how much is read from the pool by day. These are compressed GiB numbers. GiB Write is the TS7740 Premigration backend workload. GiB Read is the TS7740 Recall backend workload. Comment: 154
155
Cartridges by Pool Comment:
Private and Scratch cartridges belonging to this pool at the end of the day. Comment:
156
Cartridges by Pool Comment: 156
Private and Scratch cartridges belonging to this pool at the end of the day. Comment: 156
157
Cartridges by Pool Comment: 157
Private and Scratch cartridges belonging to this pool at the end of the day. Comment: 157
158
Cartridges by Pool Comment: 158
Private and Scratch cartridges belonging to this pool at the end of the day. Comment: 158