Garbage Collection
Danny Angus
Introduction
Student Loans, amongst other things, runs B2B applications implementing government policy in the UK. We process 900,000+ complex loan applications in the six months from March to September every year.
Introduction
We have to tune for performance; our funding stakeholders demand that we provide value for money. (As a taxpayer, so do I.) You should tune for performance too.
Myth
Memory management is one of the things Java is supposed to do for us. We do have to be good citizens, and sometimes it still fails to live up to our expectations.
Just take a look at these comments from APR, and imagine the code. Glad you don't have to write it? I know I am.
/* Round up the block size to the next boundary, but always
 * allocate at least a certain size (MIN_ALLOC). */
/* Find the index for this node size by
 * dividing its size by the boundary size */
/* First see if there are any nodes in the area we know
 * our node will fit into. */
/* Walk the free list to see if there are
 * any nodes on it of the requested size
 *
 * NOTE: an optimization would be to check
 * allocator->free[index] first and if no
 * node is present, directly use
 * allocator->free[max_index]. This seems
 * like overkill though and could cause
 * memory waste. */
/* If we found nothing, seek the sink (at index 0), if
 * it is not empty. */
/* Walk the free list to see if there are
 * any nodes on it of the requested size */
/* If we haven't got a suitable node, malloc a new one
 * and initialize it. */
So what do we have?
A load of theory but no silver bullet (though I think Sun would like the "AggressiveHeap" option to work for most of us). No "API" which we (the hardworking geeks in the field) can use to even hint at what our applications are going to be doing. A few general heap options. A lot of "secret" knowledge.
How to keep up with the work
Sun's JVM uses "generational" heap management: partitions for old and young objects. Java can create a lot of new objects; for long-running processes under steady heavy load, that means a real lot.
The Generations
Young generation: the new space; when you create an object, memory is allocated here.
Tenured generation: if an object lasts longer than a wee while it is moved here.
Permanent generation: stuff (mainly loaded classes) that won't be thrown away.
And there's more to the young generation
A creation (eden) space and two survivor spaces. Objects are promoted from the survivor space when they survive a certain number of collections. You can set this number, as the sketch below shows.
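As a hedged sketch (the jar name and values are hypothetical, not from the talk), the promotion age and survivor sizing can be set and observed on a Sun HotSpot JVM like this:
java -verbose:gc -XX:+PrintTenuringDistribution -XX:MaxTenuringThreshold=8 -XX:SurvivorRatio=6 -jar myapp.jar
-XX:+PrintTenuringDistribution reports how many collections the objects currently in the survivor spaces have lived through, which helps you judge whether the threshold you chose is doing any good.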
A generalisation about collector types
Mark and sweep: start from the VM roots and follow all of the object references you can reach, marking each object; remove the ones you can't reach.
Stop and copy: stop execution and copy all the objects you can reach into a new space.
Mark and sweep
Lots of work to do all that marking, and it leads to fragmentation. You can sweep concurrently with execution, which is good if you have plenty of CPU and want to avoid pauses. You still need to defragment now and then, though. More garbage == more work.
Stop and copy
You need twice as much memory, and it pauses everything for the whole time it takes, but it de-fragments the space every time. More survivors == more work.
JRE 1.4 collectors
Copying collector: the default for the young space, aka the "minor collection".
Parallel copying: -XX:+UseParNewGC
Parallel scavenge: -XX:+UseParallelGC
Mark-compact: the default for the tenured space, the "major collection".
Concurrent collector: -XX:+UseConcMarkSweepGC
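As a rough sketch of how these are selected on a 1.4-era command line (the application jar is hypothetical):
For parallel copying in the young generation alongside the concurrent tenured collector:
java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -jar myapp.jar
For a throughput-oriented parallel scavenge of the young generation:
java -XX:+UseParallelGC -jar myapp.jar
Note that the scavenge and concurrent collectors are alternatives; you pick one combination per JVM.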
What is your goal?
"They" expect you to choose between throughput and response time, or speed versus no pauses. But if you're like me you'll want everything.
The compromise
Balance the time spent in collections against the number of collections. Get the maximum benefit from each collection.
Some more clues
A space fills to its threshold, and is then cleaned right out. In a steady-state process there will be an apparent base load left in a newly cleaned space; you can measure this and size the spaces accordingly. Then you can pick the collector that least intrusively keeps up with the rate at which you use the space.
So how do you measure?
-Xloggc:<file> -XX:+PrintGCDetails -verbose:gc
[GC 0.000: [DefNew: 512K->64K(576K), 0.0051493 secs] 512K->155K(1984K), 0.0053630 secs]
Draw the graph. You're looking for a sawtooth shape; the peaks and troughs should reach the same point every time.
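Putting those flags together, a minimal logging invocation might look like this (the log file name and jar are hypothetical):
java -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar
Each [GC ...] line then gives the young-space occupancy before and after the minor collection, the total heap occupancy before and after, and the pause time; plotting the occupancy-after figures over time gives the sawtooth.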
Illustration: pink is before collection, blue is after.
See the small amount cleaned at each minor collection, and the big amount cleaned in the major collection. What does this suggest? The young space is too small.
Timeline: there is no time axis on this graph. If the whole thing took a day, do we have too few major collections? If the whole thing took a minute, do we have too many? Context is important.
If the heap in use grows
Reduce the frequency of collections by increasing the young space.
Another example
This time the young generation is cleaned more thoroughly, so there are fewer promotions. Look at the tenured space in use after the major collection: if this continues, the size of the heap will increase and the pauses will get bigger.
Tie it back to real events
Logarithmic early growth, then a steady young-generation size and promotion rate. Would we want the lines to be more horizontal? Only you can tell; balance memory use against CPU. Is this under load, or is your normal peak load worse? Optimise for the normal peak == efficient. Optimise for the occasional peak == robust.
If major collections are too frequent
Increase the size of the heap a bit.
Don't overdo it
If the tenured space is too big you'll have longer pauses for the tenured collection. Reduce the number of promotions by sizing the young generation instead: -XX:NewSize, -XX:MaxNewSize, -XX:SurvivorRatio.
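For example (the sizes are purely illustrative, not a recommendation from the talk), a young generation fixed at 128m with a survivor ratio of 6, inside a modest overall heap, would look like:
java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=6 -Xms512m -Xmx1024m -jar myapp.jar
Fixing NewSize and MaxNewSize to the same value keeps the young generation stable so the promotion rate you measure stays comparable between runs.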
Size the young gen
Use subtraction…
[GC 83473.842: [DefNew: 98814K->7027K(104832K), 0.0829770 secs] 704440K->612653K(1036928K), 0.0831090 secs]
[GC 83475.633: [DefNew: 100210K->3604K(104832K), 0.0963160 secs] 705836K->613349K(1036928K), 0.0964500 secs]
696K promoted.
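Working through the subtraction: the second figure pair on each line is the total heap occupancy before and after the minor collection, so the occupancy left after each collection goes from 612653K to 613349K, and 613349K - 612653K = 696K. That 696K is the data retained between the two collections, which the talk reads as the amount promoted into the tenured generation.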
Time, don't forget it
A real default profile
Here's one I prepared earlier
Myth exploded: "Make -Xmx and -Xms the same size"
If you have plenty of RAM, make -Xmx big to cope with the unexpected, but keep -Xms small to manage collection frequency and size.
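As an illustrative sketch (the figures are hypothetical, not the talk's recommendation), that advice translates into something like:
java -Xms256m -Xmx2048m -jar myapp.jar
The heap starts at 256m, so collections stay small and frequent enough to measure, but it can still grow towards 2048m if an unexpected peak arrives.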
Tip
Never let the OS swap out the Java heap. This is because marking involves traversing the whole heap in unpredictable patterns.
Size the young generation
Make those collections work for you. The new size and the survivor ratio balance the eden space against the survivor spaces. You want as many of your short-lived objects as possible to live and die within one collection, without letting the collection get too onerous, of course.
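A worked illustration with hypothetical numbers: with a 128m young generation and -XX:SurvivorRatio=6, the young generation is divided into eden plus two survivor spaces in the ratio 6:1:1, so eden gets 128m * 6/8 = 96m and each survivor space gets 128m / 8 = 16m. A larger ratio gives short-lived objects more room to die in eden; a smaller ratio gives survivors more room before they are promoted.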
Size the permanent space
Size it just a bit bigger than you need.
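A sketch with hypothetical figures: if the loaded classes settle at around 60m, something like
java -XX:PermSize=64m -XX:MaxPermSize=96m -jar myapp.jar
keeps the permanent generation just a bit bigger than it needs to be without wasting space.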
Myth exploded: "Allocations which can't be made in the permanent space will be made from the tenured space"
In practice a compacting collection occurs, and if it doesn't release enough contiguous memory in the perm space the JVM spins at 100% CPU re-trying the compaction.
Parallel and concurrent
Might make sense if you have plenty of CPU: it uses more CPU, but there are no pauses. Except that you still get de-fragmentation pauses; threads have to be parked before objects can be moved and references rewritten. Marking takes a while, and then you still have to collect, which may be too slow to clear down during peaks.
Aggressive heap
The silver bullet?
Some more stuff
-Xincgc / -Xnoincgc
-XX:ParallelGCThreads=
-XX:+UseCMSCompactAtFullCollection
-XX:MaxTenuringThreshold=0
-XX:SurvivorRatio=1024
-XX:SoftRefLRUPolicyMSPerMB=10000
-XX:PretenureSizeThreshold=
-XX:+DisableExplicitGC
-XX:CMSInitiatingOccupancyFraction=
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSFullGCsBeforeCompaction=1
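As one hedged illustration of how a few of these combine for the concurrent collector (the values and jar are hypothetical): start CMS once the tenured space is 70% full, only on that signal, with four parallel GC threads:
java -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:ParallelGCThreads=4 -jar myapp.jar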
Sizes
-Xms<value>
-Xmx<value>
-Xmn<value>
-XX:MinHeapFreeRatio=<minimum>
-XX:MaxHeapFreeRatio=<maximum>
-XX:NewRatio=<ratio>
-XX:NewSize=<size>
-XX:MaxNewSize=<size>
-XX:MaxPermSize=
-XX:+AggressiveHeap
-XX:+UseAdaptiveSizePolicy
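Pulling the sizing flags together, a complete and entirely hypothetical invocation might read:
java -Xms256m -Xmx1536m -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=6 -XX:MaxPermSize=96m -verbose:gc -Xloggc:gc.log -jar myapp.jar
Measure first with the GC log, then revisit each value; none of these numbers transfers directly between applications.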
Conclusion
If you spend too much time doing this you probably need to buy hardware. But it is worth doing now and again. Thanks to Student Loans for letting me talk frankly about our problems.
Questions?
http://people.apache.org/~danny