
1 CSC 213 – Large Scale Programming

2 Today's Goals
 Consider what new does & how Java works
   What are traditional means of managing memory?
   Why did they change how this was done for Java?
   What are the benefits & costs of these changes?
 Examine real-world use of graphs & their benefits
   How do all of those graph algorithms get used?
   Can we take advantage of this knowledge somehow?
   What occurs in the real world that we have not covered?
   And why is beer ALWAYS the answer to life's problems

3 Explicit Memory Management
 Traditional form of memory management
   Used a lot, but fallen out of favor
 malloc / new
   Commands used to allocate space for an object
 free / delete
   Return memory to the system using these commands
 Simple to use, but tricky to get right
   Forget to free  memory leak
   free too soon  dangling pointer

5 Dangling Pointers

  Node* x = new Node("happy");
  Node* ptr = x;                  // ptr aliases x
  delete x;                       // But I'm not dead yet!
  Node* y = new Node("sad");      // may reuse the memory x occupied
  cout << ptr->data << endl;      // sad  (reads freed, reused memory)

11 Solution: Garbage Collection
 Allocate objects into the program's heap
   No relation to the heap implementing a priority queue
   This heap is simply a "pile of memory"
 Garbage collector scans objects on the heap
   Starts at references in the program stack & static fields
   Finds objects reachable from those program roots
 We consider the unreachable objects "garbage"
   Cannot be used again, so safe to remove from the heap
   The need to include a free command is eliminated
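A minimal sketch of this reachability rule in running Java (the Node class and all names here are invented for illustration): java.lang.ref.WeakReference tracks an object without keeping it alive, so we can watch it become garbage once the last strong reference is dropped. Note that System.gc() is only a hint, so the final line may print either answer.

  import java.lang.ref.WeakReference;

  public class ReachabilityDemo {
      static class Node {
          final String data;
          Node(String data) { this.data = data; }
      }

      public static void main(String[] args) {
          Node x = new Node("happy");
          WeakReference<Node> weak = new WeakReference<>(x); // does not keep x alive

          x = null;      // drop the only strong reference; the Node is now unreachable
          System.gc();   // a hint only: the JVM is free to ignore it

          // If a collection ran, the weak reference has been cleared.
          System.out.println(weak.get() == null ? "collected" : "still on heap");
      }
  }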

12 No More Dangling Pointers

  Node x = new Node("happy");
  Node ptr = x;
  // x reachable through ptr, so its memory cannot be reclaimed!
  Node y = new Node("sad");
  System.out.println(ptr.data);   // happy!

18 Garbage Collection
 Static & locals are called root references
 Must compute the objects in their transitive closure
[Figure: heap diagram, built up over several slides, marking each object reachable from the roots]


29 Garbage Collection
 Remove unmarked objects from the heap
 New objects are allocated into the empty spaces
[Figure: heap diagram after the sweep, with unmarked objects removed]
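A toy sketch of the mark-and-sweep idea just pictured, assuming a simulated heap of Obj nodes (real collectors walk raw memory, not a Java list): mark computes the transitive closure of the roots by depth-first search, and sweep drops whatever was never marked.

  import java.util.ArrayDeque;
  import java.util.ArrayList;
  import java.util.Deque;
  import java.util.List;

  // A toy heap of objects, each holding references to other objects.
  class Obj {
      boolean marked = false;
      List<Obj> refs = new ArrayList<>();
  }

  class MarkSweep {
      // Mark phase: depth-first search from the root references,
      // computing the transitive closure of reachability.
      static void mark(List<Obj> roots) {
          Deque<Obj> stack = new ArrayDeque<>(roots);
          while (!stack.isEmpty()) {
              Obj o = stack.pop();
              if (o.marked) continue;   // already visited
              o.marked = true;
              stack.addAll(o.refs);
          }
      }

      // Sweep phase: anything unmarked is garbage and is dropped;
      // marks are cleared so the next collection starts fresh.
      static void sweep(List<Obj> heap) {
          heap.removeIf(o -> !o.marked);
          for (Obj o : heap) o.marked = false;
      }
  }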

32 Why Not Always Use GC?
 Garbage collection has obvious benefits
   Eliminates some errors that often occur
   Added benefit: also makes programming easier
   Also easier to update code when GC is used for memory
 GC also has several drawbacks
   Reachable objects could, but not necessarily will, be used again
   More memory needed to hold the extra objects
   It takes time to compute the reachable objects
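One rough, hedged way to watch these costs yourself (not a real benchmark: the JIT may eliminate the dead allocations, and System.gc() is only a hint) is the Runtime memory counters; running with -verbose:gc also prints each collection's pause.

  public class GcCostSketch {
      public static void main(String[] args) {
          Runtime rt = Runtime.getRuntime();
          long before = rt.totalMemory() - rt.freeMemory();

          // Create lots of immediately unreachable objects.
          for (int i = 0; i < 1_000_000; i++) {
              byte[] garbage = new byte[64];
          }

          long mid = rt.totalMemory() - rt.freeMemory();
          System.gc();   // again, only a hint
          long after = rt.totalMemory() - rt.freeMemory();

          System.out.printf("used: %d -> %d -> %d bytes%n", before, mid, after);
      }
  }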

34 Cost of Accessing Memory
 How long memory access takes is also important
   Will make a major difference in the time a program takes
 Imaginary scenario used to consider this effect: "I want a beer"

36 Registers and Caches
 Inside the CPU we find the first levels of memory
 At the lowest level are the processor's registers
   Very, very fast but…
   … the number of beers held is limited
 Caches are used at the next level, for the dearest memory
   More space than registers, but…
   … not as fast (a walk across the room)
   Will need more beer if the party is good

42 Horrors!
 Processor does its best to keep memory local
   Caches organized to hold memory needed soon
   Makes guesses, since this requires predicting the future
   Will eventually drink all the beer in the house
 30 MB is the largest cache size at the moment
   Many programs need more than this
   What do we do?

45 When the House Runs Dry…
 What do you normally do when all the beer is gone?
   Must go to the store to get more…
   … but do not want a DUI, so we must walk to the store
 Processor uses RAM to store data that cannot fit
   RAM sizes are much, much larger than caches
   100× slower to access, however

46 When Store Is Out Of Beer...


48 Ein Glas Bier, Bitte ("A glass of beer, please")
 Get SCUBA gear ready for the WALK to Germany
   Should find enough beer to handle any situation
   But the buzz is destroyed by the very long wait per glass
   If Germany runs out, you're drinking too much

49 Walking To Germany Is Slow…

50 Maintaining Your Buzz
 Prevent long pauses by maintaining locality
   Repeatedly access those objects in fast memory
   Access objects in the sequential order they have in memory
 Both of these properties take advantage of caching
   Limit data used to the size of the cache (temporal locality)
   Exploit knowing how the cache works (spatial locality)
 Limiting data is not easy (or we would have done it already)
   So taking advantage of spatial locality is our best bet; see the sketch below
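A hedged sketch of spatial locality at work (exact timings depend on the machine, the JVM, and warm-up): Java lays out each row of a 2D array contiguously, so row-order traversal walks memory sequentially while column-order traversal strides across rows and misses the cache far more often.

  public class LocalityDemo {
      static final int N = 4096;
      static final int[][] A = new int[N][N];

      public static void main(String[] args) {
          long t0 = System.nanoTime();
          long rowSum = 0;
          for (int i = 0; i < N; i++)          // row-major: sequential memory access
              for (int j = 0; j < N; j++)
                  rowSum += A[i][j];
          long t1 = System.nanoTime();

          long colSum = 0;
          for (int j = 0; j < N; j++)          // column-major: strided access, poor locality
              for (int i = 0; i < N; i++)
                  colSum += A[i][j];
          long t2 = System.nanoTime();

          // Print the sums too, so the JIT cannot discard the loops as dead code.
          System.out.printf("row order: %d ms, column order: %d ms (sums %d, %d)%n",
                            (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rowSum, colSum);
      }
  }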

51 Cache Replacement Algorithms
 When we access memory, add its block to the cache
   May need to evict a block if the cache is already full
 2+1 approaches used to select the evicted block
   FIFO: maintain blocks in a queue and evict the oldest
   LRU: track each use and evict the block least recently used
   (Random: choose a block to evict at random)
 For good performance we want to avoid the worst case
   But what is it?
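LRU is common enough that Java's standard library can express it directly: a LinkedHashMap built with accessOrder=true reorders entries on every get, and overriding removeEldestEntry evicts the least-recently-used one. A minimal sketch (the capacity and class name are mine):

  import java.util.LinkedHashMap;
  import java.util.Map;

  // A fixed-capacity map that evicts its least-recently-used entry.
  class LruCache<K, V> extends LinkedHashMap<K, V> {
      private final int capacity;

      LruCache(int capacity) {
          super(16, 0.75f, true);   // accessOrder=true: get() moves an entry to the back
          this.capacity = capacity;
      }

      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
          return size() > capacity; // evict the front (least recently used) entry
      }
  }

For example, with capacity 3, after put(0), put(1), put(2), get(0), put(3), the evicted key is 1: it is the least recently used.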

52 Cache Replacement Workings
[Figure: access order during program execution — 0 1 2 3 4 5 0 1 0 1 2 5 3 2 4 — traced against LRU and FIFO]
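To recreate the slide's comparison, here is a sketch that replays the access trace against both policies (I read the slide's digit run as 0 1 2 3 4 5 0 1 0 1 2 5 3 2 4, and the cache size below is an assumption, since the slide does not state one):

  import java.util.ArrayDeque;
  import java.util.HashSet;
  import java.util.LinkedHashMap;

  public class PolicyDemo {
      static final int CACHE_SIZE = 3;   // assumed size; adjust to explore the worst case

      public static void main(String[] args) {
          int[] trace = {0, 1, 2, 3, 4, 5, 0, 1, 0, 1, 2, 5, 3, 2, 4};
          System.out.println("FIFO hits: " + fifoHits(trace));
          System.out.println("LRU hits:  " + lruHits(trace));
      }

      static int fifoHits(int[] trace) {
          ArrayDeque<Integer> queue = new ArrayDeque<>();
          HashSet<Integer> cached = new HashSet<>();
          int hits = 0;
          for (int block : trace) {
              if (cached.contains(block)) { hits++; continue; }
              if (queue.size() == CACHE_SIZE)
                  cached.remove(queue.poll());  // evict the oldest block
              queue.add(block);
              cached.add(block);
          }
          return hits;
      }

      static int lruHits(int[] trace) {
          // Access-ordered map: iteration order runs least- to most-recently used.
          LinkedHashMap<Integer, Boolean> cache = new LinkedHashMap<>(16, 0.75f, true);
          int hits = 0;
          for (int block : trace) {
              if (cache.get(block) != null) { hits++; continue; }
              if (cache.size() == CACHE_SIZE) {
                  Integer eldest = cache.keySet().iterator().next();
                  cache.remove(eldest);         // evict the least recently used block
              }
              cache.put(block, Boolean.TRUE);
          }
          return hits;
      }
  }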

53 What Does This Mean?
 Large data sets require more thought & care
   Start with, but do not end at, big-Oh notation
   Consider memory costs and how to limit them
 Most data structures do not grow this large
   Stack, Queue, Sequence rarely get above 1 GB
   Using a very, very large Graph is not typical
   Databases are the largest data sets anywhere
 Which data structures & implementations are affected?

54 For Next Lecture
 Remember, tests for your program #3 are due
   Think before submitting; do the tests make sense?
 Reading on the memory hierarchy for Monday
   How can we use the experience of wanting a beer?
   Organize searchable collections to help performance
 I am taking students to a conference on Friday
   Will not be here, since I cannot be in two places at once

