PADTAD, Nice, April 2003. Efficient On-the-Fly Data Race Detection in Multithreaded C++ Programs. Eli Pozniansky & Assaf Schuster.



What is a Data Race?
Two concurrent accesses to a shared location, at least one of them for writing. Indicative of a bug.

Thread 1        Thread 2
X++             T=Y
Z=2             T=X
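The definition above can be sketched in a few lines of C++ (this example is not from the slides; function and variable names are illustrative). Two threads increment a shared counter with no synchronization: two concurrent accesses to the same location, at least one a write, which is a data race (and undefined behavior in C++):

```cpp
#include <thread>

// Two unsynchronized threads perform a racy read-modify-write on x.
// Lost updates are possible: the result may be < 2 * iters_per_thread.
int racy_count(int iters_per_thread) {
    int x = 0;  // shared, unprotected
    auto worker = [&x, iters_per_thread] {
        for (int i = 0; i < iters_per_thread; ++i) x++;  // racy increment
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return x;
}
```

Running this repeatedly typically yields different results, which is exactly the scheduling sensitivity that makes races hard to detect.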

How Can Data Races be Prevented?
Explicit synchronization between threads: locks, critical sections, barriers, mutexes, semaphores, monitors, events, etc.

Thread 1        Thread 2
Lock(m)
X++
Unlock(m)
                Lock(m)
                T=X
                Unlock(m)
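As a sketch (not from the slides), the same counter with every access inside a critical section guarded by one mutex, so the two accesses can no longer be concurrent:

```cpp
#include <mutex>
#include <thread>

// Every increment happens between Lock(m) and Unlock(m), here expressed
// with std::lock_guard, so the accesses are fully serialized.
int synced_count(int iters_per_thread) {
    int x = 0;
    std::mutex m;
    auto worker = [&] {
        for (int i = 0; i < iters_per_thread; ++i) {
            std::lock_guard<std::mutex> lock(m);  // Lock(m) ... Unlock(m)
            x++;
        }
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return x;  // always exactly 2 * iters_per_thread
}
```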

Is This Sufficient?
Yes and no: it is programmer dependent.
- Correctness: the programmer may forget to synchronize, so we need tools that detect data races.
- Efficiency: synchronization is expensive, and to achieve correctness the programmer may overdo it, so we need tools that remove excessive synchronization.

Detecting Data Races?
NP-hard [Netzer & Miller 1990], where the input size is the number of instructions performed:
- even for only 3 threads,
- even with no loops or recursion.
Further obstacles:
- execution orders/schedulings: (# threads)^(thread length),
- the number of possible inputs,
- the detection code's own side effects,
- weak memory, instruction reordering, atomicity.

Where is Waldo?

#define N 100
Type* g_stack = new Type[N];
int g_counter = 0;
Lock g_lock;

void push( Type& obj ) { lock(g_lock); ... unlock(g_lock); }
void pop( Type& obj )  { lock(g_lock); ... unlock(g_lock); }

void popAll( ) {
    lock(g_lock);
    delete[] g_stack;
    g_stack = new Type[N];
    g_counter = 0;
    unlock(g_lock);
}

int find( Type& obj, int number ) {
    lock(g_lock);
    int i;
    for (i = 0; i < number; i++)
        if (obj == g_stack[i])
            break;              // Found!!!
    if (i == number)
        i = -1;                 // Not found... return -1 to caller
    unlock(g_lock);
    return i;
}

int find( Type& obj ) {
    return find( obj, g_counter );
}

Can You Find the Race?
In the code above, the one-argument find() reads g_counter while holding no lock: the argument g_counter is evaluated before lock(g_lock) is acquired inside the two-argument find(). That unprotected read races with the write g_counter = 0 in popAll(). A similar problem was found in java.util.Vector.
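A minimal, self-contained sketch of the fix (names mirror the slide, but the stack is simplified to a std::vector<int> and the class wrapper is hypothetical): the one-argument find() must read the shared counter inside the critical section instead of passing it in from unsynchronized code.

```cpp
#include <mutex>
#include <vector>

struct Stack {
    std::vector<int> g_stack;
    int g_counter = 0;
    std::mutex g_lock;

    void push(int v) {
        std::lock_guard<std::mutex> l(g_lock);
        g_stack.push_back(v);
        g_counter++;
    }

    // Racy version, as on the slide: g_counter is read with no lock held,
    // before find_locked() acquires g_lock.
    int find_racy(int v) { return find_locked(v, g_counter); }

    // Fixed version: take the lock first, then read g_counter.
    int find_fixed(int v) {
        std::lock_guard<std::mutex> l(g_lock);
        for (int i = 0; i < g_counter; ++i)
            if (g_stack[i] == v) return i;
        return -1;  // not found
    }

private:
    int find_locked(int v, int number) {
        std::lock_guard<std::mutex> l(g_lock);
        for (int i = 0; i < number; ++i)
            if (g_stack[i] == v) return i;
        return -1;
    }
};
```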

Apparent Data Races
- Based only on the behavior of the explicit synchronization, not on program semantics.
- Easier to locate, but less accurate.
- Exist iff a "real" (feasible) data race exists.
- Detection is still NP-hard.

Initially: grades = oldDatabase; updated = false;

Thread T.A.                       Thread Lecturer
grades = newDatabase;             while (updated == false);
updated = true;                   X := grades.gradeOf(lecturersSon);

Detection Approaches
- Restricted programming model (usually fork-join).
- Static: Emrath & Padua 88; Balasundaram & Kennedy 89; Mellor-Crummey 93; Flanagan & Freund 01.
- Postmortem: Netzer & Miller 90, 91; Adve & Hill 91.
- On-the-fly: Dinning & Schonberg 90, 91; Savage et al. 97; Itzkovitz et al. 99; Perkovic & Keleher 00.
Issues: programming model, synchronization method, memory model, accuracy, overhead, granularity, coverage.

MultiRace Approach
- On-the-fly detection of apparent data races.
- Two detection algorithms (improved versions):
  - Lockset [Savage, Burrows, Nelson, Sobalvarro, Anderson 97]
  - Djit+ [Itzkovitz, Schuster, Zeev-Ben-Mordechai 99]
- Correct even for weak memory systems.
- Flexible detection granularity: variables and objects. Especially suited for OO programming languages.
- Source-code (C++) instrumentation + memory mappings.
- Transparent.
- Low overhead.

Djit+ [Itzkovitz et al. 1999]: Apparent Data Races
Based on Lamport's happens-before partial order: a and b are concurrent if neither a ->hb b nor b ->hb a. Two concurrent conflicting accesses form an apparent data race; otherwise they are "synchronized".

Thread 1            Thread 2
a
Unlock(L)
                    Lock(L)
                    b
Here a ->hb b.

Djit+ basic idea: check each access performed against all "previously performed" accesses.

Djit+: Local Time Frames (LTF)
The execution of each thread is split into a sequence of time frames. A new time frame starts on each unlock. For every access there is a timestamp: a vector of the LTFs known to the thread at the moment the access takes place.

Thread              LTF
x = 1               1
lock( m1 )          1
z = 2               1
lock( m2 )          1
y = 3               1
unlock( m2 )        2   (new frame)
z = 4               2
unlock( m1 )        3   (new frame)
x = ...             3

Djit+: Local Time Frames (LTF), continued
Each thread t keeps a vector ltf_t[.]:
- ltf_t[t] is the LTF of thread t itself;
- ltf_t[u] stores the latest LTF of thread u known to t.
If thread u is an acquirer of t's unlock:
    for k = 0 to maxthreads-1:
        ltf_u[k] = max( ltf_u[k], ltf_t[k] )
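The bookkeeping above can be sketched as follows (a simplified model, not MultiRace's actual implementation; class and member names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Each thread t keeps a vector ltf, where ltf[t] is its own time frame
// and ltf[u] is the latest time frame of thread u it has heard about.
struct ThreadClock {
    std::vector<int> ltf;
    int self_;
    ThreadClock(int nthreads, int self)
        : ltf(nthreads, 0), self_(self) { ltf[self] = 1; }

    // A new time frame starts on each unlock.
    void on_unlock() { ltf[self_]++; }

    // The acquirer learns everything the releaser knew:
    // ltf_u[k] = max(ltf_u[k], ltf_t[k]) for all k.
    void on_acquire_from(const ThreadClock& releaser) {
        for (std::size_t k = 0; k < ltf.size(); ++k)
            ltf[k] = std::max(ltf[k], releaser.ltf[k]);
    }
};
```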

Djit+: Vector Time Frames Example

Thread 1 (1 1 1)    Thread 2            Thread 3
write x
unlock( m1 )
                    (2 1 1) read z
                    write x
                    unlock( m2 )
                                        (2 2 1) read y

Each unlock advances the unlocker's own entry; each matching acquire merges in the releaser's vector with max().

Djit+: Checking Concurrency
P(a,b) ≜ ( a.type = write ∨ b.type = write ) ∧ ( a.thread_id ≠ b.thread_id ) ∧ ( a.ltf ≥ b.timestamp[a.thread_id] )
where a was logged earlier than b. P returns TRUE iff a and b are racing.
Problem: too much logging, too many checks.
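The predicate can be written down directly (a sketch; the field names type, thread_id, ltf and timestamp mirror the slide's notation):

```cpp
#include <vector>

enum class AccessType { Read, Write };

struct Access {
    AccessType type;
    int thread_id;
    int ltf;                     // the accessor's local time frame
    std::vector<int> timestamp;  // vector of LTFs known at access time
};

// TRUE iff a and b are racing, where a was logged earlier than b:
// at least one write, different threads, and a's frame was not yet
// "seen" by b's thread when b ran.
bool P(const Access& a, const Access& b) {
    return (a.type == AccessType::Write || b.type == AccessType::Write)
        && a.thread_id != b.thread_id
        && a.ltf >= b.timestamp[a.thread_id];
}
```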

Djit+: Which Accesses to Check?
Let a be an access in thread t1, and let b and c be accesses in thread t2 in the same LTF, with b preceding c in program order. If a and b are synchronized, then a and c are synchronized as well.
=> It is sufficient to record only the first read access and the first write access to a variable in each LTF.

Thread 1            Thread 2
lock( m )
write X
read X
unlock( m )
                    read X          <- race
                    lock( m )
                    write X         <- no logging
                    unlock( m )
                    lock( m )
                    read X
                    write X         <- no logging
                    unlock( m )

Djit+: Which LTFs to Check?
Suppose a occurs in t1, and b and c "previously" occurred in t2. If a is synchronized with c, then it must also be synchronized with b.
=> It is sufficient to check a "current" access only against the "most recent" accesses in each of the other threads.

Thread 1            Thread 2
                    b
                    unlock
                    c
                    unlock(m)
lock(m)
a

Djit+: Access History
For every variable v, and for each of the threads, the history records:
- the last LTF in which the thread read from v (r-ltf_1 ... r-ltf_n), and
- the last LTF in which the thread wrote to v (w-ltf_1 ... w-ltf_n).
On each first read and first write to v in an LTF, the thread updates the access history of v:
- if the access is a read, the thread checks all recent writes to v by other threads;
- if the access is a write, the thread checks all recent reads as well as all recent writes to v by other threads.
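A sketch of this per-variable access history (not MultiRace's implementation; the racesWith callback stands in for the happens-before test P(a,b), and all names are illustrative):

```cpp
#include <functional>
#include <vector>

// For each thread: the last LTF in which it read v and the last LTF in
// which it wrote v (0 means "never accessed").
struct AccessHistory {
    std::vector<int> r_ltf, w_ltf;
    explicit AccessHistory(int nthreads)
        : r_ltf(nthreads, 0), w_ltf(nthreads, 0) {}

    // A first read in an LTF: check recent writes by other threads.
    bool log_read(int tid, int ltf, const std::function<bool(int)>& racesWith) {
        bool race = false;
        for (std::size_t t = 0; t < w_ltf.size(); ++t)
            if ((int)t != tid && w_ltf[t] && racesWith(w_ltf[t]))
                race = true;
        r_ltf[tid] = ltf;
        return race;
    }

    // A first write in an LTF: check recent reads and writes by others.
    bool log_write(int tid, int ltf, const std::function<bool(int)>& racesWith) {
        bool race = false;
        for (std::size_t t = 0; t < w_ltf.size(); ++t)
            if ((int)t != tid &&
                ((w_ltf[t] && racesWith(w_ltf[t])) ||   // write-write
                 (r_ltf[t] && racesWith(r_ltf[t]))))    // write-read
                race = true;
        w_ltf[tid] = ltf;
        return race;
    }
};
```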

Djit+: Pros and Cons
+ No false alarms.
+ No missed races (in a given scheduling).
- Very sensitive to differences in scheduling: requires an enormous number of runs, and even then cannot prove the tested program race free.
+ Can be extended to support other synchronization primitives, such as barriers and counting semaphores.
+ Correct on relaxed memory systems [Adve & Hill 1990, data-race-free-1].

Lockset: The Basic Algorithm
C(v): the set of locks that protected all accesses to v so far.
locks_held(t): the set of locks currently acquired by thread t.
Algorithm:
- For each v, initialize C(v) to the set of all possible locks.
- On each access to v by thread t:
    lh_v <- locks_held(t)
    if the access is a read: lh_v <- lh_v ∪ {readers_lock}
    C(v) <- C(v) ∩ lh_v
    if C(v) = ∅, issue a warning.
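The algorithm fits in a few lines (a sketch under simplifying assumptions: locks are identified by name, and a boolean stands in for "C(v) = all possible locks"; READERS_LOCK is the fictitious lock added on reads so concurrent readers do not trigger false alarms):

```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <string>

using LockSet = std::set<std::string>;
const std::string READERS_LOCK = "readers_lock";

struct LocksetState {
    bool initialized = false;  // false means C(v) = all possible locks
    LockSet C;                 // candidate locks for v

    // Process one access; returns true if a warning should be issued
    // (C(v) became empty).
    bool access(LockSet locks_held, bool is_read) {
        if (is_read) locks_held.insert(READERS_LOCK);
        if (!initialized) {
            C = locks_held;      // first refinement of "all locks"
            initialized = true;
        } else {
            LockSet inter;
            std::set_intersection(C.begin(), C.end(),
                                  locks_held.begin(), locks_held.end(),
                                  std::inserter(inter, inter.begin()));
            C = inter;           // C(v) <- C(v) ∩ lh_v
        }
        return C.empty();
    }
};
```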

Lockset: Example
(r_l = readers_lock; it prevents multiple concurrent reads from generating false alarms.)

Thread 1            Thread 2            lh_v             C(v)
                                        { }              {m1, m2, r_l}
lock( m1 )
read v                                  {m1, r_l}        {m1, r_l}
unlock( m1 )                            { }
                    lock( m2 )
                    write v             {m2}             { }
                    unlock( m2 )        { }

Warning: the locking discipline for v is violated!

Lockset [Savage et al. 1997]
Locking discipline: every shared location is consistently protected by a lock. Lockset detects violations of this locking discipline.

Thread 1            Thread 2            Locks(v)         C(v)
                                        { }              {m1, m2}
lock(m1)
read v                                  {m1}             {m1}
unlock(m1)                              { }
                    lock(m2)
                    write v             {m2}             { }
                    unlock(m2)          { }

Warning: the locking discipline for v is violated!

Lockset vs. Djit+

Thread 1            Thread 2
y++        [1]
lock( m )
v++
unlock( m )
                    lock( m )
                    v++
                    unlock( m )
                    y++        [2]

Here [1] ->hb [2], so Djit+ sees no race in this execution, yet there might be a data race on y under a different scheduling. The locking discipline on y is violated, so Lockset warns.

Lockset: Which Accesses to Check?
If a and b are in the same thread and the same time frame, with a preceding b, then Locks_a(v) ⊆ Locks_b(v), where Locks_u(v) is the set of locks held during access u to v.

Thread              Locks(v)
unlock ...
lock(m1)
a: write v          {m1}
write v             {m1} = {m1}
lock(m2)
b: write v          {m1, m2} ⊇ {m1}
unlock(m2)
unlock(m1)

=> Only first accesses need be checked in every time frame.
=> Lockset can use the same logging (access history) as Djit+.

Lockset: Pros and Cons
+ Less sensitive to scheduling.
+ Detects a superset of all apparently raced locations in an execution of a program: races cannot be missed.
- Lots of false alarms.
- Still dependent on scheduling: cannot prove the tested program race free.

Lockset: Supporting Barriers
Per-variable state transition diagram, with states Virgin, Initializing, Exclusive, Shared, Empty, and Clean. The transitions in the diagram are: write by first thread; read/write by the same thread; read/write by a new thread; read by any thread; read/write by any thread; read/write by any thread with C(v) not empty; read/write by any thread with C(v) empty; read/write by a new thread with C(v) not empty; read/write by a new thread with C(v) empty; and barrier.

Combining Djit+ and Lockset
The slide's Venn diagram relates: S = all shared locations in some program P, L = violations detected by Lockset in P, A = all apparently raced locations in P, F = all raced locations in P, and D = raced locations detected by Djit+ in P.
- Lockset can detect suspected races in more execution orders.
- Djit+ can filter out the spurious warnings reported by Lockset.
- Lockset can help reduce the number of checks performed by Djit+: while C(v) is not yet empty, Djit+ need not check v for races.
- The implementation overhead comes mainly from the access-logging mechanism, which can be shared by the two algorithms.

Implementing Access Logging: Recording First LTF Accesses
Each thread accesses the physical shared memory (holding, e.g., variables X and Y) through one of three views in virtual memory: No-Access, Read-Only, or Read-Write. An access attempt with wrong permissions generates a fault. The fault handler activates the logging and detection mechanisms, and switches views.
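The mechanics behind multiple views can be sketched with POSIX mmap (an illustration, not MultiRace's code): the same physical pages are mapped twice, once read-write and once read-only. Writing through the read-only view would fault, which is exactly the hook used to log first accesses; here we only demonstrate that both views alias the same memory.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstdlib>
#include <cstring>

// Map one file-backed page twice with different protections and verify
// that a write through the RW view is visible through the RO view.
bool demo_views() {
    char path[] = "/tmp/viewsXXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) return false;
    unlink(path);                       // anonymous after unlink
    if (ftruncate(fd, 4096) != 0) { close(fd); return false; }

    char* rw = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
    char* ro = (char*)mmap(nullptr, 4096, PROT_READ,
                           MAP_SHARED, fd, 0);
    bool ok = rw != MAP_FAILED && ro != MAP_FAILED;
    if (ok) {
        std::strcpy(rw, "hello");                // write via the RW view
        ok = (std::strcmp(ro, "hello") == 0);    // read via the RO view
    }
    if (rw != MAP_FAILED) munmap(rw, 4096);
    if (ro != MAP_FAILED) munmap(ro, 4096);
    close(fd);
    return ok;
}
```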

Swizzling Between Views
A single thread's trace, with the faults it generates:
    unlock(m)
    write x       - write fault
    ...
    unlock(m)
    read x        - read fault
    write x       - write fault
After each unlock the thread's view is downgraded, so the first read and the first write in the new time frame fault and are logged.

Detection Granularity
A minipage (= detection unit) can contain:
- objects of primitive types (char, int, double, etc.),
- objects of complex types (classes and structures),
- entire arrays of complex or primitive types.
An array can be placed on a single minipage or split across several minipages; the array still occupies contiguous addresses.

Playing with Detection Granularity to Reduce Overhead
Larger minipages mean reduced overhead (fewer faults). A minipage should be refined into smaller minipages when suspicious alarms occur; replay technology can help here (if available). When the suspicion is resolved, minipages can be regrouped, and detection may be disabled on the accesses involved.

PADTAD, Nice, April 0332 Detection Granularity

Highlights of Instrumentation (1)
All allocation routines, new and malloc, are overloaded. They take two additional arguments: the number of requested elements and the number of elements per minipage:
    Type* ptr = new(50, 1) Type[50];
Every class Type inherits from the SmartProxy template class:
    class Type : public SmartProxy { ... };
The SmartProxy class consists of functions that return a pointer (or a reference) to the instance through its correct view.

Highlights of Instrumentation (2)
All occurrences of potentially shared primitive types are wrapped into fully functional classes:
    int  ->  class _int_ : public SmartProxy { int val; ... };
Global and static objects are copied to our shared space during initialization, and all accesses are redirected to our copies.
When no source code is available, the objects are 'touched':
    memcpy( dest->write(n), src->read(n), n*sizeof(Type) );
All accesses to class data members are instrumented inside the member functions:
    data_mmbr = 0;   ->   smartPointer()->data_mmbr = 0;

Example of Instrumentation

void func( Type* ptr, Type& ref, int num ) {
    for ( int i = 0; i < num; i++ ) {
        ptr->smartPointer()->data += ref.smartReference().data;
        ptr++;                                  // No change!!!
    }
    Type* ptr2 = new(20, 2) Type[20];
    memset( ptr2->write( 20*sizeof(Type) ), 0, 20*sizeof(Type) );
    ptr = &ref;                                 // No change!!!
    ptr2[0].smartReference() = *ptr->smartPointer();
    ptr->member_func( );
}

Currently, the desired number of elements per minipage is specified by the user through source-code annotation.

Loop Optimizations
Original code:
    for ( i = 0; i < num; i++ )
        arr[i].data = i;
Instrumented code (very expensive):
    for ( i = 0; i < num; i++ )
        arr[i].smartReference().data = i;
Optimization when the entire array is on a single minipage:
    if ( num > 0 )
        arr[0].smartReference().data = 0;   // touch first element
    for ( i = 1; i < num; i++ )             // i runs from 1, not from 0
        arr[i].data = i;                    // access the rest of the array without faults
Optimization otherwise (no synchronization inside the loop):
    arr.write( num );                       // touch for writing all minipages of the array
    for ( i = 0; i < num; i++ )             // efficient if the number of elements is high
        arr[i].data = i;                    //   and the number of minipages is low

Reporting Races in MultiRace

Benchmark Specifications (2 threads)

Benchmark  Input set                            Shared memory  # minipages  # write/read faults  # time-frames  Time in sec (no DR)
FFT        2^8 * 2^8                            3MB            4            9/...                ...            ...
IS         2^23 numbers, 2^15 values            128KB          3            60/...               ...            ...
LU         1024*1024 matrix, 32*32 blocks       8MB            5            127/...              ...            ...
SOR        1024*2048 matrices, 50 iterations    8MB            2            202/...              ...            ...
TSP        19 cities, recursion level 12        1MB            9            2792/...             ...            ...
WATER      512 molecules, 15 steps              500KB          3            15438/...            ...            ...

Benchmark Overheads (4-way IBM Netfinity server, 550MHz, Win-NT)

Overhead Breakdown
Numbers above the bars are # write/read faults. Most of the overhead comes from page faults; the overhead due to the detection algorithms themselves is small.

Summary
MultiRace:
- is transparent;
- supports two-way and global synchronization primitives (locks and barriers);
- detects races that actually occurred (Djit+);
- usually does not miss races that could occur under a different scheduling (Lockset);
- is correct for weak memory models;
- is scalable;
- offers flexible detection granularity.

Conclusions
MultiRace makes it easier for programmers to trust their programs: there is no need to add synchronization "just in case". In case of doubt, MultiRace should be activated each time the program executes.

Future/Ongoing Work
- Implement an instrumenting pre-compiler: higher transparency, higher scalability.
- Automatic dynamic granularity adaptation.
- Integrate with a scheduling generator.
- Integrate with record/replay.
- Integrate with the compiler/debugger; may get rid of faults and views.
- Optimizations through static analysis.
- Implement extensions for semaphores and other synchronization primitives.

PADTAD, Nice, April 0344 The End

Minipages and Dynamic Granularity of Detection
A minipage is a shared location that can be accessed using the approach of views. We detect races on minipages, not on a fixed number of bytes. Each minipage is associated with the access history of Djit+ and with a Lockset state. The size of a minipage can vary.

Implementing Access Logging
To record only the first accesses (reads and writes) to shared locations in each time frame, we use the concept of views:
- A view is a region in virtual memory.
- Each view has its own protection: NoAccess / ReadOnly / ReadWrite.
- Each shared object in physical memory can be accessed through each of the three views.
Views help distinguish between reads and writes, enable the realization of the dynamic detection unit, and avoid the false sharing problem.

Disabling Detection
Obviously, Lockset can report false alarms. Djit+, too, detects apparent races that are not necessarily feasible races:
- intentional races,
- unrefined granularity,
- private synchronization.
Detection can be disabled through the use of source-code annotations.

Overheads
The overheads are steady for 1-4 threads: we are scalable in the number of CPUs. The overheads increase for higher numbers of threads, and the number of page faults (both read and write) increases linearly with the number of threads. In fact, any on-the-fly data race detection tool will be unscalable in the number of threads when the number of CPUs is fixed.

Instrumentation Limitations
Currently there is no transparent solution for instrumenting global and static pointers: to monitor all accesses to them, they must be wrapped into classes, so the compiler's automatic pointer conversions are lost. This will not be a problem in Java. Also, all data members of the same class instance always reside on the same minipage; in the future, classes will be split dynamically.

Breakdowns of Overheads

References
- T. Brecht and H. Sandhu. The Region Trap Library: Handling Traps on Application-Defined Regions of Memory. In USENIX Annual Technical Conference, Monterey, CA.
- A. Itzkovitz, A. Schuster, and O. Zeev-Ben-Mordechai. Towards Integration of Data Race Detection in DSM Systems. Journal of Parallel and Distributed Computing (JPDC), 59(2), 1999.
- L. Lamport. Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM, 21(7), 1978.
- F. Mattern. Virtual Time and Global States of Distributed Systems. In Parallel & Distributed Algorithms, pp. 215-226, 1989.

References (cont.)
- R. H. B. Netzer and B. P. Miller. What Are Race Conditions? Some Issues and Formalizations. ACM Letters on Programming Languages and Systems, 1(1), 1992.
- R. H. B. Netzer and B. P. Miller. On the Complexity of Event Ordering for Shared-Memory Parallel Program Executions. In 1990 International Conference on Parallel Processing, vol. 2, pp. 93-97, Aug 1990.
- R. H. B. Netzer and B. P. Miller. Detecting Data Races in Parallel Program Executions. In Advances in Languages and Compilers for Parallel Processing, MIT Press, 1991.

References (cont.)
- S. Savage, M. Burrows, G. Nelson, P. Sobalvarro, and T. E. Anderson. Eraser: A Dynamic Data Race Detector for Multithreaded Programs. ACM Transactions on Computer Systems, 15(4), 1997.
- E. Pozniansky. Efficient On-the-Fly Data Race Detection in Multithreaded C++ Programs. Research Thesis.
- O. Zeev-Ben-Mordehai. Efficient Integration of On-The-Fly Data Race Detection in Distributed Shared Memory and Symmetric Multiprocessor Environments. Research Thesis, May 2001.

Overheads

Overheads
The testing platform: 4-way IBM Netfinity, 550 MHz, 2GB RAM, Microsoft Windows NT.