Locks and Threads and Monads—OOo My
Stephan Bergmann – StarOffice/OpenOffice.org, Sun Microsystems



● Tomorrow's hardware... and today's software
● Stateful vs. functional... in parallel
● UNO to the rescue?

CPUs: Broader, Not Faster
Today, CPU speed no longer increases the way it did in past decades. Instead, consumer machines are equipped with increasing numbers of parallel execution units (multiple CPUs, hyperthreading). Herb Sutter: “[...] applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains [...]”

OOo Today
Mostly a single-threaded application, built around a GUI event loop. Few additional threads:
> filename autocompletion in the file picker, ...,
> remote UNO connections.
An example consequence: opening a large Writer document takes a while, and you cannot start searching through it right away.

OOo Today
Much of the OOo code was written with a single-threaded application in mind. Multi-threading support was added afterwards (the “global solar mutex”). An example consequence: multiple incoming remote UNO connections (i.e., multiple active threads) are likely to crash OOo.

Shared state threading
Extremely hard to get right.
Example: What is a recursive mutex good for?
> David Butenhof: “A correct and well understood design does not require recursive mutexes.”
Example: Issue 67191: osl_waitCondition did not work properly from day one, yet this was detected only years later.
Example: Are Old/NewValue in PropertyChangeEvent of any use?
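Even the simplest shared-state pattern illustrates the point. Below is a hypothetical sketch (not from the talk, names illustrative): worker threads bump a shared counter guarded by an MVar. With the lock, the result is deterministic; replace modifyMVar_ with an unguarded read followed by a write and updates can silently be lost.

```haskell
import Control.Concurrent
import Control.Monad

-- Run nThreads workers, each incrementing a shared counter nIncrs times,
-- with every read-modify-write protected by the counter's MVar.
lockedCount :: Int -> Int -> IO Int
lockedCount nThreads nIncrs = do
  counter <- newMVar 0
  dones <- replicateM nThreads $ do
    done <- newEmptyMVar
    _ <- forkIO $ do
      replicateM_ nIncrs (modifyMVar_ counter (return . (+ 1)))
      putMVar done ()  -- signal completion
    return done
  mapM_ takeMVar dones  -- join all workers
  readMVar counter

main :: IO ()
main = lockedCount 100 10 >>= print  -- always 1000
```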

Dilemma Can we reasonably expect to make use of multiple parallel execution units in OOo using the shared state threading model we love and hate? No! What then?

A little rant intermezzo Many CS concepts seem to be little known across the industry: > “So, what your suggestion amounts to is to add closures to OOo Basic, right?” — “Closures???” > “But UNO does not support structural subtyping.” — “C struct types???” > Scott Meyers: “I have a Ph.D. in Computer Science, and I’d never heard of F-bounded polymorphism.”

Look around! Other approaches to programming (concurrent) applications:
> Declarative models with logic variables (e.g., Oz).
> Non-strict functional models (e.g., Haskell).
Philip Greenspun: “Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”
Remember: There are many interesting approaches, and there is no silver bullet.

Oz
Logic (dataflow) variables and lightweight threads:

  thread List = "a"|X1 end
  thread X1 = "b"|X2 end
  thread X2 = "c"|nil end
  {Length List}

  fun {Map F Xs}
     case Xs of nil then nil
     [] X|Xr then thread {F X} end|{Map F Xr}
     end
  end

Concepts, Techniques, and Models of Computer Programming by Van Roy and Haridi.
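For readers without Oz at hand, the core dataflow idea can be sketched in Haskell, the deck's other example language: an MVar that is written exactly once behaves like a single-assignment dataflow variable, and readers block until some thread binds it. This is an illustrative analogue, not part of the talk.

```haskell
import Control.Concurrent

-- A producer thread binds the variable; the consumer blocks until then.
dataflow :: IO Int
dataflow = do
  x <- newEmptyMVar
  _ <- forkIO (putMVar x (sum [1 .. 100 :: Int]))  -- producer binds x
  readMVar x                                       -- blocks until x is bound

main :: IO ()
main = dataflow >>= print  -- 5050
```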

Haskell
Non-strict (“lazy”):

  f :: Float -> Float
  f _ = 5.0

  f (1.0 / 0.0)   -- the argument is never evaluated

  squares :: Int -> [Int]
  squares n = take n (map (\x -> x * x) [1..])   -- [1..] is infinite

Monadic IO:

  main :: IO ()

  wordCount :: IO Int
  wordCount = do
    putStr "input: "
    l <- getLine
    return (length (words l))
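As a runnable companion to the slide's fragments (assuming GHC): since 1.0 / 0.0 is IEEE infinity rather than a crash, undefined makes the non-strictness point more sharply.

```haskell
f :: Float -> Float
f _ = 5.0

squares :: Int -> [Int]
squares n = take n (map (\x -> x * x) [1..])

main :: IO ()
main = do
  print (f undefined)  -- 5.0: the argument is never evaluated
  print (squares 5)    -- [1,4,9,16,25]: only 5 elements of [1..] are demanded
```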

Software Transactional Memory
Don't pessimistically lock data; optimistically use the data and then commit a bunch of operations: the commit either succeeds, or fails and the transaction restarts.
> Easier to program.
> Works best in low-contention scenarios.
> Integrates nicely into Haskell:

  newTVar :: a -> STM (TVar a)
  readTVar :: TVar a -> STM a
  writeTVar :: TVar a -> a -> STM ()
  atomically :: STM a -> IO a
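A minimal runnable sketch using GHC's stm library (the function names transfer and demo are illustrative, not from the talk): two TVar "accounts" are updated in a single transaction, so no other thread can ever observe the money in flight.

```haskell
import Control.Concurrent.STM

-- Move n units between two TVars in one atomic transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

demo :: IO (Int, Int)
demo = do
  from <- newTVarIO 100
  to   <- newTVarIO 0
  atomically (transfer from to 30)
  (,) <$> readTVarIO from <*> readTVarIO to

main :: IO ()
main = demo >>= print  -- (70,30)
```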

And it's not only concurrency
For example, resource management:
> C malloc/free: a nightmare to get them properly paired.
> C++ RAII: better, but (a) often not used (witness many OOo crash reports), and (b) bad when destruction can fail (fclose).
> Java try/finally: cumbersome, especially when using multiple resources.
> Haskell: higher-order functions!

And it's not only concurrency

  withOpenFile :: Handle -> (Handle -> IO a) -> IO a
  withOpenFile h f = finally (f h) (hClose h)

  copyAndClose :: Handle -> Handle -> IO ()
  copyAndClose h1 h2 =
    withOpenFile h1 (\_ ->
      withOpenFile h2 (\_ -> do
        x <- hGetContents h1
        hPutStr h2 x
        return ()))

  do h1 <- openFile "input" ReadMode
     h2 <- openFile "output" WriteMode
     copyAndClose h1 h2

UNO
Conceptually, UNO consists of threads concurrently invoking methods on (shared) objects. Each UNO object has to ensure that concurrent invocations of its methods are safe.
> Hard to avoid deadlock.
> Single method calls are often the wrong locking granularity.
> Unnecessary locking costs in single-threaded use.
> Java had the same problem (e.g., StringBuffer → StringBuilder).

UNO
Does this fit a (massively) concurrent world? Not really:
> The emerging threading framework tends to cluster objects in cages when they should be free (e.g., individual paragraphs of a text document model).
> The two-level approach (a language-independent model on top of language bindings) hampers innovation (e.g., language-supported lightweight threads, language-supported STM).
(UNO does help to integrate new languages.)

Conclusion
An OOo running correctly on 1–2 processing units is important, but an OOo running efficiently on 8–16 processing units will become just as important.
> Find places in OOo where things can be done in parallel.
> Know how to write good code that achieves this.
> Have fun with a snappy application.

Mistrust all enterprises that require new clothes. —E. M. Forster