
Bridging the Gap: Programming Sensor Networks with Application-Specific Virtual Machines
Philip Levis, UC Berkeley; David Gay, Intel Research Berkeley; David Culler, UC Berkeley

4.vi.2004 NEST Retreat 2
Sensor Network Deployments
Need to modify behavior after deployment
–Evolving requirements, parameter tuning
Energy is the critical resource
Embedment complicates node failure
We need an efficient and safe way to reprogram sensor network deployments.

4.vi.2004 NEST Retreat 4
Application-Specificity
Deployments are for an application class
–Tracking, habitat monitoring
–Don't need complete generality in a given network
Classes may have different programming models
–Habitat monitoring: queries
–Tracking: neighborhoods/regions
–Need generality across networks
–Implementation alternatives
Application-specific virtual machines (ASVMs) can provide a way to safely and efficiently program deployed sensor networks.

4.vi.2004 NEST Retreat 6
Programming Space
[chart: generality vs. efficiency design space, locating Config, Deluge/Xnp, and ASVMs]

4.vi.2004 NEST Retreat 7
Three Layers
Programming Layer: SQL-like queries, data-parallel operators, scripts (expressivity, simplicity)
Transmission Layer: application-specific VM bytecodes (efficiency, safety)
Execution Layer: nesC, binary code, changed rarely (optimizations, resource management, hardware)

4.vi.2004 NEST Retreat 9
Outline
Programming sensor network deployments
RegionsVM
QueryVM
Maté: an architecture for building ASVMs
Separating the transmission layer
Conclusion

4.vi.2004 NEST Retreat 10
Abstract Regions
"Programming Sensor Networks with Abstract Regions," Matt Welsh and Geoff Mainland, NSDI 2004
Proposed "abstract regions" as a data primitive for data-parallel operations
Single lightweight "fiber" for synchronous code
Compile a regions program into a TinyOS image
–20KB for a single-region program
–Binary code: unsafe

4.vi.2004 NEST Retreat 11
RegionsVM
Export regions primitives as functions
–Creation
–Reductions
–Tuple space put/get
Regions programs become small scripts
–Easy to install
–Virtual code: safety

4.vi.2004 NEST Retreat 12
Regions Fiber vs. RegionsVM

Regions Fiber (C, compiled into the TinyOS image):

region = k_nearest_region_create(8);
while (true) {
  val = get_sensor_reading();
  region.putvar(v_key, val);
  region.putvar(x_key, val * loc.x);
  region.putvar(y_key, val * loc.y);
  if (val > threshold) {
    max_id = region.reduce(OP_MAXID, v_key);
    if (max_id == my_id) {
      sum = region.reduce(OP_SUM, v_key);
      sum_x = region.reduce(OP_SUM, x_key);
      sum_y = region.reduce(OP_SUM, y_key);
      centroid.x = sum_x / sum;
      centroid.y = sum_y / sum;
      send_to_basestation(centroid);
    }
  }
  sleep(periodic_delay);
}

RegionsVM script (TinyScript):

call KNearCreate();
for i = 1 until 0
  val = call cast(call mag());
  call KNearPutVar(0, val);
  call KNearPutVar(1, val * call locX());
  call KNearPutVar(2, val * call locY());
  if (val > threshold) then
    max_id = call KNearReduceMaxID(0);
    if (max_id = my_id) then
      sum = call KNearReduceAdd(0);
      sum_x = call KNearReduceAdd(1);
      sum_y = call KNearReduceAdd(2);
      buffer[0] = sum_x / sum;
      buffer[1] = sum_y / sum;
      call send(buffer);
    end if
  end if
  call sleep(periodic_delay);
next i

4.vi.2004 NEST Retreat 14
Cost Breakdown
                    Regions Fiber   RegionsVM
TinyOS image size   19K             39K
Data size           2.25K           3.02K
Program size        19K             71 bytes
Safety              No              Yes
Max. concurrency    1               N

Reduces program size by 99.6%
Provides safety
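As a sanity check on the headline number: a 71-byte script replacing a 19K native program is a 99.6% reduction. A minimal sketch of the arithmetic (whether "19K" means 19,000 or 19*1024 bytes is an assumption here; both round to the same figure):

```c
#include <assert.h>

/* Per-mille size reduction, rounded to nearest, using integer math only
 * (the kind of arithmetic a mote itself could do). */
int reduction_permille(long old_bytes, long new_bytes) {
    return (int)(1000 - (1000 * new_bytes + old_bytes / 2) / old_bytes);
}
```

With old_bytes = 19 * 1024 and new_bytes = 71 this yields 996 per mille, i.e. the 99.6% quoted on the slide.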

4.vi.2004 NEST Retreat 15
Outline
Programming sensor network deployments
RegionsVM
QueryVM
Maté: an architecture for building ASVMs
Separating the transmission layer
Conclusion

4.vi.2004 NEST Retreat 16
TinyDB
Declarative SQL-like queries
–Programs are direct binary encodings of queries
Epoch-based data collection
In-network aggregation up a routing tree

4.vi.2004 NEST Retreat 17
QueryVM
Imperative framework for processing queries
Execution events for routing intercept
–Same as the TinyDB approach
–Aggregate by examining forwarded messages
Uses motlle as its language
–Not as simple as SQL, but more powerful
–Comparatively rich data types (vectors, lists, etc.)

4.vi.2004 NEST Retreat 18
Three Queries
Simple: basic periodic data collection (light)
Conditional: send an EWMA of temperature if light is above a threshold
SpatialAvg: periodic collection of spatially averaged temperature readings
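The Conditional query's logic can be sketched in C. The smoothing factor (alpha = 1/8), the fixed-point update, and all names here are illustrative assumptions; in QueryVM this would actually be expressed as a motlle script.

```c
#include <assert.h>

/* EWMA with alpha = 1/8, in integer fixed-point arithmetic as a mote
 * without an FPU would compute it. State is per-node. */
#define ALPHA_DEN 8

static int ewma;
static int have_sample = 0;

/* Fold a new temperature reading into the running average. */
int ewma_update(int temperature) {
    if (!have_sample) {
        ewma = temperature;      /* first sample seeds the average */
        have_sample = 1;
    } else {
        ewma += (temperature - ewma) / ALPHA_DEN;
    }
    return ewma;
}

/* The query's predicate: only report when light exceeds the threshold. */
int should_report(int light, int threshold) {
    return light > threshold;
}
```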

4.vi.2004 NEST Retreat 19
Results
[table: code size and power draw (mW) per query, TinyDB vs. QueryVM, for Simple, Conditional (155 / NA), and SpatialAvg; most values lost in transcription]
Energy savings due to more efficient interpretation

4.vi.2004 NEST Retreat 20
Outline
Programming sensor network deployments
RegionsVM
QueryVM
Maté: an architecture for building ASVMs
Separating the transmission layer
Conclusion

4.vi.2004 NEST Retreat 21
Maté Architecture Template
[diagram: shared core (Scheduler, Concurrency Manager, Capsule Store) plus application-specific extensions (Contexts, Operations, Capsules)]

4.vi.2004 NEST Retreat 23
Building an ASVM
User specifies three things:
–Contexts: events that trigger execution
–Language: TinyScript or motlle, plus a set of primitives
  TinyScript: BASIC-like minimalist language
  motlle: Scheme-like language with C syntax
–Functions: higher-level operations
Functions and primitives define the VM's operations (instruction set)
Maté core, shared across all VMs:
–Scheduler: execution
–Capsule Store: code capsule storage and dissemination
–Concurrency Manager: concurrency control

4.vi.2004 NEST Retreat 24
Sample VM Description File (RegionsVM)

4.vi.2004 NEST Retreat 25
Scheduler: Data and Execution Model
Contexts execute in response to an event
–Timer, reboot, receive a packet
–Submit context to the concurrency manager
–Concurrency manager ensures race-free access to shared resources, submits runnables to the scheduler
Contexts execute synchronously
–Operations encapsulate split-phase operations behind a blocking interface
Contexts have an operand stack
–Passes parameters to functions
–Two basic types: values and sensor readings
Operations provide additional data storage
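The execution model on this slide, bytecodes operating on a per-context operand stack, can be illustrated with a toy interpreter loop. The opcodes, their encoding, and the stack depth are invented for illustration; Maté's real instruction set and its split-phase handling are more involved.

```c
#include <assert.h>

#define STACK_DEPTH 16

/* A context: program counter plus operand stack, as on the slide. */
typedef struct {
    int stack[STACK_DEPTH];
    int sp;                      /* next free stack slot */
    const unsigned char *pc;     /* next bytecode to fetch */
} Context;

enum { OP_HALT = 0, OP_PUSH, OP_ADD };  /* OP_PUSH takes a 1-byte immediate */

/* Run until OP_HALT; return the value left on top of the stack. */
int run(Context *c) {
    for (;;) {
        switch (*c->pc++) {
        case OP_PUSH:
            c->stack[c->sp++] = *c->pc++;
            break;
        case OP_ADD: {
            int b = c->stack[--c->sp];
            int a = c->stack[--c->sp];
            c->stack[c->sp++] = a + b;
            break;
        }
        case OP_HALT:
            return c->stack[c->sp - 1];
        }
    }
}
```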

4.vi.2004 NEST Retreat 26
Capsule Store and Concurrency Manager
Automatically propagates code using a series of network trickles (version numbers, fragment status, fragments)
Analyzes new code to detect which shared resources are used
Controls context concurrency to ensure race-free, deadlock-free execution
For more details, refer to the paper
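The "network trickles" mentioned above follow the Trickle algorithm's core rules: double the advertisement interval while the neighborhood agrees, reset it when an inconsistency (e.g. a different code version) is heard, and suppress a broadcast if enough identical advertisements were already overheard. A sketch with illustrative constants, omitting Trickle's randomized firing point within each interval:

```c
#include <assert.h>

typedef struct {
    int tau;       /* current interval length */
    int tau_min;   /* smallest interval (fast repair) */
    int tau_max;   /* largest interval (quiescent network) */
    int counter;   /* identical advertisements heard this interval */
    int k;         /* suppression threshold */
} Trickle;

/* Interval ended with everyone consistent: back off exponentially. */
void trickle_interval_end(Trickle *t) {
    t->tau = (t->tau * 2 > t->tau_max) ? t->tau_max : t->tau * 2;
    t->counter = 0;
}

/* Heard something inconsistent (e.g. a newer capsule version): speed up. */
void trickle_inconsistent(Trickle *t) {
    t->tau = t->tau_min;
    t->counter = 0;
}

/* At the firing point: broadcast only if the neighborhood was quiet. */
int trickle_should_send(const Trickle *t) {
    return t->counter < t->k;
}
```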

4.vi.2004 NEST Retreat 27
Maté Architecture Template
[diagram: shared core (Scheduler, Concurrency Manager, Capsule Store) plus application-specific extensions (Contexts, Operations, Capsules)]

4.vi.2004 NEST Retreat 28
Extensions: Operations and Contexts
Every operation and context has a nesC component
–Some implement multiple operations: getvar/setvar
–Some contexts have operations: timers
Wiring these to the VM template customizes the VM to a particular application
–Contexts: when the VM executes
–Operations: what the VM can do when it executes
Components are small and simple
–Example: operations

4.vi.2004 NEST Retreat 29
Outline
Programming sensor network deployments
RegionsVM
QueryVM
Maté: an architecture for building ASVMs
Separating the transmission layer
Conclusion

4.vi.2004 NEST Retreat 30
Three Layers
Programming Layer: SQL-like queries, data-parallel operators, scripts (expressivity, simplicity)
Transmission Layer: application-specific VM bytecodes (efficiency, safety)
Execution Layer: nesC, binary code, changed rarely (optimizations, resource management, hardware)

4.vi.2004 NEST Retreat 31
Regions Model
Programming Layer: SQL-like queries, data-parallel operators, scripts (expressivity, simplicity)
Transmission Layer: binary code
Execution Layer: nesC, binary code, changed rarely (optimizations, resource management, hardware)

4.vi.2004 NEST Retreat 32
TinyDB Model
Programming Layer: SQL-like queries, data-parallel operators, scripts (expressivity, simplicity)
Transmission Layer: SQL queries
Execution Layer: nesC, binary code, changed rarely (optimizations, resource management, hardware)

4.vi.2004 NEST Retreat 33
Source of Inefficiency
Regions conflates execution and transmission
–Inefficient propagation
TinyDB conflates programming and transmission
–Inefficient execution
Separating the transmission layer enables both efficient propagation and efficient execution, while remaining flexible to different programming models.

4.vi.2004 NEST Retreat 34
Outline
Programming sensor network deployments
RegionsVM
QueryVM
Maté: an architecture for building ASVMs
Separating the transmission layer
Conclusion

4.vi.2004 NEST Retreat 35
Bridging the Gap
Bridge the gap between programming and execution with an application-specific virtual machine
Provides safety and efficient propagation
Flexible architecture can support a range of programming models and languages

4.vi.2004 NEST Retreat 36
Status
There is a current Maté release, 1.1 (March)
–Allows VM composition and TinyScript programming
Current work (this summer):
–Cleaning up code
–More general UI
–Expanding the library of contexts/functions
–Incorporating motlle into the distribution
–Power management
Actively supported: try it out and tell me what you think or need!

4.vi.2004 NEST Retreat 37
Questions

4.vi.2004 NEST Retreat 38
Sample VM Description File (RegionsVM)

4.vi.2004 NEST Retreat 39
Operations

interface MateBytecode {
  command result_t execute(uint8_t instr, MateContext* context);
  command uint8_t byteLength();
}

module OPpopM {
  provides interface MateBytecode as Code;
  uses interface MateStacks as Stacks;
}
implementation {
  command result_t Code.execute(uint8_t i, MateContext* ctxt) {
    call Stacks.popOperand(ctxt);
    return SUCCESS;
  }
  command uint8_t Code.byteLength() { return 1; }
}

4.vi.2004 NEST Retreat 40
Wiring Operations

module MateEngineM { // Scheduler
  uses interface MateBytecode as Bytecode[uint8_t op];
}
implementation {
  result_t execute(MateContext* context) {
    ...fetch next bytecode...
    context->pc += call Bytecode.byteLength[op]();
    call Bytecode.execute[op](op, context);
  }
}

// Automatically generated by the toolchain
configuration MateTopLevel {
  components MateEngineM as VM, ...;
  VM.Bytecode[OPadd] -> OPadd;
  VM.Bytecode[OPsend] -> OPsend;
  VM.Bytecode[OPsettimer] -> OPsettimer;
  VM.Bytecode[OPgetvar] -> OPgetvar4;
  ...
  VM.Bytecode[OPgetvar+7] -> OPgetvar4;
}