
Programming Vast Networks of Tiny Devices
David Culler, University of California, Berkeley / Intel Research Berkeley


Slide 1: Programming Vast Networks of Tiny Devices
David Culler, University of California, Berkeley / Intel Research Berkeley
http://webs.cs.berkeley.edu

Slide 2: Programmable Network Fabric (11/14/2002, NEC Intro)
Architectural approach
–new code image pushed through the network as packets
–assembled and verified in local flash
–a second, watchdog processor reprograms the main controller
Viral code approach (Phil Levis)
–each node runs a tiny virtual-machine interpreter
–captures the high-level behavior of the application domain as individual instructions
–packets are "capsules": sequences of high-level instructions
–capsules can forward capsules
Rich challenges
–security
–energy trade-offs
–denial of service (DoS)
Example capsule:
    pushc 1  # Light is sensor 1
    sense    # Push light reading
    pushm    # Push message buffer
    clear    # Clear message buffer
    add      # Append value to buffer
    send     # Send message using AHR
    forw     # Forward capsule
    halt
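A minimal Python sketch of a capsule-style stack interpreter in the spirit of the viral-code approach. The opcode names follow the example capsule, but the encoding, handler semantics, and `run_capsule` API are illustrative assumptions, not Maté's actual implementation.

```python
# Illustrative capsule interpreter sketch (not Mate's real VM).
def run_capsule(capsule, sensors, send_fn, forward_fn=None):
    stack = []                        # operand stack
    for instr in capsule:
        op, *args = instr
        if op == "pushc":             # push a constant
            stack.append(args[0])
        elif op == "sense":           # pop a sensor id, push its reading
            stack.append(sensors[stack.pop()])
        elif op == "pushm":           # push a fresh message buffer
            stack.append([])
        elif op == "clear":           # clear the buffer on top of stack
            stack[-1].clear()
        elif op == "add":             # <val, buf> -> append val to buf
            buf = stack.pop()
            buf.append(stack.pop())
            stack.append(buf)
        elif op == "send":            # pop the buffer and transmit it
            send_fn(stack.pop())
        elif op == "forw":            # self-propagate: forward this capsule
            if forward_fn:
                forward_fn(capsule)
        elif op == "halt":
            break

# The example capsule from the slide, in this sketch's encoding:
sent = []
run_capsule(
    [("pushc", 1), ("sense",), ("pushm",), ("clear",),
     ("add",), ("send",), ("forw",), ("halt",)],
    sensors={1: 42}, send_fn=sent.append)
```

Running the capsule on a node whose light sensor reads 42 transmits the single-value message `[42]`.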

Slide 3: Maté: a Tiny Virtual Machine
Communication-centric stack machine
–7286 bytes of code, 603 bytes of RAM
–dynamically typed
Four context types: send, receive, clock, subroutine (4 subroutines)
–each holds 24 instructions
Fits in a single TinyOS AM packet
–installation is atomic
–self-propagating, with version information
[Figure: Maté context layout: code capsules (subroutines 0–3, clock, send, and receive events), shared variables accessed via gets/sets, an operand stack, a return stack, and a PC.]

Slide 4: Case Study: GDI
Great Duck Island application: a simple sense-and-send loop
–runs every 8 seconds (low duty cycle)
–19 Maté instructions vs. 8 KB of binary code
–energy trade-off: if the GDI application runs for less than 6 days, Maté saves energy

Slide 5: Higher-level Programming?
Ideally, one would specify the desired global behavior, and compilers would translate it into local operations.
High-Performance Fortran (HPF) analog:
–a program is a sequence of parallel operations on large matrices
–each matrix is spread over many processors of a parallel machine
–the compiler translates from the global view to the local view
»local operations + message passing
–highly structured and regular
We need a much richer suite of operations on unstructured aggregates over irregular, changing networks.

Slide 6: Sensor Databases: a Start
Relational databases: rich queries described declaratively over tables of data
–select, join, count, sum, ...
–the user dictates what should be computed
–the query optimizer determines how
–assumes data is presented in complete, tabular form
First step: database operations over streams of data
–incremental query processing
Big step: process the query inside the sensor net
–query processing == content-based routing?
–savings in energy, bandwidth, and reliability
[Figure: an application issues a query or trigger, e.g. SELECT AVG(light) GROUP BY roomNo, to TinyDB, which runs it in the sensor network and streams data back.]

Slide 7: Motivation: Sensor Nets and In-Network Query Processing
Many sensor-network applications are data oriented.
Queries are a natural and efficient data-processing mechanism
–easy (unlike embedded C code)
–enable optimizations through abstraction
Aggregates are the common case
–e.g., which rooms are in use?
In-network processing is a must
–sensor networks are power- and bandwidth-constrained
–communication dominates the power cost
–and is not subject to Moore's law!

Slide 8: SQL Primer
SQL is an established declarative language; we are not wedded to it
–some extensions are clearly necessary, e.g. for sample rates
We adopt a basic subset over a single 'sensors' relation (table), which has
–one column for each reading type, or attribute
–one row for each externalized value
»may represent an aggregation of several individual readings
Query template:
    SELECT {agg_n(attr_n), attrs}
    FROM sensors
    WHERE {selPreds}
    GROUP BY {attrs}
    HAVING {havingPreds}
    EPOCH DURATION s
Example:
    SELECT AVG(light)
    FROM sensors
    WHERE sound < 100
    GROUP BY roomNo
    HAVING AVG(light) < 50
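To make the query semantics concrete, here is a small Python sketch of how the example query could be evaluated over one epoch's readings. The `run_epoch` function and the row-dictionary shape are illustrative assumptions, not TinyDB's API.

```python
from collections import defaultdict

def run_epoch(rows):
    """One epoch of: SELECT AVG(light) FROM sensors
       WHERE sound < 100 GROUP BY roomNo HAVING AVG(light) < 50.
       (Illustrative evaluator, not TinyDB's implementation.)"""
    groups = defaultdict(lambda: [0.0, 0])     # roomNo -> [sum(light), count]
    for r in rows:
        if r["sound"] < 100:                   # WHERE
            g = groups[r["roomNo"]]            # GROUP BY
            g[0] += r["light"]
            g[1] += 1
    return {room: s / c                        # AVG(light)
            for room, (s, c) in groups.items()
            if s / c < 50}                     # HAVING

# One epoch of hypothetical readings:
result = run_epoch([
    {"roomNo": 1, "light": 40, "sound": 50},
    {"roomNo": 1, "light": 60, "sound": 200},   # filtered out by WHERE
    {"roomNo": 2, "light": 100, "sound": 10},   # group fails HAVING
])
```

Only room 1 survives: its one quiet reading averages 40, which passes the HAVING predicate, while room 2's average of 100 does not.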

Slide 9: TinyDB Demo (Sam Madden)
Joe Hellerstein, Sam Madden, Wei Hong, Michael Franklin

Slide 10: Tiny Aggregation (TAG) Approach
Push declarative queries into the network
–impose a hierarchical routing tree on the network
Divide time into epochs.
Every epoch, sensors evaluate the query over local sensor data and data from their children
–aggregate local and child data
–each node transmits just once per epoch
–the pipelined approach increases throughput
Depending on the aggregate function, various optimizations can be applied (e.g., hypothesis testing).

Slide 11: Aggregation Functions
Standard SQL supports "the basic 5": MIN, MAX, SUM, AVERAGE, and COUNT.
We support any function conforming to Agg_n = {f_init, f_merge, f_evaluate}:
    f_init{a}           -> <a>              (a partial aggregate)
    f_merge{<a1>, <a2>} -> <a12>
    f_evaluate{<a>}     -> aggregate value
(f_merge must be associative and commutative!)
Example: AVERAGE, with partial aggregate <S, C> (running sum and count):
    AVG_init{v}                   -> <v, 1>
    AVG_merge{<S1, C1>, <S2, C2>} -> <S1 + S2, C1 + C2>
    AVG_evaluate{<S, C>}          -> S / C
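The AVERAGE triple above can be sketched directly in Python. The dictionary-of-lambdas encoding and the `aggregate` helper are this sketch's own conventions, not TAG's implementation.

```python
# AVERAGE as the {f_init, f_merge, f_evaluate} triple,
# with partial aggregate <S, C> = (running sum, count).
AVG = {
    "init":     lambda v: (v, 1),                          # v -> <v, 1>
    "merge":    lambda a, b: (a[0] + b[0], a[1] + b[1]),   # <S1+S2, C1+C2>
    "evaluate": lambda s: s[0] / s[1],                     # S / C
}

def aggregate(agg, values):
    """Fold values with f_init/f_merge, then apply f_evaluate at the root.
    Because merge is associative and commutative, partial aggregates can
    be combined in any order as they flow up the routing tree."""
    state = agg["init"](values[0])
    for v in values[1:]:
        state = agg["merge"](state, agg["init"](v))
    return agg["evaluate"](state)
```

For example, `aggregate(AVG, [10, 20, 60])` merges partials into `<90, 3>` and evaluates to 30.0; no node ever needs to see all the raw readings at once.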

Slide 12: Query Propagation
TAG is propagation agnostic: it works with any algorithm that can
»deliver the query to all sensors
»provide all sensors with one or more duplicate-free routes to some root
Simple flooding approach:
–the query is introduced at a root and rebroadcast by every sensor until it reaches the leaves
–sensors pick a parent and a level when they first hear the query
–a node reselects its parent after k silent epochs
[Figure: a six-node network flooded with the query; each node records its parent P and level L, e.g. P:0, L:1 at the node nearest the root and P:4, L:4 at the farthest.]
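The flooding approach above can be sketched as a breadth-first spread of the query: each node adopts the first node it hears the query from as its parent and sets its level to the parent's level plus one. The topology used here is a made-up example.

```python
# Sketch of flood-based tree formation (illustrative topology and API).
def build_tree(root, neighbors):
    """neighbors: dict node -> list of nodes within radio range."""
    parent, level = {root: None}, {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for n in frontier:
            for m in neighbors.get(n, []):
                if m not in level:        # first time m hears the query
                    parent[m] = n         # adopt the sender as parent
                    level[m] = level[n] + 1
                    nxt.append(m)
        frontier = nxt
    return parent, level

# Hypothetical four-node network, query introduced at node 0:
parent, level = build_tree(0, {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]})
```

Node 3 is reachable through both 1 and 2, but it keeps only the first parent it hears, so each node ends up with a single duplicate-free route to the root.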

Slide 13: Illustration: Pipelined Aggregation
Query: SELECT COUNT(*) FROM sensors
[Figure: five sensors in a routing tree of depth d; nodes 2 and 3 report to node 1 (the root), node 4 to node 3, and node 5 to node 4.]

Slide 14: Illustration: Pipelined Aggregation (Epoch 1)
Query: SELECT COUNT(*) FROM sensors
Each sensor counts only itself, so every node reports a partial count of 1.
    Sensor #:  1  2  3  4  5
    Epoch 1:   1  1  1  1  1

Slide 15: Illustration: Pipelined Aggregation (Epoch 2)
Query: SELECT COUNT(*) FROM sensors
Each node now reports 1 plus the partial counts its children sent in epoch 1.
    Sensor #:  1  2  3  4  5
    Epoch 1:   1  1  1  1  1
    Epoch 2:   3  1  2  2  1

Slide 16: Illustration: Pipelined Aggregation (Epoch 3)
Query: SELECT COUNT(*) FROM sensors
Partial counts move one level up the tree each epoch.
    Sensor #:  1  2  3  4  5
    Epoch 1:   1  1  1  1  1
    Epoch 2:   3  1  2  2  1
    Epoch 3:   4  1  3  2  1

Slide 17: Illustration: Pipelined Aggregation (Epoch 4)
Query: SELECT COUNT(*) FROM sensors
The root's partial count reaches 5, the true total.
    Sensor #:  1  2  3  4  5
    Epoch 1:   1  1  1  1  1
    Epoch 2:   3  1  2  2  1
    Epoch 3:   4  1  3  2  1
    Epoch 4:   5  1  3  2  1

Slide 18: Illustration: Pipelined Aggregation (Epoch 5)
Query: SELECT COUNT(*) FROM sensors
Steady state: the root now reports the correct COUNT of 5 every epoch.
    Sensor #:  1  2  3  4  5
    Epoch 1:   1  1  1  1  1
    Epoch 2:   3  1  2  2  1
    Epoch 3:   4  1  3  2  1
    Epoch 4:   5  1  3  2  1
    Epoch 5:   5  1  3  2  1
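The epoch-by-epoch values above can be reproduced with a small simulation; the routing tree (nodes 2 and 3 report to 1, node 4 to 3, node 5 to 4) is read off the illustration, and the simulation itself is only a sketch of the pipelined scheme.

```python
# Pipelined COUNT(*): every epoch, each node reports 1 (itself) plus the
# partial counts its children reported in the PREVIOUS epoch.
def pipelined_count(parent, epochs):
    children = {}
    for n, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(n)
    sent = {n: 0 for n in parent}            # nothing reported yet
    history = []
    for _ in range(epochs):
        sent = {n: 1 + sum(sent[c] for c in children.get(n, []))
                for n in parent}
        history.append(sent)
    return history

# Tree from the illustration: 2 and 3 report to 1 (root); 4 to 3; 5 to 4.
history = pipelined_count({1: None, 2: 1, 3: 1, 4: 3, 5: 4}, epochs=5)
```

The simulated rows match the tables on the slides: the root (node 1) climbs 1, 3, 4, 5 and then holds at 5, reflecting that each epoch only pushes partials one hop up the tree.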

Slide 19: Discussion
The result is a stream of values
–ideal for monitoring scenarios
One communication per node per epoch
–symmetric power consumption, even at the root
A new value on every epoch
–after d-1 epochs, the aggregation is complete
Given a single loss, the network recovers after at most d-1 epochs.
With time synchronization, nodes can sleep between epochs, except during a small communication window.
Note: values from different epochs are combined
–can be fixed via a small cache of past values at each node
–cache size is at most one reading per child x the depth of the tree

Slide 20: Testbench & MATLAB Integration
Positioned mica array for controlled studies:
–in situ programming
–localization (RF, time of flight)
–distributed algorithms
–distributed control
–auto-calibration
Out-of-band "squid" instrumentation network
Integrated with MATLAB:
–packets -> MATLAB events
–data processing
–filtering & control

Slide 21: Acoustic Time-of-Flight Ranging
Sounder / tone-detector pair:
–emit a sounder pulse and an RF message
–the receiver uses the message to arm its tone detector
Key challenges: a noisy environment, and calibration
–on-mote noise filter
–calibration is fundamental to the "many cheap" regime
»variations in tone frequency and amplitude, and in detector sensitivity
Collect many measurements and fit a 4-parameter model for each pair:
    T(A->B, x) = O_A + O_B + (L_A + L_B) * x
with O_A, L_A carried in the message and O_B, L_B held locally.
Results: 76% error with no calibration; 10.1% error with joint calibration.
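Because the model T(A->B, x) = O_A + O_B + (L_A + L_B)*x is linear in x, the combined offset O = O_A + O_B and gain L = L_A + L_B for one directed pair can be fit by ordinary least squares from (distance, time) samples. The sketch below illustrates that fit only; the sample values are made up, and separating the per-mote terms across many pairs is not shown.

```python
# Illustrative per-pair calibration via ordinary least squares: fit
# T = O + L * x from (distance x, measured time T) samples.
def fit_pair(samples):
    n = len(samples)
    sx = sum(x for x, _ in samples)
    st = sum(t for _, t in samples)
    sxx = sum(x * x for x, _ in samples)
    sxt = sum(x * t for x, t in samples)
    L = (n * sxt - sx * st) / (n * sxx - sx * sx)   # slope  (gain)
    O = (st - L * sx) / n                           # intercept (offset)
    return O, L

def estimate_distance(O, L, T):
    """Invert the calibrated model to range from a measured time T."""
    return (T - O) / L

# Hypothetical noiseless samples generated with O = 2, L = 3:
O, L = fit_pair([(1, 5), (2, 8), (3, 11), (4, 14)])
```

Once the pair is calibrated, a measured time of 11 inverts to a distance of 3; with real, noisy detections the same fit averages out the per-measurement error.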

