1 Software Development Infrastructure for Sensor Networks: Operating systems (TinyOS), Resource (device) management, Basic primitives, Protocols.


1 Software Development Infrastructure for Sensor Networks
- Operating systems (TinyOS)
  - Resource (device) management
  - Basic primitives
  - Protocols (MAC, routing) {covered in many previous lectures}
- Programming languages and runtime environments
  - Low level (C, nesC)
  - Very high level abstractions (Regiment)
- Services needed
  - Synchronization of clocks (RBS)
  - Propagation of new code to all nodes (Trickle)
  - Localization {not today}

2 Maintenance overheads
- Radio transmission scheduling (e.g., TDMA) + route discovery & maintenance + clock synchronization + code propagation protocol (system) + sensor data delivery (application)
- Be aware of all this background message activity when designing application-level suppression.

3 Programming Platform for Sensor Networks: TinyOS
Compliments of Jun Yang, with content from D.-O. Kim, D. Culler, W. Xu

4 Challenges in programming sensors
- WSNs usually have severe power, memory, and bandwidth limitations
- WSNs must respond to multiple, concurrent stimuli, at the speed of changes in the monitored phenomena
- WSNs are large-scale distributed systems

5 Traditional embedded systems
- Event-driven execution and real-time scheduling
- General-purpose layers are often bloated → use a microkernel
- Strict layering often adds overhead → expose hardware controls

6 Node-level methodology and platform
- Traditional design methodologies are node-centric
- Node-level platforms
  - Operating system: abstracts the hardware on a sensor node; provides services for apps such as, traditionally, file management, memory allocation, task scheduling, device drivers, networking
  - Language platform: provides a library of components to programmers

7 TinyOS
- Started out as a research project at Berkeley; now probably the de facto platform
- Overarching goal: conserving resources
  - No file system
  - No dynamic memory allocation
  - No memory protection
  - Very simple task model
  - Minimal device and networking abstractions
- Application and OS are coupled: composed into one image
- Both are written in a special language, nesC

8 TinyOS components
- Components: reusable building blocks
- Each component is specified by a set of interfaces, which provide "hooks" for wiring components together
- A component C can provide an interface I
  - C must implement all commands available through I
  - Commands are methods exposed to an upper layer; an upper layer can call a command
- A component C can use an interface J
  - C must implement all events that can be signaled by J
  - Events are methods available to a lower layer; by signaling an event, the lower layer calls the appropriate handler
- Components are then wired together into an application

9 Component specification

module TimerModule {
  provides {
    interface StdControl;
    interface Timer01;
  }
  uses interface Clock as Clk;
}

interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

interface Timer01 {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t timer0Fire();
  event result_t timer1Fire();
}

interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}

10 Modules vs. configurations
- Two types of components
  - Module: implements the component specification (interfaces) with application code
  - Configuration: implements the component specification by wiring existing components

11 Module implementation

module TimerModule {
  provides {
    interface StdControl;
    interface Timer01;
  }
  uses interface Clock as Clk;
}
implementation {
  bool eventFlag;  // internal state

  command result_t StdControl.init() {
    eventFlag = 0;
    return call Clk.setRate(128, 4);  // 4 ticks per sec
  }

  event result_t Clk.fire() {
    // signal works just like a method call (unlike raising exceptions in Java, e.g.)
    eventFlag = !eventFlag;
    if (eventFlag) signal Timer01.timer0Fire();
    else signal Timer01.timer1Fire();
    return SUCCESS;
  }
  …
}

12 Configuration implementation
- = (equate wire): at least one side must be external
- -> (link wire): both internal; goes from user to provider

configuration TimerConfiguration {
  provides {
    interface StdControl;
    interface Timer01;
  }
}
implementation {
  components TimerModule, HWClock;
  StdControl = TimerModule.StdControl;
  Timer01 = TimerModule.Timer01;
  TimerModule.Clk -> HWClock.Clock;
}

13 Commands vs. events
- Calling a command (user side):
  { … status = call UsedInterfaceName.cmdName(args); … }
- Implementing a command (provider side):
  command cmdName(args) { … return status; }
- Signaling an event (provider side):
  { … status = signal ProvidedInterfaceName.eventName(args); … }
- Handling an event (user side):
  event UsedInterfaceName.eventName(args) { … return status; }

14 FieldMonitor example

15 So… why wiring?
- Why not C/C++/Java? They allow the use of function pointers and allow binding to be resolved at runtime → flexibility
- Wiring gives less flexibility, but now the call graph can be determined at compile time!
  - Calls across components (5-6 deep) can be inlined into a flat instruction stream with no function calls
  - Lots of static analysis and optimization become possible

16 Concurrency model
Two types of execution contexts:
- Tasks
  - Longer-running jobs; time flexible
  - (Currently) simple FIFO scheduling
  - Atomic w.r.t. other tasks, i.e., single-threaded, but can be preempted by events
- Events (an overloaded term; more precisely, hardware interrupt handlers)
  - Time critical; keep their duration as short as possible by issuing tasks for later execution
  - LIFO semantics; can preempt tasks and earlier events

17 Tasks
- A task is always posted for later execution; control returns to the poster immediately
- The scheduler supports a bounded queue of pending tasks
- The node sleeps when the queue is empty
- For simplicity, tasks take no arguments and return no values
- Typical use: an event necessitates further processing
  - Wrap the processing up in a task
  - The event handler simply posts the task and can return immediately

implementation {
  task void TaskName() { … }
  …
  … { … post TaskName(); … }
}

18 Execution example
[Figure: hardware interrupts (Event 1, 2, 3) preempt execution as they arrive, while Task A, Task B, Task C wait on the FIFO task queue]

19 A more complex example
- Timer01.timer0Fire() triggers data acquisition (through ADC) and transmission to the base station (through Send)

module SenseAndSend {
  provides interface StdControl;
  uses {
    interface ADC;
    interface Timer01;
    interface Send;
  }
}
implementation {
  bool busy;
  norace uint16_t sensorReading;
  …
  command result_t StdControl.init() {
    busy = FALSE;
    return SUCCESS;
  }

  event result_t Timer01.timer0Fire() {
    bool localBusy;
    atomic {
      localBusy = busy;
      busy = TRUE;
    }
    if (!localBusy) {
      call ADC.getData();
      return SUCCESS;
    } else {
      return FAILED;
    }
  }

  task void sendData() {
    …
    adcPacket.data = sensorReading;
    …
    call Send.send(&adcPacket, sizeof(adcPacket.data));
  }

  event result_t ADC.dataReady(uint16_t data) {
    sensorReading = data;
    post sendData();
    atomic {
      busy = FALSE;
    }
    return SUCCESS;
  }
  …
}

20 Split-phase operation
- Data acquisition doesn't take place immediately (why?)
- How does a traditional OS accomplish this? It puts the thread to sleep when it blocks
- Why isn't that approach good enough here?

event result_t Timer01.timer0Fire() {
  bool localBusy;
  atomic {
    localBusy = busy;
    busy = TRUE;
  }
  if (!localBusy) {
    call ADC.getData();
    return SUCCESS;
  } else {
    return FAILED;
  }
}

event result_t ADC.dataReady(uint16_t data) {
  sensorReading = data;
  post sendData();
  atomic {
    busy = FALSE;
  }
  return SUCCESS;
}

21 Posting a task in an interrupt handler
- Why? To keep asynchronous code as short as possible

task void sendData() {
  …
  adcPacket.data = sensorReading;
  …
  call Send.send(&adcPacket, sizeof(adcPacket.data));
}

event result_t ADC.dataReady(uint16_t data) {
  sensorReading = data;
  post sendData();
  atomic {
    busy = FALSE;
  }
  return SUCCESS;
}

22 Race conditions
- Because of preemption, race conditions may arise on shared data
- The nesC compiler can detect them, but with false positives
- For a false positive, declare the shared variable with the norace keyword
- For a real race condition, use atomic to make code blocks non-preemptible

bool busy;                      // the atomic test-and-set of busy ensures at most
norace uint16_t sensorReading;  // one outstanding ADC request at all times
…
event result_t Timer01.timer0Fire() {
  bool localBusy;
  atomic {
    localBusy = busy;
    busy = TRUE;
  }
  if (!localBusy) {
    call ADC.getData();
    return SUCCESS;
  } else {
    return FAILED;
  }
}

task void sendData() {
  …
  adcPacket.data = sensorReading;
  …
  call Send.send(&adcPacket, sizeof(adcPacket.data));
}

event result_t ADC.dataReady(uint16_t data) {
  sensorReading = data;
  post sendData();
  atomic {
    busy = FALSE;
  }
  return SUCCESS;
}

23 Discussion
- What are the first low-level details of programming to TinyOS that you would like to abstract away?

24 Discussion
- TinyOS provides a framework for concurrency and modularity
  - Interleaves flows, events, energy management
  - Never poll, never block
- Trades off flexibility for more optimization opportunities
- Still a node-level platform: how do we go from individual sensors to a sensor field?

25 Software Development Infrastructure for Sensor Networks
- Operating systems (TinyOS)
  - Resource (device) management
  - Basic primitives
  - Protocols (MAC, routing) {covered in many previous lectures}
- Programming languages and runtime environments
  - Low level (C, nesC)
  - Very high level abstractions (Regiment)
- Services needed
  - Synchronization of clocks (RBS)
  - Propagation of new code to all nodes (Trickle)
  - Localization {not today}