Slide 1: Container Shipping Breakout Group

Participants:
– Cory Sharp [scribe], UC Berkeley
– Thomas Sereno, SAIC
– Ken Traub, Connecterra
– Ron Kyker, Sandia National Labs
– Malena Mesarina, Hewlett Packard Labs
– Robert Szewczyk, UC Berkeley
– Phil Buonadonna, Intel
– Mike Manzo, UC Berkeley
– Jae-Hyuk Oh, United Technologies

NEST Retreat, June 2004, Santa Cruz, CA
Slide 2: Deployments

– Shipping container security
  – Intrusion detection
  – Adversarial; networking
– Container monitoring
  – Report if the environment is out of tolerance
  – Take immediate action
  – Non-adversarial; networking
– Radioactive detection
  – Adversarial monitoring
– Pedigree
  – Log environment data
  – Report once at the end, or at least infrequently
  – Non-adversarial; networking
Slide 3: Value Add Over a Simple, Cheap, One-shot Sensor

– Data processing
  – Simple, cheap sensors can only detect exceeded tolerances
– Data collection
  – More data is available
  – Manual collection consumes resources
– Latency
  – Human-inspected detectors significantly increase latency
Slide 4: Sensing Modes

– Monitoring
  – Adversarial (security) vs. non-adversarial (environmental)
  – Mesh networking, low-latency reports
– Pedigree
  – Networking is less significant
– What kinds of sensing?
Slide 5: Installation Options

– Permanent fixture of the container
– Vendor product
– Consumer devices
Slide 6: Core Issues

– Lifetime
  – 5 years is the far upper bound (upgrades)
– Cost
  – $50/container/port total cost
– Localization
  – Where is container 1172? (global vs. local)
  – Audience note: shipping companies know precisely what is in every container and where it is, since this is needed for balancing
– RF connectivity
  – RF leaks through non-water-tight containers
– Data logging
– Interoperability between multiple vendors
Slide 7: Core Issues 2

– Security
  – Compromised network (authentication, encryption, …)
  – Denial of service, network: spamming packets to drive nodes toward energy exhaustion
  – Denial of service, sensing: bananas and kitty litter excite radioactive detectors
– Detecting events in variable environments
  – On a ship, temperature, pressure, and vibration all change
  – How do you distinguish normal variance from exceptional variance?
– Calibration
  – Logged and reported data must be meaningful
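The "normal versus exceptional variance" question above is essentially online anomaly detection against a drifting baseline. One minimal sketch, assuming an exponentially weighted running mean and variance with a z-score test (the class name, constants, and warmup scheme here are illustrative assumptions, not anything proposed in the breakout notes):

```python
class BaselineDetector:
    """Track a running baseline and flag readings that deviate sharply."""

    def __init__(self, alpha=0.05, z_threshold=6.0, warmup=20):
        self.alpha = alpha              # EWMA smoothing factor
        self.z_threshold = z_threshold  # how many "sigmas" count as exceptional
        self.warmup = warmup            # samples to observe before flagging
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, x):
        """Feed one sensor reading; return True if it looks exceptional."""
        self.n += 1
        if self.mean is None:
            self.mean = x               # first sample seeds the baseline
            return False
        diff = x - self.mean
        std = self.var ** 0.5
        exceptional = (
            self.n > self.warmup and std > 0 and abs(diff) > self.z_threshold * std
        )
        # Update the baseline (EWMA forms of mean and variance)
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return exceptional
```

Because the baseline adapts slowly, routine shipboard swings in temperature or vibration widen the tolerated band, while a sudden excursion well outside recent history is flagged; the warmup period keeps the detector quiet until it has seen enough samples to estimate normal variance.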
Slide 8: Problems

– Validation: how do you know you received data from all the containers?
– Robustness and reliability
  – What happens if you lose 10% of the nodes?
  – Do losses generate false positives?
  – Add redundancy? At what cost?
– Latency of data
  – Certain modes may want to spend a lot of energy to guarantee reports ("fire alarm")
– Data glut/trunk, energy exhaustion
  – 10,000 nodes reporting
  – Address with multiple gateways / base stations
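The validation and false-positive questions above can be made concrete with a roster check: compare the ship's manifest of container IDs against the IDs actually heard each reporting round, and only alarm after several consecutive silent rounds so that transient radio losses do not trip the alarm. A sketch under those assumptions (the field names and the three-round rule are hypothetical, not from the notes):

```python
def silent_containers(manifest, reports):
    """Return the set of container IDs that did not report this round."""
    heard = {r["container_id"] for r in reports}
    return set(manifest) - heard

def update_silence(counts, manifest, reports, alarm_after=3):
    """Track consecutive silent rounds per container; return alarmed IDs.

    counts is a dict mapping container ID -> consecutive silent rounds,
    mutated in place so it can carry over between rounds.
    """
    heard = {r["container_id"] for r in reports}
    alarmed = set()
    for cid in manifest:
        counts[cid] = 0 if cid in heard else counts.get(cid, 0) + 1
        if counts[cid] >= alarm_after:
            alarmed.add(cid)
    return alarmed
```

With `alarm_after=3`, a container that misses one round (say, a dropped packet) stays quiet on the operator's console; one that misses three rounds in a row is reported missing, trading detection latency for fewer false positives.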