The LSAM Proxy Cache - a Multicast Distributed Virtual Cache. Joe Touch, USC / Information Sciences Institute. Presented by Kuei-Hui Chen, Systems Laboratory, Institute of Computer Science & Engineering, Yuan Ze University. 1997.02.23.


Introduction LSAM = Large-Scale Active Middleware. The LSAM Proxy Cache (LPC) is designed to –support multicast web push of groups of related web pages, –reduce network and server load, –provide increased client performance for associated groups of web pages, called ‘affinity groups.’ Goal: to develop middleware that supports large-scale development of distributed information services with effective economies of scale.

Introduction (2) Middleware is a set of services that –require a distributed, on-going process, –combine OS and networking capabilities, –but cannot be effectively implemented in either the OS or the network alone. LSAM middleware is active, providing self-organization and programmability at the information access level, akin to Active Networking at the packet level.

Architecture Pump (server proxy) –multicasts web pages to the filters, based on affinity groups. Filter (the distributed set of client proxies) –the filters together act as a single virtual proxy cache, –so the request of any one client benefits the others.

Architecture (2-1) Pump –monitors web accesses at a server, –creates/destroys multicast channels as affinity groups become more or less popular, –announces channels on a single, global multicast channel, similar to the teleconference announcement channel in the sd tool.
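The pump's create/destroy behavior can be sketched as a popularity counter per affinity group. This is an illustrative model only: the class name, thresholds, and decay step are assumptions, not the LSAM implementation.

```python
from collections import Counter

class PumpChannelManager:
    """Illustrative sketch of the pump's channel management: count accesses
    per affinity group, open a multicast channel when a group gets popular,
    and tear it down when interest decays."""

    def __init__(self, create_threshold=10, destroy_threshold=2):
        self.create_threshold = create_threshold
        self.destroy_threshold = destroy_threshold
        self.hits = Counter()          # accesses seen per affinity group
        self.active_channels = set()   # groups currently assigned a channel

    def record_access(self, group):
        """Count one access; open a channel if the group crossed the threshold."""
        self.hits[group] += 1
        if (group not in self.active_channels
                and self.hits[group] >= self.create_threshold):
            self.active_channels.add(group)
            return f"ANNOUNCE {group}"  # would go out on the announcement channel
        return None

    def age(self):
        """Periodically decay counters; destroy channels for groups gone cold."""
        torn_down = []
        for group in list(self.active_channels):
            self.hits[group] //= 2
            if self.hits[group] < self.destroy_threshold:
                self.active_channels.discard(group)
                torn_down.append(group)
        return torn_down
```

A channel announcement here is just a string; in the real system it would be a message on the global multicast announcement channel.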

Architecture (2-2) Filter –caches web pages for downstream access, nearer the client, –monitors the announcement channel and looks for ‘interesting’ affinity group channels, –is partitioned, allowing it to join multiple channels and accept multicast downloads without inter-channel cache interference.
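The partitioning idea can be shown with per-channel LRU partitions: each joined channel evicts only within its own partition, so a burst of pushes on one channel cannot flush pages that arrived on another. This is a minimal sketch under assumed names and capacities, not the Apache-based implementation.

```python
from collections import OrderedDict

class PartitionedFilterCache:
    """Sketch of a filter cache partitioned by affinity-group channel:
    each channel has its own fixed-size LRU partition, preventing
    inter-channel cache interference."""

    def __init__(self, per_channel_capacity=100):
        self.capacity = per_channel_capacity
        self.partitions = {}  # channel -> OrderedDict mapping url -> page

    def store(self, channel, url, page):
        part = self.partitions.setdefault(channel, OrderedDict())
        part[url] = page
        part.move_to_end(url)              # mark as most recently used
        if len(part) > self.capacity:
            part.popitem(last=False)       # evict LRU entry from this channel only

    def lookup(self, url):
        for part in self.partitions.values():
            if url in part:
                part.move_to_end(url)
                return part[url]
        return None
```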

Architecture (3) Request –is checked at intermediate proxies, and forwarded through to the server. Response –is multicast to the filters in the group by the pump, –and unicast from the final proxy back to the originating client.

Architecture (4) Subsequent requests –receive pages with in-cache response times, even for pages the clients have not requested before. –Because of the initial multicast push, network resources and server load are reduced for these generally popular pages.

Self-organization

Self-organization (2) All filters that have joined the group receive multicast pushes from the server. –filters upstream of the first shared proxy will leave the multicast group –filters at natural network aggregation points tend to join the multicast tree

Self-organization (3) This self-organization limits the filters that join a channel to only those closest to the clients that share the traffic. Pushing data closer to the clients improves performance. –if more filters closer to the client were in the group, the pushed data would be replicated at filters with even light traffic, consuming bandwidth and cache space needlessly. –the LSAM proxy can be parameterized to balance these two competing incentives, by adjusting its threshold for ‘light’ vs. ‘heavy’ traffic.
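The join decision above reduces to a simple predicate; this sketch uses assumed parameter names (the paper describes the threshold qualitatively, not as a concrete API).

```python
def filter_should_join(channel_request_rate, threshold, upstream_of_shared_proxy):
    """Sketch of the self-organization rule: a filter joins an affinity-group
    channel only if its local traffic for that group is 'heavy' (at or above
    the configured threshold) and no shared proxy downstream of it already
    receives the group's pushes."""
    if upstream_of_shared_proxy:
        return False  # a filter nearer the clients already shares the traffic
    return channel_request_rate >= threshold
```

Raising the threshold keeps lightly loaded filters out of the group (saving bandwidth and cache space); lowering it pushes data to more filters nearer the clients.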

Implementation Issues Apache proxy cache –implementation issues in providing multicast proxy caching in the Apache context. –development issues include multicast channel management, intelligent request routing, object scheduling, dynamic caching algorithms, and support for mobility. The pump/filter implementation takes advantage of Apache's use of the local file system as a cache. –requests are processed according to conventional proxy rules, Figure 7.

Implementation Issues (2)

Implementation Issues (3) Web pages that are members of the affinity group are multicast to the filters. Additional rules govern when responses are unicast or multicast (or both, in some cases), taking into account: –whether the page is in an active affinity group, –whether downstream filters are joined to the page's affinity channel, –whether the page has already been pushed to that channel.
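The three conditions above can be combined into one routing decision. The exact rule set is not spelled out on this slide, so the logic below is an illustrative reading: the originating client always gets a unicast reply, and a multicast push is added only when all three conditions line up.

```python
def delivery_modes(in_active_group, filters_joined, already_pushed):
    """Illustrative sketch of the pump's response-routing rules: return the
    set of delivery modes for a response, given the three conditions the
    slide lists."""
    modes = {"unicast"}  # the requesting client is always answered directly
    if in_active_group and filters_joined and not already_pushed:
        modes.add("multicast")  # push the page to the affinity-group channel
    return modes
```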

Implementation Issues (4) Multicast channel management comprises the pump's algorithm for creating channels based on affinity groups, and the filters' joining of channels related to their requests. This channel management determines –which channels are active at a pump, –which channels each filter joins or leaves, and –how the announcement channel is managed.

Implementation Issues (5)

Finding an affinity group Key aspect - affinity groups –an affinity group is a set of related web pages whose retrievals are cross-correlated; these groups determine what channels a pump creates (or deletes), and what channels a filter joins (or leaves). These groups can be determined by a number of factors: –a-posteriori analysis of correlated retrievals, –syntactic analysis of embedded HREFs, –analysis of semantic groups. In the current implementation, we assume affinity groups are subsets of a single server.
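The a-posteriori approach can be sketched as a plain co-occurrence count over client sessions: pages retrieved together often enough are candidates for the same affinity group. This is a generic mining sketch, not the LSAM project's actual algorithm, and the support threshold is an assumption.

```python
from collections import Counter
from itertools import combinations

def mine_affinity_pairs(sessions, min_support=2):
    """Sketch of a-posteriori affinity mining: pages co-retrieved in at
    least `min_support` client sessions are treated as cross-correlated
    and become candidates for the same affinity group.

    sessions: list of lists of URLs, one list per client session."""
    pair_counts = Counter()
    for pages in sessions:
        # count each unordered page pair once per session
        for a, b in combinations(sorted(set(pages)), 2):
            pair_counts[(a, b)] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_support}
```

Correlated pairs would then be merged into groups (e.g., by connected components) before the pump assigns them a channel.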

Finding an affinity group (2) The LSAM project is currently developing three algorithms for syntactic approximation of semantic affinity groups: –on-line algorithms for the pump and filter, –an off-line algorithm for performance analysis, –analysis of regional Squid caches, which will help determine the current potential for performance enhancement by LSAM.

Current status The current implementation –supports multicast push for a single, static (preconfigured) channel, –supports dynamic auto-configuration of the unicast proxy hierarchy, which can be exported to other proxy caches and clients. Six different cache replacement algorithms have been implemented; the one used is selected in the configuration file at proxy boot time.
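Boot-time selection of a replacement algorithm can be sketched as a table lookup keyed by a config value. The slide does not name the six algorithms, so the policies and config key below are generic examples.

```python
import random

# Each policy takes a list of (url, last_access_time, size) entries
# and returns the URL to evict.

def evict_lru(entries):
    """Evict the least recently used entry."""
    return min(entries, key=lambda e: e[1])[0]

def evict_largest(entries):
    """Evict the largest entry (size-based policy)."""
    return max(entries, key=lambda e: e[2])[0]

def evict_random(entries):
    """Evict a uniformly random entry."""
    return random.choice(entries)[0]

POLICIES = {"LRU": evict_lru, "SIZE": evict_largest, "RAND": evict_random}

def load_policy(config):
    """Pick the replacement algorithm named in the config dict at boot time,
    defaulting to LRU (the config key 'replacement' is a hypothetical name)."""
    return POLICIES.get(config.get("replacement", "LRU"), evict_lru)
```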

Summary The LSAM proxy cache (LPC) provides a distributed cache system that uses multicast for automated push of popular web pages. Its components –are deployed as a server pump and a distributed filter hierarchy, –automatically track the popularity of web page groups, –automatically manage server push and client subscription. The system has an initial prototype implementation, and it is evolving to mitigate the effects of the deep proxy hierarchy needed for multicast self-organization. LPC reduces overall server and network load, and increases access performance, even for the first access of a page by a client.