The Next Favorite of the DevOps World

Presentation transcript:

Service Mesh: The Next Favorite of the DevOps World. April 2. Szabó Dávid, Senior Software Engineer, @szabvid

The Way Applications Are Developed and Deployed Has Fundamentally Changed (DevOps, Cloud Native)
Development Process: Waterfall → Agile → DevOps
Application Architecture: Monolithic → SOA → Microservices
Deployment & Packaging: Physical Servers → Virtual Machines → Containers
Application Infrastructure: Server Room → Data Center (IaaS) → Cloud Platform
(Each row evolves over time, left to right.) The first row (Development Process) is the methodology; the rest is the technology part. If we add the growth of the open source community to this, then perhaps only today are we truly in a position to approach the general DevOps goals in the best and easiest way. From here I would single out microservices. I will highlight one aspect throughout the talk: what the biggest challenge is in these environments and what can be done about it…

Introduction – DevOps Drivers: Lead Time. DEV vs. OPS: the "wall of confusion".

Introduction – DevOps Drivers: Lead Time, Amplify Feedback. The DEV vs. OPS "wall of confusion", now with Microservices and Service Mesh added to the picture. Let's look at what challenges we face with microservices.

What does a microservice architecture typically look like?

This is not a Microservice Architecture! (Web Server → App Server → DB Server)

This is Getting There…

But These Are True Microservice Architectures! (Twitter, Amazon Web Services.) So microservices are really great, but in return the network has become complex. Monolith vs. micro: complexity moved from the application into the network!

The Need for Common Features in Service Components: dynamic service discovery, load balancing, health checks. (Diagram: Service A calling instances 1–3 of Service B.) Circuit-breaking, latency-aware load balancing, eventually consistent ("advisory") service discovery, retries, and deadlines.
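
A minimal sketch in Go (not from the talk; names and addresses are made up) of what "dynamic service discovery + load balancing + health checks" means when every service has to implement it itself: pick the next healthy instance of Service B from a discovered list.

```go
// Illustrative only: client-side discovery plus round-robin load balancing
// over healthy instances, one of the "common features" every service would
// otherwise need to embed.
package main

import (
	"fmt"
	"sync/atomic"
)

type Instance struct {
	Addr    string
	Healthy bool // updated by periodic health checks in a real system
}

type Balancer struct {
	instances []Instance
	next      uint64
}

// Pick returns the next healthy instance in round-robin order.
func (b *Balancer) Pick() (string, error) {
	for i := 0; i < len(b.instances); i++ {
		idx := atomic.AddUint64(&b.next, 1) % uint64(len(b.instances))
		if b.instances[idx].Healthy {
			return b.instances[idx].Addr, nil
		}
	}
	return "", fmt.Errorf("no healthy instances")
}

func main() {
	// Hypothetical instances of "Service B" as discovered from a registry.
	b := &Balancer{instances: []Instance{
		{"10.0.0.1:8080", true},
		{"10.0.0.2:8080", false},
		{"10.0.0.3:8080", true},
	}}
	for i := 0; i < 4; i++ {
		addr, _ := b.Pick()
		fmt.Println(addr)
	}
}
```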

The Need for Common Features in Service Components: dynamic service discovery, load balancing, health checks, timeouts, retries. (Diagram: Web → User Auth → App → DB. Web: timeout = 400ms, retries = 3. User Auth: timeout = 400ms, retries = 2, a worst case of 800ms that already blows through Web's 400ms budget! App: timeout = 200ms, retries = 3, up to 600ms against User Auth's 400ms!) Circuit-breaking, latency-aware load balancing, eventually consistent ("advisory") service discovery, retries, and deadlines.
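
A minimal sketch in Go (not from the talk; the URL and values are hypothetical, taken from the slide's example) of the timeout-plus-retries logic each service otherwise reimplements. It also shows why the budgets above clash: three attempts bounded by 200ms each can take roughly 600ms in the worst case, more than the caller's own 400ms timeout.

```go
// Illustrative only: a per-attempt timeout with bounded retries.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// callWithRetries issues the request up to `attempts` times, each bounded by `timeout`.
// Worst-case latency is therefore roughly attempts × timeout.
func callWithRetries(url string, timeout time.Duration, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: timeout} // bounds each attempt end-to-end
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err == nil {
			resp.Body.Close()
			err = errors.New(resp.Status)
		}
		lastErr = err
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// With timeout = 200ms and retries = 3, the caller can wait ~600ms,
	// even though its own caller may only budget 400ms for this hop.
	resp, err := callWithRetries("http://app.local/api", 200*time.Millisecond, 3)
	fmt.Println(resp, err)
}
```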

The Need for Common Features in Service Components: dynamic service discovery, load balancing, health checks, timeouts, retries, TLS termination, monitoring and tracing via rich metrics, fault injection, circuit breakers. (Diagram: Service A calling Service B; instance 1 is marked failed with an X, so traffic goes to instances 2 and 3.) Circuit-breaking, latency-aware load balancing, eventually consistent ("advisory") service discovery, retries, and deadlines.
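
A minimal sketch in Go (illustrative only) of a consecutive-failure circuit breaker, the kind of behaviour a mesh moves out of application code: after a few consecutive errors it fails fast for a cooldown period instead of hammering the sick instance, which is what the X over instance 1 in the diagram represents.

```go
// Illustrative only: a consecutive-failure circuit breaker.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type CircuitBreaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int           // trip after this many consecutive failures
	cooldown    time.Duration // stay open this long before trying again
	openedAt    time.Time
}

var ErrOpen = errors.New("circuit open: failing fast")

func (cb *CircuitBreaker) Call(fn func() error) error {
	cb.mu.Lock()
	if cb.failures >= cb.maxFailures && time.Since(cb.openedAt) < cb.cooldown {
		cb.mu.Unlock()
		return ErrOpen // fail fast instead of piling load on a failing instance
	}
	cb.mu.Unlock()

	err := fn()

	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err != nil {
		cb.failures++
		if cb.failures == cb.maxFailures {
			cb.openedAt = time.Now()
		}
		return err
	}
	cb.failures = 0 // any success closes the breaker again
	return nil
}

func main() {
	cb := &CircuitBreaker{maxFailures: 3, cooldown: 5 * time.Second}
	for i := 0; i < 5; i++ {
		err := cb.Call(func() error { return errors.New("instance 1 is down") })
		fmt.Println(i, err) // after 3 failures the breaker returns ErrOpen
	}
}
```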

What is a Service Mesh? A service mesh is a dedicated infrastructure layer in a microservice environment that consistently manages, monitors and controls the communication between services across the entire application. A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It's responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware.
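
To make "lightweight network proxies deployed alongside application code" concrete, here is a minimal, hypothetical sidecar in Go (nothing like a production proxy; ports and addresses are made up): it listens next to the application, forwards every request to the local app port, and is the one place where retries, metrics, tracing or TLS could be added without touching the application.

```go
// Illustrative only: the data-plane idea in a few lines.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical addresses: the app listens on 8080, the sidecar on 15001.
	app, _ := url.Parse("http://127.0.0.1:8080")
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// This is where a real mesh proxy would emit rich metrics and traces.
		log.Printf("%s %s -> %s in %v", r.Method, r.URL.Path, app.Host, time.Since(start))
	}
	log.Fatal(http.ListenAndServe(":15001", http.HandlerFunc(handler)))
}
```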

Evolution: 1G - Shared Libraries. Examples of such fat libraries: Hystrix @ Netflix, Stubby @ Google, Finagle @ Twitter. Advantages: simple, customizable. Disadvantages of shared libraries: they have to be implemented in multiple languages; if the library changes, the entire service has to be redeployed; too tight involvement of dev teams. (Diagram: every SVC embeds the library, alongside Nginx and the DB.) Stubby is the basis of gRPC; Finagle is the basis of Linkerd.

Evolution: 2G - Linkerd. A service mesh that adds reliability, security, and visibility to cloud native applications. Official CNCF project, originally created by Buoyant Inc. and based on Finagle, so it is written in Scala and runs on the JVM. Data Plane vs. Control Plane: namerd is the control plane that programs the individual data-plane proxies; the Linkerd instances themselves are the data-plane components (proxies).

Per-node vs. Sidecar Model. The per-node model and its disadvantages: it raises security concerns in multi-tenant environments (shared TLS secrets, etc.); it can only be scaled vertically, not horizontally (more memory and CPU to handle more connections); it is not optimized for container workloads. The sidecar model: put a proxy next to every container; this is supported by the Pod abstraction in Kubernetes. Linkerd is considered too heavy for such an environment! Data Plane vs. Control Plane. (Diagram: proxies on Node 1, Node 2 and Node 3 in both layouts.)

Evolution: 3G - Service Meshes. Linkerd2 (Conduit): specifically designed for Kubernetes; the control plane is written in Go; the data plane is written in Rust, fast and lightweight enough for sidecar operation (~5MB container size); it can be deployed service-by-service (it's not an all-or-nothing choice…). Envoy: open source edge and service proxy, designed for cloud-native applications; official CNCF project, originally created by Lyft; written in C++, built on the learnings of solutions such as NGINX, HAProxy, hardware LBs and cloud LBs. Istio: a service mesh control plane which uses Envoy as its data plane; originally created by Google and IBM; written in Go. So is it Linkerd2 or Conduit now? Isn't Linkerd open source? Conduit? Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety, featuring zero-cost abstractions, move semantics, guaranteed memory safety, threads without data races, trait-based generics, pattern matching, type inference, a minimal runtime and efficient C bindings.

Examples: routing. (Diagrams: Service A calls Service B over TCP 8080 at /api/v1/t-shirts GET; a canary split sends 95% of traffic to the current version, Service B1.1, and 5% to the canary, Service B1.2; requests are routed by User-agent, Android to Service B and iPhone to Service C; per-route rules cover /api/v1/t-shirts GET and PUT and /api/v1/jeans * at 1000/day.)
The core component used for traffic management in Istio is Pilot, which manages and configures all the Envoy proxy instances deployed in a particular Istio service mesh. Pilot lets you specify which rules you want to use to route traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and uses this model to let Envoy instances know about the other Envoy instances in the mesh via its discovery service. Each Envoy instance maintains load balancing information based on the information it gets from Pilot and periodic health-checks of other instances in its load-balancing pool, allowing it to intelligently distribute traffic between destination instances while following its specified routing rules. Pilot is responsible for the lifecycle of Envoy instances deployed across the Istio service mesh. Pilot maintains a canonical representation of services in the mesh that is independent of the underlying platform. Platform-specific adapters in Pilot are responsible for populating this canonical model appropriately. For example, the Kubernetes adapter in Pilot implements the necessary controllers to watch the Kubernetes API server for changes to the pod registration information, ingress resources, and third-party resources that store traffic management rules. This data is translated into the canonical representation. An Envoy-specific configuration is then generated based on the canonical representation. Pilot enables service discovery, dynamic updates to load balancing pools and routing tables. You can specify high-level traffic management rules through Pilot's Rule configuration. These rules are translated into low-level configurations and distributed to Envoy instances.
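
A minimal sketch in Go (illustrative only, not Istio's or Envoy's configuration model) of the two routing decisions on the slide: a 95%/5% canary split and routing by the User-Agent header. Service names and percentages are the slide's example values; in a real mesh these rules are written declaratively and pushed by the control plane to the proxies.

```go
// Illustrative only: canary and header-based routing decisions.
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// pickVersion sends 5% of traffic to the canary and 95% to the current version.
func pickVersion() string {
	if rand.Float64() < 0.05 {
		return "service-b-1.2 (canary)"
	}
	return "service-b-1.1 (current)"
}

// pickBackend routes by the User-Agent header: Android to Service B, iPhone to Service C.
func pickBackend(userAgent string) string {
	if strings.Contains(userAgent, "Android") {
		return "service-b"
	}
	if strings.Contains(userAgent, "iPhone") {
		return "service-c"
	}
	return "service-b" // default route
}

func main() {
	fmt.Println(pickVersion(), pickBackend("Mozilla/5.0 (iPhone; ...)"))
}
```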

Examples: visibility