CloudOpting - Hackathon Technical Speech
Introduction to technologies used in CloudOpting
Docker
Introduction
The challenge: various services and applications (static website, queue, user DB, API endpoint, analytics DB, web frontend, background workers) have to run on various hardware environments (laptop, QA server, VMs, data center, production server, production cluster, public cloud). Can we migrate applications easily and quickly?
Introduction
Docker is an engine that lets us encapsulate any application and make it portable, self-sufficient and isolated, so it can be manipulated through standard operations and executed consistently on any hardware (laptop, QA server, VMs, data center, production server, production cluster, public cloud). Once an application has been containerised you deal with Docker, and every application is handled the same way: standard administration.
Dev: create it once and run it wherever you want.
Sys: configure it once and run it wherever you want.
Introduction
What is Docker? What are the benefits?
Currently it is the most widespread container technology.
It takes advantage of Linux kernel features such as cgroups, namespaces and libcontainer (which replaced LXC), so containers share part of the host kernel.
It can work on every system where Linux runs (laptops, VMs, cloud...); recently it is also available for Windows.
It is formed by a set of tools designed to make it easier to deal with these containers.
It provides an isolated environment which contains everything needed to run an application.
It provides images which can be easily generated, derived from other images, shared and versioned.
Containers are lighter than a virtual machine.
Introduction
Difference with a virtual machine.
[Diagram: virtual machines stack Infrastructure / Hypervisor / Guest OS / Bins-Libs / App for every VM, whereas Docker containers stack Infrastructure / Host OS / Docker Engine, with the containers (App A, A', B, B', B'', B''') sharing binaries and libraries on top.]
Introduction
Some benefits of using Docker:
Reduction of the number of VMs needed.
Better use of the resources available in the VM (containers use and share resources dynamically): automatic vertical scaling.
Less time needed to deploy, which enables horizontal scaling.
Quick and automatic restart of a container after it falls down.
Independence from the cloud provider.
Lots of images already available, so developers need less time to prepare and develop applications.
Easy to move, deploy, etc. Once you have the container you deal with Docker, not with the application.
Easy deployment, even for complex systems, with docker-compose: this tool lets us describe at container level how we want to deploy everything.
Introduction
Components: an incremental evolution of the platform.
If we need a runtime (a Docker container): images, containers, volumes.
If we need a way to distribute it: Dockerfile, Docker Hub, Docker Registry.
If we need to execute it on different machines: Docker Machine.
If we need to build complex solutions: Docker Compose.
If we need to scale applications or create clusters of applications: Docker Swarm.
Docker - Building blocks of Docker
Docker images - Storage
Dockerfiles: recipes for building images.
Docker Hub: public storage repository (owned by Docker).
  Save: docker push ImageName:tag
  Search: docker search ImageName, or via the web interface
  Download: docker pull ImageName:tag
Docker Registry: private storage.
  Save: docker push RegistryHost:port/ImageName:tag
  Download: docker pull RegistryHost:port/ImageName:tag
docker save / docker load: store an image in a .tar.gz (image persistence with all its layers).
  Save: docker save ImageName > ImageName.tar.gz
  Load: docker load < ImageName.tar.gz
docker export / docker import: store a container's filesystem in a .tar.gz (container persistence, flattened to a single layer).
  Save: docker export container > container.tar.gz
  Load: cat container.tar.gz | docker import - image:tag
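A minimal sketch of the two distribution paths listed above, assuming a private registry at registry.example.com:5000 and an image called myorg/myapp (both names are placeholders):

    # Path 1: push to a private registry (requires network access to it)
    docker tag myorg/myapp:1.0 registry.example.com:5000/myorg/myapp:1.0
    docker push registry.example.com:5000/myorg/myapp:1.0
    # ...and, on another host:
    docker pull registry.example.com:5000/myorg/myapp:1.0

    # Path 2: move the image as a file (keeps all layers, works offline)
    docker save myorg/myapp:1.0 | gzip > myapp.tar.gz
    # copy myapp.tar.gz to the other host, then:
    docker load < myapp.tar.gz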
Docker images - Dockerfile
Main instructions: FROM, MAINTAINER, ENV, ADD, COPY, VOLUME, EXPOSE, USER, WORKDIR, RUN, ENTRYPOINT, CMD, ...
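As an illustration, a minimal Dockerfile sketch using most of the instructions above; the base image, port, user and file names are assumptions, and app.py is supposed to exist in the build context:

    # Parent image the build starts from
    FROM ubuntu:16.04
    MAINTAINER Jane Doe <jane@example.com>

    # Build-time configuration
    ENV APP_HOME=/opt/app
    WORKDIR /opt/app

    # Install dependencies and copy the application in
    RUN apt-get update && apt-get install -y python3
    COPY app.py /opt/app/

    # Runtime contract: exposed port, mount point, user and default command
    EXPOSE 8080
    VOLUME /opt/app/data
    USER nobody
    ENTRYPOINT ["python3"]
    CMD ["app.py"]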
Docker images - Main commands
docker build, docker images, docker rmi, docker tag, docker push, docker pull, docker import, docker export, docker commit, docker load, docker save, ...
Tip: remove all images with docker rmi $(docker images -aq)
Docker images - How to build images
We use the command docker build [OPTIONS] PATH, where PATH is the directory containing the Dockerfile (this directory is called the CONTEXT).
Usual options:
-t, --tag value: usually the nomenclature used is "organization/ImageName:tag"; by default tag=latest.
-f, --file string: name of the Dockerfile which will be used to create the image.
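For instance, assuming the Dockerfile sketched earlier sits in the current directory (image names are illustrative only):

    # Build the image from the current directory (the context), tagging it explicitly
    docker build -t myorg/myapp:1.0 .

    # Build from a differently named Dockerfile kept in a subdirectory
    docker build -t myorg/myapp:dev -f docker/Dockerfile.dev .

    # Check the result
    docker images myorg/myapp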
Docker images - Example
Docker containers - Difference between image and container
A Dockerfile is a recipe which contains the instructions to create an image; usually this is done by modifying a master (parent) image.
An image is like a template, a snapshot of the state of a container. Based on a union file system (UFS), it is built by stacking layers, all of them read-only.
A container is a running image; containers are what our application actually uses. The container adds a final read/write layer on top, and the user only works with this upper layer.
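The read/write layer can be seen directly; a short sketch, assuming an ubuntu:16.04 image is available locally (container and image names are arbitrary):

    # Start a container and modify its filesystem
    docker run --name layer-demo ubuntu:16.04 touch /hello.txt

    # Show what changed in the container's top read/write layer (prints "A /hello.txt")
    docker diff layer-demo

    # Freeze that writable layer into a new read-only image layer
    docker commit layer-demo myorg/ubuntu-hello:1.0

    # The new image has one more layer than its parent
    docker history myorg/ubuntu-hello:1.0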
Docker containers - Main commands
docker create, docker run, docker start, docker stop, docker restart, docker pause, docker unpause, docker kill, docker attach, docker rm, docker ps, docker logs, docker inspect, docker stats, docker exec, ...
Tip: remove all containers together with their volumes with docker rm -v $(docker ps -aq)
Docker containers - Main options of docker run:
--name, --env, --env-file, --publish (-p), --publish-all (-P), --link, --log-driver, --log-opt, --volume, --restart, --entrypoint, --expose, --workdir, ... (a combined example follows below)
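A sketch combining several of these options; the image name, port, path and variable are placeholders:

    # Run detached with a friendly name, an environment variable, a published port,
    # a host-mounted volume and a restart policy
    docker run -d \
      --name webapp \
      --env APP_ENV=production \
      --publish 8080:80 \
      --volume /srv/webapp/data:/data \
      --restart unless-stopped \
      myorg/myapp:1.0

    # Inspect the result
    docker ps
    docker logs webapp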
Docker containers - Networking: port publishing
Containers are connected by default to an internal virtual network; running ifconfig on the host we can see its network interface, named docker0.
Docker allows access to containers from outside only if it is indicated explicitly; for that purpose the ports are mapped through NAT.
To indicate that we wish to publish a port, the following options of docker run can be used:
-p "port_host:port_container"
-P: publishes the ports that were indicated with EXPOSE in the Dockerfile, mapping them to random host ports.
A range of ports can also be indicated, e.g. -p 96-175:100-179. The host port can even be omitted; in this case Docker will choose a random one.
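A few port-publishing variants as a sketch (nginx is used only as a convenient example image):

    # Explicit mapping: host port 8080 -> container port 80
    docker run -d --name web1 -p 8080:80 nginx

    # Publish every EXPOSEd port of the image on random host ports
    docker run -d --name web2 -P nginx

    # Give only the container port and let Docker pick a random host port
    docker run -d --name web3 -p 80 nginx

    # See which host ports were chosen
    docker port web2
    docker port web3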
Docker containers - Networking: links among containers
Links can be created among containers through the --link option of the docker run command: --link="container_name_or_id:alias".
When this option is used, the container we want to connect to must have been created previously.
The link adds the IP of the target container to /etc/hosts of the container being created.
The link also adds some environment variables to the created container (name is the link alias, upper-cased):
name_PORT
name_PORT_num_protocol
name_PORT_num_protocol_ADDR
name_PORT_num_protocol_PORT
name_PORT_num_protocol_PROTO
name_NAME
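A link sketch using the official postgres image as the target (names, alias and password are arbitrary):

    # The target container must exist before anything can be linked to it
    docker run -d --name mydb -e POSTGRES_PASSWORD=secret postgres

    # Link a new container to it under the alias "db" and list the injected variables
    docker run --rm --link mydb:db ubuntu:16.04 env | grep DB_
    # ...prints DB_PORT, DB_PORT_5432_TCP_ADDR, DB_PORT_5432_TCP_PORT, DB_NAME, etc.

    # The alias is also resolvable through /etc/hosts
    docker run --rm --link mydb:db ubuntu:16.04 cat /etc/hosts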
Docker containers - Networking: network types
Bridge mode: the container connects to the internal virtual network (host interface docker0). This is the default option.
Host mode: the container shares the network stack with the host.
Container mode: the container shares the network stack of another, previously created container.
None mode: no network interface is assigned, so the container has no external connectivity.
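The mode is selected with the --net option of docker run; a brief sketch (images and names are placeholders):

    # Default bridge network (equivalent to omitting --net)
    docker run -d --name svc-bridge --net=bridge nginx

    # Share the host's network stack: container port 80 is the host's port 80
    docker run -d --name svc-host --net=host nginx

    # Reuse the network stack of an existing container
    docker run --rm --net=container:svc-bridge ubuntu:16.04 cat /proc/net/dev

    # No network interface at all (only the loopback device is listed)
    docker run --rm --net=none ubuntu:16.04 cat /proc/net/dev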
Docker containers - Volumes: data persistence
Problem: the data generated in a container stays enclosed in that container.
Solution: share files or data with the outside through volumes.
To add a volume we use the option -v "dir_host:dir_container" or --volume="dir_host:dir_container".
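A persistence sketch; the host directory is an assumption:

    # Mount a host directory into the container; writes survive container removal
    docker run --rm -v /srv/demo-data:/data ubuntu:16.04 \
      sh -c 'echo persisted > /data/note.txt'

    # The file is still on the host after the container is gone
    cat /srv/demo-data/note.txt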
Docker containers - Example
Docker Compose - Main instructions
build, image, links, ports, net, command, entrypoint, environment, env_file, expose, volumes, container_name, log_driver, log_opt (e.g. fluentd-address, tag), ...
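A minimal docker-compose.yml sketch using several of these instructions (version 1 syntax of the Compose file; service names, paths and credentials are placeholders):

    web:
      build: ./web                  # image built from ./web/Dockerfile
      ports:
        - "8080:80"
      links:
        - db                        # reachable from "web" under the hostname "db"
      environment:
        - APP_ENV=production
      log_driver: fluentd
      log_opt:
        fluentd-address: "localhost:24224"
        tag: "web"

    db:
      image: postgres
      environment:
        - POSTGRES_PASSWORD=secret
      volumes:
        - /srv/demo/pgdata:/var/lib/postgresql/data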
Docker Compose - Main commands
docker-compose build, docker-compose up, docker-compose down, docker-compose start, docker-compose restart, docker-compose stop, docker-compose pause, docker-compose unpause, docker-compose scale, ...
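Typical usage against a compose file like the sketch above:

    docker-compose up -d          # create and start all services in the background
    docker-compose logs           # aggregate the output of every service
    docker-compose scale web=3    # run three containers for the "web" service
                                  # (only possible if it does not bind a fixed host port)
    docker-compose down           # stop and remove the containers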
Docker Compose - Example
Logs collecting
Docker considers as logs everything the container writes to its standard output, so each service must be set up to print its log information there.
Docker provides several ways to extract logs. Among them, Fluentd offers native integration with Docker through its logging driver.
Fluentd is a piece of software for log collection and processing; it allows us to collect, filter and store the logs of the different services running inside the containers.
In the CloudOpting platform other complementary technologies are used as well: Elasticsearch for log storage and Kibana for log visualisation.
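Routing a container's standard output to a Fluentd collector can be sketched like this (the Fluentd address, tag and image are placeholders):

    # Send the container's stdout/stderr to a Fluentd instance listening on port 24224
    docker run -d --name webapp \
      --log-driver=fluentd \
      --log-opt fluentd-address=localhost:24224 \
      --log-opt tag=docker.webapp \
      myorg/myapp:1.0
    # From there Fluentd can filter the records and forward them to Elasticsearch,
    # where Kibana reads them for visualisation.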
Logs collecting - Time for a demo
Docker - Hands-on code