Kubernetes Analysis: 2 types of containers

Presentation transcript:

Kubernetes Analysis: 2 types of containers

Approach: Reuse existing TOSCA normative node, capability, and relationship types where possible. Model the Kubernetes types first, then model similar container managers (Swarm, etc.) and look for common base types and properties that can be abstracted.

Kubernetes has two kinds of container groupings:
- "Dumb" (no HA, no autoscale) = Pod template (kind: "Pod")
- "Smart" (HA, scaling) = ReplicationController template (kind: "ReplicationController")

kind: "Pod" example:

```yaml
id: redis-master
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    containers:
      - name: master
        image: kubernetes/redis:v1
        cpu: 1000
        ports:
          - containerPort: 6379
        volumeMounts:
          - name: data
            mountPath: /redis-master-data
        env:
          - key: MASTER
            value: "true"
      - name: sentinel
        image: kubernetes/redis:v1
        ports:
          - containerPort: 26379
        env:
          - key: SENTINEL
            value: "true"
    volumes:
      - name: data
        source:
          emptyDir: {}
labels:
  name: redis
  role: master
  redis-sentinel: "true"
```

kind: "ReplicationController" example:

```yaml
id: redis
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 1
  replicaSelector:
    name: redis
  podTemplate:
    manifest:
      version: v1beta1
      containers:
        - name: redis
          image: kubernetes/redis:v1
          cpu: 1000
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /redis-master-data
      volumes:
        - name: data
          source:
            emptyDir: {}
    labels:
      name: redis
labels:
  name: redis
```
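The deck does not show it on this slide, but the two kinds suggest a natural TOSCA split. A minimal, hypothetical sketch (tosca.groups.Placement is the type this deck proposes later; tosca.policies.Scaling is normative in the Simple Profile; template names are invented): the "dumb" Pod becomes a placement group, and the "smart" ReplicationController becomes the same group plus a scaling policy.

```yaml
# Hedged sketch: "dumb" Pod -> placement group;
# "smart" ReplicationController -> same group + scaling policy.
topology_template:
  groups:
    redis_pod:
      type: tosca.groups.Placement        # proposed later in this deck
      members: [ redis_container ]        # the Pod's container template(s)
  policies:
    - redis_controller:
        type: tosca.policies.Scaling      # normative Simple Profile type
        properties:
          min_instances: 1                # mirrors "replicas: 1"
          max_instances: 1
        targets: [ redis_pod ]            # scale the whole group
```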

Kubernetes Analysis: Pod Modeling: TOSCA Type Mapping

A Pod is an aggregate of Docker containers: it requires 1..N homogeneous containers (topologies). Proposed TOSCA types for Kubernetes:

Kubernetes.Pod (derived from tosca.groups.Placement):

```yaml
Kubernetes.Pod:
  derived_from: tosca.groups.Placement
  version: <version_number>
  metadata: <tosca:map(string)>
  description: <description>
  properties: TBD
  attributes: TBD
  # Allow get_property() against targets
  targets: [ tosca.nodes.Container.App.Kubernetes ]
```

Kubernetes.Container (derived from tosca.nodes.Container.App):

```yaml
Kubernetes.Container:
  derived_from: tosca.nodes.Container.App
  metadata: <tosca:map(string)>
  version: <version_number>
  description: <description>
  properties:
    environment: <tosca:map of string>
  requirements:
    - host:   # hosted on kubelets
        type: Container.Runtime.Kubernetes
    - ports:
        capability: EndPoint
        properties: port, ports, etc.
    - volumes:
        capability: Attachment
        relationship: AttachesTo
        properties: location, device
        occurrences: [ 0, UNBOUNDED ]
```

Mapping of the "redis-master" template of the Kubernetes "Pod" type to TOSCA:

```yaml
id: redis-master
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1                      # (non-numeric)
    containers:
      - name: master                      # TOSCA template name
        image: kubernetes/redis:v1        # TOSCA Container.App; create artifact of type image.Docker
        cpu: 1000                         # TOSCA Container capability: num_cpus, cpu_frequency
        ports:                            # TOSCA Endpoint capability
          - containerPort: 6379           # TOSCA Endpoint: port, ports
        volumeMounts:                     # TOSCA Attachment capability
          - name: data
            mountPath: /redis-master-data # TOSCA AttachesTo relationship: location
        env:
          - key: MASTER
            value: "true"                 # passed as environment vars to instance
      - name: sentinel
        image: kubernetes/redis:v1
        ports:
          - containerPort: 26379
        env:
          - key: SENTINEL
            value: "true"                 # passed as env. var.
    volumes:
      - name: data
        source:
          emptyDir: {}
labels:
  name: redis
  role: master
  redis-sentinel: "true"
```

Speaker note: An assumption was made that "cpu" relates to cgroups ("control groups") in Linux. From Wikipedia: one of the design goals of cgroups is to provide a unified interface to many different use cases, from controlling single processes (using nice, for example) to whole operating-system-level virtualization (as provided by OpenVZ, Linux-VServer, or LXC). Cgroups provides:
- Resource limitation: groups can be set to not exceed a configured memory limit, which also includes the file system cache
- Prioritization: some groups may get a larger share of CPU utilization or disk I/O throughput
- Accounting: measures how much of certain resources a system uses, e.g., for billing purposes
- Control: freezing groups of processes, checkpointing, and restarting
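As a usage sketch of the mapping (assuming the Kubernetes.Container type proposed above; the requirement and property spellings are illustrative, and the artifact type is the normative Docker image type), the "master" container might be written as:

```yaml
node_templates:
  master:                                  # Kubernetes "name: master"
    type: Kubernetes.Container             # proposed type, not normative
    properties:
      environment:
        MASTER: "true"                     # env entry passed to the instance
    artifacts:
      my_image:
        file: kubernetes/redis:v1          # "image:" becomes a deployment artifact
        type: tosca.artifacts.Deployment.Image.Container.Docker
    requirements:
      - ports:                             # containerPort: 6379
          capability: tosca.capabilities.Endpoint
      - volumes:
          relationship:
            type: tosca.relationships.AttachesTo
            properties:
              location: /redis-master-data # "mountPath" maps to "location"
```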

Kubernetes Analysis: Pod Modeling: TOSCA Template Mapping: Simple "Group Approach"

Using the types defined on the previous slide, the TOSCA topology template for "redis-master" (the same Pod manifest shown on the previous slide) looks like this:

```yaml
redis-master-pod:               # Kubernetes.Pod
  type: tosca.groups.Placement
  version: 1.0
  metadata:
    name: redis
    role: master
    redis-sentinel: true
  targets: [ master-container, sentinel-container ]
```

Each target has an implied "MemberOf" relationship to the group:

```yaml
master-container:               # Kubernetes.Container
  derived_from: Kubernetes.Container
  metadata: <tosca:map(string)>
  version: <version_number>
  description: <description>
  artifacts:
    kubernetes/redis:v1
  requirements:
    - host:
        num_cpus: 1000 ?
    - port:
        capability: EndPoint
        port: 6379
    - volume:
        capability: Attachment
        relationship: AttachesTo
        properties: location, device
        occurrences: [ 0, UNBOUNDED ]
  interfaces:
    inputs:
      MASTER: true

sentinel-container:             # Kubernetes.Container
  derived_from: Kubernetes.Container
  ...
```

Choice: or use a Docker.Runtime type to allow use of the template on Swarm, etc.?
Issue: the location property is lost, as there is no "AttachesTo" relationship in the topology. Create a new capability type?
Issue: is more than one volume / mount point allowed?
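A condensed, hedged sketch of the "Group Approach" as one complete topology (assuming the Kubernetes.Container and tosca.groups.Placement types from these slides; the artifact and metadata spellings are illustrative):

```yaml
topology_template:
  node_templates:
    master-container:
      type: Kubernetes.Container
      properties:
        environment: { MASTER: "true" }
      artifacts:
        image: { file: kubernetes/redis:v1, type: image.Docker }
    sentinel-container:
      type: Kubernetes.Container
      properties:
        environment: { SENTINEL: "true" }
      artifacts:
        image: { file: kubernetes/redis:v1, type: image.Docker }
  groups:
    redis-master-pod:
      type: tosca.groups.Placement
      members: [ master-container, sentinel-container ]
      metadata:
        name: redis
        role: master
        redis-sentinel: "true"     # Pod labels carried as metadata
```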

NEW Direction Proposal: Membership (MemberOf) direction is wrong for management (group)

- Comment: the direction of the relationship implies orchestration order.
- TBD: ratios may be needed, e.g., 2 "masters" to 3 "sentinels".

TOSCA groups are logical. In the diagram, master-VM and sentinel-VM nodes (occurrences: 0..N) are hosted on my_compute_node (Compute, occurrences: 1..1); each has an implied "MemberOf" relationship to the group (a virtual host container, which can be viewed as a virtual "HostedOn"). One possible reading of the ratio notation appears in the sketch after this slide.

```yaml
type: tosca.groups.Placement
# With ratios, a simple notation example:
sources: { [ master-container, 2 ], [ sentinel-container, 3 ] }
```

TBD: additional requirements on the "host" environment.
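Since the slide leaves the ratio syntax TBD, here is one purely illustrative reading of it as valid YAML (the member/count keynames are assumptions, not proposed by the deck):

```yaml
# Hedged sketch: group members with a per-member count,
# expressing the 2 "masters" : 3 "sentinels" ratio.
groups:
  redis-cluster:
    type: tosca.groups.Placement
    members:
      - member: master-container
        count: 2                  # 2 "masters" ...
      - member: sentinel-container
        count: 3                  # ... to 3 "sentinels"
```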

However: we do not want to "buy into" the Dockerfile as a capability type.

Old style: a Docker capability type that mirrors a Dockerfile:

```yaml
tosca.capabilities.Container.Docker:
  derived_from: tosca.capabilities.Container
  properties:
    version:
      type: list
      required: false
      entry_schema: version
    publish_all:
      type: boolean
      default: false
    publish_ports:
      entry_schema: PortSpec
    expose_ports:
      entry_schema: PortSpec
    volumes:
      entry_schema: string
```

Instead we want to use Endpoints (for ports) and Attachments (for volumes). This allows Docker, Rocket, and other containers to be modeled with other TOSCA nodes (i.e., via ConnectsTo) and to leverage BlockStorage attached to the underlying Compute. TBD: need to show this.
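The slide marks this "TBD: need to show this"; below is a hedged sketch of the endpoint/attachment style it argues for, built from normative Simple Profile types (tosca.nodes.Container.Application, tosca.nodes.BlockStorage, AttachesTo, ConnectsTo). The template names and the "storage"/"connection" requirement names are invented for illustration.

```yaml
node_templates:
  redis_app:
    type: tosca.nodes.Container.Application
    requirements:
      - storage:                         # volume via normative AttachesTo
          node: redis_volume
          relationship:
            type: tosca.relationships.AttachesTo
            properties:
              location: /redis-master-data
  redis_volume:
    type: tosca.nodes.BlockStorage       # Compute-attached block storage
    properties:
      size: 1 GB
  monitor_app:
    type: tosca.nodes.Container.Application
    requirements:
      - connection:                      # ports via Endpoint + ConnectsTo
          node: redis_app
          relationship: tosca.relationships.ConnectsTo
```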

GENERIC IS BETTER: we do not need to define Kubernetes-specific types (use the generic Group and Container.App types).

- A Kubernetes Pod reuses the "Docker" Container.App type, which can now reference other Container.App types like Rocket (rkt).
- Policies (Security, Scaling, Update, etc.) use "AppliesTo" against the group (members), i.e., the targets. We are not using "BindsTo", as that implies coupling to an implementation.
- There is no need for a "Kubernetes" runtime type; just use the real container's built-in runtime requirement (we don't care to model or reference kubelets).
- Homogeneous pods/containers for Kubernetes is still an issue, but this is a current Kubernetes limitation (heterogeneous may be possible in future).

```yaml
tosca.groups.Placement:
  derived_from: tosca.groups.Root
  version: <version_number>
  metadata: <tosca:map(string)>
  description: <description>
  properties: TBD
  attributes: TBD
  # Allow get_property() against targets
  targets: [ Container.App.Docker, Container.App.Rocket, ... ]
```

```yaml
Container.App.Docker:
  derived_from: tosca.nodes.Container.App
  metadata: <tosca:map(string)>
  version: <version_number>
  description: <description>
  properties:
    environment: <tosca:map of string>
  requirements:
    - compute:   # a.k.a. host
        capability: Container.Docker
    - network:   # a.k.a. port(s)
        capability: EndPoint
        properties: port, ports, etc.
    - storage:   # a.k.a. volume/attachment
        capability: Attachment
        relationship: AttachesTo
        properties: location, device
        occurrences: [ 0, UNBOUNDED ]
```

```yaml
# Potential additional attributes on the Container capability
capabilities:
  Container.App:
    attributes:
      response_time:
```
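A usage sketch of the generic shape (assuming the Container.App.Docker type above; swapping the type name to Container.App.Rocket would be the only change for an rkt-managed app; the referenced node names are placeholders):

```yaml
node_templates:
  web_app:
    type: Container.App.Docker            # or Container.App.Rocket
    properties:
      environment: { MODE: "production" } # illustrative env var
    requirements:
      - compute: docker_runtime           # a.k.a. host (placeholder node)
      - network: web_endpoint             # a.k.a. port(s)
      - storage: web_volume               # a.k.a. volume/attachment
```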

TOSCA Policy – entities that compose the Policy (Event, Condition, Action) model

Policy Definition:

```yaml
<policy_name>:
  type: <policy_type_name>
  description: <policy_description>
  properties:
    <property_definitions>
  # allowed targets for policy association
  targets: [ <list_of_valid_target_templates> ]
  triggers:
    <trigger_symbolic_name_1>:
      event: <event_type_name>
      # TODO: Allow a TOSCA node filter here
      # required node (resource) to monitor
      target_filter:
        node: <node_template_name> | <node_type>
        # Used to reference another node related to
        # the node above via a relationship
        requirement: <requirement_name>
        # optional capability within node to monitor
        capability: <capability_name>
      # required clause that compares an attribute
      # within the identified node or capability
      # against some condition
      condition: <constraint_clause>
      action:
        # a) Define new TOSCA normative strategies
        #    per policy type and use them here, OR
        # b) allow domain-specific names
        <operation_name>:   # (no lifecycle)
          # TBD: Do we care about validation of types?
          # If so, we should use a TOSCA Lifecycle type
          description: <optional_description>
          inputs: <list_of_property_assignments>
          implementation: <script> | <service_name>
    <trigger_symbolic_name_2>:
      ...
    <trigger_symbolic_name_n>:
      ...
```

Event Type (new):

```yaml
<event_type_name>:
  derived_from: <parent_event_type>
  version: <version_number>
  description: <policy_description>
```

Target filter grammar:

```yaml
<filter_name>:
  properties:
    - <property_filter_def_1>
    - ...
    - <property_filter_def_n>
  capabilities:
    - <capability_name_or_type_1>:
        - <cap_1_property_filter_def_1>
        - ...
        - <cap_m_property_filter_def_n>
    - ...
    - <capability_name_or_type_n>:
```

Key concepts:
- Event: the name of a normative TOSCA Event Type.
- Condition: described as a constraint on an attribute of the node (or capability) identified by the filter.
- Action: describes either a well-known strategy, or an implementation artifact (e.g., script, service) to invoke, with optional property definitions as inputs (to either choice).
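As a usage sketch of the Event Type grammar: declaring the event that the scaling example later in the deck triggers on. Only tosca.events.resource.utilization appears elsewhere in these slides; the parent name tosca.events.Root and the description are assumptions.

```yaml
# Hedged sketch; "tosca.events.Root" is an assumed parent type name.
tosca.events.resource.utilization:
  derived_from: tosca.events.Root
  version: 1.0
  description: Fired when a monitored resource reports a utilization sample.
```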

Possible TOSCA metamodel and normative type additions

Node Type grammar – allow metadata (tags/labels) for search of the instance model:

```yaml
<node_type_name>:
  derived_from: <parent_node_type_name>
  version: <version_number>
  metadata:
    # allow tags / labels for search of instance model
    type: map of string
  description: <node_type_description>
  properties: <property_definitions>
  attributes: <attribute_definitions>
  requirements:
    - <requirement_definitions>
  capabilities: <capability_definitions>
  interfaces: <interface_definitions>
  artifacts: <artifact_definitions>
```

tosca.capabilities.Container – add a "utilization" attribute that scaling policies can reference:

```yaml
tosca.capabilities.Container:
  derived_from: tosca.capabilities.Root
  properties:
    num_cpus:
      type: integer
      required: false
      constraints:
        - greater_or_equal: 1
    cpu_frequency:
      type: scalar-unit.frequency
    disk_size:
      type: scalar-unit.size
    mem_size:
      type: scalar-unit.size
  attributes:
    utilization:
      description: referenced by scaling policies
      type: # float (percent) | integer (percent) | scalar-percent ?
      required: no ?
      constraints:
        - in_range: [ 0, 100 ]
```
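For context, a usage sketch with the normative Compute node: its "host" capability is of type tosca.capabilities.Container and carries the properties above, so the proposed "utilization" attribute would be readable by the scaling triggers on the next slide. (Only the attribute itself is an addition; the rest is standard Simple Profile usage.)

```yaml
node_templates:
  my_server:
    type: tosca.nodes.Compute
    capabilities:
      host:                      # tosca.capabilities.Container
        properties:
          num_cpus: 2
          cpu_frequency: 2.4 GHz
          mem_size: 4 GB
          disk_size: 10 GB
          # proposed: a "utilization" attribute would live here
```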

TOSCA Policy Mapping – example: Senlin "scaling_out_policy_ceilometer.yaml" using the Kubernetes "redis" example from the earlier slides (and its pod and container). The target is a Kubernetes Pod of the tosca.groups.Placement type.

```yaml
my_scaling_policy:
  type: tosca.policies.Scaling
  properties:
    # normative TOSCA properties for scaling
    min_instances: 1
    max_instances: 10
    default_instances: 3
    increment: 1
  # target the policy at the "Pod"
  targets: [ redis-master-pod ]
  triggers:
    resize_compute:   # symbolic name
      event: tosca.events.resource.utilization
      target_filter:
        node: master-container
        requirement: host
        capability: Container
      condition: utilization greater_than 80%
      action:
        # map to SENLIN::ACTION::RESIZE
        scaleup:   # logical operation name
          inputs:  # optional input parameters
            number: 1
            strategy: BEST_EFFORT
          implementation: <script> | <service_name>
```

Notes:
- The trigger's symbolic name could be used to reference an externalized version; however, that would violate a policy's integrity as a "security document".
- tosca.events.resource.utilization is a TOSCA normative event type (name) that would map to domain-specific names (e.g., OpenStack Ceilometer).
- The target_filter finds the attribute via the topology: navigate to the node (directly or via the requirement name) and optionally the capability name. It describes the node to attach an alarm/alert/event to; i.e., the "node", "requirement", "capability", and "condition" keys are expressed as a descriptive filter.
- The condition is mapped to and registered with the target monitoring service (e.g., Ceilometer). TODO: TOSCA needs a percent (%) data type.
- We combined the Senlin "action" SENLIN::ACTION::RESIZE with the strategy BEST_EFFORT to have one name; optional input parameters are listed under inputs.

Senlin to TOSCA Policy Mapping – scaling_out_policy_ceilometer.yaml

Senlin WIP definition:

```yaml
type: ScalingPolicy
version: 1.0
alarm:
  type: SENLIN::ALARM::CEILOMETER
  properties:
    meter: cpu_util
    op: gt
    threshold: 50
    period: 60
    evaluations: 1
    repeat: True
schedule:
  start_time: "2015-05-07T07:00:00Z"
  end_time: "2015-06-07T07:00:00Z"
handlers:
  - type: webhook
    action: SENLIN::ACTION::RESIZE
    params:
      type: CHANGE_IN_CAPACITY
      number: 1
      strategy: BEST_EFFORT
    credentials:
      user: john
      password: secrete
  - type: email
    addresses:
      - joe@cloud.com
```

TOSCA definition:

```yaml
my_scaling_policy:
  type: tosca.policies.Scaling
  properties:
    # normative TOSCA properties (for any scaling policy)
    min_instances: 1
    max_instances: 10
    default_instances: 3
    increment: 1
  # target the policy at the "Pod"
  targets: [ redis-master-pod ]
  triggers:
    compute_trigger:   # symbolic name
      event: tosca.events.resource.utilization
      target_filter:
        node: master-container
        requirement: host
        capability: Container
      condition: utilization greater_than 80%
      action:
        # map to SENLIN::ACTION::RESIZE
        scaleup:   # logical operation name
          inputs:  # optional input parameters
            number: 1   # should use the "increment" property
            strategy: BEST_EFFORT | LEAST_USED
          implementation: Senlin.resize <webhook name>
```

Mapping notes:
- Type/version: use TOSCA types and versions.
- Alarm: implementation-specific and should not be in the policy itself; it should use intelligent defaults per implementation, or ones set by a user configuration file. TOSCA only needs to find out which node (resource) to place the "alarm" onto and which condition to set into it.
- Schedule: TOSCA could add these fields as optional "properties" on the base tosca.policies.Root type. The related Ceilometer-style fields on the slide:

```yaml
time_constraints:
  - name: a_time_constraint
    start: '* * * * * *'
    duration: 3600
    description: a time constraint
    timezone: 'Asia/Shanghai'
rule:
  meter_name: cpu_util
  comparison_operator: lt
  threshold: 15
  period: 60
  evaluation_periods: 1
  statistic: avg
  query:
    - field: resource_metadata.cluster
      op: eq
      type: ''
      value: cluster1
```

- Handlers: describe these in the "action:" section of the TOSCA policy.
- Credentials: credentials should never be in a policy definition.
- Action: maps to a "well-known" (perhaps normative) strategy name; could use Senlin names (logical operation == strategy name). Also maps to an implementation; this could be a Senlin webhook. One option is to pass the "strategy" into the "resize" function/operation as an input parameter; another is to make the strategy the operation name rather than an input parameter.
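A hedged sketch of the "Schedule" suggestion above: the schedule fields carried as optional properties assumed to be inherited from tosca.policies.Root (the keynames are illustrative; the base type does not define them today).

```yaml
my_scheduled_scaling_policy:
  type: tosca.policies.Scaling
  properties:
    # assumed optional fields inherited from tosca.policies.Root
    schedule:
      start_time: "2015-05-07T07:00:00Z"   # values from the Senlin example
      end_time:   "2015-06-07T07:00:00Z"
```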

Senlin to TOSCA Policy Mapping – placement_rr.yaml

Region and AZ really describe two policies. They are related, but customers want to describe them separately (two different types of logical containers): one is based on security, the other on logical location. They can be combined by the policy service, as they have the same "target(s)", i.e., the container(s) to place on.

Senlin WIP definition:

```yaml
# Sample placement policy doing round-robin
type: senlin.policy.placement
version: 1.0
description: A policy for node placement scheduling.
properties:
  availability_zones:
    # Valid values include:
    # ROUND_ROBIN, WEIGHTED, SOURCE
    strategy: ROUND_ROBIN
    candidates:
      - AZ1
      - AZ2
  regions:
    - RegionOne
    - RegionTwo
```

TOSCA definitions (treat each placement policy as independent):

```yaml
my_zone_placement_policy:
  type: tosca.policies.Placement
  properties:
    container_type: zone
    container_number: N/A   # unused in this use case
    containers: [ AZ1, AZ2 ]
  targets: [ <node/group to be placed> ]
  triggers:
    placement_trigger_1:   # symbolic name
      event: tosca.events.resource.schedule
      target_filter: N/A   # use declared "targets"
      condition: N/A       # trigger based upon event only
      action:
        # map to SENLIN service / strategy
        schedule:   # logical operation name
          strategy: ROUND_ROBIN
          weighting_map: { [ AZ1: 2 ], [ AZ2: 1 ] }
          implementation: Senlin.service.placement

my_region_placement_policy:
  type: tosca.policies.Placement
  properties:
    container_type: region
    containers: [ RegionOne, RegionTwo ]
  targets: [ <node/group to be placed> ]
  triggers:
    placement_trigger_2:
      event: tosca.events.resource.schedule
      action:
        # map to SENLIN service / strategy
        schedule:   # logical operation name
          strategy: ROUND_ROBIN
          implementation: Senlin.service.placement
```
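Since the slide notes the two policies share the same targets and can be combined by the policy service, here is a minimal sketch of both aimed at one group (some_cluster is a placeholder target name):

```yaml
policies:
  - my_zone_placement_policy:
      type: tosca.policies.Placement
      properties:
        container_type: zone
        containers: [ AZ1, AZ2 ]
      targets: [ some_cluster ]     # same target as the region policy
  - my_region_placement_policy:
      type: tosca.policies.Placement
      properties:
        container_type: region
        containers: [ RegionOne, RegionTwo ]
      targets: [ some_cluster ]     # combined by the policy service
```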

Senlin to TOSCA Policy Mapping – load balancing

Load balancing is a TOSCA application requirement, fulfilled by a LoadBalancer node when orchestrated. A Cluster is a specialized TOSCA group with a load-balancing requirement, fulfilled by a LoadBalancer node when realized (see tosca.nodes.LoadBalancer, specification section 5.8.12).

Senlin WIP definition (http://git.openstack.org/cgit/stackforge/senlin/tree/examples/policies/lb_policy.yaml):

```yaml
# Load-balancing policy spec using Neutron LBaaS service
type: senlin.policy.loadbalance
version: 1.0
description: A policy for load-balancing the nodes in a cluster.
properties:
  pool:
    # Protocol used for load balancing
    protocol: HTTP
    # Port on which servers are running on the members
    protocol_port: 80
    # Name or ID of subnet for the port on which members can be connected
    subnet: private-subnet
    # Valid values include: ROUND_ROBIN, LEAST_CONNECTIONS, SOURCE_IP
    # This can be the "algorithm" property in the LB capability
    lb_method: ROUND_ROBIN
    session_persistence:
      # Type of session persistence; valid values include:
      # SOURCE_IP, HTTP_COOKIE, APP_COOKIE
      # ?? Which OpenStack resource uses this, Neutron ??
      type: SOURCE_IP
      # Name of cookie if type is set to APP_COOKIE
      cookie_name: whatever
  vip:
    # Name or ID of subnet on which the VIP address will be allocated
    subnet: public-subnet
    # IP address of the VIP
    # address: <ADDRESS>
    # Max #connections per second allowed for this VIP
    connection_limit: 500   # Need to add this to TOSCA as a property
    # Protocol used for VIP
    # TCP port to listen on
```

TOSCA definitions:

```yaml
some_cluster:
  type: tosca.groups.Placement
  sources: [ member ]
  requirements:
    load_balancer:
      type: tosca.capabilities.LoadBalancer
      occurrences: [ 0, UNBOUNDED ]
    client_endpoint:
      type: tosca.capabilities.Endpoint

tosca.capabilities.LoadBalancer:
  derived_from: tosca.capabilities.Root
  properties:
    algorithm: <string>
    session_type:   # session/cookie stuff could go here if needed
    connection_limit: <integer>

tosca.capabilities.Endpoint:   # see section 5.4.4.1
  derived_from: tosca.capabilities.Root
  properties:
    protocol: <string>
    port: PortDef
    secure: boolean
    url_path: string
    port_name: string
    network_name: PUBLIC | PRIVATE | <name>
    initiator: string
    ports: map of PortDef
```
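A sketch of the fulfillment step the slide describes, using the normative tosca.nodes.LoadBalancer node from the specification (its "client" Endpoint capability and "application" requirement are normative; wiring the balancer to the group declared above is assumed syntax):

```yaml
node_templates:
  my_lb:
    type: tosca.nodes.LoadBalancer
    properties:
      algorithm: ROUND_ROBIN        # Senlin "lb_method"
    capabilities:
      client:
        properties:
          network_name: PUBLIC      # VIP allocated on the public subnet
    requirements:
      - application:                # members routed to by the balancer
          node: some_cluster        # the group above (assumed wiring)
```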

Backup

Software Component 1: SW Component with VM image deployment artifact

redis_master (Kubernetes.Pod): properties TBD; attributes TBD.

master (Container.App.Kubernetes):
- Properties: TBD; Attributes
- Artifacts: kubernetes/redis:v1 (type: image.Docker)
- Requirements: Container.Runtime.Docker (properties TBD), via HostedOn
- Interfaces: Lifecycle.Standard (create, ...)

sentinel (Container.App.Kubernetes):
- Properties: TBD; Attributes
- Artifacts: kubernetes/redis:v1 (type: image.Docker)
- Requirements: Container.Runtime.Docker, via HostedOn
- Interfaces: Lifecycle.Standard (create, ...)

Capabilities (host): Container.Runtime.Docker

Docker containers: wordpress_container and mysql_container (both of type Container.Application.Docker), with images pulled from Docker Hub (repository).

- wordpress_container (Container.Application.Docker)
  - Artifacts: my_image (type: Image.Docker, URI: wordpress, repository: docker_hub)
  - Requirements: Container (Runtime.Docker); Docker.LinksTo the mysql container's Docker.Link capability
- mysql_container (Container.Application.Docker)
  - Artifacts: my_image (type: Image.Docker, URI: mysql, repository: docker_hub)
  - Requirements: Container (Runtime.Docker)
  - Capabilities: Docker.Link
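A hedged sketch of the diagram as TOSCA node templates, using the type, artifact, and relationship names shown above (Container.Application.Docker, Image.Docker, Docker.LinksTo, and Docker.Link are the deck's names, not normative ones; docker_runtime is a placeholder host node):

```yaml
node_templates:
  mysql_container:
    type: Container.Application.Docker
    # exposes the Docker.Link capability targeted by wordpress below
    artifacts:
      my_image:
        file: mysql                  # URI: mysql
        type: Image.Docker
        repository: docker_hub
    requirements:
      - container: docker_runtime    # Runtime.Docker host

  wordpress_container:
    type: Container.Application.Docker
    artifacts:
      my_image:
        file: wordpress              # URI: wordpress
        type: Image.Docker
        repository: docker_hub
    requirements:
      - container: docker_runtime
      - database_link:               # Docker.LinksTo the mysql container
          node: mysql_container
          relationship: Docker.LinksTo
```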