

Kubernetes Analysis: 2 types of containers

Presentation on theme: "Kubernetes Analysis: 2 types of containers"— Presentation transcript:

1 Kubernetes Analysis: 2 types of containers
Approach: Reuse existing TOSCA normative node, capability and relationship types where possible. Model Kubernetes types (for now), then model similar container managers like Swarm, etc., and look for common base types and properties that can be abstracted.

Kubernetes Analysis: 2 types of containers:

“Dumb” (no HA, no autoscale) = Pod template, kind: “Pod” (i.e. the type):

    id: redis-master
    kind: Pod
    apiVersion: v1beta1
    desiredState:
      manifest:
        version: v1beta1
        containers:
        - name: master
          image: kubernetes/redis:v1
          cpu: 1000
          ports:
          - containerPort: 6379
          volumeMounts:
          - name: data
            mountPath: /redis-master-data
          env:
          - key: MASTER
            value: "true"
        - name: sentinel
          image: kubernetes/redis:v1
          ports:
          - containerPort: 26379
          env:
          - key: SENTINEL
            value: "true"
        volumes:
        - name: data
          source:
            emptyDir: {}
    labels:
      name: redis
      role: master
      redis-sentinel: "true"

“Smart” (HA, scaling) = ReplicationController template, kind: “ReplicationController” (i.e. the type):

    id: redis
    kind: ReplicationController
    apiVersion: v1beta1
    desiredState:
      replicas: 1
      replicaSelector:
        name: redis
      podTemplate:
        manifest:
          version: v1beta1
          containers:
          - name: redis
            image: kubernetes/redis:v1
            cpu: 1000
            ports:
            - containerPort: 6379
            volumeMounts:
            - name: data
              mountPath: /redis-master-data
          volumes:
          - name: data
            source:
              emptyDir: {}
        labels:
          name: redis
    labels:
      name: redis
      role: master
      redis-sentinel: "true"
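As a sketch of the stated reuse goal (not from the slides), the “master” container above might be expressed with only TOSCA Simple Profile normative types; the artifact name `my_image` is invented here for illustration:

```yaml
# Hedged sketch: the redis "master" container modeled with TOSCA
# normative types only. Values come from the Pod manifest above;
# the template and artifact names are illustrative assumptions.
node_templates:
  master:
    type: tosca.nodes.Container.Application
    artifacts:
      my_image:
        file: kubernetes/redis:v1
        type: tosca.artifacts.Deployment.Image.Container.Docker
    requirements:
      - host:
          node_filter:
            capabilities:
              - host:
                  properties:
                    # corresponds to the manifest's "cpu" request
                    - num_cpus: { greater_or_equal: 1 }
```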

2 Kubernetes Analysis: Pod Modeling: TOSCA Type mapping
A Pod is an aggregate of Docker Containers: requirements of 1..N homogeneous Container (topologies).

TOSCA Types for Kubernetes — Kubernetes.Pod (derived from tosca.groups.Placement):

    Kubernetes.Pod:
      derived_from: tosca.groups.Placement
      version: <version_number>
      metadata: <tosca:map(string)>
      description: <description>
      properties: TBD
      attributes: TBD
      # Allow get_property() against targets
      targets: [ tosca.nodes.Container.App.Kubernetes ]

“Redis-master” template of the Kubernetes “Pod” type, annotated with its TOSCA mapping:

    id: redis-master
    kind: Pod
    apiVersion: v1beta1
    desiredState:
      manifest:
        version: v1beta1                  # (non-numeric)
        containers:
        - name: master                    # (TOSCA template name)
          image: kubernetes/redis:v1      # (TOSCA Container.App; create artifact of type image.Docker)
          cpu: 1000                       # (TOSCA Container capability; num_cpus, cpu_frequency)
          ports:                          # (TOSCA Endpoint capability)
          - containerPort: 6379           # (TOSCA Endpoint; port, ports)
          volumeMounts:                   # (TOSCA Attachment capability)
          - name: data
            mountPath: /redis-master-data # (TOSCA AttachesTo relationship; location)
          env:
          - key: MASTER
            value: "true"                 # passed as environment vars to instance
        - name: sentinel
          image: kubernetes/redis:v1
          ports:
          - containerPort: 26379
          env:
          - key: SENTINEL
            value: "true"                 # passed as env. var.
        volumes:
        - source:
            emptyDir: {}
    labels:
      name: redis
      role: master
      redis-sentinel: "true"

Kubernetes.Container (derived from tosca.nodes.Container.App):

    Kubernetes.Container:
      derived_from: tosca.nodes.Container.App
      metadata: <tosca:map(string)>
      version: <version_number>
      description: <description>
      properties:
        environment: <tosca:map of string>
      requirements:
      - host:   # hosted on kubelets
          type: Container.Runtime.Kubernetes
      - ports:
          capability: Endpoint
          properties: port, ports, etc.
      - volumes:
          capability: Attachment
          relationship: AttachesTo
          properties: location, device
          occurrences: [0, UNBOUNDED]

Note: an assumption was made that “cpu” relates to cgroups (“Control Groups”) within Linux. From Wikipedia: “One of the design goals of cgroups is to provide a unified interface to many different use cases, from controlling single processes (by using nice, for example) to whole operating-system-level virtualization (as provided by OpenVZ, Linux-VServer or LXC, for example).” Cgroups provides:

Resource limitation: groups can be set to not exceed a configured memory limit, which also includes the file system cache
Prioritization: some groups may get a larger share of CPU utilization or disk I/O throughput
Accounting: measures how much of certain resources a system uses, which may be used, for example, for billing purposes
Control: freezing groups of processes, their checkpointing and restarting
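A template written against the proposed Kubernetes.Container type might then look like the following. This is a sketch, not from the slides: the requirement names (host, ports, volumes) come from the type definition above, while the artifact name and node_filter shape are assumptions:

```yaml
# Hedged sketch: the "master" container as a Kubernetes.Container template.
master:
  type: Kubernetes.Container
  properties:
    environment:
      MASTER: "true"            # from the Pod manifest's env list
  artifacts:
    redis_image:                # hypothetical artifact name
      file: kubernetes/redis:v1
      type: image.Docker
  requirements:
    - host:
        # maps the manifest's "cpu: 1000" onto the Container capability
        node_filter:
          capabilities:
            - Container:
                properties:
                  - num_cpus: { greater_or_equal: 1 }
    - ports:
        # maps "containerPort: 6379" onto the Endpoint capability
        properties:
          port: 6379
    - volumes:
        # maps the "data" volumeMount; location comes from mountPath
        relationship: AttachesTo
        properties:
          location: /redis-master-data
```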

3 Kubernetes Analysis: Pod Modeling: TOSCA Template Mapping: Simple “Group Approach”:
Using the Types defined on the previous slide, the TOSCA topology template for “redis-master” looks like this (the slide shows the Kubernetes “Pod” template alongside, with an implied “InvitesTo” relationship to each container):

    redis-master-pod:
      type: tosca.groups.Placement   # Kubernetes.Pod
      version: 1.0
      metadata:
        name: redis
        role: master
        redis-sentinel: true
      targets: [ master-container, sentinel-container ]

    master-container:
      derived_from: Kubernetes.Container
      metadata: <tosca:map(string)>
      version: <version_number>
      description: <description>
      artifacts: kubernetes/redis:v1
      properties:
      requirements:
      - host:
          num_cpus: 1000 ?
      - port:
          capability: Endpoint
          port: 6379
      - volume:
          capability: Attachment
          relationship: AttachesTo
          properties: location, device
          occurrences: [0, UNBOUNDED]
      interfaces:
        inputs:
          MASTER: true

    sentinel-container:
      derived_from: Kubernetes.Container
      ...

Choice: or use a Docker.Runtime type to allow use of the template on Swarm, etc.?
Issue: the “location” property is lost, as there is no “AttachesTo” relationship in the topology. Create a new Capability Type?
Issue: is more than one volume / mount point allowed?

4 Ratios may be needed: 2 “masters” to 3 “sentinels”
NEW Direction Proposal: the Membership (MemberOf) direction is wrong for management (group).
Comment: the direction of the relationship implies orchestration order.
TBD: ratios may be needed: 2 “masters” to 3 “sentinels”.

Diagram: master-VM (occurrences: 0, N) and sentinel-VM (occurrences: 1, 1) nodes of type VM (TOSCA groups are logical; each derived_from: Kubernetes.Container ...), each with an implied “MemberOf” relationship to my_compute_node, a virtual host container which can be viewed as a virtual “HostedOn”:

    my_compute_node:
      type: tosca.groups.Placement   # Compute
      # With ratios, simple notation example:
      sources: { [ master-container, 2 ], [ sentinel-container, 3 ] }

TBD: additional requirements on the “host” environment.

5 However: We do not want to “buy into” Docker file as a Capability Type:
Old style: a Docker capability type that mirrors a Dockerfile. Instead we want to use Endpoints (for ports) and Attachments (for volumes). This allows Docker, Rocket and other containers to be modeled with other TOSCA nodes (i.e., via ConnectsTo) and to leverage BlockStorage attached to the underlying Compute.

    tosca.capabilities.Container.Docker:
      derived_from: tosca.capabilities.Container
      properties:
        version:
          type: list
          required: false
          entry_schema: version
        publish_all:
          type: boolean
          default: false
        publish_ports:
          type: list
          entry_schema: PortSpec
        expose_ports:
          type: list
          entry_schema: PortSpec
        volumes:
          type: list
          entry_schema: string

TBD: need to show this.
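A sketch of the Endpoint/Attachment alternative argued for above might look like this (illustrative only; the template and node names are invented here, not taken from the slides):

```yaml
# Hedged sketch: ports and volumes modeled via normative Endpoint and
# Attachment capabilities instead of a Dockerfile-mirroring capability.
my_container:
  type: tosca.nodes.Container.App
  requirements:
    - network:
        capability: tosca.capabilities.Endpoint
        properties:
          port: 80                 # replaces Dockerfile-style publish_ports
    - storage:
        capability: tosca.capabilities.Attachment
        relationship: tosca.relationships.AttachesTo
        properties:
          location: /var/lib/data  # replaces Dockerfile-style volumes
        node: my_block_storage     # ordinary TOSCA BlockStorage node

my_block_storage:
  type: tosca.nodes.BlockStorage
  properties:
    size: 10 GB
```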

6 tosca.groups.Placement
GENERIC IS BETTER: We do not need to define Kubernetes-specific Types (use the generic Group and Container.App types): a Kubernetes Pod reuses the “Docker” Container.App type, which can now reference other Container.App types like Rocket (Rkt).

    tosca.groups.Placement:
      derived_from: tosca.groups.Root
      version: <version_number>
      metadata: <tosca:map(string)>
      description: <description>
      properties: TBD
      attributes: TBD
      # Allow get_property() against targets
      targets: [ Container.App.Docker, Container.App.Rocket, ... ]

Policies (Security, Scaling, Update, etc.) have an “AppliesTo” association with the group (members), i.e., its targets. Not using “BindsTo”, as that implies coupling to an implementation.

    Container.App.Docker:
      derived_from: tosca.nodes.Container.App
      metadata: <tosca:map(string)>
      version: <version_number>
      description: <description>
      properties:
        environment: <tosca:map of string>
      requirements:
      - compute:   # a.k.a. host
          capability: Container.Docker
          type: Container.Runtime.Kubernetes
      - network:   # a.k.a. port(s)
          capability: Endpoint
          properties: port, ports, etc.
      - storage:   # a.k.a. volume/attachment
          capability: Attachment
          relationship: AttachesTo
          properties: location, device
          occurrences: [0, UNBOUNDED]

A Container.App.Rocket type would be defined the same way.

There is no need for a “Kubernetes” Runtime type; just use the real Container’s built-in runtime requirement (we don’t care to model or reference Kubelets). Homogeneous Pods/Containers for Kubernetes is still an issue, but this is a current Kubernetes limitation (heterogeneous is possible in the future).

    # Potential additional attributes on the Container capability
    capabilities:
      Container.App:
        attribute:
          response_time:

7 TOSCA Policy – Entities that compose Policy (Event, Condition, Action) model
Policy Definition:

    <policy_name>:
      type: <policy_type_name>
      description: <policy_description>
      properties:
        <property_definitions>
      # allowed targets for policy association
      targets: [ <list_of_valid_target_templates> ]
      triggers:
        <trigger_symbolic_name_1>:
          event: <event_type_name>
          # TODO: allow a TOSCA node filter here
          # required node (resource) to monitor
          target_filter:
            node: <node_template_name> | <node_type>
            # used to reference another node related to
            # the node above via a relationship
            requirement: <requirement_name>
            # optional capability within the node to monitor
            capability: <capability_name>
          # required clause that compares an attribute
          # of the identified node or capability
          # against some condition
          condition: <constraint_clause>
          action:
            # a) define new TOSCA normative strategies
            #    per policy type and use them here, OR
            # b) allow domain-specific names
            <operation_name>:   # (no lifecycle)
              # TBD: do we care about validation of types?
              # If so, we should use a TOSCA Lifecycle type
              description: <optional_description>
              inputs: <list_of_property_assignments>
              implementation: <script> | <service_name>
        <trigger_symbolic_name_2>:
          ...
        <trigger_symbolic_name_n>:
          ...

Event Type (new):

    <event_type_name>:
      derived_from: <parent_event_type>
      version: <version_number>
      description: <policy_description>

Filter:

    <filter_name>:
      properties:
      - <property_filter_def_1>
      - ...
      - <property_filter_def_n>
      capabilities:
      - <capability_name_or_type_1>:
        - <cap_1_property_filter_def_1>
        - ...
        - <cap_m_property_filter_def_n>
      - ...
      - <capability_name_or_type_n>:

Event: name of a normative TOSCA Event Type.
Condition: described as a constraint on an attribute of the node (or capability) identified by the filter.
Action: describes either a well-known strategy, or an implementation artifact (e.g., scripts, a service) to invoke, with optional property definitions as inputs (to either choice).
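As a sketch of how the new Event Type grammar might be used, a normative utilization event (the name tosca.events.resource.utilization appears on a later slide) could be declared as follows; the parent type name tosca.events.Root is an assumption introduced here:

```yaml
# Hedged sketch of an Event Type declaration per the grammar above.
tosca.events.resource.utilization:
  derived_from: tosca.events.Root   # assumed root event type
  version: 1.0
  description: >
    Raised when a monitored resource attribute (e.g., the Container
    capability's "utilization" attribute) crosses a policy condition.
```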

8 Possible TOSCA Metamodel and Normative Type additions
Node Types, Relationship Types — allow metadata (tags/labels, as a map of string) for search of the instance model:

    <node_type_name>:
      derived_from: <parent_node_type_name>
      version: <version_number>
      metadata: # map of string; tags/labels for search of instance model
      description: <node_type_description>
      properties: <property_definitions>
      attributes: <attribute_definitions>
      requirements:
      - <requirement_definitions>
      capabilities: <capability_definitions>
      interfaces: <interface_definitions>
      artifacts: <artifact_definitions>

tosca.capabilities.Container:

    tosca.capabilities.Container:
      derived_from: tosca.capabilities.Root
      properties:
        num_cpus:
          type: integer
          required: false
          constraints:
          - greater_or_equal: 1
        cpu_frequency:
          type: scalar-unit.frequency
        disk_size:
          type: scalar-unit.size
        mem_size:
          type: scalar-unit.size
      attributes:
        utilization:
          description: referenced by scaling policies
          type: # float (percent) | integer (percent) | scalar-percent ?
          required: no ?
          constraints:
          - in_range: [ 0, 100 ]
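A node template exercising the proposed Container capability properties might look like this sketch (the template name and values are illustrative, not from the slides):

```yaml
# Hedged sketch: a Compute node whose "host" capability is of type
# tosca.capabilities.Container, using the properties proposed above.
my_host:
  type: tosca.nodes.Compute
  capabilities:
    host:
      properties:
        num_cpus: 4              # satisfies the greater_or_equal: 1 constraint
        cpu_frequency: 2.4 GHz   # scalar-unit.frequency
        disk_size: 20 GB         # scalar-unit.size
        mem_size: 8 GB
# A scaling policy could then reference the runtime attribute, e.g.:
#   condition: utilization greater_than 80%   # in_range [0, 100]
```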

9 TOSCA Policy Definition
TOSCA Policy Mapping — example: Senlin “scaling_out_policy_ceilometer.yaml” using the Kubernetes “redis” example from earlier slides (and its pod and container). The target is a Kubernetes Pod of the tosca.groups.Placement type.

    my_scaling_policy:
      type: tosca.policies.scaling
      properties:
        # normative TOSCA properties for scaling
        min_instances: 1
        max_instances: 10
        default_instances: 3
        increment: 1
      # target the policy at the “Pod”
      targets: [ redis-master-pod ]
      triggers:
        resize_compute:   # symbolic name
          event: tosca.events.resource.utilization
          target_filter:
            node: master-container
            requirement: host
            capability: Container
          condition: utilization greater_than 80%
          action:
            # map to SENLIN::ACTION::RESIZE
            scaleup:   # logical operation name
              inputs:   # optional input parameters
                number: 1
                strategy: BEST_EFFORT
              implementation: <script> | <service_name>

Notes:
Trigger symbolic name: could be used to reference an externalized version; however, this would violate a Policy’s integrity as a “security document”.
event: a TOSCA normative event type (name) that would map to domain-specific names (e.g., OpenStack Ceilometer).
target_filter: describes the NODE to attach an alarm / alert / event to. Find the attribute via the topology: navigate to the node (directly or via the requirement name) and optionally the capability name. Together, the “node”, “requirement”, “capability” and “condition” keys are expressed as a descriptive “filter”.
condition: the condition to map and register with the target monitoring service (e.g., Ceilometer). TODO: TOSCA needs a percentage data type.
action: we combined the Senlin “Action” SENLIN::ACTION::RESIZE with the strategy BEST_EFFORT to have one name; optional input parameters are listed here.

10 Senlin to TOSCA Policy Mapping – scaling_out_policy_ceilometer.yaml
Senlin WIP definition:

    type: ScalingPolicy
    version: 1.0
    alarm:
      type: SENLIN::ALARM::CEILOMETER
      properties:
        meter: cpu_util
        op: gt
        threshold: 50
        period: 60
        evaluations: 1
        repeat: True
    schedule:
      start_time: " T07:00:00Z"
      end_time: " T07:00:00Z"
    handlers:
    - type: webhook
      action: SENLIN::ACTION::RESIZE
      params:
        type: CHANGE_IN_CAPACITY
        number: 1
        strategy: BEST_EFFORT
      credentials:
        user: john
        password: secrete
    - type:
      addresses:
      -

    time_constraints:
    - name: a_time_constraint
      start: '* * * * * *'
      duration: 3600
      description: a time constraint
      timezone: 'Asia/Shanghai'
    rule:
      meter_name: cpu_util
      comparison_operator: lt
      threshold: 15
      period: 60
      evaluation_periods: 1
      statistic: avg
      query:
      - field: resource_metadata.cluster
        op: eq
        type: ''
        value: cluster1

TOSCA definition (normative properties for any scaling policy):

    my_scaling_policy:
      type: tosca.policies.scaling
      properties:
        # normative TOSCA properties for scaling
        min_instances: 1
        max_instances: 10
        default_instances: 3
        increment: 1
      # target the policy at the “Pod”
      targets: [ redis-master-pod ]
      triggers:
        compute_trigger:   # symbolic name
          event: tosca.events.resource.utilization
          target_filter:
            node: master-container
            requirement: host
            capability: Container
          condition: utilization greater_than 80%
          action:
            # map to SENLIN::ACTION::RESIZE
            scaleup:   # logical operation name
              inputs:   # optional input parameters
                number: 1   # should use the “increment” property
                strategy: BEST_EFFORT | LEAST_USED
              implementation: Senlin.resize <webhook name>

Mapping notes:
Type / version: use TOSCA types and versions.
Alarm: implementation-specific and should not be in the policy itself; it should use intelligent defaults for different implementations, or ones set by a user configuration file. The TOSCA normative approach: find out what node (resource) to place the “alarm” onto, and what condition to set into the alarm on that node.
Schedule: TOSCA could add these fields as optional “properties” on the base tosca.policies.Root type.
Handler: describe these in the “action:” section of the TOSCA policy.
Credentials: credentials should never be in a policy definition.
Action: maps to a “well-known” (perhaps normative) strategy name; could use Senlin names (logical operation == strategy name). Maps to an implementation; could be a Senlin webhook. One option is to pass the “strategy” into the “resize” function/operation as an input; another is to use the strategy as the operation name rather than an input parameter.

11 Region and AZ really describe 2 Policies
Senlin to TOSCA Policy Mapping – placement_rr.yaml

Senlin WIP definition:

    # Sample placement policy doing round-robin
    type: senlin.policy.placement
    version: 1.0
    description: A policy for node placement scheduling.
    properties:
      availability_zones:
        # Valid values include: ROUND_ROBIN, WEIGHTED, SOURCE
        strategy: ROUND_ROBIN
        candidates:
        - AZ1
        - AZ2
      regions:
      - RegionOne
      - RegionTwo

TOSCA definitions (treat each placement policy as independent):

    my_zone_placement_policy:
      type: tosca.policies.placement
      properties:
        container_type: zone
        container_number: N/A   # unused in this use case
        containers: [ AZ1, AZ2 ]
      targets: [ <node/group to be placed> ]
      triggers:
        placement_trigger_1:   # symbolic name
          event: tosca.events.resource.schedule
          target_filter: N/A   # use declared “targets”
          condition: N/A       # trigger based upon event only
          action:
            # map to Senlin service / strategy
            schedule:   # logical operation name
              strategy: ROUND_ROBIN
              weighting_map: { [ AZ1: 2 ], [ AZ2: 1 ] }
              implementation: Senlin.service.placement

    my_region_placement_policy:
      type: tosca.policies.placement
      properties:
        container_type: region
        containers: [ regionOne, regionTwo ]
      targets: [ <node/group to be placed> ]
      triggers:
        placement_trigger_2:
          event: tosca.events.resource.schedule
          action:
            # map to Senlin service / strategy
            schedule:   # logical operation name
              strategy: ROUND_ROBIN
              implementation: Senlin.service.placement

Region and AZ really describe 2 policies. They are related, but customers want to describe them separately (2 different types of logical containers): one is based upon security, the other upon logical location. They can be combined by the policy service, as they have the same “target(s)” — the container(s) to place on.
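Since both placement policies share the same targets, applying them together in a topology might look like the following sketch (the group name web_cluster and member name web_server are invented here for illustration):

```yaml
# Hedged sketch: zone and region placement policies combined by sharing
# the same target group, per the "2 policies" argument above.
topology_template:
  groups:
    web_cluster:
      type: tosca.groups.Placement
      members: [ web_server ]   # the node/group to be placed
  policies:
    - my_zone_placement_policy:
        type: tosca.policies.placement
        properties:
          container_type: zone
          containers: [ AZ1, AZ2 ]
        targets: [ web_cluster ]
    - my_region_placement_policy:
        type: tosca.policies.placement
        properties:
          container_type: region
          containers: [ regionOne, regionTwo ]
        targets: [ web_cluster ]
# The policy service can combine the two because they name the same target.
```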

12 tosca.groups.Placement
Senlin to TOSCA Policy Mapping – load balancing is a TOSCA application requirement, fulfilled by a LoadBalancer node when orchestrated. A Cluster is a specialized TOSCA Group with a LoadBalancing requirement, fulfilled by a tosca.nodes.LoadBalancer node when realized.

Senlin specification section:

    # Load-balancing policy spec using Neutron LBaaS service
    type: senlin.policy.loadbalance
    version: 1.0
    description: A policy for load-balancing the nodes in a cluster.
    properties:
      pool:
        # Protocol used for load balancing
        protocol: HTTP
        # Port on which servers are running on the members
        protocol_port: 80
        # Name or ID of subnet for the port on which members can be connected
        subnet: private-subnet
        # Valid values include: ROUND_ROBIN, LEAST_CONNECTIONS, SOURCE_IP
        # This can be the “algorithm” property in the LB capability
        lb_method: ROUND_ROBIN
        session_persistence:
          # Type of session persistence; valid values include:
          # SOURCE_IP, HTTP_COOKIE, APP_COOKIE
          # ?? What OpenStack resource uses this, neutron ??
          type: SOURCE_IP
          # Name of cookie if type is set to APP_COOKIE
          cookie_name: whatever
      vip:
        # Name or ID of subnet on which the VIP address will be allocated
        subnet: public-subnet
        # IP address of the VIP
        # address: <ADDRESS>
        # Max # connections per second allowed for this VIP
        connection_limit: 500   # Need to add this to TOSCA as a property
        # Protocol used for the VIP
        # TCP port to listen on

TOSCA definitions:

    some_cluster:
      type: tosca.groups.Placement
      sources: [ member ]
      requirements:
        load_balancer:
          type: tosca.capabilities.LoadBalancer
          occurrences: [0, UNBOUNDED]
        client_endpoint:
          type: tosca.capabilities.Endpoint

    tosca.capabilities.LoadBalancer:
      derived_from: tosca.capabilities.Root
      properties:
        algorithm: <string>
        session_type: # session/cookie stuff could go here if needed
        connection_limit: <integer>

    tosca.capabilities.Endpoint:   # see specification section
      derived_from: tosca.capabilities.Root
      properties:
        protocol: <string>
        port: PortDef
        secure: boolean
        url_path: string
        port_name: string
        network_name: PUBLIC | PRIVATE | <name>
        initiator: string
        ports: map of PortDef

13 Backup

14 Container.App.Kubernetes
Software Component 1: SW Component with VM image deployment artifact.

Diagram: a “redis_master” group of type Kubernetes.Pod (properties: TBD; attributes: TBD) containing two Container.App.Kubernetes node templates, “master” and “sentinel”. Each template has:
properties: TBD; attributes: TBD
artifacts: kubernetes/redis:v1 (type: image.Docker)
requirements: Container.Runtime.Docker, fulfilled via a HostedOn relationship to a Container.Runtime.Docker capability
interfaces: Lifecycle.Standard (create)

15 Container.Application.Docker
Diagram: Docker Hub (repository) hosts the ‘mysql’ and ‘wordpress’ Docker images, referenced by two Container.Application.Docker node templates. Each template’s Container requirement is fulfilled by a Runtime.Docker capability, and wordpress_container connects to mysql_container via a Docker.LinksTo relationship (Docker.Link capability).

    wordpress_container:
      artifacts:
      - my_image:
          type: Image.Docker
          URI: wordpress
          repository: docker_hub

    mysql_container:
      artifacts:
      - my_image:
          type: Image.Docker
          URI: mysql
          repository: docker_hub

