
Kubernetes-worker charm

This charm deploys a container runtime and the Kubernetes worker applications: kubelet and kube-proxy.

For this charm to be useful, it should be deployed alongside its companion charm, kubernetes-master, and related to an SDN plugin and a container runtime such as containerd.

This charm is part of the Charmed Kubernetes bundle, which can be deployed with a single command:

juju deploy charmed-kubernetes

For more information about Charmed Kubernetes, see the overview documentation.

Scale out

To add compute capacity to your Kubernetes workers, you can run juju add-unit kubernetes-worker to scale the cluster. The new units will automatically join any related kubernetes-master and enlist themselves as ready once the deployment is complete.
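For example, to add three more worker units:

  juju add-unit kubernetes-worker -n 3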

Snap Configuration

The Kubernetes resources used by this charm are snap packages. When not specified during deployment, these resources come from the public store. By default, the snapd daemon will refresh all snaps installed from the store four (4) times per day. A charm configuration option is provided for operators to control this refresh frequency.

Examples:

  # refresh kubernetes-worker snaps every Tuesday
  juju config kubernetes-worker snapd_refresh="tue"

  # refresh snaps at 11pm on the last (5th) Friday of the month
  juju config kubernetes-worker snapd_refresh="fri5,23:00"

  # delay the refresh as long as possible
  juju config kubernetes-worker snapd_refresh="max"

  # use the system default refresh timer
  juju config kubernetes-worker snapd_refresh=""

For more information, see the snap documentation.

Configuration

This charm supports a number of configuration options for setting up a Kubernetes cluster that works in your environment; they are detailed in the table below.

For some specific Kubernetes service configuration tasks, see the section on configuring K8s services below. See also the kubernetes-master charm configuration for other settings relating to Kubernetes services.

| name | type | Default | Description |
|------|------|---------|-------------|
| channel | string | 1.22/stable | Snap channel to install Kubernetes worker services from |
| default-backend-image | string | auto | Docker image to use for the default backend. "auto" will select an image based on architecture. |
| ingress | boolean | True | See notes |
| ingress-default-ssl-certificate | string | | See notes |
| ingress-default-ssl-key | string | | See notes |
| ingress-ssl-chain-completion | boolean | False | See notes |
| ingress-ssl-passthrough | boolean | False | Enable SSL passthrough on the ingress server. This allows passing the SSL connection through to the workloads rather than terminating it at the ingress controller. |
| ingress-use-forwarded-headers | boolean | False | See notes |
| kubelet-extra-args | string | | See notes |
| kubelet-extra-config | string | {} | See notes |
| labels | string | | Labels can be used to organize and to select subsets of nodes in the cluster. Declare node labels in key=value format, separated by spaces. |
| nagios_context | string | juju | See notes |
| nagios_servicegroups | string | | A comma-separated list of Nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup. |
| nginx-image | string | auto | See notes |
| proxy-extra-args | string | | See notes |
| require-manual-upgrade | boolean | True | When true, worker services will not be upgraded until the user triggers it manually by running the upgrade action. |
| snapd_refresh | string | max | See notes |
| sysctl | string | See notes | See notes |


ingress

Description:

Deploy the default http backend and ingress controller to handle ingress requests.

Set to false if deploying an alternate ingress controller. Note that you may need to manually open ports 80 and 443 on the nodes:

  juju run --application kubernetes-worker -- open-port 80 && open-port 443

Back to table

ingress-default-ssl-certificate

Description:

SSL certificate to be used by the default HTTPS server. If either ingress-default-ssl-certificate or ingress-default-ssl-key is not provided, the ingress controller will use a self-signed certificate. This parameter is specific to nginx-ingress-controller.

Back to table

ingress-default-ssl-key

Description:

Private key to be used by the default HTTPS server. If either ingress-default-ssl-certificate or ingress-default-ssl-key is not provided, the ingress controller will use a self-signed certificate. This parameter is specific to nginx-ingress-controller.
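As a sketch, both options might be set together like this. The file names ingress.crt and ingress.key are hypothetical, and the base64 encoding is an assumption; check the charm's config description for the exact format it expects:

  # ingress.crt / ingress.key are placeholder file names
  # assumption: the charm expects base64-encoded PEM data
  juju config kubernetes-worker \
    ingress-default-ssl-certificate="$(base64 -w0 ingress.crt)" \
    ingress-default-ssl-key="$(base64 -w0 ingress.key)"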

Back to table

ingress-ssl-chain-completion

Description:

Enable chain completion for TLS certificates used by the nginx ingress controller. Set this to true if you would like the ingress controller to attempt auto-retrieval of intermediate certificates. The default (false) is recommended for all production Kubernetes installations and for any environment which does not have outbound Internet access.

Back to table

ingress-use-forwarded-headers

Description:

If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the Internet, or is behind an L3/packet-based load balancer that doesn't alter the source IP in the packets.
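For example, to enable this behaviour on a running cluster:

  juju config kubernetes-worker ingress-use-forwarded-headers=true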

Reference: https://github.com/kubernetes/ingress-nginx/docs/user-guide/nginx-configuration/configmap.md#use-forwarded-headers

Back to table

kubelet-extra-args

Description:

Space-separated list of flags and key=value pairs that will be passed as arguments to kubelet. For example, a value like this:

  runtime-config=batch/v2alpha1=true profiling=true

will result in kubelet being run with the following options:

  --runtime-config=batch/v2alpha1=true --profiling=true
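For example, to set those options on a deployed cluster:

  juju config kubernetes-worker kubelet-extra-args="runtime-config=batch/v2alpha1=true profiling=true"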

Note: As of Kubernetes 1.10.x, many of kubelet's command-line arguments have been deprecated and can be set with kubelet-extra-config instead.

Back to table

kubelet-extra-config

Description:

Extra configuration to be passed to kubelet. Any values specified in this config will be merged into a KubeletConfiguration file that is passed to the kubelet service via the --config flag. This can be used to override values provided by the charm.

Requires Kubernetes 1.10+.

The value for this config must be a YAML mapping that can be safely merged with a KubeletConfiguration file. For example:

  {evictionHard: {memory.available: 200Mi}}
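For example, to apply that mapping with juju config (single quotes keep the YAML intact):

  juju config kubernetes-worker kubelet-extra-config='{evictionHard: {memory.available: 200Mi}}'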

For more information about KubeletConfiguration, see upstream docs: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/

Back to table

nagios_context

Description:

Used by the nrpe subordinate charms. A string that will be prepended to the instance name to set the host name in Nagios. For instance, the host name would be something like:

    juju-myservice-0

If you're running multiple environments with the same services in them, this allows you to differentiate between them.

Back to table

nginx-image

Description:

Docker image to use for the nginx ingress controller. Using "auto" will select an image based on architecture.

Example: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.32.0

Back to table

proxy-extra-args

Description:

Space-separated list of flags and key=value pairs that will be passed as arguments to kube-proxy. For example, a value like this:

  runtime-config=batch/v2alpha1=true profiling=true

will result in kube-proxy being run with the following options:

  --runtime-config=batch/v2alpha1=true --profiling=true

Back to table

snapd_refresh

Description:

How often snapd handles updates for installed snaps. Setting an empty string will use the system default of checking four times per day. Set to "max" to delay the refresh as long as possible. You may also set a custom string as described in the 'refresh.timer' section here: https://forum.snapcraft.io/t/system-options/87

Back to table

sysctl

Default:

  { net.ipv4.conf.all.forwarding: 1,
    net.ipv4.neigh.default.gc_thresh1: 128,
    net.ipv4.neigh.default.gc_thresh2: 28672,
    net.ipv4.neigh.default.gc_thresh3: 32768,
    net.ipv6.neigh.default.gc_thresh1: 128,
    net.ipv6.neigh.default.gc_thresh2: 28672,
    net.ipv6.neigh.default.gc_thresh3: 32768,
    fs.inotify.max_user_instances: 8192,
    fs.inotify.max_user_watches: 1048576,
    kernel.panic: 10,
    kernel.panic_on_oops: 1,
    vm.overcommit_memory: 1 }

Back to table

Description:

YAML-formatted mapping of sysctl values, e.g. '{kernel.pid_max: 4194303}'. Note that kube-proxy handles the conntrack settings; the proper way to alter them is to use the proxy-extra-args config, e.g.:

  juju config kubernetes-master proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"
  juju config kubernetes-worker proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"

The proxy-extra-args conntrack-min and conntrack-max-per-core can be set to 0 to ignore kube-proxy's settings and use the sysctl settings instead. Note the fundamental difference between conntrack-max-per-core, which kube-proxy scales by the number of CPU cores, and nf_conntrack_max, which is a single global value.

Back to table

Configuring K8s services

IPVS (IP Virtual Server)

This requires configuration of both the kubernetes-master and kubernetes-worker charms. Please see the configuration section on the kubernetes-master page.

Configuring kubelet

Each worker runs the node agent, kubelet, with a set of arguments and configuration set by this charm. In some cases it may be desirable to add options or arguments, for which the charm provides two mechanisms:

kubelet-extra-args for command-line options.
kubelet-extra-config for configuration.

The definitive reference for kubelet is the upstream documentation.

HugePages

HugePages are a standard memory management feature of the Linux kernel to decrease overhead for processes which consume large amounts of memory.

Kubernetes includes support for using HugePages with pods (see the upstream documentation).

To use HugePages in your pods with Charmed Kubernetes, it is necessary to update the configuration for the workers:

  1. Fetch the current 'sysctl' configuration from the worker:
    juju config kubernetes-worker sysctl
    
    This should return a string of config options, e.g.:
    { net.ipv4.conf.all.forwarding : 1, net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, fs.inotify.max_user_instances : 8192, fs.inotify.max_user_watches: 1048576 }
    
  2. The config option for HugePages is vm.nr_hugepages. To add this configuration, you should append it to the string and set the whole configuration. For example, for 100 2Mi pages:
    juju config kubernetes-worker sysctl="{ net.ipv4.conf.all.forwarding : 1, net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, fs.inotify.max_user_instances : 8192, fs.inotify.max_user_watches: 1048576, vm.nr_hugepages: 100}"
    
  3. HugePages can now be consumed via container-level resource requirements using the resource name hugepages-<size>.

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hugepages-test
    spec:
      containers:
      - image: ubuntu:latest
        command:
        - sleep
        - inf
        name: example
        volumeMounts:
        - mountPath: /hugepages
          name: hugepage
        resources:
          limits:
            hugepages-2Mi: 100Mi
            memory: 100Mi
          requests:
            memory: 100Mi
      volumes:
      - name: hugepage
        emptyDir:
          medium: HugePages
    

    Huge page usage in a namespace can be managed with ResourceQuota, similar to other compute resources.

  4. To verify, you can exec into the pod and check /proc/meminfo:
    kubectl exec hugepages-test -- cat /proc/meminfo | grep HugePages_
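    With vm.nr_hugepages set to 100 as above, and before any pods have reserved pages, the output should look something like this (exact values will vary with actual usage):

    HugePages_Total:     100
    HugePages_Free:      100
    HugePages_Rsvd:        0
    HugePages_Surp:        0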
    

Actions

You can run an action on a unit with the following command:

juju run-action kubernetes-worker/0 ACTION [parameters] [--wait]

cis-benchmark

Run the CIS Kubernetes Benchmark against snap-based components.

This action has the following parameters:


apply

Apply remediations to address benchmark failures. The default, 'none', will not attempt to fix any reported failures. Set to 'conservative' to resolve simple failures. Set to 'dangerous' to attempt to resolve all failures. Note: Applying any remediation may result in an unusable cluster.

Default: none


config

Archive containing configuration files to use when running kube-bench. The default value is known to be compatible with snap components. When using a custom URL, append '#<hash_type>=<checksum>' to verify the archive integrity when downloaded.

Default: https://github.com/charmed-kubernetes/kube-bench-config/archive/cis-1.5.zip#sha1=cb8e78712ee5bfeab87d0ed7c139a83e88915530


release

Set the kube-bench release to run. If set to 'upstream', the action will compile and use a local kube-bench binary built from the master branch of the upstream repository: https://github.com/aquasecurity/kube-bench. This value may also be set to an accessible archive containing a pre-built kube-bench binary, for example: https://github.com/aquasecurity/kube-bench/releases/download/v0.0.34/kube-bench_0.0.34_linux_amd64.tar.gz#sha256=f96d1fcfb84b18324f1299db074d41ef324a25be5b944e79619ad1a079fca077

Default: https://github.com/aquasecurity/kube-bench/releases/download/v0.2.3/kube-bench_0.2.3_linux_amd64.tar.gz#sha256=429a1db271689aafec009434ded1dea07a6685fee85a1deea638097c8512d548
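For example, to run the benchmark on the first worker unit with conservative remediations and wait for the results:

  juju run-action kubernetes-worker/0 cis-benchmark apply=conservative --wait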



debug

Collect debug data.


microbot

Launch microbot containers.

This action has the following parameters:


delete

Remove the microbot deployment, service, and ingress if True.

Default: False


replicas

Number of microbots to launch in Kubernetes.

Default: 3
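For example, to launch five microbot replicas:

  juju run-action kubernetes-worker/0 microbot replicas=5 --wait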



pause

Mark the node as unschedulable to prevent new pods from arriving, and evict existing pods.

This action has the following parameters:


delete-local-data

Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).

Default: False


force

Continue even if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.

Default: False
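For example, to drain a node even if some pods are using emptyDir volumes:

  juju run-action kubernetes-worker/0 pause delete-local-data=true --wait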



registry

Create a private Docker registry. DEPRECATED: See https://ubuntu.com/kubernetes/docs/docker-registry

This action has the following parameters:


delete

Remove the registry replication controller, service, and ingress if True.

Default: False


domain

The domain name for the registry. Must match the Common Name of the certificate.

Default:


htpasswd

base64-encoded htpasswd file used for authentication.

Default:


htpasswd-plain

base64-encoded plaintext version of the htpasswd file, needed by Docker daemons to authenticate to the registry.

Default:


ingress

Create an Ingress resource for the registry (or delete the resource object if "delete" is True).

Default: False


tlscert

base64-encoded TLS certificate for the registry. The Common Name must match the domain name of the registry.

Default:


tlskey

base64-encoded TLS key for the registry.

Default:



resume

Mark the node as schedulable.


upgrade

Upgrade the Kubernetes snaps.
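For example, with require-manual-upgrade at its default of true, moving to a new snap channel is a two-step process (the 1.23/stable channel here is illustrative):

  # point the charm at the new channel
  juju config kubernetes-worker channel=1.23/stable
  # then trigger the upgrade on each unit
  juju run-action kubernetes-worker/0 upgrade --wait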