diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index c8c0833ef5ab0..9d7ed4d95e1c6 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -470,7 +470,7 @@ traffic, you can configure rules to block any health check requests that originate from outside your cluster. {{< /caution >}} -{{% codenew file="priority-and-fairness/health-for-strangers.yaml" %}} +{{% code file="priority-and-fairness/health-for-strangers.yaml" %}} ## Diagnostics diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index 000bce73eaf06..09df51c3833fc 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -39,7 +39,7 @@ Kubernetes captures logs from each container in a running Pod. This example uses a manifest for a `Pod` with a container that writes text to the standard output stream, once per second. -{{% codenew file="debug/counter-pod.yaml" %}} +{{% code file="debug/counter-pod.yaml" %}} To run this pod, use the following command: @@ -255,7 +255,7 @@ For example, a pod runs a single container, and the container writes to two different log files using two different formats. Here's a manifest for the Pod: -{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}} +{{% code file="admin/logging/two-files-counter-pod.yaml" %}} It is not recommended to write log entries with different formats to the same log stream, even if you managed to redirect both components to the `stdout` stream of @@ -265,7 +265,7 @@ the logs to its own `stdout` stream. Here's a manifest for a pod that has two sidecar containers: -{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}} +{{% code file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}} Now when you run this pod, you can access each log stream separately by running the following commands: @@ -332,7 +332,7 @@ Here are two example manifests that you can use to implement a sidecar container The first manifest contains a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd. -{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}} +{{% code file="admin/logging/fluentd-sidecar-config.yaml" %}} {{< note >}} In the sample configurations, you can replace fluentd with any logging agent, reading @@ -342,7 +342,7 @@ from any source inside an application container. The second manifest describes a pod that has a sidecar container running fluentd. The pod mounts a volume where fluentd can pick up its configuration data. 
-{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}} +{{% code file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}} ### Exposing logs directly from the application diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index d7132fe62c5ac..913891e470ee7 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -22,7 +22,7 @@ Many applications require multiple resources to be created, such as a Deployment Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example: -{{% codenew file="application/nginx-app.yaml" %}} +{{% code file="application/nginx-app.yaml" %}} Multiple resources can be created the same way as a single resource: diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md index 9c923f907abfd..4e5c6f460ba6c 100644 --- a/content/en/docs/concepts/configuration/configmap.md +++ b/content/en/docs/concepts/configuration/configmap.md @@ -111,7 +111,7 @@ technique also lets you access a ConfigMap in a different namespace. Here's an example Pod that uses values from `game-demo` to configure a Pod: -{{% codenew file="configmap/configure-pod.yaml" %}} +{{% code file="configmap/configure-pod.yaml" %}} A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. diff --git a/content/en/docs/concepts/overview/working-with-objects/_index.md b/content/en/docs/concepts/overview/working-with-objects/_index.md index 45cabfbf03eca..a77192202f85e 100644 --- a/content/en/docs/concepts/overview/working-with-objects/_index.md +++ b/content/en/docs/concepts/overview/working-with-objects/_index.md @@ -77,7 +77,7 @@ request. 
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment: -{{% codenew file="application/deployment.yaml" %}} +{{% code file="application/deployment.yaml" %}} One way to create a Deployment using a `.yaml` file like the one above is to use the [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index a11ff2a663a0b..b4afce14f63fe 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -54,12 +54,12 @@ A `LimitRange` does **not** check the consistency of the default values it appli For example, you define a `LimitRange` with this manifest: -{{% codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" %}} +{{% code file="concepts/policy/limit-range/problematic-limit-range.yaml" %}} along with a Pod that declares a CPU resource request of `700m`, but not a limit: -{{% codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}} +{{% code file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}} then that Pod will not be scheduled, failing with an error similar to: @@ -69,7 +69,7 @@ Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resour If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place: -{{% codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}} +{{% code file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}} ## Example resource constraints diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index e404152e6bdac..290197dc9da0a 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -687,7 +687,7 @@ plugins: Then, create a resource quota object in the `kube-system` namespace: -{{% codenew file="policy/priority-class-resourcequota.yaml" %}} +{{% code file="policy/priority-class-resourcequota.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index bbabc4dfb6168..15438f4f26ad2 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -122,7 +122,7 @@ your Pod spec. For example, consider the following Pod spec: -{{% codenew file="pods/pod-with-node-affinity.yaml" %}} +{{% code file="pods/pod-with-node-affinity.yaml" %}} In this example, the following rules apply: @@ -172,7 +172,7 @@ scheduling decision for the Pod. For example, consider the following Pod spec: -{{% codenew file="pods/pod-with-affinity-anti-affinity.yaml" %}} +{{% code file="pods/pod-with-affinity-anti-affinity.yaml" %}} If there are two possible nodes that match the `preferredDuringSchedulingIgnoredDuringExecution` rule, one with the @@ -288,7 +288,7 @@ spec. Consider the following Pod spec: -{{% codenew file="pods/pod-with-pod-affinity.yaml" %}} +{{% code file="pods/pod-with-pod-affinity.yaml" %}} This example defines one Pod affinity rule and one Pod anti-affinity rule. 
The Pod affinity rule uses the "hard" diff --git a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md index ee6fa1e07ff73..908db48fd86f8 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md @@ -31,7 +31,7 @@ each schedulingGate can be removed in arbitrary order, but addition of a new sch To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this: -{{% codenew file="pods/pod-with-scheduling-gates.yaml" %}} +{{% code file="pods/pod-with-scheduling-gates.yaml" %}} After the Pod's creation, you can check its state using: @@ -61,7 +61,7 @@ The output is: To inform scheduler this Pod is ready for scheduling, you can remove its `schedulingGates` entirely by re-applying a modified manifest: -{{% codenew file="pods/pod-without-scheduling-gates.yaml" %}} +{{% code file="pods/pod-without-scheduling-gates.yaml" %}} You can check if the `schedulingGates` is cleared by running: diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index 15969da5fa375..64154de2257e0 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -64,7 +64,7 @@ tolerations: Here's an example of a pod that uses tolerations: -{{% codenew file="pods/pod-with-toleration.yaml" %}} +{{% code file="pods/pod-with-toleration.yaml" %}} The default value for `operator` is `Equal`. diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md index 6a5e9510d007a..3a6c42fe6f421 100644 --- a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -284,7 +284,7 @@ graph BT If you want an incoming Pod to be evenly spread with existing Pods across zones, you can use a manifest similar to: -{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}} +{{% code file="pods/topology-spread-constraints/one-constraint.yaml" %}} From that manifest, `topologyKey: zone` implies the even distribution will only be applied to nodes that are labelled `zone: ` (nodes that don't have a `zone` label @@ -377,7 +377,7 @@ graph BT You can combine two topology spread constraints to control the spread of Pods both by node and by zone: -{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}} +{{% code file="pods/topology-spread-constraints/two-constraints.yaml" %}} In this case, to match the first constraint, the incoming Pod can only be placed onto nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be @@ -466,7 +466,7 @@ and you know that zone `C` must be excluded. In this case, you can compose a man as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`. Similarly, Kubernetes also respects `spec.nodeSelector`. 
-{{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}} +{{% code file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}} ## Implicit conventions diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 3ecb88686cd1b..8101806efff77 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -300,7 +300,7 @@ Below are the properties a user can specify in the `dnsConfig` field: The following is an example Pod with custom DNS settings: -{{% codenew file="service/networking/custom-dns.yaml" %}} +{{% code file="service/networking/custom-dns.yaml" %}} When the Pod above is created, the container `test` gets the following contents in its `/etc/resolv.conf` file: diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index 3048363de696a..b9e943daa542d 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -135,7 +135,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors will behave in this same way.) - {{% codenew file="service/networking/dual-stack-default-svc.yaml" %}} + {{% code file="service/networking/dual-stack-default-svc.yaml" %}} 1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 @@ -151,14 +151,14 @@ These examples demonstrate the behavior of various dual-stack Service configurat * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`. - {{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}} + {{% code file="service/networking/dual-stack-preferred-svc.yaml" %}} 1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is the first element in the `.spec.ClusterIPs` array, overriding the default. - {{% codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}} + {{% code file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}} #### Dual-stack defaults on existing Services @@ -171,7 +171,7 @@ dual-stack.) `.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP will be stored in `.spec.ClusterIPs`. - {{% codenew file="service/networking/dual-stack-default-svc.yaml" %}} + {{% code file="service/networking/dual-stack-default-svc.yaml" %}} You can validate this behavior by using kubectl to inspect an existing service. @@ -211,7 +211,7 @@ dual-stack.) `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`. - {{% codenew file="service/networking/dual-stack-default-svc.yaml" %}} + {{% code file="service/networking/dual-stack-default-svc.yaml" %}} You can validate this behavior by using kubectl to inspect an existing headless service with selectors. 
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index b69a3a4ffd0f8..7f2809ecc4844 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -73,7 +73,7 @@ Make sure you review your Ingress controller's documentation to understand the c A minimal Ingress resource example: -{{% codenew file="service/networking/minimal-ingress.yaml" %}} +{{% code file="service/networking/minimal-ingress.yaml" %}} An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields. The name of an Ingress object must be a valid @@ -140,7 +140,7 @@ setting with Service, and will fail validation if both are specified. A common usage for a `Resource` backend is to ingress data to an object storage backend with static assets. -{{% codenew file="service/networking/ingress-resource-backend.yaml" %}} +{{% code file="service/networking/ingress-resource-backend.yaml" %}} After creating the Ingress above, you can view it with the following command: @@ -229,7 +229,7 @@ equal to the suffix of the wildcard rule. | `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label | | `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label | -{{% codenew file="service/networking/ingress-wildcard-host.yaml" %}} +{{% code file="service/networking/ingress-wildcard-host.yaml" %}} ## Ingress class @@ -238,7 +238,7 @@ configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class. -{{% codenew file="service/networking/external-lb.yaml" %}} +{{% code file="service/networking/external-lb.yaml" %}} The `.spec.parameters` field of an IngressClass lets you reference another resource that provides configuration related to that IngressClass. @@ -369,7 +369,7 @@ configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the `--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the default `IngressClass`: -{{% codenew file="service/networking/default-ingressclass.yaml" %}} +{{% code file="service/networking/default-ingressclass.yaml" %}} ## Types of Ingress @@ -379,7 +379,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service (see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a *default backend* with no rules. -{{% codenew file="service/networking/test-ingress.yaml" %}} +{{% code file="service/networking/test-ingress.yaml" %}} If you create it using `kubectl apply -f` you should be able to view the state of the Ingress you added: @@ -411,7 +411,7 @@ down to a minimum. For example, a setup like: It would require an Ingress such as: -{{% codenew file="service/networking/simple-fanout-example.yaml" %}} +{{% code file="service/networking/simple-fanout-example.yaml" %}} When you create the Ingress with `kubectl apply -f`: @@ -456,7 +456,7 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at The following Ingress tells the backing load balancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4). 
-{{% codenew file="service/networking/name-virtual-host-ingress.yaml" %}} +{{% code file="service/networking/name-virtual-host-ingress.yaml" %}} If you create an Ingress resource without any hosts defined in the rules, then any web traffic to the IP address of your Ingress controller can be matched without a name based @@ -467,7 +467,7 @@ requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic whose request host header doesn't match `first.bar.com` and `second.bar.com` to `service3`. -{{% codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}} +{{% code file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}} ### TLS @@ -505,7 +505,7 @@ certificates would have to be issued for all the possible sub-domains. Therefore section. {{< /note >}} -{{% codenew file="service/networking/tls-example-ingress.yaml" %}} +{{% code file="service/networking/tls-example-ingress.yaml" %}} {{< note >}} There is a gap between TLS features supported by various Ingress diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index dbb827176847c..6dacf58ba5cdc 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -83,7 +83,7 @@ reference for a full definition of the resource. An example NetworkPolicy might look like this: -{{% codenew file="service/networking/networkpolicy.yaml" %}} +{{% code file="service/networking/networkpolicy.yaml" %}} {{< note >}} POSTing this to the API server for your cluster will have no effect unless your chosen networking @@ -212,7 +212,7 @@ in that namespace. You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods. -{{% codenew file="service/networking/network-policy-default-deny-ingress.yaml" %}} +{{% code file="service/networking/network-policy-default-deny-ingress.yaml" %}} This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod. @@ -222,7 +222,7 @@ for ingress. This policy does not affect isolation for egress from any pod. If you want to allow all incoming connections to all pods in a namespace, you can create a policy that explicitly allows that. -{{% codenew file="service/networking/network-policy-allow-all-ingress.yaml" %}} +{{% code file="service/networking/network-policy-allow-all-ingress.yaml" %}} With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod. @@ -232,7 +232,7 @@ those pods to be denied. This policy has no effect on isolation for egress from You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods. -{{% codenew file="service/networking/network-policy-default-deny-egress.yaml" %}} +{{% code file="service/networking/network-policy-default-deny-egress.yaml" %}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the ingress isolation behavior of any pod. @@ -242,7 +242,7 @@ egress traffic. 
This policy does not change the ingress isolation behavior of an If you want to allow all connections from all pods in a namespace, you can create a policy that explicitly allows all outgoing connections from pods in that namespace. -{{% codenew file="service/networking/network-policy-allow-all-egress.yaml" %}} +{{% code file="service/networking/network-policy-allow-all-egress.yaml" %}} With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod. @@ -252,7 +252,7 @@ those pods to be denied. This policy has no effect on isolation for ingress to You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace. -{{% codenew file="service/networking/network-policy-default-deny-all.yaml" %}} +{{% code file="service/networking/network-policy-default-deny-all.yaml" %}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic. @@ -280,7 +280,7 @@ When writing a NetworkPolicy, you can target a range of ports instead of a singl This is achievable with the usage of the `endPort` field, as the following example: -{{% codenew file="service/networking/networkpolicy-multiport-egress.yaml" %}} +{{% code file="service/networking/networkpolicy-multiport-egress.yaml" %}} The above rule allows any Pod with label `role=db` on the namespace `default` to communicate with any IP within the range `10.0.0.0/24` over TCP, provided that the target diff --git a/content/en/docs/concepts/storage/projected-volumes.md b/content/en/docs/concepts/storage/projected-volumes.md index 175e24ab2f2c9..a0575dd08e75e 100644 --- a/content/en/docs/concepts/storage/projected-volumes.md +++ b/content/en/docs/concepts/storage/projected-volumes.md @@ -30,11 +30,11 @@ see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all ### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap} -{{% codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} +{{% code file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} ### Example configuration: secrets with a non-default permission mode set {#example-configuration-secrets-nondefault-permission-mode} -{{% codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} +{{% code file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} Each projected volume source is listed in the spec under `sources`. The parameters are nearly the same with two exceptions: @@ -49,7 +49,7 @@ parameters are nearly the same with two exceptions: You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) into a Pod at a specified path. For example: -{{% codenew file="pods/storage/projected-service-account-token.yaml" %}} +{{% code file="pods/storage/projected-service-account-token.yaml" %}} The example Pod has a projected volume containing the injected service account token. 
Containers in this Pod can use that token to access the Kubernetes API diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index ea90dd742f642..6ef899d225dbe 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -41,7 +41,7 @@ length of a Job name is no more than 63 characters. This example CronJob manifest prints the current time and a hello message every minute: -{{% codenew file="application/job/cronjob.yaml" %}} +{{% code file="application/job/cronjob.yaml" %}} ([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/) takes you through this example in more detail). diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index 01336994819ee..65fb4e0f4c953 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -38,7 +38,7 @@ different flags and/or different memory and cpu requests for different hardware You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image: -{{% codenew file="controllers/daemonset.yaml" %}} +{{% code file="controllers/daemonset.yaml" %}} Create a DaemonSet based on the YAML file: diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 2ce9add75a607..56bb579a78170 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -46,7 +46,7 @@ for a container. The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods: -{{% codenew file="controllers/nginx-deployment.yaml" %}} +{{% code file="controllers/nginx-deployment.yaml" %}} In this example: diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 32034f8386f2e..2ccbaefdb1d68 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -39,7 +39,7 @@ see [CronJob](/docs/concepts/workloads/controllers/cron-jobs/). Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10s to complete. -{{% codenew file="controllers/job.yaml" %}} +{{% code file="controllers/job.yaml" %}} You can run the example with this command: @@ -402,7 +402,7 @@ container exit codes and the Pod conditions. Here is a manifest for a Job that defines a `podFailurePolicy`: -{{% codenew file="/controllers/job-pod-failure-policy-example.yaml" %}} +{{% code file="/controllers/job-pod-failure-policy-example.yaml" %}} In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the `main` container fails with the 42 exit diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 1fefa8b86a744..a8290977918d2 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -56,7 +56,7 @@ use a Deployment instead, and define your application in the spec section. 
## Example -{{% codenew file="controllers/frontend.yaml" %}} +{{% code file="controllers/frontend.yaml" %}} Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages. @@ -166,7 +166,7 @@ to owning Pods specified by its template-- it can acquire other Pods in the mann Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest: -{{% codenew file="pods/pod-rs.yaml" %}} +{{% code file="pods/pod-rs.yaml" %}} As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it. @@ -381,7 +381,7 @@ A ReplicaSet can also be a target for a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting the ReplicaSet we created in the previous example. -{{% codenew file="controllers/hpa-rs.yaml" %}} +{{% code file="controllers/hpa-rs.yaml" %}} Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index 53571ceb731ef..3885fefdc070a 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -44,7 +44,7 @@ service, such as web servers. This example ReplicationController config runs three copies of the nginx web server. -{{% codenew file="controllers/replication.yaml" %}} +{{% code file="controllers/replication.yaml" %}} Run the example job by downloading the example file and then running this command: diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index 09359505b6b3a..ba4b9d47fc01d 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -46,7 +46,7 @@ A Pod is similar to a set of containers with shared namespaces and shared filesy The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`. 
-{{% codenew file="pods/simple-pod.yaml" %}} +{{% code file="pods/simple-pod.yaml" %}} To create the Pod shown above, run the following command: ```shell diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index f09cbc4d45b02..66b368d48a39a 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -78,7 +78,7 @@ To allow creating a CertificateSigningRequest and retrieving any CertificateSign For example: -{{% codenew file="access/certificate-signing-request/clusterrole-create.yaml" %}} +{{% code file="access/certificate-signing-request/clusterrole-create.yaml" %}} To allow approving a CertificateSigningRequest: @@ -88,7 +88,7 @@ To allow approving a CertificateSigningRequest: For example: -{{% codenew file="access/certificate-signing-request/clusterrole-approve.yaml" %}} +{{% code file="access/certificate-signing-request/clusterrole-approve.yaml" %}} To allow signing a CertificateSigningRequest: @@ -96,7 +96,7 @@ To allow signing a CertificateSigningRequest: * Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status` * Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `/` or `/*` -{{% codenew file="access/certificate-signing-request/clusterrole-sign.yaml" %}} +{{% code file="access/certificate-signing-request/clusterrole-sign.yaml" %}} ## Signers diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index ff43e47477525..0b31fb92af101 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -1240,7 +1240,7 @@ guidance for restricting this access in existing clusters. If you want new clusters to retain this level of access in the aggregated roles, you can create the following ClusterRole: -{{% codenew file="access/endpoints-aggregated.yaml" %}} +{{% code file="access/endpoints-aggregated.yaml" %}} ## Upgrading from ABAC diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md index d70a37826cad7..391b279d823e5 100644 --- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md @@ -265,7 +265,7 @@ updates that Secret with that generated token data. Here is a sample manifest for such a Secret: -{{% codenew file="secret/serviceaccount/mysecretname.yaml" %}} +{{% code file="secret/serviceaccount/mysecretname.yaml" %}} To create a Secret based on this example, run: diff --git a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md index cc9aeee7320bc..baef99c2e0442 100644 --- a/content/en/docs/reference/access-authn-authz/validating-admission-policy.md +++ b/content/en/docs/reference/access-authn-authz/validating-admission-policy.md @@ -417,7 +417,7 @@ resource to be evaluated. 
Here is an example illustrating a few different uses for match conditions: -{{% codenew file="access/validating-admission-policy-match-conditions.yaml" %}} +{{% code file="access/validating-admission-policy-match-conditions.yaml" %}} Match conditions have access to the same CEL variables as validation expressions. @@ -435,7 +435,7 @@ the request is determined as follows: For example, here is an admission policy with an audit annotation: -{{% codenew file="access/validating-admission-policy-audit-annotation.yaml" %}} +{{% code file="access/validating-admission-policy-audit-annotation.yaml" %}} When an API request is validated with this admission policy, the resulting audit event will look like: @@ -472,7 +472,7 @@ message expression must evaluate to a string. For example, to better inform the user of the reason of denial when the policy refers to a parameter, we can have the following validation: -{{% codenew file="access/deployment-replicas-policy.yaml" %}} +{{% code file="access/deployment-replicas-policy.yaml" %}} After creating a params object that limits the replicas to 3 and setting up the binding, when we try to create a deployment with 5 replicas, we will receive the following message. diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index 08e3f949d0d39..d081f56f25789 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -332,7 +332,7 @@ resource and its accompanying controller. Say a user has defined deployment with `replicas` set to the desired value: -{{% codenew file="application/ssa/nginx-deployment.yaml" %}} +{{% code file="application/ssa/nginx-deployment.yaml" %}} And the user has created the deployment using Server-Side Apply like so: @@ -396,7 +396,7 @@ process than it sometimes does. At this point the user may remove the `replicas` field from their configuration. -{{% codenew file="application/ssa/nginx-deployment-no-replicas.yaml" %}} +{{% code file="application/ssa/nginx-deployment-no-replicas.yaml" %}} Note that whenever the HPA controller sets the `replicas` field to a new value, the temporary field manager will no longer own any fields and will be diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index 02166277430e6..206bed8c07d0a 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -23,7 +23,7 @@ In this exercise, you create a Pod that runs two Containers. The two containers share a Volume that they can use to communicate. Here is the configuration file for the Pod: -{{% codenew file="pods/two-container-pod.yaml" %}} +{{% code file="pods/two-container-pod.yaml" %}} In the configuration file, you can see that the Pod has a Volume named `shared-data`. diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md index bc1c34070b52c..e89a94982c0f8 100644 --- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -36,7 +36,7 @@ require a supported environment. 
If your environment does not support this, you The backend is a simple hello greeter microservice. Here is the configuration file for the backend Deployment: -{{% codenew file="service/access/backend-deployment.yaml" %}} +{{% code file="service/access/backend-deployment.yaml" %}} Create the backend Deployment: @@ -97,7 +97,7 @@ the Pods that it routes traffic to. First, explore the Service configuration file: -{{% codenew file="service/access/backend-service.yaml" %}} +{{% code file="service/access/backend-service.yaml" %}} In the configuration file, you can see that the Service, named `hello` routes traffic to Pods that have the labels `app: hello` and `tier: backend`. @@ -125,7 +125,7 @@ configuration file. The Pods in the frontend Deployment run a nginx image that is configured to proxy requests to the `hello` backend Service. Here is the nginx configuration file: -{{% codenew file="service/access/frontend-nginx.conf" %}} +{{% code file="service/access/frontend-nginx.conf" %}} Similar to the backend, the frontend has a Deployment and a Service. An important difference to notice between the backend and frontend services, is that the @@ -133,9 +133,9 @@ configuration for the frontend Service has `type: LoadBalancer`, which means tha the Service uses a load balancer provisioned by your cloud provider and will be accessible from outside the cluster. -{{% codenew file="service/access/frontend-service.yaml" %}} +{{% code file="service/access/frontend-service.yaml" %}} -{{% codenew file="service/access/frontend-deployment.yaml" %}} +{{% code file="service/access/frontend-deployment.yaml" %}} Create the frontend Deployment and Service: diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 8e38e1af083be..1e72da60f9141 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -126,7 +126,7 @@ The following manifest defines an Ingress that sends traffic to your Service via 1. Create `example-ingress.yaml` from the following file: - {{% codenew file="service/networking/example-ingress.yaml" %}} + {{% code file="service/networking/example-ingress.yaml" %}} 1. Create the Ingress object by running the following command: diff --git a/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md index edd78d322da73..489e0fee0fd5a 100644 --- a/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -26,7 +26,7 @@ provides load balancing for an application that has two running instances. Here is the configuration file for the application Deployment: -{{% codenew file="service/access/hello-application.yaml" %}} +{{% code file="service/access/hello-application.yaml" %}} 1. 
Run a Hello World application in your cluster: Create the application Deployment using the file above: diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index e67e5070b8d5b..5a20ddae2cd50 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -87,7 +87,7 @@ remote file exists To limit the access to the `nginx` service so that only Pods with the label `access: true` can query it, create a NetworkPolicy object as follows: -{{% codenew file="service/networking/nginx-policy.yaml" %}} +{{% code file="service/networking/nginx-policy.yaml" %}} The name of a NetworkPolicy object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index b8d8234ef6926..68e3dcce72c0c 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -24,7 +24,7 @@ kube-dns. ### Create a simple Pod to use as a test environment -{{% codenew file="admin/dns/dnsutils.yaml" %}} +{{% code file="admin/dns/dnsutils.yaml" %}} {{< note >}} This example creates a pod in the `default` namespace. DNS name resolution for diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index 54ad1a42c8450..fd07f77bdcd02 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -86,7 +86,7 @@ container based on the `cluster-proportional-autoscaler-amd64` image. Create a file named `dns-horizontal-autoscaler.yaml` with this content: -{{% codenew file="admin/dns/dns-horizontal-autoscaler.yaml" %}} +{{% code file="admin/dns/dns-horizontal-autoscaler.yaml" %}} In the file, replace `` with your scale target. diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md index c20ec954150ef..8722ac5385c47 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md @@ -47,7 +47,7 @@ kubectl create namespace constraints-cpu-example Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}: -{{% codenew file="admin/resource/cpu-constraints.yaml" %}} +{{% code file="admin/resource/cpu-constraints.yaml" %}} Create the LimitRange: @@ -98,7 +98,7 @@ Here's a manifest for a Pod that has one container. The container manifest specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the minimum and maximum CPU constraints imposed by the LimitRange for this namespace. -{{% codenew file="admin/resource/cpu-constraints-pod.yaml" %}} +{{% code file="admin/resource/cpu-constraints-pod.yaml" %}} Create the Pod: @@ -140,7 +140,7 @@ kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example Here's a manifest for a Pod that has one container. The container specifies a CPU request of 500 millicpu and a cpu limit of 1.5 cpu. 
-{{% codenew file="admin/resource/cpu-constraints-pod-2.yaml" %}} +{{% code file="admin/resource/cpu-constraints-pod-2.yaml" %}} Attempt to create the Pod: @@ -161,7 +161,7 @@ pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 8 Here's a manifest for a Pod that has one container. The container specifies a CPU request of 100 millicpu and a CPU limit of 800 millicpu. -{{% codenew file="admin/resource/cpu-constraints-pod-3.yaml" %}} +{{% code file="admin/resource/cpu-constraints-pod-3.yaml" %}} Attempt to create the Pod: @@ -183,7 +183,7 @@ pods "constraints-cpu-demo-3" is forbidden: minimum cpu usage per Container is 2 Here's a manifest for a Pod that has one container. The container does not specify a CPU request, nor does it specify a CPU limit. -{{% codenew file="admin/resource/cpu-constraints-pod-4.yaml" %}} +{{% code file="admin/resource/cpu-constraints-pod-4.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md index 2bc3398034368..31c689510a3a9 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md @@ -49,7 +49,7 @@ kubectl create namespace default-cpu-example Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}. The manifest specifies a default CPU request and a default CPU limit. -{{% codenew file="admin/resource/cpu-defaults.yaml" %}} +{{% code file="admin/resource/cpu-defaults.yaml" %}} Create the LimitRange in the default-cpu-example namespace: @@ -65,7 +65,7 @@ CPU limit of 1. Here's a manifest for a Pod that has one container. The container does not specify a CPU request and limit. -{{% codenew file="admin/resource/cpu-defaults-pod.yaml" %}} +{{% code file="admin/resource/cpu-defaults-pod.yaml" %}} Create the Pod. @@ -100,7 +100,7 @@ containers: Here's a manifest for a Pod that has one container. The container specifies a CPU limit, but not a request: -{{% codenew file="admin/resource/cpu-defaults-pod-2.yaml" %}} +{{% code file="admin/resource/cpu-defaults-pod-2.yaml" %}} Create the Pod: @@ -132,7 +132,7 @@ resources: Here's an example manifest for a Pod that has one container. The container specifies a CPU request, but not a limit: -{{% codenew file="admin/resource/cpu-defaults-pod-3.yaml" %}} +{{% code file="admin/resource/cpu-defaults-pod-3.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index 27f9a1c0abb2d..d32955608340b 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -43,7 +43,7 @@ kubectl create namespace constraints-mem-example Here's an example manifest for a LimitRange: -{{% codenew file="admin/resource/memory-constraints.yaml" %}} +{{% code file="admin/resource/memory-constraints.yaml" %}} Create the LimitRange: @@ -89,7 +89,7 @@ Here's a manifest for a Pod that has one container. Within the Pod spec, the sol container specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the minimum and maximum memory constraints imposed by the LimitRange. 
-{{% codenew file="admin/resource/memory-constraints-pod.yaml" %}} +{{% code file="admin/resource/memory-constraints-pod.yaml" %}} Create the Pod: @@ -132,7 +132,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example Here's a manifest for a Pod that has one container. The container specifies a memory request of 800 MiB and a memory limit of 1.5 GiB. -{{% codenew file="admin/resource/memory-constraints-pod-2.yaml" %}} +{{% code file="admin/resource/memory-constraints-pod-2.yaml" %}} Attempt to create the Pod: @@ -153,7 +153,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i Here's a manifest for a Pod that has one container. That container specifies a memory request of 100 MiB and a memory limit of 800 MiB. -{{% codenew file="admin/resource/memory-constraints-pod-3.yaml" %}} +{{% code file="admin/resource/memory-constraints-pod-3.yaml" %}} Attempt to create the Pod: @@ -174,7 +174,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i Here's a manifest for a Pod that has one container. The container does not specify a memory request, and it does not specify a memory limit. -{{% codenew file="admin/resource/memory-constraints-pod-4.yaml" %}} +{{% code file="admin/resource/memory-constraints-pod-4.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md index 4d2dd0931d3b9..9a1a313d9d3f0 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -53,7 +53,7 @@ Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id= The manifest specifies a default memory request and a default memory limit. -{{% codenew file="admin/resource/memory-defaults.yaml" %}} +{{% code file="admin/resource/memory-defaults.yaml" %}} Create the LimitRange in the default-mem-example namespace: @@ -70,7 +70,7 @@ applies default values: a memory request of 256MiB and a memory limit of 512MiB. Here's an example manifest for a Pod that has one container. The container does not specify a memory request and limit. -{{% codenew file="admin/resource/memory-defaults-pod.yaml" %}} +{{% code file="admin/resource/memory-defaults-pod.yaml" %}} Create the Pod. @@ -110,7 +110,7 @@ kubectl delete pod default-mem-demo --namespace=default-mem-example Here's a manifest for a Pod that has one container. The container specifies a memory limit, but not a request: -{{% codenew file="admin/resource/memory-defaults-pod-2.yaml" %}} +{{% code file="admin/resource/memory-defaults-pod-2.yaml" %}} Create the Pod: @@ -141,7 +141,7 @@ resources: Here's a manifest for a Pod that has one container. 
The container specifies a memory request, but not a limit: -{{% codenew file="admin/resource/memory-defaults-pod-3.yaml" %}} +{{% code file="admin/resource/memory-defaults-pod-3.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md index ca4449a9ea283..1259671457c17 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md @@ -42,7 +42,7 @@ kubectl create namespace quota-mem-cpu-example Here is a manifest for an example ResourceQuota: -{{% codenew file="admin/resource/quota-mem-cpu.yaml" %}} +{{% code file="admin/resource/quota-mem-cpu.yaml" %}} Create the ResourceQuota: @@ -71,7 +71,7 @@ to learn what Kubernetes means by “1 CPU”. Here is a manifest for an example Pod: -{{% codenew file="admin/resource/quota-mem-cpu-pod.yaml" %}} +{{% code file="admin/resource/quota-mem-cpu-pod.yaml" %}} Create the Pod: @@ -121,7 +121,7 @@ kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o json Here is a manifest for a second Pod: -{{% codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" %}} +{{% code file="admin/resource/quota-mem-cpu-pod-2.yaml" %}} In the manifest, you can see that the Pod has a memory request of 700 MiB. Notice that the sum of the used memory request and this new memory diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md index df31d67b7fbce..26c78e3747759 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md @@ -39,7 +39,7 @@ kubectl create namespace quota-pod-example Here is an example manifest for a ResourceQuota: -{{% codenew file="admin/resource/quota-pod.yaml" %}} +{{% code file="admin/resource/quota-pod.yaml" %}} Create the ResourceQuota: @@ -69,7 +69,7 @@ status: Here is an example manifest for a {{< glossary_tooltip term_id="deployment" >}}: -{{% codenew file="admin/resource/quota-pod-deployment.yaml" %}} +{{% code file="admin/resource/quota-pod-deployment.yaml" %}} In that manifest, `replicas: 3` tells Kubernetes to attempt to create three new Pods, all running the same application. diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index f08a8f423d772..5548d82c81774 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -73,7 +73,7 @@ Let's create two new namespaces to hold our work. Use the file [`namespace-dev.yaml`](/examples/admin/namespace-dev.yaml) which describes a `development` namespace: -{{% codenew language="yaml" file="admin/namespace-dev.yaml" %}} +{{% code language="yaml" file="admin/namespace-dev.yaml" %}} Create the `development` namespace using kubectl. 
@@ -83,7 +83,7 @@ kubectl create -f https://k8s.io/examples/admin/namespace-dev.yaml Save the following contents into file [`namespace-prod.yaml`](/examples/admin/namespace-prod.yaml) which describes a `production` namespace: -{{% codenew language="yaml" file="admin/namespace-prod.yaml" %}} +{{% code language="yaml" file="admin/namespace-prod.yaml" %}} And then let's create the `production` namespace using kubectl. @@ -226,7 +226,7 @@ At this point, all requests we make to the Kubernetes cluster from the command l Let's create some contents. -{{% codenew file="admin/snowflake-deployment.yaml" %}} +{{% code file="admin/snowflake-deployment.yaml" %}} Apply the manifest to create a Deployment diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md index a8895fb407f08..716eaadb9ae2c 100644 --- a/content/en/docs/tasks/administer-cluster/quota-api-object.md +++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md @@ -40,7 +40,7 @@ kubectl create namespace quota-object-example Here is the configuration file for a ResourceQuota object: -{{% codenew file="admin/resource/quota-objects.yaml" %}} +{{% code file="admin/resource/quota-objects.yaml" %}} Create the ResourceQuota: @@ -74,7 +74,7 @@ status: Here is the configuration file for a PersistentVolumeClaim object: -{{% codenew file="admin/resource/quota-objects-pvc.yaml" %}} +{{% code file="admin/resource/quota-objects-pvc.yaml" %}} Create the PersistentVolumeClaim: @@ -99,7 +99,7 @@ pvc-quota-demo Pending Here is the configuration file for a second PersistentVolumeClaim: -{{% codenew file="admin/resource/quota-objects-pvc-2.yaml" %}} +{{% code file="admin/resource/quota-objects-pvc-2.yaml" %}} Attempt to create the second PersistentVolumeClaim: diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 3bef0ec884be8..d5ec024560806 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -92,7 +92,7 @@ projects in repositories maintained by cloud vendors or by SIGs. For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster, use the following as a guideline: -{{% codenew file="admin/cloud/ccm-example.yaml" %}} +{{% code file="admin/cloud/ccm-example.yaml" %}} ## Limitations diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 712f6a6c53638..452d32d7a55da 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -71,7 +71,7 @@ in the Container resource manifest. To specify a CPU limit, include `resources:l In this exercise, you create a Pod that has one container. The container has a request of 0.5 CPU and a limit of 1 CPU. Here is the configuration file for the Pod: -{{% codenew file="pods/resource/cpu-request-limit.yaml" %}} +{{% code file="pods/resource/cpu-request-limit.yaml" %}} The `args` section of the configuration file provides arguments for the container when it starts. The `-cpus "2"` argument tells the Container to attempt to use 2 CPUs. @@ -163,7 +163,7 @@ the capacity of any Node in your cluster. Here is the configuration file for a P that has one Container. 
The Container requests 100 CPU, which is likely to exceed the capacity of any Node in your cluster.

-{{% codenew file="pods/resource/cpu-request-limit-2.yaml" %}}
+{{% code file="pods/resource/cpu-request-limit-2.yaml" %}}

Create the Pod:

diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
index 48dc43e03a784..81c771c8b4335 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
@@ -69,7 +69,7 @@ In this exercise, you create a Pod that has one Container. The Container has a m
request of 100 MiB and a memory limit of 200 MiB. Here's the configuration file
for the Pod:

-{{% codenew file="pods/resource/memory-request-limit.yaml" %}}
+{{% code file="pods/resource/memory-request-limit.yaml" %}}

The `args` section in the configuration file provides arguments for the Container when it starts.
The `"--vm-bytes", "150M"` arguments tell the Container to attempt to allocate 150 MiB of memory.

@@ -139,7 +139,7 @@ In this exercise, you create a Pod that attempts to allocate more memory than it

Here is the configuration file for a Pod that has one Container with a
memory request of 50 MiB and a memory limit of 100 MiB:

-{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}}
+{{% code file="pods/resource/memory-request-limit-2.yaml" %}}

In the `args` section of the configuration file, you can see that the Container
will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.

@@ -248,7 +248,7 @@ capacity of any Node in your cluster. Here is the configuration file for a Pod t
Container with a request for 1000 GiB of memory, which likely exceeds the capacity
of any Node in your cluster.

-{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}}
+{{% code file="pods/resource/memory-request-limit-3.yaml" %}}

Create the Pod:

diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
index b457343820845..6100d99e3b061 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
@@ -64,7 +64,7 @@ Kubernetes cluster.

This manifest describes a Pod that has a `requiredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
This means that the pod will get scheduled only on a node that has a `disktype=ssd` label.

-{{% codenew file="pods/pod-nginx-required-affinity.yaml" %}}
+{{% code file="pods/pod-nginx-required-affinity.yaml" %}}

1. Apply the manifest to create a Pod that is scheduled onto your chosen node:

@@ -91,7 +91,7 @@ This means that the pod will get scheduled only on a node that has a `disktype=s

This manifest describes a Pod that has a `preferredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
This means that the pod will prefer a node that has a `disktype=ssd` label.

-{{% codenew file="pods/pod-nginx-preferred-affinity.yaml" %}}
+{{% code file="pods/pod-nginx-preferred-affinity.yaml" %}}

1.
Apply the manifest to create a Pod that is scheduled onto your chosen node:

diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
index ea45fbfbf7b8b..1a15b06cc3a1d 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md
@@ -66,7 +66,7 @@ This pod configuration file describes a pod that has a node selector,
`disktype: ssd`. This means that the pod will get scheduled on a node that has
a `disktype=ssd` label.

-{{% codenew file="pods/pod-nginx.yaml" %}}
+{{% code file="pods/pod-nginx.yaml" %}}

1. Use the configuration file to create a pod that will get scheduled on your
   chosen node:

@@ -91,7 +91,7 @@ a `disktype=ssd` label.

You can also schedule a pod to one specific node by setting `nodeName`.

-{{% codenew file="pods/pod-nginx-specific-node.yaml" %}}
+{{% code file="pods/pod-nginx-specific-node.yaml" %}}

Use the configuration file to create a pod that will get scheduled on `foo-node` only.

diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
index 58109514719bc..5fe7837b50ac4 100644
--- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
+++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md
@@ -30,7 +30,7 @@ for the postStart and preStop events.

Here is the configuration file for the Pod:

-{{% codenew file="pods/lifecycle-events.yaml" %}}
+{{% code file="pods/lifecycle-events.yaml" %}}

In the configuration file, you can see that the postStart command writes a `message`
file to the Container's `/usr/share` directory. The preStop command shuts down

diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index 4141311ee4a84..51b35813ffab4 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -57,7 +57,7 @@ liveness probes to detect and remedy such situations.

In this exercise, you create a Pod that runs a container based on the
`registry.k8s.io/busybox` image. Here is the configuration file for the Pod:

-{{% codenew file="pods/probe/exec-liveness.yaml" %}}
+{{% code file="pods/probe/exec-liveness.yaml" %}}

In the configuration file, you can see that the Pod has a single `Container`.
The `periodSeconds` field specifies that the kubelet should perform a liveness

@@ -142,7 +142,7 @@ liveness-exec 1/1 Running 1 1m

Another kind of liveness probe uses an HTTP GET request. Here is the configuration
file for a Pod that runs a container based on the `registry.k8s.io/liveness` image.

-{{% codenew file="pods/probe/http-liveness.yaml" %}}
+{{% code file="pods/probe/http-liveness.yaml" %}}

In the configuration file, you can see that the Pod has a single container.
The `periodSeconds` field specifies that the kubelet should perform a liveness

@@ -203,7 +203,7 @@ kubelet will attempt to open a socket to your container on the specified port.
If it can establish a connection, the container is considered healthy; if it
can't, it is considered a failure.
-{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}} +{{% code file="pods/probe/tcp-liveness-readiness.yaml" %}} As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will send the @@ -241,7 +241,7 @@ Similarly you can configure readiness and startup probes. Here is an example manifest: -{{% codenew file="pods/probe/grpc-liveness.yaml" %}} +{{% code file="pods/probe/grpc-liveness.yaml" %}} To use a gRPC probe, `port` must be configured. If you want to distinguish probes of different types and probes for different features you can use the `service` field. diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 94227f7479ae0..c04a9a56cee00 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -89,7 +89,7 @@ to set up Here is the configuration file for the hostPath PersistentVolume: -{{% codenew file="pods/storage/pv-volume.yaml" %}} +{{% code file="pods/storage/pv-volume.yaml" %}} The configuration file specifies that the volume is at `/mnt/data` on the cluster's Node. The configuration also specifies a size of 10 gibibytes and @@ -127,7 +127,7 @@ access for at most one Node at a time. Here is the configuration file for the PersistentVolumeClaim: -{{% codenew file="pods/storage/pv-claim.yaml" %}} +{{% code file="pods/storage/pv-claim.yaml" %}} Create the PersistentVolumeClaim: @@ -173,7 +173,7 @@ The next step is to create a Pod that uses your PersistentVolumeClaim as a volum Here is the configuration file for the Pod: -{{% codenew file="pods/storage/pv-pod.yaml" %}} +{{% code file="pods/storage/pv-pod.yaml" %}} Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod's point of view, the claim @@ -244,7 +244,7 @@ You can now close the shell to your Node. ## Mounting the same persistentVolume in two places -{{% codenew file="pods/storage/pv-duplicate.yaml" %}} +{{% code file="pods/storage/pv-duplicate.yaml" %}} You can perform 2 volume mounts on your nginx container: diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index a724760b265b3..2f16cf981a1ec 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -547,7 +547,7 @@ section, and learn how to use these objects with Pods. 2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY` environment variable in the Pod specification. - {{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}} + {{% code file="pods/pod-single-configmap-env-variable.yaml" %}} Create the Pod: @@ -562,7 +562,7 @@ section, and learn how to use these objects with Pods. As with the previous example, create the ConfigMaps first. Here is the manifest you will use: -{{% codenew file="configmap/configmaps.yaml" %}} +{{% code file="configmap/configmaps.yaml" %}} * Create the ConfigMap: @@ -572,7 +572,7 @@ Here is the manifest you will use: * Define the environment variables in the Pod specification. 
-   {{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}}
+   {{% code file="pods/pod-multiple-configmap-env-variable.yaml" %}}

   Create the Pod:

@@ -591,7 +591,7 @@ Here is the manifest you will use:

* Create a ConfigMap containing multiple key-value pairs.

-   {{% codenew file="configmap/configmap-multikeys.yaml" %}}
+   {{% code file="configmap/configmap-multikeys.yaml" %}}

   Create the ConfigMap:

@@ -602,7 +602,7 @@ Here is the manifest you will use:

* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The
  key from the ConfigMap becomes the environment variable name in the Pod.

-   {{% codenew file="pods/pod-configmap-envFrom.yaml" %}}
+   {{% code file="pods/pod-configmap-envFrom.yaml" %}}

   Create the Pod:

@@ -624,7 +624,7 @@ using the `$(VAR_NAME)` Kubernetes substitution syntax.

For example, the following Pod manifest:

-{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}}
+{{% code file="pods/pod-configmap-env-var-valueFrom.yaml" %}}

Create that Pod by running:

@@ -651,7 +651,7 @@ the ConfigMap. The file contents become the key's value.

The examples in this section refer to a ConfigMap named `special-config`:

-{{% codenew file="configmap/configmap-multikeys.yaml" %}}
+{{% code file="configmap/configmap-multikeys.yaml" %}}

Create the ConfigMap:

@@ -666,7 +666,7 @@ This adds the ConfigMap data to the directory specified as `volumeMounts.mountPa
case, `/etc/config`). The `command` section lists directory files with names
that match the keys in the ConfigMap.

-{{% codenew file="pods/pod-configmap-volume.yaml" %}}
+{{% code file="pods/pod-configmap-volume.yaml" %}}

Create the Pod:

@@ -700,7 +700,7 @@ kubectl delete pod dapi-test-pod --now

Use the `path` field to specify the desired file path for specific ConfigMap items.
In this case, the `SPECIAL_LEVEL` item will be mounted in the `config-volume`
volume at `/etc/config/keys`.

-{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}}
+{{% code file="pods/pod-configmap-volume-specific-key.yaml" %}}

Create the Pod:

diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md
index 2bc67b4bd100e..b48f936bcd967 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md
@@ -23,7 +23,7 @@ container starts.

Here is the configuration file for the Pod:

-{{% codenew file="pods/init-containers.yaml" %}}
+{{% code file="pods/init-containers.yaml" %}}

In the configuration file, you can see that the Pod has a Volume that the init
container and the application container share.

diff --git a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md
index 90e3d5c15465c..3e083db8c9924 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md
@@ -29,7 +29,7 @@ In this exercise, you create username and password {{< glossary_tooltip text="Se

Here is the configuration file for the Pod:

-{{% codenew file="pods/storage/projected.yaml" %}}
+{{% code file="pods/storage/projected.yaml" %}}

1.
Create the Secrets: diff --git a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md index 73aede4c94c0e..72a1f764d6eec 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md +++ b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md @@ -29,7 +29,7 @@ The Windows security context options that you specify for a Pod apply to all Con Here is a configuration file for a Windows Pod that has the `runAsUserName` field set: -{{% codenew file="windows/run-as-username-pod.yaml" %}} +{{% code file="windows/run-as-username-pod.yaml" %}} Create the Pod: @@ -69,7 +69,7 @@ The Windows security context options that you specify for a Container apply only Here is the configuration file for a Pod that has one Container, and the `runAsUserName` field is set at the Pod level and the Container level: -{{% codenew file="windows/run-as-username-container.yaml" %}} +{{% code file="windows/run-as-username-container.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index e537471adad82..9b00e6edab1f0 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -403,7 +403,7 @@ You can configure this behavior for the `spec` of a Pod using a To provide a Pod with a token with an audience of `vault` and a validity duration of two hours, you could define a Pod manifest that is similar to: -{{% codenew file="pods/pod-projected-svc-token.yaml" %}} +{{% code file="pods/pod-projected-svc-token.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index 0b31fce245c61..b3f3de48db6d1 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -28,7 +28,7 @@ Volume of type that lasts for the life of the Pod, even if the Container terminates and restarts. Here is the configuration file for the Pod: -{{% codenew file="pods/storage/redis.yaml" %}} +{{% code file="pods/storage/redis.yaml" %}} 1. Create the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/extended-resource.md b/content/en/docs/tasks/configure-pod-container/extended-resource.md index 756694f05ff09..4442f2f261768 100644 --- a/content/en/docs/tasks/configure-pod-container/extended-resource.md +++ b/content/en/docs/tasks/configure-pod-container/extended-resource.md @@ -37,7 +37,7 @@ descriptive resource name. Here is the configuration file for a Pod that has one Container: -{{% codenew file="pods/resource/extended-resource-pod.yaml" %}} +{{% code file="pods/resource/extended-resource-pod.yaml" %}} In the configuration file, you can see that the Container requests 3 dongles. @@ -73,7 +73,7 @@ Requests: Here is the configuration file for a Pod that has one Container. The Container requests two dongles. -{{% codenew file="pods/resource/extended-resource-pod-2.yaml" %}} +{{% code file="pods/resource/extended-resource-pod-2.yaml" %}} Kubernetes will not be able to satisfy the request for two dongles, because the first Pod used three of the four available dongles. 
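As a quick reference for the dongle discussion above: an extended resource is requested through a container's `resources` block, just like CPU or memory, and Kubernetes requires the request and limit for an extended resource to be equal. A hedged sketch (the `example.com/dongle` name follows the page's convention; the real manifests live in the referenced files):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 3   # ask the scheduler for 3 dongles
      limits:
        example.com/dongle: 3   # must equal the request for extended resources
```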
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index e871d9bb810b6..6de401f2e442b 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -185,7 +185,7 @@ You have successfully set your Docker credentials as a Secret called `regcred` i Here is a manifest for an example Pod that needs access to your Docker credentials in `regcred`: -{{% codenew file="pods/private-reg-pod.yaml" %}} +{{% code file="pods/private-reg-pod.yaml" %}} Download the above file onto your computer: diff --git a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md index ad69ad94088cb..701cb7390a1f2 100644 --- a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md @@ -56,7 +56,7 @@ cannot define resources so these restrictions do not apply. Here is a manifest for a Pod that has one Container. The Container has a memory limit and a memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU request, both equal to 700 milliCPU: -{{% codenew file="pods/qos/qos-pod.yaml" %}} +{{% code file="pods/qos/qos-pod.yaml" %}} Create the Pod: @@ -116,7 +116,7 @@ A Pod is given a QoS class of `Burstable` if: Here is a manifest for a Pod that has one Container. The Container has a memory limit of 200 MiB and a memory request of 100 MiB. -{{% codenew file="pods/qos/qos-pod-2.yaml" %}} +{{% code file="pods/qos/qos-pod-2.yaml" %}} Create the Pod: @@ -165,7 +165,7 @@ have any memory or CPU limits or requests. Here is a manifest for a Pod that has one Container. The Container has no memory or CPU limits or requests: -{{% codenew file="pods/qos/qos-pod-3.yaml" %}} +{{% code file="pods/qos/qos-pod-3.yaml" %}} Create the Pod: @@ -205,7 +205,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example Here is a manifest for a Pod that has two Containers. One container specifies a memory request of 200 MiB. The other Container does not specify any requests or limits. -{{% codenew file="pods/qos/qos-pod-4.yaml" %}} +{{% code file="pods/qos/qos-pod-4.yaml" %}} Notice that this Pod meets the criteria for QoS class `Burstable`. That is, it does not meet the criteria for QoS class `Guaranteed`, and one of its Containers has a memory request. diff --git a/content/en/docs/tasks/configure-pod-container/resize-container-resources.md b/content/en/docs/tasks/configure-pod-container/resize-container-resources.md index 70cd09662ada9..38199c038e11a 100644 --- a/content/en/docs/tasks/configure-pod-container/resize-container-resources.md +++ b/content/en/docs/tasks/configure-pod-container/resize-container-resources.md @@ -107,7 +107,7 @@ class pod by specifying requests and/or limits for a pod's containers. Consider the following manifest for a Pod that has one Container. 
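Before the referenced manifest (which follows immediately below), it may help to restate the rule the QoS hunks above rely on: a Pod is classed as `Guaranteed` only when every container sets CPU and memory limits equal to its requests, and `Burstable` when at least one container sets some request or limit without meeting that bar. A hedged sketch of a `Guaranteed`-shaped `resources` block (values mirror the 200 MiB / 700 milliCPU example described above):

```yaml
resources:
  limits:
    memory: "200Mi"   # limit == request for memory
    cpu: "700m"       # limit == request for CPU
  requests:
    memory: "200Mi"
    cpu: "700m"
```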
-{{% codenew file="pods/qos/qos-pod-5.yaml" %}} +{{% code file="pods/qos/qos-pod-5.yaml" %}} Create the pod in the `qos-example` namespace: diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index 506c48186d52f..c1ac3a2872edc 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -58,7 +58,7 @@ in the Pod specification. The `securityContext` field is a The security settings that you specify for a Pod apply to all Containers in the Pod. Here is a configuration file for a Pod that has a `securityContext` and an `emptyDir` volume: -{{% codenew file="pods/security/security-context.yaml" %}} +{{% code file="pods/security/security-context.yaml" %}} In the configuration file, the `runAsUser` field specifies that for any Containers in the Pod, all processes run with user ID 1000. The `runAsGroup` field specifies the primary group ID of 3000 for @@ -221,7 +221,7 @@ there is overlap. Container settings do not affect the Pod's Volumes. Here is the configuration file for a Pod that has one Container. Both the Pod and the Container have a `securityContext` field: -{{% codenew file="pods/security/security-context-2.yaml" %}} +{{% code file="pods/security/security-context-2.yaml" %}} Create the Pod: @@ -274,7 +274,7 @@ of the root user. To add or remove Linux capabilities for a Container, include t First, see what happens when you don't include a `capabilities` field. Here is configuration file that does not add or remove any Container capabilities: -{{% codenew file="pods/security/security-context-3.yaml" %}} +{{% code file="pods/security/security-context-3.yaml" %}} Create the Pod: @@ -336,7 +336,7 @@ that it has additional capabilities set. Here is the configuration file for a Pod that runs one Container. The configuration adds the `CAP_NET_ADMIN` and `CAP_SYS_TIME` capabilities: -{{% codenew file="pods/security/security-context-4.yaml" %}} +{{% code file="pods/security/security-context-4.yaml" %}} Create the Pod: diff --git a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md index d488ee6e24c40..db80e233a5b3c 100644 --- a/content/en/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/en/docs/tasks/configure-pod-container/share-process-namespace.md @@ -29,7 +29,7 @@ include debugging utilities like a shell. Process namespace sharing is enabled using the `shareProcessNamespace` field of `.spec` for a Pod. For example: -{{% codenew file="pods/share-process-namespace.yaml" %}} +{{% code file="pods/share-process-namespace.yaml" %}} 1. Create the pod `nginx` on your cluster: diff --git a/content/en/docs/tasks/configure-pod-container/user-namespaces.md b/content/en/docs/tasks/configure-pod-container/user-namespaces.md index 26ef3a441b144..1fe59e0002196 100644 --- a/content/en/docs/tasks/configure-pod-container/user-namespaces.md +++ b/content/en/docs/tasks/configure-pod-container/user-namespaces.md @@ -62,7 +62,7 @@ created without user namespaces.** A user namespace for a stateless pod is enabled setting the `hostUsers` field of `.spec` to `false`. For example: -{{% codenew file="pods/user-namespaces-stateless.yaml" %}} +{{% code file="pods/user-namespaces-stateless.yaml" %}} 1. 
Create the pod on your cluster:

diff --git a/content/en/docs/tasks/debug/debug-application/debug-running-pod.md b/content/en/docs/tasks/debug/debug-application/debug-running-pod.md
index 80ed083d47cd1..48862b3008b45 100644
--- a/content/en/docs/tasks/debug/debug-application/debug-running-pod.md
+++ b/content/en/docs/tasks/debug/debug-application/debug-running-pod.md
@@ -25,7 +25,7 @@ This page explains how to debug Pods running (or crashing) on a Node.

For this example, we'll use a Deployment to create two pods, similar to the earlier example.

-{{% codenew file="application/nginx-with-request.yaml" %}}
+{{% code file="application/nginx-with-request.yaml" %}}

Create the deployment by running the following command:

diff --git a/content/en/docs/tasks/debug/debug-application/determine-reason-pod-failure.md b/content/en/docs/tasks/debug/debug-application/determine-reason-pod-failure.md
index 24b1ff6b6e600..7e578efc2db27 100644
--- a/content/en/docs/tasks/debug/debug-application/determine-reason-pod-failure.md
+++ b/content/en/docs/tasks/debug/debug-application/determine-reason-pod-failure.md
@@ -27,7 +27,7 @@ the general

In this exercise, you create a Pod that runs one container.
The manifest for that Pod specifies a command that runs when the container starts:

-{{% codenew file="debug/termination.yaml" %}}
+{{% code file="debug/termination.yaml" %}}

1. Create a Pod based on the YAML configuration file:

diff --git a/content/en/docs/tasks/debug/debug-application/get-shell-running-container.md b/content/en/docs/tasks/debug/debug-application/get-shell-running-container.md
index 7e088227d8cc4..7bd79f72f4aeb 100644
--- a/content/en/docs/tasks/debug/debug-application/get-shell-running-container.md
+++ b/content/en/docs/tasks/debug/debug-application/get-shell-running-container.md
@@ -29,7 +29,7 @@ running container.

In this exercise, you create a Pod that has one container. The container
runs the nginx image. Here is the configuration file for the Pod:

-{{% codenew file="application/shell-demo.yaml" %}}
+{{% code file="application/shell-demo.yaml" %}}

Create the Pod:

diff --git a/content/en/docs/tasks/debug/debug-cluster/audit.md b/content/en/docs/tasks/debug/debug-cluster/audit.md
index 4e1a178f3e2a8..2a9e02254624e 100644
--- a/content/en/docs/tasks/debug/debug-cluster/audit.md
+++ b/content/en/docs/tasks/debug/debug-cluster/audit.md
@@ -80,7 +80,7 @@ A policy with no (0) rules is treated as illegal.

Below is an example audit policy file:

-{{% codenew file="audit/audit-policy.yaml" %}}
+{{% code file="audit/audit-policy.yaml" %}}

You can use a minimal audit policy file to log all requests at the `Metadata` level:

diff --git a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
index 103009c9b4347..8a96fee3ab863 100644
--- a/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
+++ b/content/en/docs/tasks/debug/debug-cluster/monitor-node-health.md
@@ -42,7 +42,7 @@ to detect customized node problems. For example:

1. Create a Node Problem Detector configuration similar to `node-problem-detector.yaml`:

-   {{% codenew file="debug/node-problem-detector.yaml" %}}
+   {{% code file="debug/node-problem-detector.yaml" %}}

   {{< note >}}
   You should verify that the system log directory is right for your operating system distribution.

@@ -80,7 +80,7 @@ to overwrite the configuration:

1.
Change the `node-problem-detector.yaml` to use the `ConfigMap`: - {{% codenew file="debug/node-problem-detector-configmap.yaml" %}} + {{% code file="debug/node-problem-detector-configmap.yaml" %}} 1. Recreate the Node Problem Detector with the new configuration file: diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index e95a7de0bb494..e4a9a186b181e 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -69,7 +69,7 @@ for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment thereby making the scheduler resilient to failures. Here is the deployment config. Save it as `my-scheduler.yaml`: -{{% codenew file="admin/sched/my-scheduler.yaml" %}} +{{% code file="admin/sched/my-scheduler.yaml" %}} In the above manifest, you use a [KubeSchedulerConfiguration](/docs/reference/scheduling/config/) to customize the behavior of your scheduler implementation. This configuration has been passed to @@ -139,7 +139,7 @@ Add your scheduler name to the resourceNames of the rule applied for `endpoints` kubectl edit clusterrole system:kube-scheduler ``` -{{% codenew file="admin/sched/clusterrole.yaml" %}} +{{% code file="admin/sched/clusterrole.yaml" %}} ## Specify schedulers for pods @@ -150,7 +150,7 @@ scheduler in that pod spec. Let's look at three examples. - Pod spec without any scheduler name - {{% codenew file="admin/sched/pod1.yaml" %}} + {{% code file="admin/sched/pod1.yaml" %}} When no scheduler name is supplied, the pod is automatically scheduled using the default-scheduler. @@ -163,7 +163,7 @@ scheduler in that pod spec. Let's look at three examples. - Pod spec with `default-scheduler` - {{% codenew file="admin/sched/pod2.yaml" %}} + {{% code file="admin/sched/pod2.yaml" %}} A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`. In this case, we supply the name of the default scheduler which is `default-scheduler`. @@ -176,7 +176,7 @@ scheduler in that pod spec. Let's look at three examples. - Pod spec with `my-scheduler` - {{% codenew file="admin/sched/pod3.yaml" %}} + {{% code file="admin/sched/pod3.yaml" %}} In this case, we specify that this pod should be scheduled using the scheduler that we deployed - `my-scheduler`. Note that the value of `spec.schedulerName` should match the name supplied for the scheduler diff --git a/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md b/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md index 889f5bd660434..3d1487c13478e 100644 --- a/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md +++ b/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md @@ -23,7 +23,7 @@ plane hosts. If you do not already have a cluster, you can create one by using The following steps require an egress configuration, for example: -{{% codenew file="admin/konnectivity/egress-selector-configuration.yaml" %}} +{{% code file="admin/konnectivity/egress-selector-configuration.yaml" %}} You need to configure the API Server to use the Konnectivity service and direct the network traffic to the cluster nodes: @@ -74,12 +74,12 @@ that the Kubernetes components are deployed as a {{< glossary_tooltip text="stat term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity server as a DaemonSet. 
-{{% codenew file="admin/konnectivity/konnectivity-server.yaml" %}} +{{% code file="admin/konnectivity/konnectivity-server.yaml" %}} Then deploy the Konnectivity agents in your cluster: -{{% codenew file="admin/konnectivity/konnectivity-agent.yaml" %}} +{{% code file="admin/konnectivity/konnectivity-agent.yaml" %}} Last, if RBAC is enabled in your cluster, create the relevant RBAC rules: -{{% codenew file="admin/konnectivity/konnectivity-rbac.yaml" %}} +{{% code file="admin/konnectivity/konnectivity-rbac.yaml" %}} diff --git a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md index 38e1ba39ea655..08370e621727a 100644 --- a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md +++ b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md @@ -42,7 +42,7 @@ The `command` field corresponds to `entrypoint` in some container runtimes. In this exercise, you create a Pod that runs one container. The configuration file for the Pod defines a command and two arguments: -{{% codenew file="pods/commands.yaml" %}} +{{% code file="pods/commands.yaml" %}} 1. Create a Pod based on the YAML configuration file: diff --git a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md index 648abc02e82c1..fed16a131d40f 100644 --- a/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/en/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -42,7 +42,7 @@ file for the Pod defines an environment variable with name `DEMO_GREETING` and value `"Hello from the environment"`. Here is the configuration manifest for the Pod: -{{% codenew file="pods/inject/envars.yaml" %}} +{{% code file="pods/inject/envars.yaml" %}} 1. Create a Pod based on that manifest: diff --git a/content/en/docs/tasks/inject-data-application/define-interdependent-environment-variables.md b/content/en/docs/tasks/inject-data-application/define-interdependent-environment-variables.md index 3abc99ff2f12b..64121fee1ba72 100644 --- a/content/en/docs/tasks/inject-data-application/define-interdependent-environment-variables.md +++ b/content/en/docs/tasks/inject-data-application/define-interdependent-environment-variables.md @@ -26,7 +26,7 @@ In this exercise, you create a Pod that runs one container. The configuration file for the Pod defines a dependent environment variable with common usage defined. Here is the configuration manifest for the Pod: -{{% codenew file="pods/inject/dependent-envars.yaml" %}} +{{% code file="pods/inject/dependent-envars.yaml" %}} 1. Create a Pod based on that manifest: diff --git a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md index aa8efd5e13bb9..984172fadfbfc 100644 --- a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -38,7 +38,7 @@ Use a local tool trusted by your OS to decrease the security risks of external t Here is a configuration file you can use to create a Secret that holds your username and password: -{{% codenew file="pods/inject/secret.yaml" %}} +{{% code file="pods/inject/secret.yaml" %}} 1. 
Create the Secret

@@ -97,7 +97,7 @@ through each step explicitly to demonstrate what is happening.

Here is a configuration file you can use to create a Pod:

-{{% codenew file="pods/inject/secret-pod.yaml" %}}
+{{% code file="pods/inject/secret-pod.yaml" %}}

1. Create the Pod:

@@ -252,7 +252,7 @@ secrets change.

- Assign the `backend-username` value defined in the Secret to the
  `SECRET_USERNAME` environment variable in the Pod specification.

-   {{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}}
+   {{% code file="pods/inject/pod-single-secret-env-variable.yaml" %}}

- Create the Pod:

@@ -282,7 +282,7 @@ secrets change.

- Define the environment variables in the Pod specification.

-   {{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}}
+   {{% code file="pods/inject/pod-multiple-secret-env-variable.yaml" %}}

- Create the Pod:

@@ -315,7 +315,7 @@ This functionality is available in Kubernetes v1.6 and later.

- Use envFrom to define all of the Secret's data as container environment variables.
  The key from the Secret becomes the environment variable name in the Pod.

-   {{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}}
+   {{% code file="pods/inject/pod-secret-envFrom.yaml" %}}

- Create the Pod:

diff --git a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md
index 3203072bc50de..12f5c26478799 100644
--- a/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md
+++ b/content/en/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md
@@ -32,7 +32,7 @@ In this part of the exercise, you create a Pod that has one container, and you
project Pod-level fields into the running container as files.
Here is the manifest for the Pod:

-{{% codenew file="pods/inject/dapi-volume.yaml" %}}
+{{% code file="pods/inject/dapi-volume.yaml" %}}

In the manifest, you can see that the Pod has a `downwardAPI` Volume,
and the container mounts the volume at `/etc/podinfo`.

@@ -155,7 +155,7 @@ definition, but taken from the specific
container rather than from the Pod overall.

Here is a manifest for a Pod that again has just one container:

-{{% codenew file="pods/inject/dapi-volume-resources.yaml" %}}
+{{% code file="pods/inject/dapi-volume-resources.yaml" %}}

In the manifest, you can see that the Pod has a
[`downwardAPI` volume](/docs/concepts/storage/volumes/#downwardapi),

diff --git a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md
index cc92bc4ad187f..4660192a0849b 100644
--- a/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md
+++ b/content/en/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md
@@ -34,7 +34,7 @@ Read more about accessing Services [here](/docs/tutorials/services/connect-appli

In this part of the exercise, you create a Pod that has one container, and you
project Pod-level fields into the running container as environment variables.

-{{% codenew file="pods/inject/dapi-envars-pod.yaml" %}}
+{{% code file="pods/inject/dapi-envars-pod.yaml" %}}

In that manifest, you can see five environment variables. The `env`
field is an array of

@@ -119,7 +119,7 @@ rather than from the Pod overall.
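For context on the downward API hunks above: Pod-level fields reach a container through `fieldRef` entries under `env`. A minimal hedged fragment of the pattern (field paths are standard downward API fields; the complete example lives in `dapi-envars-pod.yaml`):

```yaml
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName      # the Node this Pod landed on
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace # the Pod's own namespace
```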
Here is a manifest for another Pod that again has just one container: -{{% codenew file="pods/inject/dapi-envars-container.yaml" %}} +{{% code file="pods/inject/dapi-envars-container.yaml" %}} In this manifest, you can see four environment variables. The `env` field is an array of diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index 819f372bb3568..f6573f0e2bda8 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -22,7 +22,7 @@ This page shows how to run automated tasks using Kubernetes {{< glossary_tooltip Cron jobs require a config file. Here is a manifest for a CronJob that runs a simple demonstration task every minute: -{{% codenew file="application/job/cronjob.yaml" %}} +{{% code file="application/job/cronjob.yaml" %}} Run the example CronJob by using this command: diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index eb297f2ee3e05..aa464c9ebca5d 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -186,7 +186,7 @@ We will use the `amqp-consume` utility to read the message from the queue and run our actual program. Here is a very simple example program: -{{% codenew language="python" file="application/job/rabbitmq/worker.py" %}} +{{% code language="python" file="application/job/rabbitmq/worker.py" %}} Give the script execution permission: @@ -230,7 +230,7 @@ Here is a job definition. You'll need to make a copy of the Job and edit the image to match the name you used, and call it `./job.yaml`. -{{% codenew file="application/job/rabbitmq/job.yaml" %}} +{{% code file="application/job/rabbitmq/job.yaml" %}} In this example, each pod works on one item from the queue and then exits. So, the completion count of the Job corresponds to the number of work items diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index cec1d15660557..a14721528f091 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -119,7 +119,7 @@ called rediswq.py ([Download](/examples/application/job/redis/rediswq.py)). The "worker" program in each Pod of the Job uses the work queue client library to get work. Here it is: -{{% codenew language="python" file="application/job/redis/worker.py" %}} +{{% code language="python" file="application/job/redis/worker.py" %}} You could also download [`worker.py`](/examples/application/job/redis/worker.py), [`rediswq.py`](/examples/application/job/redis/rediswq.py), and @@ -158,7 +158,7 @@ gcloud docker -- push gcr.io//job-wq-2 Here is the job definition: -{{% codenew file="application/job/redis/job.yaml" %}} +{{% code file="application/job/redis/job.yaml" %}} Be sure to edit the job template to change `gcr.io/myproject` to your own path. diff --git a/content/en/docs/tasks/job/indexed-parallel-processing-static.md b/content/en/docs/tasks/job/indexed-parallel-processing-static.md index 6c464506ffaf7..628b0bfff2482 100644 --- a/content/en/docs/tasks/job/indexed-parallel-processing-static.md +++ b/content/en/docs/tasks/job/indexed-parallel-processing-static.md @@ -77,7 +77,7 @@ the start of the clip. 
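Stepping back to the CronJob hunk earlier in this chunk: the referenced `cronjob.yaml` pairs a standard five-field cron schedule with a Job template. A hedged sketch of that shape (image and command are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"   # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure   # Job pods may only use OnFailure or Never
```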
Here is a sample Job manifest that uses `Indexed` completion mode:

-{{% codenew language="yaml" file="application/job/indexed-job.yaml" %}}
+{{% code language="yaml" file="application/job/indexed-job.yaml" %}}

In the example above, you use the built-in `JOB_COMPLETION_INDEX` environment
variable set by the Job controller for all containers. An [init container](/docs/concepts/workloads/pods/init-containers/)

@@ -92,7 +92,7 @@ Alternatively, you can directly [use the downward API to pass the annotation
value as a volume file](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#store-pod-fields),
as shown in the following example:

-{{% codenew language="yaml" file="application/job/indexed-job-vol.yaml" %}}
+{{% code language="yaml" file="application/job/indexed-job-vol.yaml" %}}

## Running the Job

diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md
index be93d2660c62c..7e7b59a2eadd9 100644
--- a/content/en/docs/tasks/job/parallel-processing-expansion.md
+++ b/content/en/docs/tasks/job/parallel-processing-expansion.md
@@ -43,7 +43,7 @@ pip install --user jinja2

First, download the following template of a Job to a file called `job-tmpl.yaml`.
Here's what you'll download:

-{{% codenew file="application/job/job-tmpl.yaml" %}}
+{{% code file="application/job/job-tmpl.yaml" %}}

```shell
# Use curl to download job-tmpl.yaml

diff --git a/content/en/docs/tasks/job/pod-failure-policy.md b/content/en/docs/tasks/job/pod-failure-policy.md
index 556ebf84dbe37..1fbae3716a6bc 100644
--- a/content/en/docs/tasks/job/pod-failure-policy.md
+++ b/content/en/docs/tasks/job/pod-failure-policy.md
@@ -39,7 +39,7 @@ software bug.

First, create a Job based on the config:

-{{% codenew file="/controllers/job-pod-failure-policy-failjob.yaml" %}}
+{{% code file="/controllers/job-pod-failure-policy-failjob.yaml" %}}

by running:

@@ -85,7 +85,7 @@ node while the Pod is running on it (within 90s since the Pod is scheduled).

1. Create a Job based on the config:

-   {{% codenew file="/controllers/job-pod-failure-policy-ignore.yaml" %}}
+   {{% code file="/controllers/job-pod-failure-policy-ignore.yaml" %}}

   by running:

@@ -145,7 +145,7 @@ deleted pods, in the `Pending` phase, to a terminal phase

1. First, create a Job based on the config:

-   {{% codenew file="/controllers/job-pod-failure-policy-config-issue.yaml" %}}
+   {{% code file="/controllers/job-pod-failure-policy-config-issue.yaml" %}}

   by running:

diff --git a/content/en/docs/tasks/manage-daemon/pods-some-nodes.md b/content/en/docs/tasks/manage-daemon/pods-some-nodes.md
index bfb69eb7c5c4f..a16fea20937c1 100644
--- a/content/en/docs/tasks/manage-daemon/pods-some-nodes.md
+++ b/content/en/docs/tasks/manage-daemon/pods-some-nodes.md
@@ -33,7 +33,7 @@ Let's create a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}} which

Next, use a `nodeSelector` to ensure that the DaemonSet only runs Pods on nodes
with the `ssd` label set to `"true"`.
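The manifest referenced just below pins the DaemonSet's Pods with that selector. In general, a `nodeSelector` is a plain label map under the pod template's `spec`; a minimal hedged fragment:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"   # only schedule onto nodes carrying this label
```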
-{{% codenew file="controllers/daemonset-label-selector.yaml" %}} +{{% code file="controllers/daemonset-label-selector.yaml" %}} ### Step 3: Create the DaemonSet diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index 6bb6c87ff52cd..f3c6d8be459e5 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -46,7 +46,7 @@ You may want to set This YAML file specifies a DaemonSet with an update strategy as 'RollingUpdate' -{{% codenew file="controllers/fluentd-daemonset.yaml" %}} +{{% code file="controllers/fluentd-daemonset.yaml" %}} After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: @@ -92,7 +92,7 @@ manifest accordingly. Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands. -{{% codenew file="controllers/fluentd-daemonset-update.yaml" %}} +{{% code file="controllers/fluentd-daemonset-update.yaml" %}} #### Declarative commands diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md index a9a6af905dfc8..4459ba79f35fa 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -71,7 +71,7 @@ Add the `-R` flag to recursively process directories. Here's an example of an object configuration file: -{{% codenew file="application/simple_deployment.yaml" %}} +{{% code file="application/simple_deployment.yaml" %}} Run `kubectl diff` to print the object that will be created: @@ -163,7 +163,7 @@ Add the `-R` flag to recursively process directories. Here's an example configuration file: -{{% codenew file="application/simple_deployment.yaml" %}} +{{% code file="application/simple_deployment.yaml" %}} Create the object using `kubectl apply`: @@ -281,7 +281,7 @@ spec: Update the `simple_deployment.yaml` configuration file to change the image from `nginx:1.14.2` to `nginx:1.16.1`, and delete the `minReadySeconds` field: -{{% codenew file="application/update_deployment.yaml" %}} +{{% code file="application/update_deployment.yaml" %}} Apply the changes made to the configuration file: @@ -513,7 +513,7 @@ to calculate which fields should be deleted or set: Here's an example. Suppose this is the configuration file for a Deployment object: -{{% codenew file="application/update_deployment.yaml" %}} +{{% code file="application/update_deployment.yaml" %}} Also, suppose this is the live configuration for the same Deployment object: @@ -809,7 +809,7 @@ not specified when the object is created. Here's a configuration file for a Deployment. 
The file does not specify `strategy`:

-{{% codenew file="application/simple_deployment.yaml" %}}
+{{% code file="application/simple_deployment.yaml" %}}

Create the object using `kubectl apply`:

diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
index 9e4623b6f3419..0905d1731b5ce 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
@@ -27,7 +27,7 @@ in this task demonstrate a strategic merge patch and a JSON merge patch.

Here's the configuration file for a Deployment that has two replicas. Each replica
is a Pod that has one container:

-{{% codenew file="application/deployment-patch.yaml" %}}
+{{% code file="application/deployment-patch.yaml" %}}

Create the Deployment:

@@ -288,7 +288,7 @@ patch-demo-1307768864-c86dc 1/1 Running 0 1m

Here's the configuration file for a Deployment that uses the `RollingUpdate` strategy:

-{{% codenew file="application/deployment-retainkeys.yaml" %}}
+{{% code file="application/deployment-retainkeys.yaml" %}}

Create the deployment:

@@ -439,7 +439,7 @@ examples which supports these subresources.

Here's a manifest for a Deployment that has two replicas:

-{{% codenew file="application/deployment.yaml" %}}
+{{% code file="application/deployment.yaml" %}}

Create the Deployment:

diff --git a/content/en/docs/tasks/network/customize-hosts-file-for-pods.md b/content/en/docs/tasks/network/customize-hosts-file-for-pods.md
index 0c61f705122b6..7a5685e07aaa4 100644
--- a/content/en/docs/tasks/network/customize-hosts-file-for-pods.md
+++ b/content/en/docs/tasks/network/customize-hosts-file-for-pods.md
@@ -69,7 +69,7 @@ For example: to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`,
`bar.remote` to `10.1.2.3`, you can configure HostAliases for a Pod under
`.spec.hostAliases`:

-{{% codenew file="service/networking/hostaliases-pod.yaml" %}}
+{{% code file="service/networking/hostaliases-pod.yaml" %}}

You can start a Pod with that configuration by running:

diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md
index d3db2be89fb1a..7576322bd8ad5 100644
--- a/content/en/docs/tasks/network/validate-dual-stack.md
+++ b/content/en/docs/tasks/network/validate-dual-stack.md
@@ -106,7 +106,7 @@ fe00::2 ip6-allrouters

Create the following Service that does not explicitly define `.spec.ipFamilyPolicy`. Kubernetes
will assign a cluster IP for the Service from the first configured `service-cluster-ip-range`
and set the `.spec.ipFamilyPolicy` to `SingleStack`.

-{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-default-svc.yaml" %}}

Use `kubectl` to view the YAML for the Service.

@@ -143,7 +143,7 @@ status:

Create the following Service that explicitly defines `IPv6` as the first array element in
`.spec.ipFamilies`. Kubernetes will assign a cluster IP for the Service from the IPv6 range
configured in `service-cluster-ip-range` and set the `.spec.ipFamilyPolicy` to `SingleStack`.

-{{% codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" %}}
+{{% code file="service/networking/dual-stack-ipfamilies-ipv6.yaml" %}}

Use `kubectl` to view the YAML for the Service.

@@ -181,7 +181,7 @@ status:

Create the following Service that explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`.
Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled)
and select the `.spec.clusterIP` from the list of `.spec.clusterIPs` based on the address family
of the first element in the `.spec.ipFamilies` array.

-{{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-preferred-svc.yaml" %}}

{{< note >}}
The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` field.

@@ -222,7 +222,7 @@ Events:

If the cloud provider supports the provisioning of IPv6-enabled external load balancers,
create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first
element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`.

-{{% codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" %}}

Check the Service:

diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md
index cecffb231b5c5..ea02c4a1845aa 100644
--- a/content/en/docs/tasks/run-application/configure-pdb.md
+++ b/content/en/docs/tasks/run-application/configure-pdb.md
@@ -165,11 +165,11 @@ You can find examples of pod disruption budgets defined below. They match pods w

Example PDB Using minAvailable:

-{{% codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" %}}
+{{% code file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" %}}

Example PDB Using maxUnavailable:

-{{% codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" %}}
+{{% code file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" %}}

For example, if the above `zk-pdb` object selects the pods of a StatefulSet of size 3, both
specifications have the exact same meaning. The use of `maxUnavailable` is recommended as it

diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 683f2894b3367..7a133f5171ff0 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -58,7 +58,7 @@ To demonstrate a HorizontalPodAutoscaler, you will first start a Deployment that
`hpa-example` image, and expose it as a {{< glossary_tooltip term_id="service">}}
using the following manifest:

-{{% codenew file="application/php-apache.yaml" %}}
+{{% code file="application/php-apache.yaml" %}}

To do so, run the following command:

@@ -498,7 +498,7 @@ between `1` and `1500m`, or `1` and `1.5` when written in decimal notation.

Instead of using the `kubectl autoscale` command to create a HorizontalPodAutoscaler
imperatively, we can use the following manifest to create it declaratively:

-{{% codenew file="application/hpa/php-apache.yaml" %}}
+{{% code file="application/hpa/php-apache.yaml" %}}

Then, create the autoscaler by executing the following command:

diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
index 5cd15c3754fef..6fed2b92257c0 100644
--- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
+++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -56,7 +56,7 @@ and a StatefulSet.
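Looking back at the PodDisruptionBudget hunks above: the two referenced files differ only in whether they bound voluntary disruptions from below (`minAvailable`) or from above (`maxUnavailable`). A hedged sketch of the `minAvailable` form, mirroring the `zk-pdb` example the prose describes:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2        # keep at least 2 matching pods up during voluntary disruptions
  selector:
    matchLabels:
      app: zookeeper
```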
Create the ConfigMap from the following YAML configuration file: -{{% codenew file="application/mysql/mysql-configmap.yaml" %}} +{{% code file="application/mysql/mysql-configmap.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml @@ -76,7 +76,7 @@ based on information provided by the StatefulSet controller. Create the Services from the following YAML configuration file: -{{% codenew file="application/mysql/mysql-services.yaml" %}} +{{% code file="application/mysql/mysql-services.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml @@ -103,7 +103,7 @@ writes. Finally, create the StatefulSet from the following YAML configuration file: -{{% codenew file="application/mysql/mysql-statefulset.yaml" %}} +{{% code file="application/mysql/mysql-statefulset.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml diff --git a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md index 529e2a496c84d..d55747bb4d082 100644 --- a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -39,8 +39,8 @@ Note: The password is defined in the config yaml, and this is insecure. See [Kubernetes Secrets](/docs/concepts/configuration/secret/) for a secure solution. -{{% codenew file="application/mysql/mysql-deployment.yaml" %}} -{{% codenew file="application/mysql/mysql-pv.yaml" %}} +{{% code file="application/mysql/mysql-deployment.yaml" %}} +{{% code file="application/mysql/mysql-pv.yaml" %}} 1. Deploy the PV and PVC of the YAML file: diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md index 271e5cc6edb69..0cd32071d4f7e 100644 --- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md @@ -27,7 +27,7 @@ You can run an application by creating a Kubernetes Deployment object, and you can describe a Deployment in a YAML file. For example, this YAML file describes a Deployment that runs the nginx:1.14.2 Docker image: -{{% codenew file="application/deployment.yaml" %}} +{{% code file="application/deployment.yaml" %}} 1. Create a Deployment based on the YAML file: @@ -100,7 +100,7 @@ a Deployment that runs the nginx:1.14.2 Docker image: You can update the deployment by applying a new YAML file. This YAML file specifies that the deployment should be updated to use nginx 1.16.1. -{{% codenew file="application/deployment-update.yaml" %}} +{{% code file="application/deployment-update.yaml" %}} 1. Apply the new YAML file: @@ -120,7 +120,7 @@ You can increase the number of Pods in your Deployment by applying a new YAML file. This YAML file sets `replicas` to 4, which specifies that the Deployment should have four Pods: -{{% codenew file="application/deployment-scale.yaml" %}} +{{% code file="application/deployment-scale.yaml" %}} 1. 
Apply the new YAML file: diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index 0db58381ac5ba..8c3541bfc2b65 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -236,7 +236,7 @@ This produces a certificate authority key file (`ca-key.pem`) and certificate (` ### Issue a certificate -{{% codenew file="tls/server-signing-config.json" %}} +{{% code file="tls/server-signing-config.json" %}} Use a `server-signing-config.json` signing configuration and the certificate authority key file and certificate to sign the certificate request:
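The section ends before the signing command itself appears; as a hedged sketch of what that step typically looks like with cfssl (the CSR object and output names are illustrative, and the flags assume a standard cfssl/cfssljson install):

```shell
# Pull the PEM-encoded CSR out of the Kubernetes CSR object, sign it with the
# CA generated earlier, and write the signed certificate to disk.
kubectl get csr my-svc.my-namespace -o jsonpath='{.spec.request}' | \
  base64 --decode | \
  cfssl sign -ca ca.pem -ca-key ca-key.pem -config server-signing-config.json - | \
  cfssljson -bare ca-signed-server
```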