You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Operator SDK CLI installed on a development workstation
OpenShift CLI (oc) v4.15+ installed
Operator project initialized by using the Operator SDK
If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
The Dockerfile generated by the SDK for the Operator explicitly references |
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
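For example, with a hypothetical image hosted on Quay.io, the two commands might look like the following. The registry, namespace, image name, and tag are placeholder values; substitute your own:
$ make docker-build IMG=quay.io/example/memcached-operator:v0.0.1
$ make docker-push IMG=quay.io/example/memcached-operator:v0.0.1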
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
A bundle metadata directory named bundle/metadata
All custom resource definitions (CRDs) in a config/crd directory
A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
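If you edit the generated manifests afterward, you can re-run validation on its own by pointing the validate subcommand at the generated bundle directory. The ./bundle path shown here is the default location created by the make bundle command:
$ operator-sdk bundle validate ./bundle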
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which can reference one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
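As a concrete illustration with hypothetical values, the bundle image build and push might look like the following; replace the pull spec with your own registry and namespace:
$ make bundle-build BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
$ docker push quay.io/example/memcached-operator-bundle:v0.0.1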
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Operator SDK CLI installed on a development workstation
Operator bundle image built and pushed to a registry
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.15)
Logged in to the cluster with oc using an account with cluster-admin permissions
If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \ (1)
    -n <namespace> \ (2)
    <registry>/<user>/<bundle_image_name>:<tag> (3)
(1) The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
(2) Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
(3) If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
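For example, assuming a hypothetical bundle image and a namespace named memcached-operator-system, the invocation might look like this:
$ operator-sdk run bundle -n memcached-operator-system quay.io/example/memcached-operator-bundle:v0.0.1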
As of OpenShift Container Platform 4.11, the |
This command performs the following actions:
Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
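To inspect what the command created, you can list the generated resources in the target namespace. This is only a quick sketch using standard oc queries; the exact object names depend on your Operator:
$ oc get catalogsource,operatorgroup,subscription,installplan,csv -n <namespace>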
File-based catalogs in Operator Framework packaging format
File-based catalogs in Managing custom catalogs
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the |
Operator SDK CLI installed on a development workstation
Operator bundle image built and pushed to a registry
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.15)
Logged in to the cluster with oc using an account with cluster-admin permissions
Run the following make command in your Operator project directory to build an index image containing your Operator bundle:
$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Push the built index image to a repository:
$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
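For example, with a hypothetical index image that matches the CatalogSource example later in this procedure, the two commands might look like the following:
$ make catalog-build CATALOG_IMG=quay.io/example/memcached-catalog:v0.0.1
$ make catalog-push CATALOG_IMG=quay.io/example/memcached-catalog:v0.0.1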
You can use Operator SDK
Alternatively, you can set the
You can then use the following syntax to build and push images with automatically-generated names, such as
Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console:
CatalogSource YAML
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached
  namespace: <operator_namespace>
spec:
  displayName: My Test
  publisher: Company
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> (1)
  image: quay.io/example/memcached-catalog:v0.0.1 (2)
  updateStrategy:
    registryPoll:
      interval: 10m
(1) Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
(2) Set image to the image pull spec you used previously with the CATALOG_IMG argument.
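For example, if you save the object to a file, you can create it with a single oc apply command; the file name here is arbitrary:
$ oc apply -f catalogsource.yaml
After the catalog source is created, OLM starts a catalog pod for it in the namespace, which you can see in the pod listing later in this procedure.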
Check the catalog source:
$ oc get catalogsource
NAME DISPLAY TYPE PUBLISHER AGE
cs-memcached My Test grpc Company 4h31m
Install the Operator using your catalog:
Define an OperatorGroup object and create it by using the oc apply command or web console:
OperatorGroup YAML
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test
  namespace: <operator_namespace>
spec:
  targetNamespaces:
  - <operator_namespace>
Define a Subscription object and create it by using the oc apply command or web console:
Subscription YAML
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: catalogtest
  namespace: <catalog_namespace>
spec:
  channel: "alpha"
  installPlanApproval: Manual
  name: catalog
  source: cs-memcached
  sourceNamespace: <operator_namespace>
  startingCSV: memcached-operator.v0.0.1
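Because this example sets installPlanApproval: Manual, the resulting install plan must be approved before the Operator installs. The following is a minimal sketch: create the Subscription from a file, find the generated install plan, and patch it as approved. The file name and the <install_plan_name> value are placeholders:
$ oc apply -f subscription.yaml
$ oc get installplan -n <operator_namespace>
$ oc patch installplan <install_plan_name> -n <operator_namespace> --type merge -p '{"spec":{"approved":true}}'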
Verify the installed Operator is running:
Check the Operator group:
$ oc get og
NAME AGE
my-test 4h40m
Check the cluster service version (CSV):
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
memcached-operator.v0.0.1 Test 0.0.1 Succeeded
Check the pods for the Operator:
$ oc get pods
NAME READY STATUS RESTARTS AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m
catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m
cs-memcached-7622r 1/1 Running 0 4h33m
See Managing custom catalogs for details on using the opm CLI directly for more advanced use cases.
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.
Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
A bundle image that represents a later version of the installed Operator
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.
If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the |
For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:
$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1
INFO[0006] Creating a File-Based Catalog of the bundle "quay.io/demo/memcached-operator:v0.0.1"
INFO[0008] Generated a valid File-Based Catalog
INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v0-0-1
INFO[0012] Created CatalogSource: memcached-operator-catalog
INFO[0012] OperatorGroup "operator-sdk-og" created
INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
INFO[0026] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Pending
INFO[0028] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Installing
INFO[0059] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded
INFO[0059] OLM has successfully installed "memcached-operator.v0.0.1"
Upgrade the installed Operator by specifying the bundle image for the later Operator version:
$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2
INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
INFO[0008] Generated a valid Upgraded File-Based Catalog
INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2
INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Pending
INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"
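Optionally, confirm the new version from the CLI. For the example namespace used in this output, listing the CSVs should show memcached-operator.v0.0.2 in the Succeeded phase:
$ oc get csv -n my-project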
Clean up the installed Operators:
$ operator-sdk cleanup memcached-operator
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed. As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts for your Operators to find out whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
An existing Operator project
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project’s cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:
You must use |
olm.maxOpenShiftVersion annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' (1)
(1) Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OpenShift Container Platform versions later than 4.9 when this bundle is installed on a cluster.
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".
Set the com.redhat.openshift.versions annotation in your project’s bundle/metadata/annotations.yaml file:
bundle/metadata/annotations.yaml file with compatible versions
com.redhat.openshift.versions: "v4.7-v4.9" (1)
(1) Set to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator’s bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:
bundle.Dockerfile with compatible versions
LABEL com.redhat.openshift.versions="<versions>" (1)
(1) Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version which is outside of the range.
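For reference, in a bundle.Dockerfile generated by the Operator SDK the compatibility label sits alongside the standard bundle labels. The following is a trimmed sketch only; the package name and version range are placeholders, and the full set of labels in your file depends on your project:
LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.package.v1=memcached-operator
LABEL com.redhat.openshift.versions="v4.7-v4.9"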
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
Managing OpenShift Versions in the Certified Operator Build Guide
See Operator Framework packaging format for details on the bundle format.
See Managing custom catalogs for details on adding bundle images to index images by using the opm command.
See Operator Lifecycle Manager workflow for details on how upgrades work for installed Operators.