So you have a brand new shiny OpenShift cluster. Now what? It’s time for some gitops.
GitOps is a modern approach to managing infrastructure and application deployments using Git as the single source of truth. Instead of manually applying changes to a cluster or scripting deployments, GitOps treats your desired system state—like Kubernetes manifests, Helm charts, or Kustomize overlays—as version-controlled code. When a change is pushed to your Git repository, a GitOps controller running in the cluster automatically reconciles the live state to match the desired state in Git. This not only enables full auditability and rollback via Git history, but also promotes consistency, repeatability, and automation across environments, making deployments safer and more transparent.
OpenShift GitOps is Red Hat’s opinionated integration of GitOps principles into the OpenShift Container Platform, built on top of the popular Argo CD project. It enables OpenShift users to declaratively manage cluster configuration and application deployments by storing the desired state in Git repositories. At its core, OpenShift GitOps uses Argo CD—a continuous delivery tool for Kubernetes that watches Git repositories for changes and automatically applies them to the cluster. Argo CD provides visibility into the sync status of each application, supports automated drift detection and reconciliation, and integrates tightly with OpenShift’s RBAC and authentication model. This makes it easier for teams to adopt GitOps workflows in a secure, scalable, and OpenShift-native way.
Getting Started
This is the inception part of the blog post. We want our cluster’s installed components delivered via GitOps principles, but there is no GitOps tooling installed yet. At some point, an admin (or an automated process) is going to have to apply some YAML to get this thing bootstrapped. Along the way, we are also going to loop back around and register the installed components as ArgoCD Applications, so by the time we are done, the GitOps tooling will be managed using GitOps. It’ll make sense in a minute.
The first step is to install the OpenShift GitOps operator. By default, when the operator’s subscription is added to the cluster, it not only stands up the actual `openshift-gitops-operator-controller-manager` deployment, but it also adds a default ArgoCD custom resource (CR) into the `openshift-gitops` namespace and gets a “default” instance of ArgoCD up and running. This default instance is very opinionated and difficult to extend. For this reason, we are just going to go ahead and do our own thing.
Creating the Cluster GitOps Repository
Before we install anything, though, we need to create the GitOps repository. The operator install manifests are going to live in the repo, and we are going to not only use them to bootstrap, but also reference them in one of the Applications. You can find a full working example at https://github.com/stephennimmo/openshift-cluster-gitops
```
openshift-cluster-gitops
├── applications
├── cluster-gitops
├── openshift-gitops-operator
├── install-openshift-gitops.sh
└── openshift-cluster-gitops-application.yaml
```
Repository Structure Notes
- `applications` – This folder contains all the Application CRs referenced by `openshift-cluster-gitops-application.yaml`. This is the classic app-of-apps pattern for bootstrapping a cluster. This pattern is superior to the ApplicationSet pattern because once it’s bootstrapped, there is no more need to “apply” a manifest; simply drop a new Application CR into the applications folder and push. With the ApplicationSet pattern, I would have to update the CR by hand and commit the change, but still apply it to the cluster manually.
- `cluster-gitops` – This folder contains all the setup manifests for our ArgoCD-based GitOps controller, including the necessary customizations to the ArgoCD instance such as health checks, RBAC, and default projects.
- `openshift-gitops-operator` – This is a basic operator install using a Subscription. Ours has some extra configuration to accomplish two goals. First, we don’t want the default GitOps instance. Second, we want our cluster-gitops instance to be considered a “cluster-wide” instance: OpenShift GitOps instances in the identified namespaces are granted limited additional permissions to manage specific cluster-scoped resources, including platform operators, optional OLM operators, user management, etc. Here’s the Subscription CR.
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator-subscription
  namespace: openshift-gitops-operator
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: latest
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: DISABLE_DEFAULT_ARGOCD_INSTANCE
        value: "true"
      - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
        value: "cluster-gitops"
```
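A Subscription alone won’t get the operator installed: OLM also requires the target namespace to exist and an OperatorGroup to live in it, and the folder needs a `kustomization.yaml` so that `oc apply -k` can pick everything up. Here is a sketch of what the rest of the folder likely contains — the file names and sync-wave values are assumptions, and the example repo is the source of truth:

```yaml
# kustomization.yaml – ties the folder together so `oc apply -k` works
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - operatorgroup.yaml
  - subscription.yaml
---
# namespace.yaml – the Subscription's target namespace must exist first
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-gitops-operator
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# operatorgroup.yaml – OLM requires an OperatorGroup in the namespace;
# an empty spec selects the AllNamespaces install mode
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-gitops-operator
  namespace: openshift-gitops-operator
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec: {}
```

The sync waves put the Namespace before the OperatorGroup, and both before the Subscription’s wave of "2", which matters once ArgoCD takes over syncing this folder.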
- `install-openshift-gitops.sh` – This is a bash script to help ease the bootstrapping of GitOps. It does two things. First, it installs the OpenShift GitOps Operator and waits until the operator is fully installed and running. Second, it applies the manifests for our cluster-gitops instance with all its customizations and, again, waits until everything is up and running.
```bash
#!/bin/bash
set -euo pipefail

cd "$(dirname "${BASH_SOURCE[0]}")"

log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') $1 ${*:2}"
}

log "[INFO] Applying OpenShift GitOps operator subscription manifests..."
oc apply -k openshift-gitops-operator

log "[INFO] Waiting for the OpenShift GitOps Operator to be available..."
until oc wait --for=condition=Available --timeout=300s deployment/openshift-gitops-operator-controller-manager -n openshift-gitops-operator >/dev/null 2>&1; do
  log "[INFO] Waiting..."
  sleep 10
done
log "[INFO] OpenShift GitOps Operator is now available."

log "[INFO] Applying OpenShift GitOps instance cluster-gitops manifests..."
oc apply -k cluster-gitops

log "[INFO] Waiting for deployment/cluster-gitops-server to be created..."
until oc get deployment cluster-gitops-server -n cluster-gitops >/dev/null 2>&1; do
  log "[INFO] Still waiting for deployment/cluster-gitops-server to be created..."
  sleep 10
done

# Verify ArgoCD is running
log "[INFO] Waiting for deployment/cluster-gitops-server to be ready..."
oc wait --for=condition=Available --timeout=300s deployment/cluster-gitops-server -n cluster-gitops
log "[INFO] OpenShift GitOps successfully installed in cluster-gitops."

ARGOCD_ROUTE=$(oc get route cluster-gitops-server -n cluster-gitops -o jsonpath='{.spec.host}')
log "[INFO] Cluster Argo CD is accessible at: https://${ARGOCD_ROUTE}"
```
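The script leans on the same `until`/`sleep` polling pattern twice. If you end up adding more wait conditions, a small helper keeps it readable — this `wait_for` function is hypothetical (not part of the example repo) and only demonstrates the pattern:

```shell
#!/bin/bash
set -euo pipefail

# wait_for <retries> <command...> – retries the command up to <retries>
# times, sleeping 1 second between attempts; returns 0 on first success.
wait_for() {
  local retries=$1
  shift
  local i
  for ((i = 1; i <= retries; i++)); do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Demo: wait for a file that a background job creates after 2 seconds.
marker=$(mktemp -u)
( sleep 2; touch "$marker" ) &
if wait_for 10 test -f "$marker"; then
  echo "ready"
fi
rm -f "$marker"
```

In the real script, the `oc wait` and `oc get` calls would slot in as the command argument, with a longer sleep interval; this is only the pattern, not a drop-in replacement.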
- `openshift-cluster-gitops-application.yaml` – This is the root of the app-of-apps, and all it references is the applications folder described above.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openshift-cluster-gitops
  namespace: cluster-gitops
spec:
  project: cluster
  source:
    repoURL: https://github.com/<YOUR_ORGANIZATION>/openshift-cluster-gitops.git
    targetRevision: main
    path: applications
  destination:
    server: https://kubernetes.default.svc
    namespace: ''
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
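The notes above mention that the cluster-gitops folder carries the ArgoCD instance itself plus its customizations and default projects. As a rough sketch of that folder’s contents — field values like the RBAC policy are assumptions here, and the example repo is the source of truth — the two key CRs might look like this. Note that the ArgoCD CR must be named `cluster-gitops` for the `cluster-gitops-server` deployment and route used during bootstrapping to exist, and an AppProject named `cluster` must exist because every Application in this post uses `project: cluster`:

```yaml
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: cluster-gitops
  namespace: cluster-gitops
spec:
  server:
    route:
      enabled: true  # exposes the cluster-gitops-server route used by the script
  rbac:
    defaultPolicy: role:readonly          # illustrative policy, not from the repo
    policy: |
      g, cluster-admins, role:admin
    scopes: '[groups]'
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: cluster
  namespace: cluster-gitops
spec:
  sourceRepos:
    - '*'
  destinations:
    - server: '*'
      namespace: '*'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```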
Setup Instructions
Now that we have our GitOps repository structure set up and committed, our steps are as follows.
1. Log in to the cluster.

```bash
oc login --server=https://api.clustername.domain:6443 -u kubeadmin -p <password>
```

2. Run the bash script to bootstrap GitOps.

```bash
./install-openshift-gitops.sh
```
3. At this point, everything we have done has been applied locally via `oc apply` from our bastion. Before we apply the app-of-apps root application, we need to provide credentials that allow the cluster-gitops instance to access the openshift-cluster-gitops repository. In this example, I am using a personal access token from GitHub.
```bash
oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openshift-cluster-gitops-credentials
  namespace: cluster-gitops
  labels:
    argocd.argoproj.io/secret-type: repository
type: Opaque
stringData:
  url: https://github.com/<YOUR_ORGANIZATION>/openshift-cluster-gitops.git
  username: <username>
  password: <github-pat>
  name: openshift-cluster-gitops
  project: cluster
EOF
```
4. Now that everything is in place and all the credentials are set up, apply the app-of-apps root.

```bash
oc apply -f openshift-cluster-gitops-application.yaml
```
Application Branches and Sync Waves
The app-of-apps root points to the applications folder in the repository. So what’s inside that folder? More Application CRs! Because the Application CRs inside the applications folder are treated as “children” of the root application, their sync wave annotations are respected by the sync process.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openshift-gitops-operator
  namespace: cluster-gitops
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: cluster
  source:
    repoURL: https://github.com/<YOUR_ORGANIZATION>/openshift-cluster-gitops.git
    targetRevision: main
    path: openshift-gitops-operator
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-gitops-operator
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-gitops
  namespace: cluster-gitops
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: cluster
  source:
    repoURL: https://github.com/<YOUR_ORGANIZATION>/openshift-cluster-gitops.git
    targetRevision: main
    path: cluster-gitops
  destination:
    server: https://kubernetes.default.svc
    namespace: cluster-gitops
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Here’s where the inception part comes into play. These two Application CRs refer to the same folders we already applied during the bootstrapping procedure in the install-openshift-gitops.sh script. But now the ArgoCD controller is in charge of updating those resources in the cluster based on the GitOps repo. If we want to make a change to the cluster-gitops instance, such as adding a custom health check, we simply commit those changes to the cluster-gitops folder and the controller loop will apply them. It’s OpenShift GitOps managing itself.
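As a concrete example of such a change: the ArgoCD CR exposes a `resourceHealthChecks` field for custom Lua health checks (a sketch, assuming a recent OpenShift GitOps release; the health-check logic below is illustrative, not from the example repo):

```yaml
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: cluster-gitops
  namespace: cluster-gitops
spec:
  resourceHealthChecks:
    # Teach ArgoCD how to judge the health of OLM Subscriptions
    - group: operators.coreos.com
      kind: Subscription
      check: |
        health_status = {}
        health_status.status = "Progressing"
        health_status.message = "Waiting for operator install"
        if obj.status ~= nil and obj.status.state == "AtLatestKnown" then
          health_status.status = "Healthy"
          health_status.message = "Subscription is at latest known version"
        end
        return health_status
```

Commit this to the cluster-gitops folder, push, and the controller reconciles its own configuration.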
Adding More Resources
Now that the structure is set up, you can start adding more resources. In the repository, create a new folder, add a `kustomization.yaml` in the root of the folder with the resources defined in it, and then add the Application CR into the applications folder. Commit and push and away you go.
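For instance, to manage a hypothetical `cluster-monitoring` folder (the folder name, namespace, and sync wave below are made up for illustration), the Application CR dropped into the applications folder would follow the same shape as the others:

```yaml
# applications/cluster-monitoring-application.yaml (hypothetical example)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-monitoring
  namespace: cluster-gitops
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  project: cluster
  source:
    repoURL: https://github.com/<YOUR_ORGANIZATION>/openshift-cluster-gitops.git
    targetRevision: main
    path: cluster-monitoring
  destination:
    server: https://kubernetes.default.svc
    namespace: openshift-monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```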