OpenShift Installation

PREVIEW FEATURE

This is a preview feature that may have incomplete functionality.

Overview

OpenShift is a self-managed Kubernetes platform. When you install Yellowbrick on OpenShift, you manage the OpenShift cluster and its lifecycle. Yellowbrick provides an installation program that deploys all Yellowbrick components onto your existing OpenShift cluster.

This guide walks you through:

  • Prerequisites
  • Obtaining the Yellowbrick Installation Program
  • Connecting to Your OpenShift Cluster
  • Labeling Nodes
  • Preparing the Installation Configuration
  • Configuring TLS Certificates
  • Running the Installation
  • Accessing Yellowbrick
  • Uninstalling Yellowbrick

Prerequisites

Before you begin, ensure you have:

OpenShift Cluster

  • OpenShift version 4.14 or later
  • Cluster networking configured
  • A storage class for persistent volumes (see Storage Requirements)
  • The OpenShift internal image registry enabled, or access to an external container registry

INFO

OpenShift clusters are Kubernetes clusters with additional components from Red Hat. Yellowbrick workloads are compatible with Kubernetes version 1.33 and earlier.
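If you want to confirm that your cluster's server version falls in this range, you can compare the minor component of the Kubernetes version reported by `oc version`. A minimal sketch, assuming a version string of the form `v1.29.6` (the helper name is illustrative, not part of any Yellowbrick tooling):

```bash
# Illustrative helper: extract the minor component of a Kubernetes
# version string such as "v1.29.6".
k8s_minor() {
  echo "${1#v}" | cut -d. -f2
}

# On a live cluster, feed in the server version from `oc version`.
# Here we check a sample value against the 1.33 ceiling:
if [ "$(k8s_minor "v1.29.6")" -le 33 ]; then
  echo "within the supported range"
fi
```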

S3-Compatible Object Storage

Yellowbrick requires S3-compatible object storage for two purposes:

  1. Database Storage: The primary storage backend for your data warehouse. Yellowbrick uses local NVMe drives for caching to enhance performance, with the object store providing data durability.

  2. Observability Storage: Stores diagnostics and monitoring data collected from your Yellowbrick instance.

You will need:

  • An S3-compatible endpoint URL
  • A bucket name for observability data
  • Access credentials (access key ID and secret access key)

Workstation Requirements

  • oc CLI installed and configured
  • podman or docker installed
  • Network access to your OpenShift cluster API

Permissions

For this preview release, you must have cluster-admin privileges on the OpenShift cluster. Future releases will document a more restricted set of permissions.


Storage Requirements

Yellowbrick requires a storage class that supports ReadWriteOnce (RWO) access mode for persistent volumes. The storage class is used for:

  • Rowstore data volumes
  • Yellowbrick data volumes
  • Operator and monitoring components

Example storage classes that are commonly used with OpenShift:

  • ocs-storagecluster-cephfs (OpenShift Data Foundation)
  • ocs-storagecluster-ceph-rbd (OpenShift Data Foundation)

Your storage class name may differ based on your OpenShift configuration. Verify your available storage classes:

bash
oc get storageclasses

1. Obtain the Yellowbrick Installation Program

Yellowbrick provides an installer container image that includes all installation tools and assets. The image is available from Docker Hub.

Pull the installation program image:

bash
podman pull docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a

Or using Docker:

bash
docker pull docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a

The container includes:

  • The yb-install command-line tool
  • Yellowbrick container images
  • Helm charts for all Yellowbrick components

For details on yb-install command-line options and sub-commands, see the Deployer CLI Reference.


2. Connect to Your OpenShift Cluster

Log in to OpenShift

Use the oc CLI to authenticate to your OpenShift cluster:

bash
oc login https://api.<your-cluster-domain>:6443

You can authenticate using:

  • Username and password:

    bash
    oc login https://api.<your-cluster-domain>:6443 \
      --username <username> \
      --password <password>
  • Web-based authentication (if configured):

    bash
    oc login https://api.<your-cluster-domain>:6443 --web
  • Token-based authentication:

    bash
    oc login https://api.<your-cluster-domain>:6443 --token=<token>

Verify Your Connection

Confirm you are connected and have the required permissions:

bash
# Verify your identity
oc whoami

# Verify cluster access
oc get nodes

You should see a list of your cluster nodes.

Set Up Kubeconfig

The oc login command automatically updates your kubeconfig file. By default, this is located at ~/.kube/config.

To use a separate kubeconfig file for your OpenShift cluster:

bash
export KUBECONFIG="$HOME/.kube/config-openshift"
oc login https://api.<your-cluster-domain>:6443

Grant Registry Access (If Using Internal Registry)

If you are using the OpenShift internal image registry, grant yourself permission to view the registry route:

bash
oc adm policy add-role-to-user view <your-username> -n openshift-image-registry

3. Label Nodes

Yellowbrick uses Kubernetes node selectors to schedule workloads on appropriate nodes. You must label your OpenShift nodes before installation.

Understanding Node Selectors

Yellowbrick uses a two-dimensional labeling scheme:

  • cluster.yellowbrick.io/hardware_type: Identifies the hardware generation (for example, ybd-gen1)
  • cluster.yellowbrick.io/node_type: Identifies the workload type for the node

The node_type values correspond to configuration fields in your installation config:

| Node Type Label | Configuration Field | Purpose |
|---|---|---|
| yb-mgr | instanceNodeType | Runs the Yellowbrick instance (database) |
| yb-operator | Used in operator.nodeSelectors | Runs the Yellowbrick Operator and Manager |
| yb-compiler | compilerNodeType | Runs query compilation workloads |
| yb-bulksvc | bulkNodeType | Runs bulk load operations |

Label Your Nodes

Label each node according to its intended purpose:

bash
# Label all worker nodes with hardware type
oc label node worker1.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
oc label node worker2.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
oc label node worker3.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1

# Label nodes for specific workload types
# Kubernetes stores one value per label key, so each node gets exactly one node_type
oc label node worker1.example.com cluster.yellowbrick.io/node_type=yb-mgr
oc label node worker2.example.com cluster.yellowbrick.io/node_type=yb-operator
oc label node worker3.example.com cluster.yellowbrick.io/node_type=yb-compiler

To run two workload types on the same nodes (for example, compiler and bulk workloads on worker3), point the corresponding configuration fields at the same label value: here, set both compilerNodeType and bulkNodeType to yb-compiler in your installation configuration. For larger deployments, you may dedicate specific nodes to each workload type for better resource isolation.

Verify Node Labels

bash
oc get nodes --show-labels | grep cluster.yellowbrick.io

INFO

The node labels you apply must match the values you specify in your installation configuration. If the labels do not match, pods will remain in Pending state because they cannot be scheduled.
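Before installing, you can double-check that every required node_type value has been applied to at least one node. A small sketch (the helper name is illustrative):

```bash
# Illustrative helper: print any required node_type values that are
# absent from the list of values passed as arguments.
missing_node_types() {
  for t in yb-mgr yb-operator yb-compiler yb-bulksvc; do
    case " $* " in
      *" $t "*) ;;              # value is present
      *) echo "missing: $t" ;;
    esac
  done
}

# On a live cluster, collect the applied values and check coverage:
# missing_node_types $(oc get nodes \
#   -o jsonpath='{.items[*].metadata.labels.cluster\.yellowbrick\.io/node_type}')
```

An empty result means all four workload types have a node to schedule on.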


4. Prepare the Installation Configuration

The installation program uses a JSON configuration file to define your Yellowbrick deployment. Create a file named install-config.json with the following structure.

Minimal Configuration Example

The following example shows the minimum required configuration for an OpenShift installation:

json
{
  "contact": {
    "email": "admin@example.com",
    "company": "Your Company",
    "country": "United States"
  },
  "provider": {
    "type": "openshift",
    "region": "your-region"
  },
  "kubernetes": {
    "name": "your-cluster-name",
    "origin": "IMPORTED"
  },
  "access": {
    "type": "public"
  },
  "account": {
    "instanceAccountName": "ybdadmin",
    "infraAdminAccountName": "infraadmin"
  },
  "instance": {
    "name": "myinstance",
    "namespace": "yb-myinstance",
    "sharedServiceType": "standard",
    "serviceType": "ClusterIP",
    "tlsIssuer": "your-cluster-issuer"
  },
  "operator": {
    "namespace": "yb-myinstance",
    "storageClass": "ocs-storagecluster-ceph-rbd",
    "managerServiceType": "ClusterIP",
    "managerTlsIssuer": "your-cluster-issuer",
    "managerSigningIssuer": "your-cluster-issuer",
    "monitoringConfiguration": {
      "monitoringNamespace": "monitoring",
      "diagnosticsFolder": "diags",
      "observabilityBucketName": "your-observability-bucket",
      "observabilityBucketCredentials": {
        "userId": "your-access-key-id",
        "password": "your-secret-access-key",
        "url": "https://s3.us-east-1.amazonaws.com"
      }
    }
  },
  "dependencies": {
    "deployCertManager": false
  }
}

INFO

This example assumes you have cert-manager installed with a ClusterIssuer. Replace your-cluster-issuer with the name of your ClusterIssuer.
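Before running the installer, a quick sanity check that all required top-level blocks are present in the file can save a failed run. A rough sketch using grep (a JSON-aware check would use a tool like jq; the function name is illustrative):

```bash
# Illustrative check: confirm each required top-level block appears
# somewhere in the configuration file.
check_config() {
  rc=0
  for key in contact provider kubernetes access account instance operator dependencies; do
    grep -q "\"$key\"" "$1" || { echo "missing block: $key"; rc=1; }
  done
  return $rc
}

# Usage: check_config install-config.json
```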

Configuration Reference

The following sections describe each configuration block in detail.

contact (Required)

Contact information for the installation.

| Field | Required | Description |
|---|---|---|
| email | Yes | Contact email address |
| company | Yes | Company or organization name |
| country | Yes | Country name |

provider (Required)

Cloud provider settings.

| Field | Required | Description |
|---|---|---|
| type | Yes | Must be openshift |
| region | Yes | A logical identifier for your OpenShift environment (for example, datacenter-east) |

kubernetes (Required)

Kubernetes cluster settings.

| Field | Required | Description |
|---|---|---|
| name | Yes | A name for your Kubernetes cluster |
| origin | Yes | Must be IMPORTED for OpenShift installations |

access (Required)

Access configuration for the deployment.

| Field | Required | Description |
|---|---|---|
| type | Yes | Access type: public or private |

account (Required)

Initial administrator account settings.

| Field | Required | Description |
|---|---|---|
| instanceAccountName | Yes | Username for the Yellowbrick instance administrator |
| instanceAccountPassword | No | Password for the instance administrator. If not provided, you will be prompted during installation. |
| infraAdminAccountName | Yes | Username for the infrastructure administrator |
| infraAdminAccountPassword | No | Password for the infrastructure administrator. If not provided, you will be prompted during installation. |

instance (Required)

Yellowbrick instance settings.

| Field | Required | Description |
|---|---|---|
| name | Yes | Name of your Yellowbrick instance |
| namespace | Yes | Kubernetes namespace for the instance |
| sharedServiceType | Yes | Service type: standard |
| serviceType | Yes | Kubernetes service type: ClusterIP or LoadBalancer |
| serviceAnnotations | No | Annotations for the instance service |
| tlsIssuer | No | Name of a cert-manager ClusterIssuer (see Configuring TLS) |
| tlsSecret | No | Name of a TLS secret (see Configuring TLS) |
| instanceConfiguration | No | Hardware and resource configuration (see below) |
| computeClusterConfiguration | No | Compute cluster sizing (see below) |

Instance Configuration (instance.instanceConfiguration.standard):

| Field | Description | Example |
|---|---|---|
| bulkHardwareType | Hardware type label for bulk service nodes | ybd-gen1 |
| bulkNodeType | Node type label for bulk service nodes | yb-bulksvc |
| compilerCpuCount | CPU count for the compiler | 5 |
| compilerHardwareType | Hardware type label for compiler nodes | ybd-gen1 |
| compilerNodeType | Node type label for compiler nodes | yb-compiler |
| instanceHardwareType | Hardware type label for instance nodes | ybd-gen1 |
| instanceNodeType | Node type label for instance nodes | yb-mgr |
| limeMemory | Memory allocation for LIME | 30Gi |
| pgMemory | Memory allocation for PostgreSQL | 30Gi |
| rowstorePvcStorageClassName | Storage class for rowstore volumes | ocs-storagecluster-cephfs |
| rowstorePvcStorageSize | Size of rowstore volumes | 100Gi |
| ybdataPvcStorageClassName | Storage class for data volumes | ocs-storagecluster-cephfs |
| ybdataPvcStorageSize | Size of data volumes | 100Gi |

Compute Cluster Configuration (instance.computeClusterConfiguration):

Define compute cluster profiles as named objects. Example:

json
"computeClusterConfiguration": {
  "ybd-gen1-small-v1": {
    "hardwareType": "ybd-gen1",
    "cpu": 25,
    "memoryBytes": 100000000000,
    "storageBytes": 6400000000000,
    "ignoreAttachedNvme": false
  }
}
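The memoryBytes and storageBytes fields take raw byte counts, which are easy to misread. A quick conversion to whole GiB helps confirm the sizing (the helper name is illustrative):

```bash
# Convert a raw byte count to whole GiB (integer division).
to_gib() {
  echo $(( $1 / 1024 / 1024 / 1024 ))
}

to_gib 100000000000    # memoryBytes above -> 93 GiB
to_gib 6400000000000   # storageBytes above -> 5960 GiB (about 5.8 TiB)
```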

operator (Required)

Yellowbrick Operator and Manager settings.

| Field | Required | Description |
|---|---|---|
| namespace | Yes | Kubernetes namespace for the operator |
| storageClass | Yes | Storage class for operator volumes |
| managerServiceType | Yes | Service type for Yellowbrick Manager: ClusterIP or LoadBalancer |
| managerServiceAnnotations | No | Annotations for the manager service |
| managerTlsIssuer | No | ClusterIssuer for Manager TLS (see Configuring TLS) |
| managerTlsSecret | No | TLS secret for Manager (see Configuring TLS) |
| managerSigningIssuer | No | ClusterIssuer for Manager JWT signing |
| managerSigningSecret | No | Secret for Manager JWT signing |
| nodeSelectors | No | Node selectors for operator pods |
| monitoringConfiguration | No | Monitoring and observability settings (see below) |

Monitoring Configuration (operator.monitoringConfiguration):

For OpenShift installations, monitoring configuration is required.

| Field | Required | Description |
|---|---|---|
| monitoringNamespace | Yes | Namespace for monitoring components |
| diagnosticsFolder | Yes | Folder name for diagnostics in the bucket |
| observabilityBucketName | Yes | S3 bucket name for observability data |
| observabilityBucketCredentials.userId | Yes | S3 access key ID |
| observabilityBucketCredentials.password | Yes | S3 secret access key |
| observabilityBucketCredentials.url | Yes | S3 endpoint URL |
| nodeSelectors | No | Node selectors for monitoring pods |

dependencies (Required)

Dependency configuration.

| Field | Required | Description |
|---|---|---|
| deployCertManager | Yes | Set to false if cert-manager is already installed (recommended), or true to have the installation program deploy it |
| timeout | No | Timeout for dependency operations |

registry (Optional)

Container registry configuration. Leave empty ({}) to use the OpenShift internal registry.

If you are using an external registry, you can specify registry settings here. The installation program will push Yellowbrick container images to this registry during installation.

Complete Configuration Example

The following example shows a complete configuration with all commonly used options:

json
{
  "contact": {
    "email": "admin@example.com",
    "company": "Example Corp",
    "country": "United States"
  },
  "provider": {
    "type": "openshift",
    "region": "datacenter-east"
  },
  "kubernetes": {
    "name": "production-cluster",
    "origin": "IMPORTED"
  },
  "access": {
    "type": "public"
  },
  "account": {
    "instanceAccountName": "ybdadmin",
    "infraAdminAccountName": "infraadmin"
  },
  "instance": {
    "name": "production",
    "namespace": "yb-production",
    "sharedServiceType": "standard",
    "serviceType": "ClusterIP",
    "serviceAnnotations": {},
    "tlsIssuer": "yb-selfsigned",
    "instanceConfiguration": {
      "standard": {
        "bulkHardwareType": "ybd-gen1",
        "bulkNodeType": "yb-bulksvc",
        "compilerCpuCount": 5,
        "compilerHardwareType": "ybd-gen1",
        "compilerNodeType": "yb-compiler",
        "instanceHardwareType": "ybd-gen1",
        "instanceNodeType": "yb-mgr",
        "limeMemory": "30Gi",
        "pgMemory": "30Gi",
        "rowstorePvcStorageClassName": "ocs-storagecluster-cephfs",
        "rowstorePvcStorageSize": "100Gi",
        "ybdataPvcStorageClassName": "ocs-storagecluster-cephfs",
        "ybdataPvcStorageSize": "100Gi"
      }
    },
    "computeClusterConfiguration": {
      "ybd-gen1-small-v1": {
        "hardwareType": "ybd-gen1",
        "cpu": 25,
        "memoryBytes": 100000000000,
        "storageBytes": 6400000000000,
        "ignoreAttachedNvme": false
      }
    }
  },
  "operator": {
    "namespace": "yb-production",
    "storageClass": "ocs-storagecluster-cephfs",
    "managerServiceType": "ClusterIP",
    "managerServiceAnnotations": {},
    "managerTlsIssuer": "yb-selfsigned",
    "managerSigningIssuer": "yb-selfsigned",
    "nodeSelectors": {
      "cluster.yellowbrick.io/hardware_type": "ybd-gen1",
      "cluster.yellowbrick.io/node_type": "yb-operator"
    },
    "monitoringConfiguration": {
      "monitoringNamespace": "monitoring",
      "diagnosticsFolder": "diags",
      "observabilityBucketName": "your-observability-bucket",
      "observabilityBucketCredentials": {
        "userId": "your-access-key-id",
        "password": "your-secret-access-key",
        "url": "https://s3.us-east-1.amazonaws.com"
      },
      "nodeSelectors": {
        "cluster.yellowbrick.io/hardware_type": "ybd-gen1",
        "cluster.yellowbrick.io/node_type": "yb-operator"
      }
    }
  },
  "dependencies": {
    "deployCertManager": false
  }
}

5. Configure TLS Certificates

Yellowbrick requires TLS certificates for secure communication. The recommended approach is to use your existing cert-manager installation with a ClusterIssuer.

Reference your ClusterIssuer in the configuration. The issuer name must be specified in three places:

  • instance.tlsIssuer - for Yellowbrick instance TLS
  • operator.managerTlsIssuer - for Yellowbrick Manager HTTPS
  • operator.managerSigningIssuer - for Manager JWT signing
json
{
  "instance": {
    "tlsIssuer": "your-cluster-issuer"
  },
  "operator": {
    "managerTlsIssuer": "your-cluster-issuer",
    "managerSigningIssuer": "your-cluster-issuer"
  },
  "dependencies": {
    "deployCertManager": false
  }
}

Replace your-cluster-issuer with the name of your ClusterIssuer.

Alternative: Let the Installation Program Deploy cert-manager

If you do not have cert-manager installed, the installation program can deploy it and create a self-signed ClusterIssuer named yb-selfsigned:

json
{
  "dependencies": {
    "deployCertManager": true
  }
}

When deployCertManager is true, the installation program will:

  • Deploy cert-manager into the operator namespace
  • Create a ClusterIssuer named yb-selfsigned
  • Configure all Yellowbrick components to use yb-selfsigned for certificate generation

WARNING

Do not set deployCertManager to true if cert-manager is already installed in your cluster.

INFO

Self-signed certificates are suitable for development, testing, and environments without external access. For production deployments with external access, use a ClusterIssuer configured for a trusted certificate authority.

Alternative: Provide Your Own Certificates

If you prefer not to use cert-manager, you can create TLS secrets manually and reference them in your configuration.

Required Secrets

You must create three Kubernetes TLS secrets:

| Secret | Purpose |
|---|---|
| Manager TLS | HTTPS for the Yellowbrick Manager web interface |
| Manager Signing | JWT signing for authentication |
| Instance TLS | HTTPS for Yellowbrick instance services |

Step 1: Generate Certificates

Create certificates for each component. The following example uses OpenSSL to generate self-signed certificates:

bash
# Set variables
NAMESPACE="yb-myinstance"

# Create a working directory
mkdir -p ~/yb-certs && cd ~/yb-certs

# Generate Manager TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout manager-tls.key \
  -out manager-tls.crt \
  -subj "/CN=yb-manager.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"

# Generate Manager signing certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout manager-signing.key \
  -out manager-signing.crt \
  -subj "/CN=yb-manager-signing.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"

# Generate Instance TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout instance-tls.key \
  -out instance-tls.crt \
  -subj "/CN=*.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"

For production deployments, use certificates signed by a trusted certificate authority instead of self-signed certificates.
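Before creating the secrets, it can be worth confirming that each certificate actually matches its private key. One common check compares the RSA modulus of the two files (the function name is illustrative; this applies to the RSA key pairs generated above):

```bash
# Succeeds when the certificate and RSA private key share the same modulus.
check_pair() {
  cert_mod=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
  [ "$cert_mod" = "$key_mod" ]
}

# Usage: check_pair manager-tls.crt manager-tls.key && echo "manager-tls: match"
```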

Step 2: Create Kubernetes Secrets

Create the TLS secrets in your instance namespace:

bash
# Create the namespace if it does not exist
oc create namespace ${NAMESPACE} --dry-run=client -o yaml | oc apply -f -

# Create Manager TLS secret
oc create secret tls yb-manager-tls \
  --cert=manager-tls.crt \
  --key=manager-tls.key \
  -n ${NAMESPACE}

# Create Manager signing secret
oc create secret tls yb-manager-signing \
  --cert=manager-signing.crt \
  --key=manager-signing.key \
  -n ${NAMESPACE}

# Create Instance TLS secret
oc create secret tls yb-instance-tls \
  --cert=instance-tls.crt \
  --key=instance-tls.key \
  -n ${NAMESPACE}

Verify the secrets were created:

bash
oc get secrets -n ${NAMESPACE} | grep tls

Step 3: Update Your Configuration

Reference your secrets in the installation configuration:

json
{
  "dependencies": {
    "deployCertManager": false
  },
  "operator": {
    "managerTlsSecret": "yb-manager-tls",
    "managerSigningSecret": "yb-manager-signing"
  },
  "instance": {
    "tlsSecret": "yb-instance-tls"
  }
}

WARNING

Do not set both *Issuer and *Secret fields. Use one approach or the other.


6. Run the Installation

With your configuration file prepared and TLS configured, you can run the installation.

Run the Installation Program

Run the installation program container, mounting your configuration file and kubeconfig:

Using Podman:

bash
podman run --rm -it \
  -v "${KUBECONFIG}:/kubeconfig:ro" \
  -e KUBECONFIG=/kubeconfig \
  -v "$(pwd)/install-config.json:/install-config.json:ro" \
  docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
  yb-install install -c /install-config.json -v

Using Docker:

bash
docker run --rm -it \
  -v "${KUBECONFIG}:/kubeconfig:ro" \
  -e KUBECONFIG=/kubeconfig \
  -v "$(pwd)/install-config.json:/install-config.json:ro" \
  docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
  yb-install install -c /install-config.json -v

The -v flag enables verbose output so you can monitor installation progress.

What the Installation Program Does

The installation program performs the following steps:

  1. Validates your configuration
  2. Pushes Yellowbrick container images to your registry
  3. Pushes Helm charts to your registry
  4. Installs dependencies (cert-manager, if configured)
  5. Deploys the Yellowbrick Operator
  6. Deploys monitoring components
  7. Creates your Yellowbrick instance

Monitor Installation Progress

You can monitor the installation in a separate terminal:

bash
# Watch operator pods
oc get pods -n <operator-namespace> -w

# Watch instance pods
oc get pods -n <instance-namespace> -w

# Check Yellowbrick custom resources
oc get ybinstances.cluster.yellowbrick.io -A
oc get ybversions.cluster.yellowbrick.io -A

7. Access Yellowbrick

After installation completes, you can access Yellowbrick Manager and connect to the database.

Accessing Yellowbrick Manager

Yellowbrick Manager is a web-based interface for managing your Yellowbrick instance.

If You Used ClusterIP

When managerServiceType is set to ClusterIP, the service is only accessible within the cluster. You have two options for external access:

Option A: Port Forwarding (for temporary access)

Forward a local port to the Manager service:

bash
oc port-forward -n <namespace> svc/yb-manager-service 8443:443

Then open https://localhost:8443 in your browser.

Option B: Create an OpenShift Route (for persistent access)

Yellowbrick Manager uses HTTPS, which is well-suited for OpenShift Routes. Create a Route to expose Yellowbrick Manager externally:

bash
cat <<EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: yb-manager
  namespace: <namespace>
spec:
  to:
    kind: Service
    name: yb-manager-service
  port:
    targetPort: 443
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
EOF

Get the Route URL:

bash
oc get route -n <namespace> yb-manager -o jsonpath='{.spec.host}{"\n"}'

Open https://<route-hostname> in your browser.

If You Used LoadBalancer

When managerServiceType is set to LoadBalancer, you need a load balancer provider installed in your cluster (such as MetalLB).

Get the external IP address or hostname:

bash
oc get svc -n <namespace> yb-manager-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'

Open https://<external-ip> in your browser.

INFO

Using LoadBalancer service type requires a load balancer provider. OpenShift does not include one by default. MetalLB is a common choice for bare metal or on-premises clusters.

Accessing the Yellowbrick Database

The Yellowbrick database uses the PostgreSQL wire protocol on port 5432. Database connections are long-lived TCP sessions that are not suitable for OpenShift Routes (which are optimized for HTTP traffic).

If You Used ClusterIP

Use port forwarding to access the database from your workstation:

bash
oc port-forward -n <namespace> svc/ybinst-<instance-name>-service 5432:5432

Connect using a PostgreSQL client:

bash
psql -h localhost -p 5432 -U ybdadmin -d yellowbrick

Or with JDBC:

jdbc:postgresql://localhost:5432/yellowbrick

INFO

Port forwarding is suitable for development, testing, and administrative access. For production workloads requiring persistent external database access, use LoadBalancer service type instead.

If You Used LoadBalancer

Get the external IP address:

bash
oc get svc -n <namespace> ybinst-<instance-name>-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'

Connect using a PostgreSQL client:

bash
psql -h <external-ip> -p 5432 -U ybdadmin -d yellowbrick

Or with JDBC:

jdbc:postgresql://<external-ip>:5432/yellowbrick

From Within the Cluster

Applications running inside the OpenShift cluster can connect directly using the service DNS name:

bash
# From the same namespace
psql -h ybinst-<instance-name>-service -p 5432 -U ybdadmin -d yellowbrick

# From a different namespace
psql -h ybinst-<instance-name>-service.<namespace>.svc.cluster.local \
  -p 5432 -U ybdadmin -d yellowbrick

8. Uninstall Yellowbrick

To uninstall Yellowbrick from your OpenShift cluster, run the installation program with the uninstall command:

bash
podman run --rm -it \
  -v "${KUBECONFIG}:/kubeconfig:ro" \
  -e KUBECONFIG=/kubeconfig \
  docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
  yb-install uninstall \
    --provider openshift \
    --cluster <cluster-name> \
    --namespace <namespace>

WARNING

Uninstalling Yellowbrick is a destructive operation that will delete all Yellowbrick data. This action cannot be undone.

You will be prompted to confirm the uninstall by typing the instance name.


Troubleshooting

View Installation Logs

If the installation fails, check the logs for more information:

bash
# Check operator logs
oc logs -n <operator-namespace> -l app=yb-operator

# Check manager logs
oc logs -n <namespace> -l app=yb-manager

Verify Pods Are Running

bash
oc get pods -n <namespace>

All pods should be in Running status.

Check Events

bash
oc get events -n <namespace> --sort-by='.lastTimestamp'

Reset Administrator Password

If you need to reset the administrator password:

bash
oc exec -it -n <namespace> ybinst-<instance-name>-0 -c ybinst-pg -- \
  ybsql yellowbrick --command="ALTER ROLE ybdadmin WITH PASSWORD 'new-password';"

Appendix: RBAC Requirements

For this preview release, cluster-admin privileges are required. A future release will document a minimal RBAC configuration.

The following ClusterRole represents the approximate permissions required by the Yellowbrick installer. This is provided for reference and planning purposes:

yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: yb-install
rules:
  # Core API resources
  - apiGroups: [""]
    resources:
      - namespaces
      - configmaps
      - secrets
      - services
      - serviceaccounts
      - persistentvolumeclaims
      - pods
      - pods/exec
      - pods/log
      - events
      - nodes
    verbs: [get, list, watch, create, update, patch, delete]

  # Apps API
  - apiGroups: ["apps"]
    resources:
      - deployments
      - statefulsets
      - daemonsets
    verbs: [get, list, watch, create, update, patch, delete]

  # RBAC API
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - roles
      - rolebindings
      - clusterroles
      - clusterrolebindings
    verbs: [get, list, watch, create, update, patch, delete, bind, escalate]

  # Storage API (read-only)
  - apiGroups: ["storage.k8s.io"]
    resources:
      - storageclasses
    verbs: [get, list, watch]

  # API Extensions (CRDs)
  - apiGroups: ["apiextensions.k8s.io"]
    resources:
      - customresourcedefinitions
    verbs: [get, list, watch, create, update, patch, delete]

  # Yellowbrick CRDs
  - apiGroups: ["cluster.yellowbrick.io"]
    resources:
      - ybinstances
      - ybinstancetasks
      - ybversions
      - ybnodegroups
      - ybhwinstancetypes
      - ybsharedservices
    verbs: [get, list, watch, create, update, patch, delete]

  # cert-manager
  - apiGroups: ["cert-manager.io"]
    resources:
      - certificates
      - issuers
      - clusterissuers
    verbs: [get, list, watch, create, update, patch, delete]

  # Metrics API
  - apiGroups: ["metrics.k8s.io"]
    resources:
      - nodes
      - pods
    verbs: [get, list]

  # Batch API
  - apiGroups: ["batch"]
    resources:
      - jobs
    verbs: [get, list, watch, create, delete]

  # OpenShift-specific: Routes
  - apiGroups: ["route.openshift.io"]
    resources:
      - routes
    verbs: [get, list, watch]

  # OpenShift-specific: Image Registry
  - apiGroups: ["image.openshift.io"]
    resources:
      - imagestreams
      - imagestreamtags
    verbs: [get, list, watch, create, update, patch, delete]

  # OpenShift-specific: Security Context Constraints
  - apiGroups: ["security.openshift.io"]
    resources:
      - securitycontextconstraints
    verbs: [use]
    resourceNames:
      - nonroot-v2

To use this role:

bash
# Create the ClusterRole
oc apply -f yb-install-clusterrole.yaml

# Bind the role to your user
oc create clusterrolebinding yb-install-binding \
  --clusterrole=yb-install \
  --user=<your-username>