OpenShift Installation
PREVIEW FEATURE
This is a preview feature that may have incomplete functionality.
Overview
OpenShift is a self-managed Kubernetes platform. When you install Yellowbrick on OpenShift, you manage the OpenShift cluster and its lifecycle. Yellowbrick provides an installation program that deploys all Yellowbrick components onto your existing OpenShift cluster.
This guide walks you through:
- Prerequisites
- Obtaining the Yellowbrick Installation Program
- Connecting to Your OpenShift Cluster
- Labeling Nodes
- Preparing the Installation Configuration
- Configuring TLS Certificates
- Running the Installation
- Accessing Yellowbrick
- Uninstalling Yellowbrick
Prerequisites
Before you begin, ensure you have:
OpenShift Cluster
- OpenShift version 4.14 or later
- Cluster networking configured
- A storage class for persistent volumes (see Storage Requirements)
- The OpenShift internal image registry enabled, or access to an external container registry
INFO
OpenShift clusters are Kubernetes clusters with additional components from Red Hat. Yellowbrick workloads are compatible with Kubernetes version 1.33 and earlier.
S3-Compatible Object Storage
Yellowbrick requires S3-compatible object storage for two purposes:
Database Storage: The primary storage backend for your data warehouse. Yellowbrick uses local NVMe drives for caching to enhance performance, with the object store providing data durability.
Observability Storage: Stores diagnostics and monitoring data collected from your Yellowbrick instance.
You will need:
- An S3-compatible endpoint URL
- A bucket name for observability data
- Access credentials (access key ID and secret access key)
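Before installation, it is worth confirming that the endpoint, bucket, and credentials work together. A minimal sketch (the endpoint URL and bucket name are placeholders; assumes the AWS CLI is available on your workstation) that composes the verification command for review rather than running it:

```shell
# Compose an AWS CLI command that lists the observability bucket via
# your S3-compatible endpoint. Values below are placeholders.
ENDPOINT="https://s3.us-east-1.amazonaws.com"
BUCKET="your-observability-bucket"
CHECK_CMD="aws s3 ls s3://${BUCKET} --endpoint-url ${ENDPOINT}"
# Run the printed command manually once your credentials are exported.
echo "$CHECK_CMD"
```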
Workstation Requirements
- oc CLI installed and configured
- podman or docker installed
- Network access to your OpenShift cluster API
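A quick preflight sketch to confirm these tools are on your PATH (nothing Yellowbrick-specific, plain POSIX shell):

```shell
# Report whether each required CLI is installed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

check_tool oc       # OpenShift CLI
check_tool podman   # either podman or docker is sufficient
check_tool docker
```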
Permissions
For this preview release, you must have cluster-admin privileges on the OpenShift cluster. Future releases will document a more restricted set of permissions.
Storage Requirements
Yellowbrick requires a storage class that supports ReadWriteOnce (RWO) access mode for persistent volumes. The storage class is used for:
- Rowstore data volumes
- Yellowbrick data volumes
- Operator and monitoring components
Example storage classes that are commonly used with OpenShift:
- ocs-storagecluster-cephfs (OpenShift Data Foundation)
- ocs-storagecluster-ceph-rbd (OpenShift Data Foundation)
Your storage class name may differ based on your OpenShift configuration. Verify your available storage classes:
bash
oc get storageclasses
1. Obtain the Yellowbrick Installation Program
Yellowbrick provides an installer container image that includes all installation tools and assets. The image is available from Docker Hub.
Pull the installation program image:
bash
podman pull docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a
Or using Docker:
bash
docker pull docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a
The container includes:
- The yb-install command-line tool
- Yellowbrick container images
- Helm charts for all Yellowbrick components
For details on yb-install command-line options and sub-commands, see the Deployer CLI Reference.
2. Connect to Your OpenShift Cluster
Log in to OpenShift
Use the oc CLI to authenticate to your OpenShift cluster:
bash
oc login https://api.<your-cluster-domain>:6443
You can authenticate using:
Username and password:
bash
oc login https://api.<your-cluster-domain>:6443 \
  --username <username> \
  --password <password>
Web-based authentication (if configured):
bash
oc login https://api.<your-cluster-domain>:6443 --web
Token-based authentication:
bash
oc login https://api.<your-cluster-domain>:6443 --token=<token>
Verify Your Connection
Confirm you are connected and have the required permissions:
bash
# Verify your identity
oc whoami
# Verify cluster access
oc get nodes
You should see a list of your cluster nodes.
Set Up Kubeconfig
The oc login command automatically updates your kubeconfig file. By default, this is located at ~/.kube/config.
To use a separate kubeconfig file for your OpenShift cluster:
bash
export KUBECONFIG="$HOME/.kube/config-openshift"
oc login https://api.<your-cluster-domain>:6443
Grant Registry Access (If Using Internal Registry)
If you are using the OpenShift internal image registry, grant yourself permission to view the registry route:
bash
oc adm policy add-role-to-user view <your-username> -n openshift-image-registry
3. Label Nodes
Yellowbrick uses Kubernetes node selectors to schedule workloads on appropriate nodes. You must label your OpenShift nodes before installation.
Understanding Node Selectors
Yellowbrick uses a two-dimensional labeling scheme:
- cluster.yellowbrick.io/hardware_type: Identifies the hardware generation (for example, ybd-gen1)
- cluster.yellowbrick.io/node_type: Identifies the workload type for the node
The node_type values correspond to configuration fields in your installation config:
| Node Type Label | Configuration Field | Purpose |
|---|---|---|
| yb-mgr | instanceNodeType | Runs the Yellowbrick instance (database) |
| yb-operator | Used in operator.nodeSelectors | Runs the Yellowbrick Operator and Manager |
| yb-compiler | compilerNodeType | Runs query compilation workloads |
| yb-bulksvc | bulkNodeType | Runs bulk load operations |
Label Your Nodes
Label each node according to its intended purpose. The example below assigns one workload type to each node:
bash
# Label all worker nodes with hardware type
oc label node worker1.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
oc label node worker2.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
oc label node worker3.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
oc label node worker4.example.com cluster.yellowbrick.io/hardware_type=ybd-gen1
# Label nodes for specific workload types
# Each node holds a single node_type value; to run several workload
# types on the same nodes, point the corresponding configuration
# fields at the same node_type value
oc label node worker1.example.com cluster.yellowbrick.io/node_type=yb-mgr
oc label node worker2.example.com cluster.yellowbrick.io/node_type=yb-operator
oc label node worker3.example.com cluster.yellowbrick.io/node_type=yb-compiler
oc label node worker4.example.com cluster.yellowbrick.io/node_type=yb-bulksvc
For larger deployments, you may dedicate specific nodes to each workload type for better resource isolation.
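With more than a handful of workers, generating the hardware-type commands in a loop is less error-prone than typing each one. A sketch (node names are placeholders) that prints the commands so you can review them before piping the output to sh:

```shell
# Print "oc label" commands for each worker node; review the output,
# then pipe it to `sh` to apply. Node names are placeholders.
NODES="worker1.example.com worker2.example.com worker3.example.com"
HW_TYPE="ybd-gen1"
for node in $NODES; do
  echo "oc label node $node cluster.yellowbrick.io/hardware_type=$HW_TYPE"
done
```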
Verify Node Labels
bash
oc get nodes --show-labels | grep cluster.yellowbrick.io
INFO
The node labels you apply must match the values you specify in your installation configuration. If the labels do not match, pods will remain in Pending state because they cannot be scheduled.
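To compare the applied labels against your configuration more easily, you can flatten the --show-labels output to one label per line. A plain-shell sketch; extract_yb_labels is a hypothetical helper, and the canned echo stands in for real oc output:

```shell
# Split the comma-separated label list onto one line per label and
# keep only the Yellowbrick labels.
extract_yb_labels() {
  tr ',' '\n' | grep 'cluster\.yellowbrick\.io' | sort -u
}

# In practice: oc get nodes --show-labels | extract_yb_labels
echo 'worker1 Ready worker 5d v1.29 kubernetes.io/arch=amd64,cluster.yellowbrick.io/hardware_type=ybd-gen1,cluster.yellowbrick.io/node_type=yb-mgr' \
  | extract_yb_labels
```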
4. Prepare the Installation Configuration
The installation program uses a JSON configuration file to define your Yellowbrick deployment. Create a file named install-config.json with the following structure.
Minimal Configuration Example
The following example shows the minimum required configuration for an OpenShift installation:
json
{
"contact": {
"email": "admin@example.com",
"company": "Your Company",
"country": "United States"
},
"provider": {
"type": "openshift",
"region": "your-region"
},
"kubernetes": {
"name": "your-cluster-name",
"origin": "IMPORTED"
},
"access": {
"type": "public"
},
"account": {
"instanceAccountName": "ybdadmin",
"infraAdminAccountName": "infraadmin"
},
"instance": {
"name": "myinstance",
"namespace": "yb-myinstance",
"sharedServiceType": "standard",
"serviceType": "ClusterIP",
"tlsIssuer": "your-cluster-issuer"
},
"operator": {
"namespace": "yb-myinstance",
"storageClass": "ocs-storagecluster-ceph-rbd",
"managerServiceType": "ClusterIP",
"managerTlsIssuer": "your-cluster-issuer",
"managerSigningIssuer": "your-cluster-issuer",
"monitoringConfiguration": {
"monitoringNamespace": "monitoring",
"diagnosticsFolder": "diags",
"observabilityBucketName": "your-observability-bucket",
"observabilityBucketCredentials": {
"userId": "your-access-key-id",
"password": "your-secret-access-key",
"url": "https://s3.us-east-1.amazonaws.com"
}
}
},
"dependencies": {
"deployCertManager": false
}
}
INFO
This example assumes you have cert-manager installed with a ClusterIssuer. Replace your-cluster-issuer with the name of your ClusterIssuer.
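Before running the installer, it is worth checking that the file is well-formed JSON and that the required top-level blocks are present. A self-contained sketch (assumes python3 on your workstation; it writes a throwaway demo file, so point CONFIG at your real install-config.json instead):

```shell
# Demo config; use CONFIG="install-config.json" for your real file.
CONFIG="/tmp/install-config-demo.json"
cat > "$CONFIG" <<'EOF'
{"contact": {}, "provider": {}, "kubernetes": {}, "access": {},
 "account": {}, "instance": {}, "operator": {}, "dependencies": {}}
EOF

# Valid JSON, and all required blocks present?
python3 - "$CONFIG" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
required = ["contact", "provider", "kubernetes", "access",
            "account", "instance", "operator", "dependencies"]
missing = [k for k in required if k not in cfg]
if missing:
    sys.exit("missing required blocks: %s" % ", ".join(missing))
print("config looks structurally complete")
PY
```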
Configuration Reference
The following sections describe each configuration block in detail.
contact (Required)
Contact information for the installation.
| Field | Required | Description |
|---|---|---|
| email | Yes | Contact email address |
| company | Yes | Company or organization name |
| country | Yes | Country name |
provider (Required)
Cloud provider settings.
| Field | Required | Description |
|---|---|---|
| type | Yes | Must be openshift |
| region | Yes | A logical identifier for your OpenShift environment (for example, datacenter-east) |
kubernetes (Required)
Kubernetes cluster settings.
| Field | Required | Description |
|---|---|---|
| name | Yes | A name for your Kubernetes cluster |
| origin | Yes | Must be IMPORTED for OpenShift installations |
access (Required)
Access configuration for the deployment.
| Field | Required | Description |
|---|---|---|
| type | Yes | Access type: public or private |
account (Required)
Initial administrator account settings.
| Field | Required | Description |
|---|---|---|
| instanceAccountName | Yes | Username for the Yellowbrick instance administrator |
| instanceAccountPassword | No | Password for the instance administrator. If not provided, you will be prompted during installation. |
| infraAdminAccountName | Yes | Username for the infrastructure administrator |
| infraAdminAccountPassword | No | Password for the infrastructure administrator. If not provided, you will be prompted during installation. |
instance (Required)
Yellowbrick instance settings.
| Field | Required | Description |
|---|---|---|
| name | Yes | Name of your Yellowbrick instance |
| namespace | Yes | Kubernetes namespace for the instance |
| sharedServiceType | Yes | Service type: standard |
| serviceType | Yes | Kubernetes service type: ClusterIP or LoadBalancer |
| serviceAnnotations | No | Annotations for the instance service |
| tlsIssuer | No | Name of a cert-manager ClusterIssuer (see Configuring TLS) |
| tlsSecret | No | Name of a TLS secret (see Configuring TLS) |
| instanceConfiguration | No | Hardware and resource configuration (see below) |
| computeClusterConfiguration | No | Compute cluster sizing (see below) |
Instance Configuration (instance.instanceConfiguration.standard):
| Field | Description | Example |
|---|---|---|
| bulkHardwareType | Hardware type label for bulk service nodes | ybd-gen1 |
| bulkNodeType | Node type label for bulk service nodes | yb-bulksvc |
| compilerCpuCount | CPU count for the compiler | 5 |
| compilerHardwareType | Hardware type label for compiler nodes | ybd-gen1 |
| compilerNodeType | Node type label for compiler nodes | yb-compiler |
| instanceHardwareType | Hardware type label for instance nodes | ybd-gen1 |
| instanceNodeType | Node type label for instance nodes | yb-mgr |
| limeMemory | Memory allocation for LIME | 30Gi |
| pgMemory | Memory allocation for PostgreSQL | 30Gi |
| rowstorePvcStorageClassName | Storage class for rowstore volumes | ocs-storagecluster-cephfs |
| rowstorePvcStorageSize | Size of rowstore volumes | 100Gi |
| ybdataPvcStorageClassName | Storage class for data volumes | ocs-storagecluster-cephfs |
| ybdataPvcStorageSize | Size of data volumes | 100Gi |
Compute Cluster Configuration (instance.computeClusterConfiguration):
Define compute cluster profiles as named objects. Example:
json
"computeClusterConfiguration": {
"ybd-gen1-small-v1": {
"hardwareType": "ybd-gen1",
"cpu": 25,
"memoryBytes": 100000000000,
"storageBytes": 6400000000000,
"ignoreAttachedNvme": false
}
}
operator (Required)
Yellowbrick Operator and Manager settings.
| Field | Required | Description |
|---|---|---|
| namespace | Yes | Kubernetes namespace for the operator |
| storageClass | Yes | Storage class for operator volumes |
| managerServiceType | Yes | Service type for Yellowbrick Manager: ClusterIP or LoadBalancer |
| managerServiceAnnotations | No | Annotations for the manager service |
| managerTlsIssuer | No | ClusterIssuer for Manager TLS (see Configuring TLS) |
| managerTlsSecret | No | TLS secret for Manager (see Configuring TLS) |
| managerSigningIssuer | No | ClusterIssuer for Manager JWT signing |
| managerSigningSecret | No | Secret for Manager JWT signing |
| nodeSelectors | No | Node selectors for operator pods |
| monitoringConfiguration | No | Monitoring and observability settings (see below) |
Monitoring Configuration (operator.monitoringConfiguration):
For OpenShift installations, monitoring configuration is required.
| Field | Required | Description |
|---|---|---|
| monitoringNamespace | Yes | Namespace for monitoring components |
| diagnosticsFolder | Yes | Folder name for diagnostics in the bucket |
| observabilityBucketName | Yes | S3 bucket name for observability data |
| observabilityBucketCredentials.userId | Yes | S3 access key ID |
| observabilityBucketCredentials.password | Yes | S3 secret access key |
| observabilityBucketCredentials.url | Yes | S3 endpoint URL |
| nodeSelectors | No | Node selectors for monitoring pods |
dependencies (Required)
Dependency configuration.
| Field | Required | Description |
|---|---|---|
| deployCertManager | Yes | Set to false if cert-manager is already installed (recommended), or true to have the installation program deploy it |
| timeout | No | Timeout for dependency operations |
registry (Optional)
Container registry configuration. Leave empty ({}) to use the OpenShift internal registry.
If you are using an external registry, you can specify registry settings here. The installation program will push Yellowbrick container images to this registry during installation.
Complete Configuration Example
The following example shows a complete configuration with all commonly used options:
json
{
"contact": {
"email": "admin@example.com",
"company": "Example Corp",
"country": "United States"
},
"provider": {
"type": "openshift",
"region": "datacenter-east"
},
"kubernetes": {
"name": "production-cluster",
"origin": "IMPORTED"
},
"access": {
"type": "public"
},
"account": {
"instanceAccountName": "ybdadmin",
"infraAdminAccountName": "infraadmin"
},
"instance": {
"name": "production",
"namespace": "yb-production",
"sharedServiceType": "standard",
"serviceType": "ClusterIP",
"serviceAnnotations": {},
"tlsIssuer": "yb-selfsigned",
"instanceConfiguration": {
"standard": {
"bulkHardwareType": "ybd-gen1",
"bulkNodeType": "yb-bulksvc",
"compilerCpuCount": 5,
"compilerHardwareType": "ybd-gen1",
"compilerNodeType": "yb-compiler",
"instanceHardwareType": "ybd-gen1",
"instanceNodeType": "yb-mgr",
"limeMemory": "30Gi",
"pgMemory": "30Gi",
"rowstorePvcStorageClassName": "ocs-storagecluster-cephfs",
"rowstorePvcStorageSize": "100Gi",
"ybdataPvcStorageClassName": "ocs-storagecluster-cephfs",
"ybdataPvcStorageSize": "100Gi"
}
},
"computeClusterConfiguration": {
"ybd-gen1-small-v1": {
"hardwareType": "ybd-gen1",
"cpu": 25,
"memoryBytes": 100000000000,
"storageBytes": 6400000000000,
"ignoreAttachedNvme": false
}
}
},
"operator": {
"namespace": "yb-production",
"storageClass": "ocs-storagecluster-cephfs",
"managerServiceType": "ClusterIP",
"managerServiceAnnotations": {},
"managerTlsIssuer": "yb-selfsigned",
"managerSigningIssuer": "yb-selfsigned",
"nodeSelectors": {
"cluster.yellowbrick.io/hardware_type": "ybd-gen1",
"cluster.yellowbrick.io/node_type": "yb-operator"
},
"monitoringConfiguration": {
"monitoringNamespace": "monitoring",
"diagnosticsFolder": "diags",
"observabilityBucketName": "your-observability-bucket",
"observabilityBucketCredentials": {
"userId": "your-access-key-id",
"password": "your-secret-access-key",
"url": "https://s3.us-east-1.amazonaws.com"
},
"nodeSelectors": {
"cluster.yellowbrick.io/hardware_type": "ybd-gen1",
"cluster.yellowbrick.io/node_type": "yb-operator"
}
}
},
"dependencies": {
"deployCertManager": false
}
}
5. Configure TLS Certificates
Yellowbrick requires TLS certificates for secure communication. The recommended approach is to use your existing cert-manager installation with a ClusterIssuer.
Using Your cert-manager ClusterIssuer (Recommended)
Reference your ClusterIssuer in the configuration. The issuer name must be specified in three places:
- instance.tlsIssuer - for Yellowbrick instance TLS
- operator.managerTlsIssuer - for Yellowbrick Manager HTTPS
- operator.managerSigningIssuer - for Manager JWT signing
json
{
"instance": {
"tlsIssuer": "your-cluster-issuer"
},
"operator": {
"managerTlsIssuer": "your-cluster-issuer",
"managerSigningIssuer": "your-cluster-issuer"
},
"dependencies": {
"deployCertManager": false
}
}
Replace your-cluster-issuer with the name of your ClusterIssuer.
Alternative: Let the Installation Program Deploy cert-manager
If you do not have cert-manager installed, the installation program can deploy it and create a self-signed ClusterIssuer named yb-selfsigned:
json
{
"dependencies": {
"deployCertManager": true
}
}
When deployCertManager is true, the installation program will:
- Deploy cert-manager into the operator namespace
- Create a ClusterIssuer named yb-selfsigned
- Configure all Yellowbrick components to use yb-selfsigned for certificate generation
WARNING
Do not set deployCertManager to true if cert-manager is already installed in your cluster.
INFO
Self-signed certificates are suitable for development, testing, and environments without external access. For production deployments with external access, use a ClusterIssuer configured for a trusted certificate authority.
Alternative: Provide Your Own Certificates
If you prefer not to use cert-manager, you can create TLS secrets manually and reference them in your configuration.
Required Secrets
You must create three Kubernetes TLS secrets:
| Secret | Purpose |
|---|---|
| Manager TLS | HTTPS for the Yellowbrick Manager web interface |
| Manager Signing | JWT signing for authentication |
| Instance TLS | HTTPS for Yellowbrick instance services |
Step 1: Generate Certificates
Create certificates for each component. The following example uses OpenSSL to generate self-signed certificates:
bash
# Set variables
NAMESPACE="yb-myinstance"
# Create a working directory
mkdir -p ~/yb-certs && cd ~/yb-certs
# Generate Manager TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout manager-tls.key \
-out manager-tls.crt \
-subj "/CN=yb-manager.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"
# Generate Manager signing certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout manager-signing.key \
-out manager-signing.crt \
-subj "/CN=yb-manager-signing.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"
# Generate Instance TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout instance-tls.key \
-out instance-tls.crt \
-subj "/CN=*.${NAMESPACE}.svc.cluster.local/O=Yellowbrick"
For production deployments, use certificates signed by a trusted certificate authority instead of self-signed certificates.
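After generating a certificate, you can confirm its subject and expiry before wrapping it in a secret. A sketch using only openssl (it generates a throwaway certificate in /tmp for demonstration; substitute your real manager-tls.crt or instance-tls.crt):

```shell
# Generate a throwaway cert, then inspect it the same way you would
# inspect the real certificate files.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout /tmp/demo-tls.key -out /tmp/demo-tls.crt \
  -subj "/CN=demo.example.test/O=Yellowbrick" 2>/dev/null

# Show who the certificate is for and when it expires.
openssl x509 -in /tmp/demo-tls.crt -noout -subject -enddate
```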
Step 2: Create Kubernetes Secrets
Create the TLS secrets in your instance namespace:
bash
# Create the namespace if it does not exist
oc create namespace ${NAMESPACE} --dry-run=client -o yaml | oc apply -f -
# Create Manager TLS secret
oc create secret tls yb-manager-tls \
--cert=manager-tls.crt \
--key=manager-tls.key \
-n ${NAMESPACE}
# Create Manager signing secret
oc create secret tls yb-manager-signing \
--cert=manager-signing.crt \
--key=manager-signing.key \
-n ${NAMESPACE}
# Create Instance TLS secret
oc create secret tls yb-instance-tls \
--cert=instance-tls.crt \
--key=instance-tls.key \
-n ${NAMESPACE}
Verify the secrets were created:
bash
oc get secrets -n ${NAMESPACE} | grep tls
Step 3: Update Your Configuration
Reference your secrets in the installation configuration:
json
{
"dependencies": {
"deployCertManager": false
},
"operator": {
"managerTlsSecret": "yb-manager-tls",
"managerSigningSecret": "yb-manager-signing"
},
"instance": {
"tlsSecret": "yb-instance-tls"
}
}
WARNING
Do not set both *Issuer and *Secret fields. Use one approach or the other.
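A small guard against mixing the two approaches (assumes python3 on your workstation; the demo writes an intentionally conflicting config so you can see the failure mode, so point CONFIG at your real file instead):

```shell
# Demo config that wrongly sets both fields; use your real file instead.
CONFIG="/tmp/tls-conflict-demo.json"
cat > "$CONFIG" <<'EOF'
{"instance": {"tlsIssuer": "yb-selfsigned", "tlsSecret": "yb-instance-tls"}}
EOF

python3 - "$CONFIG" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
inst = cfg.get("instance", {})
if "tlsIssuer" in inst and "tlsSecret" in inst:
    print("conflict: instance sets both tlsIssuer and tlsSecret")
else:
    print("instance TLS settings ok")
PY
```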
6. Run the Installation
With your configuration file prepared and TLS configured, you can run the installation.
Run the Installation Program
Run the installation program container, mounting your configuration file and kubeconfig:
Using Podman:
bash
podman run --rm -it \
-v "${KUBECONFIG}:/kubeconfig:ro" \
-e KUBECONFIG=/kubeconfig \
-v "$(pwd)/install-config.json:/install-config.json:ro" \
docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
yb-install install -c /install-config.json -v
Using Docker:
bash
docker run --rm -it \
-v "${KUBECONFIG}:/kubeconfig:ro" \
-e KUBECONFIG=/kubeconfig \
-v "$(pwd)/install-config.json:/install-config.json:ro" \
docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
yb-install install -c /install-config.json -v
The -v flag enables verbose output so you can monitor installation progress.
What the Installation Program Does
The installation program performs the following steps:
- Validates your configuration
- Pushes Yellowbrick container images to your registry
- Pushes Helm charts to your registry
- Installs dependencies (cert-manager, if configured)
- Deploys the Yellowbrick Operator
- Deploys monitoring components
- Creates your Yellowbrick instance
Monitor Installation Progress
You can monitor the installation in a separate terminal:
bash
# Watch operator pods
oc get pods -n <operator-namespace> -w
# Watch instance pods
oc get pods -n <instance-namespace> -w
# Check Yellowbrick custom resources
oc get ybinstances.cluster.yellowbrick.io -A
oc get ybversions.cluster.yellowbrick.io -A
7. Access Yellowbrick
After installation completes, you can access Yellowbrick Manager and connect to the database.
Accessing Yellowbrick Manager
Yellowbrick Manager is a web-based interface for managing your Yellowbrick instance.
If You Used ClusterIP
When managerServiceType is set to ClusterIP, the service is only accessible within the cluster. You have two options for external access:
Option A: Port Forwarding (for temporary access)
Forward a local port to the Manager service:
bash
oc port-forward -n <namespace> svc/yb-manager-service 8443:443
Then open https://localhost:8443 in your browser.
Option B: Create an OpenShift Route (for persistent access)
Yellowbrick Manager uses HTTPS, which is well-suited for OpenShift Routes. Create a Route to expose Yellowbrick Manager externally:
bash
cat <<EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: yb-manager
namespace: <namespace>
spec:
to:
kind: Service
name: yb-manager-service
port:
targetPort: 443
tls:
termination: edge
insecureEdgeTerminationPolicy: Redirect
EOF
Get the Route URL:
bash
oc get route -n <namespace> yb-manager -o jsonpath='{.spec.host}{"\n"}'
Open https://<route-hostname> in your browser.
If You Used LoadBalancer
When managerServiceType is set to LoadBalancer, you need a load balancer provider installed in your cluster (such as MetalLB).
Get the external IP address or hostname:
bash
oc get svc -n <namespace> yb-manager-service \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
Open https://<external-ip> in your browser.
INFO
Using LoadBalancer service type requires a load balancer provider. OpenShift does not include one by default. MetalLB is a common choice for bare metal or on-premises clusters.
Accessing the Yellowbrick Database
The Yellowbrick database uses the PostgreSQL wire protocol on port 5432. Database connections are long-lived TCP sessions that are not suitable for OpenShift Routes (which are optimized for HTTP traffic).
If You Used ClusterIP
Use port forwarding to access the database from your workstation:
bash
oc port-forward -n <namespace> svc/ybinst-<instance-name>-service 5432:5432
Connect using a PostgreSQL client:
bash
psql -h localhost -p 5432 -U ybdadmin -d yellowbrick
Or with JDBC:
jdbc:postgresql://localhost:5432/yellowbrick
INFO
Port forwarding is suitable for development, testing, and administrative access. For production workloads requiring persistent external database access, use LoadBalancer service type instead.
If You Used LoadBalancer
Get the external IP address:
bash
oc get svc -n <namespace> ybinst-<instance-name>-service \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
Connect using a PostgreSQL client:
bash
psql -h <external-ip> -p 5432 -U ybdadmin -d yellowbrick
Or with JDBC:
jdbc:postgresql://<external-ip>:5432/yellowbrick
From Within the Cluster
Applications running inside the OpenShift cluster can connect directly using the service DNS name:
bash
# From the same namespace
psql -h ybinst-<instance-name>-service -p 5432 -U ybdadmin -d yellowbrick
# From a different namespace
psql -h ybinst-<instance-name>-service.<namespace>.svc.cluster.local \
-p 5432 -U ybdadmin -d yellowbrick
8. Uninstall Yellowbrick
To uninstall Yellowbrick from your OpenShift cluster, run the installation program with the uninstall command:
bash
podman run --rm -it \
-v "${KUBECONFIG}:/kubeconfig:ro" \
-e KUBECONFIG=/kubeconfig \
docker.io/yellowbrickdata/yb-install:7.4.2-81695.8092626a \
yb-install uninstall \
--provider openshift \
--cluster <cluster-name> \
--namespace <namespace>
WARNING
Uninstalling Yellowbrick is a destructive operation that will delete all Yellowbrick data. This action cannot be undone.
You will be prompted to confirm the uninstall by typing the instance name.
Troubleshooting
View Installation Logs
If the installation fails, check the logs for more information:
bash
# Check operator logs
oc logs -n <operator-namespace> -l app=yb-operator
# Check manager logs
oc logs -n <namespace> -l app=yb-manager
Verify Pods Are Running
bash
oc get pods -n <namespace>
All pods should be in Running status.
Check Events
bash
oc get events -n <namespace> --sort-by='.lastTimestamp'
Reset Administrator Password
If you need to reset the administrator password:
bash
oc exec -it -n <namespace> ybinst-<instance-name>-0 -c ybinst-pg -- \
ybsql yellowbrick --command="ALTER ROLE ybdadmin WITH PASSWORD 'new-password';"
Appendix: RBAC Requirements
For this preview release, cluster-admin privileges are required. A future release will document a minimal RBAC configuration.
The following ClusterRole represents the approximate permissions required by the Yellowbrick installer. This is provided for reference and planning purposes:
yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: yb-install
rules:
# Core API resources
- apiGroups: [""]
resources:
- namespaces
- configmaps
- secrets
- services
- serviceaccounts
- persistentvolumeclaims
- pods
- pods/exec
- pods/log
- events
- nodes
verbs: [get, list, watch, create, update, patch, delete]
# Apps API
- apiGroups: ["apps"]
resources:
- deployments
- statefulsets
- daemonsets
verbs: [get, list, watch, create, update, patch, delete]
# RBAC API
- apiGroups: ["rbac.authorization.k8s.io"]
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs: [get, list, watch, create, update, patch, delete, bind, escalate]
# Storage API (read-only)
- apiGroups: ["storage.k8s.io"]
resources:
- storageclasses
verbs: [get, list, watch]
# API Extensions (CRDs)
- apiGroups: ["apiextensions.k8s.io"]
resources:
- customresourcedefinitions
verbs: [get, list, watch, create, update, patch, delete]
# Yellowbrick CRDs
- apiGroups: ["cluster.yellowbrick.io"]
resources:
- ybinstances
- ybinstancetasks
- ybversions
- ybnodegroups
- ybhwinstancetypes
- ybsharedservices
verbs: [get, list, watch, create, update, patch, delete]
# cert-manager
- apiGroups: ["cert-manager.io"]
resources:
- certificates
- issuers
- clusterissuers
verbs: [get, list, watch, create, update, patch, delete]
# Metrics API
- apiGroups: ["metrics.k8s.io"]
resources:
- nodes
- pods
verbs: [get, list]
# Batch API
- apiGroups: ["batch"]
resources:
- jobs
verbs: [get, list, watch, create, delete]
# OpenShift-specific: Routes
- apiGroups: ["route.openshift.io"]
resources:
- routes
verbs: [get, list, watch]
# OpenShift-specific: Image Registry
- apiGroups: ["image.openshift.io"]
resources:
- imagestreams
- imagestreamtags
verbs: [get, list, watch, create, update, patch, delete]
# OpenShift-specific: Security Context Constraints
- apiGroups: ["security.openshift.io"]
resources:
- securitycontextconstraints
verbs: [use]
resourceNames:
- nonroot-v2
To use this role:
bash
# Create the ClusterRole
oc apply -f yb-install-clusterrole.yaml
# Bind the role to your user
oc create clusterrolebinding yb-install-binding \
--clusterrole=yb-install \
--user=<your-username>