
Self-Managed: Node Group

Create a base node group for the Yellowbrick Operator and Manager to run on. The Yellowbrick Operator then uses this node group's configuration to create additional node groups dynamically.

When using the commands or values outlined here, substitute the placeholder values as described below:

| Value | Description |
|-------|-------------|
| {cluster_endpoint} | The HTTPS endpoint of the EKS cluster, found on the console |
| {certificate_authority} | The CA data of the EKS cluster, found on the console |
| {cluster_name} | The name of the EKS cluster |
| {region} | The AWS region of the EKS cluster |
| {ami_id} | The AMI ID to be used for the nodes |
| {security_group_ids} | Comma-separated list of security group IDs to attach to the node group |
| {subnets} | Space-separated list of subnet IDs for the node group to be in |
| {node_iam_role_arn} | IAM role to attach to the nodes in the node group |

INFO

  1. The security groups used should allow connectivity to the other nodes in the cluster as well as to the control plane; otherwise nodes may fail to join. We recommend using the same security groups as the rest of the cluster.
  2. We recommend using a single subnet for the node group so that all nodes land in a single availability zone; otherwise you may incur cross-AZ data transfer costs.
  3. The node IAM role must have the AWS-recommended node policies, as well as any other policies required to run daemonsets/workloads on the existing cluster. Additionally, the role requires the AmazonEKS_CNI_Policy policy for the compute cluster nodes to function.
  4. We recommend using Amazon Linux 2. Yellowbrick also maintains node AMIs based on Amazon Linux 2; the AMI name is yb-enterprise-eks-node-1.30-v20250115-20250116232817, owned by account ID 732123782549 for commercial AWS and 335348567317 for GovCloud. The following command looks up the AMI ID for your region.
bash
aws ec2 describe-images \
  --owners 732123782549 \
  --filters "Name=name,Values=yb-enterprise-eks-node-1.30-v20250115-20250116232817" \
  --query "Images[*].ImageId" \
  --region {region}
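Since the AMI owner account differs by partition (732123782549 for commercial AWS, 335348567317 for GovCloud, per the note above), a small helper can pick the right value for the --owners flag. A minimal sketch; the function name is illustrative:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Return the Yellowbrick AMI owner account for a given region.
# Owner IDs are taken from the note above.
yb_ami_owner() {
  case "$1" in
    us-gov-*) echo "335348567317" ;;  # AWS GovCloud
    *)        echo "732123782549" ;;  # commercial AWS
  esac
}

yb_ami_owner "us-gov-west-1"   # prints 335348567317
yb_ami_owner "us-east-1"       # prints 732123782549
```

The returned value can then be passed to --owners in the describe-images call above.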

Creating Cloud Infrastructure

Node IAM Role (Optional)

The following steps are required only if you are creating a Node IAM role dedicated to the operator node group and the node groups created by the Yellowbrick Operator.

Create the IAM role:

bash
aws iam create-role \
  --role-name yb-eks-node-{instance-name}-{region} \
  --assume-role-policy-document file://trust-policy.json

The trust policy:

json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
          "sts:AssumeRole"
      ],
      "Principal": {
          "Service": [
              "ec2.amazonaws.com"
          ]
      }
    }
  ]
}
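One way to materialize the trust policy is a heredoc, with a quick syntax check (via python3 -m json.tool, assumed available here) before the file reaches create-role:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Write the trust policy shown above to trust-policy.json.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Principal": {"Service": ["ec2.amazonaws.com"]}
    }
  ]
}
EOF

# Fail fast on malformed JSON before it ever reaches the IAM API.
python3 -m json.tool trust-policy.json > /dev/null && echo "trust-policy.json OK"
```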

Attach the IAM policies:

bash
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  --role-name yb-eks-node-{instance-name}-{region}
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly \
  --role-name yb-eks-node-{instance-name}-{region}
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name yb-eks-node-{instance-name}-{region}
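The three attach-role-policy calls can be collapsed into a loop. A sketch; the AWS variable defaults to echo here so the loop dry-runs as written, and you would set AWS=aws to actually apply it:

```bash
#!/usr/bin/env bash
set -euo pipefail

AWS="${AWS:-echo}"   # defaults to dry-run; set AWS=aws to apply for real
ROLE_NAME="yb-eks-node-{instance-name}-{region}"

# The node policies listed above, including AmazonEKS_CNI_Policy.
attached=0
for policy in \
  arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly \
  arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
do
  "$AWS" iam attach-role-policy --policy-arn "$policy" --role-name "$ROLE_NAME"
  attached=$((attached+1))
done
echo "requested $attached policy attachments"
```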

Yellowbrick Operator Node group

Preparing Node User Data

INFO

This user data is specific to Amazon Linux 2-based AMIs. Adjust it if you are using a different distribution/AMI.

Create a file named user-data.yaml with the following content, substituting the parameters as described above.

yaml
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==YBBOUNDARY=="

--==YBBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

write_files:
  - path: /root/bootstrap.sh
    permissions: "0700"
    content: |
      #!/bin/bash
      set -ex

      sysctl net.ipv4.ip_forward=1
      sed -i 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g' /etc/sysctl.conf

      /etc/eks/bootstrap.sh \
        --apiserver-endpoint "{cluster_endpoint}" \
        --b64-cluster-ca "{certificate_authority}" \
        "{cluster_name}"

runcmd:
  - |
    set -x
    (
      while [ ! -f /root/bootstrap.sh ]; do
        sleep 1
      done
      if ! /root/bootstrap.sh; then
        shutdown now
      fi
    )
--==YBBOUNDARY==--

Then Base64-encode this file if you plan to embed it in the JSON for the launch template:

bash
USER_DATA_B64=$(base64 < user-data.yaml | tr -d '\n')
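A quick sanity check that this encoding pipeline round-trips cleanly (stripping newlines with tr does not corrupt the payload), shown on a small sample file:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Encode a sample payload the same way as above, then decode and compare.
printf 'MIME-Version: 1.0\nhello user data\n' > sample.txt
ENCODED=$(base64 < sample.txt | tr -d '\n')
printf '%s' "$ENCODED" | base64 -d > roundtrip.txt
cmp sample.txt roundtrip.txt && echo "round-trip OK"
```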

Preparing Launch Template

After the user data is prepared, create the launch template for the node group. This step is required.

INFO

Additional parameters, such as a key pair and tags, may be added to the launch template.

bash
aws ec2 create-launch-template \
  --launch-template-name "yb-op-standard-lt" \
  --version-description "default" \
  --region "{region}" \
  --launch-template-data "{
    \"ImageId\": \"{ami_id}\",
    \"InstanceType\": \"t3.large\",
    \"SecurityGroupIds\": {security_group_ids},
    \"MetadataOptions\": {
      \"HttpEndpoint\": \"enabled\",
      \"HttpPutResponseHopLimit\": 2,
      \"HttpTokens\": \"optional\"
    },
    \"UserData\": \"${USER_DATA_B64}\"
  }"
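Escaping the JSON inline is error-prone; an alternative sketch writes the launch-template data to a file and validates it first. The security group ID below is a placeholder example, and the user-data.yaml stub is created only so the sketch is self-contained:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder so this sketch runs standalone; in practice user-data.yaml
# already exists from the previous step.
[ -f user-data.yaml ] || printf 'MIME-Version: 1.0\n' > user-data.yaml
USER_DATA_B64=$(base64 < user-data.yaml | tr -d '\n')

# Unquoted EOF so ${USER_DATA_B64} expands; {ami_id} stays a literal placeholder.
cat > lt-data.json <<EOF
{
  "ImageId": "{ami_id}",
  "InstanceType": "t3.large",
  "SecurityGroupIds": ["sg-0123456789abcdef0"],
  "MetadataOptions": {
    "HttpEndpoint": "enabled",
    "HttpPutResponseHopLimit": 2,
    "HttpTokens": "optional"
  },
  "UserData": "${USER_DATA_B64}"
}
EOF

python3 -m json.tool lt-data.json > /dev/null && echo "lt-data.json OK"
```

The validated file can then be passed as --launch-template-data file://lt-data.json in the create-launch-template call above.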

Creating Node Group

After the launch template is created, create the node group.

INFO

The node group must be named yb-op-standard so that the Yellowbrick Operator can inherit its parameters when creating additional node groups.

Additional parameters, such as tags, may be added to the node group.

bash
aws eks create-nodegroup \
  --cluster-name "{cluster_name}" \
  --nodegroup-name "yb-op-standard" \
  --subnets {subnets} \
  --node-role "{node_iam_role_arn}" \
  --scaling-config minSize=1,maxSize=3,desiredSize=1 \
  --labels "cluster.yellowbrick.io/owned=true,cluster.yellowbrick.io/hardware_type=t3.large,cluster.yellowbrick.io/node_type=yb-op-standard" \
  --taints "key=cluster.yellowbrick.io/owned,value=true,effect=NO_SCHEDULE" \
  --launch-template name=yb-op-standard-lt,version=1 \
  --capacity-type ON_DEMAND \
  --region "{region}"
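In the command above, the hardware_type label mirrors the InstanceType set in the launch template (t3.large); deriving both the labels and taints from variables keeps them consistent. A sketch with illustrative variable names:

```bash
#!/usr/bin/env bash
set -euo pipefail

INSTANCE_TYPE="t3.large"        # must match InstanceType in the launch template
NODE_TYPE="yb-op-standard"      # must match the node group / launch template name

LABELS="cluster.yellowbrick.io/owned=true,cluster.yellowbrick.io/hardware_type=${INSTANCE_TYPE},cluster.yellowbrick.io/node_type=${NODE_TYPE}"
TAINTS="key=cluster.yellowbrick.io/owned,value=true,effect=NO_SCHEDULE"

echo "$LABELS"
# → cluster.yellowbrick.io/owned=true,cluster.yellowbrick.io/hardware_type=t3.large,cluster.yellowbrick.io/node_type=yb-op-standard
```

These values can then be passed as --labels "$LABELS" --taints "$TAINTS" in the create-nodegroup call.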