Yellowbrick Custom AMI on AWS
Introduction
Yellowbrick runs on node groups that create EC2 instances from an Amazon Machine Image (AMI). A default installation uses the recommended Amazon EKS optimized Amazon Linux AMI built on Amazon Linux 2023 (AL2023). Optionally, you can provide a custom AMI to use instead.
When using a custom AMI, please note that Yellowbrick provisions user data as part of its configuration. To ensure compatibility, any custom AMI that does not use Amazon Linux 2023 must include `nodeadm` as part of its bootstrap process. For more information on using `nodeadm` or building a custom Amazon Linux AMI, see the AWS documentation.
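On AL2023, `nodeadm` reads a `NodeConfig` object from the instance user data to join the node to the EKS cluster. As a rough illustration only (the cluster name, endpoint, certificate, and CIDR below are placeholders, not values from this document), a minimal `NodeConfig` looks like:

```yaml
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-eks-cluster                 # placeholder cluster name
    apiServerEndpoint: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
    certificateAuthority: <base64-encoded CA bundle>
    cidr: 10.100.0.0/16                  # cluster service CIDR
```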
Overview
The configuration of node groups is modeled by the YBNodeGroup Kubernetes Custom Resource. Each YBNodeGroup represents an AWS Nodegroup running on EKS. Updating the AMI used by a YBNodeGroup will create a new version in the launch template of the underlying AWS Nodegroup. Configuring a custom AMI is done by editing the YBNodeGroup Kubernetes Custom Resource and setting the `customImage` field to the target AMI name and owner.
Please note that changing the AMI will result in a rollout of new nodes in the EKS cluster. The Yellowbrick instance running on these node groups must be suspended before performing an AMI update to prevent disruption to the system and its users.
A custom AMI may be configured for any of the underlying node groups of Yellowbrick:
- yb-bulk
- yb-mgr
- yb-mon
- yb-op
- yb-worker
Once node group configuration is customized, it will remain customized for that specific node group. Users will be responsible for managing future AMI updates.
Customizing AMI
The process outlined in this document customizes the AMI for the `small-v2` compute cluster node group of the Yellowbrick instance `demo`. These steps should be repeated for any additional node groups where customization is desired.
Listing YBNodeGroups
First, list the current YBNodeGroups in the instance namespace. For a default install where the instance name given was `demo`, the instance namespace will be `yb-demo`:
```sh
$ kubectl get ybnodegroup -n yb-demo
NAME                 HARDWARE TYPE   IN USE   STATE         ERROR
large-v1             i3en.metal      false    Unused
large-v2             i4i.32xlarge    false    Unused
small-v1             m5dn.4xlarge    false    Unused
small-v2             i4i.4xlarge     true     Provisioned
yb-bulk-scaled       c5n.9xlarge     false    Unused
yb-bulk-standard     m5.2xlarge      true     Provisioned
yb-compiler-scaled   m5.8xlarge      false    Unused
yb-mgr-scaled        m5d.24xlarge    false    Unused
yb-mgr-standard      m5d.4xlarge     true     Provisioned
yb-mon-scaled        m5.8xlarge      false    Unused
yb-mon-standard      m5.xlarge       true     Provisioned
yb-op-scaled         m5.xlarge       false    Unused
yb-op-standard       t3.large        true     Provisioned
```
Only YBNodeGroups that are currently in use will be provisioned as AWS Nodegroups. YBNodeGroups that are not in use may or may not have an AWS Nodegroup, depending on whether they have ever been provisioned before. Currently, AWS Nodegroups are only created and updated; they are never deleted.
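To see which AWS Nodegroups actually exist for the EKS cluster, the AWS CLI can be used. The cluster name `demo` below is an assumption based on the instance name in this walkthrough; substitute your own:

```shell
# List the AWS Nodegroups backing the EKS cluster (cluster name is an assumption)
aws eks list-nodegroups --cluster-name demo

# Inspect one Nodegroup, including the launch template it currently uses
aws eks describe-nodegroup --cluster-name demo --nodegroup-name small-v2 \
    --query 'nodegroup.launchTemplate'
```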
Describe YBNodeGroup
Inspect the current AMI of a YBNodeGroup:
```sh
$ kubectl describe ybnodegroup -n yb-demo small-v2
Name:         small-v2
Namespace:    yb-demo
Labels:       <none>
Annotations:  <none>
API Version:  cluster.yellowbrick.io/v1
Kind:         YBNodeGroup
Metadata:
  Creation Timestamp:  2024-12-12T06:54:10Z
  Generation:          1
  Resource Version:    4097
  UID:                 71a801c9-3368-450a-80c6-6cad4de4d596
Spec:
  Cloud Init Config:   workerNodeConfig
  Custom Image:        amazon:amazon-eks-node-al2023-x86_64-standard-1.30-v20241121
  Hardware Type:       i4i.4xlarge
  Huge Pages:          1073741824
  Inuse:               true
  Network Interfaces:  3
  Node Selector:       yb-worker
Status:
  State:  Provisioned
Events:   <none>
```
In this example, the AMI named `amazon-eks-node-al2023-x86_64-standard-1.30-v20241121` from the owner `amazon` is in use. This is the AWS recommended EKS optimized AMI for this version of EKS. For stability, the AMI versions used by node groups are fixed based on the version of Yellowbrick in use. To upgrade the AMI version, either upgrade your version of Yellowbrick or update the YBNodeGroup to a recommended and supported AMI version.
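To find the AMI that AWS currently recommends for a given EKS version, the public SSM parameter for the EKS optimized AL2023 AMI can be queried. The Kubernetes version `1.30` below matches the examples in this document but is otherwise an assumption; adjust it and your region as needed:

```shell
# Resolve the currently recommended EKS-optimized AL2023 AMI for Kubernetes 1.30
aws ssm get-parameter \
    --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2023/x86_64/standard/recommended/image_id \
    --query 'Parameter.Value' --output text
```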
Updating YBNodeGroup
Create a patch to the YBNodeGroup that sets a new `customImage` referencing the new AMI. A valid value has the format "name" or "owner:name"; if the owner is not specified, "amazon" is inferred:
```sh
$ kubectl patch ybnodegroup small-v2 \
    -n yb-demo \
    --type='merge' \
    -p '{"spec":{"customImage":"amazon:amazon-eks-node-al2023-x86_64-standard-1.30-v20241205"}}'
```
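After patching, the new value can be read back to confirm it was applied:

```shell
# Confirm the patched customImage value on the YBNodeGroup
kubectl get ybnodegroup small-v2 -n yb-demo \
    -o jsonpath='{.spec.customImage}{"\n"}'
```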
It is also possible to use `kubectl edit` to modify the custom resource directly:
```sh
$ kubectl edit ybnodegroup small-v2 -n yb-demo
```
The owner value can be an owner alias such as `self`, `amazon`, `aws-backup-vault`, `aws-marketplace`, or an AWS account ID. A wildcard pattern can be used in the `customImage` field, such as `amazon-eks-node-al2023-x86_64-standard-1.30-*`. If more than one AMI matches the pattern, the most recent is used.
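To preview which AMI a wildcard would resolve to, the most-recent selection can be approximated with the AWS CLI (owner `amazon` and the pattern below match the examples in this document):

```shell
# Show the newest AMI matching the wildcard, mirroring the most-recent selection
aws ec2 describe-images --owners amazon \
    --filters 'Name=name,Values=amazon-eks-node-al2023-x86_64-standard-1.30-*' \
    --query 'sort_by(Images, &CreationDate)[-1].[Name, ImageId]' \
    --output text
```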
The AWS CLI can be used to understand the correct name and owner values given a specific AMI ID:
```sh
$ aws ec2 describe-images --image-ids ami-03a81a3c47f5d8d98
{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "2024-12-05T18:24:32.000Z",
            "ImageId": "ami-03a81a3c47f5d8d98",
            "ImageLocation": "amazon/amazon-eks-node-al2023-x86_64-standard-1.30-v20241205",
            "ImageType": "machine",
            "Public": true,
            "OwnerId": "602401143452",
            "PlatformDetails": "Linux/UNIX",
            "UsageOperation": "RunInstances",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/xvda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "Iops": 3000,
                        "SnapshotId": "snap-0e0a4e2237d64dc25",
                        "VolumeSize": 20,
                        "VolumeType": "gp3",
                        "Throughput": 125,
                        "Encrypted": false
                    }
                }
            ],
            "Description": "EKS-optimized Kubernetes node based on Amazon Linux 2023, (k8s: 1.30.7, containerd: 1.7.*)",
            "EnaSupport": true,
            "Hypervisor": "xen",
            "ImageOwnerAlias": "amazon",
            "Name": "amazon-eks-node-al2023-x86_64-standard-1.30-v20241205",
            "RootDeviceName": "/dev/xvda",
            "RootDeviceType": "ebs",
            "SriovNetSupport": "simple",
            "VirtualizationType": "hvm",
            "BootMode": "uefi-preferred",
            "DeprecationTime": "2026-12-05T18:24:32.000Z",
            "ImdsSupport": "v2.0"
        }
    ]
}
```
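From output like the above, the "owner:name" value can be assembled directly. A small sketch, using the example AMI ID from this document:

```shell
# Derive an "owner:name" customImage value from an AMI ID (ID is the example above)
ami_id=ami-03a81a3c47f5d8d98
img=$(aws ec2 describe-images --image-ids "$ami_id" \
        --query 'Images[0].[ImageOwnerAlias, Name]' --output text | tr '\t' ':')
echo "$img"
```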
Provisioning
An update to the YBNodeGroup AMI triggers the creation of a new launch template version and updates the underlying AWS Nodegroup to use that version. While this is happening, the status of the YBNodeGroup will be `Provisioning`. After the update has been made, the status of the YBNodeGroup will be `Provisioned`. Any existing EKS nodes running in that node group with the old AMI will be replaced with new nodes running the new AMI. If an error occurs during the update, it is shown in the `error` field of the YBNodeGroup for that node group.
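The rollout can be observed while it happens. The node label selector below is an assumption for illustration and may differ in your cluster:

```shell
# Watch the YBNodeGroup state change during the rollout
kubectl get ybnodegroup small-v2 -n yb-demo -w

# Watch old nodes drain and new nodes join (label selector is an assumption)
kubectl get nodes -l node.yellowbrick.io/group=small-v2 -w
```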
User Data
A custom AMI may need to be coordinated with custom user data. If your custom AMI also requires custom user data, this is considered an advanced deployment scenario and must be performed directly in the AWS console. The steps to be taken are:
- Identify the node group launch template.
- Create a new version of the launch template that includes the new custom user data and the new custom AMI.
- Update the node group to use that new launch template version.
- Update your YBNodeGroup `customImage` to match the AMI as outlined above.
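The first three steps can also be sketched with the AWS CLI. This is an illustration only: the launch template ID, version number, cluster name, and AMI ID below are placeholders you must determine for your own node group, and any custom user data would additionally need to be supplied (base64-encoded) in the launch template data:

```shell
# 1. Create a new launch template version with the new custom AMI
#    (template ID and AMI ID are placeholders)
aws ec2 create-launch-template-version \
    --launch-template-id lt-0123456789abcdef0 \
    --source-version '$Latest' \
    --launch-template-data '{"ImageId":"ami-03a81a3c47f5d8d98"}'

# 2. Point the node group at the new launch template version
aws eks update-nodegroup-version \
    --cluster-name demo --nodegroup-name small-v2 \
    --launch-template launchTemplateId=lt-0123456789abcdef0,version=2
```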
Please note that if you create custom user data as a multi-part document, you must not use `==YBBOUNDARY==` markers. Doing so could result in your custom user data being overwritten when upgrading versions of Yellowbrick.