Data Warehouse Instances

A Yellowbrick instance is a deployed and provisioned data warehouse that runs in a SaaS cloud environment. After logging into the Yellowbrick Manager for the first time, you will be prompted to create your first instance.



You can create multiple instances to set different hardware limits for different users and workloads. Each instance defines the region (for AWS installations) and the supported hardware instance types: optional limits on the hardware resources that will be allocated to, and shared among, virtual compute clusters.

You can also reserve a number of compute nodes when you create an instance; see Reserved Node Capacity below.

There is also a per-instance database storage option. If you accept the default behavior for Create internal external storage, object storage for the data in your databases and tables is created automatically. If you deselect this option, you must create your own "primary storage" by using the CREATE EXTERNAL STORAGE and CREATE EXTERNAL LOCATION commands.
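If you opt out of internally created storage, those two commands point the instance at object storage that you manage yourself. A minimal sketch follows; the storage and location names, endpoint, bucket path, and credentials are all placeholders, and the exact clause syntax may vary by cloud provider and release, so consult the SQL command reference for the authoritative form:

```sql
-- Hypothetical example: register an S3 bucket as primary storage.
-- All names, paths, and keys below are placeholders.
CREATE EXTERNAL STORAGE my_primary_storage
  TYPE s3
  ENDPOINT 'https://s3.us-east-1.amazonaws.com'
  REGION 'us-east-1'
  IDENTITY 'my_access_key_id'
  CREDENTIAL 'my_secret_access_key';

-- Define a location (bucket and prefix) within that storage
-- where database and table data will live.
CREATE EXTERNAL LOCATION my_primary_location
  PATH 'my-bucket/yb-data'
  EXTERNAL STORAGE my_primary_storage;
```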

For simplicity and usability, "small" and "large" instance types are supported, where the large nodes use hardware with more cores per CPU, more RAM, and a larger data cache.
  • small-v1:
    • m5dn.4xlarge in AWS
    • Standard_L16s_v3 in Azure
  • small-v2: i4i.4xlarge in AWS
  • large-v1:
    • i3en.metal in AWS
    • Standard_L80s_v3 in Azure
  • large-v2: i4i.32xlarge in AWS

Local Storage refers to the size of the local SSDs (that is, a limit on the data cache, not on the persistent storage for your databases).

Note: Instance names must be 2 to 63 characters long and contain only alphanumeric characters and hyphens (-). A hyphen may not appear in the first or last position, and hyphens may not be repeated consecutively; use them only as separators.
For example, the following instance names are valid:
  • bobr-inst1-080222
  • yb60-cdwm318-august2-2022
  • yellowbrick60instance100
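Since the SQL dialect is PostgreSQL-compatible, one way to express the naming rule is as a POSIX regular expression. The query below is only an illustration of the rule, not part of any API, and the candidate names are made up (this assumes the PostgreSQL-style ~ regex operator is available):

```sql
-- The rule as a POSIX regular expression: alphanumeric runs joined by
-- single hyphens, no leading or trailing hyphen; length checked separately.
SELECT name,
       length(name) BETWEEN 2 AND 63
       AND name ~ '^[A-Za-z0-9]+(-[A-Za-z0-9]+)*$' AS is_valid
FROM (VALUES ('bobr-inst1-080222'),   -- valid
             ('-inst1'),              -- invalid: leading hyphen
             ('yb--inst')             -- invalid: repeated hyphen
     ) AS candidates(name);
```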
On the Dashboard, you can see details about your instances. Three useful pieces of information are:
  • Status (Healthy when available for use)
  • Site (region name)
  • Host/Port, a connection string that you can copy to the clipboard and use to connect client applications to this instance

Suspending and Resuming Instances

To manage the cost of cloud-based hardware resources, it is a best practice to suspend a data warehouse instance when it has not been in use for some period of time. Suspending an instance also suspends all of its compute clusters, regardless of the suspension and timeout policies set for those clusters and their current state. How long you allow an unused instance to remain up and running is your choice, as long as you monitor the cost. Also keep in mind that resuming a suspended instance takes a little while.

On the Yellowbrick Manager Dashboard, you can easily see how long each instance has been up and running.



You can click the pause button next to the run time to suspend an instance. The run time is not cumulative; it restarts from 0 when you suspend and resume an instance.

Important: When you create an instance, the default behavior is that an idle instance is not automatically suspended. After creating an instance, you can modify this behavior by selecting Actions > Change for the instance. On the Change Instance screen, select Automatically suspend on idle, then enter an Idle Timeout value and an Idle Grace Period value.
The Idle Timeout value is subject to an additional grace period: the timeout is not enforced until the grace period has elapsed. For example, if you set Idle Timeout to 30 minutes and Idle Grace Period to 1 hour, the instance does not automatically suspend until it has been idle for 90 minutes.
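The arithmetic is simply additive, which you can sanity-check with interval addition in PostgreSQL-compatible SQL:

```sql
-- Earliest possible auto-suspend = Idle Timeout + Idle Grace Period.
SELECT interval '30 minutes' + interval '1 hour' AS earliest_auto_suspend;
-- returns an interval of 90 minutes
```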

An instance is considered idle when no user-submitted operations (create and alter cluster operations, backend queries, bulk loads, backups, and so on) are running on clusters that belong to the instance. Front-end database queries and system work that runs in the background do not count toward the idle timeout.

A cluster that is running but idle does not prevent an instance from auto-suspending. The instance suspends its clusters when it auto-suspends.

Note that you cannot change the auto-suspend behavior of an instance if the instance is currently suspended.

Changing Hardware Instance Limits

After creating an instance, you can change the number of reserved nodes and/or the maximum number of nodes available to the clusters in the instance. This option provides a way to scale an instance up or down.

Reserved Node Capacity

The Reserved field in the Create Data Warehouse Instance screen is a means of reserving and spinning up the specified number of compute nodes when an instance is created, as opposed to waiting for nodes to spin up when compute clusters are created and started. Reserved nodes are shared among all the compute clusters that are created for a single Yellowbrick instance, and they are immediately available for use when those clusters start running and doing work. (Reserved nodes that are not part of a running cluster remain available but are not billed.)

The Limit field sets the maximum node capacity for the instance, while the Reserved field sets the reserved capacity within that limit. The limit per instance must be greater than or equal to the number of reserved nodes.

This approach to reserving nodes is driven by K8S auto-scaling, which causes AWS to spin up nodes during instance creation, before any clusters have been created for the instance. The reserved nodes feature is independent from, but complementary to, the creation of on-demand capacity reservations (ODCRs) in AWS.

There is no direct integration between these two features, but you can use Yellowbrick reserved nodes or ODCRs (or both) to expedite the availability of compute resources. Note, however, that ODCRs belong to an AWS account and apply globally: they are available to all Yellowbrick data warehouse instances, to associated Yellowbrick services that run in their own EC2 instances (shared services, bulk load servers, and so on), and, in theory, to non-Yellowbrick software driven from the same AWS account. The presence of an ODCR in AWS facilitates the provisioning of reserved nodes via K8S auto-scaling. Nodes reserved explicitly for a Yellowbrick instance in Yellowbrick Manager apply only to that instance and only to the nodes created for its compute clusters.

Availability of hardware resources in AWS is never guaranteed, but ODCRs and reserved nodes both make compute resources more immediately available. To decide how many reserved nodes or ODCRs will benefit your Yellowbrick deployment in practice, consider the following questions:
  • How many Yellowbrick data warehouse instances are you going to create?
  • How many compute clusters will be created for each instance, and what instance types and number of nodes will you allocate?
  • How dynamic are your cloud operations in Yellowbrick? Will clusters be routinely suspended and resumed, or expanded and shrunk? How variable will general use of compute clusters be? In other words, how many compute nodes need to be readily available at any given time?
  • What will the additional cost overhead be if you create ODCRs?
Note: ODCRs are of significant value as a bootstrapping mechanism during initial Yellowbrick installation. You can use ODCRs to make sure that sufficient hardware resources are available for creating the software stack and running all of its services. Once Yellowbrick is installed, you may decide to drop the ODCRs and rely on reserved nodes, or you may wish to use the two mechanisms in a complementary way.