An instance group is a kops concept that defines a grouping of similar machines. In AWS, an instance group maps to an AutoScalingGroup (ASG).
In this section we’ll look at different ways of using kops instance groups in a Kubernetes cluster, including:
- Updating an instance group by adjusting its size or the instance types it contains
- Adding an additional instance group to a cluster, and scheduling pods on the instances in the new instance group
- Pinning an instance group to a specific Availability Zone (AZ)
Your Kubernetes cluster already has a couple of instance groups that were created by kops: one instance group for the worker nodes, another for the master nodes. List the instance groups in your cluster:
kops get ig
$ kops get ig
Using cluster from kubectl context: cluster.k8s.local
NAME                 ROLE    MACHINETYPE    MIN    MAX    SUBNETS
master-us-east-1d    Master  m3.medium      1      1      us-east-1d
nodes                Node    t2.medium      2      6      us-east-1d,us-east-1e
As we mentioned above, an instance group maps to an ASG in AWS, so let’s check out the associated ASG. The name of the AWS ASG is a combination of the instance group name and the Kubernetes cluster name:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local
$ aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local
{
    "AutoScalingGroups": [
        {
            "AutoScalingGroupARN": "arn:aws:autoscaling:us-east-1:<account-id>:autoScalingGroup:070e8c67-c3fb-4a2d-a7b2-9d9af84fc876:autoScalingGroupName/nodes.cluster.k8s.local",
            .
            .
            "AutoScalingGroupName": "nodes.cluster.k8s.local",
            "DefaultCooldown": 300,
            "MinSize": 2,
            "MaxSize": 6,
            "Instances": [
                {
                    "ProtectedFromScaleIn": false,
                    "AvailabilityZone": "us-east-1d",
                    "InstanceId": "i-007c28e33c7c7bd2b",
                    "HealthStatus": "Healthy",
                    "LifecycleState": "InService",
                    "LaunchConfigurationName": "nodes.cluster.k8s.local-20171025033916"
                },
                {
                    "ProtectedFromScaleIn": false,
                    "AvailabilityZone": "us-east-1e",
                    "InstanceId": "i-040845072f78d347f",
                    "HealthStatus": "Healthy",
                    "LifecycleState": "InService",
                    "LaunchConfigurationName": "nodes.cluster.k8s.local-20171025033916"
                }
            ],
            .
            .
        }
    ]
}
We can use kops to change the size of an instance group. In AWS terms, this is the same as changing the MinSize and MaxSize of your ASG. Let’s try this:
kops edit ig nodes
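This opens the instance group spec in your editor. A minimal sketch of the relevant fields follows, assuming the defaults from this workshop (your spec will contain additional fields):

# Illustrative excerpt of the spec opened by 'kops edit ig nodes'
spec:
  machineType: t2.medium
  maxSize: 8   # raised from 6; kops maps this to the ASG MaxSize
  minSize: 2   # kops maps this to the ASG MinSize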
Increase or decrease spec.maxSize and save the change (in vim, ZZ saves and exits). At this point, only the configuration has been changed; no changes have been applied to the cluster. We can preview the changes prior to applying them:
kops update cluster
$ kops update cluster
Using cluster from kubectl context: cluster.k8s.local
.
.
Will modify resources:
  AutoscalingGroup/nodes.cluster.k8s.local
      MaxSize  6 -> 8
Must specify --yes to apply changes
Now let’s apply the changes.
kops update cluster --yes
$ kops update cluster --yes
Using cluster from kubectl context: cluster.k8s.local
.
.
kops has set your kubectl context to cluster.k8s.local
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
The output of 'update cluster' hints that a rolling update may be required. Since we are only updating the size of the instance group, no rolling update is necessary. In the next example we’ll make an update where a rolling update will be required.
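If you want to confirm that nothing needs to restart, you can preview a rolling update now; with only a size change, it should report no nodes needing update:

kops rolling-update cluster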
Now check whether the MaxSize of our ASG has been updated:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local
$ aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local
{
    "AutoScalingGroups": [
        {
            "AutoScalingGroupARN": "arn:aws:autoscaling:us-east-1:<account-id>:autoScalingGroup:070e8c67-c3fb-4a2d-a7b2-9d9af84fc876:autoScalingGroupName/nodes.cluster.k8s.local",
            .
            .
            "AutoScalingGroupName": "nodes.cluster.k8s.local",
            "DefaultCooldown": 300,
            "MinSize": 2,
            "MaxSize": 8,
            "Instances": [
                {
                    "ProtectedFromScaleIn": false,
                    "AvailabilityZone": "us-east-1d",
                    "InstanceId": "i-007c28e33c7c7bd2b",
                    "HealthStatus": "Healthy",
                    "LifecycleState": "InService",
                    "LaunchConfigurationName": "nodes.cluster.k8s.local-20171025033916"
                },
                {
                    "ProtectedFromScaleIn": false,
                    "AvailabilityZone": "us-east-1e",
                    "InstanceId": "i-040845072f78d347f",
                    "HealthStatus": "Healthy",
                    "LifecycleState": "InService",
                    "LaunchConfigurationName": "nodes.cluster.k8s.local-20171025033916"
                }
            ],
            .
            .
        }
    ]
}
We can use kops to change the instance type of the instances in an instance group. In AWS terms, this is the same as changing the LaunchConfiguration associated with an ASG. Because LaunchConfigurations are immutable in AWS, this change results in a new LaunchConfiguration being created and the ASG being updated to reference it.
kops edit ig nodes
Change the instance type. kops supports specific AWS instance types; see the kops source code for the latest list: https://github.com/kubernetes/kops/blob/709f902c11079345588119ab48c46b7129ef1e44/upup/pkg/fi/cloudup/awsup/machine_types.go#L74
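For example, moving the worker nodes from t2.medium to m4.large is a one-line edit. An illustrative excerpt of the spec after the edit (the rest of the spec is unchanged):

# Illustrative excerpt of the spec opened by 'kops edit ig nodes'
spec:
  machineType: m4.large   # was t2.medium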
As with the previous example, only the configuration has been changed at this stage. Let’s preview our changes:
kops update cluster
$ kops update cluster
Using cluster from kubectl context: cluster.k8s.local
.
.
Will modify resources:
  LaunchConfiguration/nodes.cluster.k8s.local
      InstanceType  t2.medium -> m4.large
Must specify --yes to apply changes
Before we apply the changes, let’s check out our LaunchConfiguration so we can see whether kops updates it. Get the LaunchConfiguration from the ASG and note the InstanceType:
$ aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local --query 'AutoScalingGroups[0].[LaunchConfigurationName]'
[
    "nodes.cluster.k8s.local-20171025033916"
]
$ aws autoscaling describe-launch-configurations --launch-configuration-names nodes.cluster.k8s.local-20171025033916
{
    "LaunchConfigurations": [
        {
            "UserData": "etc",
            "IamInstanceProfile": "nodes.cluster.k8s.local",
            "EbsOptimized": false,
            .
            .
            "LaunchConfigurationName": "nodes.cluster.k8s.local-20171025033916",
            "InstanceType": "t2.medium",
            "AssociatePublicIpAddress": true
        }
    ]
}
Now update the cluster.
kops update cluster --yes
$ kops update cluster --yes
Using cluster from kubectl context: cluster.k8s.local
.
.
kops has set your kubectl context to cluster.k8s.local
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
We expect kops to have created a new LaunchConfiguration using our updated EC2 instance type and updated our ASG to refer to this LaunchConfiguration, so let’s check if this is indeed the case. Note that the name of the LaunchConfiguration associated with the ASG has changed, and the LaunchConfiguration reflects the new instance type:
$ aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names nodes.cluster.k8s.local --query 'AutoScalingGroups[0].[LaunchConfigurationName]'
[
    "nodes.cluster.k8s.local-20171112055155"
]
$ aws autoscaling describe-launch-configurations --launch-configuration-names nodes.cluster.k8s.local-20171112055155
{
    "LaunchConfigurations": [
        {
            "UserData": "etc",
            "IamInstanceProfile": "nodes.cluster.k8s.local",
            "EbsOptimized": false,
            .
            .
            "LaunchConfigurationName": "nodes.cluster.k8s.local-20171112055155",
            "InstanceType": "m4.large",
            "AssociatePublicIpAddress": true
        }
    ]
}
The kops configuration has been updated to reflect the new instance type:
$ kops get ig
Using cluster from kubectl context: cluster.k8s.local
NAME                 ROLE    MACHINETYPE    MIN    MAX    SUBNETS
master-us-east-1d    Master  m3.medium      1      1      us-east-1d
nodes                Node    m4.large       2      8      us-east-1d,us-east-1e
However, the EC2 instances currently running as worker nodes in the Kubernetes cluster have not yet been updated. You can check this by using one of the EC2 instance IDs from the 'aws autoscaling describe-auto-scaling-groups' command you ran earlier:
$ aws ec2 describe-instances --instance-ids i-007c28e33c7c7bd2b --query Reservations[0].Instances[0].InstanceType
"t2.medium"
This makes sense. In AWS, creating a new LaunchConfiguration and associating it with an ASG has no impact until you scale the ASG. As you scale out, new EC2 instances are created based on the new LaunchConfiguration, and as you scale in, EC2 instances based on the oldest LaunchConfiguration are terminated.
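You can see this for yourself by listing which LaunchConfiguration each in-service instance was launched from; until we perform the rolling update below, the existing instances still reference the old LaunchConfiguration:

aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names nodes.cluster.k8s.local \
  --query 'AutoScalingGroups[0].Instances[*].[InstanceId,LaunchConfigurationName]'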
To apply the new instance type to the cluster we do a rolling update. As with many other kops commands, we can preview the changes before applying them:
$ kops rolling-update cluster
Using cluster from kubectl context: cluster.k8s.local
NAME               STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-east-1d  Ready        0           1      1    1    1
nodes              NeedsUpdate  2           0      2    8    2
Must specify --yes to rolling-update.
Now apply the changes. You’ll notice existing EC2 instances in the cluster being terminated one-by-one, and new instances based on the new LaunchConfiguration being started. This activity can also be viewed in the AWS Console, under the EC2 service. See Activity History under the appropriate Auto Scaling Group.
$ kops rolling-update cluster --yes
Using cluster from kubectl context: cluster.k8s.local
NAME               STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-east-1d  Ready        0           1      1    1    1
nodes              NeedsUpdate  2           0      2    8    2
I1112 14:11:51.260854 52494 instancegroups.go:350] Stopping instance "i-007c28e33c7c7bd2b", node "ip-172-20-59-20.ec2.internal", in AWS ASG "nodes.cluster.k8s.local".
I1112 14:13:51.907500 52494 instancegroups.go:350] Stopping instance "i-040845072f78d347f", node "ip-172-20-71-215.ec2.internal", in AWS ASG "nodes.cluster.k8s.local".
I1112 14:15:55.287844 52494 rollingupdate.go:174] Rolling update completed!
In this section you’ll add an additional instance group to your Kubernetes cluster, so that the cluster comprises two instance groups with different instance types. We’ll then schedule a pod to run specifically on instances in the new instance group. This is useful, for example, if you want to run high-performance workloads on GPU or FPGA instance types, or if you want to run pods on instances with specific EBS volumes attached.
Let’s go ahead and create an instance group for p2 instance types:
kops create ig p2 --subnet us-east-1d,us-east-1e
Change the machineType in the resulting skeleton configuration. Make sure you add the 'nodeLabels' attribute as in the example below. This will be used in a later example to schedule pods onto these instances.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-11-12T07:25:23Z
  labels:
    kops.k8s.io/cluster: cluster.k8s.local
  name: p2
spec:
  image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28
  machineType: p2.xlarge
  maxSize: 2
  minSize: 2
  nodeLabels:
    type: p2-ig
  role: Node
  subnets:
  - us-east-1d
  - us-east-1e
Preview and apply your changes in the usual way. In the preview you’ll notice that a new LaunchConfiguration and ASG are about to be created.
kops update cluster
$ kops update cluster
Using cluster from kubectl context: cluster.k8s.local
.
.
Will create resources:
  AutoscalingGroup/p2.cluster.k8s.local
      MinSize              2
      MaxSize              2
      .
      LaunchConfiguration  name:p2.cluster.k8s.local
  LaunchConfiguration/p2.cluster.k8s.local
      ImageID              kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28
      InstanceType         p2.xlarge
.
.
Must specify --yes to apply changes
kops update cluster --yes
After applying the changes, it will take a few minutes for the ASG to provision the instances and reach the MinSize specified in the instance group config. You can check progress by viewing the nodes in your cluster. Note that AWS may provision only one P2 instance instead of the two specified, because many AWS accounts have a default soft limit of one P2 instance. If this happens, don’t worry; we can still schedule pods onto the P2 instance(s) in this instance group.
Check that your new instances have been provisioned:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-20-32-243.ec2.internal Ready master 18d v1.7.4
ip-172-20-38-84.ec2.internal Ready node 3m v1.7.4
ip-172-20-50-253.ec2.internal Ready node 1h v1.7.4
ip-172-20-69-144.ec2.internal Ready node 1h v1.7.4
ip-172-20-69-196.ec2.internal Ready node 3m v1.7.4
If you have a lot of instances in your cluster, you can show only the instances in your new instance group using the label we assigned during the creation of the instance group earlier. This also confirms that the label has been applied to the instances in the instance group - we’ll need the label to schedule pods onto these instances:
$ kubectl get nodes -l type=p2-ig
NAME STATUS ROLES AGE VERSION
ip-172-20-38-84.ec2.internal Ready node 13m v1.7.4
ip-172-20-69-196.ec2.internal Ready node 13m v1.7.4
Now we’ll schedule a pod onto an instance in the new instance group and check that it lands on the correct node. The pod manifest uses a nodeSelector that matches the node label we set earlier; a plausible sketch of the manifest is shown below.
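# A plausible sketch of instance-groups/nginx-on-p2.yaml; the file shipped
# with the workshop may differ. The nodeSelector matches the 'type: p2-ig'
# node label we defined on the instance group.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    type: p2-ig
  containers:
  - name: nginx
    image: nginx

Create the pod: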
$ kubectl create -f instance-groups/nginx-on-p2.yaml
pod "nginx" created
Check if the pod was scheduled onto the correct node. The value of NODE should match the NAME of a node in the p2 instance group:
$ kubectl get pods nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 1m 100.96.10.2 ip-172-20-69-196.ec2.internal
Certain workloads benefit from low latency network connectivity between containers. This can be achieved by launching an instance group with instances in a single Availability Zone (AZ). It’s quite easy to do this using kops.
When creating an instance group with kops, it’s necessary to specify a subnet. Since a subnet is specific to an AZ, specifying subnets in a single AZ will pin the instance group to that AZ. In the case of the cluster we created in this workshop, the subnet names default to the AZ names:
kops create ig ig-1d --subnet us-east-1d
As with the previous example, add the 'nodeLabels' attribute as in the example below.
apiVersion: kops/v1alpha2
kind: InstanceGroup
.
.
  nodeLabels:
    type: 1d-ig
  role: Node
  subnets:
  - us-east-1d
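After saving, you can double-check that the new instance group is pinned to a single subnet (and hence a single AZ); the subnets field in the output should list only us-east-1d:

kops get ig ig-1d -o yaml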
Preview and apply your changes in the usual way. In the preview you’ll notice that a new LaunchConfiguration and ASG are about to be created.
kops update cluster
$ kops update cluster
Using cluster from kubectl context: cluster.k8s.local
.
.
Will create resources:
  AutoscalingGroup/ig-1d.cluster.k8s.local
      MinSize              2
      MaxSize              2
      .
      LaunchConfiguration  name:ig-1d.cluster.k8s.local
  LaunchConfiguration/ig-1d.cluster.k8s.local
      ImageID              kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28
      InstanceType         t2.medium
.
.
Must specify --yes to apply changes
kops update cluster --yes
Check that your new instances have been provisioned in the correct AZ:
$ kubectl get nodes -l type=1d-ig -o json | grep failure-domain.beta.kubernetes.io/zone
"failure-domain.beta.kubernetes.io/zone": "us-east-1d",
"failure-domain.beta.kubernetes.io/zone": "us-east-1d",
Now we’ll schedule a pod onto an instance in the new instance group and check that it lands on the correct node. As before, the pod manifest uses a nodeSelector, this time matching the 'type: 1d-ig' label; a plausible sketch is shown below.
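# A plausible sketch of instance-groups/nginx-on-1d-ig.yaml; the file shipped
# with the workshop may differ. The nodeSelector matches the 'type: 1d-ig'
# node label we defined on the instance group.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    type: 1d-ig
  containers:
  - name: nginx
    image: nginx

Create the pod: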
$ kubectl create -f instance-groups/nginx-on-1d-ig.yaml
pod "nginx" created
Check if the pod was scheduled onto the correct node. The value of NODE should match a node in the 1d instance group, in the us-east-1d AZ:
$ kubectl get nodes -l type=1d-ig
NAME STATUS ROLES AGE VERSION
ip-172-20-39-205.ec2.internal Ready node 7m v1.7.4
ip-172-20-42-147.ec2.internal Ready node 8m v1.7.4
$ kubectl get pods nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 1m 100.96.12.2 ip-172-20-39-205.ec2.internal
Since GPU instances are costly, let’s clean up by deleting the pods and both instance groups we created. In the background, the associated ASGs will be deleted.
kubectl delete -f instance-groups/nginx-on-p2.yaml
kubectl delete -f instance-groups/nginx-on-1d-ig.yaml
kops delete ig p2 --yes
kops delete ig ig-1d --yes
Check that the instance groups have been deleted:
$ kops get ig
Using cluster from kubectl context: cluster.k8s.local
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1d Master m3.medium 1 1 us-east-1d
nodes Node m4.large 2 8 us-east-1d,us-east-1e
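You can also confirm on the AWS side that the ASGs are gone. Describing them by name should return an empty list once deletion completes:

aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names p2.cluster.k8s.local ig-1d.cluster.k8s.local \
  --query 'AutoScalingGroups[*].AutoScalingGroupName'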