# Set Up a GCP GKE Cluster
cnvrg can natively run using a GKE cluster hosted on your own GCP account, and a GKE cluster can also be used to run all your cnvrg jobs. This allows you to leverage cnvrg and Kubernetes to create one flexible compute and DS environment.
In this guide, you will learn how to:
- Create a GKE cluster using gcloud command line tools
# Create a User in GCP with the Correct Permissions
Creating a GKE cluster requires certain permissions within GCP.
To create a user with the necessary permissions to create a cluster and follow the rest of this guide, grant the following GCP permissions to the relevant user account:
- Service Usage Admin
- Create Service Accounts
- Security Admin
- Compute Admin
- Kubernetes Engine Admin
- Service Account User
- Service Account Key Admin
# Prerequisites: Prepare Your Local Environment
Before you can complete the installation, you must install and prepare the following dependencies on your local machine:
- The gcloud CLI
- kubectl
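This guide uses two local CLIs, gcloud and kubectl. As a quick sketch (installation methods vary by OS and are not covered here), you can check whether both are already on your PATH:

```
# Check that the CLIs used in this guide are installed.
# Prints "found" or "missing" for each tool.
for tool in gcloud kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```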
# Use the gcloud CLI to Log In to your Account
NOTE
You can skip this step if you have already logged in to the gcloud CLI.
Use the following command to log in to your GCP account.
```
gcloud auth login
```
NOTE
Make sure your user account has the necessary privileges or is an administrator.
# Set the gcloud Project for the Setup
NOTE
You can skip this step if you have already configured the project in the gcloud CLI.
Make sure your gcloud CLI is communicating with the GCP project you would like to use for the cluster.
Specify the project with the following command. Fill in the variable at the beginning with the correct project id:
```
PROJECT=<project-id>
gcloud config set project ${PROJECT}
```
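As a sanity check before continuing, you can compare the project you intend to use against what the CLI reports. A minimal sketch (`my-project` is a placeholder id; the `gcloud` call is left commented so you can review it first):

```
# Placeholder project id -- replace with your own.
PROJECT=my-project
echo "expected project: ${PROJECT}"
# Compare against the CLI's current view (requires gcloud):
# gcloud config get-value project
```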
# Set Up gcloud Services
NOTE
You can skip this step if you have already enabled the necessary services in the gcloud CLI.
Use the following commands to enable the services needed for the rest of the process:
```
gcloud services enable compute.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
```
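The five `enable` calls can also be issued in one pass. A sketch (the service list mirrors the commands above; the actual `gcloud` call is left commented so you can review before running):

```
# The services required by the rest of this guide.
SERVICES="compute.googleapis.com servicenetworking.googleapis.com \
cloudresourcemanager.googleapis.com container.googleapis.com \
containerregistry.googleapis.com"

for s in $SERVICES; do
  echo "enabling: $s"
  # gcloud services enable "$s"
done

# Alternatively, gcloud accepts several services in a single call:
# gcloud services enable $SERVICES
```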
# Create a Service Account for the Cluster
NOTE
You can skip this step if you have already created a service account.
# Create the service account
To properly manage and use the cluster, we need to create a service account. Use the following command to create it:
```
PROJECT=<project-id>
gcloud iam service-accounts create cnvrgio --display-name cnvrgio --project ${PROJECT}
```
# Delegate the permissions for the service account
Next, we will assign the correct permissions to our new service account:
```
PROJECT=<project-id>
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:cnvrgio@${PROJECT}.iam.gserviceaccount.com --role roles/container.admin
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:cnvrgio@${PROJECT}.iam.gserviceaccount.com --role roles/compute.admin
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:cnvrgio@${PROJECT}.iam.gserviceaccount.com --role roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:cnvrgio@${PROJECT}.iam.gserviceaccount.com --role roles/resourcemanager.projectIamAdmin
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:cnvrgio@${PROJECT}.iam.gserviceaccount.com --role roles/storage.admin
```
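All five bindings target the same member, built from the service account name and the project id. A small sketch of how that email is composed (`my-project` is a placeholder), plus a commented command for verifying the bindings afterwards:

```
PROJECT=my-project   # placeholder project id
SA_NAME=cnvrgio      # the account created above
SA_EMAIL="${SA_NAME}@${PROJECT}.iam.gserviceaccount.com"
echo "member: serviceAccount:${SA_EMAIL}"

# List the roles bound to this account (requires gcloud):
# gcloud projects get-iam-policy "${PROJECT}" \
#   --flatten="bindings[].members" \
#   --filter="bindings.members:serviceAccount:${SA_EMAIL}"
```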
# Create a VPC for the cluster
NOTE
If you have an existing VPC in GCP, you can skip this step.
Creating a VPC ensures everything is hosted securely and can only be accessed by those with the correct permissions.
You will need to decide which region to launch the cluster in. This affects where the cluster is located and which machine types are available. Using a region will create a multi-zonal cluster, which is more resilient than a single-zone cluster.
Use the following command to create the VPC. Fill in the variables at the beginning with the correct information for your cluster:
```
PROJECT=<project-id>
REGION=<region>
gcloud compute networks create cnvrg --subnet-mode custom --project ${PROJECT}
gcloud compute networks subnets create cnvrg-app --network cnvrg --region ${REGION} --range 10.10.0.0/16 --project ${PROJECT}
```
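For sizing intuition: the `10.10.0.0/16` range above leaves 16 host bits, which bounds how many addresses the subnet can hand out. A quick sketch of the arithmetic:

```
# A /16 prefix leaves 32 - 16 = 16 host bits.
PREFIX=16
ADDRESSES=$((1 << (32 - PREFIX)))
echo "a /${PREFIX} subnet spans ${ADDRESSES} addresses"
```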
# Create the GKE cluster
We will now create the Kubernetes cluster. Use the following command to initialize the cluster in GCP. Fill in the variables at the beginning with the correct information for your cluster:
```
PROJECT=<project-id>
CLUSTER_NAME=<cluster-name>
REGION=<region>
gcloud --project ${PROJECT} container clusters create ${CLUSTER_NAME} \
  --num-nodes 2 \
  --enable-ip-alias \
  --region ${REGION} \
  --machine-type n1-standard-8 \
  --image-type ubuntu \
  --scopes=storage-rw \
  --node-locations ${REGION} \
  --no-enable-autoupgrade
```
NOTE
The above command will create the cluster using the default service account, network and VPC. You can set those options by adding the following flags and editing them accordingly:
```
--service-account cnvrgio@${PROJECT}.iam.gserviceaccount.com \
--network projects/${PROJECT}/global/networks/cnvrg \
--subnetwork cnvrg-app
```
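One sizing detail worth noting: for a regional cluster, `--num-nodes` is the node count per zone, so the total node count depends on how many zones the region has. A sketch of the arithmetic, assuming a three-zone region (the common case in GCP):

```
NODES_PER_ZONE=2   # the --num-nodes value used above
ZONES=3            # assumption: a three-zone region
TOTAL=$((NODES_PER_ZONE * ZONES))
echo "expected total nodes: ${TOTAL}"
```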
# Retrieve the Credentials (kubeconfig) for the Newly Created Cluster
Running the following command will save the kubeconfig for the new cluster locally, allowing you to interact with it using kubectl.
The kubeconfig will be saved in `~/.kube/config`.
```
PROJECT=<project-id>
CLUSTER_NAME=<cluster-name>
REGION=<region>
gcloud container clusters get-credentials ${CLUSTER_NAME} \
  --region ${REGION} \
  --project ${PROJECT}
```
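By default the credentials merge into `~/.kube/config`. If you would rather keep this cluster's credentials in a separate file, a sketch using the standard `KUBECONFIG` environment variable (the file name here is arbitrary):

```
# Direct gcloud/kubectl at a dedicated kubeconfig file
# instead of the default ~/.kube/config.
export KUBECONFIG="${PWD}/cnvrg-kubeconfig"   # arbitrary file name
echo "kubectl will use: ${KUBECONFIG}"
# gcloud container clusters get-credentials ${CLUSTER_NAME} \
#   --region ${REGION} --project ${PROJECT}
```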
# Check That the Cluster Is Working Correctly
Run the following command to get a list of the nodes in the cluster. If this command runs correctly and returns the list of nodes, everything is working as expected:
```
kubectl get nodes
```
# Conclusion
Congratulations, you have now set up a working GKE cluster in GCP! You can now use Helm to finish the setup and get the cluster ready to be used for deploying the cnvrg app or running cnvrg jobs.