# Create a kubeconfig for Remote Debugging

cnvrg's workspace Remote SSH feature enables you to use a workspace for remote debugging.

The feature requires the user to have a kubeconfig for the cluster in which the workspace is running. This guide will help you create a kubeconfig that can be shared safely and grants only the required access privileges.

# Requirements

  • kubectl and access to the Kubernetes cluster
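
Before you begin, you can optionally confirm that kubectl is pointed at the intended cluster and that your current credentials are allowed to create the RBAC objects used below. A minimal check:

```bash
# Confirm kubectl is talking to the intended cluster
kubectl cluster-info

# Confirm your current credentials can create Roles in the cnvrg namespace
kubectl -n cnvrg auth can-i create roles
```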

# Create the permissions

  1. Copy the following permissions YAML and save it to your computer as ssh-permissions.yaml:

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cnvrg-port-forward-role
  namespace: cnvrg
rules:
- apiGroups: [""]
  resources: ["pods","pods/portforward"]
  verbs: ["get","list","create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cnvrg-port-forward-role
  namespace: cnvrg
subjects:
- kind: ServiceAccount
  name: cnvrg-port-forward-role
  namespace: cnvrg
roleRef:
  kind: Role
  name: cnvrg-port-forward-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: cnvrg-port
  name: cnvrg-port-forward-role
  namespace: cnvrg
```
  2. Run the following command:

```bash
kubectl apply -f ssh-permissions.yaml
```
  3. Check that the command returns the following output:

```
role.rbac.authorization.k8s.io/cnvrg-port-forward-role created
rolebinding.rbac.authorization.k8s.io/cnvrg-port-forward-role created
serviceaccount/cnvrg-port-forward-role created
```
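
Optionally, you can confirm that the new service account received the intended permissions by impersonating it with kubectl. Both commands should print "yes":

```bash
# Check the get/list permission on pods
kubectl -n cnvrg auth can-i list pods --as=system:serviceaccount:cnvrg:cnvrg-port-forward-role

# Check the create permission on the port-forward subresource
kubectl -n cnvrg auth can-i create pods/portforward --as=system:serviceaccount:cnvrg:cnvrg-port-forward-role
```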

# Retrieve the access token

Run the following command to retrieve the secret token:

```bash
kubectl -n cnvrg describe secret $(kubectl -n cnvrg get secret | grep cnvrg-port-forward-role-token | awk '{print $1}') | grep token:
```
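
On clusters running Kubernetes 1.24 or later, token secrets are no longer created automatically for service accounts, so the command above may return nothing. In that case you can request a token directly (tokens issued this way expire, by default after one hour):

```bash
# Requires kubectl v1.24 or later; add --duration to extend the token lifetime
kubectl -n cnvrg create token cnvrg-port-forward-role
```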

# Edit your Local kubeconfig

Copy the token from the previous step, then edit your kubeconfig file and change the following sections:

  1. Set cnvrg-app under "contexts":

    contexts ⇒ context ⇒ user: cnvrg-app

  2. Remove the existing child objects from the "users" section, and set the cnvrg-app user and token:

    users ⇒ name: cnvrg-app

    users ⇒ user ⇒ token: ACTUAL_TOKEN

The changed kubeconfig should look like this (with the rest of the data in the other fields kept intact):

```yaml
apiVersion:
clusters:
- cluster:
    certificate-authority-data:
    server:
  name:
contexts:
- context:
    cluster:
    user: cnvrg-app
  name:
current-context:
kind:
preferences:
users:
- name: cnvrg-app
  user:
    token: <PLACE_TOKEN_HERE>
```

Save the updated file.
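
Alternatively, the same change can be made with kubectl's config subcommands instead of editing the file by hand. This is only a sketch: CONTEXT_NAME is a placeholder for the name of your existing context, and ACTUAL_TOKEN is the token you copied above:

```bash
# Add (or overwrite) a user entry named cnvrg-app that authenticates with the token
kubectl config set-credentials cnvrg-app --token=ACTUAL_TOKEN

# Point the existing context at the cnvrg-app user
kubectl config set-context CONTEXT_NAME --user=cnvrg-app

# Make that context the current one
kubectl config use-context CONTEXT_NAME
```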

# Verify Access to the Cluster

Before concluding, check that you can access the cluster by listing its pods with the following command:

```bash
kubectl -n cnvrg get pods
```
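
Because the Remote SSH feature relies on port-forwarding, you can also verify that the new credentials allow it. POD_NAME below is a placeholder for one of the pods listed by the previous command, and the ports are only an example:

```bash
# Forward local port 2222 to port 22 on the pod; press Ctrl+C to stop
kubectl -n cnvrg port-forward pod/POD_NAME 2222:22
```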

# Conclusion

You have now created a kubeconfig that can be used for the Remote SSH feature.
