# Create a kubeconfig for Remote Debugging
cnvrg's workspace Remote SSH feature enables you to use a workspace for remote debugging.
The feature requires that the user has a kubeconfig for the cluster in which the workspace is running. This guide will help you create a kubeconfig that can be shared safely and grants only the required access privileges.
# Requirements
- kubectl installed and admin access to the Kubernetes cluster in which cnvrg is deployed
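
Before continuing, you can confirm that kubectl can reach the cluster. The `cnvrg` namespace used throughout this guide is the default for cnvrg deployments; adjust it if your deployment uses a different one:
```bash
# Confirm the cluster is reachable and the cnvrg namespace exists
kubectl cluster-info
kubectl get namespace cnvrg
```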
# Create the permissions
- Below is the required permissions YAML file. Save it to your computer as `ssh-permissions.yaml`:
```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cnvrg-port-forward-role
  namespace: cnvrg
rules:
- apiGroups: [""]
  resources: ["pods", "pods/portforward"]
  verbs: ["get", "list", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cnvrg-port-forward-role
  namespace: cnvrg
subjects:
- kind: ServiceAccount
  name: cnvrg-port-forward-role
  namespace: cnvrg
roleRef:
  kind: Role
  name: cnvrg-port-forward-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: cnvrg-port
  name: cnvrg-port-forward-role
  namespace: cnvrg
```
- Run the following command:
```bash
kubectl apply -f ssh-permissions.yaml
```
- Check that you got the following response when you ran the command:
```
role.rbac.authorization.k8s.io/cnvrg-port-forward-role created
rolebinding.rbac.authorization.k8s.io/cnvrg-port-forward-role created
serviceaccount/cnvrg-port-forward-role created
```
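
If you want to double-check the objects afterwards, a quick way (using the resource names created above) is:
```bash
# List the Role, RoleBinding, and ServiceAccount created by the manifest
kubectl -n cnvrg get role,rolebinding,serviceaccount cnvrg-port-forward-role
```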
# Retrieve the access token
Run the following command to retrieve the secret token:
```bash
kubectl -n cnvrg describe secret $(kubectl -n cnvrg get secret | grep cnvrg-port-forward-role-token | awk '{print $1}') | grep token:
```
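
Note: on Kubernetes 1.24 and later, token Secrets are no longer created automatically for ServiceAccounts, so the command above may return nothing. In that case you can request a token directly; the duration shown here is only an example and may be capped by your cluster's configuration:
```bash
# Request a token for the service account (Kubernetes 1.24+)
kubectl -n cnvrg create token cnvrg-port-forward-role --duration=8760h
```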
# Edit your local kubeconfig
Copy the token from the step above, then edit your kubeconfig file and change the following sections:
- Set `cnvrg-app` under "contexts": contexts ⇒ context ⇒ user: `cnvrg-app`
- Remove the existing child objects from the "users" section, and set the `cnvrg-app` user and token:
  - users ⇒ name: `cnvrg-app`
  - users ⇒ user ⇒ token: `ACTUAL_TOKEN`
This is how the changed kubeconfig should look (with the rest of the data in the different fields kept intact):
```yaml
apiVersion:
clusters:
- cluster:
    certificate-authority-data:
    server:
  name:
contexts:
- context:
    cluster:
    user: cnvrg-app
  name:
current-context:
kind:
preferences:
users:
- name: cnvrg-app
  user:
    token: <PLACE_TOKEN_HERE>
```
Save the updated file.
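
Alternatively, instead of editing the file by hand, you can make the same changes with `kubectl config` commands. This sketch assumes your kubeconfig has a single context; replace the token placeholder with the token you retrieved:
```bash
# Add the cnvrg-app user with the retrieved token
kubectl config set-credentials cnvrg-app --token=<ACTUAL_TOKEN>
# Point the current context at the cnvrg-app user
kubectl config set-context --current --user=cnvrg-app
```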
# Verify Access to the Cluster
Before concluding, check that you can access the cluster by listing pods with the following command:
```bash
kubectl -n cnvrg get pods
```
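
You can also confirm that the new role allows port-forwarding by impersonating the service account, assuming your own user is permitted to impersonate service accounts (the service-account name matches the manifest above):
```bash
# Verify the service account may create port-forwards in the cnvrg namespace
kubectl -n cnvrg auth can-i create pods/portforward \
  --as=system:serviceaccount:cnvrg:cnvrg-port-forward-role
```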
# Conclusion
You have now created a kubeconfig that can be used for the Remote SSH feature.