# Advanced Helm Configuration Options

There are several considerations to address before deploying cnvrg. This section outlines each option and its corresponding Helm parameter. Please go through each section and collect the parameters you will use to personalize your cnvrg installation.

# SMTP Support

You can set up cnvrg to use an SMTP server for all of its email functionality. You must supply your own SMTP server and all of the relevant connection details.

SMTP is only relevant when installing the cnvrg app and is not used for creating a workers cluster.

You will need the following information about the server:

  • Server (ip address)
  • Port
  • Domain
  • Username for the email account
  • Password for the email account

To enable the functionality, include all these options in your Helm install command:

--set smtp.server=<smtp_ip_address> \
--set smtp.port=<smtp_port> \
--set smtp.domain=<smtp_domain> \
--set smtp.username=<smtp_username> \
--set smtp.password=<smtp_password>
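For illustration, a complete install command with SMTP enabled might look like the following sketch. The release name `cnvrg` and chart reference `cnvrg/cnvrg` are assumptions, and every SMTP value shown is a placeholder; substitute your own.

```shell
# Hypothetical example -- the release name, chart reference, and all
# SMTP values below are placeholders, not part of the official docs.
helm install cnvrg cnvrg/cnvrg \
  --set smtp.server=192.0.2.25 \
  --set smtp.port=587 \
  --set smtp.domain=company.com \
  --set smtp.username=cnvrg-notifications@company.com \
  --set smtp.password='s3cret-password'
```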

# Persistent Disk Sizes

cnvrg leverages some additional services such as Object Storage (Minio), PostgreSQL, Elasticsearch, ElasticAlert and Prometheus. Each of these services uses some persistent disk storage to save files and information. By default, the Helm chart allocates 10Gi for Prometheus and 30Gi for each of the other services. However, you can alter the storage for each service as desired.

Use the relevant --set xxx.disk_size flag for each storage amount you would like to alter. You do not need to include all of the flags, only those you want to change. Each flag accepts an integer followed by Gi, representing the amount of storage to allocate.

These flags can be used alongside any other combination of configuration flags.

Include the options of your choice in your Helm install command:

--set object_storage.disk_size=<disk_size_in_gb>Gi \
--set postgres.disk_size=<disk_size_in_gb>Gi \
--set elasticsearch.disk_size=<disk_size_in_gb>Gi \
--set elasticalert.disk_size=<disk_size_in_gb>Gi \
--set prometheus.disk_size=<disk_size_in_gb>Gi
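As a sketch, to increase only the Elasticsearch volume to 60Gi and leave every other service at its default, a single flag is enough (the release and chart names here are illustrative assumptions):

```shell
# Only the flags you pass are overridden; the rest keep their defaults.
helm install cnvrg cnvrg/cnvrg \
  --set elasticsearch.disk_size=60Gi
```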

# Ingress

You can set up external ingress to cnvrg or the workers cluster using either NodePort or Istio:

When using an on-premise cluster, the default is NodePort, i.e. --set use_istio_ingress=false. When using a cloud cluster, Istio is required and is enabled by default, i.e. --set use_istio_ingress=true.

# NodePort overview

This is the most basic and least resource heavy option. Using NodePort will simply expose a port on all nodes as an external ingress. All jobs will be confined to running on one node.

To use this option, make sure there is no firewall enabled and that all ports in the range 30000-32767 are open and accessible.

This option cannot be combined with the --set global.domain flag and cannot be used with cloud clusters. It must be used together with the --set global.node flag.

Include this option in your Helm install command:

--set global.external_ip=<external_ip>

When using Minikube and accessing cnvrg from the same machine, you can set this easily as follows: --set global.external_ip=$(minikube ip). If you are running Minikube on a remote machine and intend to access cnvrg from your local machine (outside the VPC), you will need to use the external ip address of the remote machine.
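The two cases above can be sketched as follows; the remote address 203.0.113.10 is a placeholder, and the release/chart names are assumptions:

```shell
# Local Minikube, accessed from the same machine:
helm install cnvrg cnvrg/cnvrg --set global.external_ip=$(minikube ip)

# Remote Minikube, accessed from outside the VPC -- use the remote
# machine's external ip address (placeholder shown):
helm install cnvrg cnvrg/cnvrg --set global.external_ip=203.0.113.10
```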

# Istio overview

cnvrg can also leverage Istio to create native DNS routing through a service mesh. Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more. This means your jobs are not restricted to running on one node. This option is more resource heavy than NodePort.

To use this option, make sure there is no firewall enabled and ports 80 and 443 are open and accessible.

To use Istio ingress on-premise, you must have a domain for your cluster and use the --set global.domain and --set global.external_ip flags. When installing on the cloud, Istio is mandatory and enabled by default.

Include this option in your Helm install command:

--set global.enable_istio_ingress=true

# Node for Deployment

When installing cnvrg on-premise, you will need to specify which node from the cluster you will deploy cnvrg to. To get a list of available nodes you can run:

kubectl get nodes

NAME      STATUS   ROLES    AGE    VERSION
node1     Ready    master   21d    v1.17.1

This flag is only relevant for on-premise clusters, and its default value is minikube.

To specify which node, include this option in your Helm install command:

--set global.node=<node_name>
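Putting the two steps together, you can capture a node name from kubectl and pass it straight to Helm. This is a sketch: the jsonpath expression simply picks the first node in the list, and the release/chart names are assumptions:

```shell
# Grab the name of the first node in the cluster...
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

# ...and deploy cnvrg onto it.
helm install cnvrg cnvrg/cnvrg --set global.node="$NODE"
```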

# Lightweight Install

You can optionally install a lightweight version of cnvrg with lower CPU and memory requirements. The minimum requirements with this enabled are 4 CPUs and 3GB of memory. This option is enabled automatically when --set global.node=minikube, but can optionally be controlled by the user.

Include this option in your Helm install command:

--set global.high_availability=false
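For example, since a minikube deployment defaults to the lightweight install, you could explicitly opt back into the full install as sketched below (the release/chart names are assumptions):

```shell
# Override the minikube default and request the full install:
helm install cnvrg cnvrg/cnvrg \
  --set global.node=minikube \
  --set global.high_availability=true
```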

# Domain

You can specify a domain for your cluster. This allows you to use any domain as the location for the cluster. For example, you could set the domain for the cluster to cnvrg.company.com.

For this option to work, you will need to set the required DNS routing rules between the domain and the ip address of the cluster after helm install has finished running.

You will need to create a CNAME/A record for *.<your_domain> pointing at the ip address of the autoscaler for the cluster. Make sure you include the wildcard: *.

The domain is the same domain you entered as <your_domain> in the helm command.

To get the ip address of the cluster run the following command after cnvrg has been deployed:

kubectl -n cnvrg get svc | grep ingress | awk '{print $4}'
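Once the wildcard record is in place, a quick sanity check is to confirm that a hostname under your domain resolves to the ingress ip. This sketch assumes dig is installed and uses app.cnvrg.company.com as a placeholder hostname:

```shell
# Fetch the ingress ip reported by the cluster...
INGRESS_IP=$(kubectl -n cnvrg get svc | grep ingress | awk '{print $4}')

# ...and verify the wildcard DNS record resolves to it.
dig +short app.cnvrg.company.com | grep -qx "$INGRESS_IP" \
  && echo "wildcard DNS resolves to the ingress ip"
```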

The value for this flag must be the bare domain, <your_domain> (for example, cnvrg.company.com).

This is a required flag when working with cloud clusters.

Include this option in your Helm install command:

--set global.domain=<your_domain>

# Cloud Provider

When working with cloud clusters, you will need to use this flag to identify which cloud provider the Kubernetes cluster is hosted on.

This is a required flag for cloud clusters and cannot be used with on-premise clusters.

The options are:

  • eks
  • gke
  • aks

Include this option in your Helm install command:

--set global.provider=[eks/gke/aks]
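As an illustrative sketch, the two required cloud flags combine as follows (the release/chart names and the domain are placeholders):

```shell
# EKS example; swap in gke or aks for the other providers.
helm install cnvrg cnvrg/cnvrg \
  --set global.provider=eks \
  --set global.domain=cnvrg.company.com
```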
Last Updated: 9/16/2020, 1:40:07 PM