Full deployment of SAS® Viya® 3.5 on Azure Container Services

Background

SAS® Viya® can be deployed within container-enabled infrastructures, including Docker and Kubernetes, which are often run in the cloud. SAS Viya comes in two deployment flavors: programming-only and full. A programming-only deployment excludes the general services and visual interfaces that are included in your order, while a full deployment includes the complete software stack for which you are licensed.

This blog provides an overview of how to build a full deployment of SAS Viya on Azure Kubernetes Service with persistent volumes to retain configuration and data.

Key Technologies

Azure Container Service (ACS) is a cloud-based container deployment and management service that supports popular open-source tools and technologies for containers and container orchestration.

Azure Kubernetes Service (AKS) is a managed container orchestration service, based on the open-source Kubernetes system, which is available on the Microsoft Azure public cloud. An organization can use AKS to deploy, scale, and manage Docker containers and container-based applications across a cluster of container hosts.

Azure Container Registry (ACR) is a private Docker registry in Azure where you can store and manage private Docker container images and related artifacts.

Azure VM Scale Sets allow you to create and manage a group of identical, load-balanced VMs. Scale sets provide high availability for applications and allow you to centrally manage, configure, and update a large number of VMs.

Azure NetApp Files is an Azure first-party service for migrating and running the most demanding enterprise file workloads in the cloud, including high-performance computing applications.

Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data.

Azure Node Pool is a group of nodes with the same configuration. These node pools contain the underlying VMs that run the applications.

Pods are used by Kubernetes to run an instance of the application. You can have several containers in a pod that share the same resources. Using a pod definition allows you to schedule and manage multiple containers together. Each node in a node pool can run one or more pods.

Architecture

One of the basic ways of running containers is to provision a Linux machine in the cloud environment, install Docker, and deploy your containers.

For this deployment, we use Azure's container orchestrator, Azure Kubernetes Service (AKS), and other native Azure services. AKS is a managed Kubernetes service that lets you deploy and manage container clusters, and it simplifies the deployment, management, and operations of Kubernetes.

To run SAS Viya in containers, you need several Kubernetes nodes on which the pods are scheduled. A node is an Azure virtual machine (VM), and the size of the VM determines the resources available to your pods. An AKS cluster has one or more nodes, which can be grouped into node pools to simplify scheduling and management.

Figure 1: Network diagram

Cluster Sizing

A Docker VM is used to connect to the AKS cluster to deploy and test the SAS Viya environment. In this deployment, a Network File System (NFS) server is set up on this VM.

  • 1 X Standard D2_v3 (2 vCPUs, 8 GB memory)

AKS node pools contain the underlying VMs used to run the SAS Viya application in containers. These pools and VMs can be sized and scaled according to your CAS workload. In this deployment we use:

  • 1 X DS2_v2 (2 vCPUs, 7 GB memory) – Primary node pool
  • 2 X D16_v3 (16 vCPUs, 64 GB memory) – SAS Viya node pool

Deployment Steps

Figure 2: Deployment Steps

Build Docker VM

Launch a VM that is used to build and manage the Docker images and to customize the YAML files for the Kubernetes deployment. Choose a VM size that meets the minimum requirements.
Log in to the Docker admin server and install Docker.
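The exact installation steps depend on the VM's operating system. A minimal sketch, assuming a RHEL/CentOS 7 VM and the Docker CE repository:

# Add the Docker CE repository and install Docker (assumes RHEL/CentOS 7)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start Docker and enable it at boot
sudo systemctl enable --now docker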

Install and Configure Azure Cloud CLI

The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
Install the Azure CLI on the Docker VM and authenticate with your user ID.
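As a sketch, assuming the same RHEL/CentOS-based Docker VM, the Azure CLI can be installed from the Microsoft yum repository and authenticated with az login:

# Import the Microsoft signing key and add the Azure CLI repository
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'

# Install the CLI and sign in (opens a device or browser login prompt)
sudo yum install -y azure-cli
az login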

Create Azure Storage Account and Blob container

Create a new Azure Storage Account to store the SAS mirror repository in the appropriate resource group and location. Once the storage account is set up, create a container to store the SAS repository files. You can reuse this container for multiple deployments and future upgrades.
Mount the blob storage as a file system on the Linux machine and create the SAS mirror repository.
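These steps can be done in the portal or from the Azure CLI together with blobfuse. The sketch below assumes illustrative names (sasrepostorage for the storage account, sasrepo for the container) and a fuse_connection.cfg file that holds the accountName, accountKey, and containerName entries:

# Create the storage account and the blob container for the SAS mirror repository
az storage account create --name sasrepostorage --resource-group <resource-group> --location <location> --sku Standard_LRS --kind StorageV2
az storage container create --account-name sasrepostorage --name sasrepo

# Install blobfuse from the Microsoft package repository and mount the container
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
sudo yum install -y blobfuse fuse
mkdir -p /mnt/blobfusetmp /mnt/sasrepo
sudo blobfuse /mnt/sasrepo --tmp-path=/mnt/blobfusetmp --config-file=./fuse_connection.cfg -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120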

Figure 3: SAS Repository Container

Build SAS Viya 3.5 Base Image

Install httpd server

Install the httpd server and start the service.

sudo yum install httpd

sudo systemctl start httpd

Create Mirror Repository

As a best practice, create a SAS mirror repository. SAS Mirror Manager downloads the software in your order and creates a mirror repository. The Software Order Email (SOE) instructs you to save the SAS_Viya_deployment_data.zip attachment. Download the .zip file and copy it to the /var/www/html/sasrepo folder.

mkdir /var/www/html/sasrepo
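A sketch of the mirror creation with SAS Mirror Manager, assuming the download location and flags from the SAS Viya 3.5 deployment documentation (verify both against your Software Order Email):

# Download and extract SAS Mirror Manager
wget https://support.sas.com/installation/viya/35/sas-mirror-manager/lax/mirrormgr-linux.tgz
tar -xzf mirrormgr-linux.tgz

# Build the mirror repository under the httpd document root
./mirrormgr mirror --deployment-data /var/www/html/sasrepo/SAS_Viya_deployment_data.zip --path /var/www/html/sasrepo --platform x64-redhat-linux-6 --latest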

Build Base Image

Download the latest sas-container-recipes using the commands below.

sudo yum -y install git

git clone https://github.com/sassoftware/sas-container-recipes.git

Make sure you have enough disk space (around 30 GB) before you build the images. Navigate to the sas-container-recipes directory and start the base image build using the command below.

./build.sh --type full --zip /var/www/html/sasrepo/SAS_Viya_deployment_data.zip --mirror-url http://<docker-vm-hostname>/sasrepo/ --docker-registry-namespace sasviya --docker-registry-url <acr-login-server> --skip-docker-url-validation --skip-docker-registry-push --tag baseimages --verbose

Configure AKS Cluster

Create Azure Container Registry (ACR)

In the Azure portal, navigate to the Container Registry section and create a new container registry, setting the resource group and location.
Once it is created, authenticate to ACR from the Docker admin VM.

az acr login --name <acr-name>

Push Images to ACR

Push the base images into the previously created Azure Container registry.
Log in to the ACR using the docker login command.

docker login <acr-login-server>

The user ID and password can be obtained from the Access Keys section of the ACR repository.

Navigate to the manifests folder, make the dockerPush script executable, and run it to push the images to ACR.
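For example (a sketch; the exact path of the generated manifests folder comes from the build.sh output):

cd sas-container-recipes/builds/full/manifests
chmod +x dockerPush      # make the push script executable
./dockerPush             # push all SAS Viya images to the ACR you are logged in to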

Create AKS Cluster and NodePool

Log in to the Azure portal to launch an Azure Kubernetes cluster and provide the details to create the cluster:

  1. Select a subscription and choose or create a resource group.
  2. Provide a unique Kubernetes cluster name.
  3. Choose a region.
  4. Select the node size (DS2_v2) and count (1) for the primary node pool (agentpool).
  5. Add another node pool with node size (D16_v3) and count (2) for the SAS Viya workloads.

Nodes with the same configuration are grouped together into node pools, and a Kubernetes cluster contains one or more of them. The second node pool (for the SAS Viya microservices) handles the Viya workloads.
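The same cluster can also be created from the Azure CLI. A sketch with illustrative names (viyacluster for the cluster, viyapool for the SAS Viya node pool):

# Create the AKS cluster with the primary node pool (1 x DS2_v2)
az aks create --resource-group <resource-group> --name viyacluster --nodepool-name agentpool --node-count 1 --node-vm-size Standard_DS2_v2 --generate-ssh-keys

# Add the SAS Viya node pool (2 x D16_v3)
az aks nodepool add --resource-group <resource-group> --cluster-name viyacluster --name viyapool --node-count 2 --node-vm-size Standard_D16_v3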

kubectl configuration

To manage a Kubernetes cluster, you can use the Kubernetes command line (kubectl) from the Docker VM, or you can use Azure Cloud Shell, where kubectl is already installed.

Install the Kubernetes CLI on the Docker VM.

az aks install-cli

Connect to the cluster using kubectl.

az aks get-credentials --resource-group <resource-group> --name <cluster-name>

To verify the connection to your cluster, run kubectl get nodes to return a list of the cluster nodes:

Figure 4: kubectl config

Configure Kubernetes

Secrets, configmaps and services

Navigate to the manifests/kubernetes folder and deploy all the SAS-provided secrets, configmaps, and services.

kubectl apply -f secrets/

kubectl apply -f configmaps/

kubectl apply -f services/

Create Imagepullsecret

Kubernetes uses an image pull secret to store information needed to authenticate to your registry. To create the pull secret for an Azure container registry, you provide the service principal ID, password, and the registry URL.

Create a docker-registry secret to pull the images from the container registry.

kubectl create secret docker-registry <secret-name> --docker-server=<acr-login-server> --docker-username=<acr-username> --docker-password=xxxxxx

Storage

A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods and can be dynamically or statically provisioned.
There are multiple options for external volumes; an NFS server on an Azure VM and Azure NetApp Files are two examples of persistent shared storage.
For this deployment, Persistent Volumes (PV) and Persistent Volume Claims (PVC) are created to achieve persistence. To persist the state of the configuration, modify the SAS Viya deployment YAML files to attach these PVCs to the pods.
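As an illustration, a minimal statically provisioned PV/PVC pair backed by the NFS server on the Docker VM might look like the sketch below (the names, size, and export path are assumptions, not values generated by the build):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sasviya-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <docker-vm-ip>        # IP address of the NFS server on the Docker VM
    path: /export/sasviya         # assumed NFS export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sasviya-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # empty string binds the claim to the statically created PV
  volumeName: sasviya-pv
  resources:
    requests:
      storage: 100Gi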

Deploy Pods

All the YAML files for deploying the pods are placed in manifests/kubernetes/deployments. These files are created as part of the build process and are customized according to the needs of the architecture. The customizations are listed below.

Note: If you do not specify a CPU limit for a container, then one of these situations applies:

  • The container has no upper bound on the CPU resources it can use and could use all the CPU resources available on the node where it is running.
  • The container is running in a namespace that has a default CPU limit, and the container is automatically assigned that limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.

  • nodeSelector is a field of the pod spec. Here, an agentpool nodeSelector is defined on the pods so they are scheduled to the intended node pool.
  • A PVC section is added to mount the previously created persistent volume claims.
  • imagePullSecrets is included so the pods can pull images from ACR.
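Taken together, the customizations look roughly like the deployment YAML excerpt below. The pool name, secret name, PVC name, mount path, and resource limits are illustrative; AKS labels each node with an agentpool key whose value is the node pool name.

spec:
  template:
    spec:
      nodeSelector:
        agentpool: viyapool              # schedule the pod onto the SAS Viya node pool
      imagePullSecrets:
        - name: acrsecret                # image pull secret created earlier for ACR
      containers:
        - name: sas-viya-httpproxy
          resources:
            limits:
              cpu: "2"                   # explicit CPU limit so the container is bounded
              memory: 4Gi
          volumeMounts:
            - name: viya-config
              mountPath: /opt/sas/viya/config   # persist the SAS configuration
      volumes:
        - name: viya-config
          persistentVolumeClaim:
            claimName: sasviya-pvc       # PVC created in the Storage step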

Network

Now all the pods are in a running state and you are ready to access the SAS Viya application. But how?

In Kubernetes, nodes, pods, and services all have their own IP addresses, which are not reachable from a machine outside the cluster, such as your desktop. There are several options for connecting to nodes, pods, and services from outside the cluster.

In this deployment we have used Kubernetes ingress resources to configure the ingress rules and routes for individual Kubernetes services.

Public IP

By default, the public IP address assigned to a load balancer resource created by an AKS cluster is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.

az network public-ip create --resource-group mc_kubcluspoc_xxx_eastus --name viyapubIP --sku Standard --allocation-method static

Ingress Controller

An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. A classic load balancer can also be used for network routing, but ingress is preferred for an application like SAS Viya because it provides SSL termination and name-based virtual hosting.

Deploy the ingress controller with a suitable name, the static IP created previously, and the preferred DNS name for the web server. Run this command from Cloud Shell.

helm install stable/nginx-ingress --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --set controller.service.loadBalancerIP="<static-ip>" --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<dns-label>"

SSL and Ingress Secret

To allow Kubernetes to use the TLS certificate and private key for the ingress controller, you need to create and use a secret. The secret is defined once and uses the certificate provided.

Copy the required certificate and key files to the Docker admin server and run the following command.

kubectl create secret tls ingresscrtkey --key cert.key --cert cert.crt

Ingress Route

The ingress controller has been configured successfully, and the SAS Viya application is running as well. But traffic from the Internet needs to be routed from the address https://<dns-name>/ to the service sas-viya-httpproxy using an ingress resource. The ingress resource configures the rules that route traffic to the application.
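A minimal ingress resource for this routing might look like the following sketch. It assumes the networking.k8s.io/v1beta1 Ingress API that was current for AKS clusters in the SAS Viya 3.5 time frame, the TLS secret created above, and a placeholder DNS name.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sas-viya-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the NGINX ingress controller
spec:
  tls:
    - hosts:
        - <dns-name>                     # DNS label assigned to the ingress controller
      secretName: ingresscrtkey          # TLS secret created in the previous step
  rules:
    - host: <dns-name>
      http:
        paths:
          - path: /
            backend:
              serviceName: sas-viya-httpproxy
              servicePort: 80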

kubectl apply -f <ingress-resource>.yaml

Directory Service Integration

With SAS Viya running in containers, the usual approach is to use OpenLDAP, as discussed in SAS blogs. This deployment, however, is integrated with Active Directory using the Compute Server Service Account method for host authentication.

The Active Directory server is integrated with the Identities service by creating a Kubernetes Service and Endpoints object with the AD server IP address. The service account is created by making a REST API call to the Viya server from the sas-admin CLI. The host ID for the service account should be created in the pod lifecycle for the CAS, CAS worker, Compute Server, and programming pods. Steps for creating the service account are provided in the references below.
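A sketch of the Service and Endpoints objects that expose the AD server inside the cluster is shown below; the object name is illustrative, and port 389 (standard LDAP) would be replaced with 636 for LDAPS.

apiVersion: v1
kind: Service
metadata:
  name: ad-server                  # Service without a selector
spec:
  ports:
    - protocol: TCP
      port: 389
      targetPort: 389
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ad-server                  # must match the Service name
subsets:
  - addresses:
      - ip: <ad-server-ip>         # IP address of the Active Directory server
    ports:
      - port: 389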

Conclusion

At this point, SAS Viya should be up and running in containers in your Azure environment. From this baseline setup, you can explore additional features of Kubernetes. For example, you can leverage additional container orchestration capabilities, scale the containers up or down, and/or deploy new SAS Viya containers for specific workloads.
Please reach out to us if you have any questions or would like to discuss the best way to use these additional options.

References

sas-container-recipes
https://github.com/sassoftware/sas-container-recipes

Deploying the Full SAS Viya Stack in Kubernetes
https://blogs.sas.com/content/sgf/2019/06/10/deploying-the-full-sas-viya-stack-in-kubernetes/

Use an NFS volume with AKS
https://docs.microsoft.com/en-us/azure/aks/azure-nfs-volume

Integrate Azure NetApp Files with AKS
https://docs.microsoft.com/en-us/azure/aks/azure-netapp-files

SAS Viya 3.5 Compute Server Service Accounts
https://communities.sas.com/t5/SAS-Communities-Library/SAS-Viya-3-5-Compute-Server-Service-Accounts/ta-p/620992

About the Authors

Sanket Mitra is a Technical Architect with SAS and Cloud specialization at Core Compete.
Pradeep Kumar Rajendran is a Senior Technical Consultant with SAS and Cloud specialization at Core Compete.

Acknowledgments

Application Engineering – Core Compete