Deploying to Amazon EKS using Helm Charts

Amazon EKS (Elastic Kubernetes Service) is a managed service for running Kubernetes in the AWS cloud and on your own on-premises servers.


Before deploying Wyn Enterprise to Amazon Elastic Kubernetes Service (EKS) using Helm charts, ensure that the following tools are installed on your local machine:

  1. kubectl: kubectl is a command-line tool used to communicate with the Kubernetes API server. See the Installing or updating kubectl help doc in the AWS documentation for more information on installing kubectl.

  2. eksctl: eksctl is a command-line tool used to create and manage Kubernetes clusters on Amazon EKS. See the Installing or updating eksctl help doc in the AWS documentation for more information on installing eksctl.

  3. Helm: Helm is a package manager for Kubernetes that is used to package, share, and deploy software built for Kubernetes. See the Helm Docs for more information on using Helm charts.

  4. aws-cli: The AWS CLI is a command-line tool used to work with AWS services, including Amazon EKS. See the Installing, updating, and uninstalling the AWS CLI help doc for more information.



Follow the steps below to deploy Wyn Enterprise to Amazon EKS using Helm charts.

  1. Create AWS EKS Cluster

    Run the following command to create your Amazon EKS cluster:

    eksctl create cluster --name {your-cluster-name} --region {your-region-code}

    See the Getting started with Amazon EKS – eksctl help article for more information on creating an Amazon EKS cluster.

    Note: This step may take several minutes to complete.
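
    Once the cluster is ready, you can confirm that kubectl can reach it. A minimal check, using the same placeholders as above:

    # Point kubectl at the new cluster (eksctl normally updates your kubeconfig automatically)
    aws eks update-kubeconfig --name {your-cluster-name} --region {your-region-code}

    # Each worker node should report a Ready status
    kubectl get nodes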

  2. Create an IAM OIDC Provider

    To create an IAM OIDC identity provider for your cluster with eksctl, follow the instructions below.


    i) Retrieve the OIDC issuer ID of your cluster and store it in a variable. Replace {your-cluster-name} with your own value.

    cluster_name={your-cluster-name}
    oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
    echo $oidc_id

    ii) Use the following command to determine whether an IAM OIDC provider with your cluster's issuer ID is already in your account.

    aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

    If output is returned, an IAM OIDC provider already exists for your cluster and you can skip the next step.


    iii) If no output is returned, create an IAM OIDC provider for your cluster using the following command.

    eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve

    See the Creating an IAM OIDC provider for your cluster help article for more information on creating an IAM OIDC provider.
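
    To confirm that the provider now exists, you can repeat the check from step ii); this time the command should print the issuer ID:

    aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4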

  3. Install the AWS Load Balancer Controller

    To install the AWS Load Balancer Controller, use the following commands:

    # To download the IAM policy file
    # AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy_us-gov.json
    mv iam_policy_us-gov.json iam_policy.json
    
    # All other AWS Regions
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
    
    # To create an IAM policy
    aws iam create-policy \
        --policy-name AWSLoadBalancerControllerIAMPolicy \
        --policy-document file://iam_policy.json
    
    # To create an IAM role and a Kubernetes service account
    eksctl create iamserviceaccount \
      --cluster={your_cluster_name} \
      --namespace=kube-system \
      --name=aws-load-balancer-controller \
      --role-name AmazonEKSLoadBalancerControllerRole \
      --attach-policy-arn=arn:aws:iam::{your_account_id}:policy/AWSLoadBalancerControllerIAMPolicy \
      --approve
    
    # To install the AWS load balancer controller using Helm v3
    helm repo add eks https://aws.github.io/eks-charts
    helm repo update eks
    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
      -n kube-system \
      --set clusterName={your_cluster_name} \
      --set serviceAccount.create=false \
      --set serviceAccount.name=aws-load-balancer-controller

    See the Installing the AWS Load Balancer Controller add-on help article for more information on installing the AWS load balancer controller.
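
    Before moving on, you can verify that the controller was installed and its deployment is available:

    # The deployment should report its replicas as ready
    kubectl get deployment -n kube-system aws-load-balancer-controller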

  4. Install Amazon EFS CSI Driver

    To install the Amazon EFS CSI Driver, follow the instructions below.

    i) Create the IAM role and Kubernetes service accounts using the following commands:

    # To download the IAM policy document
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
    
    # To create the policy
    aws iam create-policy \
        --policy-name AmazonEKS_EFS_CSI_DriverPolicy \
        --policy-document file://iam-policy-example.json
    
    # To create the IAM role AmazonEKS_EFS_CSI_DriverRole
    eksctl create iamserviceaccount \
        --name efs-csi-fake-sa \
        --namespace kube-system \
        --cluster {your_cluster_name} \
        --role-name AmazonEKS_EFS_CSI_DriverRole \
        --role-only \
        --attach-policy-arn arn:aws:iam::{your_account_id}:policy/AmazonEKS_EFS_CSI_DriverPolicy \
        --approve
    
    # Edit the trust policy of the role AmazonEKS_EFS_CSI_DriverRole so it covers both service accounts
    TRUST_POLICY=$(aws iam get-role --role-name AmazonEKS_EFS_CSI_DriverRole --query 'Role.AssumeRolePolicyDocument' | \
        sed -e 's/efs-csi-fake-sa/efs-csi-*/' -e 's/StringEquals/StringLike/')
    aws iam update-assume-role-policy --role-name AmazonEKS_EFS_CSI_DriverRole --policy-document "$TRUST_POLICY"
    
    # To create the service account efs-csi-controller-sa
    # (the policy is already attached to the role, so only the existing role is referenced here)
    eksctl create iamserviceaccount \
        --name efs-csi-controller-sa \
        --namespace kube-system \
        --cluster {your_cluster_name} \
        --attach-role-arn arn:aws:iam::{your_account_id}:role/AmazonEKS_EFS_CSI_DriverRole \
        --approve

    # To create the service account efs-csi-node-sa
    eksctl create iamserviceaccount \
        --name efs-csi-node-sa \
        --namespace kube-system \
        --cluster {your_cluster_name} \
        --attach-role-arn arn:aws:iam::{your_account_id}:role/AmazonEKS_EFS_CSI_DriverRole \
        --approve

    See the Amazon EFS CSI driver help article for more information on installing the Amazon EFS CSI Driver.


    ii) Install the Amazon EFS CSI driver using the following commands:

    # To install the Amazon EFS CSI driver using Helm v3
    helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
    helm repo update aws-efs-csi-driver
    helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
        --namespace kube-system \
        --set image.repository={amazon_container_image_registry}/eks/aws-efs-csi-driver \
        --set controller.serviceAccount.create=false \
        --set controller.serviceAccount.name=efs-csi-controller-sa

    Replace {amazon_container_image_registry} with the container image registry address for your AWS Region; you can find the address list at this link.
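
    After installation, you can check that the driver's controller and node pods have started. A minimal check, assuming the chart's standard labels:

    # All efs-csi pods should reach a Running state
    kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-efs-csi-driver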


    iii) Create an Amazon EFS file system using the following commands:

    # To retrieve the VPC ID of the cluster
    vpc_id=$(aws eks describe-cluster \
        --name {your_cluster_name} \
        --query "cluster.resourcesVpcConfig.vpcId" \
        --output text)
    
    # To retrieve the CIDR range for your cluster's VPC
    cidr_range=$(aws ec2 describe-vpcs \
        --vpc-ids $vpc_id \
        --query "Vpcs[].CidrBlock" \
        --output text \
        --region {your_region_code})
    
    # To create a security group with an inbound rule that allows inbound NFS traffic for your Amazon EFS mount points
    security_group_id=$(aws ec2 create-security-group \
        --group-name AmazonEfsSecurityGroup \
        --description "Amazon EFS security group" \
        --vpc-id $vpc_id \
        --output text)
    aws ec2 authorize-security-group-ingress \
        --group-id $security_group_id \
        --protocol tcp \
        --port 2049 \
        --cidr $cidr_range
    
    # To create an Amazon EFS file system for your Amazon EKS cluster
    file_system_id=$(aws efs create-file-system \
        --region {your_region_code} \
        --performance-mode generalPurpose \
        --query 'FileSystemId' \
        --output text)
    
    # To get the IDs of the subnets in your VPC 
    aws ec2 describe-subnets \
        --filters "Name=vpc-id,Values=$vpc_id" \
        --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
        --output table
    
    # To add a mount target for each subnet in your VPC (run once per subnet ID returned above)
    aws efs create-mount-target \
        --file-system-id $file_system_id \
        --subnet-id {subnet_id} \
        --security-groups $security_group_id

    See this help article for more information on creating an Amazon EFS file system.
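
    You can confirm that the mount targets were created before continuing:

    # Each mount target should eventually report a LifeCycleState of "available"
    aws efs describe-mount-targets --file-system-id $file_system_id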


    iv) Create a storage class for EFS using the following commands:

    # retrieve your Amazon EFS file system id
    aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
    
    # download a storage manifest file
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
    
    # replace the value for "fileSystemId" with your EFS file system id in the file "storageclass.yaml" and save it
    # fileSystemId: {your_EFS_file_system_id}
    
    # deploy the storage class
    kubectl apply -f storageclass.yaml

    See this help article for more information on creating a storage class for EFS.
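
    The downloaded manifest names the storage class efs-sc, which the PVC in the next steps refers to. You can verify that it was created:

    kubectl get storageclass efs-sc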

  5. Create Docker Image Pulling Secret (Optional Step)

    This step is mandatory if you are using a private Docker repository; otherwise, you can skip it. Use the following command:

    kubectl create secret docker-registry {secret_name} --docker-server={docker_server} --docker-username={user_name} --docker-password={password}
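
    For example, a sketch with hypothetical values (the secret name, server, and username below are placeholders for illustration only):

    kubectl create secret docker-registry wyn-pull-secret \
        --docker-server=registry.example.com \
        --docker-username=deploy-user \
        --docker-password={password}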

    Use the following command to fetch the secret;

    kubectl get secret
  6. Create PVC Resources

    For the PVC resources, the efs-sc storage class is used to dynamically provision the persistent volumes. Follow the instructions below to create a PVC resource.


    i) Prepare a YAML file (name it pvc.yaml) like the one below:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-wyn-data
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: efs-sc
      resources:
        requests:
          storage: 30Gi

    ii) Use the following command to create the PVC resource;

    kubectl apply -f pvc.yaml

    iii) Fetch the PVC resource using the following command;

    kubectl get pvc
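
    The claim named in the manifest should report a STATUS of Bound once the EFS-backed volume is provisioned:

    kubectl get pvc pvc-wyn-data
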
  7. Deploy Wyn

    Follow the instructions below to deploy Wyn in Amazon EKS.

    i) Prepare a configuration file (name it eks-values.yaml) like the one below:

    pvcName: pvc-wyn-data
    ingress:
      enabled: true
      apiVersion: networking.k8s.io/v1
      name: wyn-ingress
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
      hosts:
        - paths:
            - /

    identityServerUrl: http://wyn-server:51980

    database:
      provider: {database_provider}
      connectionStrings:
        dataExtraction: {database_connection_string}
        serverStorage: {database_connection_string}
        identityServer: {database_connection_string}

    server:
      replicas: 1
      enabled: true
    analysisDbService:
      enabled: false
    schedulerService:
      enabled: true
    memoryDbService:
      enabled: true
    dataSourceService:
      enabled: true
    cotWorker:
      enabled: true
      replicas: 1
    reportingWorker:
      enabled: true
      replicas: 1
    dashboardWorker:
      enabled: true
      replicas: 1

    ii) Deploy Wyn with Helm v3 using the following commands;

    # add the helm repo
    helm repo add wyn https://cdn.wynenterprise.io/BI/installation/helm-charts/
    helm repo update wyn

    # deploy Wyn
    helm install wyn -f eks-values.yaml wyn/wyn-enterprise

    # wait until all pods are running and ready
    kubectl get pod

    # get the public address provided by the ingress
    kubectl get ingress wyn-ingress

    Now you can visit Wyn Enterprise at the URL http://{ingress_address}

  8. Update the Settings

    To update the Helm chart settings, edit the eks-values.yaml configuration file, and then update your deployment using the following command;

    helm upgrade wyn -f eks-values.yaml wyn/wyn-enterprise
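
    For example, to run two reporting workers you could change reportingWorker.replicas in eks-values.yaml, or override the value directly on the command line (a sketch using a key from the values file above):

    helm upgrade wyn -f eks-values.yaml wyn/wyn-enterprise --set reportingWorker.replicas=2
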
  9. Horizontal Pod Autoscaler (HPA)

    The Kubernetes Horizontal Pod Autoscaler automatically scales the number of pods in a deployment based on observed resource utilization. See Horizontal Pod Autoscaler for more information. Follow the instructions below to deploy HPAs.

    i) Use the following commands to install the Kubernetes Metrics Server:

    # Deploy the Metrics Server
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    # Verify that the metrics-server deployment is running the desired number of pods
    kubectl get deployment metrics-server -n kube-system

    See Installing the Kubernetes Metrics Server help article for more information.


    ii) Create HPAs based on your requirements.

    Use the following command to create your HPA resource;

    kubectl autoscale deployment wyn-server --cpu-percent=25 --min=2 --max=4

    You can also create a YAML file (name it hpa.yaml) like the one below:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-wyn-server
    spec:
      maxReplicas: 2
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: wyn-server
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 25
    ---
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-wyn-cot-worker
    spec:
      maxReplicas: 2
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: StatefulSet
        name: wyn-cot-worker
      metrics:
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 40

    Deploy the autoscalers and check their status using the following commands:

    kubectl apply -f hpa.yaml
    kubectl get hpa

    See the HorizontalPodAutoscaler Walkthrough article for more information.

To Uninstall Wyn

Use the following command to uninstall the Wyn Enterprise application;

helm uninstall wyn
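
Note that helm uninstall removes only the chart's resources. The PVC, the EFS file system, and the cluster itself were created outside the chart in the steps above, so delete them separately if they are no longer needed. A sketch, using the same names and placeholders as above:

# remove the PVC created in step 6
kubectl delete -f pvc.yaml

# delete the EFS file system (its mount targets must be deleted first)
aws efs delete-file-system --file-system-id {your_EFS_file_system_id}

# delete the EKS cluster
eksctl delete cluster --name {your-cluster-name} --region {your-region-code}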