Basic Setup Service for Kubernetes Cluster on Kuberaya
Introduction
KubeRaya can auto-provision Kubernetes cluster nodes across multiple regions; clusters can currently be created in Jakarta, Surabaya, and Seattle. KubeRaya also manages IP provisioning whenever you add an IP through a Load Balancer service, a NodePort, or an Ingress service.
However, to use services such as Ingress and persistent storage backed by a storage class, the Kubernetes cluster you have created needs some additional setup.
This article explains how to implement that additional setup so you can enable the services you need.
Setup Points
The additional setup explained here covers two services: Ingress and a Storage Class.
Ingress Setup
Before starting the Ingress setup, we first need to decide which Ingress Controller to run on the cluster we have built.
The controller most commonly used in Kubernetes clusters is the NGINX Ingress Controller; the Kubernetes project also supports and maintains controllers for AWS and GCE.
Referring to the Kubernetes documentation, we will proceed with the NGINX Ingress Controller. The simplest approach is to deploy it into the Kubernetes cluster using its Helm chart. You can inspect the chart's full default values with the following command:
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
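If you plan to customize the values, it is convenient to save the output to a local file first; the file name values.yaml below is just a convention:
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx > values.yaml
You can then edit values.yaml and pass it to helm with -f when installing, as shown later in this guide.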
In the Helm chart values, I will change the workload kind to DaemonSet so that a controller pod runs on every available node. I will also adjust several variables to support more optimal Ingress performance.
Furthermore, I will add tolerations to the pod configuration so that it can also run on controller nodes that carry the NoSchedule taint.
Here is the full example values file after editing the relevant variables.
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
##
## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:
# -- Override the deployment namespace; defaults to .Release.Namespace
namespaceOverride: ""
## Labels to apply to all resources
##
commonLabels: {}
# scmhash: abc123
# myLabel: aakkmd
controller:
name: controller
enableAnnotationValidations: false
image:
## Keep false as default for now!
chroot: false
registry: registry.k8s.io
image: ingress-nginx/controller
## for backwards compatibility consider setting the full image url via the repository value below
## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
## repository:
tag: "v1.9.4"
digest: sha256:5b161f051d017e55d358435f295f5e9a297e66158f136321d9b04520ec6c48a3
digestChroot: sha256:5976b1067cfbca8a21d0ba53d71f83543a73316a61ea7f7e436d6cf84ddf9b26
pullPolicy: IfNotPresent
# www-data -> uid 101
runAsUser: 101
allowPrivilegeEscalation: true
# -- Use an existing PSP instead of creating one
existingPsp: ""
# -- Configures the controller container name
containerName: controller
# -- Configures the ports that the nginx-controller listens on
containerPort:
http: 80
https: 443
# -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
config:
# Fine Tuning
use-forwarded-headers: "true"
proxy-connect-timeout: "120"
proxy-read-timeout: "120"
proxy-body-size: "32m"
# -- Annotations to be added to the controller config configuration configmap.
configAnnotations: {}
# -- Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
proxySetHeaders: {}
# -- Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
addHeaders: {}
# -- Optionally customize the pod dnsConfig.
dnsConfig: {}
# -- Optionally customize the pod hostAliases.
hostAliases: []
# - ip: 127.0.0.1
# hostnames:
# - foo.local
# - bar.local
# - ip: 10.1.2.3
# hostnames:
# - foo.remote
# - bar.remote
# -- Optionally customize the pod hostname.
hostname: {}
# -- Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
# By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
# to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
dnsPolicy: ClusterFirst
# -- Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
# Ingress status was blank because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
reportNodeInternalIp: false
# -- Process Ingress objects without ingressClass annotation/ingressClassName field
# Overrides value for --watch-ingress-without-class flag of the controller binary
# Defaults to false
watchIngressWithoutClass: false
# -- Process IngressClass per name (additionally as per spec.controller).
ingressClassByName: false
# -- This configuration enables Topology Aware Routing feature, used together with service annotation service.kubernetes.io/topology-mode="auto"
# Defaults to false
enableTopologyAwareRouting: false
# -- This configuration defines if Ingress Controller should allow users to set
# their own *-snippet annotations, otherwise this is forbidden / dropped
# when users add those annotations.
# Global snippets in ConfigMap are still respected
allowSnippetAnnotations: false
# -- Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
# since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
# is merged
hostNetwork: false
## Use host ports 80 and 443
## Disabled by default
hostPort:
# -- Enable 'hostPort' or not
enabled: false
ports:
# -- 'hostPort' http port
http: 80
# -- 'hostPort' https port
https: 443
# NetworkPolicy for controller component.
networkPolicy:
# -- Enable 'networkPolicy' or not
enabled: false
# -- Election ID to use for status update, by default it uses the controller name combined with a suffix of 'leader'
electionID: ""
## This section refers to the creation of the IngressClass resource
## IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
ingressClassResource:
# -- Name of the ingressClass
name: nginx
# -- Is this ingressClass enabled or not
enabled: true
# -- Is this the default ingressClass for the cluster
default: false
# -- Controller-value of the controller that is processing this ingressClass
controllerValue: "k8s.io/ingress-nginx"
# -- Parameters is a link to a custom resource containing additional
# configuration for the controller. This is optional if the controller
# does not require extra parameters.
parameters: {}
# -- For backwards compatibility with ingress.class annotation, use ingressClass.
# Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
ingressClass: nginx
# -- Labels to add to the pod container metadata
podLabels: {}
# key: value
# -- Security Context policies for controller pods
podSecurityContext: {}
# -- See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctls
sysctls: {}
# sysctls:
# "net.core.somaxconn": "8192"
# -- Allows customization of the source of the IP address or FQDN to report
# in the ingress status field. By default, it reads the information provided
# by the service. If disabled, the status field reports the IP address of the
# node or nodes where an ingress controller pod is running.
publishService:
# -- Enable 'publishService' or not
enabled: true
# -- Allows overriding of the publish service to bind to
# Must be <namespace>/<service_name>
pathOverride: ""
# Limit the scope of the controller to a specific namespace
scope:
# -- Enable 'scope' or not
enabled: false
# -- Namespace to limit the controller to; defaults to $(POD_NAMESPACE)
namespace: ""
# -- When scope.enabled == false, instead of watching all namespaces, we watch only the namespaces whose labels
# match the namespaceSelector. Format like foo=bar. Defaults to empty, meaning all namespaces are watched.
namespaceSelector: ""
# -- Allows customization of the configmap / nginx-configmap namespace; defaults to $(POD_NAMESPACE)
configMapNamespace: ""
tcp:
# -- Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)
configMapNamespace: ""
# -- Annotations to be added to the tcp config configmap
annotations: {}
udp:
# -- Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)
configMapNamespace: ""
# -- Annotations to be added to the udp config configmap
annotations: {}
# -- Maxmind license key to download GeoLite2 Databases.
## https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
maxmindLicenseKey: ""
# -- Additional command line arguments to pass to Ingress-Nginx Controller
# E.g. to specify the default SSL certificate you can use
extraArgs: {}
## extraArgs:
##   default-ssl-certificate: "<namespace>/<secret_name>"
# -- Additional environment variables to set
extraEnvs: []
# extraEnvs:
# - name: FOO
# valueFrom:
# secretKeyRef:
# key: FOO
# name: secret-resource
# -- Use a `DaemonSet` or `Deployment`
# kind: Deployment
kind: DaemonSet
# -- Annotations to be added to the controller Deployment or DaemonSet
##
annotations: {}
# keel.sh/pollSchedule: "@every 60m"
# -- Labels to be added to the controller Deployment or DaemonSet and other resources that do not have option to specify labels
##
labels: {}
# keel.sh/policy: patch
# keel.sh/trigger: poll
# -- The update strategy to apply to the Deployment or DaemonSet
##
updateStrategy: {}
# rollingUpdate:
# maxUnavailable: 1
# type: RollingUpdate
# -- `minReadySeconds` to avoid killing pods before we are ready
##
minReadySeconds: 0
# -- Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
tolerations:
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
# -- Affinity and anti-affinity rules for server scheduling to nodes
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
# # An example of preferred pod anti-affinity, weight is in the range 1-100
# podAntiAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 100
# podAffinityTerm:
# labelSelector:
# matchExpressions:
# - key: app.kubernetes.io/name
# operator: In
# values:
# - ingress-nginx
# - key: app.kubernetes.io/instance
# operator: In
# values:
# - ingress-nginx
# - key: app.kubernetes.io/component
# operator: In
# values:
# - controller
# topologyKey: kubernetes.io/hostname
# # An example of required pod anti-affinity
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: app.kubernetes.io/name
# operator: In
# values:
# - ingress-nginx
# - key: app.kubernetes.io/instance
# operator: In
# values:
# - ingress-nginx
# - key: app.kubernetes.io/component
# operator: In
# values:
# - controller
# topologyKey: "kubernetes.io/hostname"
# -- Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
##
topologySpreadConstraints: []
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
# app.kubernetes.io/instance: '{{ .Release.Name }}'
# app.kubernetes.io/component: controller
# topologyKey: topology.kubernetes.io/zone
# maxSkew: 1
# whenUnsatisfiable: ScheduleAnyway
# - labelSelector:
# matchLabels:
# app.kubernetes.io/name: '{{ include "ingress-nginx.name" . }}'
# app.kubernetes.io/instance: '{{ .Release.Name }}'
# app.kubernetes.io/component: controller
# topologyKey: kubernetes.io/hostname
# maxSkew: 1
# whenUnsatisfiable: ScheduleAnyway
# -- `terminationGracePeriodSeconds` to avoid killing pods before we are ready
## wait up to five minutes for the drain of connections
##
terminationGracePeriodSeconds: 300
# -- Node labels for controller pod assignment
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
##
nodeSelector:
kubernetes.io/os: linux
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
## startupProbe:
## httpGet:
## # should match container.healthCheckPath
## path: "/healthz"
## port: 10254
## scheme: HTTP
## initialDelaySeconds: 5
## periodSeconds: 5
## timeoutSeconds: 2
## successThreshold: 1
## failureThreshold: 5
livenessProbe:
httpGet:
# should match container.healthCheckPath
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
# should match container.healthCheckPath
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
# -- Path of the health check endpoint. All requests received on the port defined by
# the healthz-port parameter are forwarded internally to this path.
healthCheckPath: "/healthz"
# -- Address to bind the health check endpoint.
# It is better to set this option to the internal node address
# if the Ingress-Nginx Controller is running in the `hostNetwork: true` mode.
healthCheckHost: ""
# -- Annotations to be added to controller pods
##
podAnnotations: {}
replicaCount: 1
# -- Minimum available pods set in PodDisruptionBudget.
# Define either 'minAvailable' or 'maxUnavailable', never both.
minAvailable: 1
# -- Maximum unavailable pods set in PodDisruptionBudget. If set, 'minAvailable' is ignored.
# maxUnavailable: 1
## Define requests resources to avoid probe issues due to CPU utilization in busy nodes
## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
## Ideally, there should be no limits.
## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
resources:
## limits:
## cpu: 100m
## memory: 90Mi
requests:
cpu: 100m
memory: 90Mi
# Mutually exclusive with keda autoscaling
autoscaling:
enabled: false
annotations: {}
minReplicas: 1
maxReplicas: 11
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
behavior: {}
# scaleDown:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 1
# periodSeconds: 180
# scaleUp:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 2
# periodSeconds: 60
autoscalingTemplate: []
# Custom or additional autoscaling metrics
# ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
# - type: Pods
# pods:
# metric:
# name: nginx_ingress_controller_nginx_process_requests_total
# target:
# type: AverageValue
# averageValue: 10000m
# Mutually exclusive with hpa autoscaling
keda:
apiVersion: "keda.sh/v1alpha1"
## apiVersion changes with keda 1.x vs 2.x
## 2.x = keda.sh/v1alpha1
## 1.x = keda.k8s.io/v1alpha1
enabled: false
minReplicas: 1
maxReplicas: 11
pollingInterval: 30
cooldownPeriod: 300
# fallback:
# failureThreshold: 3
# replicas: 11
restoreToOriginalReplicaCount: false
scaledObject:
annotations: {}
# Custom annotations for ScaledObject resource
# annotations:
# key: value
triggers: []
# - type: prometheus
# metadata:
# serverAddress: http://<prometheus-host>:9090
# metricName: http_requests_total
# threshold: '100'
# query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
behavior: {}
# scaleDown:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 1
# periodSeconds: 180
# scaleUp:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 2
# periodSeconds: 60
# -- Enable mimalloc as a drop-in replacement for malloc.
## ref: https://github.com/microsoft/mimalloc
##
enableMimalloc: true
## Override NGINX template
customTemplate:
configMapName: ""
configMapKey: ""
service:
enabled: true
# -- If enabled is adding an appProtocol option for Kubernetes service. An appProtocol field replacing annotations that were
# using for setting a backend protocol. Here is an example for AWS: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# It allows choosing the protocol for each backend specified in the Kubernetes service.
# See the following GitHub issue for more details about the purpose: https://github.com/kubernetes/kubernetes/issues/40244
# Will be ignored for Kubernetes versions older than 1.20
##
appProtocol: true
# -- Annotations are mandatory for the load balancer to come up. Varies with the cloud service. Values passed through helm tpl engine.
annotations: {}
labels: {}
# clusterIP: ""
# -- List of IP addresses at which the controller services are available
## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
externalIPs: []
# -- Used by cloud providers to connect the resulting `LoadBalancer` to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
loadBalancerIP: ""
loadBalancerSourceRanges: []
# -- Used by cloud providers to select a load balancer implementation other than the cloud provider default. https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-class
loadBalancerClass: ""
enableHttp: true
enableHttps: true
## Set external traffic policy to: "Local" to preserve source IP on providers supporting it.
## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
# externalTrafficPolicy: ""
## Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
# sessionAffinity: ""
## Specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified,
## the service controller allocates a port from your cluster’s NodePort range.
## Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
# healthCheckNodePort: 0
# -- Represents the dual-stack-ness requested or required by this Service. Possible values are
# SingleStack, PreferDualStack or RequireDualStack.
# The ipFamilies and clusterIPs fields depend on the value of this field.
## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
ipFamilyPolicy: "SingleStack"
# -- List of IP families (e.g. IPv4, IPv6) assigned to the service. This field is usually assigned automatically
# based on cluster configuration and the ipFamilyPolicy field.
## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
ipFamilies:
- IPv4
ports:
http: 80
https: 443
targetPorts:
http: http
https: https
type: LoadBalancer
## type: NodePort
## nodePorts:
## http: 32080
## https: 32443
## tcp:
## 8080: 32808
nodePorts:
http: ""
https: ""
tcp: {}
udp: {}
external:
enabled: true
internal:
# -- Enables an additional internal load balancer (besides the external one).
enabled: false
# -- Annotations are mandatory for the load balancer to come up. Varies with the cloud service. Values passed through helm tpl engine.
annotations: {}
# -- Used by cloud providers to connect the resulting internal LoadBalancer to a pre-existing static IP. Make sure to add to the service the needed annotation to specify the subnet which the static IP belongs to. For instance, `networking.gke.io/internal-load-balancer-subnet` for GCP and `service.beta.kubernetes.io/aws-load-balancer-subnets` for AWS.
loadBalancerIP: ""
# -- Restrict access For LoadBalancer service. Defaults to 0.0.0.0/0.
loadBalancerSourceRanges: []
## Set external traffic policy to: "Local" to preserve source IP on
## providers supporting it
## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
# externalTrafficPolicy: ""
# -- Custom port mapping for internal service
ports: {}
# http: 80
# https: 443
# -- Custom target port mapping for internal service
targetPorts: {}
# http: http
# https: https
# shareProcessNamespace enables process namespace sharing within the pod.
# This can be used for example to signal log rotation using `kill -USR1` from a sidecar.
shareProcessNamespace: false
# -- Additional containers to be added to the controller pod.
# See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
extraContainers: []
# - name: my-sidecar
# image: nginx:latest
# - name: lemonldap-ng-controller
# image: lemonldapng/lemonldap-ng-controller:0.2.0
# args:
# - /lemonldap-ng-controller
# - --alsologtostderr
# - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
# env:
# - name: POD_NAME
# valueFrom:
# fieldRef:
# fieldPath: metadata.name
# - name: POD_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
# volumeMounts:
# - name: copy-portal-skins
# mountPath: /srv/var/lib/lemonldap-ng/portal/skins
# -- Additional volumeMounts to the controller main container.
extraVolumeMounts: []
# - name: copy-portal-skins
# mountPath: /var/lib/lemonldap-ng/portal/skins
# -- Additional volumes to the controller pod.
extraVolumes: []
# - name: copy-portal-skins
# emptyDir: {}
# -- Containers, which are run before the app containers are started.
extraInitContainers: []
# - name: init-myservice
# image: busybox
# command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
# -- Modules, which are mounted into the core nginx image. See values.yaml for a sample to add opentelemetry module
extraModules: []
# - name: mytestmodule
# image: registry.k8s.io/ingress-nginx/mytestmodule
# containerSecurityContext:
# allowPrivilegeEscalation: false
#
# The image must contain a `/usr/local/bin/init_module.sh` executable, which
# will be executed as initContainers, to move its config files within the
# mounted volume.
opentelemetry:
enabled: false
image: registry.k8s.io/ingress-nginx/opentelemetry:v20230721-3e2062ee5@sha256:13bee3f5223883d3ca62fee7309ad02d22ec00ff0d7033e3e9aca7a9f60fd472
containerSecurityContext:
allowPrivilegeEscalation: false
resources: {}
admissionWebhooks:
annotations: {}
# ignore-check.kube-linter.io/no-read-only-rootfs: "This deployment needs write access to root filesystem".
## Additional annotations to the admission webhooks.
## These annotations will be added to the ValidatingWebhookConfiguration and
## the Jobs Spec of the admission webhooks.
enabled: true
# -- Additional environment variables to set
extraEnvs: []
# extraEnvs:
# - name: FOO
# valueFrom:
# secretKeyRef:
# key: FOO
# name: secret-resource
# -- Admission Webhook failure policy to use
failurePolicy: Fail
# timeoutSeconds: 10
port: 8443
certificate: "/usr/local/certificates/cert"
key: "/usr/local/certificates/key"
namespaceSelector: {}
objectSelector: {}
# -- Labels to be added to admission webhooks
labels: {}
# -- Use an existing PSP instead of creating one
existingPsp: ""
service:
annotations: {}
# clusterIP: ""
externalIPs: []
# loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 443
type: ClusterIP
createSecretJob:
securityContext:
allowPrivilegeEscalation: false
resources: {}
# limits:
# cpu: 10m
# memory: 20Mi
# requests:
# cpu: 10m
# memory: 20Mi
patchWebhookJob:
securityContext:
allowPrivilegeEscalation: false
resources: {}
patch:
enabled: true
image:
registry: registry.k8s.io
image: ingress-nginx/kube-webhook-certgen
## for backwards compatibility consider setting the full image url via the repository value below
## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
## repository:
tag: v20231011-8b53cabe0
digest: sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
pullPolicy: IfNotPresent
# -- Provide a priority class name to the webhook patching job
##
priorityClassName: ""
podAnnotations: {}
nodeSelector:
kubernetes.io/os: linux
tolerations: []
# -- Labels to be added to patch job resources
labels: {}
securityContext:
runAsNonRoot: true
runAsUser: 2000
fsGroup: 2000
# Use certmanager to generate webhook certs
certManager:
enabled: false
# self-signed root certificate
rootCert:
# default to be 5y
duration: ""
admissionCert:
# default to be 1y
duration: ""
# issuerRef:
# name: "issuer"
# kind: "ClusterIssuer"
metrics:
port: 10254
portName: metrics
# if this port is changed, change healthz-port: in extraArgs: accordingly
enabled: false
service:
annotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "10254"
# -- Labels to be added to the metrics service resource
labels: {}
# clusterIP: ""
# -- List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
externalIPs: []
# loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 10254
type: ClusterIP
# externalTrafficPolicy: ""
# nodePort: ""
serviceMonitor:
enabled: false
additionalLabels: {}
## The label to use to retrieve the job name from.
## jobLabel: "app.kubernetes.io/name"
namespace: ""
namespaceSelector: {}
## Default: scrape .Release.Namespace or namespaceOverride only
## To scrape all, use the following:
## namespaceSelector:
## any: true
scrapeInterval: 30s
# honorLabels: true
targetLabels: []
relabelings: []
metricRelabelings: []
prometheusRule:
enabled: false
additionalLabels: {}
# namespace: ""
rules: []
# # These are just examples rules, please adapt them to your needs
# - alert: NGINXConfigFailed
# expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
# for: 1s
# labels:
# severity: critical
# annotations:
# description: bad ingress config - nginx config test failed
# summary: uninstall the latest ingress changes to allow config reloads to resume
# - alert: NGINXCertificateExpiry
# expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
# for: 1s
# labels:
# severity: critical
# annotations:
# description: ssl certificate(s) will expire in less than a week
# summary: renew expiring certificates to avoid downtime
# - alert: NGINXTooMany500s
# expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
# for: 1m
# labels:
# severity: warning
# annotations:
# description: Too many 5XXs
# summary: More than 5% of all requests returned 5XX, this requires your attention
# - alert: NGINXTooMany400s
# expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
# for: 1m
# labels:
# severity: warning
# annotations:
# description: Too many 4XXs
# summary: More than 5% of all requests returned 4XX, this requires your attention
# -- Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
# With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
# to 300, allowing the draining of connections up to five minutes.
# If the active connections end before that, the pod will terminate gracefully at that time.
# To effectively take advantage of this feature, the Configmap feature
# worker-shutdown-timeout new value is 240s instead of 10s.
##
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
priorityClassName: ""
# -- Rollback limit
##
revisionHistoryLimit: 10
## Default 404 backend
##
defaultBackend:
##
enabled: false
name: defaultbackend
image:
registry: registry.k8s.io
image: defaultbackend-amd64
## for backwards compatibility consider setting the full image url via the repository value below
## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
## repository:
tag: "1.5"
pullPolicy: IfNotPresent
# nobody user -> uid 65534
runAsUser: 65534
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
# -- Use an existing PSP instead of creating one
existingPsp: ""
extraArgs: {}
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
# -- Additional environment variables to set for defaultBackend pods
extraEnvs: []
port: 8080
## Readiness and liveness probes for default backend
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
##
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
# -- The update strategy to apply to the Deployment or DaemonSet
##
updateStrategy: {}
# rollingUpdate:
# maxUnavailable: 1
# type: RollingUpdate
# -- `minReadySeconds` to avoid killing pods before we are ready
##
minReadySeconds: 0
# -- Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
affinity: {}
# -- Security Context policies for controller pods
# See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
# notes on enabling and using sysctls
##
podSecurityContext: {}
# -- Security Context policies for controller main container.
# See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
# notes on enabling and using sysctls
##
containerSecurityContext: {}
# -- Labels to add to the pod container metadata
podLabels: {}
# key: value
# -- Node labels for default backend pod assignment
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
##
nodeSelector:
kubernetes.io/os: linux
# -- Annotations to be added to default backend pods
##
podAnnotations: {}
replicaCount: 1
minAvailable: 1
resources: {}
# limits:
# cpu: 10m
# memory: 20Mi
# requests:
# cpu: 10m
# memory: 20Mi
extraVolumeMounts: []
## Additional volumeMounts to the default backend container.
# - name: copy-portal-skins
# mountPath: /var/lib/lemonldap-ng/portal/skins
extraVolumes: []
## Additional volumes to the default backend pod.
# - name: copy-portal-skins
# emptyDir: {}
autoscaling:
annotations: {}
enabled: false
minReplicas: 1
maxReplicas: 2
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
# NetworkPolicy for default backend component.
networkPolicy:
# -- Enable 'networkPolicy' or not
enabled: false
service:
annotations: {}
# clusterIP: ""
# -- List of IP addresses at which the default backend service is available
## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
externalIPs: []
# loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 80
type: ClusterIP
priorityClassName: ""
# -- Labels to be added to the default backend resources
labels: {}
## Enable RBAC as per https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md and https://github.com/kubernetes/ingress-nginx/issues/266
rbac:
create: true
scope: false
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
enabled: false
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
# -- Annotations for the controller service account
annotations: {}
# -- Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName
# -- TCP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp: {}
# 8080: "default/example-tcp-svc:9000"
# -- UDP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
udp: {}
# 53: "kube-system/kube-dns:53"
# -- Prefix for TCP and UDP ports names in ingress controller service
## Some cloud providers, like Yandex Cloud may have a requirements for a port name regex to support cloud load balancer integration
portNamePrefix: ""
# -- (string) A base64-encoded Diffie-Hellman parameter.
# This can be generated with: `openssl dhparam 4096 2> /dev/null | base64`
## Ref: https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param
dhParam: ""
To recap, these are the changes I made to the default Helm values for the NGINX Ingress Controller: the workload kind is switched from Deployment to DaemonSet, the controller config section gains several fine-tuning options (use-forwarded-headers, proxy-connect-timeout, proxy-read-timeout, and proxy-body-size), and tolerations are added so the pods can also run on controller nodes.
The controller nodes carry the node-role.kubernetes.io/control-plane taint with the NoSchedule effect.
Adding the matching toleration to the pod configuration is straightforward; you just need the following values:
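This is the same tolerations block already set in the full values file above:
tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
You can confirm the taint on your own controller nodes with kubectl describe node <node-name> | grep Taints (the node name is a placeholder).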
To deploy the Nginx Ingress Helm chart to the Kubernetes cluster, please type the following command in the command line:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
-f <path of the values file>.yaml \
--namespace ingress-nginx --create-namespace
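Once the release is installed, a quick sanity check is to confirm that a controller pod is running on every node and that the controller's LoadBalancer service has received an external IP (resource names and output will vary with your cluster):
kubectl get pods -n ingress-nginx -o wide
kubectl get svc -n ingress-nginx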
Testing Ingress Function
After successfully deploying the Ingress controller to the cluster, the next step is to test that it works. The manifest below is an example of exposing a Deployment through an Ingress.
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
name: nginx-svc
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx-deploy
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
resources: {}
status: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: nginx.example.com
http:
paths:
- path: /MyApp
pathType: Prefix
backend:
service:
name: nginx-svc
port:
number: 80
If the manifest deploys and runs smoothly, the example application's page will be served through the Ingress.
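If DNS for the host is not set up yet, you can test by sending a request straight to the controller's external IP with an explicit Host header; the manifest file name and the IP below are placeholders:
kubectl apply -f nginx-demo.yaml
curl -H "Host: nginx.example.com" http://<EXTERNAL-IP>/MyApp
Because of the rewrite-target annotation, the /MyApp path is rewritten to / before reaching the backend, so this request should return the default Nginx welcome page.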
Pro Tips
On Kubernetes clusters spawned through the KubeRaya platform, the Kubernetes Dashboard service and deployment are already available, allowing you to view workload visualizations through a web UI.
Since we have successfully deployed Ingress on this cluster, we can expose the Kubernetes Dashboard service through Ingress as well, without needing to proxy to localhost. For more information, refer to the following KB article:
Accessing KubeRaya Cluster Using the Kubernetes Dashboard
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
name: dashboard
namespace: kubernetes-dashboard
spec:
ingressClassName: nginx
rules:
- host: <your preferred Kubernetes Dashboard domain>
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
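Applying the manifest and checking the resulting Ingress might look like this (the file name is an assumption):
kubectl apply -f dashboard-ingress.yaml
kubectl get ingress -n kubernetes-dashboard
Note the backend-protocol annotation in the manifest: it is needed because the kubernetes-dashboard service itself serves HTTPS on port 443.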
NFS Storage Class Setup
Managing application state in the form of persistent data, such as application files or database files, requires dedicated storage in the form of a storage class in the Kubernetes cluster.
That storage class is then used to provision persistent volumes and persistent volume claims.
In this section, I will explain how to provision a storage class so it can serve as persistent storage for workloads in the Kubernetes cluster.
Spawn NFS VM
First, we need a VM with the NFS packages installed and the NFS service running.
I have set up an NFS VM whose exports are restricted to specific IPs, with matching firewall limitations on the VM.
The /etc/exports configuration:
/mnt/nfs/data 10.30.3.0/24(rw,async,no_root_squash,no_subtree_check)
/mnt/nfs/data 199.180.130.126(rw,async,no_root_squash,no_subtree_check)   # primary IP of the Kubernetes cluster
A firewall ACL on the VM likewise limits which addresses can reach the NFS service.
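For reference, a minimal sketch of preparing such a VM, assuming Ubuntu and the export path used above:
sudo apt update && sudo apt install -y nfs-kernel-server
sudo mkdir -p /mnt/nfs/data
# add the export lines above to /etc/exports, then re-export:
sudo exportfs -ra
sudo systemctl enable --now nfs-kernel-server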
Setting Up the NFS Storage Class
Next, we will provision the NFS storage class on the Kubernetes cluster we have created. We will use the NFS Subdir External Provisioner Helm chart, adding its repository as follows:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
Then, to install the provisioner and create the storage class, run:
helm install -n nfs-provisioning --create-namespace nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<your NFS server IP> --set nfs.path=<your NFS export path>
The Helm command above automatically creates the nfs-provisioning namespace, deploys the provisioner workload, creates the storage class, and sets up the necessary RBAC.
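You can verify the result as follows; by default the chart names the storage class nfs-client, which is what the PVC manifest below references:
kubectl get pods -n nfs-provisioning
kubectl get storageclass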
Testing Persistent Storage with NFS Storage Class
Next, we will test that a workload can store its data persistently using the NFS storage class.
Refer to the following manifest files to create a persistent volume claim and a workload that uses that PVC.
PVC Manifest Example
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: demo-claim
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2G
Workload Manifest Example
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: busybox:latest
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && sleep 600"
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: demo-claim
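Applying both manifests and watching the claim bind might look like this (the file names are assumptions):
kubectl apply -f pvc.yaml -f test-pod.yaml
kubectl get pvc demo-claim
kubectl get pod test-pod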
Based on the example manifests above, the Pod running the Busybox image will create a file named "SUCCESS" in the /mnt directory inside the Pod, which is backed by the NFS-provisioned volume.
After that, we check the NFS export to see whether the file has been created.
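On the NFS server, the provisioner creates one subdirectory per claim, named from the namespace, PVC name, and PV name; the directory name below is illustrative, and yours will differ:
ls /mnt/nfs/data/
ls /mnt/nfs/data/default-demo-claim-pvc-*/   # should contain the SUCCESS file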
We then test further by creating another file inside the Pod, and by destroying and recreating the Pod, to confirm that the data persists across Pod restarts and terminations.
These checks show that data in the mounted PVC directory persists even when the corresponding Pod is deleted.
Conclusion
Spawning a Kubernetes cluster with KubeRaya is very easy, and the CloudRaya dashboard provides a simple, informative interface.
However, equipping the cluster with basic services such as Ingress and a storage class requires the additional setup and installation described above.
To discover more insights and tutorials, please explore our Knowledge Base.