In our overview post about hybrid cloud solutions, we introduced Azure Stack HCI and briefly covered the different solutions within Azure Arc: Azure Arc-enabled servers, Azure Arc-enabled data services, and Azure Arc-enabled Kubernetes. In this post we take a closer look at the key drivers behind our customers' adoption of Azure Arc for Kubernetes and elaborate on some core technical aspects.

Did you know that DexMach is the 4th enabled Microsoft Azure Stack partner worldwide? If you have any questions about hybrid cloud solutions, you are in the right place. We take a cloud-first approach – not from on-premises to the cloud, but from the cloud to on-premises – and we provide the tools and automation to extend to on-premises or multi-cloud environments. Feel free to reach out to us; you can schedule a call at the end of this article.

Alternatively, for a high-level use case design tailored to your organization, opt directly for our three-day Azure Hybrid Cloud Solutions Workshop.

Common scenarios where Azure Arc-enabled Kubernetes can boost your hybrid cloud experience

Every day, more and more applications move to the cloud, and this trend is not slowing down anytime soon. But the cloud is not always the best or only viable place to host your applications. Let's look at some reasons why a hybrid cloud can sometimes be the better option.

One of the most well-known reasons for opting for a hybrid cloud is network latency. For example, manufacturers may use edge computing for applications that control and coordinate heavy machinery. High network latency or a latency spike in such an environment can cost a lot of money or, even worse, lives. The cloud still plays its part, though: by storing and processing analytics data in the cloud, manufacturers can predict when maintenance will be needed.

Another well-known case for hybrid computing is security and compliance with government legislation. Some governments require you to store and process sensitive data only in certain countries or regions. This is where edge computing is a good fit, while the company can still use the cloud to process and store all other data.

An often-overlooked case for hybrid computing is the cost of bandwidth. In most clouds, such as Azure, importing data is free of charge, while exporting data out of the cloud costs a few cents per GB. For most businesses that amounts to just a fraction of the total cloud expense, which is exactly why it is easy to overlook; for workloads that move large volumes of data, processing it at the edge can avoid that cost altogether.

How Azure Arc-enabled Kubernetes can maximize your hybrid cloud experience

Azure Arc for Kubernetes helps solve these problems by bringing the cloud to the edge. Kubernetes is growing immensely, not only because of the open-source community around it, but also because of the wide adoption of containerized applications. The integration of Kubernetes and Arc is a key differentiator for Microsoft compared to other clouds. It ensures portability when you need to move workloads or keep different environments consistent. With Arc-enabled services you have flexibility at the Kubernetes level: you are not required to run a specific distribution designed by Microsoft. As long as the distribution is CNCF certified, it is good to go! The governance and management tooling stays the same across your Kubernetes estate.

Moreover, it adds extra features to your Kubernetes clusters that make governance, security and management operations easier. These include:

  • Kubernetes cluster as a resource in Azure
    You can manage your Kubernetes cluster like any other Azure resource: use ARM templates and the Azure CLI, apply Azure Policy, and create alert definitions based on events and metrics (a short sketch follows this list).
  • Direct integration with Azure Monitor
    Azure Monitor not only collects cluster performance metrics – such as CPU, memory and disk usage of your nodes and containers – but also visualizes them with built-in dashboards.
  • Centralized authentication and authorization with Azure AD RBAC roles
    This can be combined with Privileged Identity Management (PIM), Conditional Access and just-in-time access. As a result, you can grant your cluster administrators and developers access to any of your clusters with predefined permissions, and even scope those permissions to specific namespaces. They can securely connect to your cluster API from any location, without exposing your API servers to the public internet and without having to manage network rules.
  • Deploy your workloads at scale with the native Flux-based GitOps integration
    This deploys Kubernetes resources and Helm charts to all your clusters and also manages configuration drift. A more detailed explanation follows later.
  • Use your own Kubernetes clusters to host Azure services
    There is a growing list of extensions to enhance your experience:

    • Data Services (PostgreSQL and SQL Managed Instance, more on this in our earlier blog post on data enablement),
    • Azure App Services,
    • Event Grid,
    • Azure API Management gateway and
    • Machine Learning (to create models on data on-premises without uploading data to Azure)
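
As a quick illustration of the first point above, the commands below inspect an already onboarded cluster and tag it like any other Azure resource. This is only a sketch: the cluster name demo-cluster and resource group arc-demo-rg are placeholders, and the connectedk8s Azure CLI extension is assumed to be installed.

    # Inspect the Arc-enabled cluster like any other Azure resource
    az connectedk8s show --name demo-cluster --resource-group arc-demo-rg

    # Tag it so Azure Policy, cost reporting and alerting can pick it up
    az resource tag \
      --resource-group arc-demo-rg \
      --name demo-cluster \
      --resource-type "Microsoft.Kubernetes/connectedClusters" \
      --tags environment=edge owner=platform-team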

Bring together the governance, security and management operations of all your clusters

Thanks to the agent-based Azure Arc onboarding architecture, you can unify the governance, security and management of all your clusters, wherever they are hosted. Additionally, you can enable and easily self-host a growing list of Azure services on your own Kubernetes clusters. This way, you effectively lower the management burden and TCO of self-hosted components such as databases, API Management gateways, queues, security threat detection and monitoring solutions.

How to deploy Azure Arc for Kubernetes and its extra features

First, we need:

  • an existing Kubernetes cluster configured as the current context in kubectl,
  • the Helm CLI available, and
  • outbound port 443 allowed from your Kubernetes nodes to the internet.

Then the deployment itself is just a single Azure CLI command. Behind the scenes, this deploys the Azure Arc agents as a set of pods on your cluster, which then call back to Azure to complete the registration. Once your cluster is registered with Azure Arc, we can start enabling additional features such as Azure Monitor, GitOps configurations and Data Services from the portal or through ARM templates.
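
For illustration, here is a minimal sketch of that single command, assuming the connectedk8s Azure CLI extension is installed, you are logged in with az login, and the target resource group already exists (the cluster and resource group names below are placeholders):

    # One-time setup: add the Arc-enabled Kubernetes CLI extensions
    az extension add --name connectedk8s
    az extension add --name k8s-configuration

    # Onboard the cluster that kubectl currently points to
    az connectedk8s connect \
      --name demo-cluster \
      --resource-group arc-demo-rg \
      --location westeurope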

Deploy your workloads at scale with the native Flux-based GitOps integration

A single configuration repository can be applied to all your clusters, wherever they run, covering individual Kubernetes resources as well as Helm charts. The same mechanism even works for the cluster extensions, which are Helm chart deployments fully managed by the extension creator, who owns the full lifecycle of the application contained in the chart. These extensions can also be managed at scale using the Azure CLI or ARM templates (a sketch follows the list below):

  • Azure Monitor – provides visibility into the performance of workloads deployed on the Kubernetes cluster by collecting memory and CPU utilization metrics from controllers, nodes and containers.
  • Azure Defender – gathers security-related information, such as audit log data, from the Kubernetes cluster and provides recommendations and threat alerts based on that data.
  • Azure Arc-enabled Open Service Mesh – deploys Open Service Mesh on the cluster and enables capabilities such as mTLS security, fine-grained access control, traffic shifting, monitoring with Azure Monitor or the open-source add-ons Prometheus and Grafana, tracing with Jaeger, and integration with external certificate management solutions.
  • Azure Arc-enabled Data Services – makes it possible to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice.
  • Azure App Service on Azure Arc – allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters.
  • Event Grid on Kubernetes – lets you create and manage Event Grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters.
  • Azure API Management on Azure Arc – lets you deploy and manage an API Management gateway on Azure Arc-enabled Kubernetes clusters.
  • Azure Arc-enabled Machine Learning – lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters.
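
As an example of managing these extensions at scale with the Azure CLI, the sketch below enables the Azure Monitor extension on an Arc-connected cluster. The cluster and resource group names are placeholders, and the k8s-extension CLI extension is assumed to be installed.

    # One-time setup for the cluster extensions CLI
    az extension add --name k8s-extension

    # Enable Azure Monitor (Container Insights) on the Arc-connected cluster
    az k8s-extension create \
      --name azuremonitor-containers \
      --extension-type Microsoft.AzureMonitor.Containers \
      --cluster-type connectedClusters \
      --cluster-name demo-cluster \
      --resource-group arc-demo-rg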

Drift detection and reconciliation with GitOps integration

The CI/CD pipeline applies changes only once, during the pipeline run. The GitOps operator on the cluster, however, continuously polls the Git repository to fetch the desired state of the Kubernetes resources on the cluster. If the desired state differs from the actual state of the resources on the cluster, the drift is reconciled.
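
To make this concrete, here is a rough sketch of creating such a GitOps configuration with the Flux-based az k8s-configuration command. The repository URL, branch, cluster name and resource group are placeholders, and the exact command group can differ slightly between Azure CLI versions.

    # Attach a Git repository as the source of truth for this cluster
    az k8s-configuration flux create \
      --name cluster-config \
      --cluster-name demo-cluster \
      --resource-group arc-demo-rg \
      --cluster-type connectedClusters \
      --url https://github.com/example-org/cluster-config \
      --branch main \
      --kustomization name=apps path=./apps prune=true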

Minimal firewall configuration is required: it is an agent-based solution that pushes data to Azure over port 443. Furthermore, by using the cluster connect feature you can deploy to your cluster from any location, without Flux and without exposing your API servers to the internet.
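
As a sketch of that cluster connect feature (the names are again placeholders), the first command opens a proxied connection to the cluster API through Azure; kubectl then works from a second terminal without any inbound connectivity to the API server.

    # Open a tunnel to the cluster API through Azure and update kubeconfig
    az connectedk8s proxy \
      --name demo-cluster \
      --resource-group arc-demo-rg

    # In another terminal, kubectl now targets the proxied endpoint
    kubectl get namespaces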

If you are in need of a helping hand, whether just to take the first steps or to guide you along the entire journey, DexMach is here! We were Microsoft Partner of the Year in 2020 and a finalist in 2021, and we have earned numerous Microsoft Advanced Specializations. Beyond the theory, we have plenty of happy customers and years of field experience earned through real-life projects.

Want to know more? Have a chat with us!

Glenn Mattys

Head of Customer Innovation

Plan a call with Glenn

Filip De Byser

Cloud Managed Services

Plan a call with Filip