First, the news: Alluxio support for Kubernetes Helm charts is now available, and K8s is now a certified environment for Alluxio. The takeaway: Alluxio brings data locality back to the disaggregated analytics stack on K8s. How? Read on.
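As a rough sketch, a Helm-based deployment boils down to overriding chart values and installing. The keys below are illustrative assumptions about what such a values file might tune (worker memory tier, under-store address), not the Alluxio chart's actual schema; check the chart's documented values before use.

```yaml
# values.yaml — illustrative overrides only; key names are assumptions.
properties:
  # Hypothetical: point Alluxio's root under-store at the data lake.
  alluxio.master.mount.table.root.ufs: "s3://my-data-lake/"
worker:
  # Hypothetical: size of the in-memory cache tier on each worker.
  ramdiskSizeGB: 2
```

These would then be applied with something like `helm install alluxio <chart> -f values.yaml`, where `<chart>` is the Alluxio Helm chart reference from its repository.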
There’s no arguing with the rise of containers in real-world deployments over the past few years. Containers simplify running applications in any environment, and Kubernetes further transforms the way software is deployed and scaled, agnostic of environment. In fact, Kubernetes is increasingly seen as a key technology for resource orchestration not only in the data center but also in hybrid and multi-cloud environments. While containers and Kubernetes work exceptionally well for stateless applications like web servers, and even for self-contained databases like MongoDB and Couchbase, the stack looks different in the world of advanced analytics and AI. The modern analytical stack is highly disaggregated. Unlike a traditional database or data warehouse, the new stack is split apart: pick a data lake or two or three to store data (S3, GCS, HDFS, etc.); pick a computational framework to analyze it (Apache Spark, Presto, Hive, TensorFlow, etc.); and make sure all the other dependencies, like the catalog service, are available (Hive Metastore, AWS Glue, KMS, etc.).

Challenge #1 – No Shared Data Access / Caching Layer in the K8s Cluster
K8s is a fantastic container orchestration technology, and with tools like Helm charts and operators, deployment can be greatly simplified. However, data-intensive workloads like advanced analytics typically need data sharing between jobs to be effective, so that data produced by one job can easily be accessed by the next. Without a data access / caching layer, data must be written back to the data lake and then read into the K8s cluster again, significantly slowing down data pipelines.

Challenge #2 – Lost Data Locality
With data stored in S3 or other cloud object stores, or on-prem in Hadoop, users who want to run analytics inside the K8s cluster have two options: access the data remotely (meaning poor performance) or manually copy it into the K8s cluster (meaning significant additional DevOps and management on a per-workload basis). Copying also carries the burden of managing the differences between the copies, which is hard to get right. The ideal solution recreates data locality within this disaggregated stack.

Challenge #3 – No Data Elasticity for Elastic Compute
The beauty of K8s is the flexibility it gives even the most complex compute workloads: scale up, down, upgrade, restart, etc., based on need and demand. But for data-intensive workloads, the dependency on data being available to compute remains. To scale compute in, out, up, or down, the data within K8s needs to be able to do the same in order to leverage the flexibility K8s brings. Data orchestration solves these challenges by syncing data into the K8s cluster, allowing seamless in-memory data access, sharing data across jobs, and scaling in or out as needed.
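The shared-caching idea behind Challenge #1 can be sketched in a few lines. This is a toy read-through cache, not Alluxio's API; all class and method names here are hypothetical, and the "remote store" simply counts reads to stand in for slow round trips to the data lake.

```python
class RemoteStore:
    """Stand-in for a remote data lake (S3, GCS, HDFS).
    Counts reads to represent slow network round trips."""
    def __init__(self, data):
        self._data = data
        self.reads = 0

    def get(self, key):
        self.reads += 1  # each call models a trip out of the K8s cluster
        return self._data[key]


class CacheLayer:
    """Toy read-through cache, loosely modeling the role a data
    orchestration layer plays inside the K8s cluster (hypothetical API)."""
    def __init__(self, store):
        self._store = store
        self._cache = {}

    def get(self, key):
        if key not in self._cache:
            # cache miss: fetch from the lake exactly once
            self._cache[key] = self._store.get(key)
        return self._cache[key]  # subsequent hits are served in-cluster


store = RemoteStore({"events/2020-01": b"...parquet bytes..."})
cache = CacheLayer(store)

# Job 1 (say, a Spark ETL stage) pulls the partition into the cluster.
job1 = cache.get("events/2020-01")
# Job 2 (say, a Presto query) reuses it without another trip to the lake.
job2 = cache.get("events/2020-01")

print(store.reads)  # 1 — the remote store was hit only once
```

Without the cache layer, each job would hit the remote store independently; with it, data written or fetched by one job is served locally to the next, which is the data-pipeline speedup described above.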
Written By: Dipti Borkar