October 27, 2020
The road to 5G has led to a boom in new container-based network functions. Many of these new applications are cloud native and use Kubernetes’ built-in mechanisms for deployment, scaling, healing, and upgrades, justifying the hype and the push to migrate traditionally VM-based network functions to containers.
RIFT recognized the promise of container-based network functions early on. In fact, the very first releases of RIFT.ware supported LXC-based Network Functions, and RIFT even contributed support for these early containerized applications to the ETSI Open Source MANO (OSM) effort. We have been closely tracking the development and evolution of container technology ever since, from LXC to Docker to the current incarnation of Kubernetes-based Containerized Network Functions (CNFs).
Coupled with our background in Service Provider automation and NFV, that early start makes RIFT.ware today the only orchestrator capable of single-click import of third-party cloud-native applications and Network Functions; RIFT has onboarded 5G containerized network functions from multiple vendors and deployed these CNFs on both public and private clouds. Once onboarded, RIFT.ware empowers the Service Provider to use the CNFs in all varieties of services, from 5G slicing to MEC and SD-WAN.
As with all of RIFT’s offerings, RIFT.ware’s support for container-based applications fully conforms to the best available open specifications. Because many of the standards surrounding CNFs, such as ETSI GS NFV-IFA 040, are still works in progress in the standards community, RIFT continues to closely track, contribute to, and implement these standards as they become available.
Building CNF support into the ETSI NFV framework leverages the existing ETSI support for Network Service (NS) and Network Function (NF) Life Cycle Management (LCM) workflows and brings instant benefits to automated deployment of CNFs:
- CNFs immediately inherit all Network Service-level automation. When RIFT introduced the initial phase of CNF support in RIFT.ware Release 8.1 in March, it was instantly possible to compose multi-CNF Network Services via the RIFT.ware UI, drastically simplifying the design and deployment of 5G Network Slices. In addition, all advanced LCM workflows, such as Closed Loop scale and heal, which have long been supported in RIFT.ware for VNF-based Network Services, simply worked for CNFs as well.
- The role of the VNF Manager is significantly reduced. For as long as NFV has been around, the role of the VNFM has been fraught with questions of necessity versus functionality. With cloud-native functions, the job of managing the scaling and healing of Network Functions (intra-NF scale and heal, i.e., the addition or deletion of resources for an existing NF and the healing of an existing NF) is built into the Helm chart itself. In many instances, a VNFM, whether specific or generic, is no longer necessary for truly cloud-native CNFs.
- A simplified, more uniform on-boarding process. Helm is a mature, de facto standard in the Kubernetes domain, and many NF suppliers have adopted Helm charts as the base building blocks for their CNFs. The on-boarding process is a simple import of the Helm chart, plus construction of the additional “wrapper” elements that allow the CNF to be inserted into a Network Service using ETSI NFV procedures. Constructed correctly, the Helm and ETSI layers co-exist peacefully, especially when compared with HEAT and ETSI, which often overlapped and required manual translation to make the two work together.
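To make the wrapper idea concrete, here is a minimal, hypothetical sketch of what the on-boarding output might look like: a small descriptor that points at an existing vendor Helm chart and exposes the connection points an NS designer can wire into a service. The field names are illustrative only and do not reproduce the actual RIFT.ware or ETSI descriptor schema.

```python
# Hypothetical sketch only: field names are illustrative and do not reproduce
# the actual RIFT.ware or ETSI descriptor schema.
import json


def build_cnf_wrapper(nf_name, helm_chart, external_cps):
    """Wrap an existing vendor Helm chart in a minimal descriptor so the CNF
    can be composed into an ETSI-style Network Service."""
    return {
        "nf-id": nf_name,
        "deployment-unit": {
            "type": "helm-chart",   # the chart itself carries the Kubernetes resources
            "chart": helm_chart,    # e.g. a chart reference in a vendor repository
        },
        # Connection points exposed for NS-level wiring between NFs
        "external-connection-points": external_cps,
    }


if __name__ == "__main__":
    wrapper = build_cnf_wrapper(
        nf_name="example-5g-upf",
        helm_chart="vendor-repo/upf-chart",
        external_cps=["n3", "n4", "n6"],
    )
    print(json.dumps(wrapper, indent=2))
```

The key point is that the Helm chart itself is imported unchanged; the wrapper only adds what the NS layer needs to place and connect the CNF.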
Kubernetes also benefits greatly from the ETSI NFV framework. Because Kubernetes originated in the enterprise domain, its support for carrier-grade features, such as Enhanced Platform Awareness (EPA) attributes (DPDK, SR-IOV, and similar), intelligent workload placement, geographic resiliency, and even multi-homed networking, needs enhancing. While some of these requirements can be met through custom scripting or other manual extensions at the Helm and Kubernetes layer, such extensions, like all custom scripts, tend to be fragile and time-consuming to create and maintain. Kubernetes and Helm are strongest when used at the PaaS/IaaS layers to manage the resources under an NF, such as pods, containers, and clusters, while the ETSI NS and NF models provide the additional pieces: connectivity management, especially across hybrid environments, and NS life cycle management. Consider several real-world examples:
- User Plane NFs typical of the service provider environment generally require careful placement and complex connectivity during deployment. Not only must the NFs be grouped into redundant clusters (typically 1:1), they must also be placed in different locations to protect against site outages, support high-bandwidth operation (DPDK and SR-IOV), connect multiple interfaces in the right topology, and follow scale and heal procedures that often require network reconfiguration. While it is possible to hard-code all of this in the Helm charts, the ETSI NFV models already provide constructs for the automated creation, placement, connectivity, and life cycle management of Network Functions within a service.
- While new 5G functions are being implemented as cloud-native applications, many functions within the Service Provider ecosystem will remain VM-based Network Functions for quite some time. Systems such as DNS servers, load balancers, and even some 5G functions themselves may be VNFs that require connectivity to CNFs. This demands not only NF orchestration but also orchestration of the network connection between the VM VIM and the Kubernetes VIM. Managing both CNFs and VNFs in a service is a built-in capability of ETSI NFV and should be done using ETSI orchestration functions.
- In general, multi-VIM placements, whether connecting CNFs to VNFs or CNFs in public and private clouds, are better executed at the ETSI NS layer. This keeps the Kubernetes layer, particularly the Helm charts, free of hardcoded values such as site names, IP pools and addresses, and the like; the orchestrator instead supplies these as run-time parameters to tailor the deployment to the target cloud. This also allows the same Helm chart, NF Descriptor, and NS Descriptor to be reused across multiple instantiations, locations, and VIMs (“design once, deploy many”).
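As a rough illustration of the “design once, deploy many” pattern, the sketch below shows how an orchestrator might inject site-specific parameters into an otherwise generic Helm chart at instantiation time, using the standard Helm CLI. The chart name, cluster context, and value keys are hypothetical, and this is a simplification of the idea rather than RIFT.ware internals.

```python
# Simplified sketch, not RIFT.ware internals: the chart name, cluster context,
# and value keys below are hypothetical.
import subprocess


def instantiate_cnf(release, chart, kube_context, site_values):
    """Install (or upgrade) a Helm release on the target cluster, passing the
    per-site parameters chosen by the orchestrator as --set overrides."""
    cmd = ["helm", "upgrade", "--install", release, chart,
           "--kube-context", kube_context]
    for key, value in site_values.items():
        cmd += ["--set", f"{key}={value}"]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # The same chart and descriptors are reused for every site; only the
    # run-time parameters differ ("design once, deploy many").
    instantiate_cnf(
        release="upf-east",
        chart="vendor-repo/upf-chart",
        kube_context="eks-us-east-1",                      # hypothetical cluster context
        site_values={"site.name": "east-dc",
                     "network.n3IpPool": "10.10.1.0/24"},  # hypothetical value keys
    )
```

Because nothing site-specific lives in the chart, the same artifacts can be pointed at a private cluster, EKS, or GKE simply by changing the context and values supplied at instantiation.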
RIFT’s RIFT.ware Orchestration and Automation suite accounts for all of these real-world scenarios and fully supports such deployments. Building upon previous RIFT.ware releases, which added support for the design and deployment of complex service chains across multiple clouds and cloud types as well as support for containerized applications, the latest 8.2 release enables service providers and enterprises to:
- Onboard container-based applications of any type, including 5G network functions and enterprise applications, via a simple wizard.
- Design bespoke Network Services from the catalog of containerized network functions and VM-based network functions.
- Mix and match CNFs with VNFs in these Network Services, and network these NFs into complex service chains for any use case.
- Deploy these Network Services across multiple clouds, including public cloud Kubernetes services such as Amazon EKS and Google GKE.
- Manage the network connectivity across clouds and hybrid NFs.
- Manage the deployment with Closed Loop autoscaling and autohealing.
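As a highly simplified illustration of the closed-loop idea, the snippet below sketches one possible heal check against a Kubernetes cluster using the official Kubernetes Python client: failed pods belonging to a CNF are deleted so that their owning controller recreates them. The namespace and label selector are hypothetical, and a production orchestrator’s monitoring, policy, and heal actions are far richer than this.

```python
# Greatly simplified heal check, not the RIFT.ware implementation; the
# namespace and label selector below are hypothetical.
from kubernetes import client, config


def heal_failed_pods(namespace, label_selector):
    """Delete failed pods belonging to a CNF so that the owning controller
    (e.g. a Deployment) recreates them; the simplest intra-NF heal action."""
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
    for pod in pods.items:
        if pod.status.phase not in ("Running", "Succeeded"):
            v1.delete_namespaced_pod(pod.metadata.name, namespace)


if __name__ == "__main__":
    heal_failed_pods(namespace="5g-core", label_selector="app=example-upf")
```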
With RIFT.ware, Service Providers are guaranteed a future-proof automation solution for the design, deployment, and management of 5G and other services in an open, extensible, carrier-grade product today.