October 20, 2020
RIFT continues to accelerate the pace of innovation, developing features that push the boundaries of what customers expect from a Network Service Orchestrator while reducing cost and accelerating deployment.
In our 8.1 release in April 2020, we enhanced the process of onboarding and instantiating a Network Service by supporting both containerized network functions (CNFs) and virtualized network functions (VNFs) and by giving customers the ability to decide how to build a service using the RIFT.ware composer portal. The service orchestration framework for cloud-native applications (Kubernetes) makes it possible to adapt rapidly to fast-changing network conditions and to use built-in microservice-based scaling mechanisms alongside standards-based management and orchestration. Using this technology, an operator can carefully examine the functions, roles, and behaviors a service provides at run time and break them down into a set of subfunctions or microservices that can be managed individually and automatically.

RIFT.ware allows a designer to dynamically restart or reconfigure Network Services using the heal LCM operation in Launchpad. A user can also heal a CNFD by updating the Helm release with new values data, allowing a CNF to recover from external failures. RIFT.ware supports Helm version 3, using it to define, install, and upgrade CNF deployments, and our architecture validates Helm charts prior to CNFD generation. Regardless of how you choose to instantiate your NSD, you can update the service in the running or failed-temp states and easily add or remove a CNF or VNF from an NS.
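For illustration, a heal of this kind ultimately amounts to re-applying the CNF's Helm release with new values. The minimal sketch below shows that underlying Helm 3 operation using the standard Helm CLI; the release name, chart path, values file, and namespace are hypothetical, and this is not the RIFT.ware heal API itself.

```python
# Minimal sketch: recover a CNF by re-applying its Helm release with new values.
# All names (release, chart path, values file, namespace) are hypothetical.
import subprocess

def heal_cnf(release: str, chart: str, values_file: str, namespace: str) -> None:
    """Upgrade (or install) the Helm release so the CNF converges on the new values."""
    subprocess.run(
        [
            "helm", "upgrade", release, chart,
            "--install",                # create the release if it is missing
            "--namespace", namespace,
            "-f", values_file,          # the "new values data" driving the heal
            "--wait",                   # block until the workloads are ready again
        ],
        check=True,
    )

if __name__ == "__main__":
    heal_cnf("demo-cnf", "./charts/demo-cnf", "new-values.yaml", "demo")
```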
RIFT is excited to announce some of the key features in our 8.2 release. We continue to advance the possibilities of cloud-native network function support with a variety of enhancements. A user can place CNFs on a public cloud using Amazon Elastic Kubernetes Service (EKS), and in future releases we plan to expand support to other public cloud Kubernetes services as well. This release also introduces ETSI-based support for scaling a CNF or a VNF within a Network Service. We have expanded our cloud-native functionality by supporting multi-cloud orchestration across public and private clouds. We have also improved the way that RIFT.ware handles Kubernetes accounts and introduced a new input parameter framework to identify the input variables of a specific CNFD, allowing the CNF to be placed on any site or cloud at instantiation time. These advancements help reduce customer overhead and the resources needed to support an infrastructure.
Amazon Elastic Kubernetes Service (EKS)
RIFT.ware now supports orchestration of CNFs on Amazon Elastic Kubernetes Service (EKS) and EC2. An operator can onboard NSDs on EKS, with support for Lifecycle Management (LCM) events such as instantiate and scale. This feature also introduces support for creating Kubernetes network attachments based on the ipvlan and host-device plugins.
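Network attachments of this kind are commonly expressed as Multus NetworkAttachmentDefinition objects. The sketch below shows a hypothetical ipvlan attachment created with the Kubernetes Python client; the master interface, subnet, and object names are assumptions for illustration, not RIFT.ware's generated definitions.

```python
# Minimal sketch: an ipvlan network attachment expressed as a Multus
# NetworkAttachmentDefinition and created with the Kubernetes Python client.
# The master interface, subnet, and names are illustrative assumptions.
import json
from kubernetes import client, config

ipvlan_cni_config = {
    "cniVersion": "0.3.1",
    "type": "ipvlan",            # CNI plugin named in the release notes
    "master": "eth0",            # host interface the ipvlan sits on (assumed)
    "mode": "l2",
    "ipam": {"type": "host-local", "subnet": "192.168.10.0/24"},
}

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "ipvlan-net"},
    "spec": {"config": json.dumps(ipvlan_cni_config)},
}

config.load_kube_config()        # use the local kubeconfig (e.g. for an EKS cluster)
client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="default",
    plural="network-attachment-definitions",
    body=net_attach_def,
)
```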
ETSI-Based Scaling for Lifecycle Management Operations for CNFs and VNFs
In past releases, RIFT.ware added support for scaling a Network Service in or out. In this release, we have expanded that support to include scaling an individual CNF or VNF in or out based on the capacity needed at the time. Users can monitor CNFs and VNFs for KPIs and define a policy that decides when to scale a specific function without affecting the entire service.
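Conceptually, such a policy compares a per-function KPI against thresholds and scales only that function. The sketch below is a hypothetical illustration of that decision logic; the KPI, thresholds, and scale_nf callback are assumptions, not RIFT.ware's policy syntax or API.

```python
# Hypothetical sketch of a KPI-threshold scaling decision for a single NF.
# KPI names, thresholds, and the scale callback are illustrative assumptions,
# not RIFT.ware's policy syntax or API.
from typing import Callable

def evaluate_scaling_policy(
    kpi_value: float,
    scale_out_threshold: float,
    scale_in_threshold: float,
    scale_nf: Callable[[str], None],
) -> None:
    """Scale one network function in or out; the rest of the NS is untouched."""
    if kpi_value > scale_out_threshold:
        scale_nf("scale-out")        # add an instance of this CNF/VNF
    elif kpi_value < scale_in_threshold:
        scale_nf("scale-in")         # remove an instance of this CNF/VNF

# Example: 82% CPU on one CNF triggers a scale-out of that CNF only.
evaluate_scaling_policy(82.0, 80.0, 20.0, lambda action: print(action))
```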
Multi-Cloud Orchestration
RIFT.ware’s enhanced multi-cloud / hybrid cloud Network Service functionality allows an operator to deploy a Network Service across multiple clouds. Our architecture now supports two interconnected Network Functions (NFs) that are running in different locations, including across private and public clouds, with no change to the NSD, CNFD, or VNFD models.
An operator can now launch a service with:
- CNFs in a mixture of Kubernetes clusters
- VNFs on multiple VIMs across two different datacenters
- VNFs on VIMs of different account types
- CNFs and VNFs in a combination of VIMs and Kubernetes clusters
Our multi-cloud orchestration enables RIFT.ware to support end-to-end deployment of hybrid cloud and multi-cloud Network Services. These enhancements save time during service design (“design once”) and provide designers numerous options when deploying a service (“deploy many”).
Kubernetes Enhancements
This release also changes the way that RIFT.ware handles Kubernetes accounts. These accounts are now treated as cloud accounts and are managed by the cloud account API, and creating an account is more straightforward in the RIFT.ware UI. In addition, this release introduces a new input parameter framework: an operator can onboard a Helm chart, create a CNFD in the RIFT.ware catalog, and then identify the input variables for that specific CNFD. Lastly, this feature adds live monitoring of Kubernetes VIM accounts, reporting usage information in the RIFT.ware Dashboard UI.
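In Helm terms, input variables of this kind typically map to chart values supplied when the CNF is instantiated. The sketch below is a hypothetical illustration of deploying the same chart to two sites with different values; the site names, kube contexts, and parameters are assumptions, not the RIFT.ware input parameter framework itself.

```python
# Hypothetical sketch: the same CNF chart instantiated on two sites by
# supplying different input values at deploy time. Site names, kube contexts,
# and parameters are illustrative assumptions.
import subprocess

SITES = {
    "edge-site-a": {"kube_context": "eks-us-east-1", "replicas": "2"},
    "edge-site-b": {"kube_context": "onprem-k8s",    "replicas": "4"},
}

for site, params in SITES.items():
    subprocess.run(
        [
            "helm", "install", f"demo-cnf-{site}", "./charts/demo-cnf",
            "--kube-context", params["kube_context"],       # pick the target cloud/site
            "--set", f"replicaCount={params['replicas']}",  # per-site input variable
        ],
        check=True,
    )
```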
RIFT.ware Installation Enhancements
In previous releases, RIFT.ware Launchpad was installed as a single monolithic container or VM, with all the individual components running as processes inside it. In 8.2, Launchpad is deployed on a Kubernetes (k8s) or lightweight Kubernetes (k3s) cluster on a VM platform. This new dynamically scaling installation approach allows RIFT.ware to orchestrate very large numbers of containers or VMs from a single install.
The collection of these innovative features continues to enhance our already robust RIFT.ware solution. For more information about these advancements, see our RIFT.ware Documentation or email our Service and Sales team at [email protected]. We’d love to show you how, release after release, we continue to improve performance, stability, and user experience and move the industry forward.