5GTANGO architecture design is based on a few basic tenets:

  1. Supporting verticals introduces a much higher heterogeneity into the design and operation of VNFs/NS,
  2. Carrier-grade quality is imperative,
  3. There is no single solution for NFV operation in sight, but there is a trend toward harmonizing core interfaces (a process that we will actively support).

From this we derive the need to support flexible verification & validation (V&V) processes addressing different types of systems (different verticals, different NFV operational platforms) as well as different roles in the design and operational workflow of NFV services. We base our design on well-established architectural principles of system and software engineering, e.g., separation of concerns, single responsibility and reuse. In particular, the microservices architecture style has become accepted in distributed systems engineering; our architecture will use it extensively.

From these thoughts, we identify three core architectural roles:

  1. a developer of functions and services,
  2. a validator and verifier role, and
  3. an operator of services.

We point out that these architectural roles can be mapped flexibly to administrative or business entities, enabling a wide range of business models. There are additional supporting roles, e.g., operating infrastructures or running catalogues of functions and services, which we assume as given for this architecture design description.

The main architectural components of the SONATA powered by 5GTANGO platform are shown in the figure below. In line with the support of service developers, service certification companies and service operators, SONATA distinguishes three main components: the SDK, the V&V Platform and the Service Platform. Services developed and deployed by this system run on top of the underlying infrastructure, accessible to the SONATA system via Virtual Infrastructure Managers (VIMs), which abstract from the actual hardware and software.

 

Service Development Kit

The Service Development Kit (SDK) provides a collection of tools, empowering the service developer to rapidly build, validate and test NFV services. Although many tools can be used on their own, the SDK enables a typical developer workflow supporting the creation of an isolated workspace and project environment, the generation and validation of descriptors, the packaging and onboarding, as well as the testing and emulation in a local development environment.

The SDK consists of the following main modules:

  • Project & workspace creation functionality: tng-workspace and tng-project are CLI tools enabling the assisted creation of isolated development environments (with Git-like options).
  • Descriptor generation functionality: tng-sdk-descriptgen is a web GUI functionality enabling rapid generation of descriptors based on a set of VNFs.
  • Image conversion functionality: tng-sdk-img is a CLI tool, which makes it possible to convert Docker-based VNFs to equivalent Virtual Machine based (VM-based) VNFs. Docker-based VNFs can ease the testing and debugging phases of development, while VMs can ensure improved isolation in production environments.
  • Service validation functionality: tng-sdk-validate consists of a CLI and a GUI interface that detect syntactic as well as semantic errors in descriptors (e.g., topological errors in the service graph). In addition, it supports custom validation rules (e.g., ensuring that descriptors request sufficient resources).
  • Packaging functionality: tng-package is a CLI tool that generates package files from project folders. This includes support for multiple Management and Orchestration (MANO) platforms (e.g., Open Source MANO, OSM).
  • Onboarding functionality: tng-sdk-access contains CLI scripts that ease the process of onboarding onto MANO platforms and the emulator (as an alternative to manual curl commands).
  • Emulator environment: vim-emu is a CLI environment that emulates a MANO framework as well as a VIM locally on the developer's PC. This allows the developer to deploy a developed service locally for testing and debugging purposes. tng-sdk-traffic is a tool to generate test traffic in the emulator.
  • Testing functionality: tng-sdk-sm is a CLI tool, which eases the process of writing and testing Specific Manager components.
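To make the semantic validation step more concrete, the sketch below shows the kind of topology check tng-sdk-validate performs on a service descriptor: every virtual link must reference connection points that are actually declared. This is an illustrative sketch only; the descriptor layout and all field names here are simplified assumptions, not the tool's actual schema or code.

```python
# Hypothetical, simplified descriptor layout; the real 5GTANGO schemas differ.
def validate_links(descriptor):
    """Return a list of topology errors: virtual links that reference
    connection points not declared anywhere in the descriptor."""
    declared = {cp["id"] for cp in descriptor.get("connection_points", [])}
    errors = []
    for link in descriptor.get("virtual_links", []):
        for ref in link.get("connection_points_reference", []):
            if ref not in declared:
                errors.append(
                    f"link '{link['id']}' references unknown connection point '{ref}'"
                )
    return errors

nsd = {
    "connection_points": [{"id": "ns:input"}, {"id": "ns:output"}],
    "virtual_links": [
        {"id": "mgmt", "connection_points_reference": ["ns:input", "ns:output"]},
        {"id": "data", "connection_points_reference": ["ns:input", "vnf1:out"]},
    ],
}
print(validate_links(nsd))
# → ["link 'data' references unknown connection point 'vnf1:out'"]
```

A custom validation rule (e.g., a minimum resource request) would follow the same pattern: walk the descriptor and emit an error string when a constraint is not met.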

V&V Platform

The V&V Platform ensures that uploaded services can be tested against the appropriate target Service Platform, so that a service can be considered fit for purpose.
The platform currently will:

  1. Identify and target the appropriate tests (via tags) for the target service
  2. Prepare the target SP and corresponding test environment
  3. Execute the sequence of tests via a test plan on the target service platform
  4. Determine the success or failure of the test
  5. Return the results for future analysis
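The five steps above can be sketched as a simple loop: select the tests whose tags match the service, run each one against the target Service Platform, record pass/fail, and return the results. All names here are hypothetical illustrations of the flow, not the V&V Platform's actual API.

```python
# Hedged sketch of the V&V test flow; "run" callables stand in for real tests.
def run_test_plan(service, tests, sp):
    """Select tests matching the service's tags, execute them on the
    target Service Platform 'sp' and collect results for later analysis."""
    # 1. Identify the appropriate tests via tag matching.
    selected = [t for t in tests if t["tag"] in service["tags"]]
    results = []
    for test in selected:
        # 2./3. Prepare the environment and execute the test on the target SP.
        outcome = test["run"](sp, service)  # assumed to return True on success
        # 4. Determine success or failure.
        results.append({"test": test["name"], "passed": bool(outcome)})
    # 5. Return the results for future analysis.
    return results

tests = [
    {"name": "latency", "tag": "video", "run": lambda sp, s: True},
    {"name": "throughput", "tag": "iot", "run": lambda sp, s: True},
]
service = {"name": "vCDN", "tags": ["video"]}
print(run_test_plan(service, tests, sp="staging-sp"))
# → [{'test': 'latency', 'passed': True}]
```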

The main modules of the V&V are the following:

  • Gatekeeper: A shared component with the SONATA SP and offering similar functionality, the V&V Gatekeeper controls access to the V&V platform and is responsible for all interactions with it.
  • Lifecycle Manager: The lifecycle manager is responsible for the overall planning of the test execution, including identifying target services to be tested, creating an associated test plan, ensuring that the plan is executed and returning the results.
  • Test Execution Engine: Responsible for the setup of the test environment and execution of the test against a target service platform.


Service Platform

SONATA's Service Platform is where:

  1. Users are created, authenticated and authorized;
  2. Packages, containing descriptions of (network) services and (virtual network) functions, are on-boarded, validated and stored in the catalogue. A service or a function can bring with it a specific manager, which may change the SP's default behavior for a specific aspect of that service's or function's lifecycle (e.g., placement, scaling, etc.);
  3. Services from the Catalogue are instantiated (with licenses verified) and orchestrated, through the MANO, in the abstracted infrastructure;
  4. Instantiation records are generated and stored, providing instantiation data to the other components;
  5. Monitoring data is collected and securely provided on demand to the service developer, thus allowing quick and frequent service improvements;
  6. Key Performance Indicators (KPI) are collected, to show the overall business performance of the system;
  7. Operator policies are defined, based on technical metrics and thresholds, as well as information about infrastructure utilization, triggering actions such as update, scaling or healing, among others;
  8. Service Level Agreements (SLAs) are associated with end users, using business metrics to check whether those agreements are violated;
  9. Network Slice templates can be defined and used to instantiate and terminate slices, or to take other more advanced management actions such as updates or scaling, among others.
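Item 8 above boils down to comparing measured metrics against agreed objectives. The following is a minimal sketch of such an SLA check, under invented metric names and thresholds; it is not the SP's SLA Manager implementation.

```python
# Illustrative SLA violation check; metric names and limits are assumptions.
def check_sla(sla, measured):
    """Return the list of SLA objectives violated by the measured metrics."""
    violations = []
    for objective in sla["objectives"]:
        value = measured.get(objective["metric"])
        # An objective is violated when the measured value exceeds its limit.
        if value is not None and value > objective["max"]:
            violations.append(objective["metric"])
    return violations

sla = {
    "customer": "end-user-42",
    "objectives": [
        {"metric": "latency_ms", "max": 50},
        {"metric": "error_rate", "max": 0.01},
    ],
}
print(check_sla(sla, {"latency_ms": 72, "error_rate": 0.002}))
# → ['latency_ms']
```

In the platform, a non-empty violation list would trigger a notification to external components such as the Portal.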


The main modules of the SP are the following:

  • Gatekeeper: controls access for whoever (and whatever) wants to interact with the SP, and guarantees the quality of submitted packages by validating them against a schema (syntactically) and checking the topology and integrity of the described service.
  • Catalogues: stores and manages package files and their metadata, as well as service and function metadata (with smart delete: only packages whose services have no running instances can be deleted).
  • Repositories: stores and manages service and function records, resulting from the instantiation, update and termination processes.
  • MANO Framework: the orchestrator, which manages each service's lifecycle, including when the service and/or its functions bring specific managers with them to be used in certain segments of their lifecycle. Please note the clear separation between the two levels, the Network Function Virtualization Orchestrator (NFVO) and the Virtual Network Function Manager (VNFM) and Controller. This separation was originally recommended by ETSI, and it effectively corresponds to two very different levels of abstraction that are important to keep separate.
  • Infrastructure Abstraction: hides the complexity and diversity of having to deal with multiple VIMs and WIMs.
  • Monitoring: collects, stores and provides monitoring data for the services and functions instances.
  • Policy Manager: defines policy rules based on metrics or infrastructure resource utilization, and can suggest/order actions to be performed by external components, namely the MANO Framework. Those actions can include scaling, healing, updates, etc.
  • SLA Manager: defines SLAs with certain objectives to be guaranteed by the service provider to the end users, and notifies external components (namely the Portal) about violations of such SLAs.
  • Slice Manager: defines Network Slice Templates by interconnecting multiple Network Services (NSs), and is able to instantiate, terminate and perform other advanced slice management operations such as update and scale.
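The Policy Manager described above maps naturally onto threshold rules: when a monitored metric crosses a limit, an action (scaling, healing, update) is suggested to the MANO Framework. The sketch below illustrates that rule-evaluation idea only; rule format, metric names and action strings are all assumptions, not the SP's actual interface.

```python
# Hedged sketch of policy rule evaluation; all names are illustrative.
def evaluate_policies(policies, metrics):
    """Return the actions suggested by every policy rule that fires."""
    actions = []
    for rule in policies:
        # A rule fires when its monitored metric exceeds the threshold.
        if metrics.get(rule["metric"], 0) > rule["threshold"]:
            actions.append(rule["action"])  # e.g., forwarded to the MANO Framework
    return actions

policies = [
    {"metric": "cpu_utilization", "threshold": 0.8, "action": "scale-out"},
    {"metric": "heartbeat_misses", "threshold": 3, "action": "heal"},
]
print(evaluate_policies(policies, {"cpu_utilization": 0.93, "heartbeat_misses": 1}))
# → ['scale-out']
```

Keeping the rules declarative like this is what lets operators adjust thresholds and actions without changing the components that execute them.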