Business Logic Clause Examples

Business Logic. The aim of the RAINBOW Resource and Application-level Monitoring is to collect and make readily available, via the Cluster-head Fog Node, monitoring data regarding resource utilization from the underlying fog infrastructure (e.g., compute, memory, network) and the behaviour of deployed fog services from tailored application-level metrics (e.g., throughput, active users). This will enable core RAINBOW services (e.g., Orchestration, Analytics Service) to detect recurring behaviour patterns and potential performance inefficiencies, promptly notify fog service operators, and dynamically reconfigure the underlying resource allocation and service execution to meet the user-requested quality of service. In the subsequent sections we shed light on the core business logic of the RAINBOW Resource and Application-level Monitoring components and describe how they interact with other modules of the RAINBOW platform. Fog service monitoring will be provided through RAINBOW as a service (▇▇▇▇), thus easing, for both fog service developers and operators, the pains that accompany the deployment and management of in-house monitoring infrastructure (e.g., scalability, failures, security risks). In turn, this setting decouples the monitoring process from cloud and fog provider dependencies, so that monitoring is neither disrupted nor burdened with significant reconfiguration when a fog service must span multiple availability zones and/or cloud sites. Although historic and real-time monitoring data will be centrally accessible to tenants through the RAINBOW Dashboard, and subsequently the RAINBOW API, internally RAINBOW Monitoring will collect and process monitoring data and requests in a distributed fashion, featuring a dedicated Monitoring Service deployed, in place, over the overlay mesh network inter-connecting the provisioned nodes of the running fog service.
A core feature of the RAINBOW scalable and flexible monitoring stack will be its support for interoperable collection of fog node utilization and application behaviour metrics. Specifically, RAINBOW Monitoring will provide one unified and interoperable metric interface that can be fed with monitoring data by different collection mechanisms (Probes). Various Probes will be developed and provided by RAINBOW, including Probes for Docker container monitoring, security metric extraction and mesh network monitoring. A Probe template embedding monitoring metric abstractions will be provided so that fog servi...
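The unified metric interface fed by pluggable Probes could be sketched as follows. This is a minimal illustration, not the actual RAINBOW API: the class names, the `collect()` signature and the hard-coded Docker values are all assumptions introduced for the example.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class Probe(ABC):
    """Sketch of a Probe template: every collection mechanism
    feeds metrics into one unified interface."""

    name: str = "base"

    @abstractmethod
    def collect(self) -> Dict[str, float]:
        """Return a flat mapping of metric name -> value."""

class DockerProbe(Probe):
    name = "docker"

    def collect(self) -> Dict[str, float]:
        # In a real Probe these values would come from the Docker API;
        # here they are fixed for illustration.
        return {"cpu_percent": 12.5, "mem_bytes": 104857600.0}

class MonitoringAgent:
    """Aggregates metrics from all registered Probes behind one interface."""

    def __init__(self) -> None:
        self._probes: List[Probe] = []

    def register(self, probe: Probe) -> None:
        self._probes.append(probe)

    def scrape(self) -> Dict[str, float]:
        metrics: Dict[str, float] = {}
        for probe in self._probes:
            for key, value in probe.collect().items():
                metrics[f"{probe.name}.{key}"] = value  # namespace by Probe
        return metrics
```

A security or mesh-network Probe would subclass `Probe` in the same way, which is the interoperability point the text makes: consumers only ever see the unified metric map.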
Business Logic. The Deployment Manager is the component that undertakes the task of materializing a placement plan into an actual placement. To do so, the Deployment Manager has to be fully aware of the available resources, their state and the proper instructions/indications for the actual placement. The knowledge of the resource state derives from the interaction of the Deployment Manager with the Resource Manager. Its ability to perform deployments in various virtualization environments, however, will derive from a built-in capability to interact with various virtualization endpoints. Hence, the Deployment Manager will rely on an abstract interface for basic management operations over the de-facto virtualization APIs, along with a set of reference implementations for the most notable ones. It has to be clarified that the component's behaviour should be transactional, i.e., the entire deployment should be performed successfully as a whole or rolled back as a whole. In order to achieve this, the component has to interact with the core orchestration component, which is responsible (among other things) for maintaining a consistent view of which application artefact is deployed on which resource identifier. During the implementation of the component, several roll-back policies will be supported.
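The transactional all-or-nothing behaviour could be sketched as below. The `InMemoryEndpoint` class is a hypothetical stand-in for a virtualization API adapter, and "undo in reverse order" is only one of the several roll-back policies the text anticipates.

```python
class DeploymentError(Exception):
    """Raised by an endpoint when an artefact cannot be placed."""

class InMemoryEndpoint:
    """Toy stand-in for a virtualization API adapter (hypothetical)."""

    def __init__(self, fail_on=None):
        self.placed = []
        self.fail_on = fail_on  # artefact name that simulates a failure

    def deploy(self, artefact, node):
        if artefact == self.fail_on:
            raise DeploymentError(artefact)
        self.placed.append((artefact, node))

    def undeploy(self, artefact, node):
        self.placed.remove((artefact, node))

def deploy_all(placement_plan, endpoint):
    """Apply a placement plan transactionally: either every artefact is
    deployed, or everything already placed is rolled back."""
    deployed = []
    try:
        for artefact, node in placement_plan:
            endpoint.deploy(artefact, node)
            deployed.append((artefact, node))
    except DeploymentError:
        # Roll back in reverse order so the cluster returns to its prior state.
        for artefact, node in reversed(deployed):
            endpoint.undeploy(artefact, node)
        return False
    return True
```

In the real component the record of what is deployed where would live in the core orchestration component rather than in a local list, which is exactly why the two must interact.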
Business Logic. RAINBOW security and trust enablers aspire to provide enhanced remote attestation mechanisms towards the secure composability of fog environments, encompassing a broad array of mixed-criticality services and applications. As described in Section 3, the main goal is to allow the creation of privacy- and trust-aware service graph chains (managed by the Orchestration Lifecycle Manager and established by the Deployment Manager) through the provision of S-ZTP functionalities: fog nodes adhere to the compiled attestation policies by providing verifiable evidence on their configuration integrity and correctness. In terms of design, as will also be described in D2.1 [43], the focus of RAINBOW Enhanced Remote Attestation is on cloud-native component (denoted as virtual function, VF) Configuration Integrity Verification (CIV) and on secure enrolment. CIV is the process by which a fog node (i.e., one hosting a VF) can report, in a trusted way and at any requested time, the current status of its configuration. It entails the provision of integrity measurements and guarantees during both the deployment and the operation of a VF, covering system integrity at the deployment phase, handled by the RAINBOW Orchestrator, as well as the integrity of the loaded components during their runtime execution.
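Stripped of the trusted-hardware machinery, the CIV comparison at the verifier's side reduces to matching reported evidence against a known-good reference value from the attestation policy. The sketch below makes that essence concrete; real remote attestation additionally involves signed quotes and a hardware root of trust, which are omitted here, and both function names are invented for the example.

```python
import hashlib

def measure(config: bytes) -> str:
    """Produce an integrity measurement (digest) of a component's
    configuration; a stand-in for measurements rooted in trusted hardware."""
    return hashlib.sha256(config).hexdigest()

def verify_civ(reported_digest: str, golden_digest: str) -> bool:
    """Configuration Integrity Verification reduced to its essence: the
    verifier compares the evidence reported by a fog node against the
    known-good reference value compiled into the attestation policy."""
    return reported_digest == golden_digest
```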
Business Logic. The RAINBOW Multi-domain Sidecar Proxy has several responsibilities. It interacts with the orchestrator in order to report its state and to execute any adjustments enforced by the orchestrator. The Sidecar Proxy is also responsible for metric extraction and application-level monitoring measurements, which are passed to the respective Cluster-head Fog Node and then collected at a centralized point for analysis. The monitoring capabilities of the Sidecar Proxy are extensible through a plugin-based architecture: as more metrics are needed, more plugins can be added. Once collected, the metrics are fed to the Cluster-head Fog Node and from there passed to the relevant agents, which make the corresponding decisions based on the provided SLOs.
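The plugin mechanism and the forwarding step could look roughly like this. The class is a sketch only: plugins are modelled as plain callables, and the `forward` callback is a hypothetical stand-in for the push to the Cluster-head Fog Node.

```python
from typing import Callable, Dict

class SidecarProxy:
    """Plugin-based metric extraction sketch: each plugin is a callable
    returning a metric value; collected values are forwarded to the
    Cluster-head Fog Node via the supplied callback."""

    def __init__(self, forward: Callable[[Dict[str, float]], None]) -> None:
        self._plugins: Dict[str, Callable[[], float]] = {}
        self._forward = forward  # e.g. an HTTP push towards the cluster-head

    def add_plugin(self, name: str, fn: Callable[[], float]) -> None:
        # Extending monitoring = registering one more plugin.
        self._plugins[name] = fn

    def report(self) -> Dict[str, float]:
        metrics = {name: fn() for name, fn in self._plugins.items()}
        self._forward(metrics)
        return metrics
```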
Business Logic. Before we delve into the details of the Policy Editor, we will demystify the concept of “Policy”, a term that is often overloaded. Figure 8 provides an overview of the Policy concept as it will be materialized in the frame of the project.
Business Logic. The data storage component is solely used by the RAINBOW platform’s different components. Depending on the component’s access level, it may read, write, or both read and write data from the storage. Two different ways can be used to read data from the storage component. The first allows the requesting component to directly query a specific node’s local storage unit. The second uses the data storage API to query the data from a set of nodes available to the platform, based on a condition. The reading process can be summarized in the following steps:
1. A read request is received on the data storage component.
2. The access level of the requesting component is confirmed.
3. The data are collected based on the request:
   a. If the read request is local, the data from the specific node are collected.
   b. If the read request is global, the data from the set of nodes are collected based on the set condition.
4. The data are returned to the requesting component.
Writing data to the data storage can be completed through the API in the following steps:
1. A write request is received on the data storage component.
2. The access level of the requesting component is confirmed.
3. The storage engine decides the placement and replication/sharding of the data based on:
   a. the proximity of the request
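The read path described above can be sketched as a single dispatch function. The dict-based `storage`, `request` and `acl` structures are hypothetical stand-ins for the storage engine, the API request and the access rules, introduced only for illustration.

```python
def read(storage, request, acl):
    """Sketch of the read path: confirm the requester's access level,
    then serve either a local (single-node) or a global (condition-based)
    read over the set of nodes."""
    # Step 2: confirm the access level of the requesting component.
    if "read" not in acl.get(request["component"], ()):
        raise PermissionError(request["component"])
    # Step 3a: a local read queries one node's local storage unit directly.
    if request["scope"] == "local":
        return storage[request["node"]]
    # Step 3b: a global read filters every node by the request's condition.
    condition = request["condition"]  # predicate over (node, data)
    return {node: data for node, data in storage.items() if condition(node, data)}
```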
Business Logic. It could be argued that ‘orchestration’ as a concept is one of the most difficult goals of the RAINBOW vision, and the main reasons boil down to the following facts:
• RAINBOW embraces the “federated scheduling” concept, where fog resources and cloud resources are treated as a uniform pool of resources able to host cloud-native service graphs.
• This pool of resources can be shared by more than one service graph simultaneously (i.e. multi-app deployment), with completely different activated policies; hence there is a problem of efficient resource-sharing.
• Two service graphs that employ both fog and cloud resources simultaneously may require contradictory management actions in order to achieve their Service Level Objectives.
All these goals will be materialized by the Orchestration Lifecycle Manager. More specifically, the component has the following main responsibilities:
1. Coordinate the deployment procedure of the Service Graph, using the virtualization abstractions of the Deployment Manager, and manage the maintenance of its “transactional” behaviour.
2. Check whether the applied Service Level Objectives (SLOs) are fulfilled and execute corrective elasticity actions if they are not.
3. Provide an abstraction model where corrective actions can be registered and applied (these may be elasticity actions, security actions, etc.).
4. Try to maintain a consistent view of the “advertised” physical resources and their reservation status.
5. Continuously solve the problem of antagonism raised by placement requests from different applications on the same resources.
6. Predict and resolve conflicts raised by different policy actions.
As such, its behaviour is organized in the form of a closed control loop that continuously evaluates the existing resources, the deployment requests and the reconfiguration requests of the already deployed applications.
The main difficulty relates to the fact that the entire control loop should always remain in equilibrium, taking into consideration that each distinct application is practically an “autonomous” entity. Taking all the above into consideration, it is self-evident that the orchestration business logic can be seen as a centralized logical entity; yet it is a physically distributed entity, since it affects the “scheduling” of application graphs, and of applications individually, at various levels, i.e. at the cloud level and at the fog level. As such, Orchestration Lifecycle Man...
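One iteration of the closed control loop, reduced to SLO evaluation and corrective-action registration, could be sketched as follows. The dict shapes and the `register_action` callback are assumptions made for the example; the real loop also weighs resources, deployment requests and inter-application conflicts.

```python
def control_step(apps, register_action):
    """Illustrative single iteration of the orchestration control loop:
    for every deployed application, compare observed metrics against its
    SLOs and register a corrective action for each violation."""
    actions = []
    for app in apps:
        for slo in app["slos"]:
            observed = app["metrics"].get(slo["metric"], 0.0)
            if observed > slo["max"]:
                # A violation triggers a registered corrective action
                # (elasticity, security, etc.).
                actions.append(register_action(app["name"], slo["metric"], observed))
    return actions
```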
Business Logic. The purpose of the Service Graph Editor is to author and maintain application templates of cloud-native components. These components can be linked to each other in order to form a directed acyclic graph (DAG). Maintaining a serialized format of a DAG is considered the norm in all modern cloud-native orchestration tools. For example, Docker-compose and Kubernetes maintain their own ‘proprietary’ DAG formats, namely the docker-compose format and the helm-chart format respectively. The primary goal of the Service Graph Editor will be to author abstracted DAG representations of cloud-native applications that are backwards compatible with de-facto industrial formats (i.e. docker-compose, helm-charts). In addition to the primary goal, authored templates may be accompanied by several requirements in the form of constraints. It is the goal of the Service Graph Editor to capture and formalize these constraints. In general, the constraints can be grouped into deployment constraints, resource constraints, operational constraints and security constraints. Deployment constraints may refer to location requirements, device characteristics (i.e. existence of sensors and actuators), initial sizing of workers, etc. Resource constraints may refer to the amount of memory, vCPUs, storage, IO throughput, virtualization extensions, etc. Operational constraints may refer to minimum QoS thresholds that are considered acceptable, while security constraints may refer to the attestation capabilities that are available. Each set of constraints entails its own extensible formal expression language. Finally, the Analytics Editor will be used for the creation or editing of analytic queries and the declaration of various optimization strategies and constraints in regard to query execution and data movement.
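An abstracted service graph with per-component constraints could be modelled as below. The class and field names are invented for illustration; a DAG check via Kahn's algorithm is included because the editor must reject cyclic graphs.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Component:
    name: str
    resource: dict = field(default_factory=dict)    # e.g. {"vcpus": 2, "memory_mb": 512}
    deployment: dict = field(default_factory=dict)  # e.g. {"location": "edge"}

@dataclass
class ServiceGraph:
    components: Dict[str, Component] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)  # (upstream, downstream)

    def add(self, component: Component) -> None:
        self.components[component.name] = component

    def link(self, upstream: str, downstream: str) -> None:
        self.edges.append((upstream, downstream))

    def is_acyclic(self) -> bool:
        """Kahn's algorithm: a valid service graph must be a DAG."""
        indegree = {name: 0 for name in self.components}
        for _, downstream in self.edges:
            indegree[downstream] += 1
        queue = [n for n, d in indegree.items() if d == 0]
        visited = 0
        while queue:
            node = queue.pop()
            visited += 1
            for upstream, downstream in self.edges:
                if upstream == node:
                    indegree[downstream] -= 1
                    if indegree[downstream] == 0:
                        queue.append(downstream)
        return visited == len(self.components)
```

Such an in-memory model would then be serialized into the backwards-compatible formats (docker-compose, helm-charts) the text mentions.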
Business Logic. The operational goal of the Pre-Deployment Constraint Solver is to facilitate the identification of an optimal placement plan for a service graph. As already discussed, a service graph is authored by the Service Graph Editor and may be accompanied by inherent constraints and/or Design-Time policies. Thus, each service graph contains some pre-deployment soft and hard constraint requirements, such as vCPUs, RAM, storage, network bandwidth, collocation requirements, etc. At the same time, each candidate-for-deployment node offers resources (vCPUs, RAM, storage, etc.), as well as other characteristics that formulate the topology linkages between nodes, such as network throughput, network delay, etc. The role of the component is to transform all materialized constraints into a formal mathematical optimization problem and trigger the solution identification. The actual solution is a placement plan for the initial deployment, which contains the information regarding the Node at which each component of the Service Graph will be placed, based on the provided constraints. It could be argued that optimization problems are, in principle, computationally intensive, especially if many soft constraints have to be combined. This is the reason why the goal is not to find the mathematically/theoretically optimal solution (which may take minutes or hours) but to identify a near-optimal solution without compromising responsiveness. Furthermore, it should be clarified that the optimization problem becomes even more difficult when parallel deployments (with different policies) compete for shared resources. That is the reason why the constraint solver has to tackle a so-called online problem: “online optimization” is a field of optimization theory that deals with optimization problems having no or incomplete knowledge of the future [39].
Since multi-app deployment is a de-facto functional requirement of the project, special emphasis will be put on the trade-off between time and complexity.
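To make the near-optimal-but-responsive trade-off concrete, a first-fit-decreasing heuristic over a single hard constraint (vCPU capacity) is sketched below. It is one of many possible heuristics, not the project's actual solver, and it deliberately ignores soft constraints, topology and policies.

```python
def place(graph_components, nodes):
    """Greedy first-fit-decreasing placement sketch: a fast near-optimal
    heuristic rather than an exact (and expensive) optimization.
    `graph_components` maps component name -> vCPU demand;
    `nodes` maps node name -> vCPU capacity."""
    remaining = dict(nodes)
    plan = {}
    # Place the largest demands first to reduce fragmentation.
    for component, demand in sorted(graph_components.items(), key=lambda kv: -kv[1]):
        for node, free in sorted(remaining.items(), key=lambda kv: -kv[1]):
            if free >= demand:
                plan[component] = node
                remaining[node] -= demand
                break
        else:
            return None  # the hard constraints cannot be satisfied
    return plan
```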
Business Logic. The Service Provider is the main identified user role that interacts with the Analytics Engine. Nevertheless, analytic services also support many other components of the system, such as resource management, which need composite monitoring metrics to be functional. The Analytics Engine will provide three main features to ease the analysis of monitoring metrics, namely:
1. A declarative language that facilitates the composition of high-level analytics from monitoring sources. The user-friendly language decreases the complexity of the analysis, while the capability for ad-hoc queries can rapidly increase the productivity of the service provider.
2. A set of high-level optimization objectives and constraints (e.g., budget cap) for the efficient execution of analytics. In a nutshell, a user will be able to provide hints that tune the data transfer policies, the results’ accuracy, performance, etc.
3. Real-time access to the computed analytic insights through an intuitive dashboard. A wide range of plots and graphs will make the deployment performance indicators much more understandable, comparable, and self-explanatory.
In Figure 16 we present a high-level architectural flow of the Analytics Engine which fulfils the aforementioned functional requirements. To illustrate the core business logic of the Analytics Engine, let us consider a scenario where a Service Provider wants to extract analytics from a fog application (e.g., to calculate the average energy consumption of a drone swarm over the last hour). To achieve this, the Service Provider needs to construct a new analytic query through the Analytics/Service Graph Editor (1). This module allows the Service Provider to compose or edit analytic queries and declare various optimization strategies and constraints with regard to query execution and data movement.
Next, from the Analytics/Service Graph Editor, (2) the submitted analytic query goes through the Analytics Enabler, which provides an API to create, update and remove analytic queries in a uniform and consistent way. The main responsibility of this module is to construct an analytic execution plan for the efficient calculation of the respective query on the underlying fog infrastructure. Upon query submission, the module parses the query and constructs an abstract representation, known as an Abstract Syntax Tree (AST). An AST is a tree structure whose nodes represent the grammatical rules of a language and whose leaves represent its symbols. If the analytic query is successfully transla...
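The parse step can be illustrated with a toy version of a declarative query. The syntax `avg(energy) last 1h` and the resulting nested-dict AST are invented for this example; the actual RAINBOW language and its grammar are defined elsewhere in the project.

```python
import re

def parse_query(query: str) -> dict:
    """Toy parser for a hypothetical declarative syntax of the form
    '<agg>(<metric>) last <n><unit>'. It returns a nested-dict AST whose
    nodes correspond to grammar rules and whose leaves are the symbols."""
    match = re.fullmatch(r"(\w+)\((\w+)\)\s+last\s+(\d+)([smhd])", query.strip())
    if match is None:
        raise ValueError(f"cannot parse: {query!r}")
    agg, metric, amount, unit = match.groups()
    return {
        "type": "query",
        "aggregate": {"type": "agg", "op": agg,
                      "source": {"type": "metric", "name": metric}},
        "window": {"type": "window", "amount": int(amount), "unit": unit},
    }
```

The drone-swarm scenario above would then map to `parse_query("avg(energy) last 1h")`, and the Analytics Enabler would walk the resulting tree to build the execution plan.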