Business Logic Sample Clauses

Business Logic. The purpose of the Service Graph Editor is to author and maintain application templates of cloud-native components. These components can be linked to each other in order to formulate a directed acyclic graph (DAG). Maintaining a serialized format of a DAG is the norm in modern cloud-native orchestration tools. For example, Docker Compose and Kubernetes maintain their own ‘proprietary’ DAG formats, namely the docker-compose format and the helm-chart format respectively. The primary goal of the Service Graph Editor will be to author abstracted DAG representations of cloud-native applications that are backwards compatible with de-facto industrial formats (i.e. docker-compose, helm-charts). In addition to the primary goal, authored templates may be accompanied by several requirements in the form of constraints. It is the goal of the Service Graph Editor to capture and formalize these constraints. In general, these constraints can be grouped into deployment constraints, resource constraints, operational constraints and security constraints. Deployment constraints may refer to location requirements, device characteristics (i.e. existence of sensors and actuators), initial sizing of workers etc. Resource constraints may refer to the amount of memory, vCPUs, storage, IO throughput, virtualization extensions etc. Operational constraints may refer to minimum QoS thresholds that are considered acceptable, while security constraints may refer to the attestation capabilities that are available. Each set of constraints entails its own extensible formal expression language. Finally, the Analytics Editor will be used for the creation or editing of analytic queries and the declaration of various optimization strategies and constraints with regard to query execution and data movement.
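A minimal sketch of how such an abstracted DAG with attached constraints could be represented and validated before translation to docker-compose or helm-chart formats; the field names (depends_on, constraints, the constraint categories) are illustrative assumptions, not the editor's actual schema:

```python
# Minimal sketch (not the actual Service Graph Editor schema): an abstracted
# service graph with per-component constraints, validated as a DAG.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    image: str
    depends_on: list = field(default_factory=list)   # edges of the DAG
    constraints: dict = field(default_factory=dict)  # deployment/resource/operational/security

def is_dag(components):
    """Return True if the 'depends_on' edges contain no cycle (three-colour DFS)."""
    graph = {c.name: c.depends_on for c in components}
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {name: WHITE for name in graph}

    def visit(node):
        colour[node] = GREY
        for dep in graph.get(node, []):
            if colour.get(dep) == GREY:                       # back edge -> cycle
                return False
            if colour.get(dep) == WHITE and not visit(dep):
                return False
        colour[node] = BLACK
        return True

    return all(visit(n) for n in graph if colour[n] == WHITE)

# Illustrative usage: a two-component graph with hypothetical constraint values.
graph = [
    Component("db", "postgres:13",
              constraints={"resource": {"memory": "2Gi", "vcpus": 2}}),
    Component("api", "acme/api:1.0", depends_on=["db"],
              constraints={"deployment": {"location": "edge"},
                           "operational": {"p95_latency_ms": 50}}),
]
assert is_dag(graph)
```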
Business Logic. Before we delve into the details of the Policy Editor, we will demystify the concept of “Policy”, which often tends to be overloaded. Figure 8 provides an overview of the Policy concept as it will be materialized in the frame of the project.
Business Logic. The operational goal of the Pre-Deployment Constraint Solver is to facilitate the identification of an optimal placement plan for a service graph. As already discussed, a service graph is authored by the Service Graph Editor and may be accompanied by inherent constraints and/or Design-Time policies. Thus, each service graph carries some pre-deployment soft and hard constraint requirements such as vCPUs, RAM, storage, network bandwidth, collocation requirements, etc. At the same time, each candidate-for-deployment node offers some resources (vCPUs, RAM, storage, etc.) as well as some further characteristics that formulate the topology linkages between nodes, such as network throughput, network delay etc. The role of the component is to transform all materialized constraints into a formal mathematical optimization problem and trigger the solution identification. The actual solution is a placement plan for the initial deployment, which will contain the information regarding which node each component of the Service Graph will be placed on, based on the provided constraints. It could be argued that optimization problems are in principle computationally intensive, especially if many soft constraints have to be combined. This is the reason why the goal is not to find the mathematically/theoretically optimal solution (which may take minutes or hours) but to identify a near-optimal solution without compromising responsiveness. Furthermore, it should be clarified that the optimization problem becomes even harder when parallel deployments (with different policies) compete for shared resources. That is the reason why the constraint solver has to tackle a so-called online problem. “Online optimization” is a field of optimization theory that deals with optimization problems having no or incomplete knowledge of the future [39]. Since multi-app deployment is a de-facto functional requirement of the project, special emphasis will be put on the trade-off between time and complexity.
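A toy sketch of the kind of near-optimal heuristic alluded to above: a first-fit-decreasing placement that enforces hard vCPU/RAM constraints and uses a simple best-fit score as a stand-in for soft constraints. The dictionary keys and scoring are illustrative assumptions, not the project's actual solver:

```python
# Toy near-optimal placement heuristic (first-fit decreasing): hard constraints
# (vCPU/RAM capacity) must hold; a best-fit score stands in for soft constraints.
def place(components, nodes):
    """components: [{'name', 'vcpus', 'ram'}], nodes: [{'name', 'vcpus', 'ram'}].
    Returns {component_name: node_name}; raises if a hard constraint cannot be met."""
    free = {n["name"]: {"vcpus": n["vcpus"], "ram": n["ram"]} for n in nodes}
    plan = {}
    # Place the most demanding components first (classic FFD ordering).
    for c in sorted(components, key=lambda c: (c["vcpus"], c["ram"]), reverse=True):
        candidates = [n for n, f in free.items()
                      if f["vcpus"] >= c["vcpus"] and f["ram"] >= c["ram"]]
        if not candidates:
            raise RuntimeError(f"no node satisfies hard constraints of {c['name']}")
        # Best fit: pick the candidate node with the least leftover capacity.
        best = min(candidates, key=lambda n: (free[n]["vcpus"] - c["vcpus"]) +
                                             (free[n]["ram"] - c["ram"]))
        free[best]["vcpus"] -= c["vcpus"]
        free[best]["ram"] -= c["ram"]
        plan[c["name"]] = best
    return plan
```

An online variant would re-run this loop (or an incremental version of it) every time a new deployment request arrives, using the residual capacities left by earlier placements.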
Business Logic. The Deployment Manager is the component that will undertake the task of materializing a placement plan into an actual placement. To do so, the Deployment Manager has to be fully aware of the available resources, their state and the proper instructions/indications for the actual placement. The knowledge of the resource state derives from the interaction of the Deployment Manager with the Resource Manager. However, its ability to perform deployments in various virtualization environments will derive from a built-in capability to interact with various virtualization endpoints. Hence, the Deployment Manager will rely on an abstract interface for basic management operations over some of the de-facto virtualization APIs, along with a set of reference implementations for the most notable ones. It has to be clarified that the component's behavior should be transactional, i.e. the entire deployment should either be performed successfully as a whole or rolled back as a whole. In order to achieve this, the component has to interact with the core orchestration component, which is responsible (among other things) for maintaining a consistent view of which application artefact is deployed on which resource identifier. During the implementation of the component, several roll-back policies will be supported.
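An illustrative sketch, assuming a generic driver abstraction rather than the actual RAINBOW interface: an abstract class per virtualization endpoint plus an "all-or-nothing" deployment loop that rolls back on the first failure:

```python
# Illustrative sketch (not the actual RAINBOW interface): an abstract driver for
# virtualization endpoints plus a transactional deployment loop.
from abc import ABC, abstractmethod

class VirtualizationDriver(ABC):
    """One reference implementation would exist per de-facto virtualization API."""

    @abstractmethod
    def deploy(self, artefact: dict, node_id: str) -> str:
        """Deploy an artefact on a node and return a handle for later roll-back."""

    @abstractmethod
    def undeploy(self, handle: str) -> None:
        """Remove a previously deployed artefact."""

def deploy_plan(driver: VirtualizationDriver, plan):
    """plan: list of (artefact, node_id) pairs; roll back everything on any failure."""
    handles = []
    try:
        for artefact, node_id in plan:
            handles.append(driver.deploy(artefact, node_id))
    except Exception:
        # Simplest roll-back policy: undo what was deployed so far, in reverse order.
        for handle in reversed(handles):
            driver.undeploy(handle)
        raise
    return handles
```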
Business Logic. It could be argued that ‘orchestration’ as a concept is one of the most challenging goals of the RAINBOW vision, and the main reasons boil down to the following facts:
• RAINBOW embraces the “federated scheduling” concept, where fog resources and cloud resources will be treated as a uniform pool of resources that are able to host cloud-native service graphs.
• This pool of resources can be shared by more than one service graph simultaneously (i.e. multi-app deployment), with completely different activated policies; hence there is a problem of efficient resource sharing.
• Two service graphs that employ both fog and cloud resources simultaneously may require contradictory management actions in order to achieve their Service Level Objectives.
All these goals will be materialized by the Orchestration Lifecycle Manager. More specifically, the component has the following main responsibilities:
Business Logic. The goal of the Resource Manager is to automatically track the available and allocated resources on every node that is part of the deployment, either at the cloud level or the fog level. The tracked information includes:
• CPU
• Memory
• Ephemeral storage and special hardware (GPU, TPM).
All resource types should be automatically announced to the scheduling component. In addition to the built-in resource types, the Resource Manager will incorporate a standard way to onboard new types using labels and annotations. These will be addressed as extended resources. The availability and allocations of the built-in resource types are automatically managed by the Orchestration component in conjunction with the employed scheduler; hence no custom business logic is required. However, for custom resources, such as battery capacity, and for momentary resource usage, extra business logic is needed to keep this information up to date. Extended resources [41] allow cluster administrators to advertise node-level resources that would otherwise be unknown to Kubernetes, and therefore enable the assignment of custom resources (e.g., battery capacity) to a node. The orchestration itself is unaware of the semantics of a particular extended resource, but such resources can be used in custom scheduler plugins. They can also take part in complex SLO expressions. On the other hand, labels [42] are key/value pairs that are attached to objects. Therefore, labels can be assigned to nodes and will be natively supported by the employed scheduler for determining the set of eligible nodes for a deployment, as a form of metadata. A drone could, for example, be given the label “vehicle-type: uav”, while the label would be omitted from a node describing a stationary component, such as a router. Beyond the intra-cluster activities, the Resource Manager will also be responsible for extending the raw (underlying) computational resources. Thus, a series of actions and communications through the respective APIs will be made in order to provision new nodes, reserve, instantiate and deploy RAINBOW agents, and finally utilize them for the respective Service Graph. The fog-specific resource information and the actual resource usage are updated as part of resource monitoring (see Section 5.5). A part of this information (e.g., battery level) is relevant for scheduling and thus needs to be updated in the respective metadata section of the node. Resource information, which is not relevant for scheduling, is stored by the ...
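A small sketch using the official Kubernetes Python client to illustrate the label mechanism described above: attach a scheduling-relevant label to a node and read the resources the scheduler already knows about. The node name and label key/value are illustrative, not RAINBOW's actual conventions:

```python
# Sketch with the official Kubernetes Python client: label a node so the
# scheduler can use the metadata, and inspect its built-in allocatable resources.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Attach node-level metadata as a label (merge patch); hypothetical node name.
v1.patch_node("drone-node-1", {"metadata": {"labels": {"vehicle-type": "uav"}}})

# Built-in resource types are announced automatically; read what the scheduler sees.
node = v1.read_node("drone-node-1")
print(node.status.allocatable)     # e.g. {'cpu': '4', 'memory': '3870MiKi', ...}
```

Extended resources such as battery capacity would additionally require patching the node status with a custom capacity entry, which is where the extra business logic mentioned above comes in.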
Business Logic. The aim of the RAINBOW Resource and Application-level Monitoring is to collect and make readily available monitoring data regarding resource utilization from the underlying fog infrastructure (e.g., compute, memory, network) and the deployed fog services' behaviour from tailored application-level metrics (e.g., throughput, active users) via the Cluster-head Fog Node. This will enable core RAINBOW services (e.g., Orchestration, Analytics Service) to detect and promptly notify fog service operators of recurring behaviour patterns and potential performance inefficiencies, as well as to dynamically reconfigure the underlying resource allocation and service execution to meet the user-requested quality of service. In the subsequent sections we shed light on the core business logic of the RAINBOW Resource and Application-level Monitoring components and describe how they interact with other modules of the RAINBOW platform. Fog service monitoring will be provided through RAINBOW as a service (XxxX), thus easing, for both fog service developers and operators, the pains accompanying the deployment and management of in-house monitoring infrastructure (e.g. scalability, failures, security risks). In turn, this setting allows the monitoring process to be decoupled from cloud and fog provider dependencies, so that monitoring is neither disrupted nor requires a significant amount of reconfiguration when a fog service must span multiple availability zones and/or cloud sites. Although historic and real-time monitoring data will be centrally accessible by tenants through the RAINBOW Dashboard, and subsequently the RAINBOW API, internally RAINBOW Monitoring will collect and process monitoring data/requests in a distributed fashion by featuring a dedicated Monitoring Service deployed, in place, over the overlay mesh network inter-connecting the provisioned nodes of the running fog service. A core feature of the RAINBOW scalable and flexible monitoring stack will be its support for interoperable fog node utilization and application behaviour metric collection. Specifically, RAINBOW Monitoring will provide one unified and interoperable metric interface that can be fed with monitoring data by different collection mechanisms (Probes). Various Probes will be developed and provided by RAINBOW, including a Probe for Docker container monitoring, security metric extraction and mesh network monitoring. A Probe template embedding monitoring metric abstractions will be provided so that fog servi...
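A hypothetical sketch of what such a Probe template and unified metric interface could look like; class and method names are assumptions for illustration, not the RAINBOW Probe API:

```python
# Hypothetical Probe template: every Probe implements a common collect() method
# feeding a single unified metric interface.
import time
from abc import ABC, abstractmethod

class Probe(ABC):
    """Base template a fog service developer would specialize for custom metrics."""

    @abstractmethod
    def collect(self) -> dict:
        """Return a flat mapping of metric name -> numeric value."""

class DockerStatsProbe(Probe):
    def collect(self) -> dict:
        # A real probe would query the Docker stats API here; values are placeholders.
        return {"cpu_percent": 12.5, "memory_mb": 310.0}

def monitoring_loop(probes, report, period_s=10):
    """Periodically gather samples from all probes and push them upstream
    (e.g. towards the Cluster-head Fog Node) via the supplied report callback."""
    while True:
        sample = {"timestamp": time.time()}
        for probe in probes:
            sample.update(probe.collect())
        report(sample)
        time.sleep(period_s)
```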
Business Logic. The Mesh Routing protocol stack is a library that is responsible for the secure on-boarding and operation of a consistent network overlay among the fog nodes, and for the selection of a cluster-head which will represent an entire physical deployment. Joining a mesh network as a trusted entity is a rather difficult problem. The difficulty lies in the fact that, in order for a node to join a mesh environment, several high-level application protocols have to be executed. All these protocols require plain network connectivity; yet connectivity per se requires addressability and routability. In a mesh environment, network addresses are not statically configured since the risk of conflict is high. Therefore, plain IP assignment protocols cannot work. Hence it is the purpose of the Mesh Protocol Stack to:
• Define an address automatically, with minimum chance of collision
• Use this address to join a peer-to-peer network with “limited access”, since the existing trusted network has to attest the new node
• Execute the attestation protocol in order to be accepted in a security overlay
• Take part in the selection process of a cluster representative (cluster-head) which will be used to offload several computational and communication tasks.
As already mentioned, a fog deployment will be represented by a cluster-head. A critical question is how this cluster-head is selected. There are two possible answers to this question. The first answer is to use a centralized approach, i.e. one node is considered a supernode and has an overview of all ‘live’ nodes in the mesh network. Every node that joins the network has to be statically configured with the address of the supernode. Since in a mesh network there is no such thing as centralized routing, the joining node has to find the path towards the supernode and notify it of its existence. Upon this notification, the supernode hashes the joining node's node-id and assigns it a relative position in the logical circular topology. This approach is rather effective if low-mobility patterns are employed, since the cost of route identification towards the supernode is confined. However, this approach has some severe drawbacks in case of excessive dynamicity. More specifically, if a network splits into two subnetworks, the sub-network that does not contain the supernode will continue to operate until a new node joins. The new node (which is statically configured to interact with the ‘absent’ super-node) will never stab...
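A brief sketch of one common technique for the first bullet above (self-assigning an overlay address with negligible collision probability by hashing a stable node identity into an IPv6 Unique Local Address); this is an assumption about how such an address could be derived, not RAINBOW's actual protocol:

```python
# Illustrative address self-assignment: hash a node's public key (or any stable
# unique identifier) into an IPv6 ULA, so no central assignment is needed.
import hashlib
import ipaddress

def derive_overlay_address(node_public_key: bytes) -> ipaddress.IPv6Address:
    digest = hashlib.sha256(node_public_key).digest()
    # fd00::/8 ULA prefix + 120 bits taken from the hash of the node identity.
    return ipaddress.IPv6Address(bytes([0xfd]) + digest[:15])

# Two distinct identities collide only with probability ~2^-120.
print(derive_overlay_address(b"node-A-public-key"))
print(derive_overlay_address(b"node-B-public-key"))
```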
Business Logic. The RAINBOW Multi-domain Sidecar Proxy has several responsibilities, such as interacting with the orchestrator in order to report its state and to execute possible adjustments enforced by the orchestrator. The Sidecar Proxy is also responsible for metrics extraction and application-level monitoring measurements, which are passed to the respective cluster-head and then collected at a centralized point in order to be analysed. The monitoring capabilities of the Sidecar Proxy can be multiple, and their extension will be based on a plugin-based architecture, so that as more metrics are needed, more plugins can be added. Once collected, these metrics will be fed to the Cluster-head Fog Node, in order then to be passed to the needed agents and to drive the corresponding decisions based on the provided SLOs.
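A hypothetical sketch of the plugin-based extension point: new metric plugins register themselves via a decorator, so adding a metric never requires changing the proxy itself. Names and values are illustrative assumptions:

```python
# Hypothetical plugin registry for the Sidecar Proxy's metric plugins.
METRIC_PLUGINS = {}

def metric_plugin(name):
    """Register a zero-argument callable that returns one metric value."""
    def register(func):
        METRIC_PLUGINS[name] = func
        return func
    return register

@metric_plugin("request_rate")
def request_rate():
    return 42.0      # a real plugin would read this from the proxied service

@metric_plugin("queue_depth")
def queue_depth():
    return 3

def snapshot():
    """Collected by the proxy and forwarded to its Cluster-head Fog Node."""
    return {name: plugin() for name, plugin in METRIC_PLUGINS.items()}
```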
Business Logic. The RAINBOW security and trust enablers aspire to provide enhanced remote attestation mechanisms towards the secure composability of fog environments, encompassing a broad array of mixed-criticality services and applications. As described in Section 3, the main goal is to allow the creation of privacy- and trust-aware service graph chains (managed by the Orchestration Lifecycle Manager and established by the Deployment Manager) through the provision of S-ZTP functionalities: fog nodes adhere to the compiled attestation policies by providing verifiable evidence on their configuration integrity and correctness. In terms of design, as will also be described in D2.1 [43], the focus of RAINBOW Enhanced Remote Attestation is on cloud-native component (denoted as virtual function, VF) Configuration Integrity Verification (CIV) and on secure enrolment. CIV is the process by which a fog node (i.e., a node hosting a VF) can report in a trusted way (at any requested time) the current status of its configuration. It entails the provision of integrity measurements and guarantees during both the deployment and the operation of a VF, covering system integrity at the deployment phase (by the RAINBOW Orchestrator) but also ensuring the integrity of the loaded components during their runtime execution.
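A greatly simplified illustration of the measurement-and-comparison step behind configuration integrity verification; a real CIV scheme would anchor these measurements in a hardware root of trust (e.g. TPM quotes), which this sketch deliberately omits:

```python
# Simplified CIV illustration: hash the configuration artefacts a VF has loaded
# and compare against reference ("golden") measurements from the attestation policy.
import hashlib

def measure(paths):
    """Produce a measurement list: one SHA-256 digest per configuration artefact."""
    report = {}
    for path in paths:
        with open(path, "rb") as f:
            report[path] = hashlib.sha256(f.read()).hexdigest()
    return report

def verify(report, golden):
    """Return the artefacts whose current measurement deviates from the expected one."""
    return [path for path, digest in golden.items() if report.get(path) != digest]

# Example (hypothetical path and digest): an empty result means the node's reported
# configuration matches the policy.
# golden = {"/etc/vf/config.yaml": "<expected sha256>"}
# deviations = verify(measure(golden.keys()), golden)
```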