Scenario Description Sample Clauses

Scenario Description. In Xxxxxxxx’s work [22] a decentralized payment system is envisioned. The essence is to have a consortium of unknown participants achieve consensus [26]. To achieve this, Bitcoin uses a public permissionless blockchain, allowing anyone to participate. Each participant owns one or more Bitcoin accounts. An account is identified by a public cryptographic key and managed by the corresponding private key. Each account may hold a number of tokens, which represent a value and can be seen as ‘coins’. Coin ownership is transferred by transactions. A transaction, in principle, contains the account of the sender, the account of the receiver, the number of coins transferred, and the signature of the sender. Transactions created by participants are collected by other participants called miners. These miners independently solve a moderately hard cryptographic puzzle. The miner that solves the puzzle first obtains the privilege to propose a new state of accounts, based on the transactions collected. A miner proposes a new state by presenting a sequence of transactions called a block; note that only miners may write to the blockchain. Each block holds the hash of its previous block, linking all blocks into a blockchain.
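The following is a minimal sketch of this structure, assuming a simplified proof-of-work puzzle; the Transaction and Block classes, their field names, and the difficulty parameter are illustrative, not Bitcoin's actual wire format.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str      # sender's account (public key)
    receiver: str    # receiver's account (public key)
    amount: int      # number of coins transferred
    signature: str   # produced with the sender's private key

@dataclass
class Block:
    prev_hash: str   # hash of the previous block, linking blocks into a chain
    transactions: list = field(default_factory=list)
    nonce: int = 0

    def hash(self) -> str:
        payload = json.dumps(
            [self.prev_hash, [vars(t) for t in self.transactions], self.nonce],
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def mine(block: Block, difficulty: int = 4) -> Block:
    """Moderately hard puzzle: search for a nonce whose block hash has
    `difficulty` leading zeros. The first miner to find one proposes the block."""
    while not block.hash().startswith("0" * difficulty):
        block.nonce += 1
    return block
```

Because each block commits to the hash of its predecessor, altering any past transaction invalidates the hashes of every later block.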
Scenario Description. In this third scenario a public permissioned blockchain called Hyperledger Fabric by IBM [5] is used. This blockchain tracks certificates in a supply chain of table grapes. In this scenario [11], a farmer in South Africa produces organic grapes and presents such a claim to a certification authority. This authority issues a certificate to the farm, allowing the farm to certify its grapes. Grapes are stored in boxes, which are identified by a unique barcode. To ensure a correct certification process, certification authorities are accredited by an accreditation authority. The certification authority stores the certificate it receives from an accreditation authority on the blockchain. Additionally, details of the certification authority are stored on the blockchain, so that anyone may see which party certified a farm. This entire process is audited. An auditor may revoke the certificate issued by the certification authority, for example, after the discovery of unauthorized pesticides [31] being used in the production of the fruits. An auditor may also revoke accreditations made by the accreditation authority. Both revocation types are recorded on the blockchain. The grape boxes are shipped to resellers in Europe, after which the grapes are sold to supermarkets, and eventually to customers. Since it is unknown who may purchase the grapes, public verifiability is required. This allows all parties involved to query the blockchain for the validity of the organic certificate. Change of ownership is also recorded on the blockchain, so the provenance of the labeled boxes can be determined. From this description we observe that there are multiple, known writers. However, these writers are not trusted, as can be observed from the cascading audit trail from farmer to auditor.
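A minimal sketch of this cascading certificate model follows, assuming a plain key-value ledger abstraction in the spirit of Fabric's world state; the Ledger class, record fields, and function names are ours, not the Fabric chaincode API.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Key-value state standing in for the blockchain's world state."""
    state: dict = field(default_factory=dict)

    def put(self, key: str, value: dict):
        self.state[key] = value

    def get(self, key: str) -> dict:
        return self.state.get(key, {})

def issue_certificate(ledger, cert_id, farm, authority, accreditation_id):
    # An authority may only certify while its own accreditation is valid.
    accreditation = ledger.get(accreditation_id)
    if accreditation.get("revoked", True):
        raise PermissionError("certification authority is not accredited")
    ledger.put(cert_id, {"farm": farm, "authority": authority,
                         "accreditation": accreditation_id, "revoked": False})

def revoke(ledger, record_id, auditor):
    # Auditors may revoke certificates and accreditations alike;
    # the revocation itself is recorded on the ledger.
    record = ledger.get(record_id)
    record["revoked"] = True
    record["revoked_by"] = auditor
    ledger.put(record_id, record)

def is_valid(ledger, cert_id) -> bool:
    """Public verifiability: anyone can query certificate and accreditation."""
    cert = ledger.get(cert_id)
    if not cert or cert["revoked"]:
        return False
    return not ledger.get(cert["accreditation"]).get("revoked", True)
```

The is_valid check shows the cascade: a farm's certificate is only as good as the accreditation behind the authority that issued it.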
Scenario Description. The scenarios considered in our analysis combine the traditional and new TST management rules with different water trading alternatives, yielding five scenarios:
• Scenario 1a: traditional TST management rule without water trading;
• Scenario 1b: traditional TST management rule with spot water purchases in drought periods;
• Scenario 2a: new TST management rule without water trading;
• Scenario 2b: new TST management rule with spot water purchases in drought periods;
• Scenario 2c: new TST management rule with the proposed option contract (different parameterizations).
¹ The high water values in the Xxxxxx are due in part to the concentration of horticultural crops and greenhouses, and also to the widespread modernization of irrigation systems (Calatrava and Xxxxxxxx-Xxxxxxxx 2012). The agricultural sector that depends on the volumes transferred from the Tagus basin contributes €1,268 million to the GDP of the Xxxxxx basin (PwC 2013). Cancellation of the TST would reduce that GDP by close to 7.1% (Xxxxxx 2008).
Scenario Description. The robotics scenario consists of a structured environment of width W and depth D, initially unknown to the robots. The structure of the environment mimics that of a building floor. A team of R robots called rescuers (Fig. 1(a)) is deployed in a special area called the deployment area within the environment. The size of the deployment area is always assumed sufficient to house all the robots.
We imagine that some kind of disaster has happened, and the environment is obstructed in places by debris (Fig. 1(b)) that the robots can move. In addition, a portion of the environment is dangerous for robot navigation due to the presence of radiation (Fig. 1(c)). We assume that prolonged exposure to radiation damages the robots: short-term exposure increases a robot's sensory noise, while long-term damage eventually disables the robot completely. To avoid damage, the robots can use debris to build a protective wall, thus reaching new areas of the environment. Damage is simulated through a function dr(t) that increases with exposure time t from 1 to 10. The value of dr(t) is used as a scale factor for the natural sensory noise of a robot, until it reaches the value 10, which corresponds to a disabled robot.
We imagine that a number V of victims (Fig. 1(d)) are trapped in the environment and must be rescued by the robots. Each victim suffers a different injury characterized by a gravity Gv. The health hv(t; Gv) of each victim, initially in the range (0,1], deteriorates over time; when hv = 0, the victim is dead. The robots must calculate a suitable rescuing behavior that maximizes the number S of victims rescued. This can be seen as a problem of distributed consensus. A victim is considered rescued when it is deposited in the deployment area alive. In addition, each victim has a different mass Mv: the higher the mass, the larger the number of robots required to carry it to the deployment area.
To perform its activities, a robot r must take into account that it has limited energy er. As the robot works, its energy level decreases according to a function er(t). If the energy reaches 0, the robot switches off. Whenever a robot goes (or is transported) to the deployment area, its energy is restored. A reference of all the symbols and their meaning is reported in Table 1.
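The following is a minimal sketch of these dynamics. The scenario fixes only the ranges and monotonicity, so the linear ramp for dr(t), the exponential decay for hv(t; Gv), and the linear energy drain are our assumptions.

```python
import math

def damage(t: float, t_disable: float = 100.0) -> float:
    """Sensory-noise scale factor d_r(t): grows with exposure time t from 1
    to 10 (assumed linear). At 10 the robot counts as disabled."""
    return min(1.0 + 9.0 * t / t_disable, 10.0)

def health(t: float, gravity: float) -> float:
    """Victim health h_v(t; G_v) in (0, 1]: starts at 1 and deteriorates
    over time, faster for higher injury gravity G_v (assumed exponential
    decay; the victim counts as dead once h_v is negligibly small)."""
    return math.exp(-gravity * t)

def energy(e0: float, t: float, drain_rate: float = 0.5) -> float:
    """Robot energy e_r(t): decreases as the robot works (assumed linear);
    at 0 the robot switches off. Returning to the deployment area restores
    the energy to e0."""
    return max(e0 - drain_rate * t, 0.0)

# Example: a victim with gravity 0.05 drops below 10% health around t = 46,
# so rescuers must weigh radiation exposure against rescue speed.
```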
Scenario Description. The idea behind the scenario we discuss here is that of an autonomic cloud computing platform: a distributed software system able to execute applications in the presence of difficulties such as leaving and joining nodes, fluctuating load, and differing application requirements. The cloud is based on voluntary computing and uses peer-to-peer technology to provide a platform-as-a-service. We call this cloud the Science Cloud Platform (SCP), since it is intended to run in an academic environment (although this is not crucial for the approach). The interaction of these three aspects is discussed in the next section. An illustrative picture of how such a cloud may be composed is shown in Figure 11.
In our cloud scenario, we assume the following properties of nodes:
• Nodes may come and go, with or without warning.
• Node load may change based on outside criteria.
• Nodes have vastly different hardware, including CPU speed, available memory, and additional hardware such as specialized graphics processors. A node may also have a different security level.
With regard to the applications, we assume that:
• An application has hardware requirements that constrain where it can and wants to run (CPU speed, available memory, other hardware).
• An application is not a batch task; rather, it has a user interface used directly by clients in a request-based fashion.
The main scenario of the science cloud is based on what the cloud is supposed to do: run applications, and continue running them as nodes and load change. The document [ASC12] lists three smaller scenarios which we combine here into a general scenario describing how the cloud manages adaptation. On top of this basic scenario, other scenarios may be imagined which improve specific aspects, such as distributing load based on particular kinds of data or improving response times.
The basic cloud scenario focuses on application resilience, load distribution and energy saving. In this scenario, we imagine apps being deployed in the cloud which need to be started on an appropriate node based on their SLA (requirements), such as the CPU speed of the node they are to run on, memory requirements, or similar things; a sketch of such SLA-based placement follows below. Once an app is started, problems may occur, such as a node no longer being able to execute an app due to high load (in which case the app must be moved to another suitable node).
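A minimal sketch of SLA-based placement and adaptation, assuming dictionary-simple requirements; the Node and App classes, their fields, and the overload threshold are illustrative, not the SCP API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_ghz: float
    mem_gb: float
    load: float  # current load in [0, 1]

@dataclass
class App:
    name: str
    min_cpu_ghz: float
    min_mem_gb: float

def place(app: App, nodes: list) -> Node | None:
    """Pick the least-loaded node satisfying the app's SLA, or None."""
    candidates = [n for n in nodes
                  if n.cpu_ghz >= app.min_cpu_ghz and n.mem_gb >= app.min_mem_gb]
    return min(candidates, key=lambda n: n.load, default=None)

def adapt(app: App, current: Node, nodes: list, overload: float = 0.9) -> Node:
    """If the current node is overloaded, move the app to another suitable
    node; otherwise keep it where it is."""
    if current.load < overload:
        return current
    return place(app, [n for n in nodes if n is not current]) or current
```

The same place function handles both initial deployment and re-placement after a node leaves, which is what makes the cloud's adaptation loop uniform.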
Scenario Description. In this report we show architectural aspects of the e-mobility case study and extend the S0 scenario with adaptation mechanisms for partially competitive and partially cooperative mobility. We concretize the scenario and develop a set of components and ensembles that form the architecture of the e-mobility demonstrator. This section presents the concretization and the architectural high-level view.
Scenario Description. We illustrate performance awareness here on a restricted version of the ASCENS cloud case study. In particular, the scenario we consider is that of a user travelling on a train or a bus who wants to do productive work on a tablet computer, or review travel plans and accommodation. The tablet notices the presence of an offload server machine located in the bus itself and, to save battery, offloads the most computationally intensive tasks to that machine. Later, when the bus approaches its destination, the offload server notifies the tablet that its service will soon become unavailable, and tasks start moving back to the tablet. When the bus enters the terminal, the tablet discovers another offload server, provided by the terminal authority, and moves some of its tasks to the newly found machine.
The challenge is in predicting which deployment scenario will deliver the expected performance, that is, when it is worth offloading parts of the application to a different computer. For our example, we assume that the application has a frontend that cannot be migrated (such as the user interface, which obviously has to stay with the user; Af in our example) and a backend that can be offloaded (typically the computationally intensive tasks; Ab in our example). Figure 2 depicts the adaptation architecture (the notation used is that of component systems, except for interfaces).
[Figure 2: adaptation architecture, spanning the mobile device and a stationary device]
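A minimal sketch of the offload decision follows, assuming a simple cost model (local compute time vs. data transfer plus remote compute time); the speedup and bandwidth parameters are illustrative assumptions.

```python
def offload_worthwhile(local_secs: float,
                       payload_mb: float,
                       bandwidth_mbps: float,
                       remote_speedup: float) -> bool:
    """Offload the backend Ab only if remote compute plus transfer beats
    local compute. The frontend Af always stays on the mobile device."""
    transfer_secs = payload_mb * 8.0 / bandwidth_mbps
    remote_secs = local_secs / remote_speedup
    return transfer_secs + remote_secs < local_secs

# E.g. a 10 s task with 5 MB of state over a 40 Mbit/s link to a 4x faster
# server: 1 s transfer + 2.5 s remote < 10 s local, so offloading pays off.
```

In practice the same inequality must be re-evaluated as the bus moves, since bandwidth and server availability change; that re-evaluation is exactly the performance awareness this scenario targets.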
Scenario Description. For the Smart Supply Chain, the main objective is to improve the efficiency of the transportation of components from the supplier plants to FCA production plants by monitoring parameters related to the conditions of the containers during transportation, in order to react to events that can happen during travel and that can impact the physical condition of the components or the expected delivery date. To reach this goal, the conditions of travelling containers will be monitored using a hardware prototype called “Outdoor LOGistic TrackER” (OLOGER from now on), developed by Cefriel, which will be integrated with the MIDIH platform.
The first round of experiments (ending by M18) will focus on logistic data coming from these devices. The second round of experiments (M27) will extend the data sources, including others such as weather and traffic information, and will require the use of other FIWARE-lane components of the MIDIH platform. In the first round, the data acquisition system, including transmission, management and storage of IoT industrial logistic data (DiM, Data in Motion), will rely on the MindSphere/FIWARE lane:
• Data ingestion: the ingestion of raw data from the field into FIWARE/MindSphere will leverage Data Collector modules (the MIDIH foreground component MASAI); a minimal ingestion sketch follows after Table 4.
• Data processing: the analysis of logistic data (DaR, Data at Rest), in order to produce useful insights about the logistic process, will leverage MindSphere components and ad-hoc logic.
• Data persistence: MongoDB to manage the storage and loading of data.
• Data visualization: output data will be visualized with a Production Logistic Optimization application developed within MIDIH by Cefriel (CC6).
[For more details concerning the Business Scenarios and Objectives, please refer to D5.1]
The background and foreground components in this scenario (first round) are shown in the following Table 4.
COMPONENT | TASKS | LANE | STATUS | CLASSIFICATION
MongoDB | T4.2, T4.3, T4.4 | FIWARE | DONE | BACKGROUND
MASAI, MindSphere | T4.2 | FIWARE | DONE | FOREGROUND
Table 4 - MIDIH components adopted in the CRF Smart Supply Chain scenario
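The following is a minimal sketch of one ingestion step. The actual OLOGER payload is not specified here, so the field names are assumptions; the in-memory list stands in for the MongoDB collection (with pymongo, the final line would be a collection.insert_one call instead).

```python
from datetime import datetime, timezone

# Illustrative shape of a container-condition reading; the real OLOGER
# payload fields are assumptions for this sketch.
EXPECTED_FIELDS = {"container_id", "timestamp", "temperature_c",
                   "humidity_pct", "lat", "lon"}

def ingest(reading: dict, store: list) -> None:
    """Validate one data-in-motion (DiM) reading and persist it as data
    at rest (DaR). `store` stands in for a MongoDB collection."""
    missing = EXPECTED_FIELDS - reading.keys()
    if missing:
        raise ValueError(f"incomplete reading, missing {missing}")
    reading["ingested_at"] = datetime.now(timezone.utc).isoformat()
    store.append(reading)

store = []
ingest({"container_id": "OLOGER-042", "timestamp": "2019-03-01T10:00:00Z",
        "temperature_c": 4.2, "humidity_pct": 61.0, "lat": 45.5, "lon": 9.2},
       store)
```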
Scenario Description. In the Smart Factory scenario, the proposed solution is the development of a system to control and analyse quality-control and process-control data. The aim is to provide the production line with visualization and predictive-maintenance capabilities. For this, MIDIH will develop a solution that gives blue-collar workers and plant supervisors the ability to visualize factory production and prevent failures. In addition to this quality control, a machine and tooling status control module will be developed.
The Smart Factory scenario background consists of FIWARE and APACHE lanes with several components:
• Data ingestion: Data Collector to connect the physical level to FIWARE (i.e. OPC UA, non-OPC UA, etc.).
• Data bus: Orion Context Broker to manage context information, or XXXXX to integrate data streams.
• Data processing: CEP Siddhi, Logstash and TensorFlow to analyse events and create complex events, or to elaborate files with information when services are executed (a sketch of such a rule follows after Table 5).
• Data persistence: Druid to manage which data must be loaded.
• Data visualization: Ruby on Rails and Nginx to present the data.
The background and foreground components in this scenario are shown in the following Table 5.
COMPONENT | TASKS | LANE | STATUS | CLASSIFICATION
Orion Context Broker (OCB) | T4.2, T4.3, T4.4 | FIWARE | DONE | BACKGROUND
HADOOP | T4.4 | APACHE | DONE | BACKGROUND
HIVE | T4.4 | APACHE | DONE | BACKGROUND
IDAS | T4.2, T4.3, T4.4 | FIWARE | DONE | BACKGROUND
MIDIH Connectors | T4.3 | FIWARE | DONE | FOREGROUND
Table 5 - MIDIH components adopted in the NECO Smart Factory scenario
Figure 15 - Data flow for the Smart Factory Scenario (NECO Use Case)
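The following is a minimal sketch of the kind of complex-event rule a CEP engine such as Siddhi evaluates declaratively, assuming a simple sliding-window threshold; the window length, threshold, and event fields are illustrative.

```python
from collections import deque

class ToolTemperatureRule:
    """Emit a complex event when the average of the last `window` readings
    exceeds `threshold`; parameter values here are illustrative."""
    def __init__(self, window: int = 5, threshold: float = 80.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def on_event(self, temperature_c: float):
        self.readings.append(temperature_c)
        if (len(self.readings) == self.readings.maxlen
                and sum(self.readings) / len(self.readings) > self.threshold):
            return {"complex_event": "OVERHEAT",
                    "avg_temperature_c": sum(self.readings) / len(self.readings)}
        return None

rule = ToolTemperatureRule()
for t in [70, 75, 82, 85, 88, 90]:
    alert = rule.on_event(t)
    if alert:
        print(alert)  # fires once the 5-reading average passes 80 °C
```

A rule like this is what would feed the predictive-maintenance module: simple events (readings) in, complex events (overheat warnings) out.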
Scenario Description. The technical approach consists of the creation of a data space as defined in the IDS Reference Architecture Model⁴. The software architecture follows the reference architecture of the Industrial Data Space. The Connector and Broker elements from the Industrial Data Space are used, as well as the IDS Identity Provider (mandatory, and not described in detail as this is a standard component in the Reference Architecture Model⁵). For the customer, the distribution network organizes itself via apps in the Industrial Data Space. Within MIDIH, a distribution planning app is installed in the manufacturer's IDS connector. The service application transfers data between the supply-chain partners, in this case between the manufacturer and the logistics service provider or the transport service provider, respectively. The scenario is described in detail in D5.1; a minimal sketch of the broker-mediated discovery pattern follows after Table 8.
COMPONENT | TASKS | LANE | STATUS | CLASSIFICATION
XXXXX | T4.4 | APACHE | DONE | BACKGROUND
SPARK | T4.4 | APACHE | DONE | BACKGROUND
Table 8 - MIDIH components adopted in the Cross-border experimentation in Steel sector
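The following is a minimal sketch of the Connector/Broker interaction from the IDS Reference Architecture Model, using a toy in-memory broker; the class and method names are ours, not the IDS API.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    """An IDS connector advertising the data resources a partner offers."""
    owner: str        # e.g. manufacturer, logistics service provider
    resources: list

@dataclass
class Broker:
    """Toy stand-in for the IDS Broker: a registry connectors publish to."""
    registry: list = field(default_factory=list)

    def register(self, connector: Connector):
        self.registry.append(connector)

    def find(self, resource: str) -> list:
        return [c for c in self.registry if resource in c.resources]

broker = Broker()
broker.register(Connector("manufacturer", ["distribution-plan"]))
broker.register(Connector("logistics-provider", ["transport-status"]))

# The distribution-planning app in the manufacturer's connector looks up
# who can supply transport status before exchanging data point-to-point.
for c in broker.find("transport-status"):
    print(c.owner)  # -> logistics-provider
```

Note that the broker only mediates discovery; the data itself flows directly between the partners' connectors, which is the core of the IDS design.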