Progress and Integration Sample Clauses

Progress and Integration. MISSCEL has been developed during the third year of the project. In principle, the tool can be used as a standalone Maude file; however, using the Maude Daemon Wrapper from Section 2.2, we have developed an Eclipse plugin wrapping MISSCEL – here called jMISSCEL – which we have integrated into the SDE.
Progress and Integration. The Maude Daemon Wrapper has been developed during the second year of the project. Based on the wrapper, we have integrated the Maude-based tool MISSCEL, presented in Section 2.4, and we plan to similarly integrate another Maude-based tool named MESSI, presented in Section 2.3. The Maude Daemon Wrapper facilitates the interaction of Maude with other tools registered with the SDE by exposing those features via the function executeMaudeCommand(command, commandType, resultType), which takes care of the initialization tasks, executes the Maude command command, and returns the part of the Maude output specified by resultType. A detailed description of Maude and its commands is available in the Maude manual at http://maude.cs.uiuc.edu/maude2-manual.
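The call pattern of the wrapper function can be sketched as follows. This is an illustrative sketch only: the enum names and the echoing stub body are assumptions, not the actual SDE wrapper API.

```java
// Illustrative sketch of the Maude Daemon Wrapper call pattern.
// Enum names and the stub body are assumptions, not the real SDE API.
public class MaudeWrapperSketch {

    // Hypothetical selectors mirroring the parameters described above.
    enum CommandType { REDUCE, REWRITE, SEARCH }
    enum ResultType  { FULL_OUTPUT, RESULT_TERM, STATISTICS }

    // Stand-in for the wrapper: a real implementation would start (or reuse)
    // a Maude daemon process, send it the command, and filter its output
    // according to resultType. Here we only echo the request, so the
    // sketch stays runnable without Maude installed.
    static String executeMaudeCommand(String command,
                                      CommandType commandType,
                                      ResultType resultType) {
        return "[" + commandType + "/" + resultType + "] " + command;
    }

    public static void main(String[] args) {
        String out = executeMaudeCommand("red in NAT : 2 + 2 .",
                                         CommandType.REDUCE,
                                         ResultType.RESULT_TERM);
        System.out.println(out);
    }
}
```

A tool registered with the SDE would thus never talk to the Maude process directly; it only selects which slice of the output it needs via resultType.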
Progress and Integration. The SPL framework has been entirely developed during the second year of the project. The integration of the SPL tool into the SDE platform is a work in progress.
Progress and Integration. In the course of the third year, we have developed a new major version of ARGoS (version 3), publishing 15 beta releases. The development of ARGoS 3 has been a major effort to improve on the already successful previous version. The main goals in this direction were (i) support for adding custom robot types, (ii) integrating ARGoS with other tools, and (iii) generally improving ARGoS as the current state-of-the-art physics-based multi-robot simulator. To address goal (i), we redesigned the architecture of ARGoS from scratch. The final design, based on advanced concepts from C++ templates, allows users to extend any aspect of ARGoS without touching its core. In addition, we rewrote the compilation configuration environment (based on CMake scripts) to make it easier to cross-compile control code from simulation to real robots. For goal (ii), we improved the ARGoS API both in terms of structure and naming, and in terms of documentation. Most importantly, with ARGoS 3 it is now possible to code robot behaviors in the Lua scripting language, in addition to the traditional C++ approach. Further integration activities are ongoing: a code draft is available to interface the MultiVeStA distributed statistical analyzer [SV] with ARGoS. When this work is finished, it will be possible to perform complex statistical analyses, such as distributed statistical model checking of large robot swarms, automatically and with ease. ARGoS is also being integrated with the camera-based robot tracking system installed at IRIDIA. This integration will bring our analysis capabilities to a new level: we will be able to apply complex performance measures to real-robot experiments with the same ease as in simulated experiments.
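The core loop behind such a statistical analysis can be sketched as follows: simulations are run sequentially until the 95% confidence interval of the estimated performance measure becomes narrower than a requested width. The simulation stub and all names below are illustrative assumptions, not part of the MultiVeStA or ARGoS APIs.

```java
import java.util.Random;

// Sketch of the sequential-sampling loop behind statistical model checking:
// keep running simulations until the 95% confidence interval of the
// estimated mean is narrower than the requested half-width `delta`.
// The simulation stub is a placeholder, not an ARGoS call.
public class SamplingSketch {

    // Stand-in for one simulation run returning a performance measure,
    // e.g. the number of items collected by a robot swarm.
    static double simulateOnce(Random rng) {
        return 10.0 + rng.nextGaussian();
    }

    // Returns the estimated mean once the CI half-width drops below delta.
    static double estimate(double delta, long seed) {
        Random rng = new Random(seed);
        double sum = 0, sumSq = 0, halfWidth;
        int n = 0;
        do {
            double x = simulateOnce(rng);
            n++; sum += x; sumSq += x * x;
            double mean = sum / n;
            double var = n > 1
                ? Math.max(0, (sumSq - n * mean * mean) / (n - 1))
                : Double.MAX_VALUE;
            halfWidth = 1.96 * Math.sqrt(var / n); // normal approximation
        } while (n < 2 || halfWidth > delta);
        return sum / n;
    }

    public static void main(String[] args) {
        System.out.printf("estimated mean: %.2f%n", estimate(0.1, 42));
    }
}
```

A distributed analyzer parallelizes exactly this loop by farming the independent simulation runs out to many machines and pooling the samples.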
Progress and Integration. During the last year, the specification of the science cloud platform has been finalized and is presented in [vRA+12]. A first implementation, aimed at providing baseline adaptivity functionality in the form of a failover system, has been created. This system is based on OSGi and is thus able to dynamically install, start, stop, and uninstall applications in the form of bundles. Moreover, the system itself is based on OSGi bundle class loading and the OSGi service-oriented component infrastructure, and can thus be easily used for testing different implementations of adaptivity and self-awareness. Since the science cloud platform is still under development, it is not yet integrated into the SDE. It is, however, envisioned that an SDE facade can be provided for both basic lifecycle functionality (starting, stopping) and runtime monitoring and control.
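The dynamic lifecycle that OSGi provides to the platform can be modeled in miniature as follows. The class, method, and state names are illustrative, not the org.osgi.framework API, which manages real bundle JARs and class loading.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the dynamic install/start/stop/uninstall lifecycle that
// OSGi gives the platform. Names are illustrative, not the OSGi API.
public class BundleLifecycleSketch {

    enum State { INSTALLED, ACTIVE, RESOLVED }

    private final Map<String, State> bundles = new HashMap<>();

    void install(String app)   { bundles.put(app, State.INSTALLED); }
    void start(String app)     { bundles.put(app, State.ACTIVE); }
    void stop(String app)      { bundles.put(app, State.RESOLVED); }
    void uninstall(String app) { bundles.remove(app); }
    State stateOf(String app)  { return bundles.get(app); } // null if absent

    public static void main(String[] args) {
        BundleLifecycleSketch platform = new BundleLifecycleSketch();
        platform.install("science-app");
        platform.start("science-app");
        System.out.println(platform.stateOf("science-app"));
        platform.stop("science-app");
        platform.uninstall("science-app");
    }
}
```

The envisioned SDE facade would expose essentially these lifecycle operations, plus monitoring, to the other ASCENS tools.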
Progress and Integration. During year 3, the specification of the science cloud platform, together with our first prototype, has been used to drive an updated implementation. This implementation builds on existing state-of-the-art network algorithms and protocols to create an OSGi-based hybrid cloud platform which, as detailed in deliverable [vHP+13], combines the domains of voluntary, peer-to-peer, and cloud computing. Lessons learned from the first implementation presented last year have been carried over into the new implementation, especially the use of OSGi and its ability to dynamically install and use application code. The entire network layer, however, has been replaced; we now use the peer-to-peer substrate Pastry [RD01a] and accompanying protocols for the communication and data layers, including the DHT Past [RD01b] and the publish/subscribe mechanism Scribe [CDKR02]. On top of these layers, a variant of the ContractNET [Fou13] protocol has been used to implement application failover. The science cloud platform will be finalized in the last year and integrated into the SDE.
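The ContractNET-style failover negotiation can be sketched as follows: a node announces the task of re-hosting a failed application, the other nodes reply with bids (here, their current load), and the task is awarded to the best bidder. All names are illustrative assumptions; the real platform runs this exchange over Scribe/Pastry messages.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a ContractNET-style award step for application failover:
// nodes bid with their current load, the least-loaded node wins the task
// of restarting the failed application. Names are illustrative.
public class ContractNetSketch {

    static class Bid {
        final String node;
        final double load; // fraction of capacity in use, 0.0 .. 1.0
        Bid(String node, double load) { this.node = node; this.load = load; }
    }

    // Award the re-hosting task to the least-loaded bidder, if any.
    static String award(List<Bid> bids) {
        Bid best = null;
        for (Bid b : bids) {
            if (best == null || b.load < best.load) best = b;
        }
        return best == null ? null : best.node;
    }

    public static void main(String[] args) {
        List<Bid> bids = Arrays.asList(
            new Bid("node-a", 0.7),
            new Bid("node-b", 0.2),
            new Bid("node-c", 0.5));
        System.out.println("restart app on: " + award(bids));
    }
}
```

In the actual protocol the announcement and the bids are messages; the award step above is the manager's local decision once the bidding deadline expires.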
Progress and Integration. Work on GMC in the second project year included extensions for C++ language features and support for custom listeners. Registered custom listeners are notified during the state space exploration whenever a potentially interesting action occurs, such as a method call, the execution of an instruction, or backtracking. This extension distinguishes GMC for the purposes of the ASCENS project, where it can check ensemble-related properties. Several bug fixes and code optimizations have also been implemented. As the development of GMC progresses, the integrated development platform will allow using GMC on ARGoS controllers, verifying properties either encoded as assertions in the code or specified externally.
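The listener mechanism described above follows the usual observer pattern: the explorer fires callbacks at interesting points of the search. The interface and event names below are an illustrative sketch, not the actual GMC API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the custom-listener idea: the model checker notifies registered
// listeners during state-space exploration. Interface and event names are
// assumptions for illustration, not the GMC API.
public class ListenerSketch {

    interface ExplorationListener {
        void onMethodCall(String method); // a method call was explored
        void onBacktrack(int depth);      // the search backtracked
    }

    private final List<ExplorationListener> listeners = new ArrayList<>();

    void register(ExplorationListener l) { listeners.add(l); }

    // Stubbed exploration driver: fires one call event and one backtrack
    // event, standing in for a real depth-first state-space search.
    void explore() {
        for (ExplorationListener l : listeners) l.onMethodCall("robot.step()");
        for (ExplorationListener l : listeners) l.onBacktrack(1);
    }

    public static void main(String[] args) {
        ListenerSketch mc = new ListenerSketch();
        mc.register(new ExplorationListener() {
            public void onMethodCall(String m) { System.out.println("call: " + m); }
            public void onBacktrack(int d) { System.out.println("backtrack to depth " + d); }
        });
        mc.explore();
    }
}
```

An ensemble-property checker would be one such listener, accumulating information across the explored states rather than printing them.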
Progress and Integration. MESSI has been developed during the third year of the project. It is currently not integrated into the SDE, but eventual integration with the help of the Maude Daemon Wrapper from Section 2.2 is planned. MESSI currently comes as a set of Maude files to be imported by the specifications of self-assembly strategies provided by users.
Progress and Integration. In the third year of the project, we have continued the development of jRESP by focusing on two main aspects: a new implementation of the SCEL group-oriented communication and the integration of external reasoners for supporting adaptivity. To provide more efficient and reliable support for group-oriented interactions, we have included specific classes that realize these interactions in terms of the P2P and multicast protocols provided by Scribe [CDKR03], a generic, scalable and efficient system for group communication and notification, and FreePastry [RD01a], an open-source implementation of the Pastry peer-to-peer substrate on which Scribe runs. Moreover, to support the integration of external reasoners, the internal knowledge-handling mechanisms have been rearranged. Processes executed at a given component can now transparently interact with external reasoners. In the next year, we plan to design a high-level programming language (HL-SCEL) that simplifies programming tasks by enriching SCEL with standard programming constructs (e.g. control-flow constructs such as while or if-then-else, or structured data types). We will also develop an SDK providing a compiler that generates jRESP code from HL-SCEL programs. This SDK will be integrated into the SDE.
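The reasoner integration described above can be sketched as a pluggable decision point in the component's knowledge handling: when a process must choose among retrieved knowledge items, the choice is delegated to an external reasoner behind a small interface. The names below are illustrative assumptions, not the jRESP API.

```java
import java.util.List;

// Sketch of a pluggable external reasoner in a component's knowledge
// handling: the component gathers candidate knowledge items and delegates
// the choice. Names are illustrative, not the jRESP API.
public class ReasonerSketch {

    interface Reasoner {
        String choose(List<String> options);
    }

    private final Reasoner reasoner;

    ReasonerSketch(Reasoner reasoner) { this.reasoner = reasoner; }

    // The process retrieves candidate items from the knowledge repository
    // (here passed in directly) and lets the reasoner pick one.
    String selectAction(List<String> knowledgeItems) {
        return reasoner.choose(knowledgeItems);
    }

    public static void main(String[] args) {
        // A trivial stand-in "reasoner" that prefers the shortest plan.
        ReasonerSketch component = new ReasonerSketch(opts ->
            opts.stream()
                .min((a, b) -> a.length() - b.length())
                .orElse(null));
        System.out.println(component.selectAction(
            List.of("go-left-then-up", "go-right")));
    }
}
```

Because the process only sees the Reasoner interface, swapping a trivial heuristic for a full external planner requires no change to the process code, which is the transparency property mentioned above.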