Software Architecture Sample Clauses

Software Architecture.
4.6 Software Architecture
4.6.1 Containers
4.6.2 Components - micro-ROS Stack
4.6.3 Components - micro-ROS Agent
4.7 External Interfaces
4.7.1 Node interface
4.7.2 Publisher and subscribers
4.7.3 Service, server and client
4.7.4 Parameters manager
4.7.5 Graph manager
4.7.6 Timers and Clocks interfaces
4.7.7 Executor
4.7.8 Lifecycle and system modes
4.7.9 Logging utilities
4.7.10 Agent core
4.7.11 Parameter server
4.7.12 Graph server
4.8 Code
4.8.1 micro-ROS Stack
4.8.2 micro-ROS Agent
4.9 Infrastructure Architecture
4.9.1 Infrastructure
4.10 Deployment
4.10.1 Deployment
4.10.2 Build system
4.10.3 Profiles
4.10.4 Test system
5 Appendix
5.1 A1 Related documents
References

1 Acronyms

Acronym  Explanation
CDR      Common Data Representation
DDS      Data Distribution Service
FSM      Finite-State Machine
HW       Hardware
IDL      Interface Definition Language
IRQ      Interrupt Request
MCU      Microcontroller Unit
MPU      Microprocessor Unit
MTU      Maximum Transmission Unit
NTP      Network Time Protocol
OMG      Object Management Group
OS       Operating System
OSS      Open Source Software
QoS      Quality of Service
PTP      Precision Time Protocol
ROS      Robot Operating System
RPC      Remote Procedure Call
RTOS     Real Time Operating System
RTPS     Real-Time Publish-Subscribe
XRCE     Extremely Resource Constrained Environments
Software Architecture. All DSP software conforms to the VP Open software architecture. All DSP modules work on 64-sample blocks of data (8 ms). In each 8 ms interval, the DSP performs the following functions:
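Since 64 samples every 8 ms corresponds to an 8 kHz sample rate, the block cadence can be illustrated with the minimal Python sketch below; the function names and the pass-through processing chain are hypothetical placeholders, not part of the VP Open architecture:

```python
import numpy as np

SAMPLE_RATE_HZ = 8000                         # 64 samples / 8 ms => 8 kHz
BLOCK_SIZE = 64                               # samples per processing block
BLOCK_PERIOD_S = BLOCK_SIZE / SAMPLE_RATE_HZ  # 0.008 s per block

def process_block(block: np.ndarray) -> np.ndarray:
    """Placeholder for the per-block DSP functions the clause refers to
    (e.g. filtering, gain); the real module chain is defined elsewhere."""
    return block  # pass-through for illustration only

def run(signal: np.ndarray) -> np.ndarray:
    """Consume the input signal in fixed 64-sample (8 ms) blocks."""
    out = []
    for start in range(0, len(signal) - BLOCK_SIZE + 1, BLOCK_SIZE):
        block = signal[start:start + BLOCK_SIZE]
        out.append(process_block(block))      # one invocation per 8 ms interval
    return np.concatenate(out) if out else np.array([])

if __name__ == "__main__":
    one_second = np.zeros(SAMPLE_RATE_HZ)    # 1 s of audio = 125 blocks
    assert len(run(one_second)) == SAMPLE_RATE_HZ
```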
Software Architecture. The software consists of a number of threads running on the two processors, each managing processes with specific performance requirements. Not all of the software threads have been implemented within this project. On the LINK processor [2 in figure 12], a scheduler manages the allocation of processor time and the priorities of each of these processes; this scheduler interfaces to a K-API [Kernel API]. On the APP processor [1 in figure 12], an OS will be implemented. This OS, an appropriate wireless sensor operating system, remains to be selected; candidate operating systems include RIOT, Contiki, TinyOS and Linux. The RIOT operating system is a promising recent open-source European initiative focused on networks of embedded nodes that require low power and limited computational resources. Short description of the software components:
 At the networking layer: a BTLE link layer interfacing to an HCI API. HCI is also accessible as HCI commands on the SPI interface. The BTLE link layer is compliant with Bluetooth specifications V4.0, V4.1 and V4.2.
 An 802.15.4 MAC layer interfacing to a proprietary 802.15.4 API, also accessible on the platform SPI interface. The MAC layer is compliant with the 802.15.4 standard. The ZIGBEE and BTLE threads are based on external solutions.
 An SPI external interface.
 An SPI-based debugger that can survive deep-sleep mode. The problem with JTAG debuggers is that they require the JTAG interface to be powered in order to run; this is an issue for this device, whose goal is to spend most of its time in a sleep state in which the JTAG interface is not powered. The issue is resolved by implementing the debugger over SPI.
 An application layer on the application processor, which communicates with the LINK processor over a COMs API.
The following table summarizes the software memory [program and data] sizes.

Module          Status            Program memory
Link Layer      Developed         16 KBYTE
802.15.4 MAC    50% developed     16 KBYTE
SCHEDULER       Developed         15 KBYTE
BTLE HOST       External          38 KBYTE
BTLE Profiles   External Partner
ZIGBEE          External          55 KBYTE
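As an illustration only (the actual scheduler is proprietary and sits behind the K-API), the following minimal Python sketch shows priority-based dispatch over a set of processes; all task names and the cooperative run-to-completion model are assumptions for the sketch:

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Task:
    priority: int                                   # lower value = higher priority
    name: str = field(compare=False)
    run: Callable[[], None] = field(compare=False)

class Scheduler:
    """Toy cooperative scheduler: always dispatches the highest-priority
    ready task, mirroring the priority management described above."""
    def __init__(self):
        self._ready: list[Task] = []

    def submit(self, task: Task) -> None:
        heapq.heappush(self._ready, task)

    def run_once(self) -> None:
        if self._ready:
            task = heapq.heappop(self._ready)
            task.run()                              # runs to completion (cooperative)

sched = Scheduler()
sched.submit(Task(2, "802.15.4 MAC poll", lambda: print("MAC serviced")))
sched.submit(Task(1, "BTLE link event", lambda: print("BTLE serviced")))
sched.run_once()   # dispatches the BTLE link event first (priority 1)
```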
Software Architecture. Following the comments from the first year technical review, this appendix has been added to describe the software modules that will be developed in the prototype of the COSIGN control layer. The objective is to highlight the components that will be implemented in the OpenDaylight framework, either as new OpenDaylight plugins developed from scratch or as extended versions of already existing OpenDaylight plugins. In the latter case, this section provides details about the modifications and enhancements that will be needed to meet COSIGN requirements. Moreover, some components related to the composition and delivery of virtual optical slices will be implemented as OpenVirteX plugins and will require the extension of the OpenVirteX platform to support optical resources, as documented in the next tables. The outcomes documented in this section are the result of the software design activities carried out in T3.1, T3.2 and T3.3 from M13 to M15. These activities have taken as input the functional architecture documented in the previous sections of this document and, for each functional component defined in Section 4.1, have defined a set of software modules which will implement the associated control plane functions (functions derived from the requirements, as detailed in Table 2). The list of the resulting software modules is summarized in Table 5, which also specifies the prototype release(s) where each module will be included (i.e., the preliminary release in D3.3 “SDN controller for DC optical network virtualization and provisioning software prototypes: Preliminary release” at M24 and/or the final release in D3.4 “SDN controller for DC optical network virtualization and provisioning software prototypes: Final release” at M30). The following tables describe each software module in detail, defining the software architecture which marks milestone MS14 (Intra-DC control plane high-level architecture), planned at M15.
Software Architecture. Control software runs on the central station on the surface of the construction site.
Figure 24: Detail of the robot hardware modules.
Figure 25: Detail of the rover robot software architecture.
Software Architecture.
▪ Back-end - SQL Server 7.0 database
▪ Front-end - Microsoft Access
Software Architecture. The Provider’s responsibilities include the following:
Software Architecture. The content delivery software consists of the following main components:
 FIBRE control framework, to provide the slice of OpenFlow-enabled devices (packet, optical), the media server, and the virtual machines that host the software.
 Media solution: FOGO 4k Player and Streamer. FOGO is a proprietary solution and will be available until the end of the FIBRE project.
 POX controller: the SDN controller which runs on top of the experimenter slice. It uses the OF interface to control the slice resources. The controller application is hosted on the Python DJANGO framework, which contains an application called POX_CW that houses the content delivery software and interfaces. The main modules of the controller are as follows.
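For illustration only, a skeleton POX component is sketched below; the flow logic is hypothetical and does not reproduce the actual POX_CW modules:

```python
# Minimal POX component sketch (placed under pox/ and launched via ./pox.py <name>).
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_ConnectionUp(event):
    # Install a catch-all flow that floods packets, just to show the
    # flow-mod path a controller application uses on its slice.
    msg = of.ofp_flow_mod()
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)
    log.info("Switch %s connected, default flow installed", event.dpid)

def launch():
    # POX entry point: register the handler on the OpenFlow event source.
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```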
Software Architecture. Overview. The basic software architecture is shown in the diagram below:
Software Architecture. Various High-Level Services are supplied as part of RDS, or can be written by third parties. These services typically rely on abstractions provided by Low-Level Services to communicate with devices; however, some High-Level Services operate on their own. The Low-Level Services must be provided by the maker of a MARK robot, or written by users building their own robot. These services communicate with the Robot IO Controller. They are similar to a Hardware Abstraction Layer and present defined interfaces (referred to as Contracts in RDS) to the High-Level Services. Services for the Kinect and several other devices are included with RDS. CCR (Concurrency and Coordination Runtime) and DSS (Decentralized Software Services) are components of Robotics Developer Studio. All services run within the context of a DSS Node, and multiple DSS Nodes can communicate with each other over a network. RDS is based on .NET, which is a prerequisite, and .NET in turn runs on Windows. At the bottom level is the hardware. Some devices, such as the Wireless Xbox Controller, are directly supported by Windows drivers. The Robot IO Controller board contains firmware that communicates with Windows using a serial port, USB, a network connection, or some other means. Taking a different view, the following diagram shows how a combination of specific High-Level and Low-Level services can be used to communicate with the hardware on the robot. These services are “orchestrated” by means of manifests that list the required services and the partnerships amongst them (represented by arrows).
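RDS services themselves are .NET code; purely as a language-neutral sketch of the layering just described, the Python fragment below models a high-level service that depends on a low-level service only through a defined contract. All class and method names are hypothetical, not RDS APIs:

```python
from abc import ABC, abstractmethod

class DriveContract(ABC):
    """Defined interface (a 'Contract' in RDS terms) that any low-level
    drive service must present to high-level services."""
    @abstractmethod
    def set_drive_power(self, left: float, right: float) -> None: ...

class RobotIoDrive(DriveContract):
    """Low-level service: would talk to the Robot IO Controller (stubbed here)."""
    def set_drive_power(self, left: float, right: float) -> None:
        print(f"firmware command: left={left:.2f} right={right:.2f}")

class Wanderer:
    """High-level service: depends only on the contract, not the hardware."""
    def __init__(self, drive: DriveContract):   # partnership via the contract
        self._drive = drive

    def step(self) -> None:
        self._drive.set_drive_power(0.5, 0.5)   # e.g. move forward at half power

# In this sketch, the 'manifest' is simply the wiring of partners at startup.
Wanderer(RobotIoDrive()).step()
```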