Common use of Motivating Example Clause in Contracts

Motivating Example. Consider two autonomous vehicles at a T-junction where only one car can pass at a time. Each vehicle can either Pass or Wait. If both vehicles decide to pass through the T-junction at the same time, they will collide. On the other hand, if both decide to wait, they may enter a deadlock state in which they wait indefinitely. Ideally, we would like one vehicle to wait and the other to pass, so that they can traverse the T-junction one at a time. We model the preference between system behaviors by assigning a reward (or penalty) to each combination of actions of the two vehicles, (u1, u2), where u1 is the action of vehicle 1 (M1) and u2 is that of vehicle 2 (M2). The objectives of M1 and M2 are to maximize the rewards associated with their actions, shown in Tab. I and II, respectively. Rewards are as low as −10 to penalize collision and take on a maximum value when a vehicle manages to traverse the junction.

TABLE I: M1 Rewards (rows: M1's action u1; columns: M2's action u2)
        Wait   Pass
Wait     −1     −1
Pass      3    −10

TABLE II: M2 Rewards (rows: M1's action u1; columns: M2's action u2)
        Wait   Pass
Wait     −1      4
Pass     −1    −10

From the perspective of M1, the behavior that results in the optimal reward is (u1, u2) = (Pass, Wait). However, if M1 is not aware of the next action or associated reward of M2, it will instead opt for Wait, since it must act conservatively to guarantee safety (i.e., no collision) for all possible actions of M2. If M2 selects its action in the same manner as M1 (i.e., maximizes its own reward), the two vehicles will enter the deadlock state.
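The conservative reasoning above amounts to a maximin choice over the reward tables: each vehicle selects the action whose worst-case reward, over all possible actions of the other vehicle, is highest. A minimal sketch (the dictionary encoding and helper names are ours, not part of the A/G framework):

```python
ACTIONS = ["Wait", "Pass"]

# Rewards keyed by the joint action (u1, u2), transcribed from Tab. I and II
R1 = {("Wait", "Wait"): -1, ("Wait", "Pass"): -1,
      ("Pass", "Wait"): 3, ("Pass", "Pass"): -10}
R2 = {("Wait", "Wait"): -1, ("Wait", "Pass"): 4,
      ("Pass", "Wait"): -1, ("Pass", "Pass"): -10}

def maximin_u1():
    # M1 picks the action whose worst-case reward over M2's actions is highest
    return max(ACTIONS, key=lambda u1: min(R1[(u1, u2)] for u2 in ACTIONS))

def maximin_u2():
    # M2 reasons symmetrically over M1's possible actions
    return max(ACTIONS, key=lambda u2: min(R2[(u1, u2)] for u1 in ACTIONS))

print(maximin_u1(), maximin_u2())  # both vehicles choose Wait: the deadlock state
```

For M1, Wait guarantees −1 while Pass risks −10, so Wait wins; the same holds for M2, which is exactly how the two vehicles end up deadlocked.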
On the other hand, if the vehicles were able to communicate their actions and rewards to each other, they may be able to avoid deadlock by making decisions in a cooperative manner. We would like to formally reason about the overall behavior of this system using A/G contracts, where each vehicle is modeled as a component implementing a contract, and the composition of M1 and M2 represents the end-to-end system. To do so, we seek a framework that (1) captures the behaviors of a component while optimizing its objective, and (2) supports a composition mechanism that accounts for both cooperative and non-cooperative interaction. In the following, we first provide an overview of the current A/G contract framework and then describe how we extend it to address these issues.
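One plausible way to make the cooperative case concrete is to pick the joint action that maximizes the combined reward of both vehicles. This is only an illustrative criterion, not one prescribed by the text, but it shows how communication can break the deadlock:

```python
ACTIONS = ["Wait", "Pass"]

# Same reward tables as in Tab. I and II
R1 = {("Wait", "Wait"): -1, ("Wait", "Pass"): -1,
      ("Pass", "Wait"): 3, ("Pass", "Pass"): -10}
R2 = {("Wait", "Wait"): -1, ("Wait", "Pass"): 4,
      ("Pass", "Wait"): -1, ("Pass", "Pass"): -10}

# Enumerate all joint actions and pick the one with the highest total reward
joint = max(((u1, u2) for u1 in ACTIONS for u2 in ACTIONS),
            key=lambda a: R1[a] + R2[a])
print(joint)  # ('Wait', 'Pass'): M1 waits, M2 passes, and deadlock is avoided
```

Under this criterion the vehicles traverse the junction one at a time, which is precisely the behavior the non-cooperative maximin choice fails to reach.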

Appears in 2 contracts

Sources: Assume Guarantee Contracts
