{"component": "clause", "props": {"groups": [{"snippet_links": [{"key": "section-8", "type": "clause", "offset": [128, 137]}, {"key": "the-options", "type": "clause", "offset": [197, 208]}], "size": 3, "snippet": "We now describe the basic step of the reconciliation mechanism, i.e., the reconcili- ation between two sites on a given object. Section 8 describes when to invoke the reconciliation mechanism, and the options that exist in its use. The basic step is as follows.", "samples": [{"hash": "4dI2aycWzLM", "uri": "/contracts/4dI2aycWzLM#algorithm", "label": "Independent Updates and Incremental Agreement in Replicated Databases", "score": 19.0, "published": true}, {"hash": "1Q15TJ4Zphl", "uri": "/contracts/1Q15TJ4Zphl#algorithm", "label": "Independent Updates and Incremental Agreement in Replicated Databases", "score": 19.0, "published": true}], "hash": "5ad99834892798389b011a1a655da92e", "id": 3}, {"snippet_links": [{"key": "our-agreement", "type": "definition", "offset": [81, 94]}, {"key": "byzantine-agreement", "type": "clause", "offset": [139, 158]}, {"key": "unique-identifiers", "type": "definition", "offset": [190, 208]}, {"key": "loss-of", "type": "definition", "offset": [352, 359]}, {"key": "other-processes", "type": "clause", "offset": [476, 491]}, {"key": "the-recipient", "type": "definition", "offset": [578, 591]}, {"key": "the-group", "type": "clause", "offset": [904, 913]}, {"key": "the-fact", "type": "clause", "offset": [1052, 1060]}], "size": 7, "snippet": "Next, we present an algorithm that solves Byzantine agree- ment assuming \u2113 > 3t. Our agreement algorithm is generic: given any synchronous Byzantine agreement algorithm for \u2113 processes with unique identifiers (such algorithms exist when \u2113 = n > 3t, e.g., [13]), we transform it into an algorithm for n processes and \u2113 identifiers, where n \u2265 \u2113. 
Without loss of generality, we assume that the algorithm to be transformed uses broadcasts: a process sends the same message to all other processes. (If a process wishes to send a message only to specific recipients, it could include the recipient\u2019s identifier in the broadcasted message.) In our transformation, we divide processes into groups according to their identifiers. Each group simulates a single process. If all processes within a group are correct, then they can reach agreement and cooperatively simulate a single process. If any process in the group is Byzantine, we allow the simulated process of that group to behave in a Byzantine manner. The correctness of our simulation relies on the fact that more than two-thirds of the simulated processes will be correct (since \u2113 > 3t), which is enough to achieve agreement.", "samples": [{"hash": "9MqwUrVh8uP", "uri": "/contracts/9MqwUrVh8uP#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 26.0732375086, "published": true}, {"hash": "ghPjMfvz1Hp", "uri": "/contracts/ghPjMfvz1Hp#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 26.0321697467, "published": true}, {"hash": "jyFoCxjvvUb", "uri": "/contracts/jyFoCxjvvUb#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 25.6187542779, "published": true}], "hash": "96a14eae28cf06a755c36a3e69a6d411", "id": 1}, {"snippet_links": [{"key": "based-on", "type": "clause", "offset": [137, 145]}, {"key": "for-example", "type": "definition", "offset": [513, 524]}, {"key": "the-recipients", "type": "clause", "offset": [695, 709]}], "size": 7, "snippet": "We now describe an algorithm that solves Byzantine agreement in the basic partially synchronous model when \u2113 > (n+3t)/2. 
Our algorithm is based on the algorithm given by \u2587\u2587\u2587\u2587\u2587, \u2587\u2587\u2587\u2587\u2587 and \u2587\u2587\u2587\u2587\u2587\u2587\u2587\u2587\u2587\u2587 [9] for the classical case where n = \u2113, with several novel features. Generalizing the algorithm is not straightforward. Some of the difficulty stems from the following scenario. Suppose two correct processes share an identifier and follow the traditional algorithm of [9]. They could send very different messages (for example, if they have different input values), but recipients of those messages would have no way of telling apart the messages of the two correct senders, so it could appear to the recipients as if a single Byzantine process was sending out contradictory information. Thus, the algorithm has to guard against inconsistent information coming from correct homonym processes as well as malicious messages sent by the Byzantine processes.", "samples": [{"hash": "9MqwUrVh8uP", "uri": "/contracts/9MqwUrVh8uP#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 26.0732375086, "published": true}, {"hash": "ghPjMfvz1Hp", "uri": "/contracts/ghPjMfvz1Hp#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 26.0321697467, "published": true}, {"hash": "jyFoCxjvvUb", "uri": "/contracts/jyFoCxjvvUb#algorithm", "label": "Byzantine Agreement With Homonyms", "score": 25.6187542779, "published": true}], "hash": "29a50441e3f4ed4f777c8952c51d9253", "id": 2}, {"snippet_links": [{"key": "prior-to", "type": "definition", "offset": [0, 8]}, {"key": "institutional-review-board", "type": "definition", "offset": [56, 82]}, {"key": "in-writing", "type": "clause", "offset": [109, 119]}, {"key": "market-approval", "type": "definition", "offset": [183, 198]}, {"key": "in-the-event", "type": "clause", "offset": [292, 304]}, {"key": "the-fda", "type": "clause", "offset": [310, 317]}, {"key": "fda-submission", "type": "clause", "offset": [355, 369]}, {"key": 
"negotiate-in-good-faith", "type": "definition", "offset": [394, 417]}, {"key": "to-determine", "type": "definition", "offset": [418, 430]}, {"key": "to-acquire", "type": "definition", "offset": [435, 445]}], "size": 2, "snippet": "Prior to GEMS commencing --------- discussions with the Institutional Review Board (\"IRB\"), R2 shall confirm in writing to GEMS that the R2 Product algorithm can process the PMA (pre-market approval) cases which are done in feasibility format and that R2 can use them for the PMA submission. In the event that the FDA does not accept these cases for R2's FDA submission, then GEMS and R2 shall negotiate in good faith to determine how to acquire additional cases as outlined in Section 2.8.3.", "samples": [{"hash": "4E0SW2SMTHH", "uri": "/contracts/4E0SW2SMTHH#algorithm", "label": "Distributor Agreement (R2 Technology Inc)", "score": 18.0, "published": true}, {"hash": "2dIgvdY2ZGx", "uri": "/contracts/2dIgvdY2ZGx#algorithm", "label": "Distributor Agreement (R2 Technology Inc)", "score": 18.0, "published": true}], "hash": "37dc75acd29011a767f771b923451bda", "id": 4}, {"snippet_links": [{"key": "the-shared", "type": "clause", "offset": [0, 10]}, {"key": "number-of", "type": "clause", "offset": [342, 351]}, {"key": "the-conflict", "type": "definition", "offset": [556, 568]}, {"key": "the-value", "type": "clause", "offset": [585, 594]}, {"key": "replace-the", "type": "clause", "offset": [640, 651]}, {"key": "the-definitions-of", "type": "definition", "offset": [779, 797]}, {"key": "section-41", "type": "clause", "offset": [798, 809]}, {"key": "take-into-account", "type": "definition", "offset": [931, 948]}, {"key": "the-fact", "type": "clause", "offset": [949, 957]}, {"key": "field-of", "type": "definition", "offset": [972, 980]}, {"key": "a-non", "type": "clause", "offset": [1001, 1006]}, {"key": "in-section-4", "type": "clause", "offset": [1144, 1156]}, {"key": "no-conflict", "type": "clause", "offset": [1350, 1361]}, {"key": "figure-4", 
"type": "definition", "offset": [1928, 1936]}, {"key": "modifications-to-the", "type": "clause", "offset": [1985, 2005]}, {"key": "new-definition", "type": "clause", "offset": [2044, 2058]}, {"key": "the-original", "type": "definition", "offset": [2092, 2104]}, {"key": "figure-1", "type": "definition", "offset": [2118, 2126]}, {"key": "according-to", "type": "definition", "offset": [2389, 2401]}, {"key": "definition-of-a", "type": "clause", "offset": [2737, 2752]}, {"key": "the-case", "type": "definition", "offset": [2798, 2806]}], "size": 2, "snippet": "The shared memory Under x-obstruction-freedom, up to x processes may concurrently progress without preventing termination. As a consequence, in comparison to obstruction-freedom, solving k-set agreement in this setting requires to deal with more contention scenarios. To cope with these additional interleavings of processes, we increase the number of entries in REG . More precisely, REG now contains m = (n \u2212 k + x) entries. Ordering the quadruplets In the base algorithm, the four fields of some quadruplet X are the round number X.rd, the level X.\u2113v\u2113, the conflict flag \u2587.\u2587\u2587\u2113, and the value X.val. Coping with x-concurrency requires to replace the last field, which was initially a singleton, with a set of values. Hereafter, this new field is denoted X.valset. In line with the definitions of Section 4.1, let \u201c>\u201d denote the lexicographical order over the set of quadruplets, where the relation \u2290 is generalized as follows to take into account the fact that the last field of a quadruplet is now a non-empty set of values: X \u2290 Y d=ef (X > Y ) \u2227 [(X.rd > Y.rd) \u2228 (\u2587.\u2587\u2587 \u2113) \u2228 (X.valset \u2287 Y.valset)]. In comparison to the definition appearing in Section 4, the sole new case where the ordering X \u2290 Y holds is (X > Y ) \u2227 (X.valset \u2287 Y.valset). 
This case captures the fact that, as long as at most x input values are competing at some round, there is no conflict. If such a situation arises, we simply construct a quadruplet that aggregates the different input values. function sup(T) is % T is a set of quadruplets whose last field is now a set of values % (S1) let (r, \u2113eve\u2113, conf\u2113ict, valset) be max(T); % lexicographical order % (S2) let tuples(T) be {X | X \u2208 T \u2227 X.rd = r}; (S3) let values(T) be {v | X \u2208 T \u2227 v \u2208 X.valset}; (S4) let conf\u2113ict(T) be conf\u2113ict \u2228 |tuples(T)| > x \u2228 |values(T)| > x; (S5) let valset be the (at most) x greatest values in values(T); (S6) return (r, \u2113eve\u2113, conf\u2113ict(T), valset). Figure 4: Function sup() suited to x-obstruction-freedom. Modifications to the sup() function Figure 4 describes the new definition of function sup(). Compared with the original algorithm in Figure 1, it introduces a few modifications (underlined and in blue). Those are detailed below. \u2022 Line S1. As pointed out previously, the last field of a quadruplet is now a set of values. The lexicographical ordering over such sets is as follows: sets are ordered first according to their size, and second using some arbitrary order over their elements. By abuse of notation, this order is also written <. For instance, we have {10, 8, 2} < {10, 4, 3} and {10, 4, 3} < {15, 12}. It is assumed that for any set of values S, S < \u22a5 holds. \u2022 Line S2. This line does not change. \u2022 Lines S3 and S4. This variant extends the definition of a conflict. 
Namely, it considers as a conflict the case where more than x distinct tuples are competing at round r, and also the additional case where more than x distinct values are competing at round r.", "samples": [{"hash": "bvmd0TvxMDe", "uri": "/contracts/bvmd0TvxMDe#algorithm", "label": "Anonymous Obstruction Free (N,k) Set Agreement", "score": 22.6098562628, "published": true}], "hash": "98945eb362a179d0c41da2256727c915", "id": 5}, {"snippet_links": [{"key": "case-study", "type": "definition", "offset": [19, 29]}, {"key": "based-on", "type": "clause", "offset": [34, 42]}, {"key": "other-agents", "type": "definition", "offset": [429, 441]}, {"key": "collision-avoidance", "type": "definition", "offset": [551, 570]}, {"key": "delivery-area", "type": "definition", "offset": [641, 654]}, {"key": "change-to", "type": "definition", "offset": [894, 903]}, {"key": "the-current", "type": "clause", "offset": [973, 984]}, {"key": "change-in", "type": "clause", "offset": [1046, 1055]}, {"key": "the-agent-will", "type": "clause", "offset": [1519, 1533]}, {"key": "an-agent", "type": "clause", "offset": [1780, 1788]}, {"key": "the-items", "type": "definition", "offset": [1929, 1938]}, {"key": "off-the-ground", "type": "definition", "offset": [1984, 1998]}, {"key": "by-the-agents", "type": "clause", "offset": [2274, 2287]}, {"key": "unable-to-deliver", "type": "clause", "offset": [2731, 2748]}, {"key": "new-items", "type": "clause", "offset": [2765, 2774]}, {"key": "the-delivery", "type": "clause", "offset": [2861, 2873]}, {"key": "time-value", "type": "definition", "offset": [3065, 3075]}, {"key": "the-performance", "type": "clause", "offset": [3653, 3668]}, {"key": "for-performance", "type": "clause", "offset": [3724, 3739]}, {"key": "figure-10", "type": "definition", "offset": [3769, 3778]}, {"key": "items-delivered", "type": "clause", "offset": [3875, 3890]}, {"key": "time-limit", "type": "clause", "offset": [3894, 3904]}, {"key": "use-case", "type": "clause", "offset": [3920, 
3928]}, {"key": "performance-data", "type": "definition", "offset": [3933, 3949]}, {"key": "changes-in", "type": "clause", "offset": [4027, 4037]}, {"key": "incremental-increases", "type": "clause", "offset": [4150, 4171]}, {"key": "in-performance", "type": "clause", "offset": [4428, 4442]}, {"key": "percentage-change", "type": "definition", "offset": [4680, 4697]}, {"key": "in-the-following-ways", "type": "clause", "offset": [5567, 5588]}, {"key": "for-example", "type": "definition", "offset": [5742, 5753]}, {"key": "working-with", "type": "definition", "offset": [5771, 5783]}, {"key": "the-cost", "type": "clause", "offset": [6073, 6081]}, {"key": "available-to", "type": "definition", "offset": [6536, 6548]}, {"key": "for-the-user", "type": "definition", "offset": [6676, 6688]}, {"key": "depending-on-how", "type": "clause", "offset": [6697, 6713]}, {"key": "resources-for", "type": "clause", "offset": [6758, 6771]}, {"key": "best-performance", "type": "definition", "offset": [7098, 7114]}, {"key": "the-system", "type": "definition", "offset": [7502, 7512]}, {"key": "similar-to", "type": "definition", "offset": [7775, 7785]}, {"key": "fault-tolerance", "type": "clause", "offset": [7826, 7841]}, {"key": "mode-1", "type": "definition", "offset": [8016, 8022]}, {"key": "maximum-value", "type": "definition", "offset": [8142, 8155]}, {"key": "mode-2", "type": "definition", "offset": [8249, 8255]}, {"key": "mode-3", "type": "definition", "offset": [8355, 8361]}, {"key": "mode-4", "type": "definition", "offset": [8595, 8601]}, {"key": "performance-increases", "type": "definition", "offset": [9323, 9344]}, {"key": "not-fault-tolerant", "type": "clause", "offset": [9508, 9526]}, {"key": "the-general", "type": "clause", "offset": [9569, 9580]}, {"key": "other-failure", "type": "definition", "offset": [9794, 9807]}, {"key": "close-to", "type": "definition", "offset": [9894, 9902]}, {"key": "catastrophic-failure", "type": "clause", "offset": [10089, 10109]}, {"key": 
"caused-by", "type": "clause", "offset": [10110, 10119]}], "size": 1, "snippet": "The agents in this case study are based on the Toshiba DOTS, developed at the Bristol Robotics Laboratory [15] (although it should be noted that the DOTS have more hardware and capabilities than the agents simulated in these experiments). Each agent has the following (simulated) hardware: holonomic wheel configuration; lifting mechanism for items; camera and IR sensor to detect items and obstacles; sensor to communicate with other agents. Using this onboard equipment, each agent can perform the following behaviours every time step: Random Walk; Collision avoidance; Item detection, pick up and put down; Periodic reshuffling of items; Delivery area detection, timer broadcast and item delivery; Swarm Diffusion-Taxis algorithm [16]. The random walk behaviour is updated every 1 second (once every 50 time steps). The agents move forward at a constant speed of 1 m/s for 1 second and then change to a new random direction of movement, adding \u22120.5c < \u03b1random < 0.5c to the current direction of travel, \u03b1. This is modelled as an instantaneous change in direction. The agents follow this random walk unless something comes into their sensory range. If they come within 0.35 m (centre- centre) of an obstacle then their collision avoidance behaviour is triggered. If the obstacle is a wall then the agent adds a quarter turn to their heading direction (\u03b1 = \u03b1+\u03c0/2) until they are moving away from the wall. If the obstacle is another agent or an item (which they want to avoid because they have an item currently) then the agent will move in the opposite direction from that obstacle. If there are multiple obstacles to avoid then the distances and directions to each obstacle sum to a vector, which the agent moves along to avoid them. 
Item detection, pick up and put down: When an agent that is not currently carrying an item comes into sensory range of an item (0.75 m from agent centre) then the agent will pick up the item. The items are on table-like carriers, which raise them off the ground on stilts. The agents can navigate underneath an item they have found and lift it up from beneath to carry it around. This is based on how items are stored and collected in the Toshiba test-bed, which simulates warehouse scenarios [15]. The items are periodically reshuffled by the agents, which will carry items around and put them down again somewhere else if they are not the requested item. They generate a random number between 0 and 100 every time step and if it is below 2 then they drop their item where they are and leave it behind for another agent to pick up. This reshuffling avoids two deadlock cases: (1) the requested item is trapped behind unrequested items; (2) all the agents are carrying unrequested items and are unable to deliver them or pick up new items. Delivery area detection, timer broadcast and item delivery: When an agent arrives in the delivery area it receives a (simulated) signal from a beacon there to say it is in the delivery area. It can then deliver the item it is carrying if it is the requested item. It will also broadcast a time value when it has been in the delivery area, which follows the Swarm Diffusion-Taxis algorithm [16]. In the SDT algorithm, the timer value that the agent broadcasts to its local neighbours is maximum (500) when the agent is in the delivery area. When it is outside the delivery area, the timer value decays by 1 every time step until it is NaN after 10 seconds. When the agent has the requested item, it will read the timer values of neighbours within its communication range (5.0 m) and move towards the agent with the highest timer value. It will re-do this step every time step. 
The performance was measured for different swarm sizes and the results for performance and Scalability are given in Figure 10 and expanded in this Section. Figure 10: Scalability, S (Equation 3) and performance (number of items delivered in time limit) for logistics use case (a) Performance data for various swarm sizes 1-40 agents (b) Scalability measured for incremental changes in swarm size (c) Scalability measured from 1 agent to N agents. Equation 3 was used to measure the Scalability of incremental increases in swarm size. For Equations 1, 2 and 3, m = 5 for 1 to 10 agents and m = 10 for 10 to 40 agents. The results are given in Figure 10(b). The Scalability values are all S > 0, which indicates that they are all scalable. This means that there is no decrease in performance due to an increase in agents, at any agent number tested. This can be confirmed by observing the performance curve in Figure 10(a). The results are superlinearly scalable, S > 1, for 5 to 20 agents, which indicates that the performance (percentage change) increases more than the number of agents added (percentage change), which is a superlinear increase in performance. Considering the performance values in Figure 10(a), this matches the shape of the curve because either side of this region, the gradient is less steep. Scalability is also measured for the change in performance from 1 agent. In every case of S, m is increasing, N = 1, PN = P1, PN+m = P1+m. The results for Scalability are given in Figure 10(c). The Scalability values are all scalable, with S > 0. This makes sense because all the performances in Figure 10(a) increase from P1. 1 to 20 and 1 to 30 agents are superlinear Scalability ranges, S > 1. From this, it can be concluded that the maximum super scalable range is 1 to 30 agents and the maximum scalable range is 1 to 40 agents (the maximum tested swarm size). The user can use this information in the following ways. 
If the Incremental Scalability results were included in a specification then the user could look up the Scalability for a given range of agent numbers. For example, if the user was working with this set-up with 10 agents and they wanted to improve the performance then they could look up Figure 10(b). From this graph they could read that scaling from 10 to 20 agents will give them a super scalable result, meaning that it will be a performance per agent increase that is more than the cost per agent increase. Whereas, if they had a swarm of 20 agents and they looked up what the Scalability was for moving to 30 agents they would find that 0 < S < 1, which is a scalable but not superlinearly scalable result. They would therefore know that they would get a performance increase but it would not be an increase with good efficiency. How they would proceed would depend if they valued performance or efficiency more and what resources they had available to them. If the Scalability from 1 results were included in a specification then this could be used to decide the best swarm size for the user to use, depending on how many agents they have available or have the resources for. If a user had unlimited agents and the same experimental set-up as is used for the results in Figure 10(c), then they could look at this graph and see that they would get the most efficient performance per agent increase (from 1 agent) at 20 agents. From the performance data (Figure 10(a)) they can see that this is not the best performance possible. But if the user values efficiency then 20 agents would be their best choice. Either Incremental Scalability or the Scalability from 1 can also be used to compare one swarm\u2019s Scalability to another. For example, if both swarms have N agents and perform the same task in the same set-up then the swarm with the highest S1toN number would be the most scalable. The performance of the system under 4 different failure modes is measured and given in Figure 11(a). 
In part (a) of this Figure, the scaled down swarm performance P(SD) and the proportional change P(%N) in performance are also given alongside the performances under faults 1-4. P(SD) is very similar to P(%N), which means that the results for Fault Tolerance (FT) and Robustness (R) are likely to be very similar to each other. A swarm of 25 agents was used for these experiments. The failure modes tested were as follows: \u25cf Failure Mode 1 (FM1): Malicious agents. The delivery area timer value is wrong. For failed agents the timer is always set to 500, the maximum value, so that they look to their neighbours as if they are always in the delivery area. \u25cf Failure Mode 2 (FM2): Failed agents cannot pick up items and they will instead treat them as obstacles. \u25cf Failure Mode 3 (FM3): Delivery area timer value always set to 0. Failed agents cannot broadcast how recently they have been in the delivery area but they can still detect the delivery area when they are in it to deliver items they carry. \u25cf Failure Mode 4 (FM4): Failed agents cannot deliver items. In the reshuffling behaviour, items are not dropped if they are the requested item. This means that with this failure mode, the requested item will never be passed on to a working agent if it is carried by a failed agent. Therefore, if an agent has failed in this way and is carrying the requested item then the scenario will deadlock and no more items will be delivered. The results for Fault Tolerance at each failure mode (FM) are given in Figure 11(b), alongside the performance data in Figure 11(a). The system is fault tolerant to FM3 up to and including 10 agent failures. Looking at the performance data for FM3, this is clearly true as the average performance increases for some of the results with increasing faulty agents and is always high. The system is fault tolerant to FM1 for 1, 2, 4-10 failed agents. 
The result where it is not fault tolerant, m = 3 agents, could be accounted for by the general variation in performances for P(FM1) and P(SD), shown in the performance data. The system is fault tolerant to FM2 for 1-2, 4, 6-8 agent failures. The magnitude of the FT results for FM2 is small compared to the other failure modes at almost all numbers of faulty agents tested, because the performance trend is close to the scaled down performance (seen in Figure 11(a)). The system is not fault tolerant to FM4 at any number of faulty agents. The larger magnitude of FT (with negative sign) indicates the catastrophic failure caused by this FM from 2 failed agents and up, seen in the performance data for FM4. As expected, the values for Fault Tolerance, FT (Figure 11(b)) and Robustness, R (Figure 11(c)) are very similar in this case, as was predicted given the performance data for the scaled down swarm. The system is robust to all 4 failure modes tested when 1 agent has failed, as all have results R > 0. The system remains robust to both FM1 and FM3 up to and including 10 agent failures. The system is robust to FM2 up to 5 agent failures and not robust for 6-10 agent failures. 
Finally, the system is not robust to FM4 for 2 and above agent failures, which also reflects the catastrophic failure caused by this FM.", "samples": [{"hash": "cw343oywo6Z", "uri": "/contracts/cw343oywo6Z#algorithm", "label": "Deliverable", "score": 32.4769245711, "published": true}], "hash": "b38598e8d2069d25f7b7bc5b2da59bc6", "id": 6}, {"snippet_links": [{"key": "arrival-information", "type": "clause", "offset": [12, 31]}, {"key": "the-current", "type": "clause", "offset": [95, 106]}, {"key": "bus-stop", "type": "clause", "offset": [154, 162]}, {"key": "calculate-the", "type": "clause", "offset": [174, 187]}, {"key": "arrival-times", "type": "clause", "offset": [188, 201]}, {"key": "the-user", "type": "clause", "offset": [336, 344]}, {"key": "available-to", "type": "definition", "offset": [480, 492]}, {"key": "the-systems", "type": "clause", "offset": [493, 504]}, {"key": "such-information", "type": "definition", "offset": [511, 527]}, {"key": "information-which", "type": "clause", "offset": [638, 655]}, {"key": "bus-route", "type": "clause", "offset": [716, 725]}, {"key": "accuracy-of-the", "type": "clause", "offset": [736, 751]}, {"key": "based-on", "type": "clause", "offset": [1037, 1045]}, {"key": "historical-data", "type": "clause", "offset": [1046, 1061]}, {"key": "time-of-day", "type": "definition", "offset": [1100, 1111]}, {"key": "day-of", "type": "clause", "offset": [1113, 1119]}], "size": 1, "snippet": "The time of arrival information shall be determined using a predictive algorithm that utilizes the current AVL information for the approaching buses to a bus stop. AIM shall calculate the arrival times for all buses up to the next 60 minutes and display up to the next five buses that will arrive at each stop. If more than five buses, the user can scroll to see the additional bus arrivals. 
The time of arrival information shall be updated at least every thirty seconds and made available to the systems using such information within one second after the AIM server receives a location update. AIM shall also calculate time of departure information which shall be used for announcements for the first stop for each bus route trip. The accuracy of the predictive algorithm shall be such that the predicted error shall be less than 75 seconds when a bus is five minutes or less from a stop; and less than two minutes when a bus is between six and 10 minutes from a stop. The AIM predictive algorithm shall be a learning algorithm that is based on historical data for the stop location, route, and the time of day, day of week, and week of year.", "samples": [{"hash": "1waCX6pnUR6", "uri": "/contracts/1waCX6pnUR6#algorithm", "label": "Master Agreement", "score": 23.5002323018, "published": true}], "hash": "80175fa8f5212a6058e74b479d7f77dc", "id": 7}, {"snippet_links": [{"key": "cluster-members", "type": "definition", "offset": [88, 103]}, {"key": "public-information", "type": "clause", "offset": [183, 201]}, {"key": "private-key", "type": "definition", "offset": [340, 351]}, {"key": "the-value", "type": "clause", "offset": [373, 382]}], "size": 1, "snippet": "Consider a cluster-based infrastructure-less network with cluster-head \u2018CH\u2019 and several cluster members. Consider two cluster members CMA and CMB that want to authenticate each other. The public information about the cluster is {PN, IDCH, IDCMA, IDCMB, TKprh(PN)}, hash function, symmetric encipherment. Step 1: Cluster member CMA selects the private key Kprcma and calculates the value of TKprcma(PN) and KCH\u2212CMA = TKprcma TKprh(PN) with the help of public information. 
Then CMA constructs the message mCMA as follows: mCMA = {IDCMA, IDCMB, IDCH, TKprcma(PN), CTCMA}, where CTCMA = E(KCH\u2212CMA, {IDCMA||IDCMB||IDCH||HCMA}) and HCMA = {IDCMA||IDCMB||IDCH||TKprcma(PN)}. Cluster member CMA sends mCMA to \u2018CH\u2019, and this message indicates that it wants to authenticate with Cluster member CMB.", "samples": [{"hash": "chrwRqRFFQB", "uri": "/contracts/chrwRqRFFQB#algorithm", "label": "Mutual Authenticated Key Agreement", "score": 25.9048596851, "published": true}], "hash": "7548ddb9e71a71e783e42e2d9737db78", "id": 8}, {"snippet_links": [{"key": "this-agreement", "type": "clause", "offset": [0, 14]}, {"key": "in-connection-with", "type": "clause", "offset": [47, 65]}, {"key": "the-term-of-the-agreement", "type": "clause", "offset": [152, 177]}, {"key": "digital-assets", "type": "definition", "offset": [239, 253]}, {"key": "in-section-3", "type": "clause", "offset": [279, 291]}, {"key": "the-customer-acknowledges", "type": "clause", "offset": [299, 324]}, {"key": "associated-with", "type": "definition", "offset": [335, 350]}, {"key": "transaction-verifications", "type": "clause", "offset": [464, 489]}, {"key": "by-the-customer", "type": "clause", "offset": [551, 566]}], "size": 1, "snippet": "This Agreement is for the use of one algorithm in connection with transaction verification for one or more blockchain protocols. At the commencement of the Term of the Agreement, the Customer-selected algorithm may be employed for certain digital assets extraction. 
As described in Section 3 below, the Customer acknowledges the risks associated with blockchain technologies and acknowledges that variations may occur with the protocols used to perform blockchain transaction verifications (\u201coutput\u201d) for cryptocurrencies using the algorithm selected by the Customer.", "samples": [{"hash": "hCEKu9I3538", "uri": "/contracts/hCEKu9I3538#algorithm", "label": "Terms of Use", "score": 30.5325296152, "published": true}], "hash": "14b2ab0fda9048a98b24e2d9c2893819", "id": 9}, {"snippet_links": [{"key": "first-phase", "type": "definition", "offset": [49, 60]}, {"key": "upon-receiving", "type": "clause", "offset": [197, 211]}, {"key": "the-request", "type": "clause", "offset": [212, 223]}, {"key": "second-phase", "type": "definition", "offset": [415, 427]}, {"key": "the-participating", "type": "clause", "offset": [549, 566]}], "size": 1, "snippet": "The algorithm consists of two phases. During the first phase, the checkpoint initiator identifies all processes with which it has communicated since the last checkpoint and sends them a request. \u2022 Upon receiving the request, each process in turn identifies all processes it has communicated with since the last checkpoint and sends them a request, and so on, until no more processes can be identified. \u2022 During the second phase, all processes identified in the first phase take a checkpoint. The result is a consistent checkpoint that involves only the participating processes. 
\u2022 In this protocol, after a process takes a checkpoint, it cannot send any message until the second phase terminates successfully, although receiving a message after the checkpoint has been taken is allowable.", "samples": [{"hash": "6iCyQzEKzlU", "uri": "/contracts/6iCyQzEKzlU#algorithm", "label": "Agreement in a Failure Free System", "score": 31.9075489736, "published": true}], "hash": "c50d0004fc063d5652ac5d1a6463bd7c", "id": 10}], "next_curs": "ClISTGoVc35sYXdpbnNpZGVyY29udHJhY3Rzci4LEhZDbGF1c2VTbmlwcGV0R3JvdXBfdjU2IhJhbGdvcml0aG0jMDAwMDAwMGEMogECZW4YACAA", "clause": {"children": [["", ""], ["unforgeability", "Unforgeability"], ["numerate-processes", "Numerate Processes"], ["relay", "Relay"], ["correctness", "Correctness"]], "parents": [["the-synchronous-case", "THE SYNCHRONOUS CASE"], ["the-partially-synchronous-case", "THE PARTIALLY SYNCHRONOUS CASE"], ["scope-of-the-agreement", "Scope of the Agreement"], ["gems-cases", "Gems Cases"], ["reconciliation", "Reconciliation"]], "size": 36, "title": "Algorithm", "id": "algorithm", "related": [["interfaces", "Interfaces", "Interfaces"], ["indicator", "Indicator", "Indicator"], ["model", "Model", "Model"], ["configuration", "Configuration", "Configuration"], ["outputs", "Outputs", "Outputs"]], "related_snippets": [], "updated": "2025-07-07T12:37:48+00:00", "also_ask": ["What key protections should be included to address algorithm transparency and explainability?", "How can parties allocate liability for algorithm errors or unintended outcomes?", "What negotiation leverage exists regarding access to algorithm source code or audit rights?", "How do enforceability standards for algorithm clauses differ across major jurisdictions?", "What are the most common legal pitfalls or ambiguities in algorithm-related contract clauses?"], "drafting_tip": "Define the algorithm's function and parameters to avoid ambiguity; specify ownership and usage rights to ensure enforceability; require documentation of changes to maintain 
clarity.", "explanation": "The Algorithm clause defines the specific computational or procedural method to be used for processing data or making decisions within the context of the agreement. It typically outlines which algorithm or class of algorithms must be implemented, the parameters or settings to be used, and any requirements for updates or modifications over time. By specifying the algorithm, this clause ensures consistency, transparency, and predictability in automated processes, reducing ambiguity and potential disputes over how outcomes are generated."}, "json": true, "cursor": ""}}