Algorithm Sample Clauses

Algorithm. Next, we present an algorithm that solves Byzantine agreement assuming ℓ > 3t. Our agreement algorithm is generic: given any synchronous Byzantine agreement algorithm for ℓ processes with unique identifiers (such algorithms exist when ℓ = n > 3t, e.g., [13]), we transform it into an algorithm for n processes and ℓ identifiers, where n ≥ ℓ. Without loss of generality, we assume that the algorithm to be transformed uses broadcasts: a process sends the same message to all other processes. (If a process wishes to send a message only to specific recipients, it can include the recipients' identifiers in the broadcast message.) In our transformation, we divide processes into groups according to their identifiers. Each group simulates a single process. If all processes within a group are correct, then they can reach agreement and cooperatively simulate a single process. If any process in the group is Byzantine, we allow the simulated process of that group to behave in a Byzantine manner. The correctness of our simulation relies on the fact that more than two-thirds of the simulated processes will be correct (since ℓ > 3t), which is enough to achieve agreement.
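The grouping step can be illustrated with a short Python sketch (the function name and the concrete setup are illustrative, not from the clause): processes are grouped by identifier, and a simulated process counts as faulty as soon as any member of its group is Byzantine. Since at most t groups can contain a Byzantine process, ℓ > 3t guarantees that more than two-thirds of the simulated processes are correct.

```python
from collections import defaultdict

def faulty_simulated(ids, byzantine):
    """Group process indices by identifier; the simulated process for an
    identifier is faulty iff its group contains a Byzantine member."""
    groups = defaultdict(list)
    for proc, ident in enumerate(ids):
        groups[ident].append(proc)
    return {ident for ident, members in groups.items()
            if any(p in byzantine for p in members)}

# n = 10 processes sharing l = 7 identifiers, t = 2 Byzantine processes
ids = [0, 1, 2, 3, 4, 5, 6, 0, 1, 2]
byzantine = {0, 8}                      # processes 0 and 8 are Byzantine
faulty = faulty_simulated(ids, byzantine)
# at most t = 2 simulated processes are faulty, and l = 7 > 3 * 2
assert len(faulty) <= len(byzantine)
```

At most one simulated process becomes faulty per Byzantine process, which is why the ℓ > 3t threshold carries over to the simulation.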
Algorithm. We now describe an algorithm that solves Byzantine agreement in the basic partially synchronous model when ℓ > (n+3t)/2. Our algorithm is based on the algorithm given by Xxxxx, Xxxxx and Xxxxxxxxxx [9] for the classical case where n = ℓ, with several novel features. Generalizing the algorithm is not straightforward. Some of the difficulty stems from the following scenario. Suppose two correct processes share an identifier and follow the traditional algorithm of [9]. They could send very different messages (for example, if they have different input values), but recipients of those messages would have no way of telling apart the messages of the two correct senders, so it could appear to the recipients as if a single Byzantine process were sending out contradictory information. Thus, the algorithm has to guard against inconsistent information coming from correct homonym processes as well as malicious messages sent by the Byzantine processes.
Algorithm. We now describe the basic step of the reconciliation mechanism, i.e., the reconciliation between two sites on a given object. Section 8 describes when to invoke the reconciliation mechanism, and the options that exist in its use. The basic step is as follows.
Algorithm. Prior to GEMS commencing discussions with the Institutional Review Board ("IRB"), R2 shall confirm in writing to GEMS that the R2 Product algorithm can process the PMA (pre-market approval) cases which are done in feasibility format and that R2 can use them for the PMA submission. In the event that the FDA does not accept these cases for R2's FDA submission, then GEMS and R2 shall negotiate in good faith to determine how to acquire additional cases as outlined in Section 2.8.3.
Algorithm. The shared memory. Under x-obstruction-freedom, up to x processes may concurrently progress without preventing termination. Consequently, compared to obstruction-freedom, solving k-set agreement in this setting requires dealing with more contention scenarios. To cope with these additional interleavings of processes, we increase the number of entries in REG. More precisely, REG now contains m = (n − k + x) entries.

Ordering the quadruplets. In the base algorithm, the four fields of a quadruplet X are the round number X.rd, the level X.ℓvℓ, the conflict flag X.xxℓ, and the value X.val. Coping with x-concurrency requires replacing the last field, which was initially a singleton, with a set of values. Hereafter, this new field is denoted X.valset. In line with the definitions of Section 4.1, let ">" denote the lexicographical order over the set of quadruplets, where the relation ⊐ is generalized as follows to take into account the fact that the last field of a quadruplet is now a non-empty set of values:

X ⊐ Y ≝ (X > Y) ∧ [(X.rd > Y.rd) ∨ (X.xxℓ) ∨ (X.valset ⊇ Y.valset)].

In comparison with the definition appearing in Section 4, the sole new case where the ordering X ⊐ Y holds is (X > Y) ∧ (X.valset ⊇ Y.valset). This case captures the fact that, as long as at most x input values are competing at some round, there is no conflict. If such a situation arises, we simply construct a quadruplet that aggregates the different input values.

function sup(T) is   % T is a set of quadruplets whose last field is now a set of values %
(S1)  let (r, ℓeveℓ, confℓict, valset) be max(T);   % lexicographical order %
(S2)  let tuples(T) be {X | X ∈ T ∧ X.rd = r};
(S3)  let values(T) be {v | X ∈ T ∧ v ∈ X.valset};
(S4)  let confℓict(T) be confℓict ∨ |tuples(T)| > x ∨ |values(T)| > x;
(S5)  let valset be the (at most) x greatest values in values(T);
(S6)  return (r, ℓeveℓ, confℓict(T), valset).
Figure 4: Function sup() suited to x-obstruction-freedom. Modifications to the sup() function. Figure 4 describes the new definition of function sup(). Compared with the original algorithm in Figure 1, it introduces a few modifications (underlined and in blue). These are detailed below. • Line S1. As pointed out previously, the last field of a quadruplet is now a set of values. The lexicographical ordering over such sets is as follows: sets are ordered first according to their size, and second using some arbitrary order over their elements. By abuse of notation, t...
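A direct Python transcription of Figure 4 may make the steps concrete. Quadruplets are modelled here as `(rd, level, conflict, valset)` tuples and value sets are ordered by size and then by their sorted elements; this encoding is an assumption for illustration, not part of the original pseudocode.

```python
def sup(T, x):
    """sup() for x-obstruction-freedom: T is a collection of quadruplets
    (rd, level, conflict, valset), valset being a non-empty set of values."""
    # sets compare first by their size, then by an arbitrary (here sorted)
    # order over their elements, as described for line S1
    def key(q):
        rd, level, conflict, valset = q
        return (rd, level, conflict, (len(valset), tuple(sorted(valset))))
    r, level, conflict, _ = max(T, key=key)                        # (S1)
    tuples_r = [q for q in T if q[0] == r]                         # (S2)
    values = set().union(*(q[3] for q in T))                       # (S3)
    conflict_T = conflict or len(tuples_r) > x or len(values) > x  # (S4)
    valset = set(sorted(values, reverse=True)[:x])                 # (S5)
    return (r, level, conflict_T, valset)                          # (S6)

# two competing values at round 1 with x = 2: no conflict, values aggregated
T = [(1, 0, False, {3}), (1, 0, False, {5})]
assert sup(T, x=2) == (1, 0, False, {3, 5})
```

With three competing values and x = 2, line S4 raises the conflict flag instead, matching the intuition that at most x values may compete without conflict.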
Algorithm. The algorithm consists of two phases. During the first phase, the checkpoint initiator identifies all processes with which it has communicated since the last checkpoint and sends them a request.
• Upon receiving the request, each process in turn identifies all processes it has communicated with since the last checkpoint and sends them a request, and so on, until no more processes can be identified.
• During the second phase, all processes identified in the first phase take a checkpoint. The result is a consistent checkpoint that involves only the participating processes.
• In this protocol, after a process takes a checkpoint, it cannot send any message until the second phase terminates successfully, although receiving a message after the checkpoint has been taken is allowable.
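The first phase is essentially a transitive closure over the "communicated with since the last checkpoint" relation, which can be sketched as follows (the communication map and names are illustrative):

```python
def first_phase(initiator, comm):
    """Return every process reachable from the initiator through the
    'communicated with since the last checkpoint' relation; comm maps a
    process to the set of processes it has communicated with."""
    identified, frontier = {initiator}, [initiator]
    while frontier:                      # requests propagate until no new
        p = frontier.pop()               # process can be identified
        for q in comm.get(p, set()):
            if q not in identified:
                identified.add(q)
                frontier.append(q)
    return identified                    # these take a checkpoint in phase two

comm = {"A": {"B"}, "B": {"C"}, "D": {"E"}}   # D and E never talked to A's group
print(sorted(first_phase("A", comm)))         # -> ['A', 'B', 'C']
```

Only the three processes in the initiator's communication closure checkpoint; D and E are untouched, which is the point of the protocol.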
Algorithm. The ECM-sketch combines the well-known Count-Min sketch structure [2], which is used for conventional streams, with a state-of-the-art tool for sliding-window statistics, i.e., the Exponential Histogram [3]. The input of the ECM-sketch data structure is a number of distributed data streams. The output of the ECM-sketch algorithm is a sliding-window sketch synopsis that can provide provable, guaranteed error performance for queries, and can be employed to address a broad range of problems, such as maintaining frequency statistics, finding heavy hitters, and computing quantiles in the sliding-window model.
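As a rough illustration of the Count-Min half of the structure, here is a minimal Count-Min sketch in Python. The ECM-sketch additionally replaces each counter with an Exponential Histogram to answer sliding-window queries; that part is omitted here, and the parameters are illustrative.

```python
import hashlib

class CountMin:
    """Minimal Count-Min sketch: point queries may overestimate true
    counts (due to hash collisions) but never underestimate them."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, item, row):
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._bucket(item, row)] += count

    def query(self, item):
        return min(self.table[row][self._bucket(item, row)]
                   for row in range(self.depth))

cm = CountMin()
for _ in range(5):
    cm.add("heavy")
cm.add("light")
assert 5 <= cm.query("heavy") <= 6   # at least the true count, never less
```

The one-sided error is what makes the structure suitable for heavy-hitter queries: a frequent item can never be reported as rarer than it is.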
Algorithm. Mutual information I(X;Y) computes the amount of information a random variable includes about another random variable, or, in terms of entropy, it is the decrease of uncertainty in a random variable due to existing knowledge about the other. For example, suppose discrete random variable X represents the roll of a fair six-sided die, whereas Y shows whether the roll is odd or even. Then, it is clear that the two random variables share information, as by observing one we receive knowledge about the other. On the other hand, if we have a third discrete random variable Z denoting the roll of another die, then variables X and Z, or Y and Z, do not share mutual information. More formally, for a pair of discrete random variables X, Y with joint probability function p(x,y) and marginal probability functions p(x) and p(y) respectively, the mutual information I(X;Y) is the relative entropy between the joint distribution and the product distribution:

I(X;Y) = Σ_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
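The dice example can be checked numerically. The sketch below computes I(X;Y) from the definition, with Y the parity of a fair die roll X; since Y is determined by X and carries one bit of entropy, observing Y removes exactly one bit of uncertainty about X.

```python
import math

def mutual_information(joint, p_x, p_y):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) p(y)))."""
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in joint.items() if p > 0)

p_x = {x: 1 / 6 for x in range(1, 7)}             # fair six-sided die
p_y = {0: 1 / 2, 1: 1 / 2}                        # parity of the roll
joint = {(x, x % 2): 1 / 6 for x in range(1, 7)}  # y is determined by x

mi_xy = mutual_information(joint, p_x, p_y)
assert abs(mi_xy - 1.0) < 1e-9    # one bit: parity halves the uncertainty
```

For an independent die Z, the joint distribution factors as p(x)p(z), every log term vanishes, and the same function returns (numerically) zero.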
Algorithm. Transfer entropy is a non-parametric statistic measuring the amount of directed (time-asymmetric) transfer of information between two random processes. Transfer entropy from a process X to another process Y is the amount of uncertainty reduced in future values of Y by knowing the past values of X given past values of Y. The transfer entropy can be written as:

T_{X→Y} = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]   (2)

Transfer entropy from Y to X is written symmetrically:

T_{Y→X} = Σ p(x_{t+1}, x_t, y_t) log[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]

Transfer entropy is able to effectively distinguish driving and responding elements and to detect asymmetry in the interaction of subsystems. Transfer entropy is conditional mutual information [25] with the history of the influenced variable in the condition. Transfer entropy reduces to Xxxxxxx causality for vector auto-regressive processes. Hence, it is advantageous when the model assumption of Xxxxxxx causality does not hold, for example, in the analysis of non-linear signals. However, it usually requires more samples for accurate estimation. While it was originally defined for bivariate analysis, transfer entropy has been extended to multivariate forms, either conditioning on other potential source variables or considering transfer from a collection of sources, although these forms again require more samples. The calculation proceeds in two steps, shown in the flowchart of the following figure: the probability density functions are first estimated from the input data, and the transfer entropy is then computed from them.
Figure 11: Transfer Entropy basic flowchart
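A plug-in (histogram-based) estimator with history length 1 can illustrate the asymmetry. In the example below, Y copies X with a one-step lag, so information flows only from X to Y; the estimator and the synthetic data are illustrative, not a reference implementation.

```python
from collections import Counter
import math
import random

def transfer_entropy(src, dst):
    """Plug-in estimate (in bits) of transfer entropy src -> dst,
    using a history length of 1 and empirical probabilities."""
    triples = Counter((dst[t + 1], dst[t], src[t])
                      for t in range(len(dst) - 1))
    n = sum(triples.values())
    pair_hist = Counter()   # counts of (dst[t+1], dst[t])
    pair_cond = Counter()   # counts of (dst[t], src[t])
    hist = Counter()        # counts of dst[t]
    for (nxt, d, s), c in triples.items():
        pair_hist[(nxt, d)] += c
        pair_cond[(d, s)] += c
        hist[d] += c
    te = 0.0
    for (nxt, d, s), c in triples.items():
        p_full = c / pair_cond[(d, s)]           # p(nxt | d, s)
        p_hist = pair_hist[(nxt, d)] / hist[d]   # p(nxt | d)
        te += (c / n) * math.log2(p_full / p_hist)
    return te

random.seed(7)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                 # y copies x with a one-step delay
te_xy = transfer_entropy(x, y)   # close to 1 bit: x drives y
te_yx = transfer_entropy(y, x)   # close to 0 bits: no reverse flow
assert te_xy > te_yx
```

The asymmetry te_xy ≫ te_yx is exactly the driving/responding distinction the text describes; mutual information alone, being symmetric, could not make it.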
Algorithm. The time of arrival information shall be determined using a predictive algorithm that utilizes the current AVL information for the approaching buses to a bus stop. AIM shall calculate the arrival times for all buses up to the next 60 minutes and display up to the next five buses that will arrive at each stop. If more than five buses are expected, the user can scroll to see the additional bus arrivals. The time of arrival information shall be updated at least every thirty seconds and made available to the systems using such information within one second after the AIM server receives a location update. AIM shall also calculate time of departure information, which shall be used for announcements for the first stop of each bus route trip. The accuracy of the predictive algorithm shall be such that the prediction error shall be less than 75 seconds when a bus is five minutes or less from a stop, and less than two minutes when a bus is between six and 10 minutes from a stop. The AIM predictive algorithm shall be a learning algorithm that is based on historical data for the stop location, route, and the time of day, day of week, and week of year.
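One simple way to realize such a learning predictor is to average historical travel times keyed by stop, route, time of day, and day of week. The following is a hypothetical sketch of that idea, not the AIM implementation; all names and values are illustrative.

```python
from collections import defaultdict

class ArrivalPredictor:
    """Predict seconds-to-arrival from historical observations keyed by
    (stop, route, hour-of-day, day-of-week); falls back to a default."""
    def __init__(self, default=300):
        self.history = defaultdict(list)
        self.default = default

    def record(self, stop, route, hour, weekday, seconds):
        """Store an observed travel time for this stop/route/time slot."""
        self.history[(stop, route, hour, weekday)].append(seconds)

    def predict(self, stop, route, hour, weekday):
        """Average the matching historical observations, if any."""
        obs = self.history.get((stop, route, hour, weekday))
        return sum(obs) / len(obs) if obs else self.default

p = ArrivalPredictor()
for s in (290, 310, 300):                       # past runs at this stop/hour
    p.record("stop42", "routeA", 8, "Mon", s)
print(p.predict("stop42", "routeA", 8, "Mon"))  # -> 300.0
```

A production system would refresh such estimates continuously from the AVL feed, which is how the specified 30-second update and learning behavior could be met.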