Common use of Previous Work Clause in Contracts

Previous Work. Reputation mechanisms are being used to increase the reliability and performance of virtual societies (or organisations) while providing mechanisms for exchanging reputation values. In centralised reputation models, a reputation system receives feedback about the interactions among the agents: each agent evaluates the behaviour of the agents with whom it interacts and informs the reputation system, which puts together all the evaluations and stores the resulting reputations. In contrast, in distributed reputation models, each agent evaluates and stores the reputations of the agents with whom it has interacted and is able to provide such information to other agents. To cope with the problems of both centralised and distributed reputation mechanisms³, we proposed the use of a hybrid mechanism [12]. In the distributed part of such a mechanism, agents evaluate the behaviour of other agents by exchanging opinions and storing such information. An opinion has to be justified by providing, for instance, the set of violated norms that contribute to that opinion.
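
As a reading aid, the following Python sketch illustrates the hybrid idea described above: a local store per agent (the distributed part) and a central registry that aggregates reported values (the centralised part). The class names, the plain averaging and the flat justification list are assumptions made for illustration; they are not the model of [12].

```python
# Illustrative sketch of the hybrid reputation mechanism (assumed names,
# not the formal model of [12]): agents keep justified opinions locally,
# and a central registry aggregates the values reported to it.
from dataclasses import dataclass, field
from statistics import mean
from typing import Optional


@dataclass
class Opinion:
    about: str                   # agent the opinion refers to
    value: float                 # reputation value, assumed in [0, 1]
    violated_norms: list[str]    # justification: norms the evaluated agent violated


@dataclass
class AgentStore:
    """Distributed part: each agent stores the opinions it has formed."""
    owner: str
    opinions: dict[str, Opinion] = field(default_factory=dict)

    def record(self, opinion: Opinion) -> None:
        self.opinions[opinion.about] = opinion


class CentralRegistry:
    """Centralised part: collects the values agents report and aggregates them."""

    def __init__(self) -> None:
        self._reports: dict[str, list[float]] = {}

    def report(self, opinion: Opinion) -> None:
        self._reports.setdefault(opinion.about, []).append(opinion.value)

    def reputation(self, agent: str) -> Optional[float]:
        values = self._reports.get(agent)
        return mean(values) if values else None


if __name__ == "__main__":
    alice, registry = AgentStore("ag1"), CentralRegistry()
    op = Opinion(about="ag3", value=0.3, violated_norms=["deliver_on_time"])
    alice.record(op)       # kept locally (distributed part)
    registry.report(op)    # also fed to the central module
    print(registry.reputation("ag3"))  # 0.3
```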

This work is framed in organisational environments that provide a minimum set of organisational mechanisms to regulate agents' interactions. Formally, an organisation is defined as a tuple ⟨Ag, A, X, φ, x0, ϕ, om_ON, om_R⟩, where Ag represents the set of agents participating within the organisation; A is the set of actions agents can perform; X stands for the environmental state space; φ is a function describing how the system evolves as a result of agents' actions; x0 represents the initial state of the system; ϕ is the agents' capability function, describing the actions agents are able to perform in a given state of the environment; om_ON is an organisational mechanism based on organisational norms; and om_R is an organisational mechanism based on roles that defines the positions agents may enact in the organisation (see [5] for more details).
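
A minimal transcription of that tuple into code can help keep the components apart. The field names and the concrete types below (strings for agents, actions and states; a joint-action dictionary for the evolution function) are illustrative assumptions only, not definitions taken from [5].

```python
# Sketch of the organisation tuple <Ag, A, X, phi, x0, capability, om_ON, om_R>.
# Concrete types are placeholders chosen for the example.
from dataclasses import dataclass
from typing import Callable, Dict, Set

AgentId = str   # element of Ag
Action = str    # element of A
State = str     # element of X


@dataclass
class Organisation:
    agents: Set[AgentId]                                      # Ag
    actions: Set[Action]                                      # A
    states: Set[State]                                        # X
    evolve: Callable[[State, Dict[AgentId, Action]], State]   # phi: system evolution
    initial_state: State                                      # x0
    capability: Callable[[AgentId, State], Set[Action]]       # actions available in a state
    norm_mechanism: object                                    # om_ON: organisational norms
    role_mechanism: object                                    # om_R: roles agents may enact


# Toy instance: two agents that can "greet" in a single-state world.
org = Organisation(
    agents={"ag1", "ag2"},
    actions={"greet"},
    states={"s0"},
    evolve=lambda state, joint_action: state,
    initial_state="s0",
    capability=lambda agent, state: {"greet"},
    norm_mechanism=None,
    role_mechanism=None,
)
```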

Agents participating in such organisations are involved in different situations. A situation is defined as a tuple ⟨Ag, R, A, T⟩ that represents an agent (Ag) playing a role (R) while performing an action (A) through a time period (T). As detailed in [5], different types of situations can be defined following this definition. For instance, situations in which an agent performs an action regardless of the role it is playing – ⟨Ag, _, A, T⟩ – or situations in which an agent plays a role along a time period regardless of the action it performs – ⟨Ag, R, _, T⟩ – are examples of possible situations.
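
The sketch below shows one possible way to encode such situation patterns, using None for an unconstrained slot (the underscore above). The wildcard convention and the matching rule are assumptions introduced here for illustration, not taken from [5].

```python
# Situation tuple <Ag, R, A, T>; None marks an unconstrained slot.
from dataclasses import dataclass, astuple
from typing import Optional, Tuple


@dataclass(frozen=True)
class Situation:
    agent: Optional[str]                 # Ag
    role: Optional[str]                  # R
    action: Optional[str]                # A
    period: Optional[Tuple[int, int]]    # T, e.g. (start, end)


def matches(concrete: Situation, pattern: Situation) -> bool:
    """A concrete situation matches a pattern when every constrained slot coincides."""
    return all(p is None or p == c for c, p in zip(astuple(concrete), astuple(pattern)))


if __name__ == "__main__":
    done = Situation("ag1", "seller", "deliver", (0, 10))
    any_role = Situation("ag1", None, "deliver", (0, 10))    # <Ag, _, A, T>
    any_action = Situation("ag1", "seller", None, (0, 10))   # <Ag, R, _, T>
    print(matches(done, any_role), matches(done, any_action))  # True True
```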

As mentioned above, we claim that when agents exchange opinions, these should be justified somehow, in order to allow receivers to reason about them (see [5] for more details) and, more importantly, this justification has to be based on the fulfilment of the norms that regulate the different situations in which agents are involved. We consider two different types of norms regarding an organisation and its members. On the one hand, there exist norms that regulate all the participants in the organisation in different situations; these norms are known by all participants, and their fulfilment could possibly be controlled by some authority entity. We call them organisational norms. On the other hand, we also define another type of norms – personal norms – that regulate the preferences an agent has regarding an individual situation; that is, they regulate how an agent wants a particular situation to be carried out.
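
One way to keep the two kinds of norms apart in code is sketched below. Representing a norm as a named condition over situation facts, and the authority/owner fields, are assumptions introduced here for illustration; the clause does not fix a concrete norm representation.

```python
# Sketch: a norm as a named condition over situation facts (assumed representation).
from dataclasses import dataclass
from typing import Callable, Dict

SituationFacts = Dict[str, object]   # facts observed about a concrete situation


@dataclass
class OrganisationalNorm:
    """Known by all participants; fulfilment may be checked by an authority entity."""
    name: str
    fulfilled: Callable[[SituationFacts], bool]
    authority: str = "org-monitor"


@dataclass
class PersonalNorm:
    """Defined by a single agent, which alone checks fulfilment; the norm
    constrains the behaviour of that agent's partners, not its own."""
    name: str
    fulfilled: Callable[[SituationFacts], bool]
    owner: str = ""


if __name__ == "__main__":
    on = OrganisationalNorm("deliver_on_time", lambda f: f.get("delay_days", 0) == 0)
    pn = PersonalNorm("answer_politely", lambda f: bool(f.get("polite", True)), owner="ag1")
    facts = {"delay_days": 2, "polite": True}
    print(on.fulfilled(facts), pn.fulfilled(facts))  # False True
```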

Agents define their own personal norms, and they are the only ones that check their fulfilment. Note that the personal norms defined by an agent regulate the behaviour of its partners, not its own behaviour. As already pointed out, when an agent a sends an opinion to b about c – usually a reputation value – a has to justify such a value by sending the set of organisational norms that c violated when interacting with it, as well as the facts that prompted that reasoning. Moreover, personal norms that also contribute to an agent's reputation evaluation are sent only on demand. Starting from this approach, we focus on how to model the centralised part of the mechanism, stressing the definition of reputation-based agreement.

[Figure: agents report their reputation information – mappings from situations to reputation values – to the Centralised Module.]

³ In Section 5 we detail those problems.
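
The exchange described in this paragraph can be pictured as below. The message fields and the on-demand request function are illustrative assumptions about how such a justified opinion might be packaged, not a protocol defined in the cited work.

```python
# Sketch of a justified opinion sent from a to b about c (assumed packaging).
from dataclasses import dataclass, field
from typing import List


@dataclass
class JustifiedOpinion:
    sender: str                    # a
    receiver: str                  # b
    about: str                     # c
    reputation: float              # usually a reputation value
    violated_org_norms: List[str]  # organisational norms c violated
    facts: List[str]               # facts that prompted that reasoning
    # Personal-norm justifications travel only on demand, so they start empty.
    violated_personal_norms: List[str] = field(default_factory=list)


def attach_personal_justification(opinion: JustifiedOpinion,
                                  violated_personal_norms: List[str]) -> JustifiedOpinion:
    """Called when the receiver explicitly asks for the personal-norm part."""
    opinion.violated_personal_norms = list(violated_personal_norms)
    return opinion


if __name__ == "__main__":
    msg = JustifiedOpinion(
        sender="a", receiver="b", about="c", reputation=0.2,
        violated_org_norms=["deliver_on_time"],
        facts=["order arrived 3 days late"],
    )
    msg = attach_personal_justification(msg, ["answer_politely"])  # on demand
    print(msg.reputation, msg.violated_org_norms, msg.violated_personal_norms)
```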

Appears in 2 contracts

Samples: ceur-ws.org, nlp.uned.es
