Related Literature Clause Examples

The Related Literature clause identifies and references prior works, studies, or publications that are relevant to the subject matter of the agreement or document. In practice, this clause typically lists specific articles, books, or research papers that provide background, support, or context for the current work. By formally acknowledging these sources, the clause ensures transparency, gives credit to original authors, and helps readers understand the foundation upon which the current work is built.
Related Literature. Sequential P4D deals with potential challengers follow the logic developed by Xxxxxxxx [1984], but with deterrence investment replaced by P4D deals and the licensing of an authorized generic (AG). In fact, the strategy of launching an AG via a P4D deal with a challenger, as discussed in this paper, is similar to earlier studies that focus on licensing as a strategy to maintain market leadership and/or deter entry. For instance, Xxxxxxx [1984] shows the conditions under which the incumbent licenses its production technology to a potential entrant in exchange for terminating research into competing or better technology, while Xxxxxxx [1990] and Xxxxxxx [1994] provide models in which the incumbent licenses either the weaker competitor or a competitor from outside the industry, so as to crowd the market and discourage stronger competitors from entering. Yet, despite these similarities, important differences exist between our paper and previous studies on licensing. In our paper the generic with the AG license is the de facto strongest competitor to the brand, as it enters before other generics and captures the first-mover advantage. Additionally, instead of a license being introduced before the potential competitor incurs entry costs, in our paper the license is issued and the AG launched only if the next potential entrant has incurred an entry cost (i.e., a litigation cost) and is successful. Several studies have documented the impact on independent generic entry when branded manufacturers launch their own generic, or an authorized generic via a third party. Xxxxxx [2003] argues that authorized generics deter independent generic entry in intermediate-sized markets (and “probably” in other markets as well), while Reiffen and Xxxx [2007] show that authorized generic entry may deter independent generic entry in small and intermediate-sized markets only and raise long-run prices by 1-2%. Xxxxxx et al. [2007] argue that the effect of authorized entry on independent generic entry, and ultimately on consumer welfare, is likely to be small but still positive. However, Xxxxxx [2015] reports that early authorized entry has no impact on the likelihood of generic entry. As documented in a report by the Federal Trade Commission [FTC, 2011b, pp. 17-18], authorized generics can be launched by the branded firm itself (in-house) or via third parties, but require expertise in generic marketing. This is because whereas brand-name drugs are typically marketed to physicians and consumers e...
Related Literature. We apply the theory of Bayesian games originally developed by Harsanyi (1967, 1968a,b) to model the interactions between the power plant and the distributor. Bayesian games have been applied to electricity markets to model suppliers’ bidding processes in which each power plant’s marginal cost is private information. Such a game has been analyzed under various market conditions by Ferrero et al. (1998), Xxxxxxxxxxxx et al. (2002), Li and Xxxxxxxxxxxx (2005), and Xxxxxxx (2005), among others. Xxxxxçsu and Puller (2008) analyze bidding processes in which contract positions are private information. Unlike previous works, in this paper the information asymmetry comes from the fact that the plant’s status cannot be directly observed by the distributor, and the unit-contingent power purchase agreement introduces incentive conflicts into the system. Several economics papers on contract theory are related to our work. For example, Laffont and Martimort (2002, Section 3.6) discuss an adverse selection problem with audits and costly state verification. The costly audit allows the principal to detect an untruthful agent’s report and impose penalties. The Revelation Principle still applies, and under truth-revealing mechanisms punishments are never used, but the existence of punishments reduces the agent’s incentive to lie and, hence, reduces informational rents. Xxxxxxxxxx and Png (1989) and Reinganum and Wilde (1985) apply adverse selection problems with costly state verification to insurance and taxation. In contrast to these papers, our analysis focuses on a particular contract form commonly seen in practice. Because of the restriction on the contract set, instead of invoking the mechanism design approach (as in Xxxxxxx 1979, 1981, Guesnerie and Laffont 1984), we find the equilibrium of the Bayesian game directly. Within the unit-contingent contract space, we show that the truth-revealing mechanism is not necessarily optimal.
Our work shares some similarities with the economics literature on contracting with costly state verification (e.g., Xxxxxxxx 1979) and the literature on incomplete contracts (e.g., Xxxxxx and Sappington 1991, Boot et al. 1993, Bernheim and Whinston 1998) in that, by allowing one party flexibility to act, a better equilibrium outcome can be achieved. However, there are a number of essential differences between our work and this literature. For instance, Xxxxxxxx (1979) and the related economics literature on bonding and insurance are concerne...
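The mechanism described above — punishments that are never triggered on the equilibrium path but still discipline the agent — can be illustrated with a toy calculation. This is a deliberately simplified sketch, not the cited models: the cost values, detection probability, and penalty are illustrative assumptions.

```python
# Toy illustration (not the cited models) of how a costly audit with a penalty
# reduces the informational rent in an adverse-selection problem.
# The agent's cost is either low (c_lo) or high (c_hi); only the agent knows it.

def informational_rent(c_lo: float, c_hi: float,
                       audit_prob: float, penalty: float) -> float:
    """Rent the low-cost agent must be conceded so it reports truthfully.

    Without audits, the low-cost agent can mimic the high-cost one and pocket
    c_hi - c_lo, so the principal must concede that much rent. With an audit
    that detects a lie with probability audit_prob and fines the liar
    `penalty`, the expected gain from lying shrinks by audit_prob * penalty.
    Under truth-telling, the penalty is never actually imposed.
    """
    return max(0.0, (c_hi - c_lo) - audit_prob * penalty)

print(informational_rent(10, 14, 0.0, 20))  # no audit: rent = 4.0
print(informational_rent(10, 14, 0.3, 20))  # audit: max(0, 4 - 6) = 0.0
```

The second call shows the point made in the passage: the mere existence of the punishment (expected fine 0.3 × 20 = 6) outweighs the gain from lying (4), so the rent disappears even though no punishment is ever carried out in equilibrium.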
Related Literature. There is a wide range of related literature concerning the aggregation of SLAs. Approaches closely related to our work can be roughly categorized into three areas whose boundaries blur to some extent. Models which aggregate the SLOs of single SLIs in a mathematical way are introduced in (Xxxxx and Xxxxxxxx 2007; Xxxxxx et al. 2004; Xxxxx et al. 2008). Models which cover the PROSA characteristic of being a document, that is, which provide a framework for building a single document out of a set of SLA documents (the SLAs of the single services invoked by one BP), are discussed in (Xxxxxx and Xxxxxxxx 2008; Xxxxxxxxx et al. 2007; Xxxxxxx 2000). Finally, (Xxxx et al. 2002; Xxxx et al. 2008) elaborate models which validate the SLOs at the BP level by means of simulations. The related work delivers valuable insights into two main aspects of the research domain at hand: on the one hand, approaches to technically aggregate SLIs; on the other hand, approaches which deal with the SLA characteristic of being a document, that is, aggregating the single SLAs into one document. But some highly interesting and important issues are not covered. The presented models are bottom-up approaches, whereas, looking at the motivation, our approach is customer-oriented: a customer who wants to facilitate his business processes with IT services delivers the objectives concerning the SLOs of the PROSA to the provider(s). These objectives therefore have to be drilled down to a deep level of technical services – a top-down approach. A bottom-up approach, by contrast, deals with the attributes of technical services and aggregates them bottom-up, which is not suitable for the issues we address. Additionally, the mentioned approaches do not cover both aspects, customer orientation and provider methodology; they are all driven by the providers’ perspective. In summary, current approaches deliver first contributions to the domain of SLA aggregation, but they do not cover the customer as well as the provider perspective in an adequate way. In particular, the motivated customer orientation is not represented as much as required.
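The "mathematical aggregation of SLOs" that the first category of models performs can be sketched minimally. The composition rules below (availability multiplies, response time adds, for services invoked in sequence by one BP) are common illustrative assumptions, not taken from the cited models; the service values are hypothetical.

```python
# Minimal sketch of bottom-up SLO aggregation for a business process (BP)
# that invokes several services in sequence. Assumed composition rules
# (illustrative, not from the cited models): availability composes
# multiplicatively, response time additively.

def aggregate_serial(slas):
    """Aggregate per-service SLOs into a single BP-level SLO."""
    availability = 1.0
    response_time_ms = 0.0
    for sla in slas:
        availability *= sla["availability"]          # every service must be up
        response_time_ms += sla["response_time_ms"]  # latencies add in series
    return {"availability": availability, "response_time_ms": response_time_ms}

# Hypothetical SLAs of three services invoked by one BP.
services = [
    {"availability": 0.999, "response_time_ms": 120},
    {"availability": 0.995, "response_time_ms": 300},
    {"availability": 0.990, "response_time_ms": 80},
]

bp_slo = aggregate_serial(services)
print(bp_slo)  # BP-level availability ~0.984, response time 500 ms
```

This also makes the paper's top-down complaint concrete: a customer states the BP-level target (say, 99% availability), and the provider must invert rules like these to derive admissible per-service SLOs, which the bottom-up models do not address.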
Related Literature. Sequential P4D deals with potential challengers share the logic developed by Xxxxxxxx (1984), but with deterrence investment substituted with P4D deals and the licensing of an authorized generic (AG). Indeed, the strategy of launching an AG via a P4D deal with a challenger is similar to earlier studies that focus on licensing as a strategy to maintain market leadership and/or deter entry. For instance, Xxxxxxx (1984) shows the conditions under which the incumbent licenses its production technology to a potential entrant in exchange for terminating research into competing or better technology, while Xxxxxxx (1990) and Xxxxxxx (1994) provide models in which the incumbent licenses either the weaker competitor or a competitor from outside the industry, so as to crowd the market and discourage stronger competitors from entering. By contrast, in our paper, the generic with the AG license is the de facto strongest competitor to the brand, as it enters before other generics and captures the first-mover advantage. Additionally, instead of a license being introduced before the potential competitor incurs entry costs, in our paper the license is issued and the AG launched only if the next potential entrant has incurred an entry cost (i.e., a litigation cost) and is successful. A significant economic and legal literature builds around the theory of harm and focuses on the legality of pay-for-delay deals (Xxxxxxx, 2003a, Xxxxxx and Xxxxxxx, 2005, Xxxxxxx and Xxxxxxx, 2008, Xxxxx, 2012). Under Xxxxxxx’x antitrust welfare criterion – that a settlement should leave consumers at least as well off as the ongoing patent litigation – a payment that exceeds the expected litigation costs of the licensor is sufficient to establish that consumers lose from the settlement (Xxxxxxx, 2003b, Elhauge and Xxxxxxx, 2012).
In line with this reasoning, several authors have argued that pay-for-delay settlements should carry a presumption of per se anticompetitive behavior (see, for instance, Xxxxxxxxx et al., 2003, Bulow, 2004, Xxxxxxx and Xxxxxxx, 2004, Xxxxxxxx, 2009). Others have pointed out that while the theory of harm is useful, it has limitations and cannot be applied directly to more complex agreements between the parties, or that P4D deals can in fact be pro-competitive in some situations, and hence that such deals should not be per se illegal (Xxxxx, 2002, Xxxxxx and Xxxxxxx, 2004, Xxxxxx et al., 2010, Xxxxxxxx, 2013). For instance, Xxxxxxx and Xxxxxxx (2015) critique Xxxxxxx and Xxxxxxx (2012) and...
Related Literature. Our proposal is not the first to analyze the question of how to induce international collaboration in climate policy. Starting with Xxxxxxx and Xxxxxxx (1997), this literature has used game-theoretic approaches to study the stability of climate policy coalitions under different assumptions. Our work follows in this tradition: abatement is coordinated, and financial transfers are part of our proposal, though they are not explicitly negotiated. In the standard literature, depending on the specific policy setup and assumptions about the behaviour of non-coalition countries, coalitions can be larger or smaller in equilibrium (Ray & Xxxxx, 2001), leading to a positive amount of climate action. However, calibrations typically find the resulting mitigation to fall short of the greenhouse gas emissions cuts required to reach the 1.5 °C objective of the Paris Agreement. Eyckmans and Xxxxxxx (2006), for instance, find resulting warming of close to 4 °C in the most optimistic scenario, using a calibration based on the RICE model. Our proposal differs from this literature in substantive terms: if the unanimity equilibrium is implemented, it leads to Paris-compatible levels of mitigation. (The desire to set total emissions at low levels, thereby reducing climate change damages, countervails the wish of each country to increase its production, and hence its individual emissions.)
Related Literature. Research on debtor-in-possession (DIP) financing began gaining popularity in the mid-1990s, likely due to the emergence of the modern U.S. bankruptcy system in 1978 with the adoption of the Bankruptcy Reform Act. According to Xxxxxx, the 1980s saw an explosion of activity in the junk bond markets, as well as the appearance of leveraged buyouts by then-niche private equity players like KKR and TPG (46). The U.S. as a whole was giving more freedom to the debtor in cases of distress, so corporations felt more comfortable issuing junk bonds to raise funds if they knew there was strong market demand for high yield, and that in a distressed scenario they did not have an obligation to pay down unsecured debt claims at cost if the liquidation value of their firm would not cover the debt (Xxxxxx 44). A short series of financial crises in the 1980s and 1990s – notably Black Monday in 1987, the early-1990s recession in the U.S. after the Iraqi invasion of Kuwait, and, importantly, the dot-com bubble burst in 2000 – may have prompted research into the implications of the new bankruptcy law (44). Initial financial-economic research examined the effect of financial distress, and of subsequent DIP financings, on equity-market reactions using time-series analysis. Xxxxxx and Xxxxx (2001), published in the Journal of Business Finance & Accounting, was one of the first papers to examine the effect of DIP financing on the outcomes of financial distress. The paper sought to investigate the recent explosion in financial distress and tested the interaction between the reception of the DIP and a host of dependent variables, including market reaction and emergence from Chapter 11. It found that equity returns in the two days after the announcement of the DIP were positive and statistically significant, following a worsening market reaction four and five days before the announcement. Additionally, the paper found that the success rate for firms that receive DIP financing is 87.50%, compared to 71.25% for firms that do not. With regard to bankruptcy duration, a variable I intend to regress, Xxxxxx and Xxxxx found that the reception of the DIP reduced the length of time in bankruptcy by 98 days, significant at the 10% level. These results were adjusted to incorporate the size of the DIP, but while the size of the DIP changes inter-group time in bankruptcy, controlling for the size effect does not change the results between DIP- and non-DIP-financed firms. However, the aut...
Related Literature. 3.1 Stability and Renegotiation Proofness. Although our notion is not one of renegotiation-proofness, its connection to various theories of renegotiation-proofness is evident. First, both our theory and the notions of renegotiation-proofness allow for coalitional deviations, although renegotiation-proofness restricts coalitional deviations to those of the grand coalition. Secondly, the notion of stable agreements is defined by applying the notion of stability originated by xxx Xxxxxxx and Xxxxxxxxxxx (1944) and extended by Xxxxxxxxx (1990); the theories of renegotiation-proofness exhibit various attempts to apply the notion of stability. As Xxxxxxxxxx (1992) wrote, “... the renegotiation literature (as well as the new approach suggested by Xxxxxxxxx (1990)) is returning to the internal and external consistency ideas suggested by xxx Xxxxxxx and Xxxxxxxxxxx (1944).” For example, the (weak) renegotiation-proofness of Xxxxxxxx and Xxx (1989) and Xxxxxxx and Xxxxxx (1989) imposes a version of internal stability (stronger than ours), while Xxxxxx’x (1991) Pareto perfect equilibrium also imposes external stability in addition to the same internal stability as Xxxxxxxx and Xxx (1989) and Xxxxxxx and Xxxxxx (1989). However, the notions of renegotiation-proofness can be criticized for taking the Pareto criterion too far, as discussed in the introduction. They stipulate that the grand coalition will renegotiate and abandon a punishment whenever a Pareto-dominating equilibrium is available, even though the latter equilibrium may rely on punishments that are just as severe. Implicitly, a deviating player counts too heavily on renegotiation. Our notion explores a natural extension of the uncertainty aversion on the part of players embedded in the notion of subgame perfection, and can be viewed as the weakest notion that accounts for coalitional deviations.
For two-player games, it is easy to see that an efficient (weakly) renegotiation-proof equilibrium also belongs to the set of stable agreements for N. However, we do recognize the importance and relevance of renegotiation in formalizing notions stronger than ours. In a future project, we shall extend our analysis to account for credible renegotiation. 3.2 Perfectly Coalition-Proof Xxxx Equilibrium and Strong Perfect Equilibrium. Xxxxxxxx, Xxxxx, and Xxxxxxxx (1987) applied their coalition-proof Xxxx equilibrium to dynamic games with finite horizon and proposed the notion of perfectly coalition-proof Xxxx...
Related Literature. This chapter presents a review of the literature relevant to this thesis in two parts. The first section provides a brief summary of the literature on DSGE modeling, the modeling approach employed in this thesis. Section 2 then briefly describes the empirical literature on the estimation of the degree of interest-rate pass-through to the loan rate. This line of literature chiefly motivates the use of the staggered loan contract mechanism, the main ingredient of the theoretical model explained in detail in Chapter 3.
Related Literature. Our paper is related to a number of literatures. There is an extensive literature on the costs of unproductive activities such as rent seeking, conflict, and influence activities. Our focus, however, is not on the direct efficiency costs generated by these unproductive activities, but on the indirect effects that they have in preventing ex-ante cooperation. More specifically, we show that the diversity in opportunities or endowments between the parties who interact repeatedly can increase the magnitude of the endogenous externalities generated by an agreement.
Related Literature. We draw on multiple streams of work that have largely evolved independently of one another. We first discuss related work on the dynamics of consumers’ social media activity. We then discuss research in marketing that has delved into user-generated content with a focus on text-analytics methods. Finally, we discuss the limited research that has examined textual dynamics, and the marketing literature on which we draw to develop a discrete-state model of social media content. While prior research has examined temporal patterns in social media activity, much of this stream has focused on metrics such as volume and sentiment. For example, Godes and Xxxxx (2012) use product-level data to investigate the temporal and sequential evolution of online product ratings. Using individual-level data on online product reviews, Moe and Schweidel (2012) model a user’s decision of whether or not to contribute a review, as well as the sentiment of the review, and demonstrate dynamics in users’ incidence and evaluation decisions arising from heterogeneity across users. Schweidel and Moe (2014) also document the presence of dynamics in the sentiment expressed and in the venue to which social media posts are contributed. Research has also viewed product reviews as a means by which early purchasers may provide potential buyers with more information than was initially available. Kuksov and Xie (2010) examine the impact that product reviews may have on the firm’s pricing decisions. Sun (2012) shows that high variance in previously contributed reviews can provide information to consumers when the average rating is low, as it may indicate that the product appeals to some customers but not all. Moe and Trusov (2011) also examine how previously contributed reviews affect sales, decomposing the effects of previous reviews into a direct effect on sales and an indirect effect through their impact on subsequent reviews.
Understanding the dynamics present in social media activity is essential to maintaining the brand, sensing the market, and managing customer relationships. Schweidel and Moe (2014) demonstrate that the analysis of social media data can yield a measure of brand health that is a leading indicator of survey-based metrics. Looking at how brand perceptions within an entire industry may shift, Xxxxx and Xxxxxx (2016) investigate the dynamics surrounding social media conversations following product recalls and find evidence of negative spillover effe...