Common use of INDEPENDENT OFFER ANALYSES Clause in Contracts

INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation process. The two sets of valuations generally correlated well, with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations.

[Figure 3: LCBF valuation, $/MWh vs. IE valuation, $/MWh]

Xxxxxx did not use its simplified model to construct a separate short list. Instead, the simplified model was useful in quality control to identify errors in PG&E’s or the IE’s inputs, parameters, or assumptions for specific Offers. The comparison also helped identify which specific factors caused particular Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of TRCR or transmission wheeling adders. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division version of the Project Viability Calculator. This was useful for estimating the Calculator’s standard error and for gauging whether differences in score reflect significant differences in the viability of projects or merely fall within the noise of the method. Xxxxxx emerged from the comparison (shown in Figure 4) with the view that differences of a dozen or fewer points in viability score may not indicate that one project is significantly likelier than another to achieve successful completion, given the roughness of the tool and the subjectivity of its use.

[Figure 4: PG&E viability score vs. IE viability score]

The correlation of the IE’s and PG&E team’s scores using the Project Viability Calculator is poorer than that between the valuation models. Xxxxxx ascribes this to the gray areas in the scoring guidelines, to differences in the subjective judgments of individual scorers, and to PG&E’s use of an additional evaluation criterion in its modified Calculator. The comparison between the sets of scores helped reveal specific errors that Xxxxxx acknowledged in its draft scores and corrected, but no doubt there are other errors in Xxxxxx’x viability scoring that have not yet been identified.
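The quality-control use of the second valuation described above amounts to pairing each Offer’s two $/MWh figures, measuring how well they track, and reviewing the outliers. The Python sketch below illustrates that cross-check under stated assumptions: the offer names, dollar figures, and the $10/MWh review threshold are invented for the example, not drawn from the RFO record.

```python
# Minimal sketch of the cross-check described above: correlate the utility's
# LCBF valuations with the IE's simplified-model valuations, then flag Offers
# whose gap is wide enough to warrant a review of inputs and assumptions.
# All names, dollar figures, and the threshold are illustrative assumptions.
from statistics import correlation  # Python 3.10+

lcbf = {"Offer A": 92.0, "Offer B": 74.5, "Offer C": 110.3, "Offer D": 68.0}
ie   = {"Offer A": 95.1, "Offer B": 71.9, "Offer C":  88.6, "Offer D": 70.2}

r = correlation([lcbf[k] for k in lcbf], [ie[k] for k in lcbf])
print(f"Pearson r between valuation sets: {r:.2f}")

# A wide gap triggers a check of the inputs behind that Offer: discount
# rate, on-line date, assigned transmission cluster, TRCR/wheeling adders.
THRESHOLD = 10.0  # assumed $/MWh review band
for name in lcbf:
    gap = lcbf[name] - ie[name]
    if abs(gap) > THRESHOLD:
        print(f"{name}: LCBF-IE gap {gap:+.1f} $/MWh -- review inputs")
```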

Appears in 5 contracts

Samples: www.pge.com, www.pge.com, www.pge.com


INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation analysis. PG&E’s and Xxxxxx’x valuations generally correlated well for many Offers, but with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations. Some of the differences between the valuations include:

• Less value assigned to Resource Adequacy in the independent assessment, which tends to lower the value ranking of projects with the most estimated Net Qualifying Capacity, such as solar generation;
• Less value assigned to projects interconnecting in non-CAISO balancing authority areas;
• Less of a premium assigned to projects with later CODs or longer delivery terms.

This comparison was useful in quality control to identify errors in PG&E’s or the IE’s input parameters or assumptions for specific Offers. The comparison also helped identify which specific factors caused particular Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of the TRCR or transmission wheeling adder. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division version of the Project Viability Calculator. This was useful for estimating the Calculator’s standard error and for gauging whether differences in score reflect significant differences in the viability of projects or merely fall within the noise of the method. Xxxxxx emerged from the comparison (shown in Figure 4) with the view that differences of a dozen or fewer points in viability score may not reflect significant differences in the likelihood that one project will succeed in attaining commercial operation on schedule, given the modest precision of the tool and the subjectivity of its use.

[Figure 3: Comparison of PG&E and IE valuations; PG&E valuations, $/MWh vs. IE valuations, $/MWh]

Some of the differences between the viability scores include:

• Lower IE scores for projects proposing very large solar photovoltaic facilities;
• Lower IE scores for projects from developers with experience only in distributed generation (e.g. beyond-the-meter) projects rather than wholesale generation;
• Lower IE scores for projects for which specific network upgrades are as yet poorly characterized.
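Each of the valuation-side differences bulleted in the clause above shifts an Offer’s net value in a predictable direction. The sketch below shows how such adjustments might be applied to a utility-side valuation; every number, weight, and offer attribute is an illustrative assumption, not either party’s actual model.

```python
# Sketch of the valuation-side differences bulleted above: start from a
# shared base value and apply IE-style adjustments (less Resource Adequacy
# value, an adder for non-CAISO interconnection, a smaller premium for late
# CODs). All figures are illustrative assumptions.

def ie_adjusted_value(base, ra_value, non_caiso, late_cod_premium):
    """Apply assumed IE-side adjustments to a utility-side $/MWh valuation."""
    value = base
    value -= 0.5 * ra_value          # assume IE credits only half the RA value
    if non_caiso:
        value -= 4.0                 # assumed network-upgrade adder, $/MWh
    value -= 0.5 * late_cod_premium  # assume IE halves the late-COD premium
    return value

offers = [
    # (name, utility value $/MWh, RA value, non-CAISO?, late-COD premium)
    ("High-NQC solar",      96.0, 12.0, False, 0.0),
    ("Wind, non-CAISO BAA", 90.0,  4.0, True,  0.0),
    ("Late-COD project",    93.0,  5.0, False, 6.0),
]
for name, base, ra, non_caiso, premium in offers:
    adj = ie_adjusted_value(base, ra, non_caiso, premium)
    print(f"{name}: utility {base:.1f} -> IE {adj:.1f} $/MWh")
```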

Appears in 4 contracts

Samples: www.pge.com, www.pge.com, www.pge.com

INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation process. The two sets of valuations generally correlated well, with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations.

[Figure 3: LCBF valuation, $/MWh vs. IE valuation, $/MWh]

Xxxxxx did not use its simplified model to construct a separate short list. Instead, the simplified model was useful in quality control to identify errors in PG&E’s or the IE’s inputs, parameters, or assumptions for specific Offers. The comparison also helped identify which specific factors caused particular Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of TRCR or transmission wheeling adders. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division version of the Project Viability Calculator. This was useful for estimating the Calculator’s standard error and for gauging whether differences in score reflect significant differences in the viability of projects or merely fall within the noise of the method. Xxxxxx emerged from the comparison (shown in Figure 4) with the view that differences of a dozen or fewer points in viability score may not indicate that one project is significantly likelier than another to achieve successful completion, given the roughness of the tool and the subjectivity of its use.

[Figure 4: PG&E viability score vs. IE viability score]

The correlation of the IE’s and PG&E team’s scores using the Project Viability Calculator is poorer than that between the valuation models. Xxxxxx ascribes this to the gray areas in the scoring guidelines, to differences in the subjective judgments of individual scorers, and to PG&E’s use of an additional evaluation criterion in its modified Calculator. The comparison between the sets of scores helped reveal specific errors that Xxxxxx acknowledged in its draft scores and corrected, but no doubt there are other errors in Xxxxxx’x viability scoring that have not yet been identified.
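A rough way to put a number on the “noise of the method” discussed above is to treat the two teams’ scores as paired measurements of the same projects and take the spread of the paired differences as the tool’s standard error. A minimal sketch under illustrative assumptions follows; the scores are invented, not actual RFO scores.

```python
# Sketch of the noise question above: if two scorers apply the Project
# Viability Calculator to the same projects, the spread of their paired
# score differences gives a rough standard error for the tool, and a
# yardstick for whether a dozen-point gap is meaningful.
# The scores below are illustrative assumptions, not actual RFO scores.
from statistics import mean, stdev

pge_scores = [78, 64, 85, 59, 72, 90, 67, 81]
ie_scores  = [71, 69, 80, 63, 66, 84, 73, 77]

diffs = [p - i for p, i in zip(pge_scores, ie_scores)]
print(f"mean paired difference: {mean(diffs):+.1f} points")
print(f"spread of differences:  {stdev(diffs):.1f} points (std. dev.)")

# Under these assumptions, a gap between two projects' scores smaller than
# roughly twice the spread could be scorer noise rather than a real
# difference in the likelihood of successful completion.
print(f"gaps under ~{2 * stdev(diffs):.0f} points may be within the noise")
```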

Appears in 4 contracts

Samples: Purchase Agreement, www.pge.com, www.pge.com

INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation analysis. Xxxxxx’x valuations generally correlated well with PG&E’s Net Market Value analysis for many Offers, but with a fair amount of noise in the comparison, as shown in Figure 9, which compares the two sets of valuations. The mediocre quality of the correlation is less interesting than the outliers and the underlying reasons for some of the divergences:

[Figure 9: Scattergram of valuations; PG&E valuation of Net Market Value vs. IE model valuation]

• PG&E assigned a higher value to new projects interconnecting in non-CAISO balancing authority areas because no transmission adders are applied; Xxxxxx estimates an adder for network upgrades for these projects. This is most clearly seen in the two shortlisted projects interconnecting into IID’s grid.
• PG&E assigned network upgrade costs to projects for an interconnection even if the developer reports that the costs will be borne by another project using a share of the interconnection capacity, on the logic that the costs should still be allocated to the project making an Offer.
• Some scatter is due to the difference in discount rates applied to future years’ cash flows; PG&E uses its own authorized weighted cost of capital as a regulated utility, while Xxxxxx uses a higher estimate of merchant generators’ cost of capital.

The adjustments have a considerable impact on the value rankings of Offers. Figure 10 shows a plot of Offers’ NMV vs. PAV, showing visually how for some Offers the adjustments can reduce the PAV by as much as [redacted], substantially altering their ranking.

[Figure 10]

Overall, if Xxxxxx had used its simplified valuation and viability scores to identify high-value candidates for selection, more Offers in SP-15 would have been chosen, including more existing geothermal and wind projects. Fewer Offers in NP-15 would have been chosen, and projects that Xxxxxx scored below median for project viability would have been rejected. This simply reflects the strength of PG&E’s preference for projects in its own service territory, its disinterest in counting IID network upgrade costs that do not directly affect PG&E’s rates, and its greater willingness to select lower-viability proposals. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the final version of the 2011 Project Viability Calculator, anticipating a later need to rank projects that obtain executed PPAs against a peer group made up of all RFO proposals.
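The discount-rate divergence noted in the third bullet above is straightforward to quantify: the same stream of contract cash flows is worth less at a merchant cost of capital than at a regulated utility’s authorized WACC, which can be enough to reorder close Offers. A sketch with assumed rates, term, and cash flows; none of the figures come from the RFO record.

```python
# Sketch of the discount-rate effect noted above: identical contract cash
# flows shrink more under an assumed merchant cost of capital than under
# an assumed regulated-utility WACC. Rates, term, and cash flows are
# illustrative assumptions.

def present_value(rate, cashflows):
    """Discount year-end cash flows at a constant annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

TERM_YEARS = 20
annual_net_value = [1.5e6] * TERM_YEARS  # assumed net value per year, $

utility_wacc = 0.075  # assumed authorized utility weighted cost of capital
merchant_coc = 0.105  # assumed merchant generator cost of capital

pv_utility  = present_value(utility_wacc, annual_net_value)
pv_merchant = present_value(merchant_coc, annual_net_value)
print(f"PV at utility WACC:  ${pv_utility:,.0f}")
print(f"PV at merchant rate: ${pv_merchant:,.0f}")
print(f"merchant-rate PV is {100 * (1 - pv_merchant / pv_utility):.0f}% lower")
```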

Appears in 1 contract

Samples: www.pge.com


INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation process. The two sets of valuations generally correlated well, with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations.

[Figure 3: LCBF valuation, $/MWh vs. IE valuation, $/MWh]

Xxxxxx did not use its simplified model to construct a separate short list. Instead, the simplified model was useful in quality control to identify errors in PG&E’s or the IE’s inputs, parameters, or assumptions for specific Offers. The comparison also helped identify which specific factors caused particular Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of TRCR or transmission wheeling adders. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division’s version of the Project Viability Calculator and not PG&E’s modified version. This was useful for estimating the Calculator’s standard error and for gauging whether differences in score reflect significant differences in the viability of projects or merely fall within the noise of the method. Xxxxxx emerged from the comparison (shown in Figure 4) with the view that differences of a dozen or fewer points in viability score may not indicate that one project is significantly likelier than another to achieve successful completion, given the modest precision of the tool and the subjectivity of its use.

[Figure 4: PG&E viability score vs. IE viability score]

The correlation of the IE’s and PG&E team’s scores using the Project Viability Calculator is poorer than that between the valuation models. Xxxxxx ascribes this to the gray areas in the scoring guidelines, to differences in the subjective judgments of individual scorers, and to PG&E’s use of an additional evaluation criterion in its modified Calculator. The comparison between the sets of scores helped reveal specific errors that Xxxxxx acknowledged in its draft scores and corrected, but no doubt there are other errors in Xxxxxx’x viability scoring that have not yet been identified.
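One reason the clause gives for the poorer score correlation is PG&E’s additional evaluation criterion in its modified Calculator. The sketch below shows mechanically how an extra weighted criterion drives identical underlying judgments to different totals; the criterion names, weights, and ratings are hypothetical, not the actual Calculator’s.

```python
# Sketch of why an added criterion depresses score correlation: the IE
# scores on the original criteria while the modified Calculator mixes in
# one more, so identical judgments on the shared criteria still yield
# different totals. Criterion names and weights are hypothetical.

ORIGINAL_WEIGHTS = {"company": 0.30, "technology": 0.30, "development": 0.40}
EXTRA = {"interconnection_progress": 0.15}  # hypothetical added criterion

def weighted_score(ratings, weights):
    """0-100 weighted score; weights are renormalized to sum to one."""
    total = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total

ratings = {"company": 80, "technology": 70, "development": 60,
           "interconnection_progress": 30}

ie_score  = weighted_score(ratings, ORIGINAL_WEIGHTS)
pge_score = weighted_score(ratings, {**ORIGINAL_WEIGHTS, **EXTRA})
print(f"IE (original Calculator):   {ie_score:.1f}")   # 69.0
print(f"PG&E (modified Calculator): {pge_score:.1f}")  # 63.9
```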

Appears in 1 contract

Samples: www.pge.com

INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation analysis. PG&E’s and Xxxxxx’x valuations generally correlated well for many Offers, but with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations. Some of the differences between the valuations include:

• Less value assigned to Resource Adequacy in the independent assessment, which tends to lower the value ranking of projects with the most estimated Net Qualifying Capacity, such as solar generation;
• Less value assigned to projects interconnecting in non-CAISO balancing authority areas;
• Less of a premium assigned to projects with later CODs or longer delivery terms.

This comparison was useful in quality control to identify errors in PG&E’s or the IE’s input parameters or assumptions for specific Offers. The comparison also helped identify which specific factors caused particular Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of the TRCR or transmission wheeling adder. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division version of the Project Viability Calculator. This was useful for estimating the Calculator’s standard error and for gauging whether differences in score reflect significant differences in the viability of projects or merely fall within the noise of the method. Xxxxxx emerged from the comparison (shown in Figure 4) with the view that differences of a dozen or fewer points in viability score may not reflect significant differences in the likelihood that one project will succeed in attaining commercial operation on schedule, given the modest precision of the tool and the subjectivity of its use.

[Figure 3: Comparison of PG&E and IE valuations; PG&E valuations, $/MWh vs. IE valuations, $/MWh]

Some of the differences between the viability scores include:

• Lower IE scores for projects proposing very large solar photovoltaic facilities;
• Lower IE scores for projects from developers with experience only in distributed generation (e.g. beyond-the-meter) projects rather than wholesale generation;
• Lower IE scores for projects for which specific network upgrades are as yet poorly characterized.

Appears in 1 contract

Samples: www.pge.com
