Selected metrics Sample Clauses

Selected metrics. Quality of service expresses how effective and useful a service is. Our concern here is the promise of a minimum quality: providers guarantee an acceptable level of service to their customers. Such promises are typically expressed as bounds on metrics and form one of the most important aspects of SLAs. For example, an SLA clause could state that the probability of a file becoming lost or corrupted after a year of retention must stay under 0.01%. The following table is an extract of the full table reported in the appendix.

QS-01 Availability
Description: the guarantee that the service will be available (up and exploitable) at least as much as agreed. This quantity can be measured as a percentage of time (e.g. 99% of the time) or as a percentage of usage attempts (e.g. 98% of the times one tries, the service is usable).
Ref. metrics: ME-01
Monitoring criterion (bounds): availability should never go below a specific threshold; there may be more than one threshold (e.g. for business hours and night).

QS-02 Integrity
Description: the guarantee that the ingested A/V and metadata contents have been preserved at an agreed quality level (assessed, for example, with PSNR). These probabilities have to be normalised over the amount of data and the retention time.
Ref. metrics: ME-02, ME-03
Monitoring criterion (bounds): integrity should never go below a specific threshold.

QS-03 SIP ingestion time
Description: one of the most important parameters perceived by a user when submitting a new SIP (or even updating one) is the total elapsed time from the SIP submission to the confirmation from the system that everything has been correctly acquired. This includes:
- the time necessary for the upload transfer of the package
- the time necessary to extract, validate, index and transform the SIP into an internal representation (AIP)
Ref. metrics: ME-05
Monitoring criterion (bounds): SIP ingestion time should never go above a specific threshold; there may be more than one threshold (e.g. for business hours and night). It can be given as a percentage, e.g. 90% of deliveries are completed under threshold 1 and the rest under threshold 2.

QS-05 DIP delivery time
Description: one of the most important parameters perceived by a user when asking for some material (media + metadata packaged in a DIP) is the total elapsed time from the request to the complete and correct reception of the package. This time includes:
- the time necessary to extract and prepare materials with a coherent DIP wrapper
- if necessary, the time for recovering a corrupted file...
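Bounds of this kind can be checked mechanically once measurements exist. Below is a minimal sketch, in Python, of how the QS-01 and QS-03 criteria could be evaluated; the sample values, thresholds and the nearest-rank percentile helper are illustrative assumptions, not part of the clause itself.

```python
# Minimal sketch of checking SLA bounds such as QS-01 and QS-03.
# All measurement values and thresholds below are illustrative.

def availability(up_checks: int, total_checks: int) -> float:
    """Availability as the percentage of successful usage attempts (QS-01)."""
    return 100.0 * up_checks / total_checks

def percentile(values, q: float) -> float:
    """q-th percentile (0-100) of a list of measurements, nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(q / 100.0 * len(ordered)))
    return ordered[rank - 1]

# QS-01: availability should never go below the agreed threshold.
assert availability(up_checks=9890, total_checks=10000) >= 98.0

# QS-03: e.g. 90% of ingestions under threshold 1, the rest under threshold 2.
ingest_seconds = [12.0, 15.5, 9.8, 40.2, 11.1, 13.7, 10.4, 14.9, 12.6, 55.0]
threshold_1, threshold_2 = 60.0, 120.0  # illustrative bounds
assert percentile(ingest_seconds, 90) <= threshold_1
assert max(ingest_seconds) <= threshold_2
```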
Selected metrics. For this trial the following metrics make sense:
● number of duplicates found
● number of improved issues found
● number of issue groups identified
● quality defects found in issues (e.g., bad wording, missing labels)
● number of requirement reviewers correctly identified
● number of items dropped in the release planning phase (fewer is good and shows better planning)
● number of items taken into a release during the feature freeze period, in which items can be added by maintainer agreement (fewer is good, and shows better planning and decision making)
In the case of the OpenReq infrastructure, the performance of the provided inferences shall be measured; response time, for example, is an essential metric, as illustrated by the sketch below.
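As a concrete illustration of the response-time metric mentioned above, here is a minimal sketch that times repeated calls to an inference service; the endpoint URL and payload are hypothetical placeholders, not actual OpenReq API details.

```python
# Minimal sketch of measuring inference response time.
# The endpoint and payload are hypothetical placeholders.
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:9000/detect-duplicates"  # hypothetical

def timed_request(url: str, payload: bytes) -> float:
    """Return the elapsed wall-clock time of one inference call, in seconds."""
    start = time.perf_counter()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as response:
        response.read()
    return time.perf_counter() - start

samples = [timed_request(ENDPOINT, b'{"issues": []}') for _ in range(20)]
print("median response time:", statistics.median(samples), "s")
```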
Selected metrics.
Metrics for service: Extract requirement candidates from English text
● Quantity of automatically identified requirements
● Precision and recall of automatically identified requirements w.r.t. the experts' reference
● Serves goal IDENTIFY
Metrics for service: Classify a requirement candidate as requirement or prose
● Quantity of correctly classified requirements (TP)
● Precision of automatically identified requirements w.r.t. the experts' classification
● Recall of automatically identified requirements w.r.t. the experts' classification
● Serves goal IDENTIFY
Metrics for service: Suggest one or more ontology concepts (categories) for a requirement
● Quantity of correctly assigned stakeholders in total (TP)
● Quantity of completely correctly assigned requirements
● Precision of automatically assigned requirements w.r.t. the experts' assignments
● Recall of automatically assigned requirements w.r.t. the experts' assignments
● Remark: categories cannot be evaluated directly, as there are no real test data for categories (only for the stakeholders responsible for such categories); however, test data could be added by an expert
● Serves goals REUSE, EXPENSES
Metrics for service: Rate the quality of a requirement
● Presently out of focus, as we cannot influence the quality of tender documents
● In future, evaluation could be done after identifying or injecting "bad quality requirements"
Metrics for service: Decide whether two requirements are similar, i.e. cover the same contents (e.g. different contents because of different context despite very similar wording, such as "maximal temperature in hardware room …" vs. "maximal temperature of hardware module
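The category-suggestion metrics distinguish per-assignment counts ("correctly assigned ... in total") from whole-requirement counts ("completely correctly assigned"). Here is a minimal sketch of the two counting schemes; the requirement IDs, concepts and assignments are made-up data.

```python
# Minimal sketch of the two counting schemes above, on made-up data:
# per-assignment true positives vs. completely correct requirements.

# Expert reference and tool output: requirement id -> set of assigned concepts.
expert = {"R1": {"safety", "hvac"}, "R2": {"network"}, "R3": {"power"}}
tool   = {"R1": {"safety", "hvac"}, "R2": {"network", "power"}, "R3": set()}

tp = sum(len(expert[r] & tool[r]) for r in expert)    # correct assignments in total
exact = sum(expert[r] == tool[r] for r in expert)     # completely correct requirements

print(f"assignments correct in total (TP): {tp}")               # 3
print(f"completely correctly assigned requirements: {exact}")   # 1
```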
Selected metrics. In general, the following types of metrics are based on the Score Model Approach:
● Quantity: natural number
● True positives (TP): natural number (number of correct "yes"-decisions of the tool w.r.t. the expert decision)
● False positives (FP): natural number (number of the tool's "yes"-decisions where the experts decided "no")
● False negatives (FN): natural number (number of the tool's "no"-decisions which should have been "yes")
● Precision: percentage (ratio of correct "yes"-decisions to all "yes"-decisions of the tool = TP / (TP + FP))
● Recall: percentage (ratio of correct "yes"-decisions to all "yes"-decisions of the experts = TP / (TP + FN))
● Averages of such values
Metrics for service: Extract requirement candidates from Italian text
● Quantity of automatically identified requirements
● Precision and recall of automatically identified requirements w.r.t. the experts' reference
Metrics for service: Classify a requirement candidate as requirement or prose
● Quantity of correctly classified requirements (TP)
● Precision of automatically identified requirements w.r.t. the experts' classification
● Recall of automatically identified requirements w.r.t. the experts' classification
Metrics for service: Suggest one or more ontology concepts (categories) for a requirement
● Quantity of completely correctly assigned requirements
● Precision of automatically assigned requirements w.r.t. the experts' assignments
● Recall of automatically assigned requirements w.r.t. the experts' assignments
Metrics for service: Decide whether two requirements are similar, i.e. cover the same contents
● Quantity of similar requirements from Social Network Data
● Precision and recall of automatically identified similarities w.r.t. the experts' reference
Metrics for service: Decide whether two (similar) requirements are contradicting
● Precision and recall of automatically identified contradictions (for all pairs) w.r.t. the experts' reference
Metrics for service: Decide whether two (similar) requirements are redundant (same contents, or one subsuming the other)
● Precision and recall of automatically identified equivalences (for all pairs) w.r.t. the experts' reference
● Precision and recall of automatically identified subsumptions (for all pairs) w.r.t. the experts' reference
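As a worked illustration of the definitions above, the following sketch derives TP, FP, FN, precision and recall from parallel tool and expert decisions; the decision lists are made-up data, not results from any of the services listed.

```python
# Minimal sketch of the Score Model counts above, applied to a
# similarity-detection service; the decision lists are made-up data.

# For each candidate requirement pair: did the tool / the experts say "similar"?
tool_says   = [True, True, False, True, False, False]
expert_says = [True, False, False, True, True, False]

tp = sum(t and e for t, e in zip(tool_says, expert_says))      # correct "yes"
fp = sum(t and not e for t, e in zip(tool_says, expert_says))  # tool "yes", expert "no"
fn = sum(e and not t for t, e in zip(tool_says, expert_says))  # tool "no", expert "yes"

precision = tp / (tp + fp)  # TP / (TP + FP)
recall    = tp / (tp + fn)  # TP / (TP + FN)
print(f"TP={tp} FP={fp} FN={fn} precision={precision:.0%} recall={recall:.0%}")
```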

Related to Selected metrics

  • Performance Measures and Metrics This section outlines the performance measures and metrics upon which service under this SLA will be assessed. Shared Service Centers and Customers will negotiate the performance metric, frequency, customer and provider service responsibilities associated with each performance measure. Measurements of the Port of Seattle activities are critical to improving services and are the basis for cost recovery for services provided. The Port of Seattle and The Northwest Seaport Alliance have identified activities critical to meeting The NWSA’s business requirements and have agreed upon how these activities will be assessed.

  • Selection Criteria for Awarding Task Order The Government will award to the offeror whose proposal is deemed most advantageous to the Government based upon an integrated assessment using the evaluation criteria. The Government will evaluate proposals against established selection criteria specified in the task order RFP. Generally, the Government's award decision will be based on selection criteria which address past performance, technical acceptability, proposal risk and cost. Among other sources, evaluation of past performance may be based on past performance assessments provided by TO Program Managers on individual task orders performed throughout the life of the contract. The order of importance for the factors will be identified in the RFP for the specified task order.

  • Performance Indicators The HSP's delivery of the Services will be measured by the following Indicators, Targets and, where applicable, Performance Standards. In the following table: n/a means 'not applicable', i.e. there is no defined Performance Standard for the indicator for the applicable year; tbd means a Target, and a Performance Standard if applicable, will be determined during the applicable year. Indicator types: P = Performance Indicator, E = Explanatory Indicator, M = Monitoring Indicator. Values are the 2019/20 Performance Target and Standard.

Organizational Health and Financial Indicators
- Debt Service Coverage Ratio (P): Target 1; Standard c1
- Total Margin (P): Target 0; Standard c0

Coordination and Access Indicators
- Percent Resident Days – Long Stay (E): Target n/a; Standard n/a
- Wait Time from LHIN Determination of Eligibility to LTC Home Response (M): Target n/a; Standard n/a
- Long-Term Care Home Refusal Rate (E): Target n/a; Standard n/a

Quality and Resident Safety Indicators
- Percentage of Residents Who Fell in the Last 30 Days (M): Target n/a; Standard n/a
- Percentage of Residents Whose Pressure Ulcer Worsened (M): Target n/a; Standard n/a
- Percentage of Residents on Antipsychotics Without a Diagnosis of Psychosis (M): Target n/a; Standard n/a
- Percentage of Residents in Daily Physical Restraints (M): Target n/a; Standard n/a

2.0 LHIN-Specific Performance Obligations

  • Performance indicators and targets The purpose of the innovation performance indicators and targets is to assist the University and the Commonwealth in monitoring the University's progress against the Commonwealth's objectives and the University's strategies for innovation. The University will report principal performance information and aim to meet the innovation performance indicators and targets set out in the following tables.

  • STATEWIDE ACHIEVEMENT TESTING When CONTRACTOR is an NPS, per implementation of Senate Bill 484, CONTRACTOR shall administer all Statewide assessments within the California Assessment of Student Performance and Progress ("CAASPP"), Desired Results Developmental Profile ("DRDP"), California Alternative Assessment ("CAA"), achievement and abilities tests (using LEA-authorized assessment instruments), and the Fitness Gram, with the exception of the English Language Proficiency Assessments for California ("ELPAC") to be completed by the LEA, as appropriate to the student, and as mandated by XXX xxxxxxxx to LEA and state and federal guidelines. CONTRACTOR is subject to the alternative accountability system developed pursuant to Education Code section 52052, in the same manner as public schools. Each LEA student placed with CONTRACTOR by the LEA shall be tested by qualified staff of CONTRACTOR in accordance with that accountability program. XXX shall provide test administration training to CONTRACTOR'S qualified staff. CONTRACTOR shall attend LEA test training and comply with completion of all coding requirements as required by XXX.

  • Attainment on Performance Indicators The District will be responsible for overseeing the academic programs offered in its schools and ensuring that those programs meet or exceed state and local expectations for levels of attainment on the statewide performance indicators, as specified in 1 CCR 301-1.

  • Health Promotion Incentives The Joint Labor-Management Committee on Health Plans shall develop a program which provides incentives for employees who participate in a health promotion program. The health promotion program shall emphasize the adoption and maintenance of more healthy lifestyle behaviors and shall encourage wiser usage of the health care system.

  • Long Term Cost Evaluation Criterion 4. READ CAREFULLY and see in the RFP document under "Proposal Scoring and Evaluation". Points will be assigned to this criterion based on your answer to this Attribute. Points are awarded if you agree not to increase your catalog prices (as defined herein) more than X% annually over the previous year for the life of the contract, unless an exigent circumstance exists in the marketplace and the excess price increase which exceeds X% annually is supported by documentation provided by you and your suppliers and shared with TIPS, if requested. If you agree NOT to increase prices more than 5%, except when justified by supporting documentation, you are awarded 10 points; if 6% to 14%, except when justified by supporting documentation, you receive 1 to 9 points incrementally. Price increases 14% or greater, except when justified by supporting documentation, receive 0 points. Increases will be 5% or less annually per question.

Required Confidentiality Claim Form This completed form is required by TIPS. By submitting a response to this solicitation you agree to download the form from the "Attachments" section, complete it according to the instructions on the form, then upload the completed form, with any confidential attachments, if applicable, to the "Response Attachments" section titled "Confidentiality Form" in order to provide to TIPS the completed form titled "CONFIDENTIALITY CLAIM FORM". By completing this process, you provide us with the information we require to comply with the open record laws of the State of Texas as they may apply to your proposal submission. If you do not provide the form with your proposal, an award will not be made if your proposal is qualified for an award, until TIPS has an accurate, completed form from you. Read the form carefully before completing and if you have any questions, email Xxxx Xxxxxx at TIPS at xxxx.xxxxxx@xxxx-xxx.xxx

8 Choice of Law clauses with TIPS Members If the vendor is awarded a contract with TIPS under this solicitation, the vendor agrees to make any Choice of Law clauses in any contract or agreement entered into between the awarded vendor and a TIPS member entity read as follows: "Choice of law shall be the laws of the state where the customer resides" or words to that effect.

  • Metrics The DISTRICT and PARTNER will partake in monthly coordination meetings at mutually agreed upon times and dates to discuss the progress of the program Scope of Work. DISTRICT and PARTNER will also mutually establish criteria and a process for ongoing program assessment/evaluation such as, but not limited to, the DISTRICT's assessment metrics and other state metrics (Measures of Academic Progress – English, SBAC – 11th grade, Redesignation Rates, mutually developed rubric scores, student attendance, and Social Emotional Learning (SEL) data). The DISTRICT and PARTNER will also engage in an annual review of program content to ensure standards alignment that complies with DISTRICT-approved coursework. The PARTNER will provide their impact data based upon these metrics.

  • Using Student feedback in Educator Evaluation ESE will provide model contract language, direction and guidance on using student feedback in Educator Evaluation by June 30, 2013. Upon receiving this model contract language, direction and guidance, the parties agree to bargain with respect to this matter.
