Performance Improvements Sample Clauses
Performance Improvements. (a) The parties are committed to working for the achievement of productivity improvements at the site level. Transport employees at each plant will participate in joint site consultative processes that have the objective of improving productivity. Such site consultative processes would include consultative committees and work groups. These consultative processes will support measures that will make positive progress in the Key Performance Indicators (KPIs) at each site. The areas covered by KPIs will include but not be limited to:
• Hours (paid) per 1000 litres collected.
• Kilometres travelled per 1000 litres collected.
• Average litres per tanker per shift (averaged across the site fleet).
• Conformance to schedule:
⮚ Time
⮚ Sequence
⮚ Kilometres travelled.
• Average fuel economy (averaged across the site fleet).
• Customer (Supplier) complaints.
• Vehicle pre-trip and post-trip checks.
(The existing baseline is to be the measure of comparison and/or level of variation.) These consultative processes will set targets for productivity improvements in the KPIs. Employee representatives will be able to participate in consultative committee discussions covering the implementation of the terms of this provision.
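The KPIs listed above are simple ratios over per-shift collection records. A minimal sketch of how a site might compute them; the record fields and sample figures are hypothetical, not taken from the clause:

```python
# Hypothetical per-shift collection records for one site; field names are
# illustrative only -- the clause does not prescribe a data format.
records = [
    {"paid_hours": 9.5, "km": 210.0, "litres": 14500.0},
    {"paid_hours": 8.0, "km": 185.0, "litres": 12800.0},
    {"paid_hours": 10.0, "km": 240.0, "litres": 16100.0},
]

total_litres = sum(r["litres"] for r in records)
total_hours = sum(r["paid_hours"] for r in records)
total_km = sum(r["km"] for r in records)

kpis = {
    # Hours (paid) per 1000 litres collected.
    "paid_hours_per_1000_litres": total_hours / (total_litres / 1000),
    # Kilometres travelled per 1000 litres collected.
    "km_per_1000_litres": total_km / (total_litres / 1000),
    # Average litres per tanker per shift (averaged across the site fleet).
    "avg_litres_per_shift": total_litres / len(records),
}

for name, value in kpis.items():
    print(f"{name}: {value:.2f}")
```

Targets set by the consultative committees could then be expressed as desired movements in these ratios against the existing baseline.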
(b) The parties are committed to a continuing process of enhancing efficiency and productivity.
Performance Improvements. The responsiveness and the scalability of Graasp have been improved to enable a better user experience and to provide a stable platform to build the Go-Lab portal upon. To be able to make meaningful performance improvements, the source code was analysed and profiled to identify performance bottlenecks. Afterwards, critical parts were removed, redesigned and reimplemented. Two main metrics were used to measure the responsiveness: the average first load time and the average transition time between spaces. For Graasp this is almost equal to the app server response time (see Figure 28). The measurements were done with the help of the New Relic web application monitoring tool [6]. To achieve better responsiveness and scalability, a number of improvements were made. One such performance-related issue was located in the network communication between the front-end (the presentation tier) and the back-end (the logic tier) (see deliverable G5.2 for the technical documentation of Graasp). The presentation tier and the logic tier communicate using JSON objects. The creation of these objects was inefficient, often due to redundant JSON fields. Such fields were identified and removed. This reduced network traffic by a factor of 2 to 10, providing an important responsiveness improvement. Overall, the changes improved both of the measured metrics. The current Graasp performance is presented in Figure 28.
[1] GitHub, ▇▇▇▇▇://▇▇▇▇▇▇.▇▇▇
[2] The Graasp GitHub repository is private due to source code licensing restrictions.
[3] Graasp Shindig repository, ▇▇▇▇▇://▇▇▇▇▇▇.▇▇▇/react-epfl/shindig-react
[4] Graasp issues page, ▇▇▇▇▇://▇▇▇▇▇▇.▇▇▇/react-epfl/graasp/issues?page=1&state=closed
[5] Graasp Shindig issues page, ▇▇▇▇▇://▇▇▇▇▇▇.▇▇▇/react-epfl/shindig-react/issues?page=1&state=closed
[6] New Relic, ▇▇▇▇://▇▇▇▇▇▇▇▇.▇▇▇/
From the top chart, one can see that at the moment the network loading (brown) is very low and that most of the time is spent on rendering the page (blue) and processing the DOM tree (yellow). From the bottom chart one can see that memcached (the thin dark blue line at the top) is very efficient at serving requests and that most of the time is spent on querying the database. If further performance improvements become necessary, the page rendering and DOM processing should be addressed first, as these are currently the most time-consuming operations.
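The redundant-field cleanup described above can be illustrated with a short sketch. The payload and field names below are hypothetical; Graasp's actual JSON schema is not reproduced in this section:

```python
import json

# Hypothetical space payload; the duplicated/derivable fields mimic the kind
# of redundancy described above (Graasp's real schema is not shown here).
space = {
    "id": "42",
    "name": "Demo space",
    "name_upper": "DEMO SPACE",   # derivable from "name" on the client
    "item_count": 2,              # derivable from len(items)
    "items": [
        {"id": "a1", "title": "Intro", "parent_id": "42"},  # repeats space id
        {"id": "a2", "title": "Lab", "parent_id": "42"},
    ],
}

REDUNDANT_TOP = {"name_upper", "item_count"}
REDUNDANT_ITEM = {"parent_id"}

def prune(payload):
    """Drop fields the client can derive, shrinking the wire format."""
    slim = {k: v for k, v in payload.items() if k not in REDUNDANT_TOP}
    slim["items"] = [
        {k: v for k, v in item.items() if k not in REDUNDANT_ITEM}
        for item in payload["items"]
    ]
    return slim

before = len(json.dumps(space))
after = len(json.dumps(prune(space)))
print(f"payload: {before} -> {after} bytes")
```

On real payloads the savings come from applying such pruning across every response, which is where the reported 2x to 10x traffic reduction originates.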
Performance Improvements. The performance of the voxelNotepad2 program continues to be as critical to its utility as its feature set. At the end of the Phase 2 effort, vnp2 was capable of interactively visualizing data sets of approximately 1 million cells. These data sets could be viewed, rotated, translated and zoomed at interactive rates (i.e., approximately 30Hz). Initial loading times for property data and the time required to move between time steps, however, are still less than optimal. Some tests indicate initial ECL data load times of 300 seconds for 1 million cells. The loading of “pre-processed” ECL data (including for time stepping purposes) is on the order of 60 seconds. During Phase 3 of the effort, we will significantly improve the voxelNotepad2 program’s loading and time stepping performance. Our goal is to be able to load and time step pre-processed 1 million cell data sets in 15 seconds or less. In addition, we will explore the vnp2 program’s performance and possible improvements for data sets up to and including 10 million cells. The vnp2 program’s loading and time stepping code will be significantly optimized during the Phase 3 effort. We will modify the vnp2 pre-processing algorithms and code to directly save property and time step access information. In addition, we will optimize the vnp2 program’s in-memory data caching system to work efficiently with 1 million cell and larger data sets. Finally, we will characterize the program’s performance on data sets larger than 1 million cells and identify possible avenues to supporting data sets up to and including 10 million cells.
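Saving "property and time step access information" during pre-processing amounts to writing an access index alongside the data, so that loading a time step becomes a seek rather than a full scan. A minimal sketch of that idea, assuming a flat binary file of fixed-size per-cell records (the actual vnp2 and ECL file formats are not specified here):

```python
import struct

CELL_RECORD = struct.Struct("<f")  # one float property value per cell (assumed)

def write_preprocessed(path, time_steps):
    """Write all time steps, then a trailer of byte offsets, one per step."""
    offsets = []
    with open(path, "wb") as f:
        for values in time_steps:
            offsets.append(f.tell())
            for v in values:
                f.write(CELL_RECORD.pack(v))
        index_pos = f.tell()
        for off in offsets:                      # the saved access information
            f.write(struct.pack("<q", off))
        f.write(struct.pack("<qq", index_pos, len(offsets)))

def read_time_step(path, step, n_cells):
    """Seek directly to one time step using the saved offsets."""
    with open(path, "rb") as f:
        f.seek(-16, 2)                           # trailer: index position, count
        index_pos, n_steps = struct.unpack("<qq", f.read(16))
        assert 0 <= step < n_steps
        f.seek(index_pos + 8 * step)
        (off,) = struct.unpack("<q", f.read(8))
        f.seek(off)
        data = f.read(CELL_RECORD.size * n_cells)
        return [CELL_RECORD.unpack_from(data, i * CELL_RECORD.size)[0]
                for i in range(n_cells)]
```

With such an index, moving between time steps costs one seek plus one contiguous read per step, independent of how many steps precede it, which is the behavior the 15-second goal requires.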
5.3. Deliverables The deliverables for the Phase 3 effort are:
5.3.1. voxelNotepad2 Prototype program executables.
5.3.2. Documentation including a User Manual and Installation instructions in Microsoft Word format.
5.3.3. Internal use of vnp2 Prototype license for evaluation and demonstration within SAUDI ARAMCO.
5.3.4. Hardware to convert a SAUDI ARAMCO computer system to a Phase 3 ▇▇▇▇▇▇▇▇▇▇ ▇▇▇▇▇▇▇▇▇▇▇. Novint will deliver to ASC (FOB Houston, Texas) hardware to allow an existing SAUDI ARAMCO computer to do the visual, haptic and sound required for the Phase 3 voxelNotepad system. This hardware includes: a) a PHANTOM Desktop model haptic interface device; b) a PCI graphics card capable of operating with the haptic interface device and of supporting frame sequential stereoscopic graphics; c) sound interface hardware compatible with the haptic interface device and capable of supporting real-time MIDI and mp3 sound generation; and d) visual display calibration hardware required in order to calibrate and properly disp...
Performance Improvements. With respect to performance, the final architecture is up to 1.8 times faster than the previous architecture. The expected result would be more than 2x, as the number of cores is more than doubled. The reason is that the overhead of writing p(x), p(xn+1, x) and p(x, y) to LMEM is included in the total runtime. For small numbers of bins, the execution time is about the same between the 3-core and the 8-core implementation. As the number of bins increases, the performance gain becomes more visible, as the LMEM write calls become a smaller portion of the total execution time. Another factor that increases the overhead is the initialization of the 10 streams from the LMEM.

Num of Bins  HWv.2 (3 cores) (sec)  HW (8 cores) (sec)  SpeedUp
96           0.01                   0.01                1
2*96         0.043                  0.025               1.7
5*96         0.52                   0.33                1.6
8*96         2.1                    1.2                 1.8
10*96        4                      2.4                 1.7
12*96        6.8                    4.1                 1.7

The number of bins has to be a multiple of 96, as LMEM data have to be a multiple of 384 bytes, which is the burst size. The difference from MI is that the PDFs are a lot larger in the TE case, so the division of p(x, y) does not lead to streams smaller than 384 bytes. Padding is used if a different number of bins is needed, but it carries a performance penalty; the results here are presented with the multiple-of-96 restriction, as padding causes a slight decrease in performance. We suggest, if this architecture is used, that the number of bins be a multiple of 96 where the application allows it, in order to avoid the overhead introduced by padding. For completeness, we present the increase in performance vs. the version 2 software in the following table. The final architecture is up to 10 times faster than the equivalent software implementation.
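The multiple-of-96 restriction can be handled on the host side by padding the histogram up to the next burst-aligned size. A minimal sketch, assuming 4-byte bin entries so that 96 bins fill one 384-byte burst (the actual stream layout is defined by the DFE design and is not shown here):

```python
BURST_BYTES = 384
BIN_BYTES = 4                                # assumed entry size: 96 bins/burst
BINS_PER_BURST = BURST_BYTES // BIN_BYTES    # 96

def pad_bins(bins):
    """Zero-pad a bin list up to the next multiple of 96 (one LMEM burst)."""
    target = -(-len(bins) // BINS_PER_BURST) * BINS_PER_BURST  # ceil to 96
    return bins + [0] * (target - len(bins))

padded = pad_bins([1] * 100)                 # 100 bins -> padded to 192
print(len(padded))
```

The zero entries contribute nothing to the PDFs but must still be streamed, which is the source of the padding penalty mentioned above.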
Num of Bins  SWv2 (sec)  HW 8-core (sec)  SpeedUp
96           0.04        0.01             4
2*96         0.25        0.025            10
5*96         3.2         0.33             9.7
8*96         11.6        1.2              9.7
10*96        22          2.4              9.2
12*96        37          4.1              9

The above results show that the hardware system with 1 DFE can calculate up to 40 TE values every second for 2*96 bins. This means that the hardware implementation can calculate the pairwise TE between 6 different stocks every second. Just like in the Mutual Information case, the multi-DFE implementation of the final architecture is under development at the time this deliverable is being written. As a result we cannot provide any actual runtime results for a multi-DFE implementation. At the moment we can only make a projection of the performance with the use...
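The 6-stock figure follows from simple arithmetic: TE is directional, so N stocks require N*(N-1) pairwise TE values, and at 0.025 s per value (the 2*96-bin row) one DFE sustains 40 values per second. A small check of that reasoning:

```python
def max_stocks_per_second(seconds_per_te, budget_s=1.0):
    """Largest N whose N*(N-1) directed TE pairs fit in the time budget."""
    rate = budget_s / seconds_per_te          # TE values per second
    n = 1
    while (n + 1) * n <= rate:                # (n+1) stocks need (n+1)*n values
        n += 1
    return n, int(rate)

stocks, rate = max_stocks_per_second(0.025)   # 2*96 bins: 0.025 s per TE value
print(stocks, rate)
```

Six stocks need 6*5 = 30 directed TE values, which fits in the 40-value-per-second budget, while seven would need 42 and would not.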
