Network Latency Sample Clauses

Network Latency. Network Latency is defined as the average time taken for an IP packet to traverse a pair of backbone Company POPs on the Company Network. The Company Network Latency Guarantee means that the average monthly network latency between North American Company POPs shall not exceed eighty-five (85) ms. In the event that guaranteed network latency metrics are not met during any one calendar-month period, Company will provide a credit equivalent to one (1) day of Service Charge.
Network Latency. The network latency will average less than 25 ms per element averaged across all elements on the local portion of SI&T’s network. After being notified by Customer of network latency in excess of the limit specified above, SI&T will use commercially reasonable efforts to determine the source of such excess network latency and to correct such problem to the extent that the source of the problem is on SI&T’s Network. If SI&T fails to remedy such network latency within four (4) hours of verification and if the average network latency for the preceding 30 days has exceeded the rates specified above, Customer may request a one (1) day Service Credit for that particular event. Customer may not request network latency service credit more than once for any given calendar day. Network Latency across an element is defined as the average time taken for data to make a round trip across such element. Elements in the transport circuit include routers, switches, circuits and other components. Test points for latency are designated solely by SI&T. Testing must be done during a period in which the only traffic on the circuit is the test traffic. Average latency is not measured when a circuit is experiencing a service outage. In the case of continuous high latency exceeding the limits of this SLA, SI&T reserves the right to recommend the disconnection of the affected circuit without penalty of breach.
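The credit conditions in the clause above combine a per-event remedy window with a trailing 30-day average and a once-per-day cap. Below is a minimal Python sketch of how that eligibility test might be expressed; the 25 ms limit, the four-hour remedy window, the 30-day average test, and the once-per-calendar-day cap come from the clause, while the function and variable names are hypothetical.

```python
from datetime import timedelta

# Figures from the clause; everything else here is illustrative.
LATENCY_LIMIT_MS = 25          # average per-element limit on the local portion of the network
REMEDY_WINDOW = timedelta(hours=4)

def credit_eligible(time_since_verification: timedelta,
                    still_exceeding_limit: bool,
                    trailing_30_day_avg_ms: float,
                    credit_already_requested_today: bool) -> bool:
    """Rough reading of the clause: a one (1) day Service Credit may be requested
    only if the excess latency is not remedied within four hours of verification,
    the preceding 30-day average also exceeded the limit, and no latency credit
    has already been requested for that calendar day."""
    if credit_already_requested_today:
        return False
    if time_since_verification < REMEDY_WINDOW or not still_exceeding_limit:
        return False
    return trailing_30_day_avg_ms > LATENCY_LIMIT_MS
```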
Network Latency. The average network transit delay (“Latency”) will be measured via round-trip pings on an ongoing basis every five minutes to determine a consistent average monthly performance level for Latency between edge locations. Edge locations are defined as Customer sites, trading partner locations, or Exchange locations. Latency is calculated as follows: Target Latency Goal = Minimum Latency + (Per Mile Latency * Round Trip Miles* Between Customer Edges)

Region | Minimum Latency | Per Mile Latency
Intra U.S. | 10 ms | .02 ms
International | 20 ms | .03 ms

If Goal Exceeded By | Credit as % of Lumen Financial Connect Port MRC of Affected Service*
1-10 ms | 10%
11-20 ms | 20%
>20 ms | 30%

To simplify calculations, air miles are used to generate latency targets. For example, if location A is 100 air miles from location B (i.e. 200 miles round trip), the latency target would be 20 ms + (.02 ms * 200) = 24 ms. Route miles are used in lieu of air miles only when the number of route miles is greater than 2x the number of air miles. *Subject to requirements and limitations in Section 4.

(ii) Exchange Connectivity Latency to New York, Chicago, and London Data Centers. Global Exchange Connectivity Latency metrics are calculated one way in milliseconds. The Global Exchange Connectivity Latency Goal in this subsection is applicable only if a Customer location is within the Lumen Data Center listed in the table below. The Global Exchange Connectivity Latency Goal is applicable to one connection of a primary/secondary resilient connection to the Exchange listed in the table below. The table below reflects measurements one way in milliseconds. Global Exchange Connectivity Latency Goals are measured using monthly averages. Remedy (credit is applied to the Lumen Financial Connect Port MRC of the Affected Service): failure to meet the Goal qualifies Customer for 25% of the Lumen Financial Connect Port MRC (credit cannot be combined with the Network Availability SLA credit).

Exchange | LO4 | LO1 | NJ2 | NJ2X | NJ1 | XX0 | XX0
SFTI EU | 0.25 | 1 | -- | -- | -- | -- | --
LSE | 0.5 | 1 | -- | -- | -- | -- | --
BATS EU | 0.25 | 1 | -- | -- | -- | -- | --
BOX | -- | -- | 0.25 | 0.25 | 0.3 | 0.25 | 10
BATS US | -- | -- | 0.25 | 0.25 | 0.25 | 1 | 10
CBOE | -- | -- | 10 | 10 | 10 | 10 | 0.25
CME | -- | -- | 10 | 10 | 10 | 10 | 0.25
ICE | -- | -- | 10 | 10 | 10 | 10 | 0.25
ISE | -- | -- | 1 | 1 | 1 | 1.5 | 10
NASDAQ NLX | -- | -- | 0.25 | 0.5 | 0.5 | 1 | 10
NYSE SFTI US | -- | -- | 0.25 | 0.25 | 0.25 | 1 | 10

(c) Packet Delivery. Packet Delivery will be measured on an ongoing basis every five minutes to determine a consistent average monthly performance level for packe...
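The target-latency formula and credit tiers above lend themselves to a direct calculation. The Python sketch below uses the figures from the tables; the function and variable names are illustrative, the treatment of route miles as a one-way figure is an assumption rather than something the clause states, and the tier boundaries are interpreted as "excess over the goal".

```python
from typing import Optional

# Figures from the tables above; names and structure are illustrative only.
REGIONS = {
    "intra_us":      {"minimum_ms": 10.0, "per_mile_ms": 0.02},
    "international": {"minimum_ms": 20.0, "per_mile_ms": 0.03},
}
# (goal exceeded by more than this many ms, credit as a fraction of the Port MRC)
CREDIT_TIERS = [(20.0, 0.30), (10.0, 0.20), (0.0, 0.10)]

def target_latency_goal_ms(region: str, air_miles_one_way: float,
                           route_miles_one_way: Optional[float] = None) -> float:
    """Target Latency Goal = Minimum Latency + (Per Mile Latency * Round Trip Miles).
    Route miles replace air miles only when they exceed 2x the air miles."""
    miles_one_way = air_miles_one_way
    if route_miles_one_way is not None and route_miles_one_way > 2 * air_miles_one_way:
        miles_one_way = route_miles_one_way
    r = REGIONS[region]
    return r["minimum_ms"] + r["per_mile_ms"] * (2 * miles_one_way)  # round trip

def credit_fraction(measured_monthly_avg_ms: float, goal_ms: float) -> float:
    """Map the amount by which the monthly average exceeds the goal onto the tiers."""
    excess_ms = measured_monthly_avg_ms - goal_ms
    if excess_ms <= 0:
        return 0.0
    for threshold_ms, credit in CREDIT_TIERS:
        if excess_ms > threshold_ms:
            return credit
    return 0.0
```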
Network Latency. As the primary locus of data moves from disk to flash or even DRAM, the network is becoming the primary source of latency in remote data access. Network latency is an expression of how much time it takes for a packet of data to get from one point to another. Several factors contribute to network latency, including not only the time it takes for a packet to travel in the cable, but also the time the equipment/switch uses to transmit, receive, buffer, and forward the packet. Total packet latency is the sum of all of the path latencies and all of the switch latencies encountered along the route (usually reported as RTT, Round Trip Time). A packet that travels over N links will pass through N − 1 switches. The value of N for any given packet will vary depending on the amount of locality that can be exploited in an application’s communication pattern, the topology of the network, the routing algorithm, and the size of the network. However, when it comes to typical-case latency in a large-scale data centre network, path latency is a very small part of total latency. Total latency is dominated by the switch latency, which includes delays due to buffering, routing algorithm complexity, arbitration, flow control, switch traversal, and congestion at a particular switch egress port. Note that these delays are incurred at every switch in the network, and hence are multiplied by the hop count. One possible way to reduce hop count is to increase the radix of the switches. Increased switch radix also means fewer switches for a network of a given size and therefore reduced CapEx cost. Reduced hop count and fewer switches also lead to reduced power consumption. For electrical switches, there is a fundamental trade-off due to the poor scaling of both signal pins and per-pin bandwidth. For example, one could choose to use more pins per port, which results in a lower radix but higher bandwidth per port. Another option is to use fewer pins per port, which would increase the switch radix, but the bandwidth of each port would suffer. Photonics may lead to a better solution: the bandwidth advantage of spatial/spectral division multiplexing and the tighter signal packaging density of optics make high-radix switches feasible without a corresponding degradation of port bandwidth.
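A toy calculation illustrates the point that total latency is dominated by per-switch delays multiplied by the hop count. The numbers below are illustrative placeholders, not figures from the text; the function name is hypothetical.

```python
def one_way_latency_ns(n_links: int, link_latency_ns: float, switch_latency_ns: float) -> float:
    """Total one-way latency = sum of the path (link) latencies plus the latency of
    the N - 1 switches a packet crosses when it traverses N links."""
    n_switches = n_links - 1
    return n_links * link_latency_ns + n_switches * switch_latency_ns

# Illustrative only: ~5 m of fibre per link (~25 ns at ~5 ns/m) vs. ~500 ns per switch
# under load; with 5 links (4 switch hops) the switch delay dominates the total.
rtt_ns = 2 * one_way_latency_ns(n_links=5, link_latency_ns=25.0, switch_latency_ns=500.0)
print(f"RTT = {rtt_ns / 1000:.2f} us")  # path: 250 ns round trip, switches: 4000 ns round trip
```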
Network Latency. Latency is the time delay experienced between a local computer/device generating a Layer 3 ICMP 64-byte ping message and receiving a response from the targeted remote computer/device. It is normally expressed in milliseconds (thousandths of a second). No SLA is offered for IP packets traversing the public Internet (defined as the Ethoplex edge router interface connecting to the Tier 1 provider and beyond); for Internet Access, the Ethoplex network is an extension of the public Internet. Ethoplex will measure latency using a standard 64-byte ping from one network device to a second network device in a round-trip fashion. The ping test shall be conducted every 5 minutes, 24 hours a day, for an entire month to constitute the measurement period. A month is defined as 30 days times 24 hours for a total of 720 hours. Pinging every five minutes produces 12 pings per hour, 288 pings per day, and 8,640 pings per month. Latency will be measured as an average over the month, beginning on the first of each month, to determine the performance of the network based upon the Latency Report issued by Ethoplex. The SLA will be determined to be non-compliant if, within any 24-hour period (day), there is a period of one consecutive hour or more during which Ethoplex measurements exceed 15 ms on average; such periods qualify as non-standard performance. The Customer must open a trouble ticket with the Ethoplex NOC in order to qualify for the credits issued for non-compliant SLA performance.
Network Availability. Network Availability is defined as the total number of minutes in a billing month during which an Ethoplex Ethernet service is available to exchange data between the two Customer end points, or a Customer end point and the router connecting Ethoplex to the Tier 1 provider, divided by the total number of minutes in a billing month, expressed as a percentage. A billing month has 43,200 minutes. Network Availability is calculated as the total number of minutes during a calendar month when a specific customer connection and local access arrangements are available to exchange data between two or more customer end points with the same type of service, divided by the total number of minutes for that month. Network Availability covering Type 1 (On-Net) access is 99.99%, which translates to 4.32 minutes per month of downtime outside the maintenance window(s) for Layer 2 Ethernet transport services. The calculation of Network Availability commences after the Customer opens a Trouble Ticket with t...
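The measurement arithmetic in this clause (pings per period, and the downtime allowance implied by 99.99% over a 43,200-minute billing month) can be reproduced directly. A small Python sketch, with the clause's figures hard-coded as named constants; the constant and function names are illustrative.

```python
PING_INTERVAL_MIN = 5
HOURS_PER_MONTH = 30 * 24                   # clause defines a month as 30 days x 24 hours = 720 hours
MINUTES_PER_MONTH = HOURS_PER_MONTH * 60    # 43,200 minutes in a billing month

pings_per_hour = 60 // PING_INTERVAL_MIN    # 12
pings_per_day = pings_per_hour * 24         # 288
pings_per_month = pings_per_day * 30        # 8,640

AVAILABILITY_TARGET = 0.9999                # Type 1 (On-Net) access
allowed_downtime_min = MINUTES_PER_MONTH * (1 - AVAILABILITY_TARGET)  # ~4.32 minutes

def availability(available_minutes: float) -> float:
    """Network Availability = available minutes / total minutes in the billing month."""
    return available_minutes / MINUTES_PER_MONTH

print(pings_per_month, round(allowed_downtime_min, 2))  # 8640 4.32
```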
Network Latency. The end-to-end Network Latency will not be greater than an average of nine (9) milliseconds.
Network Latency. 4.4.1 BT agrees to provide the Service with a Latency commitment subject to the terms of this Contract.
Network Latency. The Services are targeted to have “Network Latency” of 60 ms or less within the Network. Network Latency means the round-trip packet transit time between the Customer’s premises and Edge’s Node, as averaged over a calendar month.
Network Latency. Latency is the delay in traffic between any start and end point on the Amicus Networks Service provided. It is measured from core node to core node in milliseconds (ms) and is averaged as a one-way delay (not round trip delay) over any calendar month. The following table outlines the target service levels for latency based on one-way route averages for the Service that is being provided:

Service | Latency
Transit US | < 30 ms
Transit Europe | < 20 ms
Transit | < 15 ms
Customer Direct Internet Access Ethernet Circuit | < 10 ms
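A minimal compliance check against the target table above; the dictionary keys mirror the table's service names, while the function name and structure are illustrative rather than anything the clause specifies.

```python
# One-way, calendar-month average latency targets from the table above (ms).
LATENCY_TARGETS_MS = {
    "Transit US": 30.0,
    "Transit Europe": 20.0,
    "Transit": 15.0,
    "Customer Direct Internet Access Ethernet Circuit": 10.0,
}

def meets_target(service: str, one_way_monthly_avg_ms: float) -> bool:
    """True if the calendar-month one-way average is under the target for the service."""
    return one_way_monthly_avg_ms < LATENCY_TARGETS_MS[service]
```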
Network Latency. Jitter <= 30ms: