Network Performance Measurement
Network performance is the overall quality of service a network provides. It encompasses numerous parameters and measurements that must be analyzed collectively to assess a given network. Network performance measurement, then, is the set of processes and tools used to quantitatively and qualitatively assess network performance and provide actionable data for remediating any performance issues.
Why Measure Network Performance
The demands on networks are increasing every day, and proper network performance measurement is more important than ever. Effective network performance translates into improved user satisfaction, whether for internal employees or for customer-facing components such as an e-commerce website, making the business rationale for performance testing and monitoring self-evident.
When delivering services and applications to users, bandwidth issues, network down time, and bottlenecks can quickly escalate into IT crisis mode. Proactive network performance management solutions that detect and diagnose performance issues are the best way to guarantee ongoing user satisfaction.
The performance of a network can never be fully modeled, so measuring network performance before, during, and after updates are made and monitoring performance on an ongoing basis are the only valid methods to fully ensure network quality. While measuring and monitoring network performance parameters are essential, the interpretation and actions stemming from these measurements are equally important.
Network Performance Measurement Tools
Network performance measurement tools can be broadly categorized into two types: passive and active. Passive network measurement tools monitor (or measure) existing applications on the network to gather data on performance metrics. This category of tool minimizes network disruption, since no additional traffic is introduced by the tool itself. In addition, by measuring network performance using actual applications, a realistic assessment of the user experience may be obtained.
Active network performance measurement tools generate synthetic traffic, using pre-set routines tailored to baseline performance. Because this testing adds traffic of its own, it must be scheduled appropriately to minimize the impact on existing network traffic.
The continuous improvement of network performance monitoring tools has enabled IT professionals to stay one step ahead of the game. Advanced tools provide cutting edge data packet capture analytics, software solutions that integrate user experience data into effective root cause analysis and trending, and large-scale network performance measurement dashboards with remote diagnostic capabilities.
Network Performance Measurement Parameters
To ensure optimized network performance, the most important criteria should be selected for measurement. Many of the parameters included in a comprehensive network performance measurement solution focus on data speed and data quality. Both of these broad categories can significantly impact end user experience, and both are influenced by several factors.
Latency
With regards to network performance measurement, latency is simply the amount of time it takes for data to travel from one defined location to another. This parameter is sometimes referred to as delay. Ideally, the latency of a network is as close to zero as possible. The absolute limit on latency is the speed of light, but packet queuing in switched networks and the refractive index of fiber optic cabling are examples of variables that can increase it.
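That physical floor is easy to compute. A minimal sketch, assuming a typical refractive index of about 1.47 for silica fiber (real cable paths are longer than great-circle distance, so this is a lower bound):

```python
# Speed of light in vacuum, in km/s.
C_VACUUM_KM_PER_S = 299_792.458
# Assumed typical refractive index for silica fiber.
FIBER_REFRACTIVE_INDEX = 1.47

def fiber_propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber in milliseconds,
    ignoring queuing, serialization, and processing delay."""
    speed_in_fiber_km_per_s = C_VACUUM_KM_PER_S / FIBER_REFRACTIVE_INDEX
    return distance_km / speed_in_fiber_km_per_s * 1000.0
```

For a roughly 5,500 km path (on the order of New York to London), this puts the one-way floor near 27 ms before any queuing delay is added.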
Packet Loss
With regards to network performance measurement, packet loss refers to the number of packets transmitted from one location to another that fail to arrive. This metric can be quantified by capturing traffic data on both ends, then identifying missing packets and/or retransmissions. Packet loss can be caused by network congestion, router performance, and software issues, among other factors.
Users detect the end effects as voice and streaming interruptions or incomplete file transfers. Since retransmission is the method network protocols use to compensate for packet loss, the congestion that initially caused the loss can be exacerbated by the additional traffic the retransmissions generate.
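The two-ended capture described above reduces to a comparison of sequence numbers seen by the sender and the receiver. A minimal sketch:

```python
def packet_loss_percent(sent_seq, received_seq):
    """Percentage of transmitted packets that never arrived,
    identified by comparing sequence numbers captured on both ends."""
    sent = set(sent_seq)
    lost = sent - set(received_seq)
    return 100.0 * len(lost) / len(sent)
```

Feeding in, say, 100 transmitted sequence numbers of which 95 were observed at the receiver yields 5% loss; a real tool must also separate first transmissions from retransmissions, which this sketch does not attempt.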
To minimize the impact of packet loss and other network performance problems, it is important to develop and utilize tools and processes that identify and alleviate the true source of problems quickly. By analyzing response time to end user requests, the system or component that is at the root of the issue can be identified. Data packet capture analytics tools can be used to review response time for TCP connections, which in turn can pinpoint which applications are contributing to the bottleneck.
Transmission Control Protocol (TCP) is the standard governing the network conversations through which applications exchange data; it works in conjunction with the Internet Protocol (IP) to define how packets of data are sent from one computer to another. The successive steps in a TCP session correspond to time intervals that can be analyzed to detect excessive connection latency or round-trip times.
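One of those intervals, the initial three-way handshake, can be timed directly from an endpoint. A sketch using only the standard library (the host and port are caller-supplied; a production tool would repeat the measurement and report a distribution, not a single sample):

```python
import socket
import time

def tcp_connect_time_ms(host, port, timeout=2.0):
    """Time the TCP three-way handshake by measuring how long
    socket.create_connection() takes to return."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # Close immediately; only the connect phase is timed.
    return (time.perf_counter() - start) * 1000.0
```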
Throughput and Bandwidth
Throughput is a metric often associated with the manufacturing industry, where it is most commonly defined as the amount of material or items passing through a particular system or process. A common question in manufacturing is how many units of product X were produced today, and whether that number met expectations. For network performance measurement, throughput is defined as the amount of data, or number of data packets, that can be delivered in a pre-defined time frame.
Bandwidth, usually measured in bits per second, is a characterization of the amount of data that can be transferred over a given time period. Bandwidth is therefore a measure of capacity rather than speed. For example, a bus may be capable of carrying 100 passengers (bandwidth), but the bus may actually only transport 85 passengers (throughput).
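The bus analogy translates directly into arithmetic; a minimal sketch:

```python
def throughput_mbps(bytes_transferred, elapsed_seconds):
    """Achieved throughput in megabits per second."""
    return bytes_transferred * 8 / elapsed_seconds / 1e6

def utilization_percent(achieved_mbps, capacity_mbps):
    """How full the 'bus' is: throughput as a share of bandwidth."""
    return 100.0 * achieved_mbps / capacity_mbps
```

Transferring 12,500,000 bytes in one second is 100 Mbit/s of throughput; on a 100 Mbit/s link that is 100% utilization, while 85 Mbit/s achieved would be the 85-passenger bus.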
Jitter
Jitter is defined as the variation in time delay for data packets sent over a network, representing a disruption in the normal sequencing of data packets. Jitter is related to latency: it manifests as increased or uneven latency between data packets, which can disrupt network performance and lead to packet loss and congestion. Although some level of jitter is expected and can usually be tolerated, quantifying it is an important aspect of comprehensive network performance measurement.
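One standard way to quantify this variation is the smoothed interarrival jitter estimator defined in RFC 3550 (the RTP specification), which folds each new delay difference into a running average:

```python
def interarrival_jitter(transit_delays_ms):
    """RFC 3550 smoothed interarrival jitter: J += (|D| - J) / 16,
    where D is the difference between consecutive per-packet
    transit delays (in milliseconds here)."""
    jitter = 0.0
    previous = None
    for delay in transit_delays_ms:
        if previous is not None:
            d = abs(delay - previous)
            jitter += (d - jitter) / 16.0
        previous = delay
    return jitter
```

Perfectly even delays yield zero jitter; any variation between consecutive packets pushes the estimate up, with the 1/16 gain smoothing out individual spikes.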
Latency vs Throughput
Just as throughput and bandwidth are sometimes confused, so too are latency and throughput. Although these parameters are closely related, it is important to understand the difference between the two.
In relation to network performance measurement, throughput is a measurement of actual system performance, quantified in terms of data transfer over a given time.
Latency is a measurement of the delay in transfer time, meaning it directly impacts throughput but is not synonymous with it. Latency might be thought of as an unavoidable bottleneck on an assembly line, such as a test process, measured in units of time. Throughput, on the other hand, is measured in units completed, a number inherently influenced by that latency.
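The interaction is easiest to see in TCP, where a single connection can send at most one window of data per round trip, so throughput is capped by window size divided by round-trip latency. A sketch of that back-of-the-envelope calculation:

```python
def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Ceiling on single-connection TCP throughput: one window of
    data per round trip, converted to megabits per second."""
    return window_bytes * 8 / rtt_seconds / 1e6
```

A classic 64 KiB window over a 50 ms round-trip path caps out near 10.5 Mbit/s no matter how much link bandwidth is available, which is exactly the latency-limits-throughput effect described above.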
Factors Affecting Network Performance
Network performance management includes monitoring and optimization practices for key network performance metrics such as application down time and packet loss. Increased network availability and minimized response time when problems occur are two of the logical outputs for a successful network management program. A holistic approach to network performance management must consider all of the essential categories through which problems may be manifested.
The overall network infrastructure includes network hardware, such as routers, switches and cables, networking software, including security and operating systems as well as network services such as IP addressing and wireless protocols. From the infrastructure perspective, it is important to characterize the overall traffic and bandwidth patterns on the network. This network performance measurement will provide insight into which flows are most congested over time and could become potential problem areas.
Identifying the over-utilized elements of the infrastructure allows proactive corrections or upgrades that minimize future downtime, rather than simply responding to performance crises as they arise.
Performance limitations inherent to the network itself are often a source of significant emphasis. Multiple facets of the network can contribute to performance, and deficiencies in any of these areas can lead to systemic problems. Since hardware requirements are essential to capacity planning, these elements should be designed to meet all anticipated system demands. For example, an inadequate bus size on the network backplane or insufficient available memory might in turn lead to an increase in packet loss or otherwise decreased network performance. Network congestion, on either the active devices or the physical links (cabling) of the network, can lead to decreased speeds if packets are queued, or to packet loss if no queuing system is in place.
While network hardware and infrastructure issues can directly impact user experience for a given application, it is important to consider the impact of the applications themselves as important cogs in the overall network architecture. Poor performing applications can over-consume bandwidth and diminish user experience. As applications become more complex over time, diagnosing and monitoring application performance gains importance. Window sizes and keep-alives are examples of application characteristics that impact network performance and capacity.
Whenever possible, applications should be designed with their intended network environment in mind, using real-world networks for testing rather than simulation labs. Ultimately, the variety of network conditions an application is exposed to cannot be fully anticipated, but improvements in development practices can lead to a decrease in network performance degradation due to application issues. Applications contributing to poor network performance can be identified using analytics to identify slow response time, while correcting these design limitations post-release can become a formidable task.
Network security is intended to protect privacy, intellectual property, and data integrity. Thus, the need for robust cybersecurity is never in question. Managing and mitigating network security issues requires device scanning, data encryption, virus protection, authentication and intrusion detection, all of which consume valuable network bandwidth and can impact performance.
Security breaches and downtime due to viruses are among the most costly performance problems encountered, so any degradation induced by security products should be carefully weighed against the potential downtime or data integrity disasters they prevent. With these constraints in mind, an invaluable element of network performance monitoring with respect to security is the strategic use of network security forensics. By recording, capturing and analyzing network data, the source of intrusions and anomalous traffic such as malware may be identified. Captured network traffic can be utilized retrospectively for investigative purposes by reassembling transferred files.
Full Packet Capture (FPC) is one such technique used for after-the-fact security investigations. Rather than monitoring incoming traffic for known malicious signatures, FPC provides constant storage of unmodified network traffic and the ability to replay previous traffic through new detection signatures. Given the high volume of data packet transfer inherent to a modern network, the storage requirements associated with FPC can be formidable. By defining the mean time to detect (MTTD) based on previous incident results, a logical minimum time for packet data storage can be established. In some cases, packet filtering may be a viable method to selectively monitor high risk traffic and lessen the storage demands. To facilitate forensic analysis capabilities, FPC software must enable accurate time and date stamping of stored packets for search and investigation purposes.
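The storage arithmetic behind that sizing decision is straightforward; a sketch, where link rate, average utilization, and the retention window (derived from MTTD) are all planner-supplied assumptions:

```python
def fpc_storage_terabytes(link_mbps, avg_utilization, retention_days):
    """Raw capture storage needed to retain all traffic on one link
    for the chosen window (no compression or filtering applied)."""
    bytes_per_second = link_mbps * 1e6 / 8 * avg_utilization
    return bytes_per_second * 86400 * retention_days / 1e12
```

For example, a 1 Gbit/s link averaging 30% utilization needs roughly 22.7 TB to hold a 7-day window, which illustrates why filtering down to high-risk traffic can matter so much.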
Network Performance Measurement Challenges
The potential culprits behind diminished network performance only become actionable once there is an observable drop-off in speed or quality. Network performance measurement solutions should therefore be designed with the user in mind: a slight degradation in latency, for example, may not be perceptible. Finding these acceptable limits is the key to establishing relevant testing and monitoring.
With performance demands constantly increasing, novel solutions to common performance issues have emerged. Packet shaping is a method used to prioritize packet delivery for different applications, allowing adequate bandwidth to be consistently allocated to the most important traffic categories. File compression is another technique that decreases the bandwidth and storage consumed.
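The bandwidth saving from compression is easy to demonstrate with the standard library. The payload below is an arbitrary stand-in for repetitive application data (the hostname is a placeholder, not a real endpoint):

```python
import zlib

# Repetitive payloads (logs, markup, telemetry) compress well.
payload = b"GET /api/v1/status HTTP/1.1\r\nHost: example.test\r\n\r\n" * 200
compressed = zlib.compress(payload, level=6)

savings = 100.0 * (1 - len(compressed) / len(payload))
print(f"{len(payload)} -> {len(compressed)} bytes "
      f"({savings:.0f}% bandwidth saved)")
```

The trade-off is CPU time at both ends, so compression helps most when the link, not the endpoints, is the bottleneck.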
Perhaps the most important component in maintaining network performance is the implementation of effective network performance measurement and oversight practices. If problems with servers, routing, delivery or bandwidth can be detected in real time, expedient solutions and preventative strategies are the logical byproducts.