Network Performance

“Sir, I am unable to join the class as my network connectivity is poor!” A very common conversation these days. When you say your network connectivity is poor, you are really talking about the performance of the network. Why is my network connection slow? What are the factors affecting speed? Let’s dive into the details.

What is Network Performance?

Network performance is the analysis of a network as a whole and a measure of the quality of service the underlying network offers to the user. It covers both qualitative and quantitative measures of how the network performs. The analysis helps the network administrator review and improve the network’s services. Network performance is assessed by reviewing statistics on certain factors. Here are the factors that affect the performance of a network.

  1. Network Bandwidth
  2. Throughput
  3. Latency or delay
  4. Jitter
  5. Packet Loss and congestion

Network Bandwidth:

Bandwidth is the ability or capacity of a network to transmit data. It is often mistaken for internet speed: speed is what we actually receive, while bandwidth is what the network is capable of. More bandwidth does not necessarily mean more speed. The term “bandwidth” is used in two contexts. For digital devices, bandwidth is measured in bits per second (bps); for analog devices, it is measured in Hertz (Hz), where it can be thought of as the range of frequencies a channel can pass, e.g. a telephone network (in Hz).

Throughput:

Throughput is the number of data packets transmitted in a given unit of time. It can also be described as the number of units of data a system can process in a given time. Throughput has different definitions in different contexts. Bandwidth and throughput are often thought to be the same, but they are not.

Consider a highway on which 300 vehicles can pass in a given interval. At a given instant, however, it is observed that only 200 vehicles passed. In simple terms, bandwidth is the theoretical, potential measure of data transfer, while throughput is the practical, actual measure of the number of packets delivered. In the example above, the highway has a bandwidth of 300 units but an actual throughput of 200 units.
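The highway analogy above can be turned into a quick link-utilization calculation. A minimal sketch in Python, using the same illustrative figures:

```python
# Figures from the highway analogy: the link can carry 300 units,
# but only 200 are observed to pass in the same interval.
bandwidth = 300   # theoretical capacity (units per interval)
throughput = 200  # actual measured delivery (units per interval)

# Utilization: how much of the theoretical capacity is actually used.
utilization = throughput / bandwidth * 100
print(f"Link utilization: {utilization:.0f}%")  # prints "Link utilization: 67%"
```

Real tools report the same ratio, just with bits per second instead of vehicles.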

Latency or Delay:

It is the time taken for complete information to reach the destination from the source. It is the total time taken from the point when the first bit of data transmitted from a source to the last bit reached the destination. In simple terms, it is the time taken for a network to successfully deliver the packets to the destination. The networks in which delays are low are termed as “Low Latency networks” and those with higher delays are termed as “High Latency Networks”. High latency can lead to a bottleneck and consequently decreasing the bandwidth of the network. 

Latency is the sum of the processing, queueing, transmission, and propagation delays. Let us assume some packets of information are being transmitted from Host A (source) to Host B (destination).
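Since latency is simply the sum of these four components, a back-of-the-envelope estimate is easy to sketch in Python (all figures below are illustrative, not measured values):

```python
# Four components of one-way latency, in milliseconds (illustrative values).
processing_ms = 0.5    # header processing at the router
queueing_ms = 2.0      # time spent waiting in the buffer
transmission_ms = 1.5  # time to push all bits onto the link
propagation_ms = 15.0  # time for the signal to travel the distance

latency_ms = processing_ms + queueing_ms + transmission_ms + propagation_ms
print(f"Total one-way latency: {latency_ms} ms")  # prints "Total one-way latency: 19.0 ms"
```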

Processing delay:

We know that the information travels via packet switching (it could also travel via circuit switching, but we assume packet switching here). In this method of transmission, packets do not all travel along a single path. Depending on the destination address in the packet header, they travel independently on different paths, and they choose alternative paths if some paths are congested. This can lead to packets arriving out of order or being lost, so sequence numbers are associated with the packets to identify and reassemble them. These sequence or identification numbers are stored in the IP packet header along with the source IP address, destination IP address, and packet length.

The time taken by a networking device (a switch or router) to process the packet header is known as processing delay. It depends entirely on the processing speed of the device.

Queueing Delay:

Packet switching uses a queue to place packets on the transmission line, i.e. the path a packet takes to reach the destination. Because packets are free to choose their paths, they may be discarded at any hop. What the device does is hold packets in a waiting area and move them on once their turn arrives. This queue is, more technically, a buffer.

The time a packet waits to get processed in the buffer of the networking device is known as queueing delay. This delay depends on the arrival time of the incoming packets, the nature of the network’s traffic, and the transmission capacity of the outgoing link. 
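The buffer behaviour can be sketched with a simple first-in-first-out queue. This is a toy model, assuming a fixed per-packet service time; real queueing delay varies with traffic:

```python
from collections import deque

# Minimal FIFO buffer model (all names and values are illustrative).
# Each packet must wait for every packet ahead of it to be serviced,
# so its queueing delay grows with its position in the buffer.
buffer = deque(["pkt1", "pkt2", "pkt3"])
service_time_ms = 2  # assumed time to move one packet onto the outgoing link

for position, packet in enumerate(buffer):
    wait_ms = position * service_time_ms
    print(f"{packet} waits {wait_ms} ms in the queue")
# pkt1 waits 0 ms, pkt2 waits 2 ms, pkt3 waits 4 ms
```

When the buffer fills faster than the link can drain it, arriving packets are dropped, which is one way congestion turns into packet loss.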

Transmission delay:

After queueing, the packet has to be placed onto the outgoing link, or transmission line. The time taken to push all of a packet’s bits onto the outgoing link is defined as transmission delay. It is determined by the size of the packet and the capacity of the outgoing link.

Transmission delay = (L/B) seconds

L = number of bits in the packet
B = speed of the outgoing link, in bits per second

Conversions:
1 bit per second = 0.001 kilobits per second = 0.000001 megabits per second

1 byte = 8 bits
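Applying the formula and conversions above, a worked example in Python (the packet size and link speed are assumptions for illustration):

```python
# Transmission delay = L / B
packet_size_bytes = 1500   # a typical Ethernet frame size, assumed here
L = packet_size_bytes * 8  # number of bits in the packet (1 byte = 8 bits)
B = 10_000_000             # outgoing link speed: 10 Mbps = 10,000,000 bits per second

transmission_delay_s = L / B
print(f"Transmission delay: {transmission_delay_s * 1000:.1f} ms")  # prints "Transmission delay: 1.2 ms"
```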

Propagation delay:

Once the packet is on the outgoing link, the time taken by a bit to travel from the sender to the destination (Host B) is known as propagation delay. It depends on the distance between the sender and receiver and on the propagation speed of the signal.

Propagation delay = (D/S) seconds

D = distance between the sender and receiver
S = propagation speed of the signal
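A worked example of the propagation formula, assuming a 3,000 km path and a signal speed of roughly two-thirds the speed of light (typical for copper and fibre):

```python
# Propagation delay = D / S
D = 3_000_000    # distance between sender and receiver, in metres (3,000 km)
S = 200_000_000  # propagation speed, about 2/3 the speed of light, in m/s

propagation_delay_s = D / S
print(f"Propagation delay: {propagation_delay_s * 1000:.0f} ms")  # prints "Propagation delay: 15 ms"
```

Note that propagation delay depends only on distance and medium, not on packet size; sending a bigger packet changes the transmission delay instead.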

Jitter:

Jitter is, technically, the variance in packet delay. In simple terms, when packets experience different delays, the result is jitter. Suppose one packet has a 10 ms delay and the next a 20 ms delay; the destination application will suffer from jitter. Jitter can cause a flickering display or degraded application performance. Common causes are electromagnetic interference and crosstalk, and it can further lead to packet loss and congestion.
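One simple way to quantify jitter is the average variation between consecutive packet delays. A sketch in Python, using illustrative delay measurements (this is a simplified metric, not a full standards-defined calculation):

```python
# Per-packet one-way delays in milliseconds (illustrative measurements).
delays_ms = [10, 20, 12, 18]

# Jitter as the mean absolute difference between consecutive delays.
variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter_ms = sum(variations) / len(variations)
print(f"Average jitter: {jitter_ms:.1f} ms")  # prints "Average jitter: 8.0 ms"
```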

Packet Loss and Congestion:

When packets arrive at the destination at unexpected or widely varying times, the destination application may be unable to process them. This leads to packet loss. It has a visible negative impact when watching a video or playing a game: missing pixels, stutter, or worse.

Congestion is similar to a traffic jam. When more packets reach a junction in the same interval of time than it can handle, like too many vehicles on a highway at once, none of the packets can be processed promptly. This is congestion.
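Packet loss is usually reported as a percentage of packets sent. A minimal sketch, with made-up counts for illustration:

```python
# Illustrative counters, e.g. as reported by a ping or monitoring tool.
packets_sent = 1000
packets_received = 950

loss_percent = (packets_sent - packets_received) / packets_sent * 100
print(f"Packet loss: {loss_percent:.1f}%")  # prints "Packet loss: 5.0%"
```

For interactive applications such as video calls, even a few percent of loss is typically noticeable.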

Related Terms:

  • Bottleneck: the condition in which data transmission becomes halted or limited. High latency or insufficient processing capacity can cause a bottleneck.
  • Packet switching: a method of data transmission in which data is broken down into chunks called packets. It helps minimize data loss and latency in the network. The destination application reassembles the packets into the original data.
  • Cross-talk: when a signal on one transmission line interferes with the signal on an adjacent line, creating a negative impact on the network.
  • Circuit switching: a method of transmission in which a connection between the two end devices must be set up before the actual transmission. Once the connection is established, the two devices stay connected until the data transmission is complete. An analog telephone network is an example.
  • Hop: a hop occurs each time a packet is passed from one segment of the network to the next.
