Network congestion

Network congestion in data networking is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections.

Networks use congestion control and congestion avoidance techniques to mitigate these effects. These include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/CD in the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers and network switches.
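
As an illustration of the first of these mechanisms, the following sketch shows binary exponential backoff in the style of classic Ethernet: after the n-th successive collision, a station waits a random number of slot times drawn from 0 to 2^n - 1, capped at 1023 slots. The constants and function name are illustrative, not taken from any standard's reference code.

    import random

    SLOT_TIME_US = 51.2   # slot time of classic 10 Mbit/s Ethernet, in microseconds
    MAX_EXPONENT = 10     # the backoff window is capped at 2**10 - 1 = 1023 slots

    def backoff_delay_us(collision_count: int) -> float:
        """Random backoff delay after `collision_count` successive collisions."""
        exponent = min(collision_count, MAX_EXPONENT)
        slots = random.randint(0, 2 ** exponent - 1)  # uniform over a window that doubles each time
        return slots * SLOT_TIME_US

    # The expected wait roughly doubles with each successive collision,
    # spreading retransmissions out as contention increases.
    for n in range(1, 6):
        print(n, backoff_delay_us(n))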

Network resources are limited, including router processing time and link throughput.[2]

Even on fast computer networks, the backbone can easily be congested by a few servers and client PCs.

Congestion collapse generally occurs at choke points in the network, where incoming traffic exceeds outgoing bandwidth.

When a network is in this condition, it settles into a stable state where traffic demand is high but little useful throughput is available, packet delay and loss are high, and quality of service is extremely poor.[3]

Congestion collapse was first observed on the early Internet in October 1986,[4] when the NSFNET phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s,[5] a condition that persisted until end nodes started implementing Van Jacobson and Sally Floyd's congestion control between 1987 and 1988.

In the mathematical theory of congestion control, each flow is assigned a transmission rate $x$, and $U(x)$ is taken to be an increasing, strictly concave function, called the utility, which measures how much benefit a user obtains by transmitting at rate $x$.
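
A sketch of the resulting resource-allocation problem, in its usual formulation (symbols here are chosen for illustration): the network chooses the rate $x_i$ of every flow $i$ so as to maximize total utility without exceeding the capacity of any link,

    \max_{x \ge 0} \; \sum_i U_i(x_i)
    \quad \text{subject to} \quad
    \sum_{i \,:\, \text{flow } i \text{ traverses link } \ell} x_i \;\le\; c_\ell
    \quad \text{for every link } \ell,

where $c_\ell$ is the capacity of link $\ell$. Strict concavity of each $U_i$ models diminishing returns, which is what drives the optimum toward a fair sharing of the links rather than letting a single flow take all the capacity.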

Congestion control algorithms can be classified in several ways, and a number of mechanisms have been invented to prevent network congestion or to deal with a network collapse. The correct endpoint behavior is usually to repeat dropped information, but progressively slow the repetition rate.

Provided all endpoints do this, the congestion lifts and the network resumes normal behavior.[citation needed]

Other strategies such as slow start ensure that new connections don't overwhelm the router before congestion detection initiates.
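
A minimal sketch of the slow-start idea (constants are illustrative and do not match any particular TCP implementation): the congestion window starts small and doubles every round trip until a threshold is reached, so a new connection probes for capacity instead of flooding the path at once.

    MSS = 1460             # assumed segment size in bytes
    SSTHRESH = 64 * 1024   # assumed slow-start threshold in bytes

    def slow_start_windows(rounds: int):
        """Yield the congestion window (in bytes) for each round trip."""
        cwnd = 10 * MSS    # a common modern initial window of 10 segments
        for _ in range(rounds):
            yield cwnd
            if cwnd < SSTHRESH:
                cwnd *= 2      # exponential growth while below the threshold
            else:
                cwnd += MSS    # then linear (congestion-avoidance) growth

    print(list(slow_start_windows(6)))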

Common router congestion avoidance mechanisms include fair queuing and other scheduling algorithms, and random early detection (RED) where packets are randomly dropped as congestion is detected.

This proactively triggers the endpoints to slow transmission before congestion collapse occurs.
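
The drop decision in RED can be sketched as follows (a simplified version; thresholds and the averaging weight are illustrative, and real implementations add refinements such as spacing out successive drops): the device tracks an exponentially weighted average queue length and drops arriving packets with a probability that rises linearly between a minimum and a maximum threshold.

    import random

    MIN_TH, MAX_TH = 5, 15   # illustrative queue-length thresholds, in packets
    MAX_P = 0.1              # drop probability reached at MAX_TH
    WEIGHT = 0.002           # weight of the moving average

    avg_queue = 0.0

    def red_should_drop(current_queue_len: int) -> bool:
        """Decide whether RED drops the arriving packet."""
        global avg_queue
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
        if avg_queue < MIN_TH:
            return False     # short queue: never drop
        if avg_queue >= MAX_TH:
            return True      # long queue: always drop
        # In between, drop probability grows linearly with the average queue length.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p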

Some end-to-end protocols are designed to behave well under congested conditions; TCP is a well known example.

The first TCP implementations to handle congestion were described in 1984,[8] but Van Jacobson's inclusion of an open source solution in the Berkeley Software Distribution UNIX ("BSD") in 1988 first provided good behavior.
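
The behavior introduced there is, in essence, additive-increase/multiplicative-decrease (AIMD): grow the congestion window by about one segment per round trip, and halve it when loss signals congestion. A minimal sketch (simplified; real TCP variants differ in many details):

    def aimd_step(cwnd: float, loss_detected: bool, mss: float = 1460.0) -> float:
        """One round-trip update of the congestion window under AIMD."""
        if loss_detected:
            return max(mss, cwnd / 2)   # multiplicative decrease: halve the window
        return cwnd + mss               # additive increase: one segment per round trip

    # The window grows linearly, then halves when a loss is seen.
    cwnd = 10 * 1460.0
    for loss in [False, False, False, True, False]:
        cwnd = aimd_step(cwnd, loss)
        print(cwnd)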

Where packet drops are unacceptable, special measures, such as quality of service, must be taken to keep packets from being dropped in the presence of congestion.[10][11][12][13][14]

Problems occur when concurrent TCP flows experience tail-drops, especially when bufferbloat is present.

This delayed packet loss interferes with TCP's automatic congestion avoidance.
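
The extra delay that an over-sized buffer can add is easy to estimate (a worked example with assumed numbers, not measurements): a full first-in-first-out buffer of B bytes drained at a link rate of R bits per second adds up to 8B/R seconds of queueing delay before any packet is actually lost.

    def queueing_delay_s(buffer_bytes: int, link_bps: float) -> float:
        """Worst-case delay added by a full FIFO buffer drained at link rate."""
        return 8 * buffer_bytes / link_bps

    # Example: a 1 MiB buffer in front of a 10 Mbit/s uplink can add roughly 0.84 s
    # of delay, postponing the packet loss that TCP relies on to detect congestion.
    print(queueing_delay_s(1024 * 1024, 10e6))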

One solution is to use random early detection (RED) on the network equipment's egress queue.[15][16]

On networking hardware ports with more than one egress queue, weighted random early detection (WRED) can be used.

Explicit congestion notification (ECN) marking is better than the indirect congestion notification signaled by the packet loss of the RED/WRED algorithms, but it requires support by both hosts.

The L4S protocol is an enhanced version of ECN which allows senders to collaborate with network devices to control congestion.
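
A sketch of the explicit-notification idea (the field and function names below are simplified for illustration and do not reflect the actual IP/TCP header handling): instead of dropping, a congested device marks the packet, the receiver echoes the mark back to the sender, and the sender reduces its window as if a loss had occurred, but without any retransmission.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        ce_marked: bool = False   # "congestion experienced" mark set by a network device

    def forward(pkt: Packet, queue_is_congested: bool) -> Packet:
        """An ECN-capable device marks instead of dropping when congestion builds."""
        if queue_is_congested:
            pkt.ce_marked = True
        return pkt

    def sender_react(cwnd: float, echoed_ce: bool) -> float:
        """Classic ECN reaction: treat an echoed mark like a loss and halve the window."""
        return cwnd / 2 if echoed_ce else cwnd

    pkt = forward(Packet(), queue_is_congested=True)
    print(sender_react(cwnd=64_000.0, echoed_ce=pkt.ce_marked))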

When an application requests a large file, graphic or web page, it usually advertises a TCP receive window of between 32 KB and 64 KB.

When many applications simultaneously request downloads, this data can create a congestion point at an upstream provider.
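
The leverage that the advertised window gives follows from the basic relation that a sender can keep at most one window of data in flight per round trip, so a single flow's throughput is bounded by window / RTT. A worked example with assumed numbers:

    def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
        """Upper bound on a single TCP flow's throughput: one window per round trip."""
        return 8 * window_bytes / rtt_s

    # Example: a 64 KB window over a 50 ms path allows at most about 10.5 Mbit/s,
    # so shrinking the advertised window directly reduces the offered load.
    print(max_throughput_bps(64 * 1024, 0.050))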

Effective congestion notifications can be propagated to transport layer protocols, such as TCP and UDP, for the appropriate adjustments.

WiFi, 3G and other networks with a radio layer are susceptible to data loss due to interference and may experience poor throughput in some cases.

The TCP connections running over a radio-based physical layer see the data loss and tend to erroneously believe that congestion is occurring.

Initial performance can be poor, and many connections never get out of the slow-start regime, significantly increasing latency.
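
The effect can be illustrated with a toy model of a loss-based sender that halves its window on every loss, whether the loss came from congestion or from radio interference (parameters here are assumptions chosen for illustration, not measurements of any real network):

    import random

    def average_window(rounds: int, random_loss_rate: float) -> float:
        """Average congestion window of an AIMD sender facing random, non-congestion loss."""
        mss, cwnd, total = 1460.0, 10 * 1460.0, 0.0
        for _ in range(rounds):
            if random.random() < random_loss_rate:
                cwnd = max(mss, cwnd / 2)   # radio loss misread as congestion
            else:
                cwnd += mss
            total += cwnd
        return total / rounds

    # Even a 2% random loss rate keeps the average window, and hence throughput, low.
    print(average_window(10_000, 0.02))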

Admission control is any system that requires devices to receive permission before establishing new network connections.

Examples include Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard for home networking over legacy wiring, Resource Reservation Protocol for IP networks and Stream Reservation Protocol for Ethernet.
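
A minimal sketch of the admission decision itself (class and parameter names are invented for illustration and are not taken from any of the cited standards): a new flow is admitted only if its requested bandwidth still fits within the unreserved capacity of the link.

    class AdmissionController:
        """Admit a new reservation only if enough link capacity remains."""

        def __init__(self, link_capacity_bps: float):
            self.capacity = link_capacity_bps
            self.reserved = 0.0

        def request(self, bandwidth_bps: float) -> bool:
            if self.reserved + bandwidth_bps <= self.capacity:
                self.reserved += bandwidth_bps
                return True    # reservation granted
            return False       # rejected: admitting the flow would congest the link

    ac = AdmissionController(link_capacity_bps=100e6)
    print(ac.request(60e6), ac.request(60e6))   # True, then False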