
A network of wires, not wireless: As we continually learn, wireless is different. Wireless systems typically have higher bit error rates (BERs) than wire-based carriage systems. Mobile wireless systems also introduce signal fade, base-station handover, and variable levels of load. TCP was designed with wire-based carriage in mind, and the design of the protocol makes numerous assumptions typical of such an environment. TCP assumes that packet loss is the result of network congestion rather than bit-level corruption. TCP also assumes some level of stability in the RTT, because TCP damps down changes in its RTT estimate.

A best-path route-selection protocol: TCP assumes that there is a single best-metric path to any destination, because TCP assumes that packet reordering occurs on a relatively minor scale, if at all. This implies that all packets in a connection must follow the same path within the network or, if there is any form of load balancing, that the order of packets within each flow is preserved by some network-level mechanism.

A network with fixed bandwidth circuits, not varying bandwidth: TCP assumes that available bandwidth is constant and will not vary over short time intervals. TCP uses an end-to-end control loop to govern the sending rate, and it takes many RTT intervals to adjust to varying network conditions. Rapidly changing bandwidth therefore forces TCP to make very conservative assumptions about available network capacity.

A switched network with first-in, first-out (FIFO) buffers: TCP also makes some assumptions about the architecture of the switching elements within the network. In particular, TCP assumes that the switching elements use simple FIFO queues to resolve contention within the switches. TCP makes some assumptions about the size of the buffer as well as its queuing behavior, and TCP works most efficiently when the buffer associated with a network interface is of the same order of size as the delay-bandwidth product of the associated link.
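The buffer-sizing rule of thumb above is simple arithmetic; the following sketch computes it, with the link speed and RTT chosen purely as illustrative assumptions:

```python
def delay_bandwidth_product(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Return the delay-bandwidth product of a link in bytes."""
    return bandwidth_bps * rtt_seconds / 8


# Illustrative values: a 155-Mbps link with a 100-ms round-trip time.
buffer_bytes = delay_bandwidth_product(155e6, 0.100)
print(f"Suggested interface buffer: {buffer_bytes / 1e6:.2f} MB")  # 1.94 MB
```

A buffer much smaller than this cannot absorb a full window's worth of in-flight data during a burst; a buffer much larger simply adds queuing delay.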

The duration of TCP sessions: TCP also makes some assumptions about the nature of the application. In particular, it assumes that the TCP session will last for some number of round-trip times, so that the overhead of the initial protocol handshake is not detrimental to the efficiency of the application. TCP also takes numerous RTT intervals to establish the characteristics of the connection, in terms of both the true RTT of the connection and the available capacity. The introduction of short-duration sessions, such as those found in transaction applications and short Web transfers, is a new factor that impacts the efficiency of TCP.
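As a rough illustration of why short sessions fare poorly, the sketch below counts how many RTTs the slow-start phase (which doubles the congestion window each RTT) needs to reach a given window; the window sizes used are assumed values, not from the text:

```python
import math


def slow_start_rounds(target_segments: int, initial_window: int = 1) -> int:
    """RTT intervals for slow start (window doubling each RTT) to reach the
    target window, assuming no loss and ignoring delayed-ACK effects."""
    return math.ceil(math.log2(target_segments / initial_window))


# A path that needs ~100 segments in flight takes about 7 RTTs to ramp up;
# a 10-packet Web transfer is over before TCP ever discovers that capacity.
print(slow_start_rounds(100))  # 7
```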

Large payloads and adequate bandwidth: TCP assumes that the overhead of a minimum of 40 bytes of protocol per TCP packet (20 bytes of IP header and 20 bytes of TCP header) is acceptable when compared to the available bandwidth and the average payload size. On low-bandwidth links this is no longer the case, and the protocol overheads may make the resultant communications system too inefficient to be useful.
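The overhead argument reduces to a simple ratio; this sketch computes the header share of each packet for an assumed bulk-transfer segment size and an assumed small interactive payload:

```python
def header_overhead_fraction(payload_bytes: int, header_bytes: int = 40) -> float:
    """Fraction of each packet consumed by the 40 bytes of IP + TCP headers."""
    return header_bytes / (payload_bytes + header_bytes)


# Bulk transfer with 1460-byte segments: headers cost under 3 percent.
print(f"{header_overhead_fraction(1460):.1%}")  # 2.7%
# Small interactive payloads: headers consume half the packet.
print(f"{header_overhead_fraction(40):.1%}")    # 50.0%
```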

Interaction with other TCP sessions: TCP assumes that other TCP sessions will also be active within the network, and that each TCP session should operate cooperatively to share the available bandwidth in order to maximize network efficiency. TCP may not interact well with other forms of flow-control protocols, and this could result in unpredictable outcomes in the sharing of network resources between the active flows, as well as poor overall network efficiency.

ACK Pacing: Each burst of data packets generates a corresponding burst of ACK packets, and the spacing of these ACK packets determines the burst rate of the sender's next packet sequence. For long-delay systems, the size of such bursts becomes a limiting factor. TCP slow start generates packet bursts at twice the bottleneck data rate, so the bottleneck feeder router may have to absorb one-half of every packet burst within its internal queues. If these queues are not dimensioned to the delay-bandwidth product of the next hop, the queues become the limiting factor, rather than the path bandwidth itself. Slowing down the TCP burst rate alleviates the pressure on the feeder queue. One approach is to impose a delay on successive ACKs at a network control point (Figure 3). This measure reduces the burst rate without impacting the overall TCP throughput. ACK pacing is most effective on long-delay paths; it is intended to spread out the burst load, reducing the pressure on the bottleneck queue and increasing the actual data throughput.
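A toy model of the ACK-pacing idea: the control point releases the same number of ACKs per RTT (so throughput is preserved) but enforces a minimum interval between them, flattening the burst. The arrival pattern and pacing interval here are illustrative assumptions:

```python
def pace_acks(ack_arrival_times, interval):
    """Release each ACK no earlier than `interval` after the previous release."""
    released = []
    next_release = 0.0
    for t in ack_arrival_times:
        release = max(t, next_release)
        released.append(release)
        next_release = release + interval
    return released


# Ten ACKs arriving back-to-back at t=0, paced at 10-ms spacing:
paced = pace_acks([0.0] * 10, interval=0.010)
print([round(t, 3) for t in paced])
# [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09]
```

The last ACK is delayed by less than one 100-ms RTT, so the ACK clock, and hence the sending rate per RTT, is preserved while the instantaneous burst is smoothed out.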

Window Manipulation: Each ACK packet carries a receiver window size, and this advertised window determines the maximum burst size available to the sender. Manipulating this window size downward allows a control point to cap the maximum TCP sending rate. This manipulation can be done as part of a traffic-shaping control point, enforcing bandwidth limitations on a flow or set of flows.
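The arithmetic behind such a shaper is just rate = window / RTT; this sketch (with an assumed target rate and RTT) clamps the advertised window accordingly:

```python
def clamp_advertised_window(window_bytes: int, target_rate_bps: float,
                            rtt_seconds: float) -> int:
    """Rewrite the receiver window so the flow cannot exceed the target rate:
    since rate <= window / RTT, cap the window at target_rate * RTT (in bytes)."""
    ceiling = int(target_rate_bps * rtt_seconds / 8)
    return min(window_bytes, ceiling)


# Cap a flow at 1 Mbps over a 100-ms RTT: the window ceiling is 12,500 bytes.
print(clamp_advertised_window(65535, 1e6, 0.100))  # 12500
```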

Both of these formulae assume that the TCP receiver window is not limiting the performance of the connection in any way. Because the receiver window is entirely determined by the end hosts, we assume that hosts will maximize the announced receiver window in order to maximize their network performance.

Both of these formulae allow the bandwidth to become infinite if there is no loss. In practice, an Internet path will drop packets at bottleneck queues if the load is too high; thus, a completely lossless TCP/IP network can never occur (unless the network is being underutilized).

The RTT used is the average RTT, including queuing delays.

The formulae are calculations for a single TCP connection. If a path carries many TCP connections, each will follow the formulae above independently.

The formulae assume long-running TCP connections. For connections that are extremely short (<10 packets) and lose no packets, performance is driven by the TCP slow-start algorithm. For connections of medium length, where on average only a few segments are lost, single-connection performance will actually be slightly better than given by the formulae above.

The difference between the simple and complex formulae above is that the complex formula includes the effects of TCP retransmission timeouts. For very low levels of packet loss (significantly less than 1 percent), timeouts are unlikely to occur, and the formulae lead to very similar results. At higher packet losses (1 percent and above), the complex formula gives a more accurate estimate of performance (which will always be significantly lower than the result from the simple formula).
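The formulae themselves are not reproduced in this passage, but they are presumably the widely used Mathis et al. approximation (the simple formula) and the Padhye et al. model (the complex formula, which adds retransmission-timeout effects). A sketch under that assumption, with illustrative MSS, RTT, and retransmission-timeout values:

```python
import math


def simple_rate(mss: int, rtt: float, p: float) -> float:
    """Mathis et al. approximation: rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p), in bytes/s."""
    return (mss / rtt) * math.sqrt(1.5 / p)


def complex_rate(mss: int, rtt: float, p: float,
                 t0: float = 1.0, b: int = 1) -> float:
    """Padhye et al. model including retransmission timeouts, in bytes/s;
    b is the number of packets acknowledged per ACK, t0 the RTO in seconds."""
    rto_term = t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return mss / (rtt * math.sqrt(2 * b * p / 3) + rto_term)


# At very low loss the two estimates converge; at 1 percent loss and above
# the complex formula gives a noticeably lower (more accurate) figure.
mss, rtt = 1460, 0.100
for p in (0.0001, 0.01, 0.05):
    print(f"p={p}: simple={simple_rate(mss, rtt, p):,.0f} B/s, "
          f"complex={complex_rate(mss, rtt, p):,.0f} B/s")
```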