IT Questions and Answers :)

Sunday, August 25, 2019

While troubleshooting data transfers to an outside business partner, you're unable to reliably upload files to their site each day. You find no packet loss but observe varying degrees of RTT. Which TCP congestion control algorithm (CCA) would be best suited to this problem?

  • TCP Vegas [Please comment if this is incorrect]
  • TCP Tahoe
  • TCP New Reno
  • TCP Reno

EXPLANATION

The scenario describes no packet loss but varying round-trip times, so TCP Vegas is the best fit: it is the only option that is delay-based, adjusting its congestion window from measured RTT rather than waiting for loss. The other three are loss-based and are summarized below for comparison.

TCP Tahoe
When a loss occurs, a fast retransmit is sent, half of the current CWND is saved as ssthresh, and slow start begins again from the initial CWND. Once the CWND reaches ssthresh, TCP switches to the congestion avoidance algorithm, where each new ACK increases the CWND by MSS * MSS / CWND. This results in a linear increase of the CWND of roughly one MSS per RTT.
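
As a rough illustration of that window behaviour, here is a minimal Python sketch (not from any real TCP stack; the class name, initial CWND and ssthresh values are made up for illustration) of how Tahoe reacts to ACKs and to a detected loss:

    # Minimal, illustrative sketch of TCP Tahoe's window logic.
    MSS = 1  # count the window in segments for simplicity

    class TahoeSender:
        def __init__(self):
            self.cwnd = 1 * MSS          # initial congestion window
            self.ssthresh = 64 * MSS     # arbitrary initial threshold

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                # slow start: exponential growth, +1 MSS per ACK
                self.cwnd += MSS
            else:
                # congestion avoidance: ~+1 MSS per RTT (MSS*MSS/cwnd per ACK)
                self.cwnd += MSS * MSS / self.cwnd

        def on_loss(self):
            # loss detected: halve the window into ssthresh, restart slow start
            self.ssthresh = max(self.cwnd / 2, 2 * MSS)
            self.cwnd = 1 * MSS

    if __name__ == "__main__":
        s = TahoeSender()
        for _ in range(10):
            s.on_ack()
        print("cwnd after 10 ACKs:", s.cwnd)
        s.on_loss()
        print("cwnd after loss:", s.cwnd, "ssthresh:", s.ssthresh)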

TCP Reno
When a loss is detected by duplicate ACKs, a fast retransmit is sent; half of the current CWND is saved as ssthresh and also becomes the new CWND, skipping slow start and going directly to the congestion avoidance algorithm. This overall behaviour is called fast recovery.
Once ssthresh is reached, TCP changes from the slow-start algorithm to the linear-growth (congestion avoidance) algorithm. At this point, the window is increased by roughly one segment for each round-trip time (RTT).
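
For comparison, a similarly minimal sketch (again with made-up names and values) of how Reno's fast recovery differs from Tahoe, halving the window instead of collapsing it to one segment:

    # Illustrative sketch of Reno's fast recovery vs. Tahoe's full restart.
    MSS = 1

    class RenoSender:
        def __init__(self):
            self.cwnd = 32 * MSS
            self.ssthresh = 64 * MSS

        def on_triple_dup_ack(self):
            # fast retransmit + fast recovery: halve cwnd, stay in congestion avoidance
            self.ssthresh = max(self.cwnd / 2, 2 * MSS)
            self.cwnd = self.ssthresh      # Tahoe would instead reset cwnd to 1 MSS

        def on_timeout(self):
            # a real timeout is still handled like Tahoe: back to slow start
            self.ssthresh = max(self.cwnd / 2, 2 * MSS)
            self.cwnd = 1 * MSS

    if __name__ == "__main__":
        r = RenoSender()
        r.on_triple_dup_ack()
        print("after fast recovery: cwnd =", r.cwnd, "ssthresh =", r.ssthresh)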
Slow start assumes that unacknowledged segments are due to network congestion. While this is an acceptable assumption for many networks, segments may be lost for other reasons, such as poor data link layer transmission quality. Thus, slow start can perform poorly in situations with poor reception, such as wireless networks.
The slow start protocol also performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow start mode, which resulted in poor response time. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server. Connections, however, cannot be reused for the multiple third-party servers used by web sites to implement web advertising, sharing features of social networking services,[6] and counter scripts of web analytics.

TCP Vegas

Until the mid-1990s, all of TCP's set timeouts and measured round-trip delays were based upon only the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas (named after the largest Nevada city) in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. In a comparison study of various TCP congestion control algorithms, TCP Vegas appeared to be the smoothest followed by TCP CUBIC.[15]
TCP Vegas was not widely deployed outside Peterson's laboratory but was selected as the default congestion control method for DD-WRT firmware v24 SP2.[16]
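
Since Vegas reacts to rising RTT rather than to loss, it matches the scenario in the question. The following minimal sketch (illustrative only; the ALPHA/BETA thresholds, class name, and sample values are assumptions, not code from any real implementation) shows the delay-based adjustment: compare expected and actual throughput, then nudge the window accordingly:

    # Illustrative sketch of TCP Vegas's delay-based window adjustment.
    ALPHA, BETA = 2, 4   # assumed lower/upper bounds on packets queued in the network

    class VegasSender:
        def __init__(self, cwnd=10.0):
            self.cwnd = cwnd
            self.base_rtt = None  # smallest RTT seen so far

        def on_rtt_sample(self, rtt):
            # track the minimum RTT as the "uncongested" baseline
            if self.base_rtt is None or rtt < self.base_rtt:
                self.base_rtt = rtt
            expected = self.cwnd / self.base_rtt        # throughput if no queuing
            actual = self.cwnd / rtt                    # throughput actually seen
            diff = (expected - actual) * self.base_rtt  # estimated packets in queue
            if diff < ALPHA:
                self.cwnd += 1      # path looks underused: additive increase
            elif diff > BETA:
                self.cwnd -= 1      # RTT is inflating: back off before loss occurs
            # otherwise leave cwnd unchanged

    if __name__ == "__main__":
        v = VegasSender()
        for rtt in (0.100, 0.100, 0.140, 0.180, 0.105):
            v.on_rtt_sample(rtt)
            print(f"rtt={rtt:.3f}s cwnd={v.cwnd:.1f}")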

TCP New Reno

TCP New Reno, defined by RFC 6582 (which obsoletes previous definitions in RFC 3782 and RFC 2582), improves retransmission during the fast-recovery phase of TCP Reno. During fast recovery, for every duplicate ACK that is returned to TCP New Reno, a new unsent packet from the end of the congestion window is sent, to keep the transmit window full. For every ACK that makes partial progress in the sequence space, the sender assumes that the ACK points to a new hole, and the next packet beyond the ACKed sequence number is sent.
Because the timeout timer is reset whenever there is progress in the transmit buffer, this allows New Reno to fill large holes, or multiple holes, in the sequence space – much like TCP SACK. Because New Reno can send new packets at the end of the congestion window during fast recovery, high throughput is maintained during the hole-filling process, even when there are multiple holes, of multiple packets each. When TCP enters fast recovery it records the highest outstanding unacknowledged packet sequence number. When this sequence number is acknowledged, TCP returns to the congestion avoidance state.
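
A tiny sketch of the partial-ACK bookkeeping described above (the class name and sequence numbers are invented for illustration): each ACK below the recorded recovery point marks the next hole for retransmission, and an ACK at or above that point ends fast recovery:

    # Illustrative sketch of New Reno's partial-ACK handling during fast recovery.
    class NewRenoRecovery:
        def __init__(self, highest_sent):
            # record the highest outstanding sequence number when recovery starts
            self.recover = highest_sent
            self.retransmitted = []

        def on_ack(self, ack_seq):
            if ack_seq >= self.recover:
                return "exit fast recovery (all holes filled)"
            # partial ACK: the segment just past ack_seq is assumed lost
            self.retransmitted.append(ack_seq)
            return f"retransmit segment starting at {ack_seq}"

    if __name__ == "__main__":
        r = NewRenoRecovery(highest_sent=10_000)
        for ack in (3_000, 6_000, 10_000):   # two holes, then full recovery
            print(r.on_ack(ack))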
A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers. When this happens, New Reno mistakenly enters fast recovery, but when the reordered packet is delivered, ACK sequence-number progress occurs and from there until the end of fast recovery, every bit of sequence-number progress produces a duplicate and needless retransmission that is immediately ACKed.
New Reno performs as well as SACK at low packet error rates, and substantially outperforms Reno at high error rates.