Embedded Web Server for the CR16
maximum number of bytes that it may send in any one segment. If not explicitly advertised, a
default value of 536 bytes is assumed. This is based upon the fact that user access to TCP
was originally envisioned to be analogous to that of a file system. Users would read and
write data just as they would read and write files on a disk. To that end, the 512-byte sector
size utilized by common file systems would serve as the basis for TCP's default MSS. Add in
TCP's header overhead of 24 bytes (the standard 20-byte header plus one option 4 bytes in
length) and you arrive at the 536 figure. (The Internet Protocol (IP) follows similar logic,
establishing a minimum datagram size of 576 bytes that all IP implementations must be
prepared to accept.)
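The arithmetic above can be sketched in a few lines of C. The constants are assumptions drawn from the text (a 512-byte sector, a 20-byte TCP header, one 4-byte option, and a 20-byte IP header), not values from any particular TCP implementation:

```c
/* Illustrative constants only -- these mirror the derivation in the text,
   not any specific protocol stack's configuration. */
#define SECTOR_SIZE 512  /* classic file-system sector size           */
#define TCP_HDR_LEN 20   /* TCP header without options                */
#define TCP_OPT_LEN 4    /* one 4-byte option (e.g. the MSS option)   */
#define IP_HDR_LEN  20   /* IP header without options                 */

/* 512 + 20 + 4 = 536: TCP's default MSS when none is advertised */
int tcp_default_mss(void)
{
    return SECTOR_SIZE + TCP_HDR_LEN + TCP_OPT_LEN;
}

/* 536 + 20 + 20 = 576: the minimum datagram size every IP must accept */
int ip_min_datagram(void)
{
    return tcp_default_mss() + TCP_HDR_LEN + IP_HDR_LEN;
}
```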
The Link layer's Maximum Transmission Unit (MTU):
But since the other end knows the maximum number of bytes it can transmit in a segment, why
does it require another parameter called a Window? Quite simply, the MSS value advertised
during SYN is usually unrelated to the receiving application's total buffer capacity. In other
words, the MSS value learned during SYN is typically governed by the underlying Link layer's
Maximum Transmission Unit (MTU), otherwise referred to as its Frame size. Ethernet frames,
for example, are limited to 1500 bytes. Consequently, a TCP operating over an Ethernet would
likely advertise an MSS of no greater than about 1460 bytes (allowing 40 or so bytes to account
for the lower layers' headers). This is important for two reasons:
- A TCP must do its best to ensure that its segments can be transmitted over its LAN
  unfragmented, i.e. in one piece (more on this fragmentation thing a bit later).
- Efficiency is improved by transmitting the largest possible segment supported by the
  underlying network. This is obvious, since it reduces the number of TCP segments that
  must be transmitted, as well as the number of corresponding ACK packets sent by the
  receiver.
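Deriving an advertised MSS from the link MTU can be sketched as below, assuming the common case of 20-byte IP and TCP headers with no options (a real stack would also account for any options it intends to carry):

```c
/* Header sizes assumed for this sketch: base headers, no options. */
enum { SKETCH_IP_HDR = 20, SKETCH_TCP_HDR = 20 };

/* The largest TCP payload that fits in one link-layer frame:
   for Ethernet's 1500-byte MTU this yields 1460. */
int mss_from_mtu(int mtu)
{
    return mtu - SKETCH_IP_HDR - SKETCH_TCP_HDR;
}
```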
However, if the application's receive buffers are larger than the Link layer's MTU, a TCP may be
capable of receiving multiple frames of data prior to pushing this data through to the
application. In this case the sending TCP may immediately transmit several segments, instead
of simply sending them one at a time and waiting for a confirmatory ACK after each one.
Since datagrams traversing the internet may encounter sundry, unpredictable delays, requiring
a transmitter to wait for the peer to ACK every segment before sending another would result in
a great deal of wasted time.
So this is why TCP includes the Window. The judicious use of the Window helps minimize such
waste by allowing the transmitter to send as much data as the peer is capable of accepting,
without having to wait for an ACK of each individual segment. Certainly, the receiver must still
acknowledge every segment, but it can do so in an aggregate manner instead of one at a time.
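A minimal sketch of this window-limited sending decision follows. The function names and the example window size (8760 bytes, six Ethernet-sized segments) are illustrative assumptions, not part of any real stack's API:

```c
/* How many more bytes the sender may put on the wire right now:
   the peer's advertised window minus data already sent but not yet ACKed. */
unsigned window_allows(unsigned snd_wnd, unsigned in_flight)
{
    return snd_wnd > in_flight ? snd_wnd - in_flight : 0;
}

/* How many full MSS-sized segments the sender can fire off back to back
   before it must stop and wait for the peer's (possibly aggregate) ACK. */
unsigned segments_sendable(unsigned snd_wnd, unsigned in_flight, unsigned mss)
{
    return window_allows(snd_wnd, in_flight) / mss;
}
```

With an 8760-byte window, nothing in flight, and a 1460-byte MSS, the sender may transmit six segments in a burst rather than trading one segment per ACK round trip.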
Silly Window Syndrome (SWS):
One important consideration for any TCP's Window management scheme is the nefarious Silly
Window Syndrome, or SWS. Since it was first encountered by a Professor on acid, it has
generated a great deal of press and seems to be a favorite buzzword of many armchair Internet
experts.
SWS is an unforeseen weakness in a literal, straightforward implementation of the window
management scheme suggested in RFC 793, exposed somehow or other by the original
Telnet application. Simply defined, Silly Window Syndrome is a stable pattern of small