Advanced Topics on TFTP

The TFTP protocol has been with us for quite a long time now. It was initially documented by the Internet Engineering Task Force (IETF) in RFC-783 (1981) and later in its updated version, RFC-1350 (1992). It is considered by many the simplest file transfer protocol, which is the reason it became the favorite choice for embedded devices, or anything with limited hardware resources, that needs to send and/or receive files.

With simplicity in mind, it is not hard to understand why the protocol makes no provision for user authentication. Some people consider TFTP insecure because of this, but taking into account that the protocol does not include file listing/removal capabilities either, many others consider TFTP security "acceptable" in many scenarios. Even today, very famous companies rely on nothing more than concealed HTML URLs when their customers download sensitive material.

The protocol got two major improvements. The first was RFC-2347, which introduced the "option negotiation" framework as a way to dynamically coordinate parameters between requester and provider before a particular transfer begins. Right after it, RFC-2348 introduced the block size ("blksize") as one of the options negotiated using the previously defined framework. This way the fixed 512-byte block size mandated by the original specification could be dynamically "negotiated" up to higher values on a per-transfer basis.

Considering the file transfer itself, the protocol is surprisingly simple. It uses UDP; the protocol can work even without a complete TCP/IP stack implementation, but then TCP is not there guaranteeing packet delivery. As transport and session control there is instead only a rudimentary block retransmission scheme, triggered whenever a missing block or acknowledgment is detected. The data/acknowledgment exchange is performed in lock-step: only one block is sent before the sender stops transmitting and waits for the corresponding block acknowledgment. This last characteristic (single data/acknowledgment block sequence) is really today's TFTP's Achilles' heel: the TFTP transfer rate is very sensitive to the system's latency. Not only does network latency negatively affect TFTP performance; a busy host or client, even over a low-latency network, can be a problem as well.
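A back-of-the-envelope model makes this latency sensitivity concrete. Under the simplifying assumption that each lock-step block costs one full round trip and wire bandwidth is not the bottleneck (the 2 ms RTT default is purely illustrative), the total transfer time grows linearly with the block count:

```python
import math

def lockstep_time(file_bytes, blksize=512, rtt_s=0.002):
    """Rough RFC-1350 lock-step transfer time estimate: every DATA/ACK
    pair costs one round trip; wire bandwidth is ignored, a fair
    simplification on a LAN where latency, not speed, dominates.
    The 2 ms default RTT is an illustrative assumption."""
    blocks = math.ceil(file_bytes / blksize)
    return blocks * rtt_s

# A 180 MB image at the default 512-byte block size and a 2 ms RTT:
# 368640 blocks, i.e. roughly 737 seconds spent mostly waiting.
t = lockstep_time(180 * 1024 * 1024)
```

Note how the block size and the per-block round trip, not the link speed, set the transfer time under this model.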

Let's see what a typical TFTP file transfer looks like when captured with a network sniffer (Wireshark):

192.168.20.30 -> 192.168.20.1 TFTP Read Req, File: pxeserva.0\0, Trans typ: octet\0 blksize\0 = 1456\0

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 6

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

List 1: A typical RFC-1350 file transfer

It is easy to see in List 1 the file request, its option acknowledgment, another acknowledgment (Block: 0), and finally the transfer itself, where for every data packet sent the sender synchronously stops and waits for the corresponding acknowledgment.
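For reference, the read request at the top of List 1 has a very simple wire format: a big-endian 2-byte opcode followed by NUL-terminated strings, with RFC-2347 option name/value pairs appended after the transfer mode. A minimal sketch (the filename and blksize value are the ones from List 1):

```python
import struct

OP_RRQ = 1  # RFC-1350 opcodes: 1 RRQ, 2 WRQ, 3 DATA, 4 ACK, 5 ERROR; RFC-2347 adds 6 OACK

def build_rrq(filename, mode="octet", options=None):
    """Build a TFTP read request: a 2-byte big-endian opcode followed by
    NUL-terminated strings; RFC-2347 option name/value pairs come after
    the mode, and the server acknowledges them with an OACK packet."""
    pkt = struct.pack("!H", OP_RRQ) + filename.encode() + b"\0" + mode.encode() + b"\0"
    for name, value in (options or {}).items():
        pkt += name.encode() + b"\0" + str(value).encode() + b"\0"
    return pkt

rrq = build_rrq("pxeserva.0", options={"blksize": 1456})
# b'\x00\x01pxeserva.0\x00octet\x00blksize\x001456\x00'
```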

Since RFC-906 "Bootstrap Loading using TFTP" (1984), passing through Intel's PXE "Preboot Execution Environment" (1999), up to today's version of Microsoft's WDS and derivatives, TFTP has always been the protocol of choice for the early stages of every well-known net boot/install strategy. The TFTP code used by the initial PXE load is actually embedded in the target's network card. It is this embedded code that, after an initial DHCP transaction, requests from the TFTP server the required bootstrap loader and the files that follow.

Bootstrap loaders are very small files (~20 KB) that cannot trigger anyone's anxiety even if the transfer rate is not the best. But the bootstrap loader has to transfer heavier components right away, also by TFTP. This is the moment when the lack of performance begins to be noticeable. Today some TFTP transfers can be very heavy; e.g. net booting MS Windows 8 involves the TFTP transfer of a 180 MB Windows PE component used in its install procedure. This can be considered heavy stuff for a regular RFC-1350 TFTP transfer.
At this point, when today's networks are pretty reliable, when the TFTP server usually resides on the same host as the DHCP server, and when this host is usually connected to the same (allow me to say) collision domain the PXE target is connected to (no routing), it becomes obvious that TFTP's original lock-step strategy has room for improvement.

The first attempts in this direction were carried out by TFTP servers implementing a "windowed" version of the protocol. The basic idea here is to send N consecutive blocks instead of just one, then stop and wait for the sequence of the N corresponding acknowledgments before repeating the cycle. Some people called this "pipelined" TFTP; I personally consider "windowed" a more appropriate term.
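The windowed cycle can be sketched as a simple send loop; here `send_data(n, payload)` and `recv_ack()` are hypothetical stand-ins for the real UDP I/O, and timeout/retransmission handling is omitted for clarity:

```python
def send_windowed(blocks, window, send_data, recv_ack):
    """Sketch of the windowed cycle: fire `window` consecutive DATA
    packets, then stop and collect the matching ACKs before starting
    the next batch.  `send_data(n, payload)` and `recv_ack()` are
    hypothetical stand-ins for the UDP I/O; error recovery is omitted."""
    first = 1
    while first <= len(blocks):
        batch = range(first, min(first + window, len(blocks) + 1))
        for n in batch:                      # N blocks back to back
            send_data(n, blocks[n - 1])
        for n in batch:                      # then wait for the N ACKs
            assert recv_ack() == n
        first += window
```

With window=1 this degenerates to the classic lock-step exchange, which is exactly why a window of one block is indistinguishable from plain RFC-1350.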

192.168.20.30 -> 192.168.20.1 TFTP Read Req, File: pxeserva.0\0, Trans typ: octet\0 blksize\0 = 1456\0

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 1

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 2

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 3

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 6

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 7

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 9

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 10

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 11

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

List 2: A classic RFC-1350 client against a "windowed" TFTP server using window-size=4

In List 2 we see the sliding window of 4 blocks go from server to client in a single batch; next the server stops and waits for the sequence of 4 acknowledgments from the client. The first depicted transfer of this 12-block file took 0.122008 s, the second one 0.112087 s. The difference may look small, but please consider that for the purposes of this particular test I used a very small file.

Without a doubt this first step away from the lock-step scheme represents an improvement, but the server still has to wait for, and deal with, a lot of unnecessary acknowledgments sent by the unaware client; moreover, we do not know whether the error recovery capabilities might be somehow affected on some clients.
As an example, let's simulate an error condition with Serva and see how two different TFTP clients react to a missing packet. The error condition is triggered by not sending block #6 on the first try. The test was conducted on a PXE booting session; the first run corresponds to the TFTP client driven by the network card while initially transferring pxeserva.0 (pxelinux). The second run corresponds to the TFTP client driven by pxeserva.0 itself while transferring (during the same PXE session) the next booting component, vesamenu.c32.

192.168.20.30 -> 192.168.20.1 TFTP Read Request, File: pxeserva.0\000, Transfer type: octet\000

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 1

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 2

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 3

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7 <- block #6 is missing

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5 <- last well rcvd pkt ack

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6 <- Serv timeout + error recovery

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 6

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 7

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 9

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 10

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 11

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

List 3: TFTP client driven by the NIC's firmware

In List 3 we can see block #6 gets lost; the last acknowledgment the client sends is #5; the server times out waiting for the rest of the acknowledgments and decides to resend the part of the window that was not acknowledged. The recovery worked and the booting process continued.

 

192.168.20.30 -> 192.168.20.1 TFTP Read Request, File: vesamenu.c32\000, Transfer type: octet\000

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 1

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 2

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 3

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7 <- block #6 is missing

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 7 <- client mistake

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8 <- client mistake

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5 <- client resend acks

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6 <- Serv timeout + error recovery

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 6

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 7

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 9

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 10

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 11

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

...

List 4: TFTP client driven by pxeserva.0 (pxelinux)

In List 4 we see a different reaction to the same error. The client detects the missing block but "mistakenly" acknowledges blocks #7 & #8. Then it starts resending the acknowledgment of the last block correctly received. As in the previous run, the server times out waiting for the missing acknowledgments and decides to resend the part of the window that was not acknowledged. The error recovery worked and the booting process continued.

The last two examples show how a windowed TFTP strategy improves speed while "sometimes" keeping the error recovery capabilities intact. Remember this might not always be the case, and you should perform your own tests with your own RFC-1350 TFTP clients.

 

"windowsize" Negotiation

The biggest issue with the windowed approach described so far is that there is no provision for server and client to agree on whether to use this modified TFTP protocol or the classic RFC-1350 protocol. From the server's point of view there is then no flexible way to support both methods at the same time.

The next obvious step was to define an option to be negotiated under the terms of RFC-2347 before a particular transfer begins. The option's natural name is "windowsize", and it is used today by Microsoft WDS and derivatives since Vista SP1 and by Serva since v2.0.0. The corresponding draft, which we presented in Mar-2012, was later published in Jan-2015 as the IETF Standards Track RFC-7440 "TFTP Windowsize Option".
When the MS WDS client is told to use this option by the corresponding parameter in the previously retrieved BCD, it proposes to the TFTP server the window size specified by the BCD parameter, but never higher than 64 blocks. When the server negotiates the option it can take the proposed value, but it can also reply with a value smaller than the one initially proposed.
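The server's side of this negotiation boils down to taking the smaller of the proposed value and its own configured limit. A minimal sketch (the `server_limit` parameter is my assumption of how a configurable cap, such as Serva's 16-block default, would be modeled):

```python
def negotiate_windowsize(proposed, server_limit=16):
    """Under RFC-2347 the server may accept the proposed option value
    or answer with a smaller one, never a larger one.  `server_limit`
    models a configurable server-side cap (hypothetical parameter);
    replying 1 degrades the transfer to plain RFC-1350 lock-step."""
    return max(1, min(proposed, server_limit))

negotiate_windowsize(64, server_limit=4)  # the List 5 case: counter-offer of 4
negotiate_windowsize(2)                   # the client's smaller value stands: 2
```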

In the next run we see a transfer where an MS WDS TFTP client negotiates the "windowsize" option against the Serva TFTP server. The client initially proposes windowsize=64 but finally accepts Serva's "counter offer" of windowsize=4.

192.168.20.30 -> 192.168.20.1 TFTP Read Request, File: \WIA_WDS\w8_DevPrev\_SERVA_\boot\ServaBoot.wim\0, Transfer type: octet\0, tsize\0=0\0, blksize\0=1456\0, windowsize\0=64\0

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement, tsize\0=189206885\0, blksize\0=1456\0, windowsize\0=4\0

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

...

List 5: Negotiated Windowed TFTP Transfer

List 5 shows how the unnecessary acknowledgments were eliminated. The approach is totally backward compatible with pure RFC-1350 clients: if the "windowsize" option is either not negotiated or the Serva TFTP server answers windowsize=1, the transfer takes place indistinguishable from a regular single-block lock-step RFC-1350 transfer (more on this later).

The recovery capabilities are also very important. In the next run we see them at work when we simulate the same error condition as before: missing block #6 on the first try.

192.168.20.30 -> 192.168.20.1 TFTP Read Request, File: \WIA_WDS\w8_DevPrev\_SERVA_\boot\ServaBoot.wim\0, Transfer type: octet\0, tsize\0=0\0, blksize\0=1456\0, windowsize\0=64\0

192.168.20.1 -> 192.168.20.30 TFTP Option Acknowledgement, tsize\0=189206885\0, blksize\0=1456\0, windowsize\0=4\0

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 0

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 1

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 2

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 3

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 4

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 4

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7 <- block #6 is missing

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 5 <- no ack #8 rcvd; re-send wnd

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 6

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 7

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 8

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 8

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 9

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 10

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 11

192.168.20.1 -> 192.168.20.30 TFTP Data Packet, Block: 12

192.168.20.30 -> 192.168.20.1 TFTP Acknowledgement, Block: 12

...

List 6: Negotiated Windowed TFTP Transfer w/errors

List 6 shows block #6 never makes it to the wire on the first try; the window acknowledgment #8 then gets delayed until the server times out waiting for it and resends the "whole window" (blocks #5, #6, #7, and #8), correctly received on the second try.
NOTE: The error condition is handled differently when we enforce a windowed approach on an RFC-1350 client than when we deal with a client that understands the "windowsize" option. In the first case Serva resends from the first unacknowledged block; in the second case it resends the whole window.
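The two retransmission policies just described can be sketched side by side (block numbers only; the function names are hypothetical, chosen for illustration):

```python
def resend_enforced(last_acked, window_end):
    """Enforced windowed mode with an unaware RFC-1350 client:
    on timeout, resend starting at the first unacknowledged block
    (List 3: ACK #5 was received, so blocks 6-8 go out again)."""
    return list(range(last_acked + 1, window_end + 1))

def resend_negotiated(window_start, window_end):
    """Negotiated "windowsize" mode: on timeout, resend the whole
    window (List 6: blocks 5-8 go out again)."""
    return list(range(window_start, window_end + 1))

resend_enforced(5, 8)    # -> [6, 7, 8]
resend_negotiated(5, 8)  # -> [5, 6, 7, 8]
```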

Now, the big question is how to select the right window size. I personally conducted a series of tests always using two Toshiba Tecra laptops, Gigabit-Ethernet connected back-to-back and running Vista Business, both with identical hardware: Core 2 Duo 2.2 GHz, 4 GB RAM, 7200 RPM HDDs. On one of them the client performed a Windows 8 (Developer Preview) network install in a VMware environment; on the other one Serva ran as PXE server. The figures were gathered over the transfer of the 180 MB file ServaBoot.wim, under exactly the same conditions for all the transfers, changing only the "windowsize" parameter. The next chart summarizes the results.

windowsize   Time (s)   Improvement
     1         257           0%
     2         131         -49%
     4          76         -70%
     8          54         -79%
    16          42         -84%
    32          38         -85%
    64          35         -86%

NOTE: windowsize=1 is equivalent to lock-step RFC-1350.

As we can see, the improvement is significant. A relatively small windowsize=4 gives us a 70% improvement in transfer time compared to plain RFC-1350, and we still have error recovery!
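The improvement column in the chart above is simply the relative change against the windowsize=1 baseline; a quick Python check recomputes it from the measured times:

```python
# Measured times from the chart above (180 MB ServaBoot.wim transfer);
# the improvement column is the change relative to the windowsize=1 run.
times = {1: 257, 2: 131, 4: 76, 8: 54, 16: 42, 32: 38, 64: 35}
baseline = times[1]

for w, t in times.items():
    print(f"windowsize={w:2d}  {t:3d}s  {round(100 * (t / baseline - 1)):4d}%")
```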
Negotiating "windowsize" on a per-transfer basis easily allows dealing with old RFC-1350 clients. If the "windowsize" option is not proposed to the server, then the server transparently adopts the old RFC-1350 mode and the client will not see any difference.
Scenarios like WDS, where several TFTP clients are loaded at different stages of the procedure, can then be handled perfectly. The old/small TFTP clients, like the ones embedded in the NIC and the first-stage bootstrap loaders, always transfer small files that can be handled very well by RFC-1350. But when the big guys (e.g. bootmgr.exe) take control and try to move a monster of 180 MB, the negotiated windowed mode is invoked and we can reach transfer rates comparable to those achieved by SMB/CIFS mapped drives. Negotiated windowed TFTP is definitely something to seriously consider.

Recently Microsoft has gone even further; Windows 8's bootmgr.exe has incorporated a new negotiated option, "mstfwindow". When a client proposes this option to the server with the value mstfwindow=31416 and the server responds with mstfwindow=27182, a new transfer mode is agreed upon: in-transfer "variable windowsize", where the client and the server start the transfer using windowsize=4 and this value can later be adjusted within the transfer.
Note: "31416" of course is "pi" and "27182" is "e", the base of the natural logarithms; just Microsoft's "mathematical" way of coming to an agreement on a protocol feature...

Getting the most out of your Serva TFTP server.

Since version 2.0.0 Serva uses a new TFTP engine. It implements, and runs by default, a negotiated windowed TFTP server; this means it can natively handle RFC-1350 clients but also parses the "windowsize" option. By default the server limits the value of the requested "windowsize" to 16 blocks, but this limit can be changed or eliminated altogether. The negotiated windowed mode can be reduced to RFC-1350 by forcing a limit of windowsize=1.

Serva's TFTP configuration Tab

The TFTP engine also allows "enforcing" a windowed mode on old RFC-1350 clients that do not handle the "windowsize" option. This way, on reliable networks, we can get a substantial transfer rate improvement with these old clients; but remember the error recovery capabilities could be affected. The enforced windowed option is not used by default.
Of course we can combine both worlds. E.g. in an MS WDS install scenario (Vista SP1 and up), we want to improve the transfer of the small components driven by the old RFC-1350 clients, so we "Enforce Windowed TFTP" with a conservative window size of 4. At the same time we want to take advantage of the negotiated "windowsize" option used on the big transfers (Boot.wim/ServaBoot.wim), but the value of 64 blocks proposed by the Microsoft TFTP client (bootmgr.exe) is too much, so we limit the negotiated value to, e.g., 32.

If you are serious about error recovery, once you have defined your TFTP parameters you can simulate transfer errors and analyze your client's response while monitoring the TFTP traffic on Serva's Log panel and/or with the Wireshark network sniffer.
Serva's TFTP Error Simulation Engine simulates errors by generating missing data blocks at fixed, evenly, or randomly scattered file locations.
The "fixed" mode permits analyzing errors at e.g. window boundaries, the end of the file, etc.
The "even" mode simulates an evenly distributed load of errors.
The "random" mode simulates randomly located errors.

NOTE: Please do not forget to turn the "Error Simulator Engine" OFF in production!

NOTE: In version 2.0.0 the "Negotiated Windowed" mode, the "Enforced Windowed" mode, and the "Error Simulator Engine" are implemented for the reading mode of the TFTP server module only. This is the mode used on network boot/installs where Serva acts as PXE server.

 

Final words

The data provided in this article should give you a good starting point for effectively using your TFTP server on your next net boot/install endeavor.
For comments or ideas on how to improve the information contained in this document, please contact us here.


Updated 05/03/2016
Originally published 05/08/2012