Commit d383cf8d authored by bernd's avatar bernd

flow control update

parent 949cdab9
......@@ -483,7 +483,7 @@ Level 7: Applications
\end_layout
\begin_layout Subsection
Basic Frameworks
\end_layout
\begin_layout Frame
......@@ -491,7 +491,7 @@ Basic Framework
status open
\begin_layout Plain Layout
Basic Frameworks
\end_layout
\end_inset
......@@ -567,7 +567,31 @@ control
\begin_inset space ~
\end_inset
system For larger content (not yet implemented)
\end_layout
\begin_deeper
\begin_layout Pause
\end_layout
\end_deeper
\begin_layout Description
Sync to synchronize your computers (RSN)
\end_layout
\begin_deeper
\begin_layout Pause
\end_layout
\end_deeper
\begin_layout Description
Audio/Video
\begin_inset space ~
\end_inset
Chat Real time data streaming (RSN)
\end_layout
\end_deeper
# Flow Control #

The assumptions of TCP are wrong, so TCP's flow control is broken - that's
one of my reasons for creating a new protocol. So what are my assumptions, and
what do I propose instead?

Let's first look at TCP: What does TCP assume?

TCP defines a window size. This equals the amount of data that is in
flight, assuming there is no packet drop. The tail is the last
acknowledged packet, and the head is the last sent packet. TCP has a
"slow start", so it starts with one packet, and increases the number of
segments in the window depending on how many segments are acknowledged. This
gives an exponential growth phase, until there is a packet drop. The
assumption here is that when the sender sends too fast, packets are dropped.
If a packet is dropped, TCP will halve the window size, and slowly grow it
by one segment per round trip - until again a packet is lost.

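As a rough illustration, the window behavior described above can be sketched as a toy model (this is not real TCP; the `aimd_window` function and its event encoding are invented for this example):

```python
# Toy model of TCP's window behavior as described above: exponential
# "slow start" growth until the first drop, then halve the window on
# each drop and grow by one segment per round trip.

def aimd_window(events, initial=1):
    """events: 'ack' (one round trip's worth acknowledged) or 'drop'.
    Returns the window size (in segments) after each event."""
    cwnd = initial
    slow_start = True
    sizes = []
    for ev in events:
        if ev == 'drop':
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
            slow_start = False
        elif slow_start:
            cwnd *= 2                  # exponential growth phase
        else:
            cwnd += 1                  # additive increase, one segment/RTT
        sizes.append(cwnd)
    return sizes

print(aimd_window(['ack', 'ack', 'ack', 'drop', 'ack', 'ack']))
# [2, 4, 8, 4, 5, 6]
```

The `[..., 8, 4, ...]` step shows the factor-of-two oscillation the next paragraph talks about.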
This means the number of packets in flight oscillates by a factor of two.
So what about the optimal buffer size? The optimal buffer for a
TCP connection is one capable of keeping packets for 0.5 RTT - because that is
the same amount of packets that fits onto the wire (they take 0.5 RTT from
source to destination). Doing this right requires that the buffering router
measures the RTT for each TCP connection, e.g. from the syn to the first ack.
In practice, there is usually no scientific method applied to choose the right
buffer size; if you are lucky, there have been a few experiments, selecting
some buffer size on an educated guess, and the router has a global FIFO.

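The 0.5-RTT rule of thumb above translates directly into bytes (a hedged sketch; the function name and example link numbers are invented for illustration):

```python
# Buffer that holds 0.5 RTT of data at the given bottleneck rate,
# per the rule of thumb in the text (illustrative numbers only).

def optimal_buffer_bytes(bandwidth_bps, rtt_s):
    return round(bandwidth_bps / 8 * 0.5 * rtt_s)

# e.g. a 100 Mbit/s link with 40 ms RTT:
print(optimal_buffer_bytes(100_000_000, 0.040))  # 250000
```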
The worst problem with this algorithm is on networks with poor quality, such
as wireless networks, where packet drops are relatively frequent and have
nothing to do with the sender's rate. The next problem is that a filled-up
buffer in the router delays all other connections, including low-latency,
low-bandwidth real-time protocols.

So that's not the way to go. We must reevaluate the assumptions and
find a solution.

## The assumptions ##

+ Network devices do have buffers, most of them with buffers "too large" for
TCP to work reasonably
+ The buffers are your friend, not your enemy; they avoid retransmissions
+ Buffers should usually stay almost empty
+ Packet drops are not related to flow control problems
+ Intermediate network hops can help fairness, by providing "fair queuing"

## The solution ##

Since network hops which may help with flow control are not likely to be
available soon (and probably also not the right solution), the solution has to
do end-to-end flow control (like TCP/IP), working with single (unfair) FIFO
queuing. The flow control should be fair (_n_ competing connections should
get _1/n_ of the data rate each), and it should not completely yield to
TCP, even in a buffer-bloat configuration.

The approach is the following: The sender sends short bursts of
packets (default: 8 packets per burst), and the receiver measures the
timing when these packets arrive - from earliest to latest - and
calculates the achievable data rate. The receiver sends this data rate
back to the sender, which adjusts its sending rate (to make sure the
rate is not faked, the receiver must prove it has received at least
most of the packets). Data rate calculation accumulates rates over several
bursts (default: 4 bursts per block), and sends only a final result,
i.e. one acknowledgment per 32 packets. This is the P part of a PID
controller: the receiver constantly provides measurements of
achievable rates, and the sender adjusts its rate on every ack
received.

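The receiver-side measurement can be sketched like this (an illustrative Python model, not the project's actual code; the data layout and function name are assumptions):

```python
# Sketch of the receiver's rate measurement: given arrival timestamps
# for one burst, the achievable rate is the payload delivered after the
# first packet, divided by the earliest-to-latest time span.

def achievable_rate(arrivals):
    """arrivals: list of (timestamp_s, nbytes), in arrival order,
    for one burst of >= 2 packets. Returns achievable bytes/second."""
    times = [t for t, _ in arrivals]
    span = max(times) - min(times)
    # bytes that arrived *after* the first packet, over the span
    payload = sum(n for _, n in arrivals) - arrivals[0][1]
    return payload / span

# 8 packets of 1280 bytes arriving 1 ms apart: 7 * 1280 B in 7 ms
burst = [(0.001 * i, 1280) for i in range(8)]
print(achievable_rate(burst))  # roughly 1.28 MB/s
```

Averaging this over 4 bursts before acknowledging would give the one-ack-per-32-packets behavior described above.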
The sender tracks two things: Slack and slack-growth (I and D of the PID
controller). Slack, i.e. accumulated buffer space, provides an exponential
slowdown, where a factor of two equates to either half the difference of
maximum and minimum observed slack or 20ms (whichever is larger).

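One possible reading of this rule, sketched in Python (this interpretation - delay doubling once per "scale" of slack above the observed minimum - is an assumption for illustration, not necessarily the exact formula used):

```python
# Hedged sketch of the slack-based exponential slowdown: the sending
# delay is multiplied by 2 for every "scale" of slack above the observed
# minimum, where the scale is half the max-min slack spread, floored at
# 20 ms. All names and units (seconds) are assumptions.

def slack_slowdown(slack_s, min_slack_s, max_slack_s):
    scale = max(0.020, (max_slack_s - min_slack_s) / 2)
    return 2.0 ** ((slack_s - min_slack_s) / scale)

# spread is only 10 ms, so the 20 ms floor applies; 20 ms of slack
# above the minimum then doubles the delay:
print(slack_slowdown(0.020, 0.0, 0.010))  # 2.0
```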
Slack-growth is observed by comparing the timing of the last burst with the
first burst in the four-burst sequence. This tells us how excessive our data
rate is. To compensate, we need to multiply that time by the number of bursts
in flight, and add that as extra delay after the next burst we send. This
allows the buffer to recover.

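The compensation step can be sketched as follows (assumed shapes and names, purely illustrative):

```python
# Sketch of the slack-growth compensation (the D term): if the last
# burst was stretched `growth` seconds longer than the first, pause for
# growth * bursts_in_flight after the next burst, so the router buffer
# that absorbed the excess can drain.

def recovery_delay(first_burst_span_s, last_burst_span_s, bursts_in_flight):
    growth = last_burst_span_s - first_burst_span_s
    return max(0.0, growth * bursts_in_flight)

# bursts stretched from 7 ms to 9 ms with 4 bursts in flight:
print(recovery_delay(0.007, 0.009, 4))  # roughly 0.008 s of extra delay
```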
The whole algorithm, sender and receiver side, currently fits into about 100
lines, which includes the rate control and burst generation on the sender
side, but does not include all the debugging and statistics code to observe
what happens.

To get fast long-distance connections up to speed quickly, the first rate
adjustment will also increase the number of packets in flight. Later, each ack
allows for further packets in flight (default: a maximum of two bursts, i.e. 64
packets) before the next ack is expected. To achieve this, the sender measures
the round-trip delay.

This helps to detect broken connections - if the receiver goes offline or
has been suspended temporarily, the sender stops. It cannot call back the
packets already in flight, of course; those will get lost, and might
temporarily fill up buffers.

The algorithm has been measured to be fair and sufficiently stable for several
parallel connections to the same sender, and it works together with parallel
TCP and LEDBAT traffic.

## Fair Queuing ##

Instead of using a single FIFO buffer policy, routers (or, in net2o
terminology, switches) can help fairness: under congestion, each connection is
given its own FIFO, and all filled FIFOs of the same QoS level are served in
round-robin fashion. This allows the receiver to accurately determine the
actual achievable bandwidth, and does not trigger the more heuristic
delay-optimizing strategies. Also, this buffer policy makes it possible to
have different flow control algorithms for different protocols, and still
have fairness.
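The per-connection round-robin policy can be sketched as a toy model (class and method names are invented for illustration; a real switch would also enforce per-FIFO memory limits):

```python
# Toy model of fair queuing: one FIFO per connection, and non-empty
# FIFOs served in round-robin order.

from collections import OrderedDict, deque

class FairQueue:
    def __init__(self):
        self.fifos = OrderedDict()   # connection id -> deque of packets

    def enqueue(self, conn, packet):
        self.fifos.setdefault(conn, deque()).append(packet)

    def dequeue(self):
        """Serve the next non-empty FIFO, then rotate it to the back."""
        for conn in list(self.fifos):
            fifo = self.fifos[conn]
            if fifo:
                self.fifos.move_to_end(conn)
                return fifo.popleft()
        return None   # all FIFOs empty

q = FairQueue()
for i in range(3):
    q.enqueue('a', f'a{i}')
q.enqueue('b', 'b0')
print([q.dequeue() for _ in range(4)])  # ['a0', 'b0', 'a1', 'a2']
```

Note how connection `b` gets served immediately despite `a` having queued first: a heavy sender cannot starve a light one, which is exactly what lets the receiver measure its fair share.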
......@@ -64,7 +64,7 @@ point:
1. Physical layer - this is not part of net2o itself.
2. [Topology](topology.md)
3. [Encryption](encryption.wiki)
4. [Flow Control](flow-control.md)
5. [Commands](commands.md)
6. [Distributed Data](distributed-data.wiki)
7. [Applications](applications.wiki)