" message with the connection. This message will later be sent out by
the senddata() function.

The acked() function is called whenever data that previously was sent
has been acknowledged by the receiving host. The acked() function
first reduces the amount of data that is left to send by subtracting
the length of the previously sent data (obtained from "uip_conn->len")
from the "textlen" variable, and adjusts the "textptr" pointer
accordingly. It then checks whether the "textlen" variable is now zero,
which indicates that all of the data has been successfully received,
and if so changes the application state. If the application was in the
"STATE_HELLO" state, it switches to the "STATE_WORLD" state and sets up
a 7 byte "world!\n" message to be sent. If the application was in the
"STATE_WORLD" state, it closes the connection.

Finally, the senddata() function takes care of actually sending the
data that is to be sent. It is called by the event handler function
when new data has been received, when data has been acknowledged, when
a new connection has been established, when the connection is polled
because of inactivity, or when a retransmission should be made. The
purpose of the senddata() function is to optionally format the data
that is to be sent, and to call the uip_send() function to actually
send out the data. In this particular example, the function simply
calls uip_send() with the appropriate arguments if data is to be sent,
after checking if data should be sent out or not as indicated by the
"textlen" variable.

It is important to note that the senddata() function should never
affect the application state; this should only be done in the acked()
and newdata() functions.
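
Under the same assumptions about the state structure as in the sketch
above, the corresponding senddata() function could look as follows; note
that it only reads the application state and never modifies it:

\code
static void
senddata(struct hello_world_state *s)
{
  /* Hand the remaining data to uIP, but never change the application
     state here; that is done in acked() and newdata() only. */
  if(s->textlen > 0) {
    uip_send(s->textptr, s->textlen);
  }
}
\endcode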

\section protoimpl Protocol Implementations

The protocols in the TCP/IP protocol suite are designed in a layered
fashion where each protocol performs a specific function and the
interactions between the protocol layers are strictly defined. While
the layered approach is a good way to design protocols, it is not
always the best way to implement them. In uIP, the protocol
implementations are tightly coupled in order to save code space.

This section gives detailed information on the specific protocol
implementations in uIP.

\subsection ip IP --- Internet Protocol

When incoming packets are processed by uIP, the IP layer is the first
protocol that examines the packet. The IP layer performs a few simple
checks, such as verifying that the destination IP address of the
incoming packet matches one of the local IP addresses and that the IP
header checksum is correct. Since there are no IP options that are strictly required and
because they are very uncommon, any IP options in received packets are
dropped.

\subsubsection ipreass IP Fragment Reassembly

IP fragment reassembly is implemented using a separate buffer that
holds the packet to be reassembled. An incoming fragment is copied
into the right place in the buffer and a bit map is used to keep track
of which fragments have been received. Because each IP fragment begins
on an 8-byte boundary, one bit in the map can cover eight bytes of the
packet, so the bit map requires only a small amount of memory. When all
fragments have been received, the
resulting IP packet is passed to the transport layer. If all fragments
have not been received within a specified time frame, the packet is
dropped.

The current implementation only has a single buffer for holding
packets to be reassembled, and therefore does not support simultaneous
reassembly of more than one packet. Since fragmented packets are
uncommon, this ought to be a reasonable decision. Extending the
implementation to support multiple buffers would be straightforward,
however.
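
The bit map bookkeeping can be illustrated with the sketch below. It is
not the actual uIP reassembly code; the buffer size and names are chosen
only for illustration. Since fragment offsets are multiples of 8 bytes,
one bit per 8-byte block is enough, so a 1500-byte reassembly buffer
needs a bit map of only about 24 bytes:

\code
#define REASS_BUFSIZE 1500                /* illustrative buffer size */

static unsigned char reass_bitmap[REASS_BUFSIZE / (8 * 8) + 1];

/* Mark the 8-byte blocks covered by a fragment with byte offset
   "offset" (always a multiple of 8) and length "len" as received. */
static void
mark_fragment(unsigned int offset, unsigned int len)
{
  unsigned int block = offset / 8;
  unsigned int last  = (offset + len + 7) / 8;

  for(; block < last; block++) {
    reass_bitmap[block / 8] |= 1 << (block & 7);
  }
}
\endcode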

\subsubsection ipbroadcast Broadcasts and Multicasts

IP has the ability to broadcast and multicast packets on the local
network. Such packets are addressed to special broadcast and multicast
addresses. Broadcast is used heavily in many UDP-based protocols such
as the Microsoft Windows file-sharing SMB protocol. Multicast is
primarily used in protocols used for multimedia distribution such as
RTP. TCP is a point-to-point protocol and does not use broadcast or
multicast packets. uIP currently supports broadcast packets as well as
sending multicast packets. Joining multicast groups (IGMP) and
receiving non-local multicast packets is not currently supported.

\subsection icmp ICMP --- Internet Control Message Protocol

The ICMP protocol is used for reporting soft error conditions and for
querying host parameters. Its main use is, however, the echo mechanism
which is used by the "ping" program.

The ICMP implementation in uIP is very simple, as it is restricted to
implementing only ICMP echo messages. Replies to echo messages are
constructed by simply swapping the source and destination IP addresses
of incoming echo requests and rewriting the ICMP header with the
Echo-Reply message type. The ICMP checksum is adjusted using standard
techniques (see RFC 1624).
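
The adjustment itself is tiny. The sketch below shows the RFC 1624
update formula for this specific case, where the only modified 16-bit
word is the ICMP type/code field, which goes from 0x0800 (Echo Request)
to 0x0000 (Echo Reply); this is not uIP's exact code, just the standard
technique, and the checksum is assumed to be handled in network byte
order (type in the high byte).

\code
#include <stdint.h>

/* Incremental checksum update per RFC 1624: HC' = ~(~HC + ~m + m'),
   where m is the old field value (0x0800, Echo Request) and m' the
   new one (0x0000, Echo Reply); ~m = 0xF7FF, m' contributes nothing. */
static uint16_t
echo_reply_checksum(uint16_t old_checksum)
{
  uint32_t sum = (uint32_t)(uint16_t)~old_checksum + 0xF7FF;
  sum = (sum & 0xFFFF) + (sum >> 16);   /* fold the end-around carry */
  return (uint16_t)~sum;
}
\endcode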

Since only the ICMP echo message is implemented, there is no support
for Path MTU discovery or ICMP redirect messages. Neither of these is
strictly required for interoperability; they are performance
enhancement mechanisms.

\subsection tcp TCP --- Transmission Control Protocol

The TCP implementation in uIP is driven by incoming packets and timer
events. Incoming packets are parsed by TCP and if the packet contains
data that is to be delivered to the application, the application is
invoked by means of the application function call. If the incoming
packet acknowledges previously sent data, the connection state is
updated and the application is informed, allowing it to send out new
data.
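
In practice, the surrounding system feeds both kinds of events into the
stack from its main loop, roughly as in the sketch below. uip_input(),
uip_periodic(), uip_len and UIP_CONNS are uIP's own entry points and
globals; the two devicedriver_*() functions are assumed placeholders for
a real network device driver.

\code
#include "uip.h"

/* Placeholders for a real network device driver; not part of uIP. */
extern u16_t devicedriver_read(void);
extern void devicedriver_send(void);

void
handle_incoming_packet(void)
{
  uip_len = devicedriver_read();        /* packet now sits in uip_buf  */
  if(uip_len > 0) {
    uip_input();                        /* parse it; may call the app  */
    if(uip_len > 0) {
      devicedriver_send();              /* uIP produced a reply packet */
    }
  }
}

void
handle_periodic_timer(void)             /* run from the periodic timer */
{
  int i;

  for(i = 0; i < UIP_CONNS; i++) {
    uip_periodic(i);                    /* retransmissions, polling    */
    if(uip_len > 0) {
      devicedriver_send();
    }
  }
}
\endcode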

\subsubsection listen Listening Connections

TCP allows a connection to listen for incoming connection requests. In
uIP, a listening connection is identified by the 16-bit port number
and incoming connection requests are checked against the list of
listening connections. This list of listening connections is dynamic
and can be altered by the applications in the system.
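
For example, an application that wants to accept connections on TCP port
80 calls uip_listen() at startup and can later stop listening with
uip_unlisten(); both take the port number in network byte order:

\code
uip_listen(HTONS(80));     /* start accepting connections to port 80 */
/* ... */
uip_unlisten(HTONS(80));   /* stop accepting new connections         */
\endcode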

\subsubsection slidingwindow Sliding Window

Most TCP implementations use a sliding window mechanism for sending
data. Multiple data segments are sent in succession without waiting
for an acknowledgment for each segment.

The sliding window algorithm uses a lot of 32-bit operations and
because 32-bit arithmetic is fairly expensive on most 8-bit CPUs, uIP
does not implement it. Also, uIP does not buffer sent packets and a
sliding window implementation that does not buffer sent packets will have
to be supported by a complex application layer. Instead, uIP allows
only a single TCP segment per connection to be unacknowledged at any
given time.

It is important to note that even though most TCP implementations use
the sliding window algorithm, it is not required by the TCP
specifications. Removing the sliding window mechanism does not affect
interoperability in any way.

\subsubsection rttest Round-Trip Time Estimation

TCP continuously estimates the current Round-Trip Time (RTT) of every
active connection in order to find a suitable value for the
retransmission time-out.

The RTT estimation in uIP is implemented using TCP's periodic
timer. Each time the periodic timer fires, it increments a counter for
each connection that has unacknowledged data in the network. When an
acknowledgment is received, the current value of the counter is used
as a sample of the RTT. The sample is used together with Van
Jacobson's standard TCP RTT estimation function to calculate an
estimate of the RTT. Karn's algorithm is used to ensure that
retransmissions do not skew the estimates.
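
The arithmetic behind this estimator is cheap: it needs only additions,
subtractions and shifts. A sketch of the classic integer formulation is
shown below; the variable names are illustrative rather than uIP's own,
with "sa" holding eight times the smoothed RTT and "sv" four times the
mean deviation, both measured in periodic-timer ticks.

\code
static int sa, sv;   /* kept per connection in a real stack */

/* One RTT sample (in timer ticks) updates the estimator and returns
   the new retransmission timeout, srtt + 4 * mean deviation. */
static int
update_rto(int rtt_sample)
{
  int m = rtt_sample;

  m -= sa >> 3;            /* error against the smoothed RTT   */
  sa += m;                 /* sa = 7/8 * sa + 1/8 * sample     */
  if(m < 0) {
    m = -m;
  }
  m -= sv >> 2;
  sv += m;                 /* sv = 3/4 * sv + 1/4 * |error|    */

  return (sa >> 3) + sv;   /* RTO = srtt + 4 * mean deviation  */
}
\endcode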

\subsubsection rexmit Retransmissions

Retransmissions are driven by the periodic TCP timer. Every time the
periodic timer is invoked, the retransmission timer for each
connection is decremented. If the timer reaches zero, a retransmission
should be made.

As uIP does not keep track of packet contents after they have
been sent by the device driver, uIP requires that the
application takes an active part in performing the
retransmission. When uIP decides that a segment should be
retransmitted, it calls the application with a flag set indicating
that a retransmission is required. The application checks the
retransmission flag and produces the same data that was previously
sent. From the application's standpoint, performing a retransmission
is no different from sending the data in the first place. Therefore the
application can be written in such a way that the same code is used
both for sending data and retransmitting data. Also, it is important
to note that even though the actual retransmission operation is
carried out by the application, it is the responsibility of the stack
to know when the retransmission should be made. Thus the complexity of
the application does not necessarily increase because it takes an
active part in doing retransmissions.
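
In the example application discussed earlier, this simply means that the
event handler routes a retransmission request to the same senddata()
routine used for ordinary sends, roughly as sketched here; uip_rexmit()
is uIP's test for the retransmission flag, and "s" is the assumed state
structure from the earlier sketches.

\code
if(uip_rexmit()) {
  /* The stack has decided that the last segment was lost; regenerate
     exactly the same data that was sent before. */
  senddata(s);
}
\endcode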

\subsubsection flowcontrol Flow Control

The purpose of TCP's flow control mechanisms is to allow communication
between hosts with wildly varying memory dimensions. In each TCP
segment, the sender of the segment indicates its available buffer
space. A TCP sender must not send more data than the buffer space
indicated by the receiver.

In uIP, the application cannot send more data than the receiving host
can buffer, nor more than the number of bytes that the receiving host
has indicated it is able to accept. If the remote host cannot accept
any data at all, the stack initiates the zero window probing
mechanism.
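
From the application's point of view, this limit is exposed through the
uip_mss() macro, which reports the largest amount of data that can
currently be sent on the connection. A typical send routine therefore
clamps its output to it, as in this sketch (reusing the state structure
assumed in the earlier example):

\code
if(s->textlen > uip_mss()) {
  uip_send(s->textptr, uip_mss());    /* send as much as is allowed      */
} else {
  uip_send(s->textptr, s->textlen);   /* everything fits in one segment  */
}
\endcode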

\subsubsection congestioncontrol Congestion Control

The congestion control mechanisms limit the number of simultaneous TCP
segments in the network. The algorithms used for congestion control
are designed to be simple to implement and require only a few lines of
code.

Since uIP only handles one in-flight TCP segment per connection,
the number of simultaneous segments cannot be further limited; thus
the congestion control mechanisms are not needed.

\subsubsection urgdata Urgent Data

TCP's urgent data mechanism provides an application-to-application
notification mechanism, which can be used by an application to mark
parts of the data stream as being more urgent than the normal
stream. It is up to the receiving application to interpret the meaning
of the urgent data.

In many TCP implementations, including the BSD implementation, the
urgent data feature increases the complexity of the implementation
because it requires an asynchronous notification mechanism in an
otherwise synchronous API. As uIP already uses an asynchronous
event-based API, the implementation of the urgent data feature does not lead
to increased complexity.

\section performance Performance

In TCP/IP implementations for high-end systems, processing time is
dominated by the checksum calculation loop, the operation of copying
packet data and context switching. Operating systems for high-end
systems often have multiple protection domains for protecting kernel
data from user processes and user processes from each other. Because
the TCP/IP stack is run in the kernel, data has to be copied between
the kernel space and the address space of the user processes and a
context switch has to be performed once the data has been
copied. Performance can be enhanced by combining the copy operation
with the checksum calculation. Because high-end systems usually have
numerous active connections, packet demultiplexing is also an
expensive operation.

A small embedded device does not have the processing power required
for multiple protection domains, nor for running a multitasking
operating system. Therefore there is no need to copy
data between the TCP/IP stack and the application program. With an
event based API there is no context switch between the TCP/IP stack
and the applications.

In such limited systems, the TCP/IP processing overhead is dominated
by the copying of packet data from the network device to host memory,
and checksum calculation. Apart from the checksum calculation and
copying, the TCP processing done for an incoming packet involves only
updating a few counters and flags before handing the data over to the
application. Thus an estimate of the CPU overhead of our TCP/IP
implementations can be obtained by calculating the amount of CPU
cycles needed for the checksum calculation and copying of a maximum
sized packet.

\subsection delack The Impact of Delayed Acknowledgments

Most TCP receivers implement the delayed acknowledgment algorithm for
reducing the number of pure acknowledgment packets sent. A TCP
receiver using this algorithm will only send acknowledgments for every
other received segment. If no segment is received within a specific
time-frame, an acknowledgment is sent. The time-frame can be as long
as 500 ms, but is typically 200 ms.

A TCP sender such as uIP that only handles a single outstanding TCP
segment will interact poorly with the delayed acknowledgment
algorithm. Because the receiver only receives a single segment at a
time, it will wait as much as 500 ms before an acknowledgment is
sent. This means that the maximum possible throughput is severely
limited by the 500 ms idle time.

Thus the maximum throughput equation when sending data from uIP will
be $p = s / (t + t_d)$, where $s$ is the segment size, $t$ is the
round-trip time and $t_d$ is the delayed acknowledgment timeout, which
typically is between 200 and 500 ms. With a segment size of 1000 bytes,
a round-trip time of 40 ms and a delayed acknowledgment timeout of
200 ms, the maximum throughput will be 4166 bytes per second. With the
delayed acknowledgment algorithm disabled at the receiver, the maximum
throughput would be 25000 bytes per second.
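
Written out with these numbers, $p = 1000 / (0.040 + 0.200) \approx 4166$
bytes per second with delayed acknowledgments, versus
$p = 1000 / 0.040 = 25000$ bytes per second when every segment is
acknowledged immediately.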

It should be noted, however, that since small systems running uIP are
not very likely to have large amounts of data to send, the delayed
acknowledgment throughput degradation of uIP need not be very
severe. Small amounts of data sent by such a system will not span more
than a single TCP segment, and would therefore not be affected by the
throughput degradation anyway.

The maximum throughput when uIP acts as a receiver is not affected by
the delayed acknowledgment throughput degradation.



*/


/** @} */
