
tcp.c

cryptlib is a powerful security toolkit that lets developers quickly integrate encryption and authentication services into their own software.
Language: C
Page 1 of 5
	freeAddressInfo( addrInfoPtr );
	if( cryptStatusError( status ) )
		/* There was an error setting up the socket, don't try anything
		   further */
		return( mapError( stream, socketErrorInfo, CRYPT_ERROR_OPEN ) );

	/* Wait for a connection.  At the moment this always waits forever
	   (actually some select() implementations limit the size of the
	   seconds count, so we cap it at a maximum of 1 year's worth), but in
	   the future we could have a separate timeout value for accepting
	   incoming connections to mirror the connection-wait timeout for
	   outgoing connections.

	   Because of the way that accept() works, the socket that we
	   eventually end up with isn't the one that we listen on, but we have
	   to temporarily make it the one associated with the stream in order
	   for ioWait() to work */
	stream->netSocket = listenSocket;
	status = ioWait( stream, min( stream->timeout, 30000000L ), 0,
					 IOWAIT_ACCEPT );
	stream->netSocket = CRYPT_ERROR;
	if( cryptStatusError( status ) )
		return( status );

	/* We have an incoming connection ready to go, so we accept it.
	   There's a potential complication here in that if a client connects
	   and then immediately sends a RST after the TCP handshake has
	   completed, ioWait() will return with an indication that there's an
	   incoming connection ready to go, but the following accept(), if it's
	   called after the RST has arrived, will block waiting for the next
	   incoming connection.  This is rather unlikely in practice, but it
	   could occur as part of a DoS attack in which the client sets the
	   SO_LINGER time to 0 and disconnects immediately, which has the
	   effect of turning the accept() with timeout into an indefinite-wait
	   accept().

	   To get around this, we make the socket temporarily non-blocking, so
	   that accept() returns an error if the client has closed the
	   connection.  The exact error varies: BSD implementations handle the
	   error internally and return to the accept(), while SVR4
	   implementations return either EPROTO (older, pre-Posix behaviour) or
	   ECONNABORTED (newer Posix-compliant behaviour, since EPROTO is also
	   used for other protocol-related errors).

	   Since BSD implementations hide the problem, they wouldn't normally
	   return an error; however, by temporarily making the socket non-
	   blocking we force it to return an EWOULDBLOCK if this situation
	   occurs.  Since this could lead to a misleading returned error, we
	   intercept it and substitute a custom error string.  Note that when
	   we make the listen socket blocking again, we also have to make the
	   newly-created ephemeral socket blocking, since it inherits its
	   attributes from the listen socket */
	setSocketNonblocking( listenSocket );
	netSocket = accept( listenSocket, ( struct sockaddr * ) &clientAddr,
						&clientAddrLen );
	if( isBadSocket( netSocket ) )
		{
		if( isNonblockWarning() )
			status = setSocketError( stream, "Remote system closed the "
									 "connection after completing the TCP "
									 "handshake", CRYPT_ERROR_OPEN, TRUE );
		else
			status = getSocketError( stream, CRYPT_ERROR_OPEN );
		setSocketBlocking( listenSocket );
		deleteSocket( listenSocket );
		return( status );
		}
	setSocketBlocking( listenSocket );
	setSocketBlocking( netSocket );

	/* Get the IP address of the connected client.  We could get its full
	   name, but this can slow down connections because of the time that it
	   takes to do the lookup and is less authoritative because of potential
	   spoofing.  In any case the caller can still look up the name if they
	   need it */
	getNameInfo( ( const struct sockaddr * ) &clientAddr,
				 stream->clientAddress, sizeof( stream->clientAddress ),
				 &stream->clientPort );

	/* We've got a new connection, so we add the socket to the pool.  Since
	   the socket was created externally to the pool, we don't use
	   newSocket() to create a new one but merely add the existing socket */
	status = addSocket( netSocket );
	if( cryptStatusError( status ) )
		{
		/* There was a problem adding the new socket, so we close it and
		   exit.  We don't call deleteSocket() since the socket was never
		   added to the pool; instead, we call closesocket() directly */
		closesocket( netSocket );
		return( setSocketError( stream, "Couldn't add socket to socket pool",
								status, FALSE ) );
		}
	stream->netSocket = netSocket;
	stream->listenSocket = listenSocket;

	/* Turn off Nagle, since we do our own optimised TCP handling */
	setsockopt( stream->netSocket, IPPROTO_TCP, TCP_NODELAY,
				( void * ) &trueValue, sizeof( int ) );

	return( CRYPT_OK );
	}
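
/* The getNameInfo() wrapper used above isn't shown on this page.  As a
   purely illustrative sketch (the name sketchGetNameInfo and its signature
   are hypothetical, not cryptlib's), the numeric-only lookup described in
   the comment above maps onto the standard getnameinfo() call with the
   NI_NUMERICHOST/NI_NUMERICSERV flags, avoiding the reverse-DNS delay and
   spoofing exposure of a full name lookup */
#include <netdb.h>
#include <stdlib.h>
#include <sys/socket.h>

static void sketchGetNameInfo( const struct sockaddr *sa, socklen_t saLen,
							   char *address, socklen_t addressMaxLen,
							   int *port )
	{
	char portString[ 16 ];

	/* Convert the peer address to numeric form without any DNS lookup */
	if( getnameinfo( sa, saLen, address, addressMaxLen, portString,
					 sizeof( portString ),
					 NI_NUMERICHOST | NI_NUMERICSERV ) != 0 )
		{
		/* The conversion failed, return an empty address and zero port */
		address[ 0 ] = '\0';
		*port = 0;
		return;
		}
	*port = atoi( portString );
	}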

static int openSocketFunction( STREAM *stream, const char *server,
							   const int port )
	{
	int status;

	assert( port >= 22 );
	assert( ( stream->flags & STREAM_NFLAG_ISSERVER ) || server != NULL );

	/* If it's a server stream, open a listening socket */
	if( stream->flags & STREAM_NFLAG_ISSERVER )
		{
		const int savedTimeout = stream->timeout;

		/* Timeouts for server sockets are actually three-level rather than
		   the usual two-level model: there's an initial (pre-connect)
		   timeout while we wait for an incoming connection to arrive, and
		   then we go to the usual session connect vs. session read/write
		   timeout mechanism.  To handle the pre-connect phase we set an
		   (effectively infinite) timeout at this point to ensure that the
		   server always waits forever for an incoming connection to
		   appear */
		stream->timeout = INT_MAX - 1;
		status = openServerSocket( stream, server, port );
		stream->timeout = savedTimeout;
		return( status );
		}

	/* It's a client stream, so we perform a two-part nonblocking open.
	   Currently the two portions are performed back-to-back; in the future
	   we could interleave them and perform general crypto processing (e.g.
	   hash/MAC context setup for SSL) while the open is completing.  A
	   standalone sketch of the underlying pattern follows this function */
	status = preOpenSocket( stream, server, port );
	if( cryptStatusOK( status ) )
		status = completeOpen( stream );
	assert( ( cryptStatusError( status ) && \
			  stream->netSocket == CRYPT_ERROR ) || \
			( cryptStatusOK( status ) && \
			  stream->netSocket != CRYPT_ERROR ) );
	return( status );
	}
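
/* preOpenSocket() and completeOpen() aren't shown on this page.  The
   following standalone sketch (hypothetical names, plain BSD sockets
   rather than cryptlib's socket-pool wrappers) illustrates the standard
   pattern that a two-part nonblocking open implies: start a nonblocking
   connect(), then later wait for writeability and read back the deferred
   result via SO_ERROR */
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Part 1: Start the connect without blocking the caller */
static int sketchPreOpen( int netSocket, const struct sockaddr *addr,
						  socklen_t addrLen )
	{
	fcntl( netSocket, F_SETFL,
		   fcntl( netSocket, F_GETFL, 0 ) | O_NONBLOCK );
	if( connect( netSocket, addr, addrLen ) == 0 )
		return( 0 );	/* Connected immediately, e.g. to localhost */
	return( ( errno == EINPROGRESS ) ? 0 : -1 );
	}

/* Part 2: Wait for the connect to complete and fetch its result */
static int sketchCompleteOpen( int netSocket, const int timeoutSecs )
	{
	struct timeval tv;
	fd_set writefds;
	int sockErr = 0;
	socklen_t errLen = sizeof( sockErr );

	tv.tv_sec = timeoutSecs;
	tv.tv_usec = 0;
	FD_ZERO( &writefds );
	FD_SET( netSocket, &writefds );

	/* A nonblocking connect() completes, successfully or not, when the
	   socket becomes writeable; the actual outcome is then read back with
	   SO_ERROR */
	if( select( netSocket + 1, NULL, &writefds, NULL, &tv ) <= 0 )
		return( -1 );	/* Timeout or select() error */
	if( getsockopt( netSocket, SOL_SOCKET, SO_ERROR, ( void * ) &sockErr,
					&errLen ) < 0 || sockErr != 0 )
		return( -1 );	/* The connect failed asynchronously */
	return( 0 );
	}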

/* Close a connection.  Safely handling closes is extremely difficult due to
   a combination of the way TCP/IP (and TCP stacks) work and various bugs
   and quirks in implementations.  After a close (and particularly if short-
   timeout non-blocking writes are used), there can still be data left in
   TCP send buffers, and also as unacknowledged segments on the network.  At
   this point there's no easy way for the TCP stack to know how long it
   should hang around trying to get the data out and waiting for acks to come
   back.  If it doesn't wait long enough, it'll end up discarding unsent
   data.  If it waits too long, it could potentially wait forever in the
   presence of network outages or crashed peers.  What's worse, since the
   socket is now closed, there's no way to report any problems that may occur
   to the caller.

   We try and handle this with a combination of shutdown() and close(), but
   due to implementation bugs/quirks and the TCP stack issues above this
   doesn't work all of the time.  The details get very implementation-
   specific, for example with glibc the manpage says that setting SO_LINGER
   causes shutdown() not to return until queued messages are sent (which is
   wrong, and non-glibc implementations like PHUX and Solaris
   specifically point out that only close() is affected), but that
   shutdown() discards unsent data.  glibc in turn is dependent on the
   kernel it's running on top of, under Linux shutdown() returns immediately
   but data is still sent regardless of the SO_LINGER setting.

   BSD Net/2 and later (which many stacks are derived from, including non-
   Unix systems like OS/2) returned immediately from a close() but still
   sent queued data on a best-effort basis.  With SO_LINGER set and a zero
   timeout the close was abortive (which Linux also implemented starting
   with the 2.4 kernel), and with a non-zero timeout it would wait until all
   the data was sent, which meant that it could block almost indefinitely
   (minutes or even hours, this is the worst-case behaviour mentioned
   above).  This was finally fixed in 4.4BSD (although a lot of 4.3BSD-
   derived stacks ended up with the indefinite-wait behaviour), but even
   then there was some confusion as to whether the wait time was in machine-
   specific ticks or seconds (Posix finally declared it to be seconds).
   Under Winsock, close() simply discards queued data while shutdown() has
   the same effect as under Linux, sending enqueued data asynchronously
   regardless of the SO_LINGER setting.

   This is a real mess to sort out safely; the best that we can do is to
   perform a shutdown() followed later by a close().  Messing with SO_LINGER
   is too risky, something like performing an ioWait() doesn't work either
   since it just results in whoever initiated the shutdown being blocked for
   the I/O wait time, and waiting for a recv() of 0 bytes isn't safe because
   the higher-level code may need to read back a shutdown ack from the other
   side, which a recv() performed here would interfere with.  Under Windows
   we could handle it by waiting for an FD_CLOSE to be posted, but this
   requires the use of a window handle which we don't have */

static void closeSocketFunction( STREAM *stream,
								 const BOOLEAN fullDisconnect )
	{
	/* If it's a partial disconnect, close only the send side of the channel.
	   The send-side close can help with ensuring that all data queued for
	   transmission is sent */
	if( !fullDisconnect )
		{
		if( stream->netSocket != CRYPT_ERROR )
			shutdown( stream->netSocket, SHUT_WR );
		return;
		}

	/* If it's an open-on-demand HTTP stream then the socket isn't
	   necessarily open even if the stream was successfully connected, so
	   we only close it if necessary.  It's easier to handle this here than
	   to expect the caller to distinguish between an opened-stream-but-
	   not-opened-socket and a conventional open stream */
	if( stream->netSocket != CRYPT_ERROR )
		deleteSocket( stream->netSocket );
	if( stream->listenSocket != CRYPT_ERROR )
		deleteSocket( stream->listenSocket );
	stream->netSocket = stream->listenSocket = CRYPT_ERROR;
	}
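
/* The long comment above notes that messing with SO_LINGER is too risky to
   use.  For illustration only (plain BSD sockets with hypothetical
   function names, not cryptlib's socket-pool wrappers), these two sketches
   contrast the shutdown-then-close pattern that's actually used with the
   zero-timeout SO_LINGER abortive close that the comment warns against */
#include <sys/socket.h>
#include <unistd.h>

static void sketchGracefulClose( int netSocket )
	{
	/* Signal that we're done sending, then close.  Queued data is sent on
	   a best-effort basis, and as discussed above there's no reliable way
	   to learn its fate once the socket has been closed */
	shutdown( netSocket, SHUT_WR );
	close( netSocket );
	}

static void sketchAbortiveClose( int netSocket )
	{
	/* SO_LINGER with l_onoff set and a zero l_linger turns close() into an
	   abortive close on most stacks, sending an RST and discarding any
	   queued data.  The implementation-specific variations described above
	   are why cryptlib avoids this */
	struct linger ling = { 1, 0 };	/* l_onoff = 1, l_linger = 0 */

	setsockopt( netSocket, SOL_SOCKET, SO_LINGER, ( void * ) &ling,
				sizeof( ling ) );
	close( netSocket );
	}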

/* Check an externally-supplied socket to make sure that it's set up as
   required by cryptlib */

static int checkSocketFunction( STREAM *stream )
	{
	int value;

	/* Check that we've been passed a valid network socket, and that it's
	   blocking */
	getSocketNonblockingStatus( stream->netSocket, value );
	if( isSocketError( value ) )
		return( getSocketError( stream, CRYPT_ARGERROR_NUM1 ) );
	if( value )
		return( setSocketError( stream, "Socket is non-blocking",
								CRYPT_ARGERROR_NUM1, TRUE ) );

	return( CRYPT_OK );
	}
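
/* getSocketNonblockingStatus() is a platform-specific macro that isn't
   defined on this page.  A plausible Unix expansion (hypothetical, shown
   purely for illustration) queries the descriptor flags with fcntl() and
   tests O_NONBLOCK, leaving a negative value in place on failure so that
   the isSocketError() check above still fires.  Winsock has no equivalent
   query call, which is one reason the macro has to be defined per
   platform */
#include <fcntl.h>

#define sketchGetSocketNonblockingStatus( socket, value ) \
		{ \
		( value ) = fcntl( ( socket ), F_GETFL, 0 ); \
		if( ( value ) >= 0 ) \
			( value ) = ( ( value ) & O_NONBLOCK ) ? 1 : 0; \
		}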

/* Read and write data from and to a socket.  Because data can appear in
   bits and pieces when reading, we have to implement timeout handling at
   two levels: once via ioWait() and a second time as an overall timeout.
   If we only used ioWait(), this could potentially stretch the overall
   timeout to (length * timeout), so we also perform a time check that
   leads to a worst-case timeout of (timeout-1 + timeout): if the last of
   the data arrives just before the overall deadline expires, the final
   ioWait() can still block for up to a full timeout interval of its own.
   This is the same as the implementation of SO_SND/RCVTIMEO in Berkeley-
   derived implementations, where the timeout value is actually an
   interval timer rather than an absolute timer.

   In addition to the standard stream-based timeout, we can also be called
   with flags specifying explicit blocking behaviour (for a read where we
   know that we're expecting a certain amount of data) or explicit
   nonblocking behaviour (for speculative reads to fill a buffer).  These
   flags are used by the buffered-read routines, which try and speculatively
   read as much data as possible to avoid the many small reads required by
   some protocols.  We don't do the blocking read using MSG_WAITALL since
   this can (potentially) block forever if not all of the data arrives.

   Finally, if we're performing a blocking read (which is usually done when
   we're expecting a predetermined number of bytes), we dynamically adjust
   the timeout so that if data is streaming in at a steady rate, we don't
   abort the read just because there's more data to transfer than we can
   manage in the originally specified timeout interval.

   Handling of return values is as follows:

	timeout		byteCount		return
	-------		---------		------
		0			0				0
		0		  > 0			byteCount
	  > 0			0			CRYPT_ERROR_TIMEOUT
	  > 0		  > 0			byteCount

   At the sread()/swrite() level, if the partial-read/write flags aren't
   set for the stream, a byteCount < length is also converted to a
   CRYPT_ERROR_TIMEOUT */

static int readSocketFunction( STREAM *stream, BYTE *buffer,
							   const int length, const int flags )
	{
	const time_t startTime = getTime();
	BYTE *bufPtr = buffer;
	time_t timeout = ( flags & TRANSPORT_FLAG_NONBLOCKING ) ? 0 : \
					 ( flags & TRANSPORT_FLAG_BLOCKING ) ? \
						max( 30, stream->timeout ) : stream->timeout;
	int bytesToRead = length, byteCount = 0;

	assert( timeout >= 0 );
	while( bytesToRead > 0 && \
		   ( ( getTime() - startTime < timeout || timeout <= 0 ) ) )
		{
		int bytesRead, status;

		/* Wait for data to become available */
		status = ioWait( stream, timeout, byteCount, IOWAIT_READ );
		if( status != CRYPT_OK )
			return( ( status == OK_SPECIAL ) ? 0 : status );

		/* We've got data waiting, read it */
		bytesRead = recv( stream->netSocket, bufPtr, bytesToRead, 0 );
		if( isSocketError( bytesRead ) )
			{
			/* If it's a restartable read due to something like an
			   interrupted system call, retry the read */
			if( isRestartableError() )
				{
				assert( !"Restartable read, recv() indicated error" );
				continue;
				}

			/* There was a problem with the read */
			return( getSocketError( stream, CRYPT_ERROR_READ ) );
			}
		if( bytesRead == 0 )
			{
			/* Under some odd circumstances (Winsock bugs when using non-
			   blocking sockets, or calling select() with a timeout of 0),
			   recv() can return zero bytes without an EOF condition being
			   present, even though it should return an error status if this
			   happens (this could also happen under very old SysV
			   implementations using O_NDELAY for nonblocking I/O).  To try
			   and catch this, we check for a restartable read due to
			   something like an interrupted system call and retry the read
			   if it is.  Unfortunately this doesn't catch the Winsock zero-
			   delay bug, but it may catch problems in other implementations.

			   Unfortunately this doesn't work under all circumstances
			   either.  If the connection is genuinely closed, select() will
			   return a data-available status and recv() will return zero,
			   both without changing errno.  If the last status set in errno
			   matches the isRestartableError() check, the code will loop
			   forever.  Because of this, we can't use the following check,
			   although since it doesn't catch the Winsock zero-delay bug
			   anyway it's probably no big deal.

			   The real culprit here is the design flaw in recv(), which
			   uses
