
📄 tm1if_pc.c

📁 WinCE host and target PCI driver
💻 C
📖 Page 1 of 2
/*
 *  COPYRIGHT (c) 1995 by Philips Semiconductors
 *
 *   +-----------------------------------------------------------------+
 *   | THIS SOFTWARE IS FURNISHED UNDER A LICENSE AND MAY ONLY BE USED |
 *   | AND COPIED IN ACCORDANCE WITH THE TERMS AND CONDITIONS OF SUCH  |
 *   | A LICENSE AND WITH THE INCLUSION OF THE THIS COPY RIGHT NOTICE. |
 *   | THIS SOFTWARE OR ANY OTHER COPIES OF THIS SOFTWARE MAY NOT BE   |
 *   | PROVIDED OR OTHERWISE MADE AVAILABLE TO ANY OTHER PERSON. THE   |
 *   | OWNERSHIP AND TITLE OF THIS SOFTWARE IS NOT TRANSFERRED.        |
 *   +-----------------------------------------------------------------+
 *
 *  Module name              : TM1IF_pc.c    1.20
 *
 *  Last update              : 11:04:46 - 99/03/29
 *
 *  Title                    : PC version of HostCall server, PC part
 *
 *  Reviewed                 :
 *
 *  Revision history         :
 *
 *  Description              :
 *
 *            This module is part of an implementation of the server part
 *            for a HostCall interface. HostCall is the software component
 *            which is required by the TCS toolset to adapt the programs
 *            generated by this toolset to a specific host (see file
 *            HostCall.h in the TCS include dir).
 *
 *            HostIF is the part of the implementation which runs on the
 *            host, serving all IO requests of the TM-1 application.
 *            HostIF is one of two modules of this host-resident part, and
 *            defines all host specific issues, which are communication
 *            with the target and the ability to start a number of server
 *            tasks. The other part, RPCClient, defines *how* messages
 *            should be served; RPCClient is still host independent.
 *
 *            RPCServ is the counterpart of RPCClient on the target, and
 *            TM1IF is the counterpart of HostIF on the target. However,
 *            contrary to the target situation, where RPCClient is built
 *            on top of HostIF, on the host side TM1IF is built on top of
 *            RPCServ. Hence TM1IF forms the external interface of the
 *            server pair.
 *
 *            The interface to this module is as follows: First, it should
 *            be initialised. Second, a number of nodes should be defined,
 *            specifying node specific information.
 *            After this, the nodes can be started and a communication
 *            session with these nodes can be started by means of function
 *            TM1IF_start_serving; depending on the implementation, a call
 *            to this function either immediately returns, with a number
 *            of serving tasks created for handling the target's IO requests,
 *            or it blocks until function TM1IF_term is called (typically
 *            by the exit handler, when the last node has reported termination).
 *
 *            Having a pool of servers reduces the possibility that serving
 *            is halted due to serving requests which take a longer
 *            time to complete. For instance, when running pSOS on the
 *            target with one task requesting keyboard input from the host,
 *            the current implementation of RPCClient will stop serving
 *            requests when only one server task is selected: while blocked
 *            on the keyboard, it will not be able to serve requests from
 *            other pSOS tasks.
 *
 *            The current host (with the OS used) might or might not be
 *            able to dispatch multiple tasks, so start_serving returns
 *            one of the following results:
 *                - TM1IF_Serving_Failed
 *                           Initialisation (e.g. setup of target
 *                           communication, or server task creation)
 *                           failed.
 *                - TM1IF_Serving_Started
 *                           The servers have been created, and
 *                           are now busy serving.
 *                           Serving should be stopped using the
 *                           TM1IF_term function.
 *                - TM1IF_Serving_Completed
 *                           TM1IF was not capable of creating independent
 *                           serving tasks, so the TM-1 application
 *                           has been entirely served during the call
 *                           to start_serving.
 *
 *            NB: This interface does not provide for loading and
 *                starting the TM1 application.
 *
 *            This module forms a Windows 95 implementation of
 *            the TM1IF interface. It is capable of creating multiple
 *            server tasks. The communication is implemented on top of
 *            the Trimedia Manager interface, of which it reserves
 *            Channel #1.
 */

/*---------------------------- Includes --------------------------------------*/

#include <windows.h>
#include <winioctl.h>
#include <time.h>
#include <sys/timeb.h>
#include <signal.h>
#include <stdarg.h>
#include <assert.h>
#include <math.h>
#include <string.h>
#include <setjmp.h>
#include <sys/types.h>
#include <errno.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

#include "tmtypes.h"
#include "tmwindef.h"
#include "tmman32.h"
#include "HostCall.h"
#include "RPC_Common.h"
#include "TM1IF.h"
#include "io.h"
#include "Lib_Local.h"

/*---------------------------- Module State ----------------------------------*/

#define HostCall_CHANNEL             1     /* Reserved tm manager channel     */
#define HostCall_CHANNEL_CAPACITY   16     /* This may be small, since the    */
                                           /*   message will be 'quickly'     */
                                           /*   taken by the server           */
#define HostCall_SERVICE_CAPACITY   64     /* Maximum number of outstanding   */
                                           /*   services in this server       */
#define NROF_SERVERS                 4

/*
 * Various global state:
 */
static Int32    nrof_created_servers;
static HANDLE  *created_servers;
static Bool     same_endian;

/*
 * ring buffer representation:
 */
static volatile   Pointer  pending_requests[ HostCall_SERVICE_CAPACITY ];
static volatile   Int32    first, last;
static HANDLE     element_counter, overflow_detector, get_mutex;

typedef struct {
    DWORD     dsp_number;
    DWORD     dsp_handle;
    DWORD     hostcall_channel;
} *NodeData;

/*----------------------------- Ring Buffer Functions ------------------------*/

#define I(i)  ((i) % HostCall_SERVICE_CAPACITY) /* Index into ring buffer */

/*
 * Ring buffer put- and get functions. The idea here is
 * that only the message notification routine (notify, see below)
 * has access to 'last', and that the server threads have access
 * to 'first'. Server threads are kept apart by means of 'get_mutex'.
 *
 * Synchronisation between notifier and threads is done via a
 * counting semaphore element_counter. Another one, overflow_detector,
 * is used for checking buffer overflow before the actual insertion.
 */
static Pointer get_pending_request()
{
    Pointer result;

    WaitForSingleObject( get_mutex,       INFINITE );
    WaitForSingleObject( element_counter, INFINITE );

    result= pending_requests[first];
    first = I(first+1);

    WaitForSingleObject( overflow_detector, INFINITE );
    ReleaseSemaphore   ( get_mutex, 1, NULL );

    return result;
}

static Bool put_pending_request( Pointer raw_command )
{
    if (ReleaseSemaphore(overflow_detector,1,NULL)) {
        last= I(last+1);
        pending_requests[last]= raw_command;
        ReleaseSemaphore(element_counter,1,NULL);
        return TRUE;
    } else {
        return FALSE;
    }
}

static Int32 dummy;

static void send_back( Pointer raw_command )
{
    TMSTD_PACKET packet;
    NodeData     data= RPCServ_raw_to_info(raw_command)->data;

    packet.dwCommand= (DWORD)raw_command;

    while ( tmMsgSend( data->hostcall_channel, &packet ) != TMOK ) {}
}

/*
 * Message arrival-from-target notifier:
 */
static STATUS notify ( DWORD   d1,    DWORD   d2,
                       Pointer d3,    Pointer packet )
{
    Pointer          *to_raw_command = (Pointer*)packet;
    Pointer           raw_command    = *to_raw_command;
    HostCall_command *command        = RPCServ_raw_to_host(raw_command);

    if (put_pending_request( raw_command )) {
        command->notification_status = HostCall_BUSY;
    } else {
        command->notification_status = HostCall_ERROR;
        send_back(raw_command);
    }

    return TMOK;
}

/*--------------------------- Exported Functions -----------------------------*/

static Bool serving_stopped;

static void serve()
{
    while (!serving_stopped) {
        Pointer raw_command;

        raw_command= get_pending_request();

        if (serving_stopped) { break; }

        if (RPCServ_serve(raw_command)) {
            send_back(raw_command);
        }
    }
}

/*
 * Function         : Start serving the (independently started)
 *                    TM-1 application. Serving should use the specified
 *                    IO functions, and use the specified file descriptors
 *                    for stdin, stdout and stderr. It should try to
 *                    start the requested number of serving tasks and return,
 *                    but when there are no capabilities for starting
 *                    tasks this function itself can act as server before
 *                    it returns.
 * Function Result  : see module header
 * Precondition     : -
 * Postcondition    : -
 * Sideeffects      : -
 */
TM1IF_Served_Status
    TM1IF_start_serving( )
{
    Int32 i;

    first= 1;  last= 0;

    element_counter  = CreateSemaphore(NULL,0,HostCall_SERVICE_CAPACITY,"");
    overflow_detector= CreateSemaphore(NULL,0,HostCall_SERVICE_CAPACITY,"");
    get_mutex        = CreateSemaphore(NULL,1,1,"");

    nrof_created_servers = 0;
    created_servers      = malloc( NROF_SERVERS * sizeof(HANDLE) );
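The ring buffer in the listing relies on a small Win32 trick: ReleaseSemaphore fails once a semaphore's count would exceed the maximum passed to CreateSemaphore, so releasing overflow_detector (maximum = HostCall_SERVICE_CAPACITY) before the insertion acts as an atomic "reserve a slot or report overflow", and a reader gives the slot back by waiting on the same semaphore after removing an element. The following stand-alone sketch of that scheme is not part of the original file; the names (ring, items, occupancy, take_lock) and the trivial main are illustrative only.

/* Minimal illustration of the two-counting-semaphore ring buffer used
 * above: 'items' plays the role of element_counter, 'occupancy' the
 * role of overflow_detector, 'take_lock' the role of get_mutex. */
#include <windows.h>
#include <stdio.h>

#define CAPACITY 8

static int     ring[CAPACITY];
static int     head = 1, tail = 0;              /* same convention as first/last */
static HANDLE  items, occupancy, take_lock;

static BOOL put(int value)                      /* producer (cf. put_pending_request) */
{
    if (!ReleaseSemaphore(occupancy, 1, NULL))
        return FALSE;                           /* count at maximum: ring is full */
    tail = (tail + 1) % CAPACITY;
    ring[tail] = value;
    ReleaseSemaphore(items, 1, NULL);           /* wake one consumer */
    return TRUE;
}

static int get(void)                            /* consumer (cf. get_pending_request) */
{
    int value;
    WaitForSingleObject(take_lock, INFINITE);   /* keep consumers apart */
    WaitForSingleObject(items,     INFINITE);   /* wait for an element */
    value = ring[head];
    head  = (head + 1) % CAPACITY;
    WaitForSingleObject(occupancy, INFINITE);   /* give the slot back */
    ReleaseSemaphore(take_lock, 1, NULL);
    return value;
}

int main(void)
{
    items     = CreateSemaphore(NULL, 0, CAPACITY, NULL);
    occupancy = CreateSemaphore(NULL, 0, CAPACITY, NULL);
    take_lock = CreateSemaphore(NULL, 1, 1, NULL);

    put(42);
    printf("%d\n", get());
    return 0;
}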
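The listing above is page 1 of 2 and breaks off inside TM1IF_start_serving, before the server threads are created. Based only on the module header, a host-side caller is expected to handle the three TM1IF_Served_Status results and to stop a running server pool with TM1IF_term. The sketch below illustrates that protocol; it assumes the status values are spelled as in the header comment and that TM1IF_term takes no arguments, and the initialisation and node-definition steps mentioned in the header are only hinted at because they are not shown on this page.

/* Hedged sketch of a host-side driver, using only names suggested by
 * the module header above; not part of the original source. */
#include <stdio.h>
#include "TM1IF.h"

int run_hostcall_server( void )
{
    /* ... initialise TM1IF and define/start the nodes here; the exact
       calls are described in the module header but not shown on this
       page ... */

    switch (TM1IF_start_serving()) {

    case TM1IF_Serving_Failed:
        /* Setup of target communication or server task creation failed. */
        fprintf(stderr, "HostCall serving could not be started\n");
        return -1;

    case TM1IF_Serving_Started:
        /* Independent server tasks are serving; wait until the TM-1
           application has reported termination, then stop them. */
        /* ... wait for completion ... */
        TM1IF_term();                 /* signature assumed: no arguments */
        return 0;

    case TM1IF_Serving_Completed:
        /* No independent tasks could be created: the whole session was
           served inside TM1IF_start_serving itself. */
        return 0;
    }

    return 0;
}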
