
📄 hal.c

📁 Source code of rtai-3.1-test3 (Real-Time Application Interface)
💻 C
📖 Page 1 of 4
 * allow also any new interrupts on the same request as soon as you enable
 * interrupts at the CPU level.
 *
 * Often some of the above functions do equivalent things. Once more there is no
 * way of doing it right except by knowing the hardware you are manipulating.
 * Furthermore you must also remember that when you install a hard real time
 * handler the related interrupt is usually disabled, unless you are overtaking
 * one already owned by Linux which has been enabled by it.  Recall that if you
 * have done it right, and interrupts do not show up, it is likely you have just
 * to rt_enable_irq() your irq.
 */
void rt_ack_irq (unsigned irq)
{
    rt_enable_irq(irq);
}

void rt_do_irq (unsigned irq)
{
    adeos_trigger_irq(irq);
}

/**
 * Install shared Linux interrupt handler.
 *
 * rt_request_linux_irq installs function @a handler as a standard Linux
 * interrupt service routine for IRQ level @a irq forcing Linux to share the IRQ
 * with other interrupt handlers, even if it does not want. The handler is
 * appended to any already existing Linux handler for the same irq and is run by
 * Linux irq as any of its handlers. In this way a real time application can
 * monitor Linux interrupts handling at its will. The handler appears in
 * /proc/interrupts.
 *
 * @param handler pointer on the interrupt service routine to be installed.
 *
 * @param name is a name for /proc/interrupts.
 *
 * @param dev_id is to pass to the interrupt handler, in the same way as the
 * standard Linux irq request call.
 *
 * The interrupt service routine can be uninstalled with rt_free_linux_irq().
 *
 * @retval 0 on success.
 * @retval EINVAL if @a irq is not a valid IRQ number or handler is @c NULL.
 * @retval EBUSY if there is already a handler of interrupt @a irq.
 */
int rt_request_linux_irq (unsigned irq,
                          irqreturn_t (*handler)(int irq,
                                                 void *dev_id,
                                                 struct pt_regs *regs),
                          char *name,
                          void *dev_id)
{
    unsigned long flags;

    if (irq >= NR_IRQS || !handler)
        return -EINVAL;

    rtai_local_irq_save(flags);

    spin_lock(&irq_desc[irq].lock);

    if (rtai_linux_irq[irq].count++ == 0 && irq_desc[irq].action)
        {
        rtai_linux_irq[irq].flags = irq_desc[irq].action->flags;
        irq_desc[irq].action->flags |= SA_SHIRQ;
        }

    spin_unlock(&irq_desc[irq].lock);

    rtai_local_irq_restore(flags);

    request_irq(irq,handler,SA_SHIRQ,name,dev_id);

    return 0;
}

/**
 * Uninstall shared Linux interrupt handler.
 *
 * @param dev_id is to pass to the interrupt handler, in the same way as the
 * standard Linux irq request call.
 *
 * @param irq is the IRQ level of the interrupt handler to be freed.
 *
 * @retval 0 on success.
 * @retval EINVAL if @a irq is not a valid IRQ number.
 */
int rt_free_linux_irq (unsigned irq, void *dev_id)
{
    unsigned long flags;

    if (irq >= NR_IRQS || rtai_linux_irq[irq].count == 0)
        return -EINVAL;

    rtai_local_irq_save(flags);

    free_irq(irq,dev_id);

    spin_lock(&irq_desc[irq].lock);

    if (--rtai_linux_irq[irq].count == 0 && irq_desc[irq].action)
        irq_desc[irq].action->flags = rtai_linux_irq[irq].flags;

    spin_unlock(&irq_desc[irq].lock);

    rtai_local_irq_restore(flags);

    return 0;
}
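/*
 * Editorial usage sketch, not part of the original hal.c: a minimal,
 * hypothetical example of sharing a Linux IRQ through the two services
 * above. The IRQ number (7), the handler name and the dev_id cookie are
 * invented for illustration only.
 */
static int my_irq_cookie;

static irqreturn_t my_linux_isr (int irq, void *dev_id, struct pt_regs *regs)
{
    /* Runs as an ordinary Linux handler, appended to any existing action on this IRQ. */
    return IRQ_HANDLED;
}

static int my_irq_example_init (void)
{
    /* Append my_linux_isr to whatever Linux already has installed on IRQ 7. */
    return rt_request_linux_irq(7, my_linux_isr, "rtai_monitor", &my_irq_cookie);
}

static void my_irq_example_cleanup (void)
{
    rt_free_linux_irq(7, &my_irq_cookie);
}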
/**
 * Pend an IRQ to Linux.
 *
 * rt_pend_linux_irq appends a Linux interrupt irq for processing in Linux IRQ
 * mode, i.e. with hardware interrupts fully enabled.
 *
 * @note rt_pend_linux_irq does not perform any check on @a irq.
 */
void rt_pend_linux_irq (unsigned irq)
{
    adeos_propagate_irq(irq);
}

/**
 * Install a system request handler
 *
 * rt_request_srq installs a two way RTAI system request (srq) by assigning
 * @a u_handler, a function to be used when a user calls srq from user space,
 * and @a k_handler, the function to be called in kernel space following its
 * activation by a call to rt_pend_linux_srq(). @a k_handler is in practice
 * used to request a service from the kernel. In fact Linux system requests
 * cannot be used safely from RTAI so you can set up a handler that receives real
 * time requests and safely executes them when Linux is running.
 *
 * @param u_handler can be used to effectively enter kernel space without the
 * overhead and clumsiness of standard Unix/Linux protocols.  This is a very
 * flexible service that allows you to personalize your use of RTAI.
 *
 * @return the number of the assigned system request on success.
 * @retval EINVAL if @a k_handler is @c NULL.
 * @retval EBUSY if no free srq slot is available.
 */
int rt_request_srq (unsigned label,
                    void (*k_handler)(void),
                    long long (*u_handler)(unsigned))
{
    unsigned long flags;
    int srq;

    if (k_handler == NULL)
        return -EINVAL;

    rtai_local_irq_save(flags);

    if (rtai_sysreq_map != ~0)
        {
        srq = ffz(rtai_sysreq_map);
        set_bit(srq,&rtai_sysreq_map);
        rtai_sysreq_table[srq].k_handler = k_handler;
        rtai_sysreq_table[srq].u_handler = u_handler;
        rtai_sysreq_table[srq].label = label;
        }
    else
        srq = -EBUSY;

    rtai_local_irq_restore(flags);

    return srq;
}

/**
 * Uninstall a system request handler
 *
 * rt_free_srq uninstalls the specified system call @a srq, returned by
 * installing the related handler with a previous call to rt_request_srq().
 *
 * @retval EINVAL if @a srq is invalid.
 */
int rt_free_srq (unsigned srq)
{
    if (srq < 2 || srq >= RTAI_NR_SRQS ||
        !test_and_clear_bit(srq,&rtai_sysreq_map))
        return -EINVAL;

    return 0;
}
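/*
 * Editorial usage sketch, not part of the original hal.c: a hypothetical srq
 * pair. Real time code must not call Linux services directly, so it pends
 * the srq (see rt_pend_linux_srq() below) and my_srq_k_handler then runs in
 * Linux context. The label value and all names here are invented for
 * illustration only.
 */
static int my_srq;

static void my_srq_k_handler (void)
{
    /* Executed by Linux after the srq has been pended from real time context. */
    printk("rtai example: deferred service request handled by Linux\n");
}

static int my_srq_example_init (void)
{
    my_srq = rt_request_srq(0xcafe, my_srq_k_handler, NULL);
    return my_srq >= 0 ? 0 : my_srq;
}

static void my_srq_example_cleanup (void)
{
    rt_free_srq(my_srq);
}

/* From a real time handler one would then call: rt_pend_linux_srq(my_srq); */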
/**
 * Pend a system request to Linux.
 *
 * rt_pend_linux_srq appends a system call request srq to be used as a service
 * request to the Linux kernel.
 *
 * @param srq is the value returned by rt_request_srq.
 *
 * @note rt_pend_linux_srq does not perform any check on @a srq.
 */
void rt_pend_linux_srq (unsigned srq)
{
    if (srq > 1 && srq < RTAI_NR_SRQS)
        {
        set_bit(srq,&rtai_sysreq_pending);
        adeos_schedule_irq(rtai_sysreq_virq);
        }
}

#ifdef CONFIG_SMP

static void rtai_critical_sync (void)
{
    struct apic_timer_setup_data *p;

    switch (rtai_sync_level)
        {
        case 1:
            p = &rtai_timer_mode[adeos_processor_id()];

            while (rtai_rdtsc() < rtai_timers_sync_time)
                ;

            if (p->mode)
                rtai_setup_periodic_apic(p->count,RTAI_APIC_TIMER_VECTOR);
            else
                rtai_setup_oneshot_apic(p->count,RTAI_APIC_TIMER_VECTOR);
            break;

        case 2:
            rtai_setup_oneshot_apic(0,RTAI_APIC_TIMER_VECTOR);
            break;

        case 3:
            rtai_setup_periodic_apic(RTAI_APIC_ICOUNT,LOCAL_TIMER_VECTOR);
            break;
        }
}

irqreturn_t rtai_broadcast_to_local_timers (int irq,
                                            void *dev_id,
                                            struct pt_regs *regs)
{
    unsigned long flags;

    rtai_hw_lock(flags);

    apic_wait_icr_idle();
    apic_write_around(APIC_ICR,APIC_DM_FIXED|APIC_DEST_ALLINC|LOCAL_TIMER_VECTOR);

    rtai_hw_unlock(flags);

    return RTAI_LINUX_IRQ_HANDLED;
}

#else /* !CONFIG_SMP */

#define rtai_critical_sync NULL

irqreturn_t rtai_broadcast_to_local_timers (int irq,
                                            void *dev_id,
                                            struct pt_regs *regs)
{
    return RTAI_LINUX_IRQ_HANDLED;
}

#endif /* CONFIG_SMP */

#ifdef CONFIG_X86_LOCAL_APIC

/**
 * Install a local APICs timer interrupt handler
 *
 * rt_request_apic_timers requests local APICs timers and defines the mode and
 * count to be used for each local APIC timer. Modes and counts can be chosen
 * arbitrarily for each local APIC timer.
 *
 * @param apic_timer_data is a pointer to a vector of structures
 * @code struct apic_timer_setup_data { int mode, count; }
 * @endcode sized with the number of CPUs available.
 *
 * Such a structure defines:
 * - mode: 0 for a oneshot timing, 1 for a periodic timing.
 * - count: is the period in nanoseconds you want to use on the corresponding
 * timer, not used for oneshot timers.  It is in nanoseconds to ease its
 * programming when different values are used by each timer, so that you do not
 * have to care about converting it from the CPU on which you are calling this
 * function.
 *
 * The start of the timing should be reasonably synchronized.  You should call
 * this function with due care and only when you want to manage the related
 * interrupts in your own handler.  For using local APIC timers in pacing real
 * time tasks use the usual rt_start_timer(), which under the MUP scheduler sets
 * the same timer policy on all the local APIC timers, or start_rt_apic_timers()
 * that allows you to use @c struct @c apic_timer_setup_data directly.
 */
void rt_request_apic_timers (void (*handler)(void),
                             struct apic_timer_setup_data *tmdata)
{
    volatile struct rt_times *rtimes;
    struct apic_timer_setup_data *p;
    unsigned long flags;
    int cpuid;

    TRACE_RTAI_TIMER(TRACE_RTAI_EV_TIMER_REQUEST_APIC,handler,0);

    flags = rtai_critical_enter(rtai_critical_sync);

    rtai_sync_level = 1;

    rtai_timers_sync_time = rtai_rdtsc() + rtai_imuldiv(LATCH,
                                                        rtai_tunables.cpu_freq,
                                                        RTAI_FREQ_8254);

    for (cpuid = 0; cpuid < RTAI_NR_CPUS; cpuid++)
        {
        p = &rtai_timer_mode[cpuid];
        *p = tmdata[cpuid];
        rtimes = &rt_smp_times[cpuid];

        if (p->mode)
            {
            rtimes->linux_tick = RTAI_APIC_ICOUNT;
            rtimes->tick_time = rtai_llimd(rtai_timers_sync_time,
                                           RTAI_FREQ_APIC,
                                           rtai_tunables.cpu_freq);
            rtimes->periodic_tick = rtai_imuldiv(p->count,
                                                 RTAI_FREQ_APIC,
                                                 1000000000);
            p->count = rtimes->periodic_tick;
            }
        else
            {
            rtimes->linux_tick = rtai_imuldiv(LATCH,
                                              rtai_tunables.cpu_freq,
                                              RTAI_FREQ_8254);
            rtimes->tick_time = rtai_timers_sync_time;
            rtimes->periodic_tick = rtimes->linux_tick;
            p->count = RTAI_APIC_ICOUNT;
            }

        rtimes->intr_time = rtimes->tick_time + rtimes->periodic_tick;
        rtimes->linux_time = rtimes->tick_time + rtimes->linux_tick;
        }

    p = &rtai_timer_mode[adeos_processor_id()];

    while (rtai_rdtsc() < rtai_timers_sync_time)
        ;

    if (p->mode)
        rtai_setup_periodic_apic(p->count,RTAI_APIC_TIMER_VECTOR);
    else
        rtai_setup_oneshot_apic(p->count,RTAI_APIC_TIMER_VECTOR);

    rt_release_irq(RTAI_APIC_TIMER_IPI);
    rt_request_irq(RTAI_APIC_TIMER_IPI,(rt_irq_handler_t)handler,NULL);

    rt_request_linux_irq(RTAI_TIMER_8254_IRQ,
                         &rtai_broadcast_to_local_timers,
                         "broadcast",
                         &rtai_broadcast_to_local_timers);

    for (cpuid = 0; cpuid < RTAI_NR_CPUS; cpuid++)
        {
        p = &tmdata[cpuid];

        if (p->mode)
            p->count = rtai_imuldiv(p->count,RTAI_FREQ_APIC,1000000000);
        else
            p->count = rtai_imuldiv(p->count,rtai_tunables.cpu_freq,1000000000);
        }

    rtai_critical_exit(flags);
}

/**
 * Uninstall a local APICs timer interrupt handler
 */
void rt_free_apic_timers(void)
{
    unsigned long flags;

    TRACE_RTAI_TIMER(TRACE_RTAI_EV_TIMER_APIC_FREE,0,0);

    rt_free_linux_irq(RTAI_TIMER_8254_IRQ,&rtai_broadcast_to_local_timers);

    flags = rtai_critical_enter(rtai_critical_sync);

    rtai_sync_level = 3;
    rtai_setup_periodic_apic(RTAI_APIC_ICOUNT,LOCAL_TIMER_VECTOR);
    rt_release_irq(RTAI_APIC_TIMER_IPI);

    rtai_critical_exit(flags);
}

#else /* !CONFIG_X86_LOCAL_APIC */

void rt_request_apic_timers (void (*handler)(void),
                             struct apic_timer_setup_data *tmdata)
{
}

void rt_free_apic_timers(void)
{
    rt_free_timer();
}

#endif /* CONFIG_X86_LOCAL_APIC */
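/*
 * Editorial usage sketch, not part of the original hal.c: a hypothetical
 * per-CPU local APIC timer setup for a two CPU box, CPU 0 periodic at
 * 100 us and CPU 1 in oneshot mode. The handler, the period and the CPU
 * count are invented for illustration; struct apic_timer_setup_data is
 * the { mode, count } pair documented above.
 */
static void my_apic_handler (void)
{
    /* Invoked on every local APIC timer interrupt; keep the work short. */
}

static void my_apic_example_init (void)
{
    struct apic_timer_setup_data tmdata[RTAI_NR_CPUS] = {
        { 1, 100000 },  /* CPU 0: periodic, period given in nanoseconds */
        { 0, 0 },       /* CPU 1: oneshot, count is not used */
    };

    rt_request_apic_timers(my_apic_handler, tmdata);
}

static void my_apic_example_cleanup (void)
{
    rt_free_apic_timers();
}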
#ifdef CONFIG_SMP

/**
 * Set IRQ->CPU assignment
 *
 * rt_assign_irq_to_cpu forces the assignment of the external interrupt @a irq
 * to the CPU @a cpu.
 *
 * @retval 1 if there is one CPU in the system.
 * @retval 0 on success if there are at least 2 CPUs.
 * @return the number of CPUs if @a cpu refers to a non-existent CPU.
 * @retval EINVAL if @a irq is not a valid IRQ number or some internal data
 * inconsistency is found.
 *
 * @note This function has effect only on multiprocessor systems.
 * @note With Linux 2.4.xx such a service has finally been made available
 * natively within the raw kernel. With such Linux releases
 * rt_reset_irq_to_sym_mode() resets the original Linux delivery mode, or
 * delivery affinity as they call it. So be warned that such a name is kept
 * mainly for compatibility reasons, as for such a kernel the reset operation
 * does not necessarily imply a symmetric external interrupt delivery.
 */
int rt_assign_irq_to_cpu (int irq, unsigned long cpumask)
{
    unsigned long oldmask, flags;

    rtai_local_irq_save(flags);

    spin_lock(&rtai_iset_lock);

    oldmask = adeos_set_irq_affinity(irq,cpumask);

    if (oldmask == 0)
        {
        /* Oops... Something went wrong. */
        spin_unlock(&rtai_iset_lock);
        rtai_local_irq_restore(flags);
        return -EINVAL;
        }

    rtai_old_irq_affinity[irq] = oldmask;
    rtai_set_irq_affinity[irq] = cpumask;

    spin_unlock(&rtai_iset_lock);

    rtai_local_irq_restore(flags);

    return 0;
}

/**
 * reset IRQ->CPU assignment
 *
 * rt_reset_irq_to_sym_mode resets the interrupt irq to the symmetric interrupts
 * management. The symmetric mode distributes the IRQs over all the CPUs.
 *
 * @retval 1 if there is one CPU in the system.
 * @retval 0 on success if there are at least 2 CPUs.
 * @return the number of CPUs if @a cpu refers to a non-existent CPU.
 * @retval EINVAL if @a irq is not a valid IRQ number or some internal data
 * inconsistency is found.
 *
 * @note This function has effect only on multiprocessor systems.
 * @note With Linux 2.4.xx such a service has finally been made available
 * natively within the raw kernel. With such Linux releases
 * rt_reset_irq_to_sym_mode() resets the original Linux delivery mode, or
