
📄 kernelmultitasker.c

📁 The previous upload had problems; this one is good. Visopsys includes the complete source code for the system kernel and the GUI, plus some basic docs. The src subdirectory contains all of the source code. Useful for anyone who wants to study operating systems.
💻 C
📖 Page 1 of 5
  // Initialize our empty descriptor
  kernelMemClear(&descriptor, sizeof(kernelDescriptor));

  // Fill out our descriptor with data from the "official" one that
  // corresponds to the selector we were given
  status = kernelDescriptorGet(tssSelector, &descriptor);
  if (status < 0)
    return (status);

  // Ok, now we can change the descriptor in the table
  if (busy)
    descriptor.attributes1 |= 0x00000002;
  else
    descriptor.attributes1 &= ~0x00000002;

  // Re-set the descriptor in the GDT
  status = kernelDescriptorSetUnformatted(tssSelector,
      descriptor.segSizeByte1, descriptor.segSizeByte2,
      descriptor.baseAddress1, descriptor.baseAddress2,
      descriptor.baseAddress3, descriptor.attributes1,
      descriptor.attributes2, descriptor.baseAddress4);
  if (status < 0)
    return (status);

  // Return success
  return (status = 0);
}


static int scheduler(void)
{
  // Well, here it is: the kernel multitasker's scheduler.  This little
  // program will run continually in a loop, handing out time slices to
  // all processes, including the kernel itself.

  // By the time this scheduler is invoked, the kernel should already have
  // created itself a process in the task queue.  Thus, the scheduler can
  // begin by simply handing all time slices to the kernel.

  // Additional processes will be created with calls to the kernel, which
  // will create them and place them in the queue.  Thus, when the
  // scheduler regains control after a time slice has expired, the
  // queue of processes that it examines will have the new process added.

  int status = 0;
  kernelProcess *miscProcess = NULL;
  volatile unsigned theTime = 0;
  volatile int timeUsed = 0;
  volatile int timerTicks = 0;
  int count;

  // This is info about the processes we run
  kernelProcess *nextProcess = NULL;
  kernelProcess *previousProcess = NULL;
  unsigned processWeight = 0;
  unsigned topProcessWeight = 0;

  // Here is where we make decisions about which tasks to schedule,
  // and when.
  // Below is a brief description of the scheduling algorithm.

  // There will be two "special" queues in the multitasker.  The first
  // (highest-priority) queue will be the "real time" queue.  When there
  // are any processes running and ready at this priority level, they will
  // be serviced to the exclusion of all processes from other queues.  Not
  // even the kernel process will reside in this queue.

  // The last (lowest-priority) queue will be the "background" queue.
  // Processes in this queue will only receive processor time when there
  // are no ready processes in any other queue.  Unlike all of the
  // "middle" queues, it will be possible for processes in this background
  // queue to starve.

  // The number of priority queues will be somewhat flexible, based on a
  // configuration macro in the multitasker's header file.  However, the
  // special queues mentioned above will exhibit the same behavior
  // regardless of the number of "normal" queues in the system.

  // Amongst all of the processes in the other queues, there will be a
  // more even-handed approach to scheduling.  We will attempt a fair
  // algorithm with a weighting scheme.  Among the weighting variables
  // will be the following: priority, waiting time, and "shortness".
  // Shortness will probably come later (shortest-job-first), so for now
  // we will concentrate on priority and waiting time.  The formula will
  // look like this:
  //
  // weight = ((PRIORITY_LEVELS - taskPriority) * PRIORITY_RATIO) + waitTime
  //
  // This means that the inverse of the process priority will be
  // multiplied by the "priority ratio", and to that will be added the
  // current waiting time.  For example, if we have 4 priority levels,
  // the priority ratio is 3, and we have two tasks as follows:
  //
  // Task 1: priority=0, waiting time=7
  // Task 2: priority=2, waiting time=12
  //
  // then
  //
  // task1Weight = ((4 - 0) * 3) + 7  = 19  <- winner
  // task2Weight = ((4 - 2) * 3) + 12 = 18
  //
  // Thus, even though task 2 has been waiting considerably longer,
  // task 1's higher priority wins.  However, in a slightly different
  // scenario -- using the same constants -- if we had:
  //
  // Task 1: priority=0, waiting time=7
  // Task 2: priority=2, waiting time=14
  //
  // then
  //
  // task1Weight = ((4 - 0) * 3) + 7  = 19
  // task2Weight = ((4 - 2) * 3) + 14 = 20  <- winner
  //
  // In this case, task 2 gets to run, since it has been waiting long
  // enough to overcome task 1's higher priority.  This possibility helps
  // to ensure that no processes will starve.  The priority ratio
  // determines the weighting of priority vs. waiting time.  A priority
  // ratio of zero would give higher-priority processes no advantage over
  // lower-priority ones, and waiting time alone would determine the
  // execution order.
  //
  // A tie between the highest-weighted tasks is broken based on queue
  // order.  The queue is neither FIFO nor LIFO, but closer to LIFO.

  // Here is the scheduler's big loop

  while (schedulerStop == 0)
    {
      // Make sure.  No interrupts allowed inside this task.
      kernelProcessorDisableInts();

      // Calculate how many timer ticks were used in the previous time
      // slice.  This will be different depending on whether the previous
      // timeslice actually expired, or whether we were called for some
      // other reason (for example a yield()).  The timer wraps around if
      // the timeslice expired, so we can't use that value -- we use the
      // length of an entire timeslice instead.
      if (!schedulerSwitchedByCall)
        timeUsed = TIME_SLICE_LENGTH;
      else
        timeUsed = (TIME_SLICE_LENGTH - kernelSysTimerReadValue(0));

      // We count the timer ticks that were used
      timerTicks += timeUsed;

      // Have we had the equivalent of a full timer revolution?  If so,
      // we need to call the standard timer interrupt handler
      if (timerTicks >= 65535)
        {
          // Reset to zero
          timerTicks = 0;

          // Artificially register a system timer tick.
          kernelSysTimerTick();
        }

      // The scheduler is the current process.
      kernelCurrentProcess = schedulerProc;

      // Remember the previous process we ran
      previousProcess = nextProcess;

      if (previousProcess)
        {
          if (previousProcess->state == proc_running)
            // Change the state of the previous process to ready, since it
            // was interrupted while still on the CPU.
            previousProcess->state = proc_ready;

          // added by Davide Airaghi for FPU-state handling
          FPU_STATUS_SAVE(previousProcess->fpu)

          // Add the last timeslice to the process' CPU time
          previousProcess->cpuTime += timeUsed;
        }

      // Increment the counts of scheduler time slices and scheduler time
      schedulerTimeslices += 1;
      schedulerTime += timeUsed;

      // Get the system timer time
      theTime = kernelSysTimerRead();

      // Reset the selected process to NULL so we can evaluate potential
      // candidates
      nextProcess = NULL;
      topProcessWeight = 0;

      // We have to loop through the process queue, and determine which
      // process to run.
      for (count = 0; count < numQueued; count ++)
        {
          // Get a pointer to the process
          miscProcess = processQueue[count];

          // Every CPU_PERCENT_TIMESLICES timeslices we will update the
          // %CPU value for each process currently in the queue
          if ((schedulerTimeslices % CPU_PERCENT_TIMESLICES) == 0)
            {
              // Calculate the CPU percentage
              if (schedulerTime == 0)
                miscProcess->cpuPercent = 0;
              else
                miscProcess->cpuPercent =
                  ((miscProcess->cpuTime * 100) / schedulerTime);

              // Reset the process' cpuTime counter
              miscProcess->cpuTime = 0;
            }

          // This will change the state of a waiting process to "ready"
          // if the specified "waiting reason" has come to pass
          if (miscProcess->state == proc_waiting)
            {
              // The process is waiting for a specified time.  Has the
              // requested time come?
              if ((miscProcess->waitUntil != 0) &&
                  (miscProcess->waitUntil < theTime))
                // The process is ready to run
                miscProcess->state = proc_ready;
              else
                // The process must continue waiting
                continue;
            }

          // This will dismantle any process that has identified itself
          // as finished
          else if (miscProcess->state == proc_finished)
            {
              kernelMultitaskerKillProcess(miscProcess->processId, 0);

              // This removed it from the queue and placed another process
              // in its place.  Decrement the current loop counter
              count--;
              continue;
            }

          else if (miscProcess->state != proc_ready)
            // Otherwise, this process should not be considered for
            // execution in this time slice (might be stopped, sleeping,
            // or zombie)
            continue;

          // If the process is of the highest (real-time) priority, it
          // should get an infinite weight
          if (miscProcess->priority == 0)
            processWeight = 0xFFFFFFFF;

          // Else if the process is of the lowest priority, it should
          // get a weight of zero
          else if (miscProcess->priority == (PRIORITY_LEVELS - 1))
            processWeight = 0;

          // If this process has yielded this timeslice already, we
          // should give it a low weight this time so that high-priority
          // processes don't gobble time unnecessarily
          else if (schedulerSwitchedByCall &&
                   (miscProcess->yieldSlice == theTime))
            processWeight = 0;

          // Otherwise, calculate the weight of this task, using the
          // algorithm described above
          else
            processWeight = (((PRIORITY_LEVELS - miscProcess->priority) *
                              PRIORITY_RATIO) + miscProcess->waitTime);

          if (processWeight < topProcessWeight)
            {
              // Increase the waiting time of this process, since it's not
              // the one we're selecting
              miscProcess->waitTime += 1;
              continue;
            }
          else
            {
              if (nextProcess)
                {
                  // If the process' weight is tied with that of the
                  // previously winning process, it will NOT win if the
                  // other process has been waiting as long or longer
                  if ((processWeight == topProcessWeight) &&
                      (nextProcess->waitTime >= miscProcess->waitTime))
                    {
                      miscProcess->waitTime += 1;
                      continue;
                    }
                  else
                    {
                      // We have the currently winning process here.
                      // Remember it in case we don't find a better one,
                      // and increase the waiting time of the process this
                      // one is replacing
                      nextProcess->waitTime += 1;
                    }
                }

              topProcessWeight = processWeight;
              nextProcess = miscProcess;
            }
        }

      if ((schedulerTimeslices % CPU_PERCENT_TIMESLICES) == 0)
        // Reset the schedulerTime counter
        schedulerTime = 0;

      // We should now have selected a process to run.  If not, we should
      // re-start the loop.  This should only be likely to happen if some
      // goombah kills the idle task.  Starting the loop again like this
      // might simply result in a hung system -- but maybe that's OK because
      // there's simply nothing to do.  However, if there is some process
      // that is in a 'waiting' state, then there will be stuff to do when
      // it wakes up.
      if (nextProcess == NULL)
        {
          // Resume the loop
          schedulerSwitchedByCall = 1;
          continue;
        }

      // Update some info about the next process
      nextProcess->waitTime = 0;
      nextProcess->state = proc_running;

      // Export (to the rest of the multitasker) the pointer to the
      // currently selected process.
      kernelCurrentProcess = nextProcess;

      // Make sure the exception handler process is ready to go
      if (exceptionProc)
        {
          status = kernelDescriptorSet(
              exceptionProc->tssSelector,         // TSS selector
              &(exceptionProc->taskStateSegment), // Starts at...
              sizeof(kernelTSS),      // Maximum size of a TSS selector
              1,                      // Present in memory
              0,                      // Highest privilege level
              0,                      // TSS's are system segs
              0x9,                    // TSS, 32-bit, non-busy
              0,                      // 0 for SMALL size granularity
              0);                     // Must be 0 in TSS
          exceptionProc->taskStateSegment.EIP = (unsigned)
            &kernelExceptionHandler;
        }

      // Set the system timer 0 to interrupt this task after a known
      // period of time (mode 0)
      status = kernelSysTimerSetupTimer(0, 0, TIME_SLICE_LENGTH);
      if (status < 0)
        {
          kernelError(kernel_warn, "The scheduler was unable to control "
                      "the system timer");
          // Shut down the scheduler.
          schedulerShutdown();
          // Just in case
          kernelProcessorStop();
        }

      // In the final part, we do the actual context switch.

      // added by Davide Airaghi for FPU-state handling
      FPU_STATUS_RESTORE(nextProcess->fpu)

      // Move the selected task's selector into the link field
      schedulerProc->taskStateSegment.oldTSS = nextProcess->tssSelector;

      // Mark the next task's TSS as busy before we jump to it
      markTaskBusy(nextProcess->tssSelector, 1);

      // Clear the NT (nested task) flag in the next task's EFLAGS
      nextProcess->taskStateSegment.EFLAGS &= ~0x4000;

      // int flags = 0, taskreg = 0;
      // kernelProcessorGetFlags(flags);
      // kernelProcessorGetTaskReg(taskreg);
      // kernelTextPrintLine("Flags: 0x%x Task reg: 0x%x\n"
      //                     "Flags: 0x%x EIP: 0x%x Task "
      //                     "reg: 0x%x", flags, taskreg,
      //                     nextProcess->taskStateSegment.EFLAGS,
      //                     nextProcess->taskStateSegment.EIP,
      //                     //nextProcess->taskStateSegment.ESP,
      //                     nextProcess->tssSelector);

      // Acknowledge the timer interrupt if one occurred
      if (!schedulerSwitchedByCall)
        kernelPicEndOfInterrupt(INTERRUPT_NUM_SYSTIMER);

      // Reset the "switched by call" flag
      schedulerSwitchedByCall = 0;

      // Return to the task.  Do an interrupt return.
      kernelProcessorIntReturn();

      // Continue to loop
    }

  // If we get here, then the scheduler is supposed to shut down
  schedulerShutdown();

  // We should never get here
  return (status = 0);
}


static int schedulerInitialize(void)
{
  // This function will do all of the necessary initialization for the
  // scheduler.  Returns 0 on success, negative otherwise
