
📄 kernelmultitasker.c

📁 The previous upload was broken; this one is good. Visopsys includes the complete source code for the system kernel and the GUI, plus some basic docs. The src subdirectory contains all of the source code. For anyone who wants to study operating systems…
💻 C
📖 Page 1 of 5
  // be run at the absolute lowest possible priority so that it will not
  // be run unless there is absolutely nothing else in the other queues
  // that is ready.

  while(1);
}


static int spawnIdleThread(void)
{
  // This function will create the idle thread at initialization time.
  // Returns 0 on success, negative otherwise.

  int status = 0;
  int idleProcId = 0;

  // The idle thread needs to be a child of the kernel
  idleProcId = kernelMultitaskerSpawn(idleThread, "idle thread", 0, NULL);
  if (idleProcId < 0)
    return (idleProcId);

  idleProc = getProcessById(idleProcId);
  if (idleProc == NULL)
    return (status = ERR_NOSUCHPROCESS);

  // Set it to the lowest priority
  status =
    kernelMultitaskerSetProcessPriority(idleProcId, (PRIORITY_LEVELS - 1));
  if (status < 0)
    // There's no reason we should have to fail here, but make a warning
    kernelError(kernel_warn, "The multitasker was unable to lower the "
                "priority of the idle thread");

  // Return success
  return (status = 0);
}


static int markTaskBusy(int tssSelector, int busy)
{
  // This function gets the requested TSS selector from the GDT and
  // marks it as busy/not busy.  Returns negative on error.

  int status = 0;
  kernelDescriptor descriptor;

  // Initialize our empty descriptor
  kernelMemClear(&descriptor, sizeof(kernelDescriptor));

  // Fill out our descriptor with data from the "official" one that
  // corresponds to the selector we were given
  status = kernelDescriptorGet(tssSelector, &descriptor);
  if (status < 0)
    return (status);

  // Ok, now we can change the selector in the table
  if (busy)
    descriptor.attributes1 |= 0x00000002;
  else
    descriptor.attributes1 &= ~0x00000002;

  // Re-set the descriptor in the GDT
  status = kernelDescriptorSetUnformatted(tssSelector,
    descriptor.segSizeByte1, descriptor.segSizeByte2, descriptor.baseAddress1,
    descriptor.baseAddress2, descriptor.baseAddress3, descriptor.attributes1,
    descriptor.attributes2, descriptor.baseAddress4);
  if (status < 0)
    return (status);

  // Return success
  return (status = 0);
}
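A note on the bit twiddling in markTaskBusy() above: in a 32-bit x86 TSS descriptor, the low nibble of the attribute (access) byte holds the type, 0x9 for an available TSS and 0xB for a busy one, so the busy flag is bit 1. That is the 0x00000002 mask used above. The CPU sets this flag on a task switch, and a far call to a TSS that is still marked busy raises a general protection fault, which is why schedulerShutdown() below clears it before handing control back. Here is a minimal standalone sketch of the flag manipulation (not part of the Visopsys source; TSS_BUSY and the sample attribute value are illustrative):

#include <stdio.h>

#define TSS_BUSY 0x02  /* bit 1 of the descriptor's type field */

int main(void)
{
  /* Access byte 0x89: present, DPL 0, type 0x9 (available 32-bit TSS) */
  unsigned char attributes = 0x89;

  attributes |= TSS_BUSY;   /* type becomes 0xB: TSS marked busy */
  printf("busy:      0x%02x\n", (unsigned) attributes);

  attributes &= ~TSS_BUSY;  /* type back to 0x9: available again */
  printf("available: 0x%02x\n", (unsigned) attributes);

  return 0;
}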
static int schedulerShutdown(void)
{
  // This function will perform all of the necessary shutdown to stop
  // the scheduler and return control to the kernel's main task.
  // This will probably only be useful at system shutdown time.

  // NOTE that this function should NEVER be called directly.  If this
  // advice is ignored, you have no assurance that things will occur
  // the way you might expect them to.  To shut down the scheduler,
  // set the variable schedulerStop to a nonzero value.  The scheduler
  // will then invoke this function when it's ready.

  int status = 0;

  // Restore the normal operation of the system timer 0, which is
  // mode 3, initial count of 0
  status = kernelSysTimerSetupTimer(0, 3, 0);
  if (status < 0)
    kernelError(kernel_warn, "Could not restore system timer");

  // Remove the task gate that we were using to capture the timer
  // interrupt.  Replace it with the old default timer interrupt handler
  kernelInterruptHook(INTERRUPT_NUM_SYSTIMER, oldSysTimerHandler);

  // Give exclusive control to the current task
  markTaskBusy(kernelCurrentProcess->tssSelector, 0);
  kernelProcessorFarCall(kernelCurrentProcess->tssSelector);

  // We should never get here
  return (status = 0);
}


static int scheduler(void)
{
  // Well, here it is:  the kernel multitasker's scheduler.  This little
  // program will run continually in a loop, handing out time slices to
  // all processes, including the kernel itself.

  // By the time this scheduler is invoked, the kernel should already have
  // created itself a process in the task queue.  Thus, the scheduler can
  // begin by simply handing all time slices to the kernel.

  // Additional processes will be created with calls to the kernel, which
  // will create them and place them in the queue.  Thus, when the
  // scheduler regains control after a time slice has expired, the
  // queue of processes that it examines will have the new process added.

  int status = 0;
  kernelProcess *miscProcess = NULL;
  volatile unsigned theTime = 0;
  volatile int timeUsed = 0;
  volatile int timerTicks = 0;
  int count;

  // This is info about the processes we run
  kernelProcess *nextProcess = NULL;
  kernelProcess *previousProcess = NULL;
  unsigned processWeight = 0;
  unsigned topProcessWeight = 0;

  // Here is where we make decisions about which tasks to schedule,
  // and when.  Below is a brief description of the scheduling
  // algorithm.

  // There will be two "special" queues in the multitasker.  The
  // first (highest-priority) queue will be the "real time" queue.
  // When there are any processes running and ready at this priority
  // level, they will be serviced to the exclusion of all processes from
  // other queues.  Not even the kernel process will reside in this
  // queue.

  // The last (lowest-priority) queue will be the "background"
  // queue.  Processes in this queue will only receive processor time
  // when there are no ready processes in any other queue.  Unlike
  // all of the "middle" queues, it will be possible for processes
  // in this background queue to starve.

  // The number of priority queues will be somewhat flexible based on
  // a configuration macro in the multitasker's header file.  However,
  // the special queues mentioned above will exhibit the same behavior
  // regardless of the number of "normal" queues in the system.

  // Amongst all of the processes in the other queues, there will be
  // a more even-handed approach to scheduling.  We will attempt
  // a fair algorithm with a weighting scheme.  Among the weighting
  // variables will be the following: priority, waiting time, and
  // "shortness".  Shortness will probably come later
  // (shortest-job-first), so for now we will concentrate on
  // priority and waiting time.  The formula will look like this:
  //
  // weight = ((NUM_QUEUES - taskPriority) * PRIORITY_RATIO) + waitTime
  //
  // This means that the inverse of the process priority will be
  // multiplied by the "priority ratio", and to that will be added the
  // current waiting time.  For example, if we have 4 priority levels,
  // the priority ratio is 3, and we have two tasks as follows:
  //
  // Task 1: priority=0, waiting time=7
  // Task 2: priority=2, waiting time=12
  //
  // then
  //
  // task1Weight = ((4 - 0) * 3) + 7  = 19  <- winner
  // task2Weight = ((4 - 2) * 3) + 12 = 18
  //
  // Thus, even though task 2 has been waiting considerably longer,
  // task 1's higher priority wins.  However in a slightly different
  // scenario -- using the same constants -- if we had:
  //
  // Task 1: priority=0, waiting time=7
  // Task 2: priority=2, waiting time=14
  //
  // then
  //
  // task1Weight = ((4 - 0) * 3) + 7  = 19
  // task2Weight = ((4 - 2) * 3) + 14 = 20  <- winner
  //
  // In this case, task 2 gets to run since it has been waiting
  // long enough to overcome task 1's higher priority.  This possibility
  // helps to ensure that no processes will starve.  The priority
  // ratio determines the weighting of priority vs. waiting time.
  // A priority ratio of zero would give higher-priority processes
  // no advantage over lower-priority ones, and waiting time alone would
  // determine execution order.
  //
  // A tie between the highest-weighted tasks is broken based on
  // queue order.  The queue is neither FIFO nor LIFO, but closer to
  // LIFO.

  // Here is the scheduler's big loop

  while (schedulerStop == 0)
    {
      // Make sure.  No interrupts allowed inside this task.
      kernelProcessorDisableInts();

      // Calculate how many timer ticks were used in the previous time slice.
      // This will be different depending on whether the previous timeslice
      // actually expired, or whether we were called for some other reason
      // (for example a yield()).  The timer wraps around if the timeslice
      // expired, so we can't use that value -- we use the length of an entire
      // timeslice instead.
      if (!schedulerSwitchedByCall)
        timeUsed = TIME_SLICE_LENGTH;
      else
        timeUsed = (TIME_SLICE_LENGTH - kernelSysTimerReadValue(0));

      // We count the timer ticks that were used
      timerTicks += timeUsed;

      // Have we had the equivalent of a full timer revolution?  If so, we
      // need to call the standard timer interrupt handler
      if (timerTicks >= 65535)
        {
          // Reset to zero
          timerTicks = 0;

          // Artificially register a system timer tick.
          kernelSysTimerTick();
        }

      // The scheduler is the current process.
      kernelCurrentProcess = schedulerProc;

      // Remember the previous process we ran
      previousProcess = nextProcess;

      if (previousProcess)
        {
          if (previousProcess->state == proc_running)
            // Change the state of the previous process to ready, since it
            // was interrupted while still on the CPU.
            previousProcess->state = proc_ready;

          // Add the last timeslice to the process' CPU time
          previousProcess->cpuTime += timeUsed;

          if (previousProcess->fpuInUse)
            {
              // Save FPU state
              kernelProcessorFpuStateSave(previousProcess->fpuState);
              previousProcess->fpuStateValid = 1;
            }
        }

      // Increment the counts of scheduler time slices and scheduler time
      schedulerTimeslices += 1;
      schedulerTime += timeUsed;

      // Get the system timer time
      theTime = kernelSysTimerRead();

      // Reset the selected process to NULL so we can evaluate
      // potential candidates
      nextProcess = NULL;
      topProcessWeight = 0;

      // We have to loop through the process queue, and determine which
      // process to run.
      for (count = 0; count < numQueued; count ++)
        {
          // Get a pointer to the process' main process
          miscProcess = processQueue[count];

          // Every CPU_PERCENT_TIMESLICES timeslices we will update the %CPU
          // value for each process currently in the queue
          if ((schedulerTimeslices % CPU_PERCENT_TIMESLICES) == 0)
            {
              // Calculate the CPU percentage
              if (schedulerTime == 0)
                miscProcess->cpuPercent = 0;
              else
                miscProcess->cpuPercent =
                  ((miscProcess->cpuTime * 100) / schedulerTime);

              // Reset the process' cpuTime counter
              miscProcess->cpuTime = 0;
            }

          // This will change the state of a waiting process to
          // "ready" if the specified "waiting reason" has come to pass
          if (miscProcess->state == proc_waiting)
            {
              // If the process is waiting for a specified time.  Has the
              // requested time come?
              if ((miscProcess->waitUntil != 0) &&
                  (miscProcess->waitUntil < theTime))
                // The process is ready to run
                miscProcess->state = proc_ready;
              else
                // The process must continue waiting
                continue;
            }

          // This will dismantle any process that has identified itself
          // as finished
          else if (miscProcess->state == proc_finished)
            {
              kernelMultitaskerKillProcess(miscProcess->processId, 0);

              // This removed it from the queue and placed another process
              // in its place.  Decrement the current loop counter
              count--;
              continue;
            }

          else if (miscProcess->state != proc_ready)
            // Otherwise, this process should not be considered for
            // execution in this time slice (might be stopped, sleeping,
            // or zombie)
            continue;

          // If the process is of the highest (real-time) priority, it
          // should get an infinite weight
          if (miscProcess->priority == 0)
            processWeight = 0xFFFFFFFF;

          // Else if the process is of the lowest priority, it should
          // get a weight of zero
          else if (miscProcess->priority == (PRIORITY_LEVELS - 1))
            processWeight = 0;

          // If this process has yielded this timeslice already, we
          // should give it a low weight this time so that high-priority
          // processes don't gobble time unnecessarily
          else if (schedulerSwitchedByCall &&
                   (miscProcess->yieldSlice == theTime))
            processWeight = 0;

          // Otherwise, calculate the weight of this task, using the
          // algorithm described above
          else
            processWeight = (((PRIORITY_LEVELS - miscProcess->priority) *
                              PRIORITY_RATIO) + miscProcess->waitTime);

          if (processWeight < topProcessWeight)
            {
              // Increase the waiting time of this process, since it's not
              // the one we're selecting
              miscProcess->waitTime += 1;
              continue;
            }
          else
            {
              if (nextProcess)
                {
                  // If the process' weight is tied with that of the
                  // previously winning process, it will NOT win if the
                  // other process has been waiting as long or longer
                  if ((processWeight == topProcessWeight) &&
                      (nextProcess->waitTime >= miscProcess->waitTime))
                    {
                      miscProcess->waitTime += 1;
                      continue;
                    }
                  else
                    {
                      // We have the currently winning process here.
                      // Remember it in case we don't find a better one,
                      // and increase the waiting time of the process this
                      // one is replacing
                      nextProcess->waitTime += 1;
                    }
                }

              topProcessWeight = processWeight;
              nextProcess = miscProcess;
            }
        }

      if ((schedulerTimeslices % CPU_PERCENT_TIMESLICES) == 0)
        // Reset the schedulerTime counter
        schedulerTime = 0;

      // We should now have selected a process to run.  If not, we should
      // re-start the loop.  This should only be likely to happen if some
      // goombah kills the idle task.  Starting the loop again like this
      // might simply result in a hung system -- but maybe that's OK because
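The weighting arithmetic above is easy to check in isolation. Here is a minimal standalone sketch (not part of the Visopsys source) that reproduces the worked example from the scheduler's comments, assuming the same illustrative constants: 4 priority levels and a priority ratio of 3.

#include <stdio.h>

/* Illustrative values from the comments above, not the real kernel macros */
#define PRIORITY_LEVELS 4
#define PRIORITY_RATIO  3

/* The scheduler's weight formula: inverse priority scaled by the
   priority ratio, plus accumulated waiting time */
static unsigned processWeight(unsigned priority, unsigned waitTime)
{
  return (((PRIORITY_LEVELS - priority) * PRIORITY_RATIO) + waitTime);
}

int main(void)
{
  /* Task 1: priority 0, waiting 7; Task 2: priority 2, waiting 12 */
  printf("task1 = %u\n", processWeight(0, 7));   /* 19 <- winner */
  printf("task2 = %u\n", processWeight(2, 12));  /* 18 */

  /* After task 2 has waited two more timeslices, it overtakes task 1 */
  printf("task2 = %u\n", processWeight(2, 14));  /* 20 <- winner */

  return 0;
}

Note that real-time (priority 0) and background (PRIORITY_LEVELS - 1) processes bypass this formula entirely in the loop above, receiving fixed weights of 0xFFFFFFFF and 0 respectively.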
