<!doctype html public "-//IETF//DTD HTML//EN">
<HTML>
<HEAD>
<TITLE>EMBEDDED SYSTEMS</TITLE>
<META NAME="GENERATOR" CONTENT="Internet Assistant for Word 1.00">
<META NAME="AUTHOR" CONTENT="Brian Brown">
</HEAD>
<BODY>
<CENTER><H2>Embedded Code Software</H2></CENTER>
<CENTER><B>Copyright Brian Brown, 1989-1995</B></CENTER>
<A HREF="default.htm"><IMG SRC="images/menu.gif" ALT="menu"></A>
<A HREF="swpartb.htm"><IMG SRC="images/next.gif" ALT="next"></A>
<P>
<B>Software Partitioning</B><BR>
This term describes the way in which software is divided into a
number of modules and the manner in which those modules are executed.
Partitioning is done by determining the
<UL>
<LI>number of modules required
<LI>order of execution of modules
<LI>special needs of modules (response times, data sharing, buffers)
<LI>method of module inter-communication
</UL>
<P>
<HR>
<B>Characteristics of embedded routines</B><BR>
<UL>
<LI>NEVER-ENDING<BR>
It is common for modules or tasks (normally written as functions
or procedures) to run forever.
<PRE>
for( ; ; ) {
    ......
    ......
}
</PRE>
A for loop is generally used, as most compilers will replace
this with the following assembler equivalent: a label and a
short jump instruction.
<PRE>
for001:
        ......
        ......
        sjmp    for001
</PRE>
<P>
<LI>INTER-COMMUNICATE<BR>
Tasks in an embedded system communicate with other tasks using
a variety of accepted methods. Task communication helps split
the functionality between high-level and low-level software, and
between immediate and delayed processing. It also provides the
means of implementing task synchronisation and co-operation.
<P>
<UL>
<LI><B>Shared variables</B><BR>
Tasks inter-communicate via the use of variables which
are accessible to both tasks. These variables are used
to transfer data or synchronize the task activity.
<PRE>
static int data_ready = false, data_used = false;  /* shared variables */

task1() {
    for( ; ; ) {
        if( inportb( 0x300 ) == 0x80 ) {  /* wait till data arrives   */
            capture_data();               /* read data into queue     */
            data_ready = true;            /* signal data is available */
        }
        if( data_used == true ) {         /* if data has been used,   */
            data_ready = false;           /* reset both flags so the  */
            data_used  = false;           /* next item can be flagged */
        }
    }
}

task2() {
    for( ; ; ) {
        while( data_ready == false )      /* wait till data available */
            ;
        process_data();                   /* process the data         */
        data_used = true;                 /* indicate data used up    */
    }
}
</PRE>
In this example, task1() is a producer of data, and
task2() is a consumer of data. The two tasks synchronize
their activity by means of the two shared variables
<i>data_ready</i> and <i>data_used</i>. Note that this scheme
assumes the combined execution of the two tasks is faster than
the arrival rate of the data; otherwise incoming data is lost.
<P>
In cases where the arrival rate of the data is faster, a simple
method is to use a counting semaphore to record the amount of
data that has arrived. The semaphore count would need to be
implemented as a protected variable.
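<P>
The counting-semaphore approach might be sketched as follows. The
protect()/release() primitives are the critical-region calls used later
in these notes; they are stubbed out here so the fragment stands alone,
and the function names are illustrative rather than actual kernel calls.

```c
/* Sketch of a counting semaphore for the case where data arrives faster
   than it is consumed.  protect()/release() stand in for the critical
   region primitives used elsewhere in these notes; here they are stubs. */
static volatile int data_count = 0;   /* items arrived, not yet processed    */

static void protect( void ) { }       /* stub: would disable task switching  */
static void release( void ) { }       /* stub: would re-enable switching     */

static void signal_data( void ) {     /* producer: one more item arrived     */
    protect();
    data_count++;
    release();
}

static int wait_data( void ) {        /* consumer: claim one item, if any    */
    if( data_count == 0 )
        return 0;                     /* nothing to consume yet              */
    protect();
    data_count--;
    release();
    return 1;
}
```

With a count rather than a single flag, the producer can run ahead of
the consumer by several items without data being lost.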
<P>
<LI><B>Common shared memory buffers</B><BR>
Shared buffers are used where the incoming data is NOT
all processed before the next data arrives. The size of
the queue is determined by the maximum arrival rate of
the data versus the extraction rate over a given period
of time.
<PRE>
static unsigned char buffer[2000];          /* shared memory pool              */
static unsigned int inptr = 0, outptr = 0;  /* buff pointers for insert, delete*/

task1() {
    for( ; ; ) {
        if( inportb( 0x300 ) == 0x80 ) {    /* if data is available      */
            buffer[inptr] = inportb(0x301); /* read data into queue      */
            protect();                      /* enter critical region     */
            inptr++;                        /* adjust insert pointer     */
            if( inptr >= 2000 ) inptr = 0;  /* treat queue as circular   */
            release();                      /* exit critical region      */
        }
    }
}

task2() {
    for( ; ; ) {
        while( outptr == inptr )            /* while queue empty         */
            ;                               /* do nothing                */
        process_data( buffer[outptr] );     /* else process queued data  */
        outptr++;                           /* adjust the delete pointer */
        if( outptr >= 2000 ) outptr = 0;    /* treat queue as circular   */
    }
}
</PRE>
Note that there is no need for task2() to protect the
variable <i>outptr</i>, as its value is not accessed by
the producer task1().
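<P>
One consequence of the inptr/outptr arrangement is that the number of
bytes queued can always be derived from the two pointers. A hypothetical
helper (not part of the original example; the 2000-byte size matches the
buffer above):

```c
#define BUFSIZE 2000                  /* matches buffer[2000] above          */

/* Number of bytes currently queued, given the insert and delete pointers.
   Illustrative helper; not part of the original example.                  */
static unsigned int queued( unsigned int inptr, unsigned int outptr ) {
    if( inptr >= outptr )
        return inptr - outptr;
    return BUFSIZE - outptr + inptr;  /* insert pointer has wrapped          */
}
```

Note that with this scheme a completely full buffer is indistinguishable
from an empty one, so in practice one slot is left unused.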
<P>
<LI><B>Block/wakeup</B><BR>
The previous examples have shown that most of the time
spent by tasks is waiting for data to be available. By
yielding the processor when data is not ready, a task
makes more efficient use of the processor.
<P>
The system must ensure that when data arrives, it will
not be too long before the task retests to see if the
data has arrived. There is no point in a task releasing the
processor to another task if it cannot regain control quickly
enough, since incoming data would then be missed.
<P>
The general features of this system are,
<UL>
<LI>a task can block when data is unavailable - similar to a yield
<LI>a blocked task is awakened by another task, e.g., the consumer is woken when data arrives
<LI>blocked tasks are not considered for CPU time by the scheduler
</UL>
<PRE>
task1() {
    for( ; ; ) {
        while( inportb( 0x300 ) != 0x80 )   /* wait for data to arrive    */
            yield();                        /* give CPU to another task   */
        buffer[inptr] = inportb(0x301);     /* read data into queue       */
        protect();                          /* don't switch tasks whilst  */
        inptr++;                            /* altering a shared variable */
        if( inptr >= 2000 ) inptr = 0;      /* treat queue as circular    */
        release();                          /* okay to switch tasks now   */
        wakeup( task2 );                    /* unblock task2()            */
    }
}

task2() {
    for( ; ; ) {
        block();                            /* wakes up when data in queue*/
        ...;
        ...;
    }
}
</PRE>
Task2() immediately blocks upon execution, and consumes no
processor time until woken up by task1(), which occurs when
data arrives. Re-arranging the tasks to co-operate using the
block/wakeup system has freed up processor time, but slightly
increased the system overhead.
<P>
<LI><B>Message queues and pipes</B><BR>
Tasks communicate via message queues (memory buffers). Two
kernel primitives, sendmessage() and receivemessage(), provide
the mechanism for task inter-communication.
<P>
The general features of message systems are,
<UL>
<LI>data/commands are sent via messages
<LI>tasks respond to messages by performing actions
<LI>the arrival of a message wakes a task up
<LI>a task waiting on a message does not receive CPU time
<LI>a task sending a message wakes up another task
</UL>
<PRE>
typedef struct {
    char buff[maxmsglen];                   /* message data               */
    int len;                                /* length of message          */
} MSG;

task1() {
    MSG messg;
    for( ; ; ) {
        while( inportb( 0x300 ) != 0x80 )   /* wait for data to arrive    */
            ;
        messg.buff[0] = inportb(0x301);     /* construct message          */
        messg.len = 1;                      /* length of message          */
        sendmessage( task2, &messg );       /* send message to task2()    */
    }
}

task2() {
    MSG *messg;
    for( ; ; ) {
        messg = receivemessage();           /* block till message arrives */
        processdata( messg );               /* message arrived, process it*/
    }
}
</PRE>
In the above example, task1() formats a message which
consists of the data for task2() to process. This data
is then sent via a kernel primitive which copies the
message into task2()'s addressable memory space.
<P>
Task2(), when executed, immediately blocks, and is woken up
when a message is received. Upon processing the message, it
immediately blocks again and waits for the arrival of the
next message.
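<P>
The kernel primitives themselves can be approximated by a single-slot
mailbox. The sketch below is illustrative only: a real kernel would
block the receiver and wake it on delivery, whereas here a full flag
is simply polled, and all names are assumptions rather than the actual
kernel calls.

```c
#include <string.h>

#define MAXMSGLEN 16

typedef struct {
    char buff[MAXMSGLEN];             /* message data                        */
    int  len;                         /* length of message                   */
} MSG;

static MSG mailbox;                   /* one pending message                 */
static int mail_full = 0;             /* set while a message is waiting      */

static int sendmessage( MSG *m ) {    /* returns 1 on delivery, 0 if full    */
    if( mail_full )
        return 0;                     /* receiver has not taken last message */
    memcpy( &mailbox, m, sizeof(MSG) ); /* copy into receiver's space        */
    mail_full = 1;                    /* a real kernel would wakeup receiver */
    return 1;
}

static int receivemessage( MSG *m ) { /* returns 1 if a message was taken    */
    if( !mail_full )
        return 0;                     /* a real kernel would block here      */
    memcpy( m, &mailbox, sizeof(MSG) );
    mail_full = 0;
    return 1;
}
```

A full message queue extends this to an array of slots managed with
insert and delete pointers, exactly like the circular buffer shown
earlier.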
<P>
</UL>
</UL>
<HR>
<B>INITIALISE THE COMPUTER HARDWARE</B><BR>
It is necessary for the software to provide the testing and initialization
of the hardware system and peripheral chips. This function is normally
provided by the operating system, but must be provided within the program
for embedded applications.
<P>
Primary features of this are,
<UL>
<LI>test cpu, ram, rom
<LI>setup stack, code and data areas
<LI>initialise data variables
<LI>setup interrupt handlers and initialise peripheral devices
<LI>see the handout on setting up the stack, initialising data variables, etc
</UL>
<PRE>
main() {
    asm cli;                            /* disable interrupts         */
    setvect( 0x02, powerdown );         /* setup NMI (vector 2)       */
    setvect( 0x0b, irq3_handler );      /* IRQ3 incoming data handler */
    outportb( 0x21, 0xb4 );             /* initialise 8259 PIC mask   */
    asm sti;                            /* re-enable interrupts       */
    ...;
    ...;
}
</PRE>
The above example sets an interrupt vector to handle power-down
situations, and another to handle the arrival of data via IRQ 3. The
PIC mask (0xb4) enables interrupts 0 (timer channel 0, used for
dynamic RAM refresh), 1 (keyboard), 3 (incoming data) and 6 (floppy
diskette).
<P>
The setting of interrupt vectors has been done with interrupts disabled.
<P>
<HR>
<B>PERFORM ERROR RECOVERY AND EXCEPTION PROCESSING</B><BR>
The embedded application must provide error recovery and exception
processing. These are normally handled by the operating system, but
in embedded applications they must be built into all tasks.
<P>
Typical errors and exceptions are,
<UL>
<LI>divide by zero
<LI>power down
<LI>memory error
<LI>software errors
<LI>watchdog timers
<LI>range checking
<LI>data validation (verify data before passing onto other routines)
</UL>
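<P>
Of the items above, a watchdog timer is easily sketched in software.
In the fragment below the reload value and reset_count are illustrative;
real hardware would reset the processor rather than count the event.

```c
#define WATCHDOG_RELOAD 100           /* ticks before the system is reset    */

static int watchdog = WATCHDOG_RELOAD;
static int reset_count = 0;           /* stands in for a hardware CPU reset  */

static void kick_watchdog( void ) {   /* called regularly by healthy tasks   */
    watchdog = WATCHDOG_RELOAD;
}

static void timer_tick( void ) {      /* called from the RTC interrupt       */
    if( --watchdog <= 0 ) {
        reset_count++;                /* real hardware would reset the CPU   */
        watchdog = WATCHDOG_RELOAD;
    }
}
```

Any task that hangs stops calling kick_watchdog(), and the system is
forcibly restarted when the count expires.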
<P>
In the event of failure, a simple retry loop provides a means of
trying to re-send the original byte.
<PRE>
int sendmsg( char address, char *message ) {
    int count;
    char ch;

    ch = address;                       /* first byte sent is the address  */
    do {
        count = 5;                      /* reset retry count for each byte */
        xmitbyte( ch );
        while( (count > 0) && (i2cerr == true) ) {
            xmitbyte( ch );             /* retry the failed byte           */
            count--;
        }
        if( i2cerr == true )
            return ERROR;               /* all retries failed              */
        ch = *message++;                /* next byte of the message        */
    } while( ch != '\0' );
    return NOERROR;
}
</PRE>
The following example uses an interrupt routine for power failure.
The normal time-frame between power fail detection and loss of the +5v
rail is about 5 milliseconds. During this time, an 80C552 microcontroller
will execute an average of 5000 instructions.
<PRE>
void interrupt powerdown( void )
{
    asm cli;                            /* disable interrupts        */
    outportb( 0x250, 0x01 );            /* switch in battery backup  */
    asm hlt;                            /* enter shutdown mode       */
}
</PRE>
When the power returns, a power-on reset signal will restart the
processor. As part of the initialization sequence, the start-up code will
check the warm boot flag and skip the RAM initialization.
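<P>
The warm-boot check might be sketched as follows. WARM_MAGIC and
warm_flag are assumptions; in a real system warm_flag would live in
battery-backed RAM so that its value survives the power failure.

```c
#define WARM_MAGIC 0xA55A             /* unlikely in uninitialised RAM       */

static unsigned int warm_flag = 0;    /* imagine this in battery-backed RAM  */

static int startup( void ) {          /* returns 1 for warm boot, 0 for cold */
    if( warm_flag == WARM_MAGIC )
        return 1;                     /* RAM contents preserved: skip init   */
    /* cold boot: test and initialise RAM here ...                           */
    warm_flag = WARM_MAGIC;           /* mark subsequent resets as warm      */
    return 0;
}
```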
<P>
<HR>
<B>Scheduling of tasks</B><BR>
The tasks to run in an embedded system can be scheduled in a wide
variety of different ways. The actual scheduling system used will depend
mainly upon the time constraints imposed by the system for the collecting,
processing and storage of the data involved.
<P>
Scheduling of tasks can be <B>asynchronous</B> or <B>synchronous</B>.
<P>
In <B>asynchronous</B> systems, the scheduling of tasks cannot be predicted,
because the events associated with them are normally driven by external
devices. This lends itself to interrupt processing.
<P>
In <B>synchronous</B> systems, each task's execution is dependent upon the
previous task's completion, so the flow of information through the
system is more orderly and predictable. Simple mechanisms can be used
in such cases.
<P>
We shall now look at some scheduling methods.
<P>
<HR>
<B>SEQUENCING (also known as SLOP scheduling)</B><BR>
Traditional programming has expressed programs as a sequence of tasks
executed in sequential order. The characteristics of this type of
scheduling are,
<UL>
<LI>each task runs to completion
<LI>tasks run in order of execution
<LI>simple
<LI>difficult to predict/guarantee response times
<LI>external events can be missed
</UL>
<PRE>
main() {
    for( ; ; ) {
        task1();
        task2();
        ...;
        ...;
        taskn();
    }
}
</PRE>
<HR>
<B>CO-OPERATIVE</B><BR>
Tasks give up the processor voluntarily at some stage in their execution
cycle. This normally occurs when waiting for data arrival or device
ready signals. The design is to free the processor for other tasks which
are not waiting on devices.
<P>
Any task which monopolizes the system will have an adverse effect
upon system performance and response times.
<P>
The features of co-operative scheduling are
<UL>
<LI>each task gives up the processor voluntarily
<LI>each task does not run to completion
<LI>easy to implement
<LI>difficult to predict/guarantee response times (calculate worst case)
<LI>external events can be missed
</UL>
<P>
<PRE>
task1() {
    for( ; ; ) {
        if( inportb( 0x300 ) != 0x80 )
            transfer();                 /* give up CPU if data not ready */
        else {
            ...;
        }
    }
}

task2() {
    for( ; ; ) {
        if( outptr == inptr )
            transfer();                 /* give up CPU if queue empty    */
        else {
            ...;
        }
    }
}
</PRE>
<HR>
<B>PRE-EMPTIVE</B><BR>
A real-time clock interrupts the processor at regular intervals, and
a kernel executive forces switching of the processor between the tasks.
This system enforces regular task execution, thus response times can
be calculated. Tasks will be in various stages of execution, and the
kernel executive schedules tasks for execution according to preset
criteria.
<P>
The general features of pre-emptive schedulers are,
<UL>
<LI>CPU is sliced between tasks at regular intervals
<LI>uses RTC to generate an acceptable frame rate
<LI>each task gets acted upon
<LI>response times more predictable
<LI>system overhead introduced
<LI>can combine co-operative scheduling techniques
<LI>greater range of task control possible (block, suspend, wakeup, resume)
<LI>ready queue can be prioritized
<LI>more difficult to design, test, debug and implement
<LI>shared data may need protecting with semaphores
</UL>
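<P>
The time-slicing itself can be reduced to a very small sketch. On real
hardware, tick() would be the real-time clock interrupt handler and
would save and restore complete task contexts; here the tasks are
reduced to run counters so that only the round-robin dispatch order is
shown.

```c
#define NTASKS 3

static int runs[NTASKS];              /* time slices given to each task      */
static int current = 0;               /* task currently holding the CPU      */

static void tick( void ) {            /* called on every RTC interrupt       */
    runs[current]++;                  /* current task used this time slice   */
    current = (current + 1) % NTASKS; /* pre-empt: next task in ready queue  */
}
```

A prioritised ready queue would replace the simple modulo step with a
search for the highest-priority task that is not blocked.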
<P>
<HR>
<B>FOREGROUND AND BACKGROUND TASKS</B><BR>
The software is partitioned into foreground and background tasks. The
foreground task ensures adequate response times, whilst the background
task handles the less time-critical processing.
</BODY>
</HTML>