compiled for a small code model (in either the small or compact memory
model) and as such do not have the correct return instructions.
You should recompile the modules so that all the modules are
compiled for the same memory model.
Combining source modules compiled for different memory models is very
difficult and often leads to strange bugs.
If your program has special considerations and this reference causes
you problems, there is a "work-around".
You could resolve the reference with a PUBLIC declaration in an
assembler file or code the following in &cmpname..
.millust begin
/* rest of your module */

void _small_code( void )
{}
.millust end
.pc
The code generator will generate a single RET instruction
with the public symbol
.id _small_code_
attached to it.
The common epilogue optimizations will probably combine this
function with another function's RET instruction and you will not
even pay the small penalty of one byte of extra code.
.np
There may be another cause of this problem: the "main" function must
be entered in lower case letters ("Main" and "MAIN" are not
identified as being the same as "main" by the compiler).
The compiler will identify the module that contains the definition
of the function "main" by creating the public definition of either
.id _small_code_
or
.id _big_code_
depending on the memory model it was compiled in.
.ix 'undefined references' '_big_code_'
.ix '_big_code_'
.mnote _big_code_
Your module that contains the "main" entry point has been compiled
with a 16-bit small code model (small or compact).
The modules that have this undefined reference have been compiled in
16-bit big code models (medium, large, or huge).
You should recompile the modules so that all the modules are
compiled in the same memory model.
See the explanation for
.id _small_code_
for more details.
.ix 'undefined references' 'main_'
.ix 'main_'
.mnote main_
All C programs
.if '&target' ne 'QNX' .do begin
.ct , except applications developed specifically for Microsoft Windows,
.do end
must have a function called "main".
The name "main" must be in lower case for the compiler to generate
the appropriate information in the "main" module.
.*
.if '&target' ne 'QNX' .do begin
.mnote WINMAIN
.ix 'undefined references' 'WinMain'
.ix 'WinMain'
All Windows programs must have a function called "WinMain".
The function "WinMain" must be declared "pascal" in order that the
compiler generate the appropriate name in the "WinMain" module.
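For reference, a 16-bit Windows entry point is typically declared
along the following lines; this is only a sketch, and the exact
parameter types come from the Windows header files and may differ
slightly between versions:
.millust begin
#include <windows.h>

/* sketch of a minimal 16-bit Windows entry point; the PASCAL
   macro from <windows.h> supplies the "pascal" calling convention */
int PASCAL WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance,
                    LPSTR lpCmdLine, int nCmdShow )
{
    /* a real program would register a window class and run a
       message loop here */
    return( 0 );
}
.millust end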
.do end
.endnote
.do end
.*
.if '&lang' eq 'FORTRAN 77' .do begin
.*
.section Why local variable values are not maintained between subprogram calls
.*
.np
.ix 'variables' 'set to zero'
.ix 'SAVE'
.ix 'initializing' 'variables'
By default, the local variables for a subprogram are stored on the
stack and are not initialized.
When the subprogram returns, the variables are popped off the stack
and their values are lost.
If you want to preserve the value of a local variable after the
execution of a RETURN or END statement in a subprogram, you can use
the FORTRAN 77 SAVE statement or the "save" compiler option.
.np
Using the &lang SAVE statement in your program allows you to
explicitly select which values you wish to preserve.
The SAVE statement ensures that space is allocated for a local
variable from static memory and not the stack.
Include a SAVE statement in your &lang code for each local variable
that you wish to preserve.
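For example, the following sketch shows a subprogram (the subprogram
and variable names are purely illustrative) that uses a SAVE
statement to retain a counter between calls:
.millust begin
      SUBROUTINE TALLY
*     NCALLS is retained between calls because of the SAVE
*     statement; without it, the value would be lost on RETURN.
      INTEGER NCALLS
      SAVE NCALLS
      DATA NCALLS /0/

      NCALLS = NCALLS + 1
      PRINT *, 'TALLY HAS BEEN CALLED ', NCALLS, ' TIMES'
      END
.millust end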
.np
To automatically preserve all local variables, you can use the "save"
compiler option.
This option adds code to initialize and allocate space for each local
variable in the program.
This is equivalent to specifying a SAVE statement.
The "save" option makes it easier to ensure that all the variables are
preserved during program execution, but it increases the size of the
code that is generated.
You may wish to use this option during debugging to help diagnose bugs
caused by corrupted local values.
Usually, it is more efficient to use SAVE statements rather than the
general "save" compiler option.
You should selectively use the SAVE statement for each subprogram
variable that you want to preserve until the next call.
This leads to smaller code than the "save" option and avoids the
overhead of allocating space and initializing values unnecessarily.
.do end
.*
.if '&lang' eq 'C' or '&lang' eq 'C/C++' .do begin
.*
.section Why my variables are not set to zero
.*
.np
.ix 'initialized global data'
.ix 'variables' 'set to zero'
.ix 'clearing' 'variables'
The linker is the program that handles the organization of code and
data and builds the executable file.
C guarantees that all global and static uninitialized data
will contain zeros.
The "BSS" region contains all uninitialized global and static data for
C programs (the name "BSS" is a remnant of the early UNIX C
compilers).
Most C compilers take advantage of this guarantee by not explicitly
storing the zeros in the executable file, which results in smaller
executable files.
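For example, in the following sketch (the variable names are purely
illustrative), the uninitialized variables are placed in the "BSS"
region and contribute nothing to the size of the executable file,
while the initialized variable is stored in the file:
.millust begin
int  hit_count;        /* uninitialized: placed in the "BSS" region
                          and guaranteed to start out as zero       */
char buffer[1024];     /* also "BSS": the 1024 zero bytes are not
                          stored in the executable file             */
int  limit = 10;       /* initialized: stored in the executable file */
.millust end
.pc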
.if '&target' ne 'QNX' .do begin
In order for the program to work correctly, there must be some code
(that will be executed before "main") that will clear the "BSS"
region.
The code that is executed before "main" is called "startup" code.
.ix 'BSS segment'
The linker must indicate to the startup code where the "BSS" region is
located.
In order to do this, the &lnkname (&lnkcmdup) treats the "BSS"
segment (region) in a special manner.
.do end
.if '&target' eq 'QNX' .do begin
The &lnkname (&lnkcmdup) treats the "BSS" segment (region) in a
special manner.
.do end
The special variables '_edata' and '_end' are constructed by the
&lnkname so that the startup code knows the beginning and end of
the "BSS" region.
.if '&target' ne 'QNX' .do begin
.np
Some users may prefer to use the linker provided by another compiler
vendor for development.
In order to have the program execute correctly, some extra care must
be taken with other linkers.
.ix 'Microsoft' 'LINK'
.ix 'Microsoft' 'LINK386'
.ix 'LINK'
.ix 'LINK386'
For instance, with the Microsoft linker (LINK) you must ensure that
the '/DOSSEG' command line option is used.
.ix 'Phar Lap' '386LINK'
.ix '386LINK'
With the Phar Lap Linker, you must use the "-DOSORDER" command line
option.
In general, if you must use other linkers, extract the module that
contains
.id _cstart
from
.fi clib?.lib
(? will change depending on the memory model) and specify the object
file containing
.id _cstart
as the first object file to be processed by the linker.
The object file will contain the information necessary for the linker
to build the executable file correctly.
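For example, assuming the startup module has been extracted into a
hypothetical file named CSTART.OBJ, a Microsoft LINK invocation might
look something like the following (the other file names are also
purely illustrative):
.millust begin
LINK /DOSSEG CSTART.OBJ MAIN.OBJ OTHER.OBJ, MYPROG.EXE;
.millust end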
.do end
.do end
.*
.if '&lang' eq 'C' or '&lang' eq 'C/C++' .do begin
.*
.section What does "size of DGROUP exceeds 64K" mean for 16-bit applications?
.*
.np
.ix 'DGROUP size exceeds 64K'
.ix 'size of DGROUP exceeds 64K'
This question applies to 16-bit applications.
Data is stored in two types of segments, classified as "near" and
"far".
There is only one "near" segment, while there may be many "far"
segments.
The single "near" segment is provided for quick access to data but is
limited to less than 64K in size.
Conversely, the "far" segments can hold more than 64K of data but
suffer from a slight execution time penalty for accessing the data.
The "near" segment is linked by arranging for the different parts of
the "near" segment to fall into a group called DGROUP.
See the section entitled "Memory Layout" in
.if '&target' eq 'QNX' .do begin
this guide
.do end
.el .do begin
the
.book &lnkname User's Guide
.do end
for more details.
.np
The 8086 architecture cannot support segments larger than 64K.
As a result, if the size of DGROUP exceeds 64K, the program cannot
execute correctly.
The basic idea behind solving this problem is to move data out of the
single "near" segment into one or more "far" segments.
Of course, this solution is not without penalties.
The penalty is decreased execution speed as a result of accessing
"far" data items.
The magnitude of this execution speed penalty depends on the behavior
of the program and, as such, cannot be predicted (i.e., we cannot say
that the program will take precisely 5% longer to execute).
The specific solution to this problem depends on the memory model
being used in the compilation of the program.
.np
If you are compiling with the tiny, small, or medium memory models
then there are two possible solutions.
The first solution involves changing the program source code so that
any large data items are declared as "far" data items and accessed with
"far" pointers.
Adding the "far" keyword to the source code makes it non-portable,
but this might be an acceptable tradeoff.
See the "Advanced Types" chapter in the
.book &company C Language Reference
manual for details on the use of the "near" and "far" keywords.
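For example, a large table could be moved out of the "near" segment
as in the following sketch (the __far spelling of the keyword is used
here; the array name and size are purely illustrative):
.millust begin
/* the "far" keyword removes this large table from DGROUP
   and places it in its own "far" segment                  */
char __far big_table[50000];

/* a "far" pointer must be used to access it */
char __far *table_ptr = big_table;
.millust end
.pc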
The second solution is to change memory models and use the large or
compact memory model.
The use of the large or compact memory model allows the compiler to
use "far" segments to store data items that are larger than 32K.
.np
The large and compact memory models will only allocate data items into
"far" segments if the size of the data item exceeds 32K.
If the size of DGROUP exceeds 64K then a good solution is to reduce
the size threshold so that smaller data items will be stored into "far"
segments.
The relevant compiler option to accomplish this task is "zt<num>".
The "zt" option sets a data size threshold which, if exceeded, will
allocate the data item in "far" segments.
For instance, if the option "zt100" is used, any data item larger than
100 bytes will be allocated in "far" segments.
A good starting value for the data threshold is 32 bytes (i.e.,
"zt32").
The number of compilations necessary to reduce the size of DGROUP for
a successful link with &lnkcmdup depends on the program.
At a minimum, any files that declare a lot of data items should be
recompiled.
The "zt<num>" option should be used for all subsequent compiles, but
it is not necessary to recompile all of the source files in the
program.
If the "DGROUP exceeds 64K" &lnkcmdup error persists, the threshold
used in the "zt<num>" option should be reduced and all of the source
files should be recompiled.
.*
.do end
.*
.if '&lang' eq 'C' or '&lang' eq 'C/C++' .do begin
.*
.section What does "NULL assignment detected" mean in 16-bit applications?
.*
.np
.ix 'NULL assignment detected'
.ix 'NULL assignment detected' 'debugging'
.ix 'debugging' 'NULL assignment detected'
.ix 'debugging' 'memory bugs'
.ix 'debugging' 'techniques'
.ix 'ISO/ANSI standard' 'NULL'
This question applies to 16-bit applications.
The C language makes use of the concept of a
.id NULL
pointer.
The
.id NULL
pointer cannot be dereferenced according to the ISO standard.
The &cmpname compiler cannot signal the programmer when the
.id NULL
address has been written to or read from because the Intel-based
personal computers do not have the necessary hardware support.
The best that the run-time system can do is help programmers find
these sorts of errors through indirect means.
The lower 32 bytes of "near" memory have been seeded with the value
0x01.
The C run-time function "_exit" checks these 32 bytes to ensure that
they have not been written over.
Any modification of these 32 bytes results in the message "NULL
assignment detected" being printed before the program terminates.
.np
Here is an overview of a good debugging technique for this sort of
error:
.autopoint
.point
use the &dbgname to debug the program
.point
let the program execute
.point
find out what memory has been incorrectly modified
.point
set a watchpoint on the modified memory address
.point
restart the program with the watchpoint active
.point
let the program execute, for a second time
.point
when the memory location is modified, execution will be suspended
.endpoint
.np
We will go through the commands that are executed for this debugging
session.
First of all, we invoke the &dbgname from the command line as
follows:
.millust begin
&prompt.&dbgcmd myprog
.millust end
.pc
Once we are in the debugger, type:
.millust begin
DBG>go
.millust end
.pc
The program will now execute to completion.
At this point we can look at the output screen with the debugger
command "FLIP".
.millust begin
DBG>flip
.millust end
.pc
We would see that the program had the run-time error "NULL assignment
detected".
At this point, all we have to do is find out what memory locations
were modified by the program.
.np
The following command will display the lower 16 bytes of "near" memory.
.millust begin
DBG>examine __nullarea
.millust end
.pc
The command should display 16 bytes containing the value 0x01.
Press the space bar to display the next 16 bytes of memory; these
should also contain the value 0x01.
Notice, however, that the following example output has two bytes that
have been erroneously modified by the program.
.code begin
__nullarea     01 01 56 12 01 01 01 01-01 01 01 01 01 01 01 01
__nullarea+16  01 01 01 01 01 01 01 01-01 01 01 01 01 01 01 01
.code end
.np
The idea behind this debugging technique is to set a watchpoint on the
modified memory so that execution of the program will be suspended
when it modifies the memory.
The following command will "watch" the memory for you.
.millust begin
DBG>watch __nullarea+2
.millust end
.pc
There has to be a way to restart the program without leaving the
&dbgname so that the watchpoint is active during a subsequent
execution of the program.
The &dbgname command "NEW" will reload the program and prepare for a
new invocation of the program.
.millust begin
DBG>new
DBG>go
.millust end
.pc
The &dbgname comman
