* tests/tm_basic.cxx:
Reenable for the synthetic target
2001-04-17 Jesper Skov <jskov@redhat.com>
* cdl/kernel.cdl: Do cache tests on E7T.
2001-04-05 Nick Garnett <nickg@cygnus.co.uk>
* tests/flag1.cxx: Apply same changes here as were applied to
kflag1 on 2000-07-17. This allows this test to run to completion
on slow targets, especially simulators.
* tests/stress_threads.c: Reduce run time even further in
simulator runs where instrumentation is enabled.
2001-04-03 Jesper Skov <jskov@redhat.com>
* tests/dhrystone.c: Fix feature check.
2001-03-28 Jonathan Larmour <jlarmour@redhat.com>
* cdl/kernel.cdl: Only need to compile dbg_gdb.cxx with
CYGDBG_KERNEL_DEBUG_GDB_THREAD_SUPPORT
* src/debug/dbg_gdb.cxx: Add new dbg_thread_id() function.
2001-02-23 Jonathan Larmour <jlarmour@redhat.com>
* include/thread.inl (attach_stack): Check for non-NULL stack base.
2001-02-11 Jonathan Larmour <jlarmour@redhat.com>
* tests/stress_threads.c: CYGINT_ISO_STDIO_FORMATTED_IO needs a
#ifdef not an #if.
* tests/dhrystone.c: Ditto.
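Illustration of the distinction (a minimal sketch of the preprocessor behaviour only; it assumes the relevant pkgconf header has already been included, and does not show how the configuration tools actually emit the interface symbol):

    // Presence test: works whether or not the symbol carries a numeric value.
    #ifdef CYGINT_ISO_STDIO_FORMATTED_IO
        /* formatted I/O is configured in */
    #endif

    // Value test: only valid if the symbol is guaranteed to expand to a number.
    // An empty definition fails to compile; a missing one silently counts as 0.
    #if CYGINT_ISO_STDIO_FORMATTED_IO
        /* ... */
    #endif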
2001-02-04 Jonathan Larmour <jlarmour@redhat.com>
* tests/kill.cxx: Increase delay for all targets, just in case some
are slow.
2001-01-30 Hugo Tyson <hmt@redhat.com>
* src/common/clock.cxx (rem_alarm): Must clear the enabled flag;
this disappeared in the changes to using clists of 2001-01-09.
Symptom was that an alarm, once disabled, could never be
re-attached to its counter because it claimed it already was.
Plus asserts with multiple disables - "bad counter object".
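In outline, the fix amounts to this (a hypothetical, heavily simplified sketch; member and helper names are invented, the real code is Cyg_Counter::rem_alarm() in clock.cxx):

    // Detaching an alarm must also mark it disabled, otherwise a later
    // add_alarm() sees enabled == true, concludes the alarm is still
    // attached to a counter, and the asserts fire.
    void rem_alarm(Cyg_Alarm *alarm)
    {
        alarm->unlink();            // remove from the counter's clist
        alarm->enabled = false;     // the step that had gone missing
    }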
2001-01-30 Hugo Tyson <hmt@redhat.com>
* src/common/thread.cxx (reinitialize): Following change of
2000-12-05, if CYGFUN_KERNEL_THREADS_STACK_CHECKING, this was
using the stack_base/stack_size variables directly to reinitialize
the stack area. This was wrong, and leaked store off the top and
bottom of the stacks because the "buffer zone" was carved off
repeatedly. Fix is to use the published APIs which compensate.
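Schematically, the leak looked like this (field and constant names are made up for illustration):

    // Buggy reinitialize(): re-derives the checked stack region from the
    // stored base/size on every call, so each reinit carves another guard
    // zone off both ends and the usable stack keeps shrinking.
    stack_base += GUARD_ZONE_SIZE;          // already applied by attach_stack()
    stack_size -= 2 * GUARD_ZONE_SIZE;      // shrinks again on every reinit
    // Fix: go through the published attach/init helpers, which compensate once.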
2001-01-26 Nick Garnett <nickg@cygnus.co.uk>
* include/mlqueue.hxx:
* src/sched/mlqueue.cxx:
Restored Cyg_ThreadQueue_Implementation::remove() since it must
clear the thread's queue pointer, which the base clist class
remove() does not.
2001-01-24 Jesper Skov <jskov@redhat.com>
* src/sched/mlqueue.cxx (highpri): Fix trace call.
2001-01-09 Nick Garnett <nickg@cygnus.co.uk>
* include/mlqueue.hxx:
* src/sched/mlqueue.cxx:
Converted to use clist.hxx list implementation. The main effect of
this is to clean up the code and class definitions since much of
what was part of the thread queue and thread classes now moves to
the DNode and CList classes.
* include/clock.hxx:
* src/common/clock.cxx:
Converted to use clist.hxx list implementation. This removes all
the explicit list manipulation code from the counter and alarm
classes, resulting in cleaner, easier to understand code.
* include/kapidata.h: Adjusted cyg_alarm struct to match Cyg_Alarm
using Cyg_DNode.
2000-12-22 Jonathan Larmour <jlarmour@redhat.com>
* include/thread.inl (check_stack): check word alignment with CYG_WORD
not cyg_uint32
Add extra stack checking for when stack limits are used.
(measure_stack_usage): New function to measure stack usage of the thread
(attach_stack): check word alignment with CYG_WORD not cyg_uint32
Initialize stack to preset value when measuring stack usage
(increment_stack_limit): Conditionalize here wrt
CYGFUN_KERNEL_THREADS_STACK_CHECKING and use the version in thread.cxx
instead if CYGFUN_KERNEL_THREADS_STACK_CHECKING is defined.
* src/common/thread.cxx (exit): If verbose stack measurement enabled,
output stack usage
(increment_stack_limit): Add version of this method when
CYGFUN_KERNEL_THREADS_STACK_CHECKING *is* defined. This will add
padding above the stack limit as necessary.
* include/thread.hxx (class Cyg_HardwareThread): Add
measure_stack_usage() member
* cdl/thread.cdl (CYGFUN_KERNEL_THREADS_STACK_MEASUREMENT):
Add to implement stack usage measurement.
* include/kapi.h (cyg_thread_measure_stack_usage): New function
* src/common/kapi.cxx (cyg_thread_measure_stack_usage): New function
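The measurement uses the usual stack-painting trick; a rough sketch follows (the fill pattern, function names and the assumption of a descending stack are illustrative, not the kernel's actual constants):

    #include <cstddef>

    static const unsigned int STACK_FILL = 0xDEADBEEF;   // assumed pattern

    // On thread attach: paint every word of the stack with the pattern.
    void paint_stack(unsigned int *base, std::size_t words)
    {
        for (std::size_t i = 0; i < words; i++)
            base[i] = STACK_FILL;
    }

    // Measurement scan: on a descending stack the low words are touched
    // last, so usage is the span from the first overwritten word up to
    // the top of the stack area.
    std::size_t measure_stack_usage(const unsigned int *base, std::size_t words)
    {
        std::size_t i = 0;
        while (i < words && base[i] == STACK_FILL)
            ++i;
        return (words - i) * sizeof(unsigned int);       // bytes used
    }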
2000-12-08 Jonathan Larmour <jlarmour@redhat.com>
* cdl/thread.cdl (CYGFUN_KERNEL_ALL_THREADS_STACK_CHECKING):
Requires the threads list rather than being active_if on it, so that
the inference engine can do its thing.
2000-12-07 Jesper Skov <jskov@redhat.com>
* src/debug/dbg-thread-demux.c: Add comment about the use of
DBG_SYSCALL_THREAD_VEC_NUM vs CYGNUM_CALL_IF_DBG_SYSCALL.
2000-12-06 Hugo Tyson <hmt@redhat.com>
* include/thread.inl (attach_stack): Additional assert check for
unsigned wrap of the stack size in subtracting the signature
areas' size. Also round to whole words better.
2000-12-05 Hugo Tyson <hmt@redhat.com>
* cdl/thread.cdl (CYGFUN_KERNEL_THREADS_STACK_CHECKING): New
option, to control new stack check features. Enabled by default,
but only active if CYGPKG_INFRA_DEBUG and CYGDBG_USE_ASSERTS
anyway, plus checking *all* threads is possible, but default off,
iff CYGVAR_KERNEL_THREADS_LIST.
* include/thread.hxx (class Cyg_HardwareThread): Define
check_stack() function.
* include/thread.inl (attach_stack): Add initialization of a
signature in the top and base of the stack, if so configured.
(check_stack): New function to check that signature for
correctness; minor re-ordering to permit more inlining.
* src/sched/sched.cxx (unlock_inner): Check departing and incoming
thread stacks if CYGFUN_KERNEL_THREADS_STACK_CHECKING. Also, if
CYGFUN_KERNEL_ALL_THREADS_STACK_CHECKING, check all registered
thread stacks. This is placed here to get executed every
clocktick and other interrupts that call DSRs, rather than messing
with interrupt_end() or the idle thread.
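The check itself is the familiar guard-word scheme; a minimal sketch (signature value and names invented here, CYG_ASSERT is the eCos assertion macro):

    #include <cyg/infra/cyg_ass.h>                       // CYG_ASSERT()

    static const unsigned int STACK_SIG = 0xA5A5A5A5;    // assumed signature

    // attach_stack() writes STACK_SIG at both ends of the stack area and
    // check_stack() verifies them.  unlock_inner() calls it for the
    // outgoing and incoming threads, so it runs on every DSR-causing
    // interrupt without touching interrupt_end() or the idle thread.
    inline void check_stack(const unsigned int *base_sig,
                            const unsigned int *limit_sig)
    {
        CYG_ASSERT(*base_sig  == STACK_SIG, "stack base signature overwritten");
        CYG_ASSERT(*limit_sig == STACK_SIG, "stack limit signature overwritten");
    }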
2000-12-04 Hugo Tyson <hmt@redhat.com>
* tests/kcache2.c (entry0): Make this more robust against a
complete absence of useful caches. Previous change was not
careful enough.
2000-12-01 Hugo Tyson <hmt@redhat.com>
* cdl/kernel.cdl: Build the kcache tests for SA11x0 family; they
were being omitted by default as part of ARM family. They work on
SA1110, so this should be OK. They're OK on EBSAs too. See
associated fix to cache macros in the SA11x0 and EBSA HALs.
* tests/kcache2.c (entry0): Fix the test; the problem was it
assumed that a write to a previously unseen location would end up
in the cache. It ain't so on StrongARMs. Also make tests safe
wrt interrupts possibly perturbing the cache, add explicit tests
for HAL_DCACHE_INVALIDATE_ALL(), ...DISABLE() and ...SYNC(), and
improve the tests for cache line invalidate and store.
2000-10-30 Jesper Skov <jskov@redhat.com>
* cdl/synch.cdl: Replaced CYGINT_KERNEL_SCHEDULER_CAN_YIELD with
CYGINT_KERNEL_SCHEDULER_UNIQUE_PRIORITIES.
* cdl/scheduler.cdl:
CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL requires
CYGINT_KERNEL_SCHEDULER_UNIQUE_PRIORITIES.
* tests/thread2.cxx: Use new option.
* tests/klock.c: Same.
* src/common/thread.cxx: Same.
* src/common/clock.cxx: Same.
* include/bitmap.hxx: Leave unique priority setting to CDL.
* include/mlqueue.hxx: Same.
* include/sched.hxx: Let CDL do sanity check of config.
2000-10-27 Jesper Skov <jskov@redhat.com>
* cdl/scheduler.cdl: Added CYGINT_KERNEL_SCHEDULER_CAN_YIELD
* tests/klock.c: Avoid use of disabled features. Require scheduler
that can yield.
2000-10-20 Jonathan Larmour <jlarmour@redhat.com>
* tests/bin_sem0.cxx:
* tests/bin_sem1.cxx:
* tests/bin_sem2.cxx:
* tests/clock0.cxx:
* tests/clock1.cxx:
* tests/clockcnv.cxx:
* tests/cnt_sem0.cxx:
* tests/cnt_sem1.cxx:
* tests/except1.cxx:
* tests/flag0.cxx:
* tests/flag1.cxx:
* tests/intr0.cxx:
* tests/kill.cxx:
* tests/mbox1.cxx:
* tests/mqueue1.cxx:
* tests/mutex0.cxx:
* tests/mutex1.cxx:
* tests/mutex2.cxx:
* tests/mutex3.cxx:
* tests/philo.cxx:
* tests/release.cxx:
* tests/sched1.cxx:
* tests/sync2.cxx:
* tests/sync3.cxx:
* tests/testaux.hxx:
* tests/thread0.cxx:
* tests/thread1.cxx:
* tests/thread2.cxx:
* tests/tm_basic.cxx:
Make sure default priority constructors have been invoked.
* include/intr.hxx (class Cyg_Interrupt): Make dsr_count volatile
to prevent a potential race condition with overzealous C
compilers.
2000-10-13 Nick Garnett <nickg@cygnus.co.uk>
* src/sched/sched.cxx (unlock_inner): Added condition to test for
DSRs to only call DSRs when the scheduler lock is making a 0->1
transition. Otherwise there is the danger of calling DSRs when the
scheduler lock is > 1. This violates our original assumptions
about how the scheduler lock worked with respect to DSRs.
* src/intr/intr.cxx (call_pending_DSRs): Added assert to check
that DSRs are only called when the scheduler lock is exactly 1.
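Schematically (not the actual unlock_inner() code; member names approximate):

    // Only drain pending DSRs at the outermost unlock, i.e. while the
    // scheduler lock is still exactly 1.  A nested unlock that leaves the
    // lock above 1 must skip this, otherwise DSRs would run inside a
    // region that believes the scheduler is still locked against them.
    if (Cyg_Scheduler::get_sched_lock() == 1 && Cyg_Interrupt::DSRs_pending())
        Cyg_Interrupt::call_pending_DSRs();     // asserts sched lock == 1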
2000-10-13 Jesper Skov <jskov@redhat.com>
* include/intr.hxx: Fixing syntax mistake; volatile keyword must
appear after the type for it to affect the pointer variable.
* src/intr/intr.cxx: Same. Remove volatile from local block.
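For reference, the distinction the fix is about (generic type name used for illustration):

    struct dsr;                    // stand-in for the DSR record type
    volatile struct dsr *list_a;   // pointee is volatile; the pointer itself
                                   // can still be cached in a register
    struct dsr *volatile list_b;   // the pointer variable itself is volatile,
                                   // re-read on every access, which is what
                                   // an ISR-updated list head needs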
2000-10-05 Jesper Skov <jskov@redhat.co.uk>
* src/intr/intr.cxx: Made dsr_table_tail volatile as well.
* include/intr.hxx: Ditto.
2000-10-05 Nick Garnett <nickg@cygnus.co.uk>
* src/sched/sched.cxx:
* include/sched.hxx: Converted asr_inhibit from a bool to a
counter. This is necessary to permit nesting of ASR inhibiting
functions.
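The conversion is the standard flag-to-nesting-counter change; a sketch with invented helper names:

    void deliver_pending_ASRs();               // hypothetical helper

    static volatile int asr_inhibit = 0;       // was a bool

    void inhibit_ASRs() { asr_inhibit++; }     // nests safely now
    void allow_ASRs()
    {
        // With a plain bool, an inner inhibit/allow pair would re-enable
        // ASRs while an outer caller still expected them to be held off.
        if (--asr_inhibit == 0)
            deliver_pending_ASRs();
    }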
2000-10-04 Jesper Skov <jskov@redhat.co.uk>
* include/intr.hxx: Made dsr_list volatile.
* src/intr/intr.cxx: Same. Also fix compiler warning.
2000-09-25 Nick Garnett <nickg@cygnus.co.uk>
* src/sched/mlqueue.cxx:
Added test for current thread not runnable in
Cyg_Scheduler_Implementation::timeslice(). This is possible if a
prior DSR has caused the current thread to be descheduled. Added
an assert to Cyg_ThreadQueue_Implementation::rotate() for
additional paranoia. (This problem was originally identified and
fixed (differently) by Andrew Lunn <andrew.lunn@ascom.ch>.)
2000-09-13 Jesper Skov <jskov@redhat.com>
* tests/kexcept1.c (cause_exception): Use separate cause_fpe function.
* tests/except1.cxx (cause_exception): Same.
* tests/kexcept1.c (cause_exception): Do not use division at all.
* tests/except1.cxx (cause_exception): Same.
* tests/kexcept1.c (cause_exception): Do not cause div-by-zero.
* tests/except1.cxx (cause_exception): Same.
2000-09-11 Jonathan Larmour <jlarmour@redhat.com>
* cdl/instrument.cdl (CYGVAR_KERNEL_INSTRUMENT_EXTERNAL_BUFFER):
Bring this option back from the dead
2000-09-08 Nick Garnett <nickg@cygnus.co.uk>
* include/sched.hxx:
* include/sched.inl:
Added Cyg_Scheduler::unlock_reschedule() function. This decrements
the scheduler lock by one but also permits the current thread to
be rescheduled if it is ready to sleep, or there is a higher priority
thread ready to run. This is to support use of various
synchronization objects while the scheduler lock is claimed.
* src/sched/sched.cxx (unlock_inner): Modified precondition to
allow for functionality of unlock_reschedule().
* src/sched/mlqueue.cxx:
Now uses Cyg_SchedulerThreadQueue_Implementation for all runqueue
pointers. It was using Cyg_ThreadQueue_Implementation in some
places which meant we were trying to sort the run queues!
Changed yield() so it can be called with the scheduler lock
claimed.
Changed Cyg_Scheduler_Implementation::timeslice() to rotate the
queue itself rather than call yield(). The changes to yield() make
it unsafe to call here any more.
* include/mlqueue.hxx (class Cyg_SchedulerThreadQueue_Implementation):
Made enqueue() member public.
* include/mboxt2.inl:
* src/common/thread.cxx:
* src/sync/mutex.cxx:
* src/sync/cnt_sem2.cxx:
Removed assertions for zero scheduler lock and replaced some
invocations of Cyg_Scheduler::unlock() with
Cyg_Scheduler::unlock_reschedule() or Cyg_Scheduler::reschedule()
where appropriate.
* tests/klock.c:
* cdl/kernel.cdl:
Added klock.c to test functionality while scheduler lock is claimed.
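A sketch of the intended usage pattern inside a synchronization object (simplified; wakeup and error paths are omitted and the wait-path details are approximate, not the actual mutex/semaphore code):

    #include <cyg/kernel/thread.hxx>
    #include <cyg/kernel/sched.hxx>

    void wait_on(Cyg_ThreadQueue &queue, bool available)
    {
        Cyg_Scheduler::lock();                // may nest on the caller's lock
        if (!available) {
            Cyg_Thread *self = Cyg_Thread::self();
            self->sleep();                    // mark not runnable
            queue.enqueue(self);
            // Drop one lock level, but allow the now-sleeping thread to be
            // switched out even though the caller still holds the lock.
            Cyg_Scheduler::unlock_reschedule();
        } else {
            Cyg_Scheduler::unlock();          // nothing to wait for
        }
    }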
2000-08-17 Hugo Tyson <hmt@cygnus.co.uk>
* src/sched/sched.cxx (unlock_inner): Move an assert to some place
where it's true *all* the time! There was a narrow margin where a
DSR could confuse unlock_inner() by reanimating the current thread
before it had a chance to sleep - hence the call appears to be
pointless. Putting the assert before the DSR calls makes sense.
* include/mboxt.inl:
* src/sync/bin_sem.cxx:
* src/sync/cnt_sem.cxx:
* src/sync/flag.cxx:
* src/sync/mutex.cxx:
All of these now use Cyg_Scheduler::reschedule() rather than an
unlock/lock pair to yield in their functions which can sleep.
They therefore can be called safely and atomically with the
scheduler already locked. This is a Good Thing[tm]. Since the
network stack synch primitives now use this feature, the asserts
that the scheduler was not locked must disappear: this change.
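The pattern change, schematically:

    // Before: yielding by releasing and re-taking the lock.  A caller that
    // entered with the scheduler already locked would see it drop to zero
    // here, losing the atomicity it was relying on.
    Cyg_Scheduler::unlock();
    Cyg_Scheduler::lock();

    // After: reschedule() lets the sleeping thread be switched out while
    // the caller's lock level is preserved across the switch.
    Cyg_Scheduler::reschedule();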
2000-08-07 Jonathan Larmour <jlarmour@redhat.co.uk>
* include/mutex.hxx (class Cyg_Mutex): Add comment explaining
presence of locked.
2000-08-04 Jonathan Larmour <jlarmour@redhat.co.uk>
* tests/stress_threads.c (STACK_SIZE_HANDLER): Increase stack sizes
otherwise it crashes!