gc.h
/* Limit the heap size to n bytes.  Useful when you're debugging,       */
/* especially on systems that don't handle running out of memory well.  */
/* n == 0 ==> unbounded.  This is the default.                          */
GC_API void GC_set_max_heap_size GC_PROTO((GC_word n));

/* Inform the collector that a certain section of statically allocated  */
/* memory contains no pointers to garbage collected memory.  Thus it    */
/* need not be scanned.  This is sometimes important if the application */
/* maps large read/write files into the address space, which could be   */
/* mistaken for dynamic library data segments on some systems.          */
GC_API void GC_exclude_static_roots GC_PROTO((GC_PTR start, GC_PTR finish));

/* Clear the set of root segments.  Wizards only. */
GC_API void GC_clear_roots GC_PROTO((void));

/* Add a root segment.  Wizards only. */
GC_API void GC_add_roots GC_PROTO((char * low_address,
                                   char * high_address_plus_1));

/* Remove a root segment.  Wizards only. */
GC_API void GC_remove_roots GC_PROTO((char * low_address,
                                      char * high_address_plus_1));

/* Add a displacement to the set of those considered valid by the       */
/* collector.  GC_register_displacement(n) means that if p was returned */
/* by GC_malloc, then (char *)p + n will be considered to be a valid    */
/* pointer to p.  N must be small and less than the size of p.          */
/* (All pointers to the interior of objects from the stack are          */
/* considered valid in any case.  This applies to heap objects and      */
/* static data.)                                                        */
/* Preferably, this should be called before any other GC procedures.    */
/* Calling it later adds to the probability of excess memory            */
/* retention.                                                           */
/* This is a no-op if the collector has recognition of                  */
/* arbitrary interior pointers enabled, which is now the default.       */
GC_API void GC_register_displacement GC_PROTO((GC_word n));

/* The following version should be used if any debugging allocation is  */
/* being done.                                                          */
GC_API void GC_debug_register_displacement GC_PROTO((GC_word n));

/* Explicitly trigger a full, world-stopped collection. */
GC_API void GC_gcollect GC_PROTO((void));

/* Trigger a full world-stopped collection.  Abort the collection if    */
/* and when stop_func returns a nonzero value.  Stop_func will be       */
/* called frequently, and should be reasonably fast.  This works even   */
/* if virtual dirty bits, and hence incremental collection, are not     */
/* available for this architecture.  Collections can be aborted faster  */
/* than normal pause times for incremental collection.  However,        */
/* aborted collections do no useful work; the next collection needs     */
/* to start from the beginning.                                         */
/* Return 0 if the collection was aborted, 1 if it succeeded.           */
typedef int (* GC_stop_func) GC_PROTO((void));
GC_API int GC_try_to_collect GC_PROTO((GC_stop_func stop_func));

/* Return the number of bytes in the heap.  Excludes collector private  */
/* data structures.  Includes empty blocks and fragmentation loss.      */
/* Includes some pages that were allocated but never written.           */
GC_API size_t GC_get_heap_size GC_PROTO((void));

/* Return a lower bound on the number of free bytes in the heap. */
GC_API size_t GC_get_free_bytes GC_PROTO((void));

/* Return the number of bytes allocated since the last collection. */
GC_API size_t GC_get_bytes_since_gc GC_PROTO((void));

/* Return the total number of bytes allocated in this process. */
/* Never decreases, except due to wrapping.                    */
GC_API size_t GC_get_total_bytes GC_PROTO((void));

/* Disable garbage collection.  Even GC_gcollect calls will be */
/* ineffective.                                                */
GC_API void GC_disable GC_PROTO((void));
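/* Example (illustrative sketch, not part of the original header):      */
/* bounding a collection pause with GC_try_to_collect and inspecting    */
/* the heap counters afterwards.  The 50 ms budget and the use of       */
/* clock() are arbitrary choices for this sketch.                       */
#if 0
#include <stdio.h>
#include <time.h>
#include "gc.h"

static clock_t deadline;

/* Returns nonzero once the deadline passes, asking the collector */
/* to abort the current collection.                               */
static int past_deadline(void)
{
    return clock() > deadline;
}

int main(void)
{
    int i;

    GC_INIT();
    for (i = 0; i < 1000; ++i) (void)GC_malloc(4096);
    deadline = clock() + CLOCKS_PER_SEC / 20;   /* roughly 50 ms */
    if (!GC_try_to_collect(past_deadline)) {
        /* Aborted collections do no useful work; fall back to a */
        /* full world-stopped collection.                        */
        GC_gcollect();
    }
    printf("heap size: %lu, free: %lu\n",
           (unsigned long)GC_get_heap_size(),
           (unsigned long)GC_get_free_bytes());
    return 0;
}
#endif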
/* Reenable garbage collection.  GC_disable() and GC_enable() calls */
/* nest.  Garbage collection is enabled if the number of calls to   */
/* both functions is equal.                                         */
GC_API void GC_enable GC_PROTO((void));

/* Enable incremental/generational collection.  */
/* Not advisable unless dirty bits are          */
/* available or most heap objects are           */
/* pointer-free (atomic) or immutable.          */
/* Don't use in leak finding mode.              */
/* Ignored if GC_dont_gc is true.               */
/* Only the generational piece of this is       */
/* functional if GC_parallel is TRUE            */
/* or if GC_time_limit is GC_TIME_UNLIMITED.    */
/* Causes GC_local_gcj_malloc() to revert to    */
/* locked allocation.  Must be called           */
/* before any GC_local_gcj_malloc() calls.      */
GC_API void GC_enable_incremental GC_PROTO((void));

/* Does incremental mode write-protect pages?  Returns zero or */
/* more of the following, or'ed together:                      */
#define GC_PROTECTS_POINTER_HEAP  1 /* May protect non-atomic objs. */
#define GC_PROTECTS_PTRFREE_HEAP  2
#define GC_PROTECTS_STATIC_DATA   4 /* Currently never.             */
#define GC_PROTECTS_STACK         8 /* Probably impractical.        */
#define GC_PROTECTS_NONE 0
GC_API int GC_incremental_protection_needs GC_PROTO((void));

/* Perform some garbage collection work, if appropriate.      */
/* Return 0 if there is no more work to be done.              */
/* Typically performs an amount of work corresponding roughly */
/* to marking from one page.  May do more work if further     */
/* progress requires it, e.g. if incremental collection is    */
/* disabled.  It is reasonable to call this in a wait loop    */
/* until it returns 0.                                        */
GC_API int GC_collect_a_little GC_PROTO((void));

/* Allocate an object of size lb bytes.  The client guarantees that    */
/* as long as the object is live, it will be referenced by a pointer   */
/* that points to somewhere within the first 256 bytes of the object.  */
/* (This should normally be declared volatile to prevent the compiler  */
/* from invalidating this assertion.)  This routine is only useful     */
/* if a large array is being allocated.  It reduces the chance of      */
/* accidentally retaining such an array as a result of scanning an     */
/* integer that happens to be an address inside the array.  (Actually, */
/* it reduces the chance of the allocator not finding space for such   */
/* an array, since it will try hard to avoid introducing such a false  */
/* reference.)  On a SunOS 4.X or MS Windows system this is            */
/* recommended for arrays likely to be larger than 100K or so.  For    */
/* other systems, or if the collector is not configured to recognize   */
/* all interior pointers, the threshold is normally much higher.       */
GC_API GC_PTR GC_malloc_ignore_off_page GC_PROTO((size_t lb));
GC_API GC_PTR GC_malloc_atomic_ignore_off_page GC_PROTO((size_t lb));

#if defined(__sgi) && !defined(__GNUC__) && _COMPILER_VERSION >= 720
#   define GC_ADD_CALLER
#   define GC_RETURN_ADDR (GC_word)__return_address
#endif

#ifdef __linux__
# include <features.h>
# if (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 1 || __GLIBC__ > 2) \
     && !defined(__ia64__)
#   ifndef GC_HAVE_BUILTIN_BACKTRACE
#     define GC_HAVE_BUILTIN_BACKTRACE
#   endif
# endif
# if defined(__i386__) || defined(__x86_64__)
#   define GC_CAN_SAVE_CALL_STACKS
# endif
#endif

#if defined(GC_HAVE_BUILTIN_BACKTRACE) && !defined(GC_CAN_SAVE_CALL_STACKS)
# define GC_CAN_SAVE_CALL_STACKS
#endif

#if defined(__sparc__)
# define GC_CAN_SAVE_CALL_STACKS
#endif
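/* Example (illustrative sketch, not part of the original header):     */
/* allocating a large array with GC_malloc_ignore_off_page while       */
/* keeping a volatile pointer to its base, and draining incremental    */
/* work with GC_collect_a_little.  The 1 MB size is arbitrary.         */
#if 0
#include "gc.h"

int main(void)
{
    /* volatile keeps the compiler from optimizing away the base  */
    /* pointer; the client must keep a pointer into the first 256 */
    /* bytes of the object for as long as it is live.             */
    volatile char *big;

    GC_INIT();
    GC_enable_incremental();     /* opt in to incremental/generational mode */
    big = (volatile char *)GC_malloc_ignore_off_page(1 << 20);
    big[0] = 'x';                /* use the object via its base pointer */

    /* Drain pending incremental work; 0 means nothing left to do. */
    while (GC_collect_a_little()) {
        /* e.g. interleave application idle-time work here */
    }
    return 0;
}
#endif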
/* If we're on a platform on which we can't save call stacks, but      */
/* gcc is normally used, we go ahead and define GC_ADD_CALLER.         */
/* We make this decision independent of whether gcc is actually being  */
/* used, in order to keep the interface consistent, and allow mixing   */
/* of compilers.                                                       */
/* This may also be desirable if it is possible but expensive to       */
/* retrieve the call chain.                                            */
#if (defined(__linux__) || defined(__NetBSD__) || defined(__OpenBSD__) \
     || defined(__FreeBSD__)) && !defined(GC_CAN_SAVE_CALL_STACKS)
# define GC_ADD_CALLER
# if __GNUC__ >= 3 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95)
    /* gcc knows how to retrieve return address, but we don't know */
    /* how to generate call stacks.                                */
#   define GC_RETURN_ADDR (GC_word)__builtin_return_address(0)
# else
    /* Just pass 0 for gcc compatibility. */
#   define GC_RETURN_ADDR 0
# endif
#endif

#ifdef GC_ADD_CALLER
# define GC_EXTRAS GC_RETURN_ADDR, __FILE__, __LINE__
# define GC_EXTRA_PARAMS GC_word ra, GC_CONST char * s, int i
#else
# define GC_EXTRAS __FILE__, __LINE__
# define GC_EXTRA_PARAMS GC_CONST char * s, int i
#endif

/* Debugging (annotated) allocation.  GC_gcollect will check */
/* objects allocated in this way for overwrites, etc.        */
GC_API GC_PTR GC_debug_malloc
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API GC_PTR GC_debug_malloc_atomic
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API GC_PTR GC_debug_malloc_uncollectable
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API GC_PTR GC_debug_malloc_stubborn
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API GC_PTR GC_debug_malloc_ignore_off_page
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API GC_PTR GC_debug_malloc_atomic_ignore_off_page
        GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
GC_API void GC_debug_free GC_PROTO((GC_PTR object_addr));
GC_API GC_PTR GC_debug_realloc
        GC_PROTO((GC_PTR old_object, size_t new_size_in_bytes,
                  GC_EXTRA_PARAMS));
GC_API void GC_debug_change_stubborn GC_PROTO((GC_PTR));
GC_API void GC_debug_end_stubborn_change GC_PROTO((GC_PTR));
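/* Example (illustrative sketch, not part of the original header):    */
/* calling an annotated allocator directly.  GC_EXTRAS supplies the   */
/* caller's file and line (plus the return address where             */
/* GC_ADD_CALLER is defined), so the same call compiles under either  */
/* configuration.                                                     */
#if 0
#include "gc.h"

int *make_counter(void)
{
    /* Records __FILE__/__LINE__ of this call site in the debug header. */
    int *p = (int *)GC_debug_malloc(sizeof(int), GC_EXTRAS);
    *p = 0;
    return p;
}
#endif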
/* Routines that allocate objects with debug information (like the    */
/* above), but just fill in dummy file and line number information.   */
/* Thus they can serve as drop-in malloc/realloc replacements.  This  */
/* can be useful for two reasons:                                     */
/* 1) It allows the collector to be built with DBG_HDRS_ALL defined   */
/*    even if some allocation calls come from 3rd party libraries     */
/*    that can't be recompiled.                                       */
/* 2) On some platforms, the file and line information is redundant,  */
/*    since it can be reconstructed from a stack trace.  On such      */
/*    platforms it may be more convenient not to recompile, e.g. for  */
/*    leak detection.  This can be accomplished by instructing the    */
/*    linker to replace malloc/realloc with these.                    */
GC_API GC_PTR GC_debug_malloc_replacement GC_PROTO((size_t size_in_bytes));
GC_API GC_PTR GC_debug_realloc_replacement
        GC_PROTO((GC_PTR object_addr, size_t size_in_bytes));

# ifdef GC_DEBUG
#   define GC_MALLOC(sz) GC_debug_malloc(sz, GC_EXTRAS)
#   define GC_MALLOC_ATOMIC(sz) GC_debug_malloc_atomic(sz, GC_EXTRAS)
#   define GC_MALLOC_UNCOLLECTABLE(sz) \
        GC_debug_malloc_uncollectable(sz, GC_EXTRAS)
#   define GC_MALLOC_IGNORE_OFF_PAGE(sz) \
        GC_debug_malloc_ignore_off_page(sz, GC_EXTRAS)
#   define GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(sz) \
        GC_debug_malloc_atomic_ignore_off_page(sz, GC_EXTRAS)
#   define GC_REALLOC(old, sz) GC_debug_realloc(old, sz, GC_EXTRAS)
#   define GC_FREE(p) GC_debug_free(p)
#   define GC_REGISTER_FINALIZER(p, f, d, of, od) \
        GC_debug_register_finalizer(p, f, d, of, od)
#   define GC_REGISTER_FINALIZER_IGNORE_SELF(p, f, d, of, od) \
        GC_debug_register_finalizer_ignore_self(p, f, d, of, od)
#   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
        GC_debug_register_finalizer_no_order(p, f, d, of, od)
#   define GC_MALLOC_STUBBORN(sz) GC_debug_malloc_stubborn(sz, GC_EXTRAS)
#   define GC_CHANGE_STUBBORN(p) GC_debug_change_stubborn(p)
#   define GC_END_STUBBORN_CHANGE(p) GC_debug_end_stubborn_change(p)
#   define GC_GENERAL_REGISTER_DISAPPEARING_LINK(link, obj) \
        GC_general_register_disappearing_link(link, GC_base(obj))
#   define GC_REGISTER_DISPLACEMENT(n) GC_debug_register_displacement(n)
# else
#   define GC_MALLOC(sz) GC_malloc(sz)
#   define GC_MALLOC_ATOMIC(sz) GC_malloc_atomic(sz)
#   define GC_MALLOC_UNCOLLECTABLE(sz) GC_malloc_uncollectable(sz)
#   define GC_MALLOC_IGNORE_OFF_PAGE(sz) \
        GC_malloc_ignore_off_page(sz)
#   define GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(sz) \
        GC_malloc_atomic_ignore_off_page(sz)
#   define GC_REALLOC(old, sz) GC_realloc(old, sz)
#   define GC_FREE(p) GC_free(p)
#   define GC_REGISTER_FINALIZER(p, f, d, of, od) \
        GC_register_finalizer(p, f, d, of, od)
#   define GC_REGISTER_FINALIZER_IGNORE_SELF(p, f, d, of, od) \
        GC_register_finalizer_ignore_self(p, f, d, of, od)
#   define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
        GC_register_finalizer_no_order(p, f, d, of, od)
#   define GC_MALLOC_STUBBORN(sz) GC_malloc_stubborn(sz)
#   define GC_CHANGE_STUBBORN(p) GC_change_stubborn(p)
#   define GC_END_STUBBORN_CHANGE(p) GC_end_stubborn_change(p)
#   define GC_GENERAL_REGISTER_DISAPPEARING_LINK(link, obj) \
        GC_general_register_disappearing_link(link, obj)
#   define GC_REGISTER_DISPLACEMENT(n) GC_register_displacement(n)
# endif

/* The following are included because they are often convenient, and  */
/* reduce the chance for a misspecified size argument.  But calls may */
/* expand to something syntactically incorrect if t is a complicated  */
/* type expression.                                                   */
# define GC_NEW(t) (t *)GC_MALLOC(sizeof (t))
# define GC_NEW_ATOMIC(t) (t *)GC_MALLOC_ATOMIC(sizeof (t))
# define GC_NEW_STUBBORN(t) (t *)GC_MALLOC_STUBBORN(sizeof (t))
# define GC_NEW_UNCOLLECTABLE(t) (t *)GC_MALLOC_UNCOLLECTABLE(sizeof (t))
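/* Example (illustrative sketch, not part of the original header):   */
/* using the GC_MALLOC/GC_NEW macros so that compiling with          */
/* -DGC_DEBUG transparently switches every allocation to its checked */
/* counterpart.  The node type is hypothetical.                      */
#if 0
#define GC_DEBUG          /* or pass -DGC_DEBUG; must precede gc.h */
#include "gc.h"

struct node { struct node *next; int value; };

struct node *cons(int value, struct node *next)
{
    struct node *n = GC_NEW(struct node);  /* size inferred from the type */
    n->value = value;
    n->next = next;
    return n;
}
#endif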
/* Finalization.  Some of these primitives are grossly unsafe.      */
/* The idea is to make them both cheap, and sufficient to build     */
/* a safer layer, closer to Modula-3, Java, or PCedar finalization. */
/* The interface represents my conclusions from a long discussion   */
/* with Alan Demers, Dan Greene, Carl Hauser, Barry Hayes,          */
/* Christian Jacobi, and Russ Atkinson.  It's not perfect, and      */
/* probably nobody else agrees with it.    Hans-J. Boehm  3/13/92   */
typedef void (*GC_finalization_proc)
        GC_PROTO((GC_PTR obj, GC_PTR client_data));

GC_API void GC_register_finalizer
        GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
                  GC_finalization_proc *ofn, GC_PTR *ocd));
GC_API void GC_debug_register_finalizer
        GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
                  GC_finalization_proc *ofn, GC_PTR *ocd));
        /* When obj is no longer accessible, invoke             */
        /* (*fn)(obj, cd).  If a and b are inaccessible, and    */
        /* a points to b (after disappearing links have been    */
        /* made to disappear), then only a will be              */
        /* finalized.  (If this does not create any new         */
        /* pointers to b, then b will be finalized after the    */
        /* next collection.)  Any finalizable object that       */
        /* is reachable from itself by following one or more    */
        /* pointers will not be finalized (or collected).       */
        /* Thus cycles involving finalizable objects should     */
        /* be avoided, or broken by disappearing links.         */
        /* All but the last finalizer registered for an object  */
        /* are ignored.                                         */
        /* Finalization may be removed by passing 0 as fn.      */
        /* Finalizers are implicitly unregistered just before   */
        /* they are invoked.                                    */
        /* The old finalizer and client data are stored in      */
        /* *ofn and *ocd.                                       */
        /* Fn is never invoked on an accessible object,         */
        /* provided hidden pointers are converted to real       */
        /* pointers only if the allocation lock is held, and    */
        /* such conversions are not performed by finalization   */
        /* routines.                                            */
        /* If GC_register_finalizer is aborted as a result of   */
        /* a signal, the object may be left with no             */
        /* finalization, even if neither the old nor new        */
        /* finalizer were NULL.                                 */
        /* Obj should be the non-NULL starting address of an    */
        /* object allocated by GC_malloc or friends.            */
        /* Note that any garbage collectable object referenced  */
        /* by cd will be considered accessible until the        */
        /* finalizer is invoked.                                */

/* Another version of the above follows.  It ignores            */
/* self-cycles, i.e. pointers from a finalizable object to      */
/* itself.  There is a stylistic argument that this is wrong,   */
/* but it's unavoidable for C++, since the compiler may         */
/* silently introduce these.  It's also benign in that specific */
/* case.  And it helps if finalizable objects are split to      */
/* avoid cycles.                                                */
/* Note that cd will still be viewed as accessible, even if it  */
/* refers to the object itself.                                 */
GC_API void GC_register_finalizer_ignore_self
        GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
                  GC_finalization_proc *ofn, GC_PTR *ocd));
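/* Example (illustrative sketch, not part of the original header):  */
/* registering a finalizer that releases a non-GC resource when the */
/* wrapper object becomes unreachable.  The buffer type and         */
/* make_buffer helper are hypothetical.                             */
#if 0
#include <stdlib.h>
#include "gc.h"

struct buffer { void *raw; };   /* hypothetical wrapper around malloc'ed memory */

static void buffer_finalizer(GC_PTR obj, GC_PTR cd)
{
    (void)cd;                          /* no client data used here */
    free(((struct buffer *)obj)->raw); /* obj itself is reclaimed by the GC */
}

struct buffer *make_buffer(size_t n)
{
    GC_finalization_proc ofn;          /* receives the old finalizer (0 if none) */
    GC_PTR ocd;                        /* receives the old client data */
    struct buffer *b = (struct buffer *)GC_malloc(sizeof(struct buffer));

    b->raw = malloc(n);
    GC_register_finalizer(b, buffer_finalizer, 0, &ofn, &ocd);
    return b;
}
#endif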