
spaces.h.svn-base

Google Chrome V8 engine source code
SVN-BASE
Page 1 of 4
// Copyright 2006-2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
//       with the distribution.
//     * Neither the name of Google Inc. nor the names of its
//       contributors may be used to endorse or promote products derived
//       from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#ifndef V8_SPACES_H_
#define V8_SPACES_H_

#include "list-inl.h"
#include "log.h"

namespace v8 { namespace internal {

// -----------------------------------------------------------------------------
// Heap structures:
//
// A JS heap consists of a young generation, an old generation, and a large
// object space. The young generation is divided into two semispaces. A
// scavenger implements Cheney's copying algorithm. The old generation is
// separated into a map space and an old object space. The map space contains
// all (and only) map objects, the rest of old objects go into the old space.
// The old generation is collected by a mark-sweep-compact collector.
//
// The semispaces of the young generation are contiguous.  The old and map
// spaces consist of a list of pages. A page has a page header, a remembered
// set area, and an object area. A page size is deliberately chosen as 8K
// bytes. The first word of a page is an opaque page header that has the
// address of the next page and its ownership information. The second word may
// have the allocation top address of this page. The next 248 bytes are
// remembered sets. Heap objects are aligned to the pointer size (4 bytes). A
// remembered set bit corresponds to a pointer in the object area.
//
// There is a separate large object space for objects larger than
// Page::kMaxHeapObjectSize, so that they do not have to move during
// collection.  The large object space is paged and uses the same remembered
// set implementation.  Pages in large object space may be larger than 8K.
//
// NOTE: The mark-compact collector rebuilds the remembered set after a
// collection.  It reuses the first few words of the remembered set for
// bookkeeping relocation information.
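//
// A worked example of the layout just described, assuming the 4-byte pointer
// size stated above: covering a whole 8K (8192-byte) page with one remembered
// set bit per word takes 8192 / 4 / 8 = 256 bytes.  The bits for the first
// 256 bytes of the page (which hold no objects) occupy only the first
// 256 / 32 = 8 bytes, so those two words are free for the page header; the
// remaining 248 bytes are the remembered set proper, and the object area runs
// from offset 256 to 8192 (7936 bytes).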

// Some assertion macros used in the debugging mode.

#define ASSERT_PAGE_ALIGNED(address)                  \
  ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0)

#define ASSERT_OBJECT_ALIGNED(address)                \
  ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0)

#define ASSERT_OBJECT_SIZE(size)                      \
  ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize))

#define ASSERT_PAGE_OFFSET(offset)                    \
  ASSERT((Page::kObjectStartOffset <= offset)         \
      && (offset <= Page::kPageSize))

#define ASSERT_MAP_PAGE_INDEX(index)                            \
  ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex))


class PagedSpace;
class MemoryAllocator;
class AllocationInfo;


// -----------------------------------------------------------------------------
// A page normally has 8K bytes. Large object pages may be larger.  A page
// address is always aligned to the 8K page size.  A page is divided into
// three areas: the first two words are used for bookkeeping, the next 248
// bytes are used as remembered set, and the rest of the page is the object
// area.
//
// Pointers are aligned to the pointer size (4 bytes), so only 1 bit is needed
// for a pointer in the remembered set. Given an address, its remembered set
// bit position (offset from the start of the page) is calculated by dividing
// its page offset by 32. Therefore, the object area in a page starts at the
// 256th byte (8K/32). Bytes 0 to 255 do not need the remembered set, so that
// the first two words (64 bits) in a page can be used for other purposes.
//
// The mark-compact collector transforms a map pointer into a page index and a
// page offset. The map space can have up to 1024 pages, and 8M bytes (1024 *
// 8K) in total.  Because a map pointer is aligned to the pointer size (4
// bytes), 11 bits are enough to encode the page offset. 21 bits (10 for the
// page index + 11 for the offset in the page) are required to encode a map
// pointer.
//
// The only way to get a page pointer is by calling factory methods:
//   Page* p = Page::FromAddress(addr); or
//   Page* p = Page::FromAllocationTop(top);
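//
// An illustrative sketch of the map pointer encoding described above (the
// expressions are examples only, not declarations from this file): a 10-bit
// page index and a 4-byte aligned page offset pack into 21 bits as
//
//   encoded     = (page_index << 11) | (page_offset >> 2);  // 10 + 11 bits
//   page_index  = encoded >> 11;                            // 0 .. 1023
//   page_offset = (encoded & 0x7FF) << 2;                   // 0 .. 8188
//
// Likewise, the remembered set bit for an object at page offset 'off' lives
// in the byte at page offset off / 32, at bit (off / 4) % 8 within that byte.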

class Page {
 public:
  // Returns the page containing a given address. The address ranges
  // from [page_addr .. page_addr + kPageSize[
  //
  // Note that this function only works for addresses in normal paged
  // spaces and addresses in the first 8K of large object pages (ie,
  // the start of large objects but not necessarily derived pointers
  // within them).
  INLINE(static Page* FromAddress(Address a)) {
    return reinterpret_cast<Page*>(OffsetFrom(a) & ~kPageAlignmentMask);
  }

  // Returns the page containing an allocation top. Because an allocation
  // top address can be the upper bound of the page, we need to subtract
  // kPointerSize from it first. The address ranges from
  // [page_addr + kObjectStartOffset .. page_addr + kPageSize].
  INLINE(static Page* FromAllocationTop(Address top)) {
    Page* p = FromAddress(top - kPointerSize);
    ASSERT_PAGE_OFFSET(p->Offset(top));
    return p;
  }

  // Returns the start address of this page.
  Address address() { return reinterpret_cast<Address>(this); }

  // Checks whether this is a valid page address.
  bool is_valid() { return address() != NULL; }

  // Returns the next page of this page.
  inline Page* next_page();

  // Returns the end of allocation in this page. Undefined for unused pages.
  inline Address AllocationTop();

  // Returns the start address of the object area in this page.
  Address ObjectAreaStart() { return address() + kObjectStartOffset; }

  // Returns the end address (exclusive) of the object area in this page.
  Address ObjectAreaEnd() { return address() + Page::kPageSize; }

  // Returns the start address of the remembered set area.
  Address RSetStart() { return address() + kRSetStartOffset; }

  // Returns the end address of the remembered set area (exclusive).
  Address RSetEnd() { return address() + kRSetEndOffset; }

  // Checks whether an address is page aligned.
  static bool IsAlignedToPageSize(Address a) {
    return 0 == (OffsetFrom(a) & kPageAlignmentMask);
  }

  // True if this page is a large object page.
  bool IsLargeObjectPage() { return (is_normal_page & 0x1) == 0; }

  // Returns the offset of a given address to this page.
  INLINE(int Offset(Address a)) {
    int offset = a - address();
    ASSERT_PAGE_OFFSET(offset);
    return offset;
  }

  // Returns the address for a given offset in this page.
  Address OffsetToAddress(int offset) {
    ASSERT_PAGE_OFFSET(offset);
    return address() + offset;
  }

  // ---------------------------------------------------------------------
  // Remembered set support

  // Clears remembered set in this page.
  inline void ClearRSet();

  // Returns the address of the remembered set word corresponding to an
  // object address/offset pair, and the bit encoded as a single-bit
  // mask in the output parameter 'bitmask'.
  INLINE(static Address ComputeRSetBitPosition(Address address, int offset,
                                               uint32_t* bitmask));

  // Sets the corresponding remembered set bit for a given address.
  INLINE(static void SetRSet(Address address, int offset));

  // Clears the corresponding remembered set bit for a given address.
  static inline void UnsetRSet(Address address, int offset);

  // Checks whether the remembered set bit for a given address is set.
  static inline bool IsRSetSet(Address address, int offset);

#ifdef DEBUG
  // Use a state to mark whether remembered set space can be used for other
  // purposes.
  enum RSetState { IN_USE, NOT_IN_USE };
  static bool is_rset_in_use() { return rset_state_ == IN_USE; }
  static void set_rset_state(RSetState state) { rset_state_ = state; }
#endif

  // 8K bytes per page.
  static const int kPageSizeBits = 13;

  // Page size in bytes.  This must be a multiple of the OS page size.
  static const int kPageSize = 1 << kPageSizeBits;

  // Page size mask.
  static const int kPageAlignmentMask = (1 << kPageSizeBits) - 1;

  // The end offset of the remembered set in a page
  // (heaps are aligned to pointer size).
  static const int kRSetEndOffset = kPageSize / kBitsPerPointer;

  // The start offset of the remembered set in a page.
  static const int kRSetStartOffset = kRSetEndOffset / kBitsPerPointer;

  // The start offset of the object area in a page.
  static const int kObjectStartOffset = kRSetEndOffset;

  // Object area size in bytes.
  static const int kObjectAreaSize = kPageSize - kObjectStartOffset;

  // Maximum object size that fits in a page.
  static const int kMaxHeapObjectSize = kObjectAreaSize;

  //---------------------------------------------------------------------------
  // Page header description.
  //
  // If a page is not in the large object space, the first word,
  // opaque_header, encodes the next page address (aligned to kPageSize 8K)
  // and the chunk number (0 ~ 8K-1).  Only MemoryAllocator should use
  // opaque_header.  The value range of the opaque_header is [0..kPageSize[,
  // or [next_page_start, next_page_end[. It cannot point to a valid address
  // in the current page.  If a page is in the large object space, the first
  // word *may* (if the page start and large object chunk start are the
  // same) contain the address of the next large object chunk.
  int opaque_header;

  // If the page is not in the large object space, the low-order bit of the
  // second word is set. If the page is in the large object space, the
  // second word *may* (if the page start and large object chunk start are
  // the same) contain the large object chunk size.  In either case, the
  // low-order bit for large object pages will be cleared.
  int is_normal_page;

  // The following fields overlap with the remembered set; they can only
  // be used in the mark-compact collector when the remembered set is not
  // in use.

  // The allocation pointer after relocating objects to this page.
  Address mc_relocation_top;

  // The index of the page in its owner space.
  int mc_page_index;

  // The forwarding address of the first live object in this page.
  Address mc_first_forwarded;

#ifdef DEBUG
 private:
  static RSetState rset_state_;  // state of the remembered set
#endif
};
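
// An illustrative use of the Page interface above ('obj_addr' and
// 'field_offset' are hypothetical values, not names from this file):
//
//   Page* p = Page::FromAddress(obj_addr);      // page that holds the object
//   int offset = p->Offset(obj_addr);           // offset within that page
//   Page::SetRSet(obj_addr, field_offset);      // record a pointer field
//   bool recorded = Page::IsRSetSet(obj_addr, field_offset);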

// ----------------------------------------------------------------------------
// Space is the abstract superclass for all allocation spaces.
class Space : public Malloced {
 public:
  Space(AllocationSpace id, Executability executable)
      : id_(id), executable_(executable) {}

  virtual ~Space() {}

  // Does the space need executable memory?
  Executability executable() { return executable_; }

  // Identity used in error reporting.
  AllocationSpace identity() { return id_; }

  virtual int Size() = 0;

#ifdef DEBUG
  virtual void Verify() = 0;
  virtual void Print() = 0;
#endif

 private:
  AllocationSpace id_;
  Executability executable_;
};


// ----------------------------------------------------------------------------
// A space acquires chunks of memory from the operating system. The memory
// allocator manages chunks for the paged heap spaces (old space and map
// space).  A paged chunk consists of pages. Pages in a chunk have contiguous
// addresses and are linked as a list.
//
// The allocator keeps an initial chunk which is used for the new space.  The
// leftover regions of the initial chunk are used for the initial chunks of
// old space and map space if they are big enough to hold at least one page.
// The allocator assumes that there is one old space and one map space; each
// expands the space by allocating kPagesPerChunk pages except for the last
// expansion (before running out of space).  The first chunk may contain fewer
// than kPagesPerChunk pages as well.
//
// The memory allocator also allocates chunks for the large object space, but
// they are managed by the space itself.  The new space does not expand.
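//
// A rough life-cycle sketch of that model (the argument names are
// placeholders; the calls are the ones declared below):
//
//   MemoryAllocator::Setup(max_capacity);
//   void* base = MemoryAllocator::ReserveInitialChunk(requested_size);
//   int num_pages;
//   Page* first = MemoryAllocator::CommitPages(start, size, old_space,
//                                              &num_pages);
//   ...
//   MemoryAllocator::TearDown();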

class MemoryAllocator : public AllStatic {
 public:
  // Initializes its internal bookkeeping structures.
  // Max capacity of the total space.
  static bool Setup(int max_capacity);

  // Deletes valid chunks.
  static void TearDown();

  // Reserves an initial address range of virtual memory to be split between
  // the two new space semispaces, the old space, and the map space.  The
  // memory is not yet committed or assigned to spaces and split into pages.
  // The initial chunk is unmapped when the memory allocator is torn down.
  // This function should only be called when there is not already a reserved
  // initial chunk (initial_chunk_ should be NULL).  It returns the start
  // address of the initial chunk if successful, with the side effect of
  // setting the initial chunk, or else NULL if unsuccessful and leaves the
  // initial chunk NULL.
  static void* ReserveInitialChunk(const size_t requested);

  // Commits pages from an as-yet-unmanaged block of virtual memory into a
  // paged space.  The block should be part of the initial chunk reserved via
  // a call to ReserveInitialChunk.  The number of pages is always returned in
  // the output parameter num_pages.  This function assumes that the start
  // address is non-null and that it is big enough to hold at least one
  // page-aligned page.  The call always succeeds, and num_pages is always
  // greater than zero.
  static Page* CommitPages(Address start, size_t size, PagedSpace* owner,
                           int* num_pages);

  // Commits a contiguous block of memory from the initial chunk.  Assumes
  // that the address is not NULL, the size is greater than zero, and that
  // the block is contained in the initial chunk.  Returns true if it
  // succeeded and false otherwise.
  static bool CommitBlock(Address start, size_t size,
                          Executability executable);

  // Attempts to allocate the requested (non-zero) number of pages from the
  // OS.  Fewer pages might be allocated than requested. If it fails to
  // allocate memory from the OS or cannot allocate a single page, this
  // function returns an invalid page pointer (NULL). The caller must check
  // whether the returned page is valid (by calling Page::is_valid()).  It is
  // guaranteed that allocated pages have contiguous addresses.  The actual
  // number of allocated pages is returned in the output parameter
  // allocated_pages.
  static Page* AllocatePages(int requested_pages, int* allocated_pages,
                             PagedSpace* owner);

  // Frees pages from a given page and after. If 'p' is the first page
  // of a chunk, pages from 'p' are freed and this function returns an
  // invalid page pointer. Otherwise, the function searches for a page
  // after 'p' that is the first page of a chunk. Pages after the
  // found page are freed and the function returns 'p'.
  static Page* FreePages(Page* p);

  // Allocates and frees raw memory of a certain size.
  // These are just thin wrappers around OS::Allocate and OS::Free,
  // but keep track of allocated bytes as part of the heap.
  static void* AllocateRawMemory(const size_t requested,
                                 size_t* allocated,
                                 Executability executable);
  static void FreeRawMemory(void* buf, size_t length);

  // Returns the maximum available bytes of heaps.
  static int Available() { return capacity_ < size_ ? 0 : capacity_ - size_; }

  // Returns maximum available bytes that the old space can have.
  static int MaxAvailable() {
    return (Available() / Page::kPageSize) * Page::kObjectAreaSize;
  }

  // Links two pages.
  static inline void SetNextPage(Page* prev, Page* next);

  // Returns the next page of a given page.
  static inline Page* GetNextPage(Page* p);

  // Checks whether a page belongs to a space.
  static inline bool IsPageInSpace(Page* p, PagedSpace* space);

  // Returns the space that owns the given page.
  static inline PagedSpace* PageOwner(Page* page);

  // Finds the first/last page in the same chunk as a given page.
  static Page* FindFirstPageInSameChunk(Page* p);
  static Page* FindLastPageInSameChunk(Page* p);

#ifdef DEBUG
  // Reports statistic info of the space.
  static void ReportStatistics();
#endif

  // Due to an encoding limitation, we can only have 8K chunks.
  static const int kMaxNofChunks = 1 << Page::kPageSizeBits;

  // If a chunk has at least 32 pages, the maximum heap size is about
  // 8 * 1024 * 32 * 8K = 2G bytes.
  static const int kPagesPerChunk = 64;
  static const int kChunkSize = kPagesPerChunk * Page::kPageSize;
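
  // Spelling the bound above out: 2^13 chunks * 2^5 pages * 2^13 bytes per
  // page = 2^31 bytes, i.e. the ~2G byte figure quoted; with the default of
  // 64 pages per chunk, kChunkSize is 64 * 8K = 512K bytes.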

 private:
  // Maximum space size in bytes.
  static int capacity_;

  // Allocated space size in bytes.
  static int size_;

  // The initial chunk of virtual memory.
  static VirtualMemory* initial_chunk_;
