
leon3.in.help

Number of processors
CONFIG_PROC_NUM
  The number of processor cores. The LEON3MP design can accommodate
  up to 4 LEON3 processor cores. Use 1 unless you know what you are
  doing.

Number of SPARC register windows
CONFIG_IU_NWINDOWS
  The SPARC architecture (and LEON) allows 2 - 32 register windows.
  However, any number except 8 will require that you modify and
  recompile your run-time system or kernel. Unless you know what
  you are doing, use 8.

SPARC V8 multiply and divide instructions
CONFIG_IU_V8MULDIV
  If you say Y here, the SPARC V8 multiply and divide instructions
  will be implemented. The instructions are: UMUL, UMULCC, SMUL,
  SMULCC, UDIV, UDIVCC, SDIV, SDIVCC. In code containing frequent
  integer multiplications and divisions, a significant performance
  increase can be achieved. Emulated floating-point operations will
  also benefit from this option.
  By default, the gcc compiler does not emit multiply or divide
  instructions, and your code must be compiled with -mv8 to see any
  performance increase. On the other hand, code compiled with -mv8
  will generate an illegal instruction trap when executed on
  processors with this option disabled.
  The divider consumes approximately 2 kgates, the multiplier 6 kgates.

Multiplier latency
CONFIG_IU_MUL_LATENCY_2
  Implementation options for the integer multiplier.

  Type        Implementation              issue-rate/latency
  2-clocks    32x32 pipelined multiplier     1/2
  4-clocks    16x16 standard multiplier      4/4
  5-clocks    16x16 pipelined multiplier     4/5

Multiply-accumulate (MAC)
CONFIG_IU_MUL_MAC
  If you say Y here, the SPARC V8e UMAC/SMAC (multiply-accumulate)
  instructions will be enabled. The instructions implement a
  single-cycle 16x16->32 bit multiply with a 40-bit accumulator.
  The details of these instructions can be found in the LEON manual.
  This option is only available when a 16x16 multiplier is used.

Single vector trapping
CONFIG_IU_SVT
  Single-vector trapping is a SPARC V8e option to reduce code size
  in small applications. If enabled, the processor will jump to
  the address of trap 0 (tt = 0x00) for all traps. No trap table
  is then needed. The trap type is present in %psr.tt and must
  be decoded by the O/S. Saves 4 kbyte of code, but increases
  trap and interrupt overhead. Currently, the only O/S supporting
  this option is eCos. To enable SVT, the O/S must also set bit 13
  in %asr17.

Load latency
CONFIG_IU_LDELAY
  Defines the pipeline load delay (= pipeline cycles before the data
  from a load instruction is available for the next instruction).
  One cycle gives best performance, but might create a critical path
  on targets with slow (data) cache memories. A 2-cycle delay can
  improve timing but will reduce performance by about 5%.

Reset address
CONFIG_IU_RSTADDR
  By default, a SPARC processor starts execution at address 0.
  With this option, any 4-kbyte-aligned reset start address can be
  chosen. Keep at 0 unless you really know what you are doing.

Power-down
CONFIG_PWD
  Say Y here to enable the power-down feature of the processor.
  Might reduce the maximum frequency slightly on FPGA targets.
  For details on the power-down operation, see the LEON3 manual.

Hardware watchpoints
CONFIG_IU_WATCHPOINTS
  The processor can have up to 4 hardware watchpoints, allowing
  both data and instruction breakpoints at any memory location,
  including in PROM. Each watchpoint uses approximately 500 gates.
  Use 0 to disable the watchpoint function.

Floating-point enable
CONFIG_FPU_ENABLE
  Say Y here to enable the floating-point interface for the MEIKO
  or GRFPU. Note that no FPUs are provided with the GPL version
  of GRLIB. Both the Gaisler GRFPU and the Meiko FPU are commercial
  cores and must be obtained separately.
FPU selection
CONFIG_FPU_GRFPU
  Select between Gaisler Research's GRFPU and GRFPU-lite FPUs or the
  Sun Meiko FPU core. All cores are fully IEEE-754 compatible and
  support all SPARC FPU instructions.

GRFPU multiplier
CONFIG_FPU_GRFPU_INFMUL
  On FPGA targets, choose the inferred multiplier. For ASIC
  implementations, choose between the Synopsys DesignWare (DW)
  multiplier and the Module Generator (ModGen) multiplier. The DW
  multiplier gives better results (smaller area and better timing)
  but requires a DW license. The ModGen multiplier is part of GRLIB
  and does not require a license.

Shared GRFPU
CONFIG_FPU_GRFPU_SH
  If enabled, multiple CPU cores will share one GRFPU.

GRFPC configuration
CONFIG_FPU_GRFPC0
  Configures the GRFPU-lite controller.
  In the simple configuration, the controller executes FP instructions
  in parallel with integer instructions. FP operands are fetched in
  the register-file stage and the result is written in the write
  stage. This option uses the least area resources.
  The data-forwarding configuration gives ~ 10 % higher FP performance
  than the simple configuration by adding data forwarding between the
  pipeline stages.
  The non-blocking controller allows FP load and store instructions to
  execute in parallel with FP instructions. The performance increase
  is ~ 20 % for FP applications. This option uses the most logic
  resources and is suitable for ASIC implementations.

Floating-point netlist
CONFIG_FPU_NETLIST
  Say Y here to use a VHDL netlist of the GRFPU-lite. This is
  only available in certain versions of GRLIB.

Enable instruction cache
CONFIG_ICACHE_ENABLE
  The instruction cache should always be enabled to allow maximum
  performance. Some low-end systems might want to save area and
  disable the cache, but this will reduce performance by a factor
  of 2 - 3.

Enable data cache
CONFIG_DCACHE_ENABLE
  The data cache should always be enabled to allow maximum
  performance. Some low-end systems might want to save area and
  disable the cache, but this will reduce performance by a factor
  of at least 2.

Instruction cache associativity
CONFIG_ICACHE_ASSO1
  The instruction cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and an increased gate count for tag comparators.
  Note that a 1-set cache is effectively a direct-mapped cache.

Instruction cache set size
CONFIG_ICACHE_SZ1
  The size of each set in the instruction cache (kbytes). Valid
  values are 1 - 64 in binary steps. Note that the full range is only
  supported by the generic and virtex2 targets. Most target packages
  are limited to 2 - 16 kbyte. A large set size gives higher
  performance but might affect the maximum frequency (on ASIC
  targets). The total instruction cache size is the number of sets
  multiplied by the set size.

Instruction cache line size
CONFIG_ICACHE_LZ16
  The instruction cache line size. Can be set to either 16 or 32
  bytes per line. Instruction caches typically benefit from larger
  line sizes, but on small caches it might be better to use 16
  bytes/line to limit the eviction miss rate.

Instruction cache replacement algorithm
CONFIG_ICACHE_ALGORND
  Cache replacement algorithm for caches with 2 - 4 sets. The
  'random' algorithm selects the set to evict randomly. The
  least-recently-replaced (LRR) algorithm evicts the set least
  recently replaced. The least-recently-used (LRU) algorithm evicts
  the set least recently accessed. The random algorithm uses a simple
  1- or 2-bit counter to select the eviction set and has low area
  overhead. The LRR scheme uses one extra bit in the tag ram and
  therefore also has low area overhead. However, the LRR scheme can
  only be used with 2-set caches. The LRU scheme typically has the
  best performance but also the highest area overhead. A 2-set LRU
  uses 1 flip-flop per line, a 3-set LRU uses 3 flip-flops per line,
  and a 4-set LRU uses 5 flip-flops per line to store the access
  history.

Instruction cache locking
CONFIG_ICACHE_LOCK
  Say Y here to enable cache locking in the instruction cache.
  Locking can be done at cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what
  locking is good for, it is safe to say N.

Data cache associativity
CONFIG_DCACHE_ASSO1
  The data cache can be implemented as a multi-set cache with
  1 - 4 sets. Higher associativity usually increases the cache hit
  rate and thereby the performance. The downside is higher power
  consumption and an increased gate count for tag comparators.
  Note that a 1-set cache is effectively a direct-mapped cache.

Data cache set size
CONFIG_DCACHE_SZ1
  The size of each set in the data cache (kbytes). Valid values are
  1 - 64 in binary steps. Note that the full range is only supported
