config.help
Prompt for target technology
CONFIG_SYN_INFERRED
  Selects the target technology for memory and pads. The following
  are available:

  - Inferred: generic FPGA or ASIC targets, if your synthesis tool
    is capable of inferring RAMs and pads automatically
  - Altera: any Altera FPGA family
  - ATC18: Atmel-Nantes 0.18 um rad-hard CMOS
  - IHP25: IHP 0.25 um CMOS
  - UMC-0.18: UMC 0.18 um CMOS with Virtual Silicon libraries
  - Xilinx-Virtex/E: Xilinx Virtex/E libraries
  - Xilinx-Virtex2: Xilinx Virtex2 libraries
  - Xilinx-Spartan/2/3: Xilinx Spartan/2/3 libraries
  - Actel: ProAsic/P/3 and Axcelerator FPGAs

Ram library
CONFIG_MEM_VIRAGE
  Select RAM generators for ASIC targets. Currently, only Virage
  RAMs are supported.

Infer ram
CONFIG_SYN_INFER_RAM
  Say Y here if you want the synthesis tool to infer your RAM
  automatically. Say N to directly instantiate technology-specific
  RAM cells from the selected target technology package.

Infer pads
CONFIG_SYN_INFER_PADS
  Say Y here if you want the synthesis tool to infer pads. Say N to
  directly instantiate technology-specific pads from the selected
  target technology package.

No async reset
CONFIG_SYN_NO_ASYNC
  Say Y here to disable asynchronous reset in some of the IP cores.
  Might be necessary if the target library does not have cells with
  asynchronous set/reset.

Use Virtex CLKDLL for clock synchronisation
CONFIG_CLK_INFERRED
  Certain target technologies include clock generators to scale or
  phase-adjust the system and SDRAM clocks. This is currently
  supported for Xilinx and Altera FPGAs. Depending on technology,
  you can select to use the Xilinx CLKDLL macro (Virtex, VirtexE,
  Spartan1/2), the Xilinx DCM (Virtex-2, Spartan3, Virtex-4), or
  the Altera ALTDLL (Stratix, Cyclone). Choose the 'inferred'
  option if you are not using Xilinx or Altera FPGAs. Using a
  technology-specific clock generator is necessary to
  re-synchronize the SDRAM clock. For this to work, connect the
  external SDCLK signal with PLLREF.

Clock multiplier
CONFIG_CLK_MUL
  When using the Xilinx DCM or Altera ALTPLL, the system clock can
  be multiplied by a factor of 2 - 32 and divided by a factor of
  1 - 32. This makes it possible to generate almost any desired
  processor frequency. When using the Xilinx CLKDLL generator, the
  resulting frequency scale factor (mul/div) must be one of 1/2, 1
  or 2. WARNING: the resulting clock must be within the limits
  specified by the target FPGA family.

Clock divider
CONFIG_CLK_DIV
  When using the Xilinx DCM or Altera ALTPLL, the system clock can
  be multiplied by a factor of 2 - 32 and divided by a factor of
  1 - 32. This makes it possible to generate almost any desired
  processor frequency. When using the Xilinx CLKDLL generator, the
  resulting frequency scale factor (mul/div) must be one of 1/2, 1
  or 2. WARNING: the resulting clock must be within the limits
  specified by the target FPGA family.

System clock multiplier
CONFIG_CLKDLL_1_2
  The Xilinx CLKDLL can scale the input clock by a factor of 0.5,
  1.0, or 2.0. Useful when the target board has an oscillator with
  a too high (or low) frequency for your design. The scaled clock
  will be used as the main clock for the whole processor (except
  PCI and ethernet clocks).

System clock multiplier
CONFIG_DCM_2_3
  The Xilinx DCM and Altera ALTDLL can scale the input clock with a
  large range of factors. Useful when the target board has an
  oscillator with a too high (or low) frequency for your design.
  The scaled clock will be used as the main clock for the whole
  processor (except PCI and ethernet clocks).
  NOTE: the resulting frequency must be at least 24 MHz or the DCM
  and ALTDLL might not work.
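  [Illustration, not part of the original help text: a minimal
  Python sketch of the clock-scaling rules described in the entries
  above. The helper name system_clock_hz is hypothetical, and the
  per-family FPGA frequency limits are not modelled.]

    # Sketch: derive the system clock from CONFIG_CLK_MUL/CONFIG_CLK_DIV.
    def system_clock_hz(input_hz, mul, div, generator="DCM"):
        """Return the scaled system clock frequency in Hz."""
        if generator == "CLKDLL":
            # The CLKDLL only supports scale factors 1/2, 1 and 2.
            assert (mul, div) in ((1, 2), (1, 1), (2, 1)), \
                "CLKDLL scale factor must be 1/2, 1 or 2"
        else:  # DCM or ALTPLL/ALTDLL
            assert 2 <= mul <= 32 and 1 <= div <= 32
        out_hz = input_hz * mul // div
        if generator != "CLKDLL" and out_hz < 24_000_000:
            raise ValueError("DCM/ALTDLL output must be at least 24 MHz")
        return out_hz

    # Example: a 50 MHz oscillator scaled by 4/5 gives a 40 MHz clock.
    assert system_clock_hz(50_000_000, 4, 5) == 40_000_000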
Enable CLKDLL for PCI clock
CONFIG_PCI_CLKDLL
  Say Y here to re-synchronize the PCI clock using a Virtex BUFGDLL
  macro. Will improve PCI clock-to-output delays at the expense of
  input-setup requirements.

Use PCI clock as system clock
CONFIG_PCI_SYSCLK
  Say Y here to use the PCI clock to generate the system clock. The
  PCI clock can be scaled using the DCM or CLKDLL to generate a
  suitable processor clock.

External SDRAM clock feedback
CONFIG_CLK_NOFB
  Say Y here to disable the external clock feedback to synchronize
  the SDRAM clock. This option is necessary if your board or design
  does not have an external clock feedback that is connected to the
  pllref input of the clock generator.

Number of processors
CONFIG_PROC_NUM
  The number of processor cores. The LEON3MP design can accommodate
  up to 4 LEON3 processor cores. Use 1 unless you know what you are
  doing ...

Number of SPARC register windows
CONFIG_IU_NWINDOWS
  The SPARC architecture (and LEON) allows 2 - 32 register windows.
  However, any number except 8 will require that you modify and
  recompile your run-time system or kernel. Unless you know what
  you are doing, use 8.

SPARC V8 multiply and divide instructions
CONFIG_IU_V8MULDIV
  If you say Y here, the SPARC V8 multiply and divide instructions
  will be implemented. The instructions are: UMUL, UMULCC, SMUL,
  SMULCC, UDIV, UDIVCC, SDIV, SDIVCC. In code containing frequent
  integer multiplications and divisions, a significant performance
  increase can be achieved. Emulated floating-point operations will
  also benefit from this option. By default, the gcc compiler does
  not emit multiply or divide instructions and your code must be
  compiled with -mv8 to see any performance increase. On the other
  hand, code compiled with -mv8 will generate an illegal
  instruction trap when executed on processors with this option
  disabled. The divider consumes approximately 2 kgates, the
  multiplier 6 kgates.

Multiplier latency
CONFIG_IU_MUL_LATENCY_4
  The multiplier used for UMUL/SMUL instructions is implemented
  with a 16x16 multiplier which is iterated 4 times. This leads to
  a 4-cycle latency for multiply operations. To improve timing, a
  pipeline stage can be inserted into the 16x16 multiplier, which
  will lead to a 5-cycle latency for multiply operations.

Multiply-accumulate instructions
CONFIG_IU_MUL_MAC
  If you say Y here, the SPARC V8e UMAC/SMAC (multiply-accumulate)
  instructions will be enabled. The instructions implement a
  single-cycle 16x16->32 bit multiply with a 40-bit accumulator.
  The details of these instructions can be found in the LEON
  manual.

Single vector trapping
CONFIG_IU_SVT
  Single-vector trapping is a SPARC V8e option to reduce code size
  in small applications. If enabled, the processor will jump to the
  address of trap 0 (tt = 0x00) for all traps. No trap table is
  then needed. The trap type is present in %psr.tt and must be
  decoded by the O/S. Saves 4 Kbyte of code, but increases trap and
  interrupt overhead. Currently, the only O/S supporting this
  option is eCos. To enable SVT, the O/S must also set bit 13 in
  %asr17.

Load latency
CONFIG_IU_LDELAY
  Defines the pipeline load delay (= pipeline cycles before the
  data from a load instruction is available for the next
  instruction). One cycle gives best performance, but might create
  a critical path on targets with slow (data) cache memories. A
  2-cycle delay can improve timing but will reduce performance by
  about 5%.

Reset address
CONFIG_IU_RSTADDR
  By default, a SPARC processor starts execution at address 0. With
  this option, any 4-kbyte aligned reset start address can be
  chosen. Keep at 0 unless you really know what you are doing.
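  [Illustration, not part of the original help text: a minimal
  Python sketch of what "4-kbyte aligned" means for the reset start
  address. The helper name is hypothetical.]

    # Sketch: a 4-kbyte aligned address has its 12 least significant
    # bits equal to zero (4 Kbyte = 2**12 bytes).
    def is_valid_reset_address(addr):
        return addr % 0x1000 == 0

    assert is_valid_reset_address(0x00000000)    # the default
    assert is_valid_reset_address(0x40000000)
    assert not is_valid_reset_address(0x40000004)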
Power-down
CONFIG_PWD
  Say Y here to enable the power-down feature of the processor.
  Might reduce the maximum frequency slightly on FPGA targets. For
  details on the power-down operation, see the LEON3 manual.

Hardware watchpoints
CONFIG_IU_WATCHPOINTS
  The processor can have up to 4 hardware watchpoints, allowing
  both data and instruction breakpoints at any memory location,
  including PROM. Each watchpoint will use approximately 500 gates.
  Use 0 to disable the watchpoint function.

Floating-point enable
CONFIG_FPU_ENABLE
  Say Y here to enable the floating-point interface for the MEIKO
  or GRFPU. Note that no FPUs are provided with the GPL version of
  GRLIB. Both the Gaisler GRFPU and the Meiko FPU are commercial
  cores and must be obtained separately.

FPU selection
CONFIG_FPU_GRFPU
  Select between Gaisler Research's GRFPU or the Sun Meiko FPU
  core. Both cores are fully IEEE-754 compatible and support all
  SPARC FPU instructions.

GRFPC Configuration
CONFIG_FPU_GRFPC0
  Configures the GRFPU-LITE controller. In the simple
  configuration, the controller executes FP instructions in
  parallel with integer instructions. FP operands are fetched in
  the register file stage and the result is written in the write
  stage. This option uses the least area resources. The
  data-forwarding configuration gives ~10% higher FP performance
  than the simple configuration by adding data forwarding between
  the pipeline stages. The non-blocking controller allows FP load
  and store instructions to execute in parallel with FP
  instructions. The performance increase is ~20% for FP
  applications. This option uses the most logic resources and is
  suitable for ASIC implementations.

Enable Instruction cache
CONFIG_ICACHE_ENABLE
  The instruction cache should always be enabled to allow maximum
  performance. Some low-end systems might want to save area and
  disable the cache, but this will reduce the performance by a
  factor of 2 - 3.

Enable Data cache
CONFIG_DCACHE_ENABLE
  The data cache should always be enabled to allow maximum
  performance. Some low-end systems might want to save area and
  disable the cache, but this will reduce the performance by at
  least a factor of 2.

Instruction cache associativity
CONFIG_ICACHE_ASSO1
  The instruction cache can be implemented as a multi-set cache
  with 1 - 4 sets. Higher associativity usually increases the cache
  hit rate and thereby the performance. The downside is higher
  power consumption and increased gate count for tag comparators.
  Note that a 1-set cache is effectively a direct-mapped cache.

Instruction cache set size
CONFIG_ICACHE_SZ1
  The size of each set in the instruction cache (kbytes). Valid
  values are 1 - 64 in binary steps. Note that the full range is
  only supported by the generic and virtex2 targets. Most target
  packages are limited to 2 - 16 kbyte. A large set size gives
  higher performance but might affect the maximum frequency (on
  ASIC targets). The total instruction cache size is the number of
  sets multiplied by the set size.

Instruction cache line size
CONFIG_ICACHE_LZ16
  The instruction cache line size. Can be set to either 16 or 32
  bytes per line. Instruction caches typically benefit from larger
  line sizes, but on small caches it might be better to use 16
  bytes/line to limit the eviction miss rate.

Instruction cache replacement algorithm
CONFIG_ICACHE_ALGORND
  Cache replacement algorithm for caches with 2 - 4 sets. The
  'random' algorithm selects the set to evict randomly. The
  least-recently-replaced (LRR) algorithm evicts the set least
  recently replaced. The least-recently-used (LRU) algorithm evicts
  the set least recently accessed. The random algorithm uses a
  simple 1- or 2-bit counter to select the eviction set and has low
  area overhead. The LRR scheme uses one extra bit in the tag ram
  and therefore also has low area overhead. However, the LRR scheme
  can only be used with 2-set caches. The LRU scheme typically has
  the best performance but also the highest area overhead. A 2-set
  LRU uses 1 flip-flop per line, a 3-set LRU uses 3 flip-flops per
  line, and a 4-set LRU uses 5 flip-flops per line to store the
  access history.
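  [Illustration, not part of the original help text: a minimal
  Python sketch of the cache geometry arithmetic described in the
  entries above. The helper name cache_geometry is hypothetical.]

    # Sketch: total cache size and lines per set, from the set count
    # (CONFIG_ICACHE_ASSO*), set size in kbytes (CONFIG_ICACHE_SZ*)
    # and line size in bytes (CONFIG_ICACHE_LZ*).
    def cache_geometry(sets, set_size_kb, line_size):
        assert 1 <= sets <= 4
        assert set_size_kb in (1, 2, 4, 8, 16, 32, 64)  # binary steps
        assert line_size in (16, 32)
        total_kb = sets * set_size_kb
        lines_per_set = set_size_kb * 1024 // line_size
        return total_kb, lines_per_set

    # Example: a 2-set cache with 4 kbyte sets and 32-byte lines is
    # an 8 kbyte cache with 128 lines per set.
    assert cache_geometry(2, 4, 32) == (8, 128)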
Instruction cache locking
CONFIG_ICACHE_LOCK
  Say Y here to enable cache locking in the instruction cache.
  Locking can be done on cache-line level, but will increase the
  width of the tag ram by one bit. If you don't know what locking
  is good for, it is safe to say N.

Data cache associativity
CONFIG_DCACHE_ASSO1
  The data cache can be implemented as a multi-set cache with 1 - 4
  sets. Higher associativity usually increases the cache hit rate
  and thereby the performance. The downside is higher power
  consumption and increased gate count for tag comparators. Note
  that a 1-set cache is effectively a direct-mapped cache.

Data cache set size
CONFIG_DCACHE_SZ1
  The size of each set in the data cache (kbytes). Valid values are
  1 - 64 in binary steps. Note that the full range is only
  supported by the generic and virtex2 targets. Most target
  packages are limited to 2 - 16 kbyte. A large cache gives higher
  performance, but the data cache is timing-critical and a too
  large setting might affect the maximum frequency (on ASIC
  targets). The total data cache size is the number of sets
  multiplied by the set size.

Data cache line size
CONFIG_DCACHE_LZ16
  The data cache line size. Can be set to either 16 or 32 bytes per
  line. A smaller line size gives better associativity and a higher
  cache hit rate, but requires a larger tag memory.

Data cache replacement algorithm
CONFIG_DCACHE_ALGORND
  See the explanation for the instruction cache replacement
  algorithm.

Data cache locking
CONFIG_DCACHE_LOCK
  Say Y here to enable cache locking in the data cache. Locking can
  be done on cache-line level, but will increase the width of the
  tag ram by one bit. If you don't know what locking is good for,
  it is safe to say N.

Data cache snooping
CONFIG_DCACHE_SNOOP
  Say Y here to enable data cache snooping on the AHB bus. This is
  only useful if you have additional AHB masters such as the DSU or
  a target PCI interface. Note that the target technology must
  support dual-port RAMs for this option to be enabled. Dual-port
  RAMs are currently supported on Virtex/2, Virage and Actel
  targets.

Data cache snooping implementation
CONFIG_DCACHE_SNOOP_FAST
  The default snooping implementation is 'slow', which works if you
  don't have AHB slaves in cacheable areas capable of
  zero-waitstate non-sequential write accesses. Otherwise use
  'fast' and suffer a few kgates of extra area. This option is
  currently only needed in multi-master systems with the SSRAM or
  DDR memory controllers.

Fixed cacheability map
CONFIG_CACHE_FIXED
  If this variable is 0, the cacheable memory regions are defined
  by the AHB plug&play information (default). To override the
  plug&play settings, this variable can be set to indicate which
  areas should be cached. The value is treated as a 16-bit hex
  value with each bit defining if a 256 Mbyte segment should be
  cached or not. The right-most (LSB) bit defines the cacheability
  of AHB address 0 - 256 MByte, while the left-most (MSB) bit
  defines AHB address 3840 - 4096 MByte. If the bit is set, the
  corresponding area is cacheable. A value of 00F3 defines address
  0 - 0x20000000 and 0x40000000 - 0x80000000 as cacheable.
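  [Illustration, not part of the original help text: a minimal
  Python sketch decoding the 16-bit cacheability map described
  above. The helper name cacheable_ranges is hypothetical.]

    # Sketch: each bit of CONFIG_CACHE_FIXED covers one 256 Mbyte
    # segment; bit 0 (LSB) covers address 0, bit 15 (MSB) covers the
    # segment starting at 3840 Mbyte.
    SEG = 256 * 1024 * 1024  # 256 Mbyte

    def cacheable_ranges(cache_map):
        ranges = []
        for bit in range(16):
            if cache_map & (1 << bit):
                start = bit * SEG
                # Merge with the previous range if contiguous.
                if ranges and ranges[-1][1] == start:
                    ranges[-1] = (ranges[-1][0], start + SEG)
                else:
                    ranges.append((start, start + SEG))
        return ranges

    # The example from the help text: 0x00F3 marks 0 - 0x20000000
    # and 0x40000000 - 0x80000000 as cacheable.
    assert cacheable_ranges(0x00F3) == [(0x00000000, 0x20000000),
                                        (0x40000000, 0x80000000)]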
Local data ram
CONFIG_DCACHE_LRAM
  Say Y here to add a local ram to the data cache controller.
  Accesses to the ram (load/store) will be performed with 0
  waitstates and store data will never be written back to the AHB
  bus.

Size of local data ram
CONFIG_DCACHE_LRAM_SZ1
  Defines the size of the local data ram in Kbytes. Note that most
  technology libraries do not support rams larger than 16 Kbyte.

Start address of local data ram
CONFIG_DCACHE_LRSTART
  Defines the 8 MSB bits of the start address of the local data
  ram. By default set to 8f (start address = 0x8f000000), but any
  value (except 0) is possible. Note that the local data ram
  'shadows' a 16 Mbyte block of the address space. The address
  computation is shown in the sketch below.

MMU enable
CONFIG_MMU_ENABLE
  Say Y here to enable the Memory Management Unit (MMU).
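  [Illustration, not part of the original help text: a minimal
  Python sketch of how the CONFIG_DCACHE_LRSTART value maps to the
  local data ram start address. The helper name is hypothetical.]

    # Sketch: the 8-bit LRSTART value forms the 8 MSB bits of the
    # 32-bit start address, so the ram shadows a 16 Mbyte block
    # (2**24 bytes) of the address space.
    def lram_start_address(lrstart):
        assert 0 < lrstart <= 0xFF, "any 8-bit value except 0"
        return lrstart << 24

    assert lram_start_address(0x8F) == 0x8F000000  # the default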