video_copy.cfg
/*
 *  Copyright 2008 by Texas Instruments Incorporated.
 *
 *  All rights reserved. Property of Texas Instruments Incorporated.
 *  Restricted rights to use, duplicate or disclose this code are
 *  granted through contract.
 */

/*
 *  ======== video_copy.cfg ========
 *
 *  For details about the packages and configuration parameters used
 *  throughout this config script, see the Codec Engine Configuration Guide
 *  (link provided in the release notes).
 */

/*
 * Configure CE's OSAL. This codec server only builds for the BIOS side of
 * a heterogeneous system, so use the "DSPLINK_BIOS" configuration.
 */
var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.runtimeEnv = osalGlobal.DSPLINK_BIOS;

/* configure the default memory segment ID to the BIOS-defined "DDR2" */
osalGlobal.defaultMemSegId = "DDR2";

/* activate the BIOS logging module */
var LogServer = xdc.useModule('ti.sdo.ce.bioslog.LogServer');

/*
 * "Use" the various codec modules, i.e. the implementations of the codecs.
 * Each "xdc.useModule" call returns a handle to a codec, which we use below
 * to add that codec to the Server.algs array.
 */
var VIDDEC_COPY =
    xdc.useModule('ti.sdo.ce.examples.codecs.viddec_copy.VIDDEC_COPY');
var VIDENC_COPY =
    xdc.useModule('ti.sdo.ce.examples.codecs.videnc_copy.VIDENC_COPY');

/*
 * ======== Server Configuration ========
 */
var Server = xdc.useModule('ti.sdo.ce.Server');

/* The server's stack size */
Server.threadAttrs.stackSize = 2048;

/* The server's execution priority */
Server.threadAttrs.priority = Server.MINPRI;

/*
 * The array of algorithms this server can serve up. This array also
 * configures details about the threads that will be created to run the
 * algorithms (e.g. stack sizes, priorities, etc.).
 */
Server.algs = [
    {
        name: "viddec_copy",            // C name of the codec
        mod: VIDDEC_COPY,               // var VIDDEC_COPY defined above
        threadAttrs: {
            stackMemId: 0,              // BIOS MEM segment ID for task's stack
            priority: Server.MINPRI + 1 // task priority
        },
        groupId: 0,                     // scratch group ID (see DSKT2 below)
    },
    {
        name: "videnc_copy",
        mod: VIDENC_COPY,
        threadAttrs: {
            stackMemId: 0,
            priority: Server.MINPRI + 1
        },
        groupId: 0,
    },
];

/* we can use DMA in the VIDENC_COPY codec */
VIDENC_COPY.useDMA = true;

/*
 * Note that we presume this server runs on a system with DSKT2 and DMAN3,
 * so we configure those modules here.
 */

/*
 * ======== DSKT2 (xDAIS algorithm memory allocation) configuration ========
 *
 * DSKT2 is the memory manager for all algorithms running in the system,
 * granting them persistent and temporary ("scratch") internal and external
 * memory. We configure it here to define its memory allocation policy.
 *
 * DSKT2 settings are critical for algorithm performance.
 *
 * First we assign the various types of algorithm internal memory (DARAM0..2,
 * SARAM0..2, IPROG, which are all the same on a C64+ DSP) to "L1DHEAP",
 * defined in the .tcf file as an internal memory heap. (For instance, if an
 * algorithm asks for 5K of DARAM1 memory, DSKT2 will allocate 5K from
 * L1DHEAP, if available, and give it to the algorithm; if the 5K is not
 * available in L1DHEAP, that algorithm's creation will fail.)
 *
 * The remaining segments we point to the "DDRALGHEAP" external memory
 * segment (also defined in the .tcf), except for DSKT2_HEAP, which stores
 * DSKT2's internal dynamically allocated objects. These must be preserved
 * even when no codec instances are running, so we place them in the "DDR2"
 * memory segment with the rest of the system code and static data.
 */
var DSKT2 = xdc.useModule('ti.sdo.fc.dskt2.DSKT2');
DSKT2.DARAM0 = "L1DHEAP";
DSKT2.DARAM1 = "L1DHEAP";
DSKT2.DARAM2 = "L1DHEAP";
DSKT2.SARAM0 = "L1DHEAP";
DSKT2.SARAM1 = "L1DHEAP";
DSKT2.SARAM2 = "L1DHEAP";
DSKT2.ESDATA = "DDRALGHEAP";
DSKT2.IPROG  = "L1DHEAP";
DSKT2.EPROG  = "DDRALGHEAP";
DSKT2.DSKT2_HEAP = "DDR2";

/*
 * Next we define how to fulfill algorithms' requests for fast ("scratch")
 * internal memory; "scratch" is an area an algorithm writes to while it
 * processes a frame of data, and whose contents need not be preserved
 * between frames.
 *
 * First we turn off the switch that would allow the DSKT2 algorithm memory
 * manager to give an algorithm external memory for scratch when the system
 * has run out of internal memory. With the switch off, an algorithm that
 * fails to get its requested scratch memory fails at creation time rather
 * than running with poor performance. (If your algorithms fail to create,
 * you may try changing this value to "true" just to get the system running,
 * and optimize the other scratch settings later.)
 *
 * Next we set the "algorithm scratch sizes", a scheme we use to minimize the
 * internal memory spent on algorithms' scratch allocations. Algorithms that
 * belong to the same "scratch group ID" -- field "groupId" in the
 * algorithm's Server.algs entry above, reflecting the priority of the task
 * running the algorithm -- don't run at the same time and thus can share the
 * same scratch area. When the first algorithm in a given "scratch group"
 * (between 0 and 19) is created, a shared scratch area for that groupId is
 * created with a size equal to SARAM_SCRATCH_SIZES[<alg's groupId>] below --
 * unless the algorithm requests more than that, in which case the size is
 * what the algorithm asks for. So SARAM_SCRATCH_SIZES[<alg's groupId>] is
 * more of a per-groupId size guideline -- if an algorithm needs more it will
 * get it, but getting these size guidelines right is important for optimal
 * use of internal memory. The reason is that if an algorithm comes along
 * that needs more scratch memory than its groupId scratch area's size, that
 * memory is allocated separately, without sharing.
 *
 * DSKT2.SARAM_SCRATCH_SIZES[<groupId>] does not mean that a scratch area of
 * that size is automatically allocated for group <groupId> at system
 * startup; it is only the preferred minimum scratch size to use for the
 * first algorithm created in group <groupId>, if any.
 *
 * (An example: say algorithms A and B, both with groupId = 0, require 10K
 * and 20K of scratch respectively, and SARAM_SCRATCH_SIZES[0] is 0. If A is
 * created first, DSKT2 allocates a shared scratch area for group 0 of the
 * 10K that A needs. When B is then created, the 20K scratch area it gets is
 * not shared with A's -- or anyone else's -- and the total internal memory
 * use is 30K. By contrast, if B is created first, a 20K shared scratch area
 * is allocated, and when A comes along it gets its 10K from group 0's
 * existing 20K area. To eliminate such surprises, we set
 * SARAM_SCRATCH_SIZES[0] to 20K and always spend exactly 20K on A and B's
 * shared needs, independent of their creation order. Not only do we save 10K
 * of precious internal memory, but we avoid the possibility that B cannot be
 * created because less than 20K was available in the DSKT2 internal heaps.)
 *
 * In the example below we set the size guideline for groupId 0 to 32K --
 * purely as an illustration, since our codecs don't use it.
 *
 * Finally, note that if the codecs correctly implement the
 * ti.sdo.ce.ICodec.getDaramScratchSize() and .getSaramScratchSize() methods,
 * this scratch size configuration can be autogenerated by setting
 * Server.autoGenScratchSizeArrays = true.
 */
DSKT2.ALLOW_EXTERNAL_SCRATCH = false;
DSKT2.SARAM_SCRATCH_SIZES[0] = 32 * 1024;

/*
 * ======== DMAN3 (DMA manager) configuration ========
 */
var DMAN3 = xdc.useModule('ti.sdo.fc.dman3.DMAN3');

/* First we configure how DMAN3 handles memory allocations.
 *
 * Essentially, the configuration below should work for most codec
 * combinations. If it doesn't work for yours -- meaning an algorithm fails
 * to create due to insufficient internal memory -- try the alternative (the
 * commented-out line that assigns "DDRALGHEAP" to DMAN3.heapInternal).
 *
 * What follows is an FYI -- an explanation of what the alternative would do:
 *
 * When we use an external memory segment (DDRALGHEAP) for DMAN3's internal
 * segment, we force algorithms to use external memory for what they think is
 * internal memory. We do this in a memory-constrained environment where all
 * internal memory is used by cache and/or algorithm scratch memory,
 * pessimistically assuming that if DMAN3 used any internal memory, other
 * components (algorithms) would not get the internal memory they need.
 *
 * This setting affects performance only slightly.
 *
 * With DMAN3.heapInternal = <external-heap>, DMAN3 *may not* be able to
 * supply ACPY3_PROTOCOL IDMA3 channels with the internal memory the protocol
 * requires for IDMA3 channel 'env' memory. To deal with this catch-22, we
 * configure DMAN3 with hook functions that obtain internal scratch memory
 * from the shared scratch pool of the associated algorithm's scratch group
 * (i.e. DMAN3 first tries to get the internal scratch memory from the DSKT2
 * shared allocation pool, hoping there is enough extra memory in the shared
 * pool; if that doesn't work, it tries a persistent allocation from
 * DMAN3.heapInternal).
 */
DMAN3.heapInternal = "L1DHEAP";       /* L1DHEAP is an internal segment */
// DMAN3.heapInternal = "DDRALGHEAP"; /* DDRALGHEAP is an external segment */
DMAN3.heapExternal = "DDRALGHEAP";
DMAN3.idma3Internal = false;
DMAN3.scratchAllocFxn = "DSKT2_allocScratch";
DMAN3.scratchFreeFxn = "DSKT2_freeScratch";

/* Next, we configure all the physical resources that DMAN3 is granted
 * exclusively. These settings are optimized for the DSP on the DM6446
 * (DaVinci).
 *
 * We assume PaRAM sets 0..79 are taken by the Arm-side drivers, so we
 * reserve all the rest, up to 127 (there are 128 PaRAM sets on the DM6446).
 * DMAN3 takes TCCs 32 through 63 (hence the high TCC mask is 0xFFFFFFFF and
 * the low TCC mask is 0). Of the 48 PaRAM sets we reserved, we assign all of
 * them to scratch group 0; similarly, of the 32 TCCs we reserved, we assign
 * all of them to scratch group 0.
 *
 * If we had more scratch groups with algorithms that require EDMA, we would
 * split those 48 PaRAM sets and 32 TCCs appropriately. For example, if we
 * had a video encoder algorithm in group 0 and a video decoder algorithm in
 * group 1, and they both needed a number of EDMA channels, we could assign
 * 24 PaRAM sets and 16 TCCs to each of groups [0] and [1] (assuming both
 * algorithms needed no more than 24 channels to run properly).
 */
DMAN3.paRamBaseIndex = 80;    // first EDMA3 PaRAM set available for DMAN3
DMAN3.numQdmaChannels = 8;    // number of the device's QDMA channels to use
DMAN3.qdmaChannels = [0, 1, 2, 3, 4, 5, 6, 7]; // choice of QDMA channels
DMAN3.numPaRamEntries = 48;   // number of PaRAM sets exclusively used by DMAN3
DMAN3.numPaRamGroup[0] = 48;  // number of PaRAM sets for scratch group 0
DMAN3.numTccGroup[0] = 32;    // number of TCCs assigned to scratch group 0
DMAN3.tccAllocationMaskL = 0;          // bit mask: which TCCs 0..31 to use
DMAN3.tccAllocationMaskH = 0xffffffff; // assign all TCCs 32..63 to DMAN3

/* The remaining DMAN3 configuration settings are left at the
 * ti.sdo.fc.dman3.DMAN3 defaults.
 * You may need to override them to add more QDMA channels and to configure
 * per-scratch-group resource sub-allocations.
 */

/*
 * @(#) ti.sdo.ce.examples.servers.video_copy.evmDM6446; 1, 0, 0,55; 1-14-2008 09:54:58; /db/atree/library/trees/ce-g30x/src/
 */
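The scratch-group sizing rules described in the DSKT2 comments above can be sanity-checked with a small model. The sketch below is a hypothetical simplification, not part of the DSKT2 API: the function `totalScratch` and its policy are illustrative only. It models the first algorithm created in a group sizing the shared area to the larger of the group's guideline and its own request; later algorithms share that area when their request fits, and any larger request is allocated separately.

```javascript
// Hypothetical model of the per-group scratch policy sketched in the
// comments above (illustrative only -- not a DSKT2 interface).
// requests: scratch sizes, in creation order, for algorithms in one group.
// guideline: the SARAM_SCRATCH_SIZES[] entry for that group.
// Returns the total internal memory consumed for scratch.
function totalScratch(requests, guideline) {
    var shared = 0; // size of the group's shared scratch area, once created
    var total = 0;  // total internal memory consumed
    for (var i = 0; i < requests.length; i++) {
        var req = requests[i];
        if (shared === 0) {
            // first creation in the group sizes the shared area
            shared = Math.max(guideline, req);
            total += shared;
        } else if (req > shared) {
            // too big for the shared area: allocated separately, unshared
            total += req;
        }
        // otherwise the request is served from the existing shared area
    }
    return total;
}
```

With the A/B example from the comments (10K and 20K in group 0), `totalScratch([10240, 20480], 0)` gives 30720 while `totalScratch([20480, 10240], 0)` gives 20480, and setting the guideline to 20K makes the result 20480 regardless of creation order, which is exactly the motivation for configuring `SARAM_SCRATCH_SIZES[0]` explicitly.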
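The resource arithmetic in the DMAN3 comments can be checked the same way. The two TCC allocation masks are plain 32-bit bit masks (maskL covers TCCs 0..31, maskH covers TCCs 32..63), and the PaRAM reservation is just the range from `paRamBaseIndex` up to set 127. The helper `countBits` below is illustrative, not a DMAN3 API.

```javascript
// Count the set bits in a 32-bit mask using an unsigned right shift.
function countBits(mask) {
    var n = 0;
    for (var b = 0; b < 32; b++) {
        n += (mask >>> b) & 1;
    }
    return n;
}

// TCCs granted to DMAN3 by the two masks configured above:
// tccAllocationMaskL = 0x00000000 (none of TCCs 0..31),
// tccAllocationMaskH = 0xffffffff (all of TCCs 32..63).
var tccCount = countBits(0x00000000) + countBits(0xffffffff);

// PaRAM sets granted to DMAN3: sets 80..127 of the DM6446's 128.
var paRamCount = 128 - 80;
```

Here `tccCount` is 32 and `paRamCount` is 48, matching `numTccGroup[0]` and `numPaRamEntries` in the configuration above; a 24/16 split per group, as suggested in the comment for a two-group system, would simply partition these same totals.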