⭐ 虫虫下载站

📄 zhlee_memcpy.s

📁 An optimized memcpy implemented in ARM assembly, improving memcpy performance.
#define ENTRY(_LABEL) \
        .global _LABEL; _LABEL:

        .globl  zhleememcpy
zhleememcpy:
        /* ENTRY(memcpy) */
        stmfd   sp!, {r0, r12, lr}
        bl      _zhleememcpy
        ldmfd   sp!, {r0, r12, pc}

        .globl  zhleememmove
zhleememmove:
        /* ENTRY(memmove) */
        stmfd   sp!, {r0, r12, lr}
        bl      _zhleememcpy
        ldmfd   sp!, {r0, r12, pc}

/*
 * This is one fun bit of code ...
 * Some easy listening music is suggested while trying to understand this
 * code e.g. Iron Maiden
 *
 * For anyone attempting to understand it :
 *
 * The core code is implemented here with simple stubs for memcpy()
 * memmove() and bcopy().
 *
 * All local labels are prefixed with Lmemcpy_
 * Following the prefix a label starting f is used in the forward copy code
 * while a label using b is used in the backwards copy code
 * The source and destination addresses determine whether a forward or
 * backward copy is performed.
 * Separate bits of code are used to deal with the following situations
 * for both the forward and backwards copy.
 * unaligned source address
 * unaligned destination address
 * Separate copy routines are used to produce an optimised result for each
 * of these cases.
 * The copy code will use LDM/STM instructions to copy up to 32 bytes at
 * a time where possible.
 *
 * Note: r12 (aka ip) can be trashed during the function along with
 * r0-r3 although r0-r2 have defined uses i.e. src, dest, len throughout.
 * Additional registers are preserved prior to use i.e. r4, r5 & lr
 *
 * Apologies for the state of the comments ;-)
 */

_zhleememcpy:
        /* ENTRY(_memcpy) */
        /* Determine copy direction */
        cmp     r1, r0
        bcc     Lmemcpy_backwards

        moveq   r0, #0                  /* Quick abort for len=0 */
        moveq   pc, lr

        stmdb   sp!, {r0, lr}           /* memcpy() returns dest addr */
        subs    r2, r2, #4
        blt     Lmemcpy_fl4             /* less than 4 bytes */
        ands    r12, r0, #3
        bne     Lmemcpy_fdestul         /* oh unaligned destination addr */
        ands    r12, r1, #3
        bne     Lmemcpy_fsrcul          /* oh unaligned source addr */

Lmemcpy_ft8:
        /* We have aligned source and destination */
        subs    r2, r2, #8
        blt     Lmemcpy_fl12            /* less than 12 bytes (4 from above) */
        subs    r2, r2, #0x14
        blt     Lmemcpy_fl32            /* less than 32 bytes (12 from above) */
        stmdb   sp!, {r4}               /* borrow r4 */

        /* blat 32 bytes at a time */
        /* XXX for really big copies perhaps we should use more registers */
Lmemcpy_floop32:
        ldmia   r1!, {r3, r4, r12, lr}
        stmia   r0!, {r3, r4, r12, lr}
        ldmia   r1!, {r3, r4, r12, lr}
        stmia   r0!, {r3, r4, r12, lr}
        subs    r2, r2, #0x20
        bge     Lmemcpy_floop32

        cmn     r2, #0x10
        ldmgeia r1!, {r3, r4, r12, lr}  /* blat a remaining 16 bytes */
        stmgeia r0!, {r3, r4, r12, lr}
        subge   r2, r2, #0x10
        ldmia   sp!, {r4}               /* return r4 */

Lmemcpy_fl32:
        adds    r2, r2, #0x14

        /* blat 12 bytes at a time */
Lmemcpy_floop12:
        ldmgeia r1!, {r3, r12, lr}
        stmgeia r0!, {r3, r12, lr}
        subges  r2, r2, #0x0c
        bge     Lmemcpy_floop12

Lmemcpy_fl12:
        adds    r2, r2, #8
        blt     Lmemcpy_fl4

        subs    r2, r2, #4
        ldrlt   r3, [r1], #4
        strlt   r3, [r0], #4
        ldmgeia r1!, {r3, r12}
        stmgeia r0!, {r3, r12}
        subge   r2, r2, #4

Lmemcpy_fl4:
        /* less than 4 bytes to go */
        adds    r2, r2, #4
        ldmeqia sp!, {r0, pc}           /* done */

        /* copy the crud byte at a time */
        cmp     r2, #2
        ldrb    r3, [r1], #1
        strb    r3, [r0], #1
        ldrgeb  r3, [r1], #1
        strgeb  r3, [r0], #1
        ldrgtb  r3, [r1], #1
        strgtb  r3, [r0], #1
        ldmia   sp!, {r0, pc}

        /* erg - unaligned destination */
Lmemcpy_fdestul:
        rsb     r12, r12, #4
        cmp     r12, #2

        /* align destination with byte copies */
        ldrb    r3, [r1], #1
        strb    r3, [r0], #1
        ldrgeb  r3, [r1], #1
        strgeb  r3, [r0], #1
        ldrgtb  r3, [r1], #1
        strgtb  r3, [r0], #1
        subs    r2, r2, r12
        blt     Lmemcpy_fl4             /* less than 4 bytes */

        ands    r12, r1, #3
        beq     Lmemcpy_ft8             /* we have an aligned source */

        /* erg - unaligned source */
        /* This is where it gets nasty ... */
Lmemcpy_fsrcul:
        bic     r1, r1, #3
        ldr     lr, [r1], #4
        cmp     r12, #2
        bgt     Lmemcpy_fsrcul3
        beq     Lmemcpy_fsrcul2
        cmp     r2, #0x0c
        blt     Lmemcpy_fsrcul1loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5}

Lmemcpy_fsrcul1loop16:
        mov     r3, lr, lsr #8
        ldmia   r1!, {r4, r5, r12, lr}
        orr     r3, r3, r4, lsl #24
        mov     r4, r4, lsr #8
        orr     r4, r4, r5, lsl #24
        mov     r5, r5, lsr #8
        orr     r5, r5, r12, lsl #24
        mov     r12, r12, lsr #8
        orr     r12, r12, lr, lsl #24
        stmia   r0!, {r3-r5, r12}
        subs    r2, r2, #0x10
        bge     Lmemcpy_fsrcul1loop16
        ldmia   sp!, {r4, r5}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_fsrcul1l4

Lmemcpy_fsrcul1loop4:
        mov     r12, lr, lsr #8
        ldr     lr, [r1], #4
        orr     r12, r12, lr, lsl #24
        str     r12, [r0], #4
        subs    r2, r2, #4
        bge     Lmemcpy_fsrcul1loop4

Lmemcpy_fsrcul1l4:
        sub     r1, r1, #3
        b       Lmemcpy_fl4

Lmemcpy_fsrcul2:
        cmp     r2, #0x0c
        blt     Lmemcpy_fsrcul2loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5}

Lmemcpy_fsrcul2loop16:
        mov     r3, lr, lsr #16
        ldmia   r1!, {r4, r5, r12, lr}
        orr     r3, r3, r4, lsl #16
        mov     r4, r4, lsr #16
        orr     r4, r4, r5, lsl #16
        mov     r5, r5, lsr #16
        orr     r5, r5, r12, lsl #16
        mov     r12, r12, lsr #16
        orr     r12, r12, lr, lsl #16
        stmia   r0!, {r3-r5, r12}
        subs    r2, r2, #0x10
        bge     Lmemcpy_fsrcul2loop16
        ldmia   sp!, {r4, r5}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_fsrcul2l4

Lmemcpy_fsrcul2loop4:
        mov     r12, lr, lsr #16
        ldr     lr, [r1], #4
        orr     r12, r12, lr, lsl #16
        str     r12, [r0], #4
        subs    r2, r2, #4
        bge     Lmemcpy_fsrcul2loop4

Lmemcpy_fsrcul2l4:
        sub     r1, r1, #2
        b       Lmemcpy_fl4

Lmemcpy_fsrcul3:
        cmp     r2, #0x0c
        blt     Lmemcpy_fsrcul3loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5}

Lmemcpy_fsrcul3loop16:
        mov     r3, lr, lsr #24
        ldmia   r1!, {r4, r5, r12, lr}
        orr     r3, r3, r4, lsl #8
        mov     r4, r4, lsr #24
        orr     r4, r4, r5, lsl #8
        mov     r5, r5, lsr #24
        orr     r5, r5, r12, lsl #8
        mov     r12, r12, lsr #24
        orr     r12, r12, lr, lsl #8
        stmia   r0!, {r3-r5, r12}
        subs    r2, r2, #0x10
        bge     Lmemcpy_fsrcul3loop16
        ldmia   sp!, {r4, r5}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_fsrcul3l4

Lmemcpy_fsrcul3loop4:
        mov     r12, lr, lsr #24
        ldr     lr, [r1], #4
        orr     r12, r12, lr, lsl #8
        str     r12, [r0], #4
        subs    r2, r2, #4
        bge     Lmemcpy_fsrcul3loop4

Lmemcpy_fsrcul3l4:
        sub     r1, r1, #1
        b       Lmemcpy_fl4

Lmemcpy_backwards:
        add     r1, r1, r2
        add     r0, r0, r2
        subs    r2, r2, #4
        blt     Lmemcpy_bl4             /* less than 4 bytes */
        ands    r12, r0, #3
        bne     Lmemcpy_bdestul         /* oh unaligned destination addr */
        ands    r12, r1, #3
        bne     Lmemcpy_bsrcul          /* oh unaligned source addr */

Lmemcpy_bt8:
        /* We have aligned source and destination */
        subs    r2, r2, #8
        blt     Lmemcpy_bl12            /* less than 12 bytes (4 from above) */
        stmdb   sp!, {r4, lr}
        subs    r2, r2, #0x14           /* less than 32 bytes (12 from above) */
        blt     Lmemcpy_bl32

        /* blat 32 bytes at a time */
        /* XXX for really big copies perhaps we should use more registers */
Lmemcpy_bloop32:
        ldmdb   r1!, {r3, r4, r12, lr}
        stmdb   r0!, {r3, r4, r12, lr}
        ldmdb   r1!, {r3, r4, r12, lr}
        stmdb   r0!, {r3, r4, r12, lr}
        subs    r2, r2, #0x20
        bge     Lmemcpy_bloop32

Lmemcpy_bl32:
        cmn     r2, #0x10
        ldmgedb r1!, {r3, r4, r12, lr}  /* blat a remaining 16 bytes */
        stmgedb r0!, {r3, r4, r12, lr}
        subge   r2, r2, #0x10
        adds    r2, r2, #0x14
        ldmgedb r1!, {r3, r12, lr}      /* blat a remaining 12 bytes */
        stmgedb r0!, {r3, r12, lr}
        subge   r2, r2, #0x0c
        ldmia   sp!, {r4, lr}

Lmemcpy_bl12:
        adds    r2, r2, #8
        blt     Lmemcpy_bl4
        subs    r2, r2, #4
        ldrlt   r3, [r1, #-4]!
        strlt   r3, [r0, #-4]!
        ldmgedb r1!, {r3, r12}
        stmgedb r0!, {r3, r12}
        subge   r2, r2, #4

Lmemcpy_bl4:
        /* less than 4 bytes to go */
        adds    r2, r2, #4
        moveq   pc, lr                  /* done */

        /* copy the crud byte at a time */
        cmp     r2, #2
        ldrb    r3, [r1, #-1]!
        strb    r3, [r0, #-1]!
        ldrgeb  r3, [r1, #-1]!
        strgeb  r3, [r0, #-1]!
        ldrgtb  r3, [r1, #-1]!
        strgtb  r3, [r0, #-1]!
        mov     pc, lr

        /* erg - unaligned destination */
Lmemcpy_bdestul:
        cmp     r12, #2

        /* align destination with byte copies */
        ldrb    r3, [r1, #-1]!
        strb    r3, [r0, #-1]!
        ldrgeb  r3, [r1, #-1]!
        strgeb  r3, [r0, #-1]!
        ldrgtb  r3, [r1, #-1]!
        strgtb  r3, [r0, #-1]!
        subs    r2, r2, r12
        blt     Lmemcpy_bl4             /* less than 4 bytes to go */
        ands    r12, r1, #3
        beq     Lmemcpy_bt8             /* we have an aligned source */

        /* erg - unaligned source */
        /* This is where it gets nasty ... */
Lmemcpy_bsrcul:
        bic     r1, r1, #3
        ldr     r3, [r1, #0]
        cmp     r12, #2
        blt     Lmemcpy_bsrcul1
        beq     Lmemcpy_bsrcul2
        cmp     r2, #0x0c
        blt     Lmemcpy_bsrcul3loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5, lr}

Lmemcpy_bsrcul3loop16:
        mov     lr, r3, lsl #8
        ldmdb   r1!, {r3-r5, r12}
        orr     lr, lr, r12, lsr #24
        mov     r12, r12, lsl #8
        orr     r12, r12, r5, lsr #24
        mov     r5, r5, lsl #8
        orr     r5, r5, r4, lsr #24
        mov     r4, r4, lsl #8
        orr     r4, r4, r3, lsr #24
        stmdb   r0!, {r4, r5, r12, lr}
        subs    r2, r2, #0x10
        bge     Lmemcpy_bsrcul3loop16
        ldmia   sp!, {r4, r5, lr}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_bsrcul3l4

Lmemcpy_bsrcul3loop4:
        mov     r12, r3, lsl #8
        ldr     r3, [r1, #-4]!
        orr     r12, r12, r3, lsr #24
        str     r12, [r0, #-4]!
        subs    r2, r2, #4
        bge     Lmemcpy_bsrcul3loop4

Lmemcpy_bsrcul3l4:
        add     r1, r1, #3
        b       Lmemcpy_bl4

Lmemcpy_bsrcul2:
        cmp     r2, #0x0c
        blt     Lmemcpy_bsrcul2loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5, lr}

Lmemcpy_bsrcul2loop16:
        mov     lr, r3, lsl #16
        ldmdb   r1!, {r3-r5, r12}
        orr     lr, lr, r12, lsr #16
        mov     r12, r12, lsl #16
        orr     r12, r12, r5, lsr #16
        mov     r5, r5, lsl #16
        orr     r5, r5, r4, lsr #16
        mov     r4, r4, lsl #16
        orr     r4, r4, r3, lsr #16
        stmdb   r0!, {r4, r5, r12, lr}
        subs    r2, r2, #0x10
        bge     Lmemcpy_bsrcul2loop16
        ldmia   sp!, {r4, r5, lr}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_bsrcul2l4

Lmemcpy_bsrcul2loop4:
        mov     r12, r3, lsl #16
        ldr     r3, [r1, #-4]!
        orr     r12, r12, r3, lsr #16
        str     r12, [r0, #-4]!
        subs    r2, r2, #4
        bge     Lmemcpy_bsrcul2loop4

Lmemcpy_bsrcul2l4:
        add     r1, r1, #2
        b       Lmemcpy_bl4

Lmemcpy_bsrcul1:
        cmp     r2, #0x0c
        blt     Lmemcpy_bsrcul1loop4
        sub     r2, r2, #0x0c
        stmdb   sp!, {r4, r5, lr}

Lmemcpy_bsrcul1loop32:
        mov     lr, r3, lsl #24
        ldmdb   r1!, {r3-r5, r12}
        orr     lr, lr, r12, lsr #8
        mov     r12, r12, lsl #24
        orr     r12, r12, r5, lsr #8
        mov     r5, r5, lsl #24
        orr     r5, r5, r4, lsr #8
        mov     r4, r4, lsl #24
        orr     r4, r4, r3, lsr #8
        stmdb   r0!, {r4, r5, r12, lr}
        subs    r2, r2, #0x10
        bge     Lmemcpy_bsrcul1loop32
        ldmia   sp!, {r4, r5, lr}
        adds    r2, r2, #0x0c
        blt     Lmemcpy_bsrcul1l4

Lmemcpy_bsrcul1loop4:
        mov     r12, r3, lsl #24
        ldr     r3, [r1, #-4]!
        orr     r12, r12, r3, lsr #8
        str     r12, [r0, #-4]!
        subs    r2, r2, #4
        bge     Lmemcpy_bsrcul1loop4

Lmemcpy_bsrcul1l4:
        add     r1, r1, #1
        b       Lmemcpy_bl4
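The first thing `_zhleememcpy` does is compare the two pointers (`cmp r1, r0` / `bcc Lmemcpy_backwards`) so that overlapping moves always copy in the safe direction, which is why the same core serves both the memcpy and memmove stubs. A minimal C sketch of that rule, with a byte loop standing in for the word/LDM paths; `move_bytes` is a hypothetical name, not part of this file:

```c
#include <stddef.h>

/* Sketch of the direction choice in _zhleememcpy: copy forwards when
 * the destination lies below the source and backwards when it lies
 * above, so an overlapping region is never read after being clobbered. */
static void *move_bytes(void *dstv, const void *srcv, size_t len)
{
    unsigned char *dst = dstv;
    const unsigned char *src = srcv;

    if (dst < src) {            /* forward copy, low address first */
        while (len--)
            *dst++ = *src++;
    } else if (dst > src) {     /* backward copy, high address first */
        while (len--)
            dst[len] = src[len];
    }                           /* dst == src: quick abort, as in the asm */
    return dstv;
}
```

The assembly applies the same idea at word granularity: the backward entry point first advances both pointers by `len` and then uses `ldmdb`/`stmdb` (decrement-before) instead of `ldmia`/`stmia`.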
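The unaligned-source paths splice each output word from two aligned input words: `Lmemcpy_fsrcul` masks the source down to a word boundary (`bic r1, r1, #3`), keeps the previously loaded word in `lr`, and combines `lr, lsr #8` with the next word shifted left by 24 (for a source one byte past alignment). A minimal C sketch of that splice, assuming little-endian words as on the ARM target; `copy_srcul1` and its constraints (word-aligned destination, length a multiple of 4) are illustrative, not part of the file:

```c
#include <stddef.h>
#include <stdint.h>

/* Model of Lmemcpy_fsrcul1 (source address % 4 == 1): read only whole
 * aligned words, then build each output word from the top three bytes
 * of the previous word plus the low byte of the next one.
 * Assumes little-endian words, aligned dst, len a multiple of 4. */
static void copy_srcul1(uint32_t *dst, const uint8_t *src, size_t len)
{
    const uint32_t *ws = (const uint32_t *)(src - 1);  /* bic r1, r1, #3 */
    uint32_t carry = *ws++;                            /* ldr lr, [r1], #4 */

    while (len >= 4) {
        uint32_t next = *ws++;                         /* ldr lr, [r1], #4 */
        *dst++ = (carry >> 8) | (next << 24);          /* lsr #8 / orr lsl #24 */
        carry = next;
        len -= 4;
    }
}
```

The 16-byte loop `Lmemcpy_fsrcul1loop16` is this same splice unrolled four times over one `ldmia` of four registers, and the `#16`/`#16` and `#24`/`#8` shift pairs in `fsrcul2`/`fsrcul3` handle source offsets 2 and 3.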
