
toIR.c

The Valgrind distribution has multiple tools. The most popular is the memory checking tool (called Memcheck).
/*--------------------------------------------------------------------*/
/*---                                                              ---*/
/*--- This file (guest-amd64/toIR.c) is                            ---*/
/*--- Copyright (C) OpenWorks LLP.  All rights reserved.           ---*/
/*---                                                              ---*/
/*--------------------------------------------------------------------*/

/*
   This file is part of LibVEX, a library for dynamic binary
   instrumentation and translation.

   Copyright (C) 2004-2006 OpenWorks LLP.  All rights reserved.

   This library is made available under a dual licensing scheme.

   If you link LibVEX against other code all of which is itself
   licensed under the GNU General Public License, version 2 dated June
   1991 ("GPL v2"), then you may use LibVEX under the terms of the GPL
   v2, as appearing in the file LICENSE.GPL.  If the file LICENSE.GPL
   is missing, you can obtain a copy of the GPL v2 from the Free
   Software Foundation Inc., 51 Franklin St, Fifth Floor, Boston, MA
   02110-1301, USA.

   For any other uses of LibVEX, you must first obtain a commercial
   license from OpenWorks LLP.  Please contact info@open-works.co.uk
   for information about commercial licensing.

   This software is provided by OpenWorks LLP "as is" and any express
   or implied warranties, including, but not limited to, the implied
   warranties of merchantability and fitness for a particular purpose
   are disclaimed.  In no event shall OpenWorks LLP be liable for any
   direct, indirect, incidental, special, exemplary, or consequential
   damages (including, but not limited to, procurement of substitute
   goods or services; loss of use, data, or profits; or business
   interruption) however caused and on any theory of liability,
   whether in contract, strict liability, or tort (including
   negligence or otherwise) arising in any way out of the use of this
   software, even if advised of the possibility of such damage.

   Neither the names of the U.S. Department of Energy nor the
   University of California nor the names of its contributors may be
   used to endorse or promote products derived from this software
   without prior written permission.
*/

/* LIMITATIONS:

   LOCK prefix handling is only safe in the situation where
   Vex-generated code is run single-threadedly.  (This is not the same
   as saying that Valgrind can't safely use Vex to run multithreaded
   programs).  See comment attached to LOCK prefix handling in
   disInstr for details.
*/

/* TODO:

   All Puts to CC_OP/CC_DEP1/CC_DEP2/CC_NDEP should really be checked
   to ensure a 64-bit value is being written.

//..    x87 FP Limitations:
//.. 
//..    * all arithmetic done at 64 bits
//.. 
//..    * no FP exceptions, except for handling stack over/underflow
//.. 
//..    * FP rounding mode observed only for float->int conversions
//..      and int->float conversions which could lose accuracy, and
//..      for float-to-float rounding.  For all other operations,
//..      round-to-nearest is used, regardless.
//.. 
//..    * FP sin/cos/tan/sincos: C2 flag is always cleared.  IOW the
//..      simulation claims the argument is in-range (-2^63 <= arg <= 2^63)
//..      even when it isn't.
//.. 
//..    * some of the FCOM cases could do with testing -- not convinced
//..      that the args are the right way round.
//.. 
//..    * FSAVE does not re-initialise the FPU; it should do
//.. 
//..    * FINIT not only initialises the FPU environment, it also
//..      zeroes all the FP registers.  It should leave the registers
//..      unchanged.
//.. 
//..    RDTSC returns zero, always.
//.. 
//..    SAHF should cause eflags[1] == 1, and in fact it produces 0.  As
//..    per Intel docs this bit has no meaning anyway.  Since PUSHF is the
//..    only way to observe eflags[1], a proper fix would be to make that
//..    bit be set by PUSHF.
//.. 
//..    This module uses global variables and so is not MT-safe (if that
//..    should ever become relevant).
*/

/* Notes re address size overrides (0x67).

   According to the AMD documentation (24594 Rev 3.09, Sept 2003,
   "AMD64 Architecture Programmer's Manual Volume 3: General-Purpose
   and System Instructions"), Section 1.2.3 ("Address-Size Override
   Prefix"):

   0x67 applies to all explicit memory references, causing the top
   32 bits of the effective address to become zero.

   0x67 has no effect on stack references (push/pop); these always
   use a 64-bit address.

   0x67 changes the interpretation of instructions which implicitly
   reference RCX/RSI/RDI, so that in fact ECX/ESI/EDI are used
   instead.  These are:

      cmp{s,sb,sw,sd,sq}
      in{s,sb,sw,sd}
      jcxz, jecxz, jrcxz
      lod{s,sb,sw,sd,sq}
      loop{,e,bz,be,z}
      mov{s,sb,sw,sd,sq}
      out{s,sb,sw,sd}
      rep{,e,ne,nz}
      sca{s,sb,sw,sd,sq}
      sto{s,sb,sw,sd,sq}
      xlat{,b}
*/

/* "Special" instructions.

   This instruction decoder can decode three special instructions
   which mean nothing natively (are no-ops as far as regs/mem are
   concerned) but have meaning for supporting Valgrind.  A special
   instruction is flagged by the 16-byte preamble 48C1C703 48C1C70D
   48C1C73D 48C1C733 (in the standard interpretation, that means: rolq
   $3, %rdi; rolq $13, %rdi; rolq $61, %rdi; rolq $51, %rdi).
   Following that, one of the following 3 are allowed (standard
   interpretation in parentheses):

      4887DB (xchgq %rbx,%rbx)   %RDX = client_request ( %RAX )
      4887C9 (xchgq %rcx,%rcx)   %RAX = guest_NRADDR
      4887D2 (xchgq %rdx,%rdx)   call-noredir *%RAX

   Any other bytes following the 16-byte preamble are illegal and
   constitute a failure in instruction decoding.  This all assumes
   that the preamble will never occur except in specific code
   fragments designed for Valgrind to catch.

   No prefixes may precede a "Special" instruction.
*/

/* Translates AMD64 code to IR. */

#include "libvex_basictypes.h"
#include "libvex_ir.h"
#include "libvex.h"
#include "libvex_guest_amd64.h"

#include "main/vex_util.h"
#include "main/vex_globals.h"
#include "guest-generic/bb_to_IR.h"
#include "guest-generic/g_generic_x87.h"
#include "guest-amd64/gdefs.h"


/*------------------------------------------------------------*/
/*--- Globals                                              ---*/
/*------------------------------------------------------------*/

/* These are set at the start of the translation of an insn, right
   down in disInstr_AMD64, so that we don't have to pass them around
   endlessly.  They are all constant during the translation of any
   given insn. */

/* These are set at the start of the translation of a BB, so
   that we don't have to pass them around endlessly. */

/* We need to know this to do sub-register accesses correctly. */
static Bool host_is_bigendian;

/* Pointer to the guest code area (points to start of BB, not to the
   insn being processed). */
static UChar* guest_code;

/* The guest address corresponding to guest_code[0]. */
static Addr64 guest_RIP_bbstart;

/* The guest address for the instruction currently being
   translated. */
static Addr64 guest_RIP_curr_instr;

/* The IRBB* into which we're generating code. */
static IRBB* irbb;

/* For ensuring that %rip-relative addressing is done right.  A read
   of %rip generates the address of the next instruction.  It may be
   that we don't conveniently know that inside disAMode().  For sanity
   checking, if the next insn %rip is needed, we make a guess at what
   it is, record that guess here, and set the accompanying Bool to
   indicate that -- after this insn's decode is finished -- that guess
   needs to be checked.  */

/* At the start of each insn decode, is set to (0, False).
   After the decode, if _mustcheck is now True, _assumed is
   checked. */

static Addr64 guest_RIP_next_assumed;
static Bool   guest_RIP_next_mustcheck;


/*------------------------------------------------------------*/
/*--- Helpers for constructing IR.                         ---*/
/*------------------------------------------------------------*/

/* Generate a new temporary of the given type. */
static IRTemp newTemp ( IRType ty )
{
   vassert(isPlausibleIRType(ty));
   return newIRTemp( irbb->tyenv, ty );
}

/* Add a statement to the list held by "irbb". */
static void stmt ( IRStmt* st )
{
   addStmtToIRBB( irbb, st );
}

/* Generate a statement "dst := e". */
static void assign ( IRTemp dst, IRExpr* e )
{
   stmt( IRStmt_Tmp(dst, e) );
}

static IRExpr* unop ( IROp op, IRExpr* a )
{
   return IRExpr_Unop(op, a);
}

static IRExpr* binop ( IROp op, IRExpr* a1, IRExpr* a2 )
{
   return IRExpr_Binop(op, a1, a2);
}

static IRExpr* triop ( IROp op, IRExpr* a1, IRExpr* a2, IRExpr* a3 )
{
   return IRExpr_Triop(op, a1, a2, a3);
}

static IRExpr* mkexpr ( IRTemp tmp )
{
   return IRExpr_Tmp(tmp);
}

static IRExpr* mkU8 ( ULong i )
{
   vassert(i < 256);
   return IRExpr_Const(IRConst_U8( (UChar)i ));
}

static IRExpr* mkU16 ( ULong i )
{
   vassert(i < 0x10000ULL);
   return IRExpr_Const(IRConst_U16( (UShort)i ));
}

static IRExpr* mkU32 ( ULong i )
{
   vassert(i < 0x100000000ULL);
   return IRExpr_Const(IRConst_U32( (UInt)i ));
}

static IRExpr* mkU64 ( ULong i )
{
   return IRExpr_Const(IRConst_U64(i));
}

static IRExpr* mkU ( IRType ty, ULong i )
{
   switch (ty) {
      case Ity_I8:  return mkU8(i);
      case Ity_I16: return mkU16(i);
      case Ity_I32: return mkU32(i);
      case Ity_I64: return mkU64(i);
      default: vpanic("mkU(amd64)");
   }
}

static void storeLE ( IRExpr* addr, IRExpr* data )
{
   stmt( IRStmt_Store(Iend_LE,addr,data) );
}

static IRExpr* loadLE ( IRType ty, IRExpr* data )
{
   return IRExpr_Load(Iend_LE,ty,data);
}

static IROp mkSizedOp ( IRType ty, IROp op8 )
{
   vassert(op8 == Iop_Add8 || op8 == Iop_Sub8
           || op8 == Iop_Mul8
           || op8 == Iop_Or8 || op8 == Iop_And8 || op8 == Iop_Xor8
           || op8 == Iop_Shl8 || op8 == Iop_Shr8 || op8 == Iop_Sar8
           || op8 == Iop_CmpEQ8 || op8 == Iop_CmpNE8
           || op8 == Iop_Not8 );
   switch (ty) {
      case Ity_I8:  return 0 +op8;
      case Ity_I16: return 1 +op8;
      case Ity_I32: return 2 +op8;
      case Ity_I64: return 3 +op8;
      default: vpanic("mkSizedOp(amd64)");
   }
}

static IRExpr* doScalarWidening ( Int szSmall, Int szBig, Bool signd, IRExpr* src )
{
   if (szSmall == 1 && szBig == 4) {
      return unop(signd ? Iop_8Sto32 : Iop_8Uto32, src);
   }
   if (szSmall == 1 && szBig == 2) {
      return unop(signd ? Iop_8Sto16 : Iop_8Uto16, src);
   }
   if (szSmall == 2 && szBig == 4) {
      return unop(signd ? Iop_16Sto32 : Iop_16Uto32, src);
   }
   if (szSmall == 1 && szBig == 8 && !signd) {
      return unop(Iop_8Uto64, src);
   }
   if (szSmall == 1 && szBig == 8 && signd) {
      return unop(Iop_8Sto64, src);
   }
   if (szSmall == 2 && szBig == 8 && !signd) {
      return unop(Iop_16Uto64, src);
   }
   if (szSmall == 2 && szBig == 8 && signd) {
      return unop(Iop_16Sto64, src);
   }
   vpanic("doScalarWidening(amd64)");
}


/*------------------------------------------------------------*/
/*--- Debugging output                                     ---*/
/*------------------------------------------------------------*/

/* Bomb out if we can't handle something. */
__attribute__ ((noreturn))
static void unimplemented ( HChar* str )
{
   vex_printf("amd64toIR: unimplemented feature\n");
   vpanic(str);
}

#define DIP(format, args...)           \
   if (vex_traceflags & VEX_TRACE_FE)  \
      vex_printf(format, ## args)

#define DIS(buf, format, args...)      \
   if (vex_traceflags & VEX_TRACE_FE)  \
      vex_sprintf(buf, format, ## args)


/*------------------------------------------------------------*/
/*--- Offsets of various parts of the amd64 guest state.   ---*/
/*------------------------------------------------------------*/

#define OFFB_RAX       offsetof(VexGuestAMD64State,guest_RAX)
#define OFFB_RBX       offsetof(VexGuestAMD64State,guest_RBX)
#define OFFB_RCX       offsetof(VexGuestAMD64State,guest_RCX)
#define OFFB_RDX       offsetof(VexGuestAMD64State,guest_RDX)
#define OFFB_RSP       offsetof(VexGuestAMD64State,guest_RSP)
#define OFFB_RBP       offsetof(VexGuestAMD64State,guest_RBP)
#define OFFB_RSI       offsetof(VexGuestAMD64State,guest_RSI)
#define OFFB_RDI       offsetof(VexGuestAMD64State,guest_RDI)
#define OFFB_R8        offsetof(VexGuestAMD64State,guest_R8)
#define OFFB_R9        offsetof(VexGuestAMD64State,guest_R9)
#define OFFB_R10       offsetof(VexGuestAMD64State,guest_R10)
#define OFFB_R11       offsetof(VexGuestAMD64State,guest_R11)
#define OFFB_R12       offsetof(VexGuestAMD64State,guest_R12)
#define OFFB_R13       offsetof(VexGuestAMD64State,guest_R13)
#define OFFB_R14       offsetof(VexGuestAMD64State,guest_R14)
#define OFFB_R15       offsetof(VexGuestAMD64State,guest_R15)
#define OFFB_RIP       offsetof(VexGuestAMD64State,guest_RIP)
#define OFFB_FS_ZERO   offsetof(VexGuestAMD64State,guest_FS_ZERO)
#define OFFB_CC_OP     offsetof(VexGuestAMD64State,guest_CC_OP)
#define OFFB_CC_DEP1   offsetof(VexGuestAMD64State,guest_CC_DEP1)
#define OFFB_CC_DEP2   offsetof(VexGuestAMD64State,guest_CC_DEP2)
#define OFFB_CC_NDEP   offsetof(VexGuestAMD64State,guest_CC_NDEP)
#define OFFB_FPREGS    offsetof(VexGuestAMD64State,guest_FPREG[0])
#define OFFB_FPTAGS    offsetof(VexGuestAMD64State,guest_FPTAG[0])
#define OFFB_DFLAG     offsetof(VexGuestAMD64State,guest_DFLAG)
#define OFFB_IDFLAG    offsetof(VexGuestAMD64State,guest_IDFLAG)
#define OFFB_FTOP      offsetof(VexGuestAMD64State,guest_FTOP)
#define OFFB_FC3210    offsetof(VexGuestAMD64State,guest_FC3210)
#define OFFB_FPROUND   offsetof(VexGuestAMD64State,guest_FPROUND)
//.. 
//.. #define OFFB_CS        offsetof(VexGuestX86State,guest_CS)
//.. #define OFFB_DS        offsetof(VexGuestX86State,guest_DS)
//.. #define OFFB_ES        offsetof(VexGuestX86State,guest_ES)
//.. #define OFFB_FS        offsetof(VexGuestX86State,guest_FS)
//.. #define OFFB_GS        offsetof(VexGuestX86State,guest_GS)
//.. #define OFFB_SS        offsetof(VexGuestX86State,guest_SS)
//.. #define OFFB_LDT       offsetof(VexGuestX86State,guest_LDT)
//.. #define OFFB_GDT       offsetof(VexGuestX86State,guest_GDT)

#define OFFB_SSEROUND  offsetof(VexGuestAMD64State,guest_SSEROUND)
#define OFFB_XMM0      offsetof(VexGuestAMD64State,guest_XMM0)
#define OFFB_XMM1      offsetof(VexGuestAMD64State,guest_XMM1)
#define OFFB_XMM2      offsetof(VexGuestAMD64State,guest_XMM2)
#define OFFB_XMM3      offsetof(VexGuestAMD64State,guest_XMM3)
#define OFFB_XMM4      offsetof(VexGuestAMD64State,guest_XMM4)
#define OFFB_XMM5      offsetof(VexGuestAMD64State,guest_XMM5)
#define OFFB_XMM6      offsetof(VexGuestAMD64State,guest_XMM6)
#define OFFB_XMM7      offsetof(VexGuestAMD64State,guest_XMM7)
#define OFFB_XMM8      offsetof(VexGuestAMD64State,guest_XMM8)