LLVM 23.0.0git
#include "X86SelectionDAGInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetLowering.h"
Classes

class llvm::X86TargetLowering
class llvm::X86MaskedGatherScatterSDNode
class llvm::X86MaskedGatherSDNode
class llvm::X86MaskedScatterSDNode
Namespaces

namespace llvm
namespace llvm::X86
  Define some predicates that are used for node matching.
Enumerations

enum llvm::X86::RoundingMode {
  rmInvalid = -1, rmToNearest = 0, rmDownward = 1 << 10,
  rmUpward = 2 << 10, rmTowardZero = 3 << 10, rmMask = 3 << 10
}
  The current rounding mode is represented in bits 11:10 of the FPSR.
Functions

bool llvm::X86::isZeroNode(SDValue Elt)
  Returns true if Elt is a constant zero or a floating-point constant +0.0.

bool llvm::X86::isOffsetSuitableForCodeModel(int64_t Offset, CodeModel::Model M, bool hasSymbolicDisplacement)
  Returns true if the given offset fits into the displacement field of the instruction.
bool llvm::X86::isCalleePop(CallingConv::ID CallingConv, bool is64Bit, bool IsVarArg, bool GuaranteeTCO)
  Determines whether the callee is required to pop its own arguments.

bool llvm::X86::isConstantSplat(SDValue Op, APInt &SplatVal, bool AllowPartialUndefs=true)
  If Op is a constant whose elements are all the same constant or undefined, return true and return the constant value in SplatVal.

bool llvm::X86::mayFoldLoad(SDValue Op, const X86Subtarget &Subtarget, bool AssumeSingleUse=false, bool IgnoreAlignment=false)
  Check if Op is a load operation that could be folded into some other x86 instruction as a memory operand.

bool llvm::X86::mayFoldLoadIntoBroadcastFromMem(SDValue Op, MVT EltVT, const X86Subtarget &Subtarget, bool AssumeSingleUse=false)
  Check if Op is a load operation that could be folded into a vector splat instruction as a memory operand.

bool llvm::X86::mayFoldIntoStore(SDValue Op)
  Check if Op is a value that could be used to fold a store into some other x86 instruction as a memory operand.

bool llvm::X86::mayFoldIntoZeroExtend(SDValue Op)
  Check if Op is an operation that could be folded into a zero-extend x86 instruction.

bool llvm::X86::isExtendedSwiftAsyncFrameSupported(const X86Subtarget &Subtarget, const MachineFunction &MF)
  True if the target supports the extended frame for async Swift functions.

int llvm::X86::getRoundingModeX86(unsigned RM)
  Convert an LLVM rounding mode to an x86 rounding mode.

FastISel *llvm::X86::createFastISel(FunctionLoweringInfo &funcInfo, const TargetLibraryInfo *libInfo, const LibcallLoweringInfo *libcallLowering)

void llvm::createUnpackShuffleMask(EVT VT, SmallVectorImpl<int> &Mask, bool Lo, bool Unary)
  Generate unpacklo/unpackhi shuffle mask.
void llvm::createSplat2ShuffleMask(MVT VT, SmallVectorImpl<int> &Mask, bool Lo)
  Similar to unpacklo/unpackhi, but without the 128-bit lane limitation imposed by AVX, and specific to the unary pattern.