Mirror of https://github.com/llvm/llvm-project.git (synced 2025-04-26 06:36:07 +00:00)

Implementation of asm-goto support in LLVM

This patch accompanies the RFC posted here:
http://lists.llvm.org/pipermail/llvm-dev/2018-October/127239.html

This patch adds a new CallBr IR instruction to support asm-goto inline
assembly, as implemented by GCC and used by the Linux kernel. The
instruction is both a call instruction and a terminator instruction
with multiple successors. Only inline assembly usage is supported
today.

This also adds a new INLINEASM_BR opcode to SelectionDAG and MachineIR
to represent an INLINEASM block that is also considered a terminator
instruction.

There will likely be more bug fixes and optimizations to follow this,
but we felt it had reached a point where we would like to switch to an
incremental development model.

Patch by Craig Topper, Alexander Ivchenko, Mikhail Dvoretckii

Differential Revision: https://reviews.llvm.org/D53765

llvm-svn: 353563

This commit is contained in:
parent 0e5dd512aa
commit 784929d045
@@ -6513,6 +6513,7 @@ control flow, not values (the one exception being the
 The terminator instructions are: ':ref:`ret <i_ret>`',
 ':ref:`br <i_br>`', ':ref:`switch <i_switch>`',
 ':ref:`indirectbr <i_indirectbr>`', ':ref:`invoke <i_invoke>`',
+':ref:`callbr <i_callbr>`'
 ':ref:`resume <i_resume>`', ':ref:`catchswitch <i_catchswitch>`',
 ':ref:`catchret <i_catchret>`',
 ':ref:`cleanupret <i_cleanupret>`',
@@ -6837,6 +6838,85 @@ Example:
       %retval = invoke coldcc i32 %Testfnptr(i32 15) to label %Continue
                   unwind label %TestCleanup    ; i32:retval set
 
+.. _i_callbr:
+
+'``callbr``' Instruction
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+
+::
+
+      <result> = callbr [cconv] [ret attrs] [addrspace(<num>)] [<ty>|<fnty> <fnptrval>(<function args>) [fn attrs]
+                    [operand bundles] to label <normal label> or jump [other labels]
+
+Overview:
+"""""""""
+
+The '``callbr``' instruction causes control to transfer to a specified
+function, with the possibility of control flow transfer to either the
+'``normal``' label or one of the '``other``' labels.
+
+This instruction should only be used to implement the "goto" feature of gcc
+style inline assembly. Any other usage is an error in the IR verifier.
+
+Arguments:
+""""""""""
+
+This instruction requires several arguments:
+
+#. The optional "cconv" marker indicates which :ref:`calling
+   convention <callingconv>` the call should use. If none is
+   specified, the call defaults to using C calling conventions.
+#. The optional :ref:`Parameter Attributes <paramattrs>` list for return
+   values. Only '``zeroext``', '``signext``', and '``inreg``' attributes
+   are valid here.
+#. The optional addrspace attribute can be used to indicate the address space
+   of the called function. If it is not specified, the program address space
+   from the :ref:`datalayout string<langref_datalayout>` will be used.
+#. '``ty``': the type of the call instruction itself which is also the
+   type of the return value. Functions that return no value are marked
+   ``void``.
+#. '``fnty``': shall be the signature of the function being called. The
+   argument types must match the types implied by this signature. This
+   type can be omitted if the function is not varargs.
+#. '``fnptrval``': An LLVM value containing a pointer to a function to
+   be called. In most cases, this is a direct function call, but
+   indirect ``callbr``'s are just as possible, calling an arbitrary pointer
+   to function value.
+#. '``function args``': argument list whose types match the function
+   signature argument types and parameter attributes. All arguments must
+   be of :ref:`first class <t_firstclass>` type. If the function signature
+   indicates the function accepts a variable number of arguments, the
+   extra arguments can be specified.
+#. '``normal label``': the label reached when the called function
+   executes a '``ret``' instruction.
+#. '``other labels``': the labels reached when a callee transfers control
+   to a location other than the normal '``normal label``'
+#. The optional :ref:`function attributes <fnattrs>` list.
+#. The optional :ref:`operand bundles <opbundles>` list.
+
+Semantics:
+""""""""""
+
+This instruction is designed to operate as a standard '``call``'
+instruction in most regards. The primary difference is that it
+establishes an association with additional labels to define where control
+flow goes after the call.
+
+The only use of this today is to implement the "goto" feature of gcc inline
+assembly where additional labels can be provided as locations for the inline
+assembly to jump to.
+
+Example:
+""""""""
+
+.. code-block:: llvm
+
+      callbr void asm "", "r,x"(i32 %x, i8 *blockaddress(@foo, %fail))
+                  to label %normal or jump [label %fail]
+
 .. _i_resume:
 
 '``resume``' Instruction
@@ -65,6 +65,7 @@ typedef enum {
   LLVMInvoke         = 5,
   /* removed 6 due to API changes */
   LLVMUnreachable    = 7,
+  LLVMCallBr         = 67,
 
   /* Standard Unary Operators */
   LLVMFNeg           = 66,
@@ -329,12 +329,8 @@ void SparseSolver<LatticeKey, LatticeVal, KeyInfo>::getFeasibleSuccessors(
     return;
   }
 
-  if (TI.isExceptionalTerminator()) {
-    Succs.assign(Succs.size(), true);
-    return;
-  }
-
-  if (isa<IndirectBrInst>(TI)) {
+  if (TI.isExceptionalTerminator() ||
+      TI.isIndirectTerminator()) {
     Succs.assign(Succs.size(), true);
     return;
   }
@@ -535,6 +535,8 @@ enum FunctionCodes {
   // 54 is unused.
   FUNC_CODE_OPERAND_BUNDLE = 55, // OPERAND_BUNDLE: [tag#, value...]
   FUNC_CODE_INST_UNOP = 56,      // UNOP: [opcode, ty, opval]
+  FUNC_CODE_INST_CALLBR = 57,    // CALLBR: [attr, cc, norm, transfs,
+                                 //          fnty, fnid, args...]
 };
 
 enum UseListCodes {
@@ -252,6 +252,8 @@ private:
 
   bool translateInvoke(const User &U, MachineIRBuilder &MIRBuilder);
 
+  bool translateCallBr(const User &U, MachineIRBuilder &MIRBuilder);
+
   bool translateLandingPad(const User &U, MachineIRBuilder &MIRBuilder);
 
   /// Translate one of LLVM's cast instructions into MachineInstrs, with the
@@ -667,6 +667,9 @@ namespace ISD {
     /// SDOperands.
     INLINEASM,
 
+    /// INLINEASM_BR - Terminator version of inline asm. Used by asm-goto.
+    INLINEASM_BR,
+
     /// EH_LABEL - Represents a label in mid basic block used to track
     /// locations needed for debug and exception handling tables. These nodes
     /// take a chain as input and return a chain.
@@ -1011,7 +1011,10 @@ public:
   }
   bool isKill() const { return getOpcode() == TargetOpcode::KILL; }
   bool isImplicitDef() const { return getOpcode()==TargetOpcode::IMPLICIT_DEF; }
-  bool isInlineAsm() const { return getOpcode() == TargetOpcode::INLINEASM; }
+  bool isInlineAsm() const {
+    return getOpcode() == TargetOpcode::INLINEASM ||
+           getOpcode() == TargetOpcode::INLINEASM_BR;
+  }
 
   bool isMSInlineAsm() const {
     return isInlineAsm() && getInlineAsmDialect() == InlineAsm::AD_Intel;
@@ -302,7 +302,7 @@ public:
 private:
 
   // Calls to these functions are generated by tblgen.
-  void Select_INLINEASM(SDNode *N);
+  void Select_INLINEASM(SDNode *N, bool Branch);
   void Select_READ_REGISTER(SDNode *Op);
   void Select_WRITE_REGISTER(SDNode *Op);
   void Select_UNDEF(SDNode *N);
@@ -7,8 +7,8 @@
 //===----------------------------------------------------------------------===//
 //
 // This file defines the CallSite class, which is a handy wrapper for code that
-// wants to treat Call and Invoke instructions in a generic way. When in non-
-// mutation context (e.g. an analysis) ImmutableCallSite should be used.
+// wants to treat Call, Invoke and CallBr instructions in a generic way. When
+// in non-mutation context (e.g. an analysis) ImmutableCallSite should be used.
 // Finally, when some degree of customization is necessary between these two
 // extremes, CallSiteBase<> can be supplied with fine-tuned parameters.
 //
@@ -17,7 +17,7 @@
 // They are efficiently copyable, assignable and constructable, with cost
 // equivalent to copying a pointer (notice that they have only a single data
 // member). The internal representation carries a flag which indicates which of
-// the two variants is enclosed. This allows for cheaper checks when various
+// the three variants is enclosed. This allows for cheaper checks when various
 // accessors of CallSite are employed.
 //
 //===----------------------------------------------------------------------===//
@@ -48,45 +48,50 @@ namespace Intrinsic {
 enum ID : unsigned;
 }
 
-template <typename FunTy = const Function,
-          typename BBTy = const BasicBlock,
-          typename ValTy = const Value,
-          typename UserTy = const User,
-          typename UseTy = const Use,
-          typename InstrTy = const Instruction,
+template <typename FunTy = const Function, typename BBTy = const BasicBlock,
+          typename ValTy = const Value, typename UserTy = const User,
+          typename UseTy = const Use, typename InstrTy = const Instruction,
           typename CallTy = const CallInst,
           typename InvokeTy = const InvokeInst,
+          typename CallBrTy = const CallBrInst,
           typename IterTy = User::const_op_iterator>
 class CallSiteBase {
 protected:
-  PointerIntPair<InstrTy*, 1, bool> I;
+  PointerIntPair<InstrTy *, 2, int> I;
 
   CallSiteBase() = default;
-  CallSiteBase(CallTy *CI) : I(CI, true) { assert(CI); }
-  CallSiteBase(InvokeTy *II) : I(II, false) { assert(II); }
+  CallSiteBase(CallTy *CI) : I(CI, 1) { assert(CI); }
+  CallSiteBase(InvokeTy *II) : I(II, 0) { assert(II); }
+  CallSiteBase(CallBrTy *CBI) : I(CBI, 2) { assert(CBI); }
   explicit CallSiteBase(ValTy *II) { *this = get(II); }
 
 private:
   /// This static method is like a constructor. It will create an appropriate
-  /// call site for a Call or Invoke instruction, but it can also create a null
-  /// initialized CallSiteBase object for something which is NOT a call site.
+  /// call site for a Call, Invoke or CallBr instruction, but it can also create
+  /// a null initialized CallSiteBase object for something which is NOT a call
+  /// site.
   static CallSiteBase get(ValTy *V) {
     if (InstrTy *II = dyn_cast<InstrTy>(V)) {
       if (II->getOpcode() == Instruction::Call)
         return CallSiteBase(static_cast<CallTy*>(II));
-      else if (II->getOpcode() == Instruction::Invoke)
+      if (II->getOpcode() == Instruction::Invoke)
         return CallSiteBase(static_cast<InvokeTy*>(II));
+      if (II->getOpcode() == Instruction::CallBr)
+        return CallSiteBase(static_cast<CallBrTy *>(II));
     }
     return CallSiteBase();
   }
 
 public:
-  /// Return true if a CallInst is enclosed. Note that !isCall() does not mean
-  /// an InvokeInst is enclosed. It may also signify a NULL instruction pointer.
-  bool isCall() const { return I.getInt(); }
+  /// Return true if a CallInst is enclosed.
+  bool isCall() const { return I.getInt() == 1; }
 
-  /// Return true if a InvokeInst is enclosed.
-  bool isInvoke() const { return getInstruction() && !I.getInt(); }
+  /// Return true if a InvokeInst is enclosed. !I.getInt() may also signify a
+  /// NULL instruction pointer, so check that.
+  bool isInvoke() const { return getInstruction() && I.getInt() == 0; }
+
+  /// Return true if a CallBrInst is enclosed.
+  bool isCallBr() const { return I.getInt() == 2; }
 
   InstrTy *getInstruction() const { return I.getPointer(); }
   InstrTy *operator->() const { return I.getPointer(); }
@@ -97,7 +102,7 @@ public:
 
   /// Return the pointer to function that is being called.
   ValTy *getCalledValue() const {
-    assert(getInstruction() && "Not a call or invoke instruction!");
+    assert(getInstruction() && "Not a call, invoke or callbr instruction!");
     return *getCallee();
   }
 
@@ -114,17 +119,16 @@ public:
       return false;
     if (isa<FunTy>(V) || isa<Constant>(V))
       return false;
-    if (const CallInst *CI = dyn_cast<CallInst>(getInstruction())) {
-      if (CI->isInlineAsm())
+    if (const CallBase *CB = dyn_cast<CallBase>(getInstruction()))
+      if (CB->isInlineAsm())
         return false;
-    }
     return true;
   }
 
   /// Set the callee to the specified value. Unlike the function of the same
   /// name on CallBase, does not modify the type!
   void setCalledFunction(Value *V) {
-    assert(getInstruction() && "Not a call or invoke instruction!");
+    assert(getInstruction() && "Not a call, callbr, or invoke instruction!");
     assert(cast<PointerType>(V->getType())->getElementType() ==
                cast<CallBase>(getInstruction())->getFunctionType() &&
            "New callee type does not match FunctionType on call");
@@ -192,7 +196,7 @@ public:
   }
 
   void setArgument(unsigned ArgNo, Value* newVal) {
-    assert(getInstruction() && "Not a call or invoke instruction!");
+    assert(getInstruction() && "Not a call, invoke or callbr instruction!");
     assert(arg_begin() + ArgNo < arg_end() && "Argument # out of range!");
     getInstruction()->setOperand(ArgNo, newVal);
   }
@@ -206,7 +210,7 @@ public:
   /// Given a use for an argument, get the argument number that corresponds to
   /// it.
   unsigned getArgumentNo(const Use *U) const {
-    assert(getInstruction() && "Not a call or invoke instruction!");
+    assert(getInstruction() && "Not a call, invoke or callbr instruction!");
     assert(isArgOperand(U) && "Argument # out of range!");
     return U - arg_begin();
   }
@@ -230,7 +234,7 @@ public:
   /// Given a use for a data operand, get the data operand number that
   /// corresponds to it.
   unsigned getDataOperandNo(const Use *U) const {
-    assert(getInstruction() && "Not a call or invoke instruction!");
+    assert(getInstruction() && "Not a call, invoke or callbr instruction!");
     assert(isDataOperand(U) && "Data operand # out of range!");
     return U - data_operands_begin();
   }
@@ -240,10 +244,11 @@ public:
   using data_operand_iterator = IterTy;
 
   /// data_operands_begin/data_operands_end - Return iterators iterating over
-  /// the call / invoke argument list and bundle operands. For invokes, this is
-  /// the set of instruction operands except the invoke target and the two
-  /// successor blocks; and for calls this is the set of instruction operands
-  /// except the call target.
+  /// the call / invoke / callbr argument list and bundle operands. For invokes,
+  /// this is the set of instruction operands except the invoke target and the
+  /// two successor blocks; for calls this is the set of instruction operands
+  /// except the call target; for callbrs the number of labels to skip must be
+  /// determined first.
 
   IterTy data_operands_begin() const {
     assert(getInstruction() && "Not a call or invoke instruction!");
@@ -280,17 +285,19 @@ public:
     return isCall() && cast<CallInst>(getInstruction())->isTailCall();
   }
 
 #define CALLSITE_DELEGATE_GETTER(METHOD) \
   InstrTy *II = getInstruction();        \
-  return isCall()                        \
-           ? cast<CallInst>(II)->METHOD  \
-           : cast<InvokeInst>(II)->METHOD
+  return isCall() ? cast<CallInst>(II)->METHOD           \
+                  : isCallBr() ? cast<CallBrInst>(II)->METHOD \
+                               : cast<InvokeInst>(II)->METHOD
 
 #define CALLSITE_DELEGATE_SETTER(METHOD) \
   InstrTy *II = getInstruction();        \
   if (isCall())                          \
     cast<CallInst>(II)->METHOD;          \
+  else if (isCallBr())                   \
+    cast<CallBrInst>(II)->METHOD;        \
   else                                   \
     cast<InvokeInst>(II)->METHOD
 
   unsigned getNumArgOperands() const {
@@ -306,9 +313,7 @@ public:
   }
 
   bool isInlineAsm() const {
-    if (isCall())
-      return cast<CallInst>(getInstruction())->isInlineAsm();
-    return false;
+    return cast<CallBase>(getInstruction())->isInlineAsm();
   }
 
   /// Get the calling convention of the call.
@@ -392,10 +397,10 @@ public:
   /// Return true if the data operand at index \p i directly or indirectly has
   /// the attribute \p A.
   ///
-  /// Normal call or invoke arguments have per operand attributes, as specified
-  /// in the attribute set attached to this instruction, while operand bundle
-  /// operands may have some attributes implied by the type of its containing
-  /// operand bundle.
+  /// Normal call, invoke or callbr arguments have per operand attributes, as
+  /// specified in the attribute set attached to this instruction, while operand
+  /// bundle operands may have some attributes implied by the type of its
+  /// containing operand bundle.
   bool dataOperandHasImpliedAttr(unsigned i, Attribute::AttrKind Kind) const {
     CALLSITE_DELEGATE_GETTER(dataOperandHasImpliedAttr(i, Kind));
   }
@@ -661,12 +666,13 @@ private:
 
 class CallSite : public CallSiteBase<Function, BasicBlock, Value, User, Use,
                                      Instruction, CallInst, InvokeInst,
-                                     User::op_iterator> {
+                                     CallBrInst, User::op_iterator> {
 public:
   CallSite() = default;
   CallSite(CallSiteBase B) : CallSiteBase(B) {}
   CallSite(CallInst *CI) : CallSiteBase(CI) {}
   CallSite(InvokeInst *II) : CallSiteBase(II) {}
+  CallSite(CallBrInst *CBI) : CallSiteBase(CBI) {}
   explicit CallSite(Instruction *II) : CallSiteBase(II) {}
   explicit CallSite(Value *V) : CallSiteBase(V) {}
 
@@ -888,6 +894,7 @@ public:
   ImmutableCallSite() = default;
   ImmutableCallSite(const CallInst *CI) : CallSiteBase(CI) {}
   ImmutableCallSite(const InvokeInst *II) : CallSiteBase(II) {}
+  ImmutableCallSite(const CallBrInst *CBI) : CallSiteBase(CBI) {}
   explicit ImmutableCallSite(const Instruction *II) : CallSiteBase(II) {}
   explicit ImmutableCallSite(const Value *V) : CallSiteBase(V) {}
   ImmutableCallSite(CallSite CS) : CallSiteBase(CS.getInstruction()) {}
@@ -943,6 +943,42 @@ public:
                                      Callee, NormalDest, UnwindDest, Args, Name);
   }
 
+  /// \brief Create a callbr instruction.
+  CallBrInst *CreateCallBr(FunctionType *Ty, Value *Callee,
+                           BasicBlock *DefaultDest,
+                           ArrayRef<BasicBlock *> IndirectDests,
+                           ArrayRef<Value *> Args = None,
+                           const Twine &Name = "") {
+    return Insert(CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests,
+                                     Args), Name);
+  }
+  CallBrInst *CreateCallBr(FunctionType *Ty, Value *Callee,
+                           BasicBlock *DefaultDest,
+                           ArrayRef<BasicBlock *> IndirectDests,
+                           ArrayRef<Value *> Args,
+                           ArrayRef<OperandBundleDef> OpBundles,
+                           const Twine &Name = "") {
+    return Insert(
+        CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests, Args,
+                           OpBundles), Name);
+  }
+
+  CallBrInst *CreateCallBr(FunctionCallee Callee, BasicBlock *DefaultDest,
+                           ArrayRef<BasicBlock *> IndirectDests,
+                           ArrayRef<Value *> Args = None,
+                           const Twine &Name = "") {
+    return CreateCallBr(Callee.getFunctionType(), Callee.getCallee(),
+                        DefaultDest, IndirectDests, Args, Name);
+  }
+  CallBrInst *CreateCallBr(FunctionCallee Callee, BasicBlock *DefaultDest,
+                           ArrayRef<BasicBlock *> IndirectDests,
+                           ArrayRef<Value *> Args,
+                           ArrayRef<OperandBundleDef> OpBundles,
+                           const Twine &Name = "") {
+    return CreateCallBr(Callee.getFunctionType(), Callee.getCallee(),
+                        DefaultDest, IndirectDests, Args, Name);
+  }
+
   ResumeInst *CreateResume(Value *Exn) {
     return Insert(ResumeInst::Create(Exn));
   }
@@ -217,14 +217,17 @@ public:
   RetTy visitVACopyInst(VACopyInst &I)       { DELEGATE(IntrinsicInst); }
   RetTy visitIntrinsicInst(IntrinsicInst &I) { DELEGATE(CallInst); }
 
-  // Call and Invoke are slightly different as they delegate first through
-  // a generic CallSite visitor.
+  // Call, Invoke and CallBr are slightly different as they delegate first
+  // through a generic CallSite visitor.
   RetTy visitCallInst(CallInst &I) {
     return static_cast<SubClass*>(this)->visitCallSite(&I);
   }
   RetTy visitInvokeInst(InvokeInst &I) {
     return static_cast<SubClass*>(this)->visitCallSite(&I);
   }
+  RetTy visitCallBrInst(CallBrInst &I) {
+    return static_cast<SubClass *>(this)->visitCallSite(&I);
+  }
 
   // While terminators don't have a distinct type modeling them, we support
   // intercepting them with dedicated a visitor callback.
@@ -270,14 +273,14 @@ public:
   // The next level delegation for `CallBase` is slightly more complex in order
   // to support visiting cases where the call is also a terminator.
   RetTy visitCallBase(CallBase &I) {
-    if (isa<InvokeInst>(I))
+    if (isa<InvokeInst>(I) || isa<CallBrInst>(I))
       return static_cast<SubClass *>(this)->visitTerminator(I);
 
     DELEGATE(Instruction);
   }
 
-  // Provide a legacy visitor for a 'callsite' that visits both calls and
-  // invokes.
+  // Provide a legacy visitor for a 'callsite' that visits calls, invokes,
+  // and callbrs.
   //
   // Prefer overriding the type system based `CallBase` instead.
   RetTy visitCallSite(CallSite CS) {
@@ -1033,16 +1033,23 @@ protected:
       return 0;
     case Instruction::Invoke:
       return 2;
+    case Instruction::CallBr:
+      return getNumSubclassExtraOperandsDynamic();
     }
     llvm_unreachable("Invalid opcode!");
   }
 
+  /// Get the number of extra operands for instructions that don't have a fixed
+  /// number of extra operands.
+  unsigned getNumSubclassExtraOperandsDynamic() const;
+
 public:
   using Instruction::getContext;
 
   static bool classof(const Instruction *I) {
     return I->getOpcode() == Instruction::Call ||
-           I->getOpcode() == Instruction::Invoke;
+           I->getOpcode() == Instruction::Invoke ||
+           I->getOpcode() == Instruction::CallBr;
   }
   static bool classof(const Value *V) {
     return isa<Instruction>(V) && classof(cast<Instruction>(V));
   }
@@ -134,89 +134,90 @@ HANDLE_TERM_INST ( 7, Unreachable , UnreachableInst)
 HANDLE_TERM_INST ( 8, CleanupRet , CleanupReturnInst)
 HANDLE_TERM_INST ( 9, CatchRet , CatchReturnInst)
 HANDLE_TERM_INST (10, CatchSwitch , CatchSwitchInst)
-LAST_TERM_INST (10)
+HANDLE_TERM_INST (11, CallBr , CallBrInst) // A call-site terminator
+LAST_TERM_INST (11)

 // Standard unary operators...
-FIRST_UNARY_INST(11)
-HANDLE_UNARY_INST(11, FNeg , UnaryOperator)
-LAST_UNARY_INST(11)
+FIRST_UNARY_INST(12)
+HANDLE_UNARY_INST(12, FNeg , UnaryOperator)
+LAST_UNARY_INST(12)

 // Standard binary operators...
-FIRST_BINARY_INST(12)
-HANDLE_BINARY_INST(12, Add , BinaryOperator)
-HANDLE_BINARY_INST(13, FAdd , BinaryOperator)
-HANDLE_BINARY_INST(14, Sub , BinaryOperator)
-HANDLE_BINARY_INST(15, FSub , BinaryOperator)
-HANDLE_BINARY_INST(16, Mul , BinaryOperator)
-HANDLE_BINARY_INST(17, FMul , BinaryOperator)
-HANDLE_BINARY_INST(18, UDiv , BinaryOperator)
-HANDLE_BINARY_INST(19, SDiv , BinaryOperator)
-HANDLE_BINARY_INST(20, FDiv , BinaryOperator)
-HANDLE_BINARY_INST(21, URem , BinaryOperator)
-HANDLE_BINARY_INST(22, SRem , BinaryOperator)
-HANDLE_BINARY_INST(23, FRem , BinaryOperator)
+FIRST_BINARY_INST(13)
+HANDLE_BINARY_INST(13, Add , BinaryOperator)
+HANDLE_BINARY_INST(14, FAdd , BinaryOperator)
+HANDLE_BINARY_INST(15, Sub , BinaryOperator)
+HANDLE_BINARY_INST(16, FSub , BinaryOperator)
+HANDLE_BINARY_INST(17, Mul , BinaryOperator)
+HANDLE_BINARY_INST(18, FMul , BinaryOperator)
+HANDLE_BINARY_INST(19, UDiv , BinaryOperator)
+HANDLE_BINARY_INST(20, SDiv , BinaryOperator)
+HANDLE_BINARY_INST(21, FDiv , BinaryOperator)
+HANDLE_BINARY_INST(22, URem , BinaryOperator)
+HANDLE_BINARY_INST(23, SRem , BinaryOperator)
+HANDLE_BINARY_INST(24, FRem , BinaryOperator)

 // Logical operators (integer operands)
-HANDLE_BINARY_INST(24, Shl , BinaryOperator) // Shift left (logical)
-HANDLE_BINARY_INST(25, LShr , BinaryOperator) // Shift right (logical)
-HANDLE_BINARY_INST(26, AShr , BinaryOperator) // Shift right (arithmetic)
-HANDLE_BINARY_INST(27, And , BinaryOperator)
-HANDLE_BINARY_INST(28, Or , BinaryOperator)
-HANDLE_BINARY_INST(29, Xor , BinaryOperator)
-LAST_BINARY_INST(29)
+HANDLE_BINARY_INST(25, Shl , BinaryOperator) // Shift left (logical)
+HANDLE_BINARY_INST(26, LShr , BinaryOperator) // Shift right (logical)
+HANDLE_BINARY_INST(27, AShr , BinaryOperator) // Shift right (arithmetic)
+HANDLE_BINARY_INST(28, And , BinaryOperator)
+HANDLE_BINARY_INST(29, Or , BinaryOperator)
+HANDLE_BINARY_INST(30, Xor , BinaryOperator)
+LAST_BINARY_INST(30)

 // Memory operators...
-FIRST_MEMORY_INST(30)
-HANDLE_MEMORY_INST(30, Alloca, AllocaInst) // Stack management
-HANDLE_MEMORY_INST(31, Load , LoadInst ) // Memory manipulation instrs
-HANDLE_MEMORY_INST(32, Store , StoreInst )
-HANDLE_MEMORY_INST(33, GetElementPtr, GetElementPtrInst)
-HANDLE_MEMORY_INST(34, Fence , FenceInst )
-HANDLE_MEMORY_INST(35, AtomicCmpXchg , AtomicCmpXchgInst )
-HANDLE_MEMORY_INST(36, AtomicRMW , AtomicRMWInst )
-LAST_MEMORY_INST(36)
+FIRST_MEMORY_INST(31)
+HANDLE_MEMORY_INST(31, Alloca, AllocaInst) // Stack management
+HANDLE_MEMORY_INST(32, Load , LoadInst ) // Memory manipulation instrs
+HANDLE_MEMORY_INST(33, Store , StoreInst )
+HANDLE_MEMORY_INST(34, GetElementPtr, GetElementPtrInst)
+HANDLE_MEMORY_INST(35, Fence , FenceInst )
+HANDLE_MEMORY_INST(36, AtomicCmpXchg , AtomicCmpXchgInst )
+HANDLE_MEMORY_INST(37, AtomicRMW , AtomicRMWInst )
+LAST_MEMORY_INST(37)

 // Cast operators ...
 // NOTE: The order matters here because CastInst::isEliminableCastPair
 // NOTE: (see Instructions.cpp) encodes a table based on this ordering.
-FIRST_CAST_INST(37)
-HANDLE_CAST_INST(37, Trunc , TruncInst ) // Truncate integers
-HANDLE_CAST_INST(38, ZExt , ZExtInst ) // Zero extend integers
-HANDLE_CAST_INST(39, SExt , SExtInst ) // Sign extend integers
-HANDLE_CAST_INST(40, FPToUI , FPToUIInst ) // floating point -> UInt
-HANDLE_CAST_INST(41, FPToSI , FPToSIInst ) // floating point -> SInt
-HANDLE_CAST_INST(42, UIToFP , UIToFPInst ) // UInt -> floating point
-HANDLE_CAST_INST(43, SIToFP , SIToFPInst ) // SInt -> floating point
-HANDLE_CAST_INST(44, FPTrunc , FPTruncInst ) // Truncate floating point
-HANDLE_CAST_INST(45, FPExt , FPExtInst ) // Extend floating point
-HANDLE_CAST_INST(46, PtrToInt, PtrToIntInst) // Pointer -> Integer
-HANDLE_CAST_INST(47, IntToPtr, IntToPtrInst) // Integer -> Pointer
-HANDLE_CAST_INST(48, BitCast , BitCastInst ) // Type cast
-HANDLE_CAST_INST(49, AddrSpaceCast, AddrSpaceCastInst) // addrspace cast
-LAST_CAST_INST(49)
+FIRST_CAST_INST(38)
+HANDLE_CAST_INST(38, Trunc , TruncInst ) // Truncate integers
+HANDLE_CAST_INST(39, ZExt , ZExtInst ) // Zero extend integers
+HANDLE_CAST_INST(40, SExt , SExtInst ) // Sign extend integers
+HANDLE_CAST_INST(41, FPToUI , FPToUIInst ) // floating point -> UInt
+HANDLE_CAST_INST(42, FPToSI , FPToSIInst ) // floating point -> SInt
+HANDLE_CAST_INST(43, UIToFP , UIToFPInst ) // UInt -> floating point
+HANDLE_CAST_INST(44, SIToFP , SIToFPInst ) // SInt -> floating point
+HANDLE_CAST_INST(45, FPTrunc , FPTruncInst ) // Truncate floating point
+HANDLE_CAST_INST(46, FPExt , FPExtInst ) // Extend floating point
+HANDLE_CAST_INST(47, PtrToInt, PtrToIntInst) // Pointer -> Integer
+HANDLE_CAST_INST(48, IntToPtr, IntToPtrInst) // Integer -> Pointer
+HANDLE_CAST_INST(49, BitCast , BitCastInst ) // Type cast
+HANDLE_CAST_INST(50, AddrSpaceCast, AddrSpaceCastInst) // addrspace cast
+LAST_CAST_INST(50)

-FIRST_FUNCLETPAD_INST(50)
-HANDLE_FUNCLETPAD_INST(50, CleanupPad, CleanupPadInst)
-HANDLE_FUNCLETPAD_INST(51, CatchPad , CatchPadInst)
-LAST_FUNCLETPAD_INST(51)
+FIRST_FUNCLETPAD_INST(51)
+HANDLE_FUNCLETPAD_INST(51, CleanupPad, CleanupPadInst)
+HANDLE_FUNCLETPAD_INST(52, CatchPad , CatchPadInst)
+LAST_FUNCLETPAD_INST(52)

 // Other operators...
-FIRST_OTHER_INST(52)
-HANDLE_OTHER_INST(52, ICmp , ICmpInst ) // Integer comparison instruction
-HANDLE_OTHER_INST(53, FCmp , FCmpInst ) // Floating point comparison instr.
-HANDLE_OTHER_INST(54, PHI , PHINode ) // PHI node instruction
-HANDLE_OTHER_INST(55, Call , CallInst ) // Call a function
-HANDLE_OTHER_INST(56, Select , SelectInst ) // select instruction
-HANDLE_USER_INST (57, UserOp1, Instruction) // May be used internally in a pass
-HANDLE_USER_INST (58, UserOp2, Instruction) // Internal to passes only
-HANDLE_OTHER_INST(59, VAArg , VAArgInst ) // vaarg instruction
-HANDLE_OTHER_INST(60, ExtractElement, ExtractElementInst)// extract from vector
-HANDLE_OTHER_INST(61, InsertElement, InsertElementInst) // insert into vector
-HANDLE_OTHER_INST(62, ShuffleVector, ShuffleVectorInst) // shuffle two vectors.
-HANDLE_OTHER_INST(63, ExtractValue, ExtractValueInst)// extract from aggregate
-HANDLE_OTHER_INST(64, InsertValue, InsertValueInst) // insert into aggregate
-HANDLE_OTHER_INST(65, LandingPad, LandingPadInst) // Landing pad instruction.
-LAST_OTHER_INST(65)
+FIRST_OTHER_INST(53)
+HANDLE_OTHER_INST(53, ICmp , ICmpInst ) // Integer comparison instruction
+HANDLE_OTHER_INST(54, FCmp , FCmpInst ) // Floating point comparison instr.
+HANDLE_OTHER_INST(55, PHI , PHINode ) // PHI node instruction
+HANDLE_OTHER_INST(56, Call , CallInst ) // Call a function
+HANDLE_OTHER_INST(57, Select , SelectInst ) // select instruction
+HANDLE_USER_INST (58, UserOp1, Instruction) // May be used internally in a pass
+HANDLE_USER_INST (59, UserOp2, Instruction) // Internal to passes only
+HANDLE_OTHER_INST(60, VAArg , VAArgInst ) // vaarg instruction
+HANDLE_OTHER_INST(61, ExtractElement, ExtractElementInst)// extract from vector
+HANDLE_OTHER_INST(62, InsertElement, InsertElementInst) // insert into vector
+HANDLE_OTHER_INST(63, ShuffleVector, ShuffleVectorInst) // shuffle two vectors.
+HANDLE_OTHER_INST(64, ExtractValue, ExtractValueInst)// extract from aggregate
+HANDLE_OTHER_INST(65, InsertValue, InsertValueInst) // insert into aggregate
+HANDLE_OTHER_INST(66, LandingPad, LandingPadInst) // Landing pad instruction.
+LAST_OTHER_INST(66)

 #undef FIRST_TERM_INST
 #undef HANDLE_TERM_INST
@@ -135,6 +135,9 @@ public:
   bool isExceptionalTerminator() const {
     return isExceptionalTerminator(getOpcode());
   }
+  bool isIndirectTerminator() const {
+    return isIndirectTerminator(getOpcode());
+  }

   static const char* getOpcodeName(unsigned OpCode);
@@ -202,6 +205,17 @@ public:
     }
   }

+  /// Returns true if the OpCode is a terminator with indirect targets.
+  static inline bool isIndirectTerminator(unsigned OpCode) {
+    switch (OpCode) {
+    case Instruction::IndirectBr:
+    case Instruction::CallBr:
+      return true;
+    default:
+      return false;
+    }
+  }
+
   //===--------------------------------------------------------------------===//
   // Metadata manipulation.
   //===--------------------------------------------------------------------===//
@@ -3886,6 +3886,249 @@ InvokeInst::InvokeInst(FunctionType *Ty, Value *Func, BasicBlock *IfNormal,
   init(Ty, Func, IfNormal, IfException, Args, Bundles, NameStr);
 }

+//===----------------------------------------------------------------------===//
+// CallBrInst Class
+//===----------------------------------------------------------------------===//
+
+/// CallBr instruction, tracking function calls that may not return control but
+/// instead transfer it to a third location. The SubclassData field is used to
+/// hold the calling convention of the call.
+///
+class CallBrInst : public CallBase {
+
+  unsigned NumIndirectDests;
+
+  CallBrInst(const CallBrInst &BI);
+
+  /// Construct a CallBrInst given a range of arguments.
+  ///
+  /// Construct a CallBrInst from a range of arguments
+  inline CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
+                    ArrayRef<BasicBlock *> IndirectDests,
+                    ArrayRef<Value *> Args,
+                    ArrayRef<OperandBundleDef> Bundles, int NumOperands,
+                    const Twine &NameStr, Instruction *InsertBefore);
+
+  inline CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
+                    ArrayRef<BasicBlock *> IndirectDests,
+                    ArrayRef<Value *> Args,
+                    ArrayRef<OperandBundleDef> Bundles, int NumOperands,
+                    const Twine &NameStr, BasicBlock *InsertAtEnd);
+
+  void init(FunctionType *FTy, Value *Func, BasicBlock *DefaultDest,
+            ArrayRef<BasicBlock *> IndirectDests, ArrayRef<Value *> Args,
+            ArrayRef<OperandBundleDef> Bundles, const Twine &NameStr);
+
+  /// Compute the number of operands to allocate.
+  static int ComputeNumOperands(int NumArgs, int NumIndirectDests,
+                                int NumBundleInputs = 0) {
+    // We need one operand for the called function, plus our extra operands and
+    // the input operand counts provided.
+    return 2 + NumIndirectDests + NumArgs + NumBundleInputs;
+  }
+
+protected:
+  // Note: Instruction needs to be a friend here to call cloneImpl.
+  friend class Instruction;
+
+  CallBrInst *cloneImpl() const;
+
+public:
+  static CallBrInst *Create(FunctionType *Ty, Value *Func,
+                            BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args, const Twine &NameStr,
+                            Instruction *InsertBefore = nullptr) {
+    int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size());
+    return new (NumOperands)
+        CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, None,
+                   NumOperands, NameStr, InsertBefore);
+  }
+
+  static CallBrInst *Create(FunctionType *Ty, Value *Func,
+                            BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args,
+                            ArrayRef<OperandBundleDef> Bundles = None,
+                            const Twine &NameStr = "",
+                            Instruction *InsertBefore = nullptr) {
+    int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size(),
+                                         CountBundleInputs(Bundles));
+    unsigned DescriptorBytes = Bundles.size() * sizeof(BundleOpInfo);
+
+    return new (NumOperands, DescriptorBytes)
+        CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, Bundles,
+                   NumOperands, NameStr, InsertBefore);
+  }
+
+  static CallBrInst *Create(FunctionType *Ty, Value *Func,
+                            BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args, const Twine &NameStr,
+                            BasicBlock *InsertAtEnd) {
+    int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size());
+    return new (NumOperands)
+        CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, None,
+                   NumOperands, NameStr, InsertAtEnd);
+  }
+
+  static CallBrInst *Create(FunctionType *Ty, Value *Func,
+                            BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args,
+                            ArrayRef<OperandBundleDef> Bundles,
+                            const Twine &NameStr, BasicBlock *InsertAtEnd) {
+    int NumOperands = ComputeNumOperands(Args.size(), IndirectDests.size(),
+                                         CountBundleInputs(Bundles));
+    unsigned DescriptorBytes = Bundles.size() * sizeof(BundleOpInfo);
+
+    return new (NumOperands, DescriptorBytes)
+        CallBrInst(Ty, Func, DefaultDest, IndirectDests, Args, Bundles,
+                   NumOperands, NameStr, InsertAtEnd);
+  }
+
+  static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args, const Twine &NameStr,
+                            Instruction *InsertBefore = nullptr) {
+    return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
+                  IndirectDests, Args, NameStr, InsertBefore);
+  }
+
+  static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args,
+                            ArrayRef<OperandBundleDef> Bundles = None,
+                            const Twine &NameStr = "",
+                            Instruction *InsertBefore = nullptr) {
+    return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
+                  IndirectDests, Args, Bundles, NameStr, InsertBefore);
+  }
+
+  static CallBrInst *Create(FunctionCallee Func, BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args, const Twine &NameStr,
+                            BasicBlock *InsertAtEnd) {
+    return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
+                  IndirectDests, Args, NameStr, InsertAtEnd);
+  }
+
+  static CallBrInst *Create(FunctionCallee Func,
+                            BasicBlock *DefaultDest,
+                            ArrayRef<BasicBlock *> IndirectDests,
+                            ArrayRef<Value *> Args,
+                            ArrayRef<OperandBundleDef> Bundles,
+                            const Twine &NameStr, BasicBlock *InsertAtEnd) {
+    return Create(Func.getFunctionType(), Func.getCallee(), DefaultDest,
+                  IndirectDests, Args, Bundles, NameStr, InsertAtEnd);
+  }
+
+  /// Create a clone of \p CBI with a different set of operand bundles and
+  /// insert it before \p InsertPt.
+  ///
+  /// The returned callbr instruction is identical to \p CBI in every way
+  /// except that the operand bundles for the new instruction are set to the
+  /// operand bundles in \p Bundles.
+  static CallBrInst *Create(CallBrInst *CBI,
+                            ArrayRef<OperandBundleDef> Bundles,
+                            Instruction *InsertPt = nullptr);
+
+  /// Return the number of callbr indirect dest labels.
+  ///
+  unsigned getNumIndirectDests() const { return NumIndirectDests; }
+
+  /// getIndirectDestLabel - Return the i-th indirect dest label.
+  ///
+  Value *getIndirectDestLabel(unsigned i) const {
+    assert(i < getNumIndirectDests() && "Out of bounds!");
+    return getOperand(i + getNumArgOperands() + getNumTotalBundleOperands() +
+                      1);
+  }
+
+  Value *getIndirectDestLabelUse(unsigned i) const {
+    assert(i < getNumIndirectDests() && "Out of bounds!");
+    return getOperandUse(i + getNumArgOperands() + getNumTotalBundleOperands() +
+                         1);
+  }
+
+  // Return the destination basic blocks...
+  BasicBlock *getDefaultDest() const {
+    return cast<BasicBlock>(*(&Op<-1>() - getNumIndirectDests() - 1));
+  }
+  BasicBlock *getIndirectDest(unsigned i) const {
+    return cast<BasicBlock>(*(&Op<-1>() - getNumIndirectDests() + i));
+  }
+  SmallVector<BasicBlock *, 16> getIndirectDests() const {
+    SmallVector<BasicBlock *, 16> IndirectDests;
+    for (unsigned i = 0, e = getNumIndirectDests(); i < e; ++i)
+      IndirectDests.push_back(getIndirectDest(i));
+    return IndirectDests;
+  }
+  void setDefaultDest(BasicBlock *B) {
+    *(&Op<-1>() - getNumIndirectDests() - 1) = reinterpret_cast<Value *>(B);
+  }
+  void setIndirectDest(unsigned i, BasicBlock *B) {
+    *(&Op<-1>() - getNumIndirectDests() + i) = reinterpret_cast<Value *>(B);
+  }
+
+  BasicBlock *getSuccessor(unsigned i) const {
+    assert(i < getNumSuccessors() + 1 &&
+           "Successor # out of range for callbr!");
+    return i == 0 ? getDefaultDest() : getIndirectDest(i - 1);
+  }
+
+  void setSuccessor(unsigned idx, BasicBlock *NewSucc) {
+    assert(idx < getNumIndirectDests() + 1 &&
+           "Successor # out of range for callbr!");
+    *(&Op<-1>() - getNumIndirectDests() - 1 + idx) =
+        reinterpret_cast<Value *>(NewSucc);
+  }
+
+  unsigned getNumSuccessors() const { return getNumIndirectDests() + 1; }
+
+  // Methods for support type inquiry through isa, cast, and dyn_cast:
+  static bool classof(const Instruction *I) {
+    return (I->getOpcode() == Instruction::CallBr);
+  }
+  static bool classof(const Value *V) {
+    return isa<Instruction>(V) && classof(cast<Instruction>(V));
+  }
+
+private:
+
+  // Shadow Instruction::setInstructionSubclassData with a private forwarding
+  // method so that subclasses cannot accidentally use it.
+  void setInstructionSubclassData(unsigned short D) {
+    Instruction::setInstructionSubclassData(D);
+  }
+};
+
+CallBrInst::CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
+                       ArrayRef<BasicBlock *> IndirectDests,
+                       ArrayRef<Value *> Args,
+                       ArrayRef<OperandBundleDef> Bundles, int NumOperands,
+                       const Twine &NameStr, Instruction *InsertBefore)
+    : CallBase(Ty->getReturnType(), Instruction::CallBr,
+               OperandTraits<CallBase>::op_end(this) - NumOperands, NumOperands,
+               InsertBefore) {
+  init(Ty, Func, DefaultDest, IndirectDests, Args, Bundles, NameStr);
+}
+
+CallBrInst::CallBrInst(FunctionType *Ty, Value *Func, BasicBlock *DefaultDest,
+                       ArrayRef<BasicBlock *> IndirectDests,
+                       ArrayRef<Value *> Args,
+                       ArrayRef<OperandBundleDef> Bundles, int NumOperands,
+                       const Twine &NameStr, BasicBlock *InsertAtEnd)
+    : CallBase(
+          cast<FunctionType>(
+              cast<PointerType>(Func->getType())->getElementType())
+              ->getReturnType(),
+          Instruction::CallBr,
+          OperandTraits<CallBase>::op_end(this) - NumOperands, NumOperands,
+          InsertAtEnd) {
+  init(Ty, Func, DefaultDest, IndirectDests, Args, Bundles, NameStr);
+}
+
 //===----------------------------------------------------------------------===//
 // ResumeInst Class
 //===----------------------------------------------------------------------===//
@@ -28,6 +28,7 @@
 ///
 HANDLE_TARGET_OPCODE(PHI)
 HANDLE_TARGET_OPCODE(INLINEASM)
+HANDLE_TARGET_OPCODE(INLINEASM_BR)
 HANDLE_TARGET_OPCODE(CFI_INSTRUCTION)
 HANDLE_TARGET_OPCODE(EH_LABEL)
 HANDLE_TARGET_OPCODE(GC_LABEL)
@@ -933,6 +933,15 @@ def INLINEASM : StandardPseudoInstruction {
   let AsmString = "";
   let hasSideEffects = 0; // Note side effect is encoded in an operand.
 }
+def INLINEASM_BR : StandardPseudoInstruction {
+  let OutOperandList = (outs);
+  let InOperandList = (ins variable_ops);
+  let AsmString = "";
+  let hasSideEffects = 0; // Note side effect is encoded in an operand.
+  let isTerminator = 1;
+  let isBranch = 1;
+  let isIndirectBranch = 1;
+}
 def CFI_INSTRUCTION : StandardPseudoInstruction {
   let OutOperandList = (outs);
   let InOperandList = (ins i32imm:$id);
@@ -3922,6 +3922,7 @@ bool llvm::isSafeToSpeculativelyExecute(const Value *V,
   case Instruction::VAArg:
   case Instruction::Alloca:
   case Instruction::Invoke:
+  case Instruction::CallBr:
   case Instruction::PHI:
   case Instruction::Store:
   case Instruction::Ret:
@@ -858,6 +858,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   INSTKEYWORD(invoke, Invoke);
   INSTKEYWORD(resume, Resume);
   INSTKEYWORD(unreachable, Unreachable);
+  INSTKEYWORD(callbr, CallBr);

   INSTKEYWORD(alloca, Alloca);
   INSTKEYWORD(load, Load);
@@ -163,6 +163,14 @@ bool LLParser::ValidateEndOfModule() {
       AS = AS.addAttributes(Context, AttributeList::FunctionIndex,
                             AttributeSet::get(Context, FnAttrs));
       II->setAttributes(AS);
+    } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(V)) {
+      AttributeList AS = CBI->getAttributes();
+      AttrBuilder FnAttrs(AS.getFnAttributes());
+      AS = AS.removeAttributes(Context, AttributeList::FunctionIndex);
+      FnAttrs.merge(B);
+      AS = AS.addAttributes(Context, AttributeList::FunctionIndex,
+                            AttributeSet::get(Context, FnAttrs));
+      CBI->setAttributes(AS);
     } else if (auto *GV = dyn_cast<GlobalVariable>(V)) {
       AttrBuilder Attrs(GV->getAttributes());
       Attrs.merge(B);
@@ -5566,6 +5574,7 @@ int LLParser::ParseInstruction(Instruction *&Inst, BasicBlock *BB,
   case lltok::kw_catchswitch: return ParseCatchSwitch(Inst, PFS);
   case lltok::kw_catchpad:    return ParseCatchPad(Inst, PFS);
   case lltok::kw_cleanuppad:  return ParseCleanupPad(Inst, PFS);
+  case lltok::kw_callbr:      return ParseCallBr(Inst, PFS);
   // Unary Operators.
   case lltok::kw_fneg: {
     FastMathFlags FMF = EatFastMathFlagsIfPresent();
@ -6184,6 +6193,124 @@ bool LLParser::ParseUnaryOp(Instruction *&Inst, PerFunctionState &PFS,
|
|||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// ParseCallBr
|
||||||
|
/// ::= 'callbr' OptionalCallingConv OptionalAttrs Type Value ParamList
|
||||||
|
/// OptionalAttrs OptionalOperandBundles 'to' TypeAndValue
|
||||||
|
/// '[' LabelList ']'
|
||||||
|
bool LLParser::ParseCallBr(Instruction *&Inst, PerFunctionState &PFS) {
|
||||||
|
LocTy CallLoc = Lex.getLoc();
|
||||||
|
AttrBuilder RetAttrs, FnAttrs;
|
||||||
|
std::vector<unsigned> FwdRefAttrGrps;
|
||||||
|
LocTy NoBuiltinLoc;
|
||||||
|
unsigned CC;
|
||||||
|
Type *RetType = nullptr;
|
||||||
|
LocTy RetTypeLoc;
|
||||||
|
ValID CalleeID;
|
||||||
|
SmallVector<ParamInfo, 16> ArgList;
|
||||||
|
SmallVector<OperandBundleDef, 2> BundleList;
|
||||||
|
|
||||||
|
BasicBlock *DefaultDest;
|
||||||
|
if (ParseOptionalCallingConv(CC) || ParseOptionalReturnAttrs(RetAttrs) ||
|
||||||
|
ParseType(RetType, RetTypeLoc, true /*void allowed*/) ||
|
||||||
|
ParseValID(CalleeID) || ParseParameterList(ArgList, PFS) ||
|
||||||
|
ParseFnAttributeValuePairs(FnAttrs, FwdRefAttrGrps, false,
|
||||||
|
NoBuiltinLoc) ||
|
||||||
|
ParseOptionalOperandBundles(BundleList, PFS) ||
|
||||||
|
ParseToken(lltok::kw_to, "expected 'to' in callbr") ||
|
||||||
|
ParseTypeAndBasicBlock(DefaultDest, PFS) ||
|
||||||
|
ParseToken(lltok::lsquare, "expected '[' in callbr"))
|
||||||
|
return true;
|
||||||
|
|
||||||
|
// Parse the destination list.
|
||||||
|
SmallVector<BasicBlock *, 16> IndirectDests;
|
||||||
|
|
||||||
|
if (Lex.getKind() != lltok::rsquare) {
|
||||||
|
BasicBlock *DestBB;
|
||||||
|
if (ParseTypeAndBasicBlock(DestBB, PFS))
|
||||||
|
return true;
|
||||||
|
IndirectDests.push_back(DestBB);
|
||||||
|
|
||||||
|
while (EatIfPresent(lltok::comma)) {
|
||||||
|
if (ParseTypeAndBasicBlock(DestBB, PFS))
|
||||||
|
return true;
|
||||||
|
IndirectDests.push_back(DestBB);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (ParseToken(lltok::rsquare, "expected ']' at end of block list"))
|
||||||
|
return true;
|
||||||
|
|
||||||
|
// If RetType is a non-function pointer type, then this is the short syntax
|
||||||
|
// for the call, which means that RetType is just the return type. Infer the
|
||||||
|
// rest of the function argument types from the arguments that are present.
|
||||||
|
FunctionType *Ty = dyn_cast<FunctionType>(RetType);
|
||||||
|
if (!Ty) {
|
||||||
|
// Pull out the types of all of the arguments...
|
||||||
|
std::vector<Type *> ParamTypes;
|
||||||
|
for (unsigned i = 0, e = ArgList.size(); i != e; ++i)
|
||||||
|
ParamTypes.push_back(ArgList[i].V->getType());
|
||||||
|
|
||||||
|
if (!FunctionType::isValidReturnType(RetType))
|
||||||
|
return Error(RetTypeLoc, "Invalid result type for LLVM function");
|
||||||
|
|
||||||
|
Ty = FunctionType::get(RetType, ParamTypes, false);
|
||||||
|
}
|
||||||
|
|
||||||
|
CalleeID.FTy = Ty;
|
||||||
|
|
||||||
|
// Look up the callee.
|
||||||
|
Value *Callee;
|
||||||
|
if (ConvertValIDToValue(PointerType::getUnqual(Ty), CalleeID, Callee, &PFS,
|
||||||
|
/*IsCall=*/true))
|
||||||
|
return true;
|
||||||
|
|
||||||
|
if (isa<InlineAsm>(Callee) && !Ty->getReturnType()->isVoidTy())
|
||||||
|
return Error(RetTypeLoc, "asm-goto outputs not supported");
|
||||||
|
|
||||||
|
+  // Set up the Attribute for the function.
+  SmallVector<Value *, 8> Args;
+  SmallVector<AttributeSet, 8> ArgAttrs;
+
+  // Loop through FunctionType's arguments and ensure they are specified
+  // correctly.  Also, gather any parameter attributes.
+  FunctionType::param_iterator I = Ty->param_begin();
+  FunctionType::param_iterator E = Ty->param_end();
+  for (unsigned i = 0, e = ArgList.size(); i != e; ++i) {
+    Type *ExpectedTy = nullptr;
+    if (I != E) {
+      ExpectedTy = *I++;
+    } else if (!Ty->isVarArg()) {
+      return Error(ArgList[i].Loc, "too many arguments specified");
+    }
+
+    if (ExpectedTy && ExpectedTy != ArgList[i].V->getType())
+      return Error(ArgList[i].Loc, "argument is not of expected type '" +
+                                       getTypeString(ExpectedTy) + "'");
+    Args.push_back(ArgList[i].V);
+    ArgAttrs.push_back(ArgList[i].Attrs);
+  }
+
+  if (I != E)
+    return Error(CallLoc, "not enough parameters specified for call");
+
+  if (FnAttrs.hasAlignmentAttr())
+    return Error(CallLoc, "callbr instructions may not have an alignment");
+
+  // Finish off the Attribute and check them
+  AttributeList PAL =
+      AttributeList::get(Context, AttributeSet::get(Context, FnAttrs),
+                         AttributeSet::get(Context, RetAttrs), ArgAttrs);
+
+  CallBrInst *CBI =
+      CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests, Args,
+                         BundleList);
+  CBI->setCallingConv(CC);
+  CBI->setAttributes(PAL);
+  ForwardRefAttrGroups[CBI] = FwdRefAttrGrps;
+  Inst = CBI;
+  return false;
+}
 
 //===----------------------------------------------------------------------===//
 // Binary Operators.
 //===----------------------------------------------------------------------===//
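For orientation, the `ParseCallBr` changes above accept the new textual `callbr` form that this patch also documents in the LangRef. A hedged sketch of what that syntax looks like for an asm-goto-style use (the function name, asm string, and labels are illustrative, not taken from the patch):

```llvm
define i32 @f() {
entry:
  ; The inline asm may fall through to %normal, or jump to the indirect
  ; label %abort, which is passed as a blockaddress argument.
  callbr void asm sideeffect "# jump to ${0:l}", "X"(i8* blockaddress(@f, %abort))
      to label %normal [label %abort]

normal:
  ret i32 0

abort:
  ret i32 1
}
```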
@@ -570,6 +570,7 @@ namespace llvm {
     bool ParseCatchSwitch(Instruction *&Inst, PerFunctionState &PFS);
     bool ParseCatchPad(Instruction *&Inst, PerFunctionState &PFS);
     bool ParseCleanupPad(Instruction *&Inst, PerFunctionState &PFS);
+    bool ParseCallBr(Instruction *&Inst, PerFunctionState &PFS);
 
     bool ParseUnaryOp(Instruction *&Inst, PerFunctionState &PFS, unsigned Opc,
                       unsigned OperandType);
@@ -327,6 +327,7 @@ enum Kind {
   kw_catchret,
   kw_catchpad,
   kw_cleanuppad,
+  kw_callbr,
 
   kw_alloca,
   kw_load,
@@ -4231,6 +4231,74 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     InstructionList.push_back(I);
     break;
   }
+  case bitc::FUNC_CODE_INST_CALLBR: {
+    // CALLBR: [attr, cc, norm, transfs, fty, fnid, args]
+    unsigned OpNum = 0;
+    AttributeList PAL = getAttributes(Record[OpNum++]);
+    unsigned CCInfo = Record[OpNum++];
+
+    BasicBlock *DefaultDest = getBasicBlock(Record[OpNum++]);
+    unsigned NumIndirectDests = Record[OpNum++];
+    SmallVector<BasicBlock *, 16> IndirectDests;
+    for (unsigned i = 0, e = NumIndirectDests; i != e; ++i)
+      IndirectDests.push_back(getBasicBlock(Record[OpNum++]));
+
+    FunctionType *FTy = nullptr;
+    if (CCInfo >> bitc::CALL_EXPLICIT_TYPE & 1 &&
+        !(FTy = dyn_cast<FunctionType>(getTypeByID(Record[OpNum++]))))
+      return error("Explicit call type is not a function type");
+
+    Value *Callee;
+    if (getValueTypePair(Record, OpNum, NextValueNo, Callee))
+      return error("Invalid record");
+
+    PointerType *OpTy = dyn_cast<PointerType>(Callee->getType());
+    if (!OpTy)
+      return error("Callee is not a pointer type");
+    if (!FTy) {
+      FTy = dyn_cast<FunctionType>(OpTy->getElementType());
+      if (!FTy)
+        return error("Callee is not of pointer to function type");
+    } else if (OpTy->getElementType() != FTy)
+      return error("Explicit call type does not match pointee type of "
+                   "callee operand");
+    if (Record.size() < FTy->getNumParams() + OpNum)
+      return error("Insufficient operands to call");
+
+    SmallVector<Value*, 16> Args;
+    // Read the fixed params.
+    for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i, ++OpNum) {
+      if (FTy->getParamType(i)->isLabelTy())
+        Args.push_back(getBasicBlock(Record[OpNum]));
+      else
+        Args.push_back(getValue(Record, OpNum, NextValueNo,
+                                FTy->getParamType(i)));
+      if (!Args.back())
+        return error("Invalid record");
+    }
+
+    // Read type/value pairs for varargs params.
+    if (!FTy->isVarArg()) {
+      if (OpNum != Record.size())
+        return error("Invalid record");
+    } else {
+      while (OpNum != Record.size()) {
+        Value *Op;
+        if (getValueTypePair(Record, OpNum, NextValueNo, Op))
+          return error("Invalid record");
+        Args.push_back(Op);
+      }
+    }
+
+    I = CallBrInst::Create(FTy, Callee, DefaultDest, IndirectDests, Args,
+                           OperandBundles);
+    OperandBundles.clear();
+    InstructionList.push_back(I);
+    cast<CallBrInst>(I)->setCallingConv(
+        static_cast<CallingConv::ID>((0x7ff & CCInfo) >> bitc::CALL_CCONV));
+    cast<CallBrInst>(I)->setAttributes(PAL);
+    break;
+  }
   case bitc::FUNC_CODE_INST_UNREACHABLE: // UNREACHABLE
     I = new UnreachableInst(Context);
     InstructionList.push_back(I);
@@ -2777,6 +2777,41 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     Vals.push_back(VE.getValueID(CatchSwitch.getUnwindDest()));
     break;
   }
+  case Instruction::CallBr: {
+    const CallBrInst *CBI = cast<CallBrInst>(&I);
+    const Value *Callee = CBI->getCalledValue();
+    FunctionType *FTy = CBI->getFunctionType();
+
+    if (CBI->hasOperandBundles())
+      writeOperandBundles(CBI, InstID);
+
+    Code = bitc::FUNC_CODE_INST_CALLBR;
+
+    Vals.push_back(VE.getAttributeListID(CBI->getAttributes()));
+
+    Vals.push_back(CBI->getCallingConv() << bitc::CALL_CCONV |
+                   1 << bitc::CALL_EXPLICIT_TYPE);
+
+    Vals.push_back(VE.getValueID(CBI->getDefaultDest()));
+    Vals.push_back(CBI->getNumIndirectDests());
+    for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i)
+      Vals.push_back(VE.getValueID(CBI->getIndirectDest(i)));
+
+    Vals.push_back(VE.getTypeID(FTy));
+    pushValueAndType(Callee, InstID, Vals);
+
+    // Emit value #'s for the fixed parameters.
+    for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i)
+      pushValue(I.getOperand(i), InstID, Vals); // fixed param.
+
+    // Emit type/value pairs for varargs params.
+    if (FTy->isVarArg()) {
+      for (unsigned i = FTy->getNumParams(), e = CBI->getNumArgOperands();
+           i != e; ++i)
+        pushValueAndType(I.getOperand(i), InstID, Vals); // vararg
+    }
+    break;
+  }
   case Instruction::Unreachable:
     Code = bitc::FUNC_CODE_INST_UNREACHABLE;
     AbbrevToUse = FUNCTION_INST_UNREACHABLE_ABBREV;
@@ -414,10 +414,8 @@ ValueEnumerator::ValueEnumerator(const Module &M,
         EnumerateMetadata(&F, MD->getMetadata());
       }
       EnumerateType(I.getType());
-      if (const CallInst *CI = dyn_cast<CallInst>(&I))
-        EnumerateAttributes(CI->getAttributes());
-      else if (const InvokeInst *II = dyn_cast<InvokeInst>(&I))
-        EnumerateAttributes(II->getAttributes());
+      if (const auto *Call = dyn_cast<CallBase>(&I))
+        EnumerateAttributes(Call->getAttributes());
 
       // Enumerate metadata attached with this instruction.
       MDs.clear();
@@ -1067,6 +1067,7 @@ void AsmPrinter::EmitFunctionBody() {
       OutStreamer->EmitLabel(MI.getOperand(0).getMCSymbol());
       break;
     case TargetOpcode::INLINEASM:
+    case TargetOpcode::INLINEASM_BR:
       EmitInlineAsm(&MI);
       break;
     case TargetOpcode::DBG_VALUE:
@@ -433,9 +433,16 @@ static void EmitGCCInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
       ++OpNo;  // Skip over the ID number.
 
       if (Modifier[0] == 'l') { // Labels are target independent.
-        // FIXME: What if the operand isn't an MBB, report error?
-        const MCSymbol *Sym = MI->getOperand(OpNo).getMBB()->getSymbol();
-        Sym->print(OS, AP->MAI);
+        if (MI->getOperand(OpNo).isBlockAddress()) {
+          const BlockAddress *BA = MI->getOperand(OpNo).getBlockAddress();
+          MCSymbol *Sym = AP->GetBlockAddressSymbol(BA);
+          Sym->print(OS, AP->MAI);
+        } else if (MI->getOperand(OpNo).isMBB()) {
+          const MCSymbol *Sym = MI->getOperand(OpNo).getMBB()->getSymbol();
+          Sym->print(OS, AP->MAI);
+        } else {
+          Error = true;
+        }
       } else {
         if (InlineAsm::isMemKind(OpFlags)) {
           Error = AP->PrintAsmMemoryOperand(MI, OpNo, InlineAsmVariant,
@@ -655,6 +655,16 @@ bool CodeGenPrepare::isMergingEmptyBlockProfitable(BasicBlock *BB,
           BB->getSinglePredecessor()->getSingleSuccessor()))
     return false;
 
+  // Skip merging if the block's successor is also a successor to any callbr
+  // that leads to this block.
+  // FIXME: Is this really needed? Is this a correctness issue?
+  for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
+    if (auto *CBI = dyn_cast<CallBrInst>((*PI)->getTerminator()))
+      for (unsigned i = 0, e = CBI->getNumSuccessors(); i != e; ++i)
+        if (DestBB == CBI->getSuccessor(i))
+          return false;
+  }
+
   // Try to skip merging if the unique predecessor of BB is terminated by a
   // switch or indirect branch instruction, and BB is used as an incoming block
   // of PHIs in DestBB. In such case, merging BB and DestBB would cause ISel to
@@ -1259,6 +1259,12 @@ bool IRTranslator::translateInvoke(const User &U,
   return true;
 }
 
+bool IRTranslator::translateCallBr(const User &U,
+                                   MachineIRBuilder &MIRBuilder) {
+  // FIXME: Implement this.
+  return false;
+}
+
 bool IRTranslator::translateLandingPad(const User &U,
                                        MachineIRBuilder &MIRBuilder) {
   const LandingPadInst &LP = cast<LandingPadInst>(U);
@@ -148,11 +148,9 @@ bool IndirectBrExpandPass::runOnFunction(Function &F) {
     ConstantInt *BBIndexC = ConstantInt::get(ITy, BBIndex);
 
     // Now rewrite the blockaddress to an integer constant based on the index.
-    // FIXME: We could potentially preserve the uses as arguments to inline asm.
-    // This would allow some uses such as diagnostic information in crashes to
-    // have higher quality even when this transform is enabled, but would break
-    // users that round-trip blockaddresses through inline assembly and then
-    // back into an indirectbr.
+    // FIXME: This part doesn't properly recognize other uses of blockaddress
+    // expressions, for instance, where they are used to pass labels to
+    // asm-goto. This part of the pass needs a rework.
     BA->replaceAllUsesWith(ConstantExpr::getIntToPtr(BBIndexC, BA->getType()));
   }
@@ -1048,14 +1048,18 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
     break;
   }
 
-  case ISD::INLINEASM: {
+  case ISD::INLINEASM:
+  case ISD::INLINEASM_BR: {
     unsigned NumOps = Node->getNumOperands();
     if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
       --NumOps;  // Ignore the glue operand.
 
     // Create the inline asm machine instruction.
-    MachineInstrBuilder MIB = BuildMI(*MF, Node->getDebugLoc(),
-                                      TII->get(TargetOpcode::INLINEASM));
+    unsigned TgtOpc = Node->getOpcode() == ISD::INLINEASM_BR
+                          ? TargetOpcode::INLINEASM_BR
+                          : TargetOpcode::INLINEASM;
+    MachineInstrBuilder MIB =
+        BuildMI(*MF, Node->getDebugLoc(), TII->get(TgtOpc));
 
     // Add the asm string as an external symbol operand.
     SDValue AsmStrV = Node->getOperand(InlineAsm::Op_AsmString);
@@ -84,6 +84,7 @@ ResourcePriorityQueue::numberRCValPredInSU(SUnit *SU, unsigned RCId) {
       case ISD::CopyFromReg: NumberDeps++; break;
       case ISD::CopyToReg: break;
      case ISD::INLINEASM: break;
+      case ISD::INLINEASM_BR: break;
       }
       if (!ScegN->isMachineOpcode())
         continue;
@@ -120,6 +121,7 @@ unsigned ResourcePriorityQueue::numberRCValSuccInSU(SUnit *SU,
       case ISD::CopyFromReg: break;
       case ISD::CopyToReg: NumberDeps++; break;
       case ISD::INLINEASM: break;
+      case ISD::INLINEASM_BR: break;
       }
       if (!ScegN->isMachineOpcode())
         continue;
@@ -445,6 +447,7 @@ int ResourcePriorityQueue::SUSchedulingCost(SUnit *SU) {
       break;
 
     case ISD::INLINEASM:
+    case ISD::INLINEASM_BR:
       ResCount += PriorityThree;
       break;
     }
@@ -547,6 +550,7 @@ void ResourcePriorityQueue::initNumRegDefsLeft(SUnit *SU) {
         NodeNumDefs++;
         break;
       case ISD::INLINEASM:
+      case ISD::INLINEASM_BR:
        NodeNumDefs++;
        break;
      }
@@ -479,7 +479,8 @@ bool ScheduleDAGFast::DelayForLiveRegsBottomUp(SUnit *SU,
   }
 
   for (SDNode *Node = SU->getNode(); Node; Node = Node->getGluedNode()) {
-    if (Node->getOpcode() == ISD::INLINEASM) {
+    if (Node->getOpcode() == ISD::INLINEASM ||
+        Node->getOpcode() == ISD::INLINEASM_BR) {
       // Inline asm can clobber physical defs.
       unsigned NumOps = Node->getNumOperands();
       if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
@@ -708,6 +708,7 @@ void ScheduleDAGRRList::EmitNode(SUnit *SU) {
     // removed.
     return;
   case ISD::INLINEASM:
+  case ISD::INLINEASM_BR:
     // For inline asm, clear the pipeline state.
     HazardRec->Reset();
     return;
@@ -1347,7 +1348,8 @@ DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
   }
 
   for (SDNode *Node = SU->getNode(); Node; Node = Node->getGluedNode()) {
-    if (Node->getOpcode() == ISD::INLINEASM) {
+    if (Node->getOpcode() == ISD::INLINEASM ||
+        Node->getOpcode() == ISD::INLINEASM_BR) {
       // Inline asm can clobber physical defs.
       unsigned NumOps = Node->getNumOperands();
       if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
@@ -2548,6 +2548,35 @@ void SelectionDAGBuilder::visitInvoke(const InvokeInst &I) {
   InvokeMBB->normalizeSuccProbs();
 
   // Drop into normal successor.
+  DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(), MVT::Other, getControlRoot(),
+                          DAG.getBasicBlock(Return)));
+}
+
+void SelectionDAGBuilder::visitCallBr(const CallBrInst &I) {
+  MachineBasicBlock *CallBrMBB = FuncInfo.MBB;
+
+  // Deopt bundles are lowered in LowerCallSiteWithDeoptBundle, and we don't
+  // have to do anything here to lower funclet bundles.
+  assert(!I.hasOperandBundlesOtherThan(
+             {LLVMContext::OB_deopt, LLVMContext::OB_funclet}) &&
+         "Cannot lower callbrs with arbitrary operand bundles yet!");
+
+  assert(isa<InlineAsm>(I.getCalledValue()) &&
+         "Only know how to handle inlineasm callbr");
+  visitInlineAsm(&I);
+
+  // Retrieve successors.
+  MachineBasicBlock *Return = FuncInfo.MBBMap[I.getDefaultDest()];
+
+  // Update successor info.
+  addSuccessorWithProb(CallBrMBB, Return);
+  for (unsigned i = 0, e = I.getNumIndirectDests(); i < e; ++i) {
+    MachineBasicBlock *Target = FuncInfo.MBBMap[I.getIndirectDest(i)];
+    addSuccessorWithProb(CallBrMBB, Target);
+  }
+  CallBrMBB->normalizeSuccProbs();
+
+  // Drop into default successor.
   DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(),
                           MVT::Other, getControlRoot(),
                           DAG.getBasicBlock(Return)));
@@ -7584,7 +7613,14 @@ void SelectionDAGBuilder::visitInlineAsm(ImmutableCallSite CS) {
 
       // Process the call argument. BasicBlocks are labels, currently appearing
      // only in asm's.
-      if (const BasicBlock *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
+      const Instruction *I = CS.getInstruction();
+      if (isa<CallBrInst>(I) &&
+          (ArgNo - 1) >= (cast<CallBrInst>(I)->getNumArgOperands() -
+                          cast<CallBrInst>(I)->getNumIndirectDests())) {
+        const auto *BA = cast<BlockAddress>(OpInfo.CallOperandVal);
+        EVT VT = TLI.getValueType(DAG.getDataLayout(), BA->getType(), true);
+        OpInfo.CallOperand = DAG.getTargetBlockAddress(BA, VT);
+      } else if (const auto *BB = dyn_cast<BasicBlock>(OpInfo.CallOperandVal)) {
         OpInfo.CallOperand = DAG.getBasicBlock(FuncInfo.MBBMap[BB]);
       } else {
         OpInfo.CallOperand = getValue(OpInfo.CallOperandVal);
@@ -7883,7 +7919,8 @@ void SelectionDAGBuilder::visitInlineAsm(ImmutableCallSite CS) {
   AsmNodeOperands[InlineAsm::Op_InputChain] = Chain;
   if (Flag.getNode()) AsmNodeOperands.push_back(Flag);
 
-  Chain = DAG.getNode(ISD::INLINEASM, getCurSDLoc(),
+  unsigned ISDOpc = isa<CallBrInst>(CS.getInstruction()) ? ISD::INLINEASM_BR : ISD::INLINEASM;
+  Chain = DAG.getNode(ISDOpc, getCurSDLoc(),
                       DAG.getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
   Flag = Chain.getValue(1);
@@ -46,6 +46,7 @@ class AtomicRMWInst;
 class BasicBlock;
 class BranchInst;
 class CallInst;
+class CallBrInst;
 class CatchPadInst;
 class CatchReturnInst;
 class CatchSwitchInst;
@@ -851,6 +852,7 @@ public:
 private:
   // These all get lowered before this pass.
   void visitInvoke(const InvokeInst &I);
+  void visitCallBr(const CallBrInst &I);
   void visitResume(const ResumeInst &I);
 
   void visitUnary(const User &I, unsigned Opcode);
@@ -172,6 +172,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
   case ISD::UNDEF: return "undef";
   case ISD::MERGE_VALUES: return "merge_values";
   case ISD::INLINEASM: return "inlineasm";
+  case ISD::INLINEASM_BR: return "inlineasm_br";
   case ISD::EH_LABEL: return "eh_label";
   case ISD::HANDLENODE: return "handlenode";
@@ -2441,14 +2441,14 @@ bool SelectionDAGISel::IsLegalToFold(SDValue N, SDNode *U, SDNode *Root,
   return !findNonImmUse(Root, N.getNode(), U, IgnoreChains);
 }
 
-void SelectionDAGISel::Select_INLINEASM(SDNode *N) {
+void SelectionDAGISel::Select_INLINEASM(SDNode *N, bool Branch) {
   SDLoc DL(N);
 
   std::vector<SDValue> Ops(N->op_begin(), N->op_end());
   SelectInlineAsmMemoryOperands(Ops, DL);
 
   const EVT VTs[] = {MVT::Other, MVT::Glue};
-  SDValue New = CurDAG->getNode(ISD::INLINEASM, DL, VTs, Ops);
+  SDValue New = CurDAG->getNode(Branch ? ISD::INLINEASM_BR : ISD::INLINEASM, DL, VTs, Ops);
   New->setNodeId(-1);
   ReplaceUses(N, New.getNode());
   CurDAG->RemoveDeadNode(N);
@@ -2998,7 +2998,9 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
     CurDAG->RemoveDeadNode(NodeToMatch);
     return;
   case ISD::INLINEASM:
-    Select_INLINEASM(NodeToMatch);
+  case ISD::INLINEASM_BR:
+    Select_INLINEASM(NodeToMatch,
+                     NodeToMatch->getOpcode() == ISD::INLINEASM_BR);
     return;
   case ISD::READ_REGISTER:
     Select_READ_REGISTER(NodeToMatch);
@@ -3289,7 +3289,8 @@ void TargetLowering::LowerAsmOperandForConstraint(SDValue Op,
   switch (ConstraintLetter) {
   default: break;
   case 'X': // Allows any operand; labels (basic block) use this.
-    if (Op.getOpcode() == ISD::BasicBlock) {
+    if (Op.getOpcode() == ISD::BasicBlock ||
+        Op.getOpcode() == ISD::TargetBlockAddress) {
       Ops.push_back(Op);
       return;
     }
@@ -3776,6 +3777,9 @@ void TargetLowering::ComputeConstraintToUse(AsmOperandInfo &OpInfo,
       return;
     }
 
+    if (Op.getNode() && Op.getOpcode() == ISD::TargetBlockAddress)
+      return;
+
     // Otherwise, try to resolve it to something we know about by looking at
     // the actual operand type.
     if (const char *Repl = LowerXConstraint(OpInfo.ConstraintVT)) {
@@ -1455,6 +1455,7 @@ int TargetLoweringBase::InstructionOpcodeToISD(unsigned Opcode) const {
   case Switch: return 0;
   case IndirectBr: return 0;
   case Invoke: return 0;
+  case CallBr: return 0;
   case Resume: return 0;
   case Unreachable: return 0;
   case CleanupRet: return 0;
@@ -3836,6 +3836,51 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     writeOperand(II->getNormalDest(), true);
     Out << " unwind ";
     writeOperand(II->getUnwindDest(), true);
+  } else if (const CallBrInst *CBI = dyn_cast<CallBrInst>(&I)) {
+    Operand = CBI->getCalledValue();
+    FunctionType *FTy = CBI->getFunctionType();
+    Type *RetTy = FTy->getReturnType();
+    const AttributeList &PAL = CBI->getAttributes();
+
+    // Print the calling convention being used.
+    if (CBI->getCallingConv() != CallingConv::C) {
+      Out << " ";
+      PrintCallingConv(CBI->getCallingConv(), Out);
+    }
+
+    if (PAL.hasAttributes(AttributeList::ReturnIndex))
+      Out << ' ' << PAL.getAsString(AttributeList::ReturnIndex);
+
+    // If possible, print out the short form of the callbr instruction. We can
+    // only do this if the first argument is a pointer to a nonvararg function,
+    // and if the return type is not a pointer to a function.
+    //
+    Out << ' ';
+    TypePrinter.print(FTy->isVarArg() ? FTy : RetTy, Out);
+    Out << ' ';
+    writeOperand(Operand, false);
+    Out << '(';
+    for (unsigned op = 0, Eop = CBI->getNumArgOperands(); op < Eop; ++op) {
+      if (op)
+        Out << ", ";
+      writeParamOperand(CBI->getArgOperand(op), PAL.getParamAttributes(op));
+    }
+
+    Out << ')';
+    if (PAL.hasAttributes(AttributeList::FunctionIndex))
+      Out << " #" << Machine.getAttributeGroupSlot(PAL.getFnAttributes());
+
+    writeOperandBundles(CBI);
+
+    Out << "\n          to ";
+    writeOperand(CBI->getDefaultDest(), true);
+    Out << " [";
+    for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i) {
+      if (i != 0)
+        Out << ", ";
+      writeOperand(CBI->getIndirectDest(i), true);
+    }
+    Out << ']';
   } else if (const AllocaInst *AI = dyn_cast<AllocaInst>(&I)) {
     Out << ' ';
     if (AI->isUsedWithInAlloca())
@@ -301,6 +301,7 @@ const char *Instruction::getOpcodeName(unsigned OpCode) {
   case CatchRet: return "catchret";
   case CatchPad: return "catchpad";
   case CatchSwitch: return "catchswitch";
+  case CallBr: return "callbr";
 
   // Standard unary operators...
   case FNeg: return "fneg";
@@ -405,6 +406,10 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
     return CI->getCallingConv() == cast<InvokeInst>(I2)->getCallingConv() &&
            CI->getAttributes() == cast<InvokeInst>(I2)->getAttributes() &&
            CI->hasIdenticalOperandBundleSchema(*cast<InvokeInst>(I2));
+  if (const CallBrInst *CI = dyn_cast<CallBrInst>(I1))
+    return CI->getCallingConv() == cast<CallBrInst>(I2)->getCallingConv() &&
+           CI->getAttributes() == cast<CallBrInst>(I2)->getAttributes() &&
+           CI->hasIdenticalOperandBundleSchema(*cast<CallBrInst>(I2));
   if (const InsertValueInst *IVI = dyn_cast<InsertValueInst>(I1))
     return IVI->getIndices() == cast<InsertValueInst>(I2)->getIndices();
   if (const ExtractValueInst *EVI = dyn_cast<ExtractValueInst>(I1))
@@ -516,6 +521,7 @@ bool Instruction::mayReadFromMemory() const {
     return true;
   case Instruction::Call:
   case Instruction::Invoke:
+  case Instruction::CallBr:
     return !cast<CallBase>(this)->doesNotAccessMemory();
   case Instruction::Store:
     return !cast<StoreInst>(this)->isUnordered();
@@ -535,6 +541,7 @@ bool Instruction::mayWriteToMemory() const {
     return true;
   case Instruction::Call:
   case Instruction::Invoke:
+  case Instruction::CallBr:
     return !cast<CallBase>(this)->onlyReadsMemory();
   case Instruction::Load:
     return !cast<LoadInst>(this)->isUnordered();
@@ -772,8 +779,8 @@ void Instruction::updateProfWeight(uint64_t S, uint64_t T) {
 }
 
 void Instruction::setProfWeight(uint64_t W) {
-  assert((isa<CallInst>(this) || isa<InvokeInst>(this)) &&
-         "Can only set weights for call and invoke instrucitons");
+  assert(isa<CallBase>(this) &&
+         "Can only set weights for call like instructions");
   SmallVector<uint32_t, 1> Weights;
   Weights.push_back(W);
   MDBuilder MDB(getContext());
@@ -256,6 +256,11 @@ void LandingPadInst::addClause(Constant *Val) {
 
 Function *CallBase::getCaller() { return getParent()->getParent(); }
 
+unsigned CallBase::getNumSubclassExtraOperandsDynamic() const {
+  assert(getOpcode() == Instruction::CallBr && "Unexpected opcode!");
+  return cast<CallBrInst>(this)->getNumIndirectDests() + 1;
+}
+
 bool CallBase::isIndirectCall() const {
   const Value *V = getCalledValue();
   if (isa<Function>(V) || isa<Constant>(V))
@@ -726,6 +731,76 @@ LandingPadInst *InvokeInst::getLandingPadInst() const {
   return cast<LandingPadInst>(getUnwindDest()->getFirstNonPHI());
 }
 
+//===----------------------------------------------------------------------===//
+//                        CallBrInst Implementation
+//===----------------------------------------------------------------------===//
+
+void CallBrInst::init(FunctionType *FTy, Value *Fn, BasicBlock *Fallthrough,
+                      ArrayRef<BasicBlock *> IndirectDests,
+                      ArrayRef<Value *> Args,
+                      ArrayRef<OperandBundleDef> Bundles,
+                      const Twine &NameStr) {
+  this->FTy = FTy;
+
+  assert((int)getNumOperands() ==
+             ComputeNumOperands(Args.size(), IndirectDests.size(),
+                                CountBundleInputs(Bundles)) &&
+         "NumOperands not set up?");
+  NumIndirectDests = IndirectDests.size();
+  setDefaultDest(Fallthrough);
+  for (unsigned i = 0; i != NumIndirectDests; ++i)
+    setIndirectDest(i, IndirectDests[i]);
+  setCalledOperand(Fn);
+
+#ifndef NDEBUG
+  assert(((Args.size() == FTy->getNumParams()) ||
+          (FTy->isVarArg() && Args.size() > FTy->getNumParams())) &&
+         "Calling a function with bad signature");
+
+  for (unsigned i = 0, e = Args.size(); i != e; i++)
+    assert((i >= FTy->getNumParams() ||
+            FTy->getParamType(i) == Args[i]->getType()) &&
+           "Calling a function with a bad signature!");
+#endif
+
+  std::copy(Args.begin(), Args.end(), op_begin());
+
+  auto It = populateBundleOperandInfos(Bundles, Args.size());
+  (void)It;
+  assert(It + 2 + IndirectDests.size() == op_end() && "Should add up!");
+
+  setName(NameStr);
+}
+
+CallBrInst::CallBrInst(const CallBrInst &CBI)
+    : CallBase(CBI.Attrs, CBI.FTy, CBI.getType(), Instruction::CallBr,
+               OperandTraits<CallBase>::op_end(this) - CBI.getNumOperands(),
+               CBI.getNumOperands()) {
+  setCallingConv(CBI.getCallingConv());
+  std::copy(CBI.op_begin(), CBI.op_end(), op_begin());
+  std::copy(CBI.bundle_op_info_begin(), CBI.bundle_op_info_end(),
+            bundle_op_info_begin());
+  SubclassOptionalData = CBI.SubclassOptionalData;
+  NumIndirectDests = CBI.NumIndirectDests;
+}
+
+CallBrInst *CallBrInst::Create(CallBrInst *CBI, ArrayRef<OperandBundleDef> OpB,
+                               Instruction *InsertPt) {
+  std::vector<Value *> Args(CBI->arg_begin(), CBI->arg_end());
+
+  auto *NewCBI = CallBrInst::Create(CBI->getFunctionType(),
+                                    CBI->getCalledValue(),
+                                    CBI->getDefaultDest(),
+                                    CBI->getIndirectDests(),
+                                    Args, OpB, CBI->getName(), InsertPt);
+  NewCBI->setCallingConv(CBI->getCallingConv());
+  NewCBI->SubclassOptionalData = CBI->SubclassOptionalData;
+  NewCBI->setAttributes(CBI->getAttributes());
+  NewCBI->setDebugLoc(CBI->getDebugLoc());
+  NewCBI->NumIndirectDests = CBI->NumIndirectDests;
+  return NewCBI;
+}
+
 //===----------------------------------------------------------------------===//
 //                        ReturnInst Implementation
 //===----------------------------------------------------------------------===//
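The implementation above only makes sense next to the textual form of the new instruction. Based on the LangRef and parser changes in this commit, a callbr used for asm-goto looks roughly like the following sketch; the function and label names are illustrative, not taken from the patch itself:

```llvm
define void @foo(i32 %x) {
entry:
  ; An asm-goto: execution either falls through to %normal or the inline
  ; assembly transfers control to %fail, whose address is passed in as an
  ; escaped "X"-constraint blockaddress operand.
  callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@foo, %fail))
      to label %normal [label %fail]

normal:
  ret void

fail:
  ret void
}
```

The fallthrough label plays the role that the normal destination plays for invoke, while the bracketed list supplies the extra indirect successors that make callbr a terminator.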
@@ -3996,6 +4071,14 @@ InvokeInst *InvokeInst::cloneImpl() const {
   return new(getNumOperands()) InvokeInst(*this);
 }
 
+CallBrInst *CallBrInst::cloneImpl() const {
+  if (hasOperandBundles()) {
+    unsigned DescriptorBytes = getNumOperandBundles() * sizeof(BundleOpInfo);
+    return new (getNumOperands(), DescriptorBytes) CallBrInst(*this);
+  }
+  return new (getNumOperands()) CallBrInst(*this);
+}
+
 ResumeInst *ResumeInst::cloneImpl() const { return new (1) ResumeInst(*this); }
 
 CleanupReturnInst *CleanupReturnInst::cloneImpl() const {
@@ -57,7 +57,8 @@ Value::Value(Type *ty, unsigned scid)
   // FIXME: Why isn't this in the subclass gunk??
   // Note, we cannot call isa<CallInst> before the CallInst has been
   // constructed.
-  if (SubclassID == Instruction::Call || SubclassID == Instruction::Invoke)
+  if (SubclassID == Instruction::Call || SubclassID == Instruction::Invoke ||
+      SubclassID == Instruction::CallBr)
     assert((VTy->isFirstClassType() || VTy->isVoidTy() || VTy->isStructTy()) &&
            "invalid CallInst type!");
   else if (SubclassID != BasicBlockVal &&
@@ -466,6 +466,7 @@ private:
   void visitReturnInst(ReturnInst &RI);
   void visitSwitchInst(SwitchInst &SI);
   void visitIndirectBrInst(IndirectBrInst &BI);
+  void visitCallBrInst(CallBrInst &CBI);
   void visitSelectInst(SelectInst &SI);
   void visitUserOp1(Instruction &I);
   void visitUserOp2(Instruction &I) { visitUserOp1(I); }
@@ -2450,6 +2451,26 @@ void Verifier::visitIndirectBrInst(IndirectBrInst &BI) {
   visitTerminator(BI);
 }
 
+void Verifier::visitCallBrInst(CallBrInst &CBI) {
+  Assert(CBI.isInlineAsm(), "Callbr is currently only used for asm-goto!",
+         &CBI);
+  Assert(CBI.getType()->isVoidTy(), "Callbr return value is not supported!",
+         &CBI);
+  for (unsigned i = 0, e = CBI.getNumSuccessors(); i != e; ++i)
+    Assert(CBI.getSuccessor(i)->getType()->isLabelTy(),
+           "Callbr successors must all have pointer type!", &CBI);
+  for (unsigned i = 0, e = CBI.getNumOperands(); i != e; ++i) {
+    Assert(i >= CBI.getNumArgOperands() || !isa<BasicBlock>(CBI.getOperand(i)),
+           "Using an unescaped label as a callbr argument!", &CBI);
+    if (isa<BasicBlock>(CBI.getOperand(i)))
+      for (unsigned j = i + 1; j != e; ++j)
+        Assert(CBI.getOperand(i) != CBI.getOperand(j),
+               "Duplicate callbr destination!", &CBI);
+  }
+
+  visitTerminator(CBI);
+}
+
 void Verifier::visitSelectInst(SelectInst &SI) {
   Assert(!SelectInst::areInvalidOperands(SI.getOperand(0), SI.getOperand(1),
                                          SI.getOperand(2)),
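The verifier rules above can be read off a small example. Assuming the syntax this patch introduces (names here are illustrative), the following is verifier-clean; it would be rejected if the callee were not inline asm, if the callbr returned a value, if %indirect appeared twice in the destination list, or if a basic block were passed as a plain argument instead of an escaped blockaddress:

```llvm
define void @valid(i32 %x) {
entry:
  ; Inline-asm callee, void result, unique destinations, and the label
  ; operand escaped via blockaddress -- all four checks pass.
  callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@valid, %indirect))
      to label %fallthrough [label %indirect]

fallthrough:
  ret void

indirect:
  ret void
}
```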
@@ -590,6 +590,7 @@ static bool hasSourceMods(const SDNode *N) {
   case ISD::FDIV:
   case ISD::FREM:
   case ISD::INLINEASM:
+  case ISD::INLINEASM_BR:
   case AMDGPUISD::INTERP_P1:
   case AMDGPUISD::INTERP_P2:
   case AMDGPUISD::DIV_SCALE:
@@ -9697,7 +9697,8 @@ static bool isCopyFromRegOfInlineAsm(const SDNode *N) {
   do {
     // Follow the chain until we find an INLINEASM node.
    N = N->getOperand(0).getNode();
-    if (N->getOpcode() == ISD::INLINEASM)
+    if (N->getOpcode() == ISD::INLINEASM ||
+        N->getOpcode() == ISD::INLINEASM_BR)
       return true;
   } while (N->getOpcode() == ISD::CopyFromReg);
   return false;
@@ -5313,7 +5313,8 @@ unsigned SIInstrInfo::getInstSizeInBytes(const MachineInstr &MI) const {
     return 0;
   case TargetOpcode::BUNDLE:
     return getInstBundleSize(MI);
-  case TargetOpcode::INLINEASM: {
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: {
     const MachineFunction *MF = MI.getParent()->getParent();
     const char *AsmStr = MI.getOperand(0).getSymbolName();
     return getInlineAsmLength(AsmStr, *MF->getTarget().getMCAsmInfo());
@@ -2615,6 +2615,7 @@ void ARMDAGToDAGISel::Select(SDNode *N) {
       return;
     break;
   case ISD::INLINEASM:
+  case ISD::INLINEASM_BR:
     if (tryInlineAsm(N))
       return;
     break;
@@ -4319,7 +4320,7 @@ bool ARMDAGToDAGISel::tryInlineAsm(SDNode *N){
   if (!Changed)
     return false;
 
-  SDValue New = CurDAG->getNode(ISD::INLINEASM, SDLoc(N),
+  SDValue New = CurDAG->getNode(N->getOpcode(), SDLoc(N),
       CurDAG->getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
   New->setNodeId(-1);
   ReplaceNode(N, New.getNode());
@@ -487,7 +487,8 @@ unsigned AVRInstrInfo::getInstSizeInBytes(const MachineInstr &MI) const {
   case TargetOpcode::KILL:
   case TargetOpcode::DBG_VALUE:
     return 0;
-  case TargetOpcode::INLINEASM: {
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: {
     const MachineFunction &MF = *MI.getParent()->getParent();
     const AVRTargetMachine &TM = static_cast<const AVRTargetMachine&>(MF.getTarget());
     const AVRSubtarget &STI = MF.getSubtarget<AVRSubtarget>();
@@ -578,7 +578,8 @@ HexagonTargetLowering::LowerINLINEASM(SDValue Op, SelectionDAG &DAG) const {
   const HexagonRegisterInfo &HRI = *Subtarget.getRegisterInfo();
   unsigned LR = HRI.getRARegister();
 
-  if (Op.getOpcode() != ISD::INLINEASM || HMFI.hasClobberLR())
+  if ((Op.getOpcode() != ISD::INLINEASM &&
+       Op.getOpcode() != ISD::INLINEASM_BR) || HMFI.hasClobberLR())
     return Op;
 
   unsigned NumOps = Op.getNumOperands();
@@ -1291,6 +1292,7 @@ HexagonTargetLowering::HexagonTargetLowering(const TargetMachine &TM,
   setOperationAction(ISD::BUILD_PAIR, MVT::i64, Expand);
   setOperationAction(ISD::SIGN_EXTEND_INREG, MVT::i1, Expand);
   setOperationAction(ISD::INLINEASM, MVT::Other, Custom);
+  setOperationAction(ISD::INLINEASM_BR, MVT::Other, Custom);
   setOperationAction(ISD::PREFETCH, MVT::Other, Custom);
   setOperationAction(ISD::READCYCLECOUNTER, MVT::i64, Custom);
   setOperationAction(ISD::INTRINSIC_VOID, MVT::Other, Custom);
@@ -2740,7 +2742,7 @@ HexagonTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) const {
   unsigned Opc = Op.getOpcode();
 
   // Handle INLINEASM first.
-  if (Opc == ISD::INLINEASM)
+  if (Opc == ISD::INLINEASM || Opc == ISD::INLINEASM_BR)
     return LowerINLINEASM(Op, DAG);
 
   if (isHvxOperation(Op)) {
@@ -112,6 +112,7 @@ bool VLIWResourceModel::isResourceAvailable(SUnit *SU, bool IsTop) {
   case TargetOpcode::IMPLICIT_DEF:
   case TargetOpcode::COPY:
   case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR:
     break;
   }
 
@@ -167,6 +168,7 @@ bool VLIWResourceModel::reserveResources(SUnit *SU, bool IsTop) {
   case TargetOpcode::EH_LABEL:
   case TargetOpcode::COPY:
   case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR:
     break;
   }
   Packet.push_back(SU);
@@ -307,7 +307,8 @@ unsigned MSP430InstrInfo::getInstSizeInBytes(const MachineInstr &MI) const {
   case TargetOpcode::KILL:
   case TargetOpcode::DBG_VALUE:
     return 0;
-  case TargetOpcode::INLINEASM: {
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: {
     const MachineFunction *MF = MI.getParent()->getParent();
     const TargetInstrInfo &TII = *MF->getSubtarget().getInstrInfo();
     return TII.getInlineAsmLength(MI.getOperand(0).getSymbolName(),
@@ -577,7 +577,8 @@ unsigned MipsInstrInfo::getInstSizeInBytes(const MachineInstr &MI) const {
   switch (MI.getOpcode()) {
   default:
     return MI.getDesc().getSize();
-  case TargetOpcode::INLINEASM: { // Inline Asm: Variable size.
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: { // Inline Asm: Variable size.
     const MachineFunction *MF = MI.getParent()->getParent();
     const char *AsmStr = MI.getOperand(0).getSymbolName();
     return getInlineAsmLength(AsmStr, *MF->getTarget().getMCAsmInfo());
@@ -1000,7 +1000,8 @@ PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
 
   if (noImmForm)
     OperandBase = 1;
-  else if (OpC != TargetOpcode::INLINEASM) {
+  else if (OpC != TargetOpcode::INLINEASM &&
+           OpC != TargetOpcode::INLINEASM_BR) {
     assert(ImmToIdxMap.count(OpC) &&
            "No indexed form of load or store available!");
     unsigned NewOpcode = ImmToIdxMap.find(OpC)->second;
@@ -439,7 +439,8 @@ unsigned RISCVInstrInfo::getInstSizeInBytes(const MachineInstr &MI) const {
   case RISCV::PseudoCALL:
   case RISCV::PseudoTAIL:
     return 8;
-  case TargetOpcode::INLINEASM: {
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: {
     const MachineFunction &MF = *MI.getParent()->getParent();
     const auto &TM = static_cast<const RISCVTargetMachine &>(MF.getTarget());
     return getInlineAsmLength(MI.getOperand(0).getSymbolName(),
@@ -312,7 +312,7 @@ bool SparcDAGToDAGISel::tryInlineAsm(SDNode *N){
 
   SelectInlineAsmMemoryOperands(AsmNodeOperands, SDLoc(N));
 
-  SDValue New = CurDAG->getNode(ISD::INLINEASM, SDLoc(N),
+  SDValue New = CurDAG->getNode(N->getOpcode(), SDLoc(N),
       CurDAG->getVTList(MVT::Other, MVT::Glue), AsmNodeOperands);
   New->setNodeId(-1);
   ReplaceNode(N, New.getNode());
@@ -328,7 +328,8 @@ void SparcDAGToDAGISel::Select(SDNode *N) {
 
   switch (N->getOpcode()) {
   default: break;
-  case ISD::INLINEASM: {
+  case ISD::INLINEASM:
+  case ISD::INLINEASM_BR: {
     if (tryInlineAsm(N))
       return;
     break;
@@ -253,6 +253,11 @@ static void printOperand(X86AsmPrinter &P, const MachineInstr *MI,
     printSymbolOperand(P, MO, O);
     break;
   }
+  case MachineOperand::MO_BlockAddress: {
+    MCSymbol *Sym = P.GetBlockAddressSymbol(MO.getBlockAddress());
+    Sym->print(O, P.MAI);
+    break;
+  }
   }
 }
 
@@ -1476,7 +1476,8 @@ void FPS::handleSpecialFP(MachineBasicBlock::iterator &Inst) {
     break;
   }
 
-  case TargetOpcode::INLINEASM: {
+  case TargetOpcode::INLINEASM:
+  case TargetOpcode::INLINEASM_BR: {
     // The inline asm MachineInstr currently only *uses* FP registers for the
     // 'f' constraint. These should be turned into the current ST(x) register
     // in the machine instr.
@@ -6,7 +6,7 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This file implements the visitCall and visitInvoke functions.
+// This file implements the visitCall, visitInvoke, and visitCallBr functions.
 //
 //===----------------------------------------------------------------------===//
 
@@ -1834,8 +1834,8 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
   IntrinsicInst *II = dyn_cast<IntrinsicInst>(&CI);
   if (!II) return visitCallBase(CI);
 
-  // Intrinsics cannot occur in an invoke, so handle them here instead of in
-  // visitCallBase.
+  // Intrinsics cannot occur in an invoke or a callbr, so handle them here
+  // instead of in visitCallBase.
   if (auto *MI = dyn_cast<AnyMemIntrinsic>(II)) {
     bool Changed = false;
 
@@ -4017,6 +4017,11 @@ Instruction *InstCombiner::visitInvokeInst(InvokeInst &II) {
   return visitCallBase(II);
 }
 
+// CallBrInst simplification
+Instruction *InstCombiner::visitCallBrInst(CallBrInst &CBI) {
+  return visitCallBase(CBI);
+}
+
 /// If this cast does not affect the value passed through the varargs area, we
 /// can eliminate the use of the cast.
 static bool isSafeToEliminateVarargsCast(const CallBase &Call,
@@ -4145,7 +4150,7 @@ static IntrinsicInst *findInitTrampoline(Value *Callee) {
   return nullptr;
 }
 
-/// Improvements for call and invoke instructions.
+/// Improvements for call, callbr and invoke instructions.
 Instruction *InstCombiner::visitCallBase(CallBase &Call) {
   if (isAllocLikeFn(&Call, &TLI))
     return visitAllocSite(Call);
@@ -4178,7 +4183,7 @@ Instruction *InstCombiner::visitCallBase(CallBase &Call) {
   }
 
   // If the callee is a pointer to a function, attempt to move any casts to the
-  // arguments of the call/invoke.
+  // arguments of the call/callbr/invoke.
   Value *Callee = Call.getCalledValue();
   if (!isa<Function>(Callee) && transformConstExprCastCall(Call))
     return nullptr;
@@ -4211,9 +4216,9 @@ Instruction *InstCombiner::visitCallBase(CallBase &Call) {
     if (isa<CallInst>(OldCall))
       return eraseInstFromFunction(*OldCall);
 
-    // We cannot remove an invoke, because it would change the CFG, just
-    // change the callee to a null pointer.
-    cast<InvokeInst>(OldCall)->setCalledFunction(
+    // We cannot remove an invoke or a callbr, because it would change the
+    // CFG, just change the callee to a null pointer.
+    cast<CallBase>(OldCall)->setCalledFunction(
         CalleeF->getFunctionType(),
         Constant::getNullValue(CalleeF->getType()));
     return nullptr;
@@ -4228,8 +4233,8 @@ Instruction *InstCombiner::visitCallBase(CallBase &Call) {
     if (!Call.getType()->isVoidTy())
       replaceInstUsesWith(Call, UndefValue::get(Call.getType()));
 
-    if (isa<InvokeInst>(Call)) {
-      // Can't remove an invoke because we cannot change the CFG.
+    if (Call.isTerminator()) {
+      // Can't remove an invoke or callbr because we cannot change the CFG.
       return nullptr;
     }
 
@@ -4282,7 +4287,7 @@ Instruction *InstCombiner::visitCallBase(CallBase &Call) {
 }
 
 /// If the callee is a constexpr cast of a function, attempt to move the cast to
-/// the arguments of the call/invoke.
+/// the arguments of the call/callbr/invoke.
 bool InstCombiner::transformConstExprCastCall(CallBase &Call) {
   auto *Callee = dyn_cast<Function>(Call.getCalledValue()->stripPointerCasts());
   if (!Callee)
@@ -4333,17 +4338,21 @@ bool InstCombiner::transformConstExprCastCall(CallBase &Call) {
       return false;   // Attribute not compatible with transformed value.
   }
 
-  // If the callbase is an invoke instruction, and the return value is used by
-  // a PHI node in a successor, we cannot change the return type of the call
-  // because there is no place to put the cast instruction (without breaking
-  // the critical edge).  Bail out in this case.
-  if (!Caller->use_empty())
+  // If the callbase is an invoke/callbr instruction, and the return value is
+  // used by a PHI node in a successor, we cannot change the return type of
+  // the call because there is no place to put the cast instruction (without
+  // breaking the critical edge).  Bail out in this case.
+  if (!Caller->use_empty()) {
     if (InvokeInst *II = dyn_cast<InvokeInst>(Caller))
       for (User *U : II->users())
         if (PHINode *PN = dyn_cast<PHINode>(U))
           if (PN->getParent() == II->getNormalDest() ||
               PN->getParent() == II->getUnwindDest())
             return false;
+    // FIXME: Be conservative for callbr to avoid a quadratic search.
+    if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller))
+      return false;
+  }
   }
 
   unsigned NumActualArgs = Call.arg_size();
@@ -4497,6 +4506,9 @@ bool InstCombiner::transformConstExprCastCall(CallBase &Call) {
   if (InvokeInst *II = dyn_cast<InvokeInst>(Caller)) {
     NewCall = Builder.CreateInvoke(Callee, II->getNormalDest(),
                                    II->getUnwindDest(), Args, OpBundles);
+  } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller)) {
+    NewCall = Builder.CreateCallBr(Callee, CBI->getDefaultDest(),
+                                   CBI->getIndirectDests(), Args, OpBundles);
   } else {
     NewCall = Builder.CreateCall(Callee, Args, OpBundles);
     cast<CallInst>(NewCall)->setTailCallKind(
@@ -4520,11 +4532,14 @@ bool InstCombiner::transformConstExprCastCall(CallBase &Call) {
     NV = NC = CastInst::CreateBitOrPointerCast(NC, OldRetTy);
     NC->setDebugLoc(Caller->getDebugLoc());
 
-    // If this is an invoke instruction, we should insert it after the first
-    // non-phi, instruction in the normal successor block.
+    // If this is an invoke/callbr instruction, we should insert it after the
+    // first non-phi instruction in the normal successor block.
     if (InvokeInst *II = dyn_cast<InvokeInst>(Caller)) {
       BasicBlock::iterator I = II->getNormalDest()->getFirstInsertionPt();
       InsertNewInstBefore(NC, *I);
+    } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(Caller)) {
+      BasicBlock::iterator I = CBI->getDefaultDest()->getFirstInsertionPt();
+      InsertNewInstBefore(NC, *I);
     } else {
       // Otherwise, it's a call, just insert cast right after the call.
       InsertNewInstBefore(NC, *Caller);
@@ -4673,6 +4688,12 @@ InstCombiner::transformCallThroughTrampoline(CallBase &Call,
                                    NewArgs, OpBundles);
     cast<InvokeInst>(NewCaller)->setCallingConv(II->getCallingConv());
     cast<InvokeInst>(NewCaller)->setAttributes(NewPAL);
+  } else if (CallBrInst *CBI = dyn_cast<CallBrInst>(&Call)) {
+    NewCaller =
+        CallBrInst::Create(NewFTy, NewCallee, CBI->getDefaultDest(),
+                           CBI->getIndirectDests(), NewArgs, OpBundles);
+    cast<CallBrInst>(NewCaller)->setCallingConv(CBI->getCallingConv());
+    cast<CallBrInst>(NewCaller)->setAttributes(NewPAL);
   } else {
     NewCaller = CallInst::Create(NewFTy, NewCallee, NewArgs, OpBundles);
     cast<CallInst>(NewCaller)->setTailCallKind(
@@ -392,6 +392,7 @@ public:
   Instruction *visitSelectInst(SelectInst &SI);
   Instruction *visitCallInst(CallInst &CI);
   Instruction *visitInvokeInst(InvokeInst &II);
+  Instruction *visitCallBrInst(CallBrInst &CBI);
 
   Instruction *SliceUpIllegalIntegerPHI(PHINode &PN);
   Instruction *visitPHINode(PHINode &PN);
@@ -921,8 +921,8 @@ Instruction *InstCombiner::foldOpIntoPhi(Instruction &I, PHINode *PN) {
 
     // If the InVal is an invoke at the end of the pred block, then we can't
     // insert a computation after it without breaking the edge.
-    if (InvokeInst *II = dyn_cast<InvokeInst>(InVal))
-      if (II->getParent() == NonConstBB)
+    if (isa<InvokeInst>(InVal))
+      if (cast<Instruction>(InVal)->getParent() == NonConstBB)
         return nullptr;
 
     // If the incoming non-constant value is in I's block, we will remove one
@@ -1131,6 +1131,14 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
       return false;
     }

+    // FIXME: Can we support the fallthrough edge?
+    if (isa<CallBrInst>(Pred->getTerminator())) {
+      LLVM_DEBUG(
+          dbgs() << "COULD NOT PRE LOAD BECAUSE OF CALLBR CRITICAL EDGE '"
+                 << Pred->getName() << "': " << *LI << '\n');
+      return false;
+    }
+
     if (LoadBB->isEHPad()) {
       LLVM_DEBUG(
           dbgs() << "COULD NOT PRE LOAD BECAUSE OF AN EH PAD CRITICAL EDGE '"
@@ -2167,8 +2175,8 @@ bool GVN::performScalarPRE(Instruction *CurInst) {
    return false;

  // We don't currently value number ANY inline asm calls.
- if (CallInst *CallI = dyn_cast<CallInst>(CurInst))
-   if (CallI->isInlineAsm())
+ if (auto *CallB = dyn_cast<CallBase>(CurInst))
+   if (CallB->isInlineAsm())
      return false;

  uint32_t ValNo = VN.lookup(CurInst);
@@ -2251,6 +2259,11 @@ bool GVN::performScalarPRE(Instruction *CurInst) {
   if (isa<IndirectBrInst>(PREPred->getTerminator()))
     return false;

+  // Don't do PRE across callbr.
+  // FIXME: Can we do this across the fallthrough edge?
+  if (isa<CallBrInst>(PREPred->getTerminator()))
+    return false;
+
   // We can't do PRE safely on a critical edge, so instead we schedule
   // the edge to be split and perform the PRE the next time we iterate
   // on the function.
@@ -1055,7 +1055,7 @@ bool JumpThreadingPass::ProcessBlock(BasicBlock *BB) {
     Condition = IB->getAddress()->stripPointerCasts();
     Preference = WantBlockAddress;
   } else {
-    return false; // Must be an invoke.
+    return false; // Must be an invoke or callbr.
   }

   // Run constant folding to see if we can reduce the condition to a simple
@@ -1428,7 +1428,9 @@ bool JumpThreadingPass::SimplifyPartiallyRedundantLoad(LoadInst *LoadI) {
   // Add all the unavailable predecessors to the PredsToSplit list.
   for (BasicBlock *P : predecessors(LoadBB)) {
     // If the predecessor is an indirect goto, we can't split the edge.
-    if (isa<IndirectBrInst>(P->getTerminator()))
+    // Same for CallBr.
+    if (isa<IndirectBrInst>(P->getTerminator()) ||
+        isa<CallBrInst>(P->getTerminator()))
       return false;

     if (!AvailablePredSet.count(P))
@@ -1641,8 +1643,9 @@ bool JumpThreadingPass::ProcessThreadableEdges(Value *Cond, BasicBlock *BB,
       ++PredWithKnownDest;

     // If the predecessor ends with an indirect goto, we can't change its
-    // destination.
-    if (isa<IndirectBrInst>(Pred->getTerminator()))
+    // destination. Same for CallBr.
+    if (isa<IndirectBrInst>(Pred->getTerminator()) ||
+        isa<CallBrInst>(Pred->getTerminator()))
       continue;

     PredToDestList.push_back(std::make_pair(Pred, DestBB));
@@ -638,6 +638,11 @@ private:
     visitTerminator(II);
   }

+  void visitCallBrInst (CallBrInst &CBI) {
+    visitCallSite(&CBI);
+    visitTerminator(CBI);
+  }
+
   void visitCallSite (CallSite CS);
   void visitResumeInst (ResumeInst &I) { /*returns void*/ }
   void visitUnreachableInst(UnreachableInst &I) { /*returns void*/ }
@@ -733,6 +738,13 @@ void SCCPSolver::getFeasibleSuccessors(Instruction &TI,
     return;
   }

+  // In case of callbr, we pessimistically assume that all successors are
+  // feasible.
+  if (isa<CallBrInst>(&TI)) {
+    Succs.assign(TI.getNumSuccessors(), true);
+    return;
+  }
+
   LLVM_DEBUG(dbgs() << "Unknown terminator instruction: " << TI << '\n');
   llvm_unreachable("SCCP: Don't know how to handle this terminator!");
 }
@@ -1597,6 +1609,7 @@ bool SCCPSolver::ResolvedUndefsIn(Function &F) {
         return true;
       case Instruction::Call:
       case Instruction::Invoke:
+      case Instruction::CallBr:
         // There are two reasons a call can have an undef result
         // 1. It could be tracked.
         // 2. It could be constant-foldable.
@@ -549,6 +549,8 @@ BasicBlock *llvm::SplitBlockPredecessors(BasicBlock *BB,
       // all BlockAddress uses would need to be updated.
       assert(!isa<IndirectBrInst>(Preds[i]->getTerminator()) &&
              "Cannot split an edge from an IndirectBrInst");
+      assert(!isa<CallBrInst>(Preds[i]->getTerminator()) &&
+             "Cannot split an edge from a CallBrInst");
       Preds[i]->getTerminator()->replaceUsesOfWith(BB, NewBB);
     }

@@ -144,6 +144,10 @@ llvm::SplitCriticalEdge(Instruction *TI, unsigned SuccNum,
   // it in this generic function.
   if (DestBB->isEHPad()) return nullptr;

+  // Don't split the non-fallthrough edge from a callbr.
+  if (isa<CallBrInst>(TI) && SuccNum > 0)
+    return nullptr;
+
   // Create a new basic block, linking it into the CFG.
   BasicBlock *NewBB = BasicBlock::Create(TI->getContext(),
       TIBB->getName() + "." + DestBB->getName() + "_crit_edge");
@@ -1504,6 +1504,10 @@ llvm::InlineResult llvm::InlineFunction(CallSite CS, InlineFunctionInfo &IFI,
   assert(TheCall->getParent() && TheCall->getFunction()
          && "Instruction not in function!");

+  // FIXME: we don't inline callbr yet.
+  if (isa<CallBrInst>(TheCall))
+    return false;
+
   // If IFI has any state in it, zap it before we fill it in.
   IFI.reset();

@@ -1729,6 +1733,8 @@ llvm::InlineResult llvm::InlineFunction(CallSite CS, InlineFunctionInfo &IFI,
         Instruction *NewI = nullptr;
         if (isa<CallInst>(I))
           NewI = CallInst::Create(cast<CallInst>(I), OpDefs, I);
+        else if (isa<CallBrInst>(I))
+          NewI = CallBrInst::Create(cast<CallBrInst>(I), OpDefs, I);
         else
           NewI = InvokeInst::Create(cast<InvokeInst>(I), OpDefs, I);

@@ -2031,6 +2037,8 @@ llvm::InlineResult llvm::InlineFunction(CallSite CS, InlineFunctionInfo &IFI,
     Instruction *NewInst;
     if (CS.isCall())
       NewInst = CallInst::Create(cast<CallInst>(I), OpBundles, I);
+    else if (CS.isCallBr())
+      NewInst = CallBrInst::Create(cast<CallBrInst>(I), OpBundles, I);
     else
       NewInst = InvokeInst::Create(cast<InvokeInst>(I), OpBundles, I);
     NewInst->takeName(I);
@@ -996,6 +996,18 @@ bool llvm::TryToSimplifyUncondBranchFromEmptyBlock(BasicBlock *BB,
     }
   }

+  // We cannot fold the block if it's a branch to an already present callbr
+  // successor because that creates duplicate successors.
+  for (auto I = pred_begin(BB), E = pred_end(BB); I != E; ++I) {
+    if (auto *CBI = dyn_cast<CallBrInst>((*I)->getTerminator())) {
+      if (Succ == CBI->getDefaultDest())
+        return false;
+      for (unsigned i = 0, e = CBI->getNumIndirectDests(); i != e; ++i)
+        if (Succ == CBI->getIndirectDest(i))
+          return false;
+    }
+  }
+
   LLVM_DEBUG(dbgs() << "Killing Trivial BB: \n" << *BB);

   SmallVector<DominatorTree::UpdateType, 32> Updates;
@@ -27,6 +27,9 @@
 // to transform the loop and make these guarantees. Client code should check
 // that these conditions are true before relying on them.
 //
+// Similar complications arise from callbr instructions, particularly in
+// asm-goto where blockaddress expressions are used.
+//
 // Note that the simplifycfg pass will clean up blocks which are split out but
 // end up being unnecessary, so usage of this pass should not pessimize
 // generated code.
@@ -123,10 +126,11 @@ BasicBlock *llvm::InsertPreheaderForLoop(Loop *L, DominatorTree *DT,
        PI != PE; ++PI) {
     BasicBlock *P = *PI;
     if (!L->contains(P)) { // Coming in from outside the loop?
-      // If the loop is branched to from an indirect branch, we won't
+      // If the loop is branched to from an indirect terminator, we won't
       // be able to fully transform the loop, because it prohibits
       // edge splitting.
-      if (isa<IndirectBrInst>(P->getTerminator())) return nullptr;
+      if (P->getTerminator()->isIndirectTerminator())
+        return nullptr;

       // Keep track of it.
       OutsideBlocks.push_back(P);
@@ -235,8 +239,8 @@ static Loop *separateNestedLoop(Loop *L, BasicBlock *Preheader,
     for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
       if (PN->getIncomingValue(i) != PN ||
           !L->contains(PN->getIncomingBlock(i))) {
-        // We can't split indirectbr edges.
-        if (isa<IndirectBrInst>(PN->getIncomingBlock(i)->getTerminator()))
+        // We can't split indirect control flow edges.
+        if (PN->getIncomingBlock(i)->getTerminator()->isIndirectTerminator())
           return nullptr;
         OuterLoopPreds.push_back(PN->getIncomingBlock(i));
       }
@@ -357,8 +361,8 @@ static BasicBlock *insertUniqueBackedgeBlock(Loop *L, BasicBlock *Preheader,
   for (pred_iterator I = pred_begin(Header), E = pred_end(Header); I != E; ++I){
     BasicBlock *P = *I;

-    // Indirectbr edges cannot be split, so we must fail if we find one.
-    if (isa<IndirectBrInst>(P->getTerminator()))
+    // Indirect edges cannot be split, so we must fail if we find one.
+    if (P->getTerminator()->isIndirectTerminator())
       return nullptr;

     if (P != Preheader) BackedgeBlocks.push_back(P);
@@ -65,6 +65,9 @@ bool llvm::formDedicatedExitBlocks(Loop *L, DominatorTree *DT, LoopInfo *LI,
       if (isa<IndirectBrInst>(PredBB->getTerminator()))
         // We cannot rewrite exiting edges from an indirectbr.
         return false;
+      if (isa<CallBrInst>(PredBB->getTerminator()))
+        // We cannot rewrite exiting edges from a callbr.
+        return false;

       InLoopPredecessors.push_back(PredBB);
     } else {
@@ -1265,8 +1265,10 @@ static bool HoistThenElseCodeToIf(BranchInst *BI,
     while (isa<DbgInfoIntrinsic>(I2))
       I2 = &*BB2_Itr++;
   }
+  // FIXME: Can we define a safety predicate for CallBr?
   if (isa<PHINode>(I1) || !I1->isIdenticalToWhenDefined(I2) ||
-      (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2)))
+      (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2)) ||
+      isa<CallBrInst>(I1))
     return false;

   BasicBlock *BIParent = BI->getParent();
@@ -1349,9 +1351,14 @@ static bool HoistThenElseCodeToIf(BranchInst *BI,

 HoistTerminator:
   // It may not be possible to hoist an invoke.
+  // FIXME: Can we define a safety predicate for CallBr?
   if (isa<InvokeInst>(I1) && !isSafeToHoistInvoke(BB1, BB2, I1, I2))
     return Changed;

+  // TODO: callbr hoisting currently disabled pending further study.
+  if (isa<CallBrInst>(I1))
+    return Changed;
+
   for (BasicBlock *Succ : successors(BB1)) {
     for (PHINode &PN : Succ->phis()) {
       Value *BB1V = PN.getIncomingValueForBlock(BB1);
@@ -1443,7 +1450,7 @@ static bool canSinkInstructions(
   // Conservatively return false if I is an inline-asm instruction. Sinking
   // and merging inline-asm instructions can potentially create arguments
   // that cannot satisfy the inline-asm constraints.
-  if (const auto *C = dyn_cast<CallInst>(I))
+  if (const auto *C = dyn_cast<CallBase>(I))
     if (C->isInlineAsm())
       return false;

@@ -1506,7 +1513,7 @@
         // We can't create a PHI from this GEP.
         return false;
       // Don't create indirect calls! The called value is the final operand.
-      if ((isa<CallInst>(I0) || isa<InvokeInst>(I0)) && OI == OE - 1) {
+      if (isa<CallBase>(I0) && OI == OE - 1) {
         // FIXME: if the call was *already* indirect, we should do this.
         return false;
       }
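The hunks above all follow the same pattern: treat `callbr` like `indirectbr` wherever an edge cannot be split, while still visiting it as a call site. For orientation, here is a minimal sketch of the `callbr` form these passes now have to handle, in the same shape as this commit's own tests (the function name, asm string, and labels are illustrative only):

```llvm
define i32 @sketch(i32 %x) {
entry:
  ; A callbr has one fallthrough (default) destination and a bracketed list of
  ; indirect destinations; the blockaddress operand feeds the "X" constraint.
  callbr void asm sideeffect "", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %x, i8* blockaddress(@sketch, %abnormal))
      to label %normal [label %abnormal]

normal:
  ret i32 0

abnormal:
  ret i32 1
}
```

The indirect edge (`entry` to `%abnormal`) is exactly the kind of edge that SplitCriticalEdge, loop simplification, and PRE now refuse to split or thread across.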
llvm/test/Bitcode/callbr.ll (new file)
@@ -0,0 +1,14 @@
+; RUN: llvm-dis < %s.bc | FileCheck %s
+
+; callbr.ll.bc was generated by passing this file to llvm-as.
+
+define i32 @test_asm_goto(i32 %x){
+entry:
+; CHECK: callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@test_asm_goto, %fail))
+; CHECK-NEXT: to label %normal [label %fail]
+  callbr void asm "", "r,X"(i32 %x, i8* blockaddress(@test_asm_goto, %fail)) to label %normal [label %fail]
+normal:
+  ret i32 1
+fail:
+  ret i32 0
+}
llvm/test/Bitcode/callbr.ll.bc (new binary file, not shown)
llvm/test/CodeGen/X86/callbr-asm-blockplacement.ll (new file)
@@ -0,0 +1,106 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
+
+; This test asserted in MachineBlockPlacement during asm-goto bring up.
+
+%struct.wibble = type { %struct.pluto, i32, i8* }
+%struct.pluto = type { i32, i32, i32 }
+
+@global = external global [0 x %struct.wibble]
+
+define i32 @foo(i32 %arg, i32 (i8*)* %arg3) nounwind {
+; CHECK-LABEL: foo:
+; CHECK: # %bb.0: # %bb
+; CHECK-NEXT: pushq %rbp
+; CHECK-NEXT: pushq %r15
+; CHECK-NEXT: pushq %r14
+; CHECK-NEXT: pushq %r13
+; CHECK-NEXT: pushq %r12
+; CHECK-NEXT: pushq %rbx
+; CHECK-NEXT: pushq %rax
+; CHECK-NEXT: movabsq $-2305847407260205056, %rbx # imm = 0xDFFFFC0000000000
+; CHECK-NEXT: xorl %eax, %eax
+; CHECK-NEXT: testb %al, %al
+; CHECK-NEXT: jne .LBB0_5
+; CHECK-NEXT: # %bb.1: # %bb5
+; CHECK-NEXT: movq %rsi, %r14
+; CHECK-NEXT: movslq %edi, %rbp
+; CHECK-NEXT: leaq (,%rbp,8), %rax
+; CHECK-NEXT: leaq global(%rax,%rax,2), %r15
+; CHECK-NEXT: leaq global+4(%rax,%rax,2), %r12
+; CHECK-NEXT: xorl %r13d, %r13d
+; CHECK-NEXT: .p2align 4, 0x90
+; CHECK-NEXT: .LBB0_2: # %bb8
+; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
+; CHECK-NEXT: callq bar
+; CHECK-NEXT: movq %rax, %rbx
+; CHECK-NEXT: movq %rax, %rdi
+; CHECK-NEXT: callq *%r14
+; CHECK-NEXT: movq %r15, %rdi
+; CHECK-NEXT: callq hoge
+; CHECK-NEXT: movq %r12, %rdi
+; CHECK-NEXT: callq hoge
+; CHECK-NEXT: testb %r13b, %r13b
+; CHECK-NEXT: jne .LBB0_2
+; CHECK-NEXT: # %bb.3: # %bb15
+; CHECK-NEXT: leaq (%rbp,%rbp,2), %rax
+; CHECK-NEXT: movq %rbx, global+16(,%rax,8)
+; CHECK-NEXT: movabsq $-2305847407260205056, %rbx # imm = 0xDFFFFC0000000000
+; CHECK-NEXT: #APP
+; CHECK-NEXT: #NO_APP
+; CHECK-NEXT: .LBB0_4: # %bb17
+; CHECK-NEXT: callq widget
+; CHECK-NEXT: .Ltmp0: # Block address taken
+; CHECK-NEXT: .LBB0_5: # %bb18
+; CHECK-NEXT: movw $0, 14(%rbx)
+; CHECK-NEXT: addq $8, %rsp
+; CHECK-NEXT: popq %rbx
+; CHECK-NEXT: popq %r12
+; CHECK-NEXT: popq %r13
+; CHECK-NEXT: popq %r14
+; CHECK-NEXT: popq %r15
+; CHECK-NEXT: popq %rbp
+; CHECK-NEXT: retq
+bb:
+  %tmp = add i64 0, -2305847407260205056
+  %tmp4 = sext i32 %arg to i64
+  br i1 undef, label %bb18, label %bb5
+
+bb5: ; preds = %bb
+  %tmp6 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 0, i32 0
+  %tmp7 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 0, i32 1
+  br label %bb8
+
+bb8: ; preds = %bb8, %bb5
+  %tmp9 = call i8* @bar(i64 undef)
+  %tmp10 = call i32 %arg3(i8* nonnull %tmp9)
+  %tmp11 = ptrtoint i32* %tmp6 to i64
+  call void @hoge(i64 %tmp11)
+  %tmp12 = ptrtoint i32* %tmp7 to i64
+  %tmp13 = add i64 undef, -2305847407260205056
+  call void @hoge(i64 %tmp12)
+  %tmp14 = icmp eq i32 0, 0
+  br i1 %tmp14, label %bb15, label %bb8
+
+bb15: ; preds = %bb8
+  %tmp16 = getelementptr [0 x %struct.wibble], [0 x %struct.wibble]* @global, i64 0, i64 %tmp4, i32 2
+  store i8* %tmp9, i8** %tmp16
+  callbr void asm sideeffect "", "X"(i8* blockaddress(@foo, %bb18))
+          to label %bb17 [label %bb18]
+
+bb17: ; preds = %bb15
+  call void @widget()
+  br label %bb18
+
+bb18: ; preds = %bb17, %bb15, %bb
+  %tmp19 = add i64 %tmp, 14
+  %tmp20 = inttoptr i64 %tmp19 to i16*
+  store i16 0, i16* %tmp20
+  ret i32 undef
+}
+
+declare i8* @bar(i64)
+
+declare void @widget()
+
+declare void @hoge(i64)
llvm/test/CodeGen/X86/callbr-asm-branch-folding.ll (new file)
@@ -0,0 +1,151 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
+
+; This test hung in the BranchFolding pass during asm-goto bring up
+
+@e = global i32 0
+@j = global i32 0
+
+define void @n(i32* %o, i32 %p, i32 %u) nounwind {
+; CHECK-LABEL: n:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: pushq %rbp
+; CHECK-NEXT: pushq %r15
+; CHECK-NEXT: pushq %r14
+; CHECK-NEXT: pushq %r13
+; CHECK-NEXT: pushq %r12
+; CHECK-NEXT: pushq %rbx
+; CHECK-NEXT: pushq %rax
+; CHECK-NEXT: movl %edx, %ebx
+; CHECK-NEXT: movl %esi, %r12d
+; CHECK-NEXT: movq %rdi, %r15
+; CHECK-NEXT: callq c
+; CHECK-NEXT: movl %eax, %r13d
+; CHECK-NEXT: movq %r15, %rdi
+; CHECK-NEXT: callq l
+; CHECK-NEXT: testl %eax, %eax
+; CHECK-NEXT: je .LBB0_1
+; CHECK-NEXT: .LBB0_10: # %cleanup
+; CHECK-NEXT: addq $8, %rsp
+; CHECK-NEXT: popq %rbx
+; CHECK-NEXT: popq %r12
+; CHECK-NEXT: popq %r13
+; CHECK-NEXT: popq %r14
+; CHECK-NEXT: popq %r15
+; CHECK-NEXT: popq %rbp
+; CHECK-NEXT: retq
+; CHECK-NEXT: .LBB0_1: # %if.end
+; CHECK-NEXT: movl %ebx, {{[-0-9]+}}(%r{{[sb]}}p) # 4-byte Spill
+; CHECK-NEXT: cmpl $0, {{.*}}(%rip)
+; CHECK-NEXT: # implicit-def: $ebx
+; CHECK-NEXT: # implicit-def: $r14d
+; CHECK-NEXT: je .LBB0_4
+; CHECK-NEXT: # %bb.2: # %if.then4
+; CHECK-NEXT: movslq %r12d, %rdi
+; CHECK-NEXT: callq m
+; CHECK-NEXT: # implicit-def: $ebx
+; CHECK-NEXT: # implicit-def: $ebp
+; CHECK-NEXT: .LBB0_3: # %r
+; CHECK-NEXT: callq c
+; CHECK-NEXT: movl %ebp, %r14d
+; CHECK-NEXT: .LBB0_4: # %if.end8
+; CHECK-NEXT: movl %ebx, %edi
+; CHECK-NEXT: callq i
+; CHECK-NEXT: movl %eax, %ebp
+; CHECK-NEXT: orl %r14d, %ebp
+; CHECK-NEXT: testl %r13d, %r13d
+; CHECK-NEXT: je .LBB0_6
+; CHECK-NEXT: # %bb.5:
+; CHECK-NEXT: andl $4, %ebx
+; CHECK-NEXT: jmp .LBB0_3
+; CHECK-NEXT: .LBB0_6: # %if.end12
+; CHECK-NEXT: testl %ebp, %ebp
+; CHECK-NEXT: je .LBB0_9
+; CHECK-NEXT: # %bb.7: # %if.then14
+; CHECK-NEXT: movl {{[-0-9]+}}(%r{{[sb]}}p), %eax # 4-byte Reload
+; CHECK-NEXT: #APP
+; CHECK-NEXT: #NO_APP
+; CHECK-NEXT: jmp .LBB0_10
+; CHECK-NEXT: .Ltmp0: # Block address taken
+; CHECK-NEXT: .LBB0_8: # %if.then20.critedge
+; CHECK-NEXT: movl {{.*}}(%rip), %edi
+; CHECK-NEXT: movslq %eax, %rcx
+; CHECK-NEXT: movl $1, %esi
+; CHECK-NEXT: movq %r15, %rdx
+; CHECK-NEXT: addq $8, %rsp
+; CHECK-NEXT: popq %rbx
+; CHECK-NEXT: popq %r12
+; CHECK-NEXT: popq %r13
+; CHECK-NEXT: popq %r14
+; CHECK-NEXT: popq %r15
+; CHECK-NEXT: popq %rbp
+; CHECK-NEXT: jmp k # TAILCALL
+; CHECK-NEXT: .LBB0_9: # %if.else
+; CHECK-NEXT: incq 0
+; CHECK-NEXT: jmp .LBB0_10
+entry:
+  %call = tail call i32 @c()
+  %call1 = tail call i32 @l(i32* %o)
+  %tobool = icmp eq i32 %call1, 0
+  br i1 %tobool, label %if.end, label %cleanup
+
+if.end: ; preds = %entry
+  %0 = load i32, i32* @e
+  %tobool3 = icmp eq i32 %0, 0
+  br i1 %tobool3, label %if.end8, label %if.then4, !prof !0
+
+if.then4: ; preds = %if.end
+  %conv5 = sext i32 %p to i64
+  %call6 = tail call i32 @m(i64 %conv5)
+  br label %r
+
+r: ; preds = %if.end8, %if.then4
+  %flags.0 = phi i32 [ undef, %if.then4 ], [ %and, %if.end8 ]
+  %major.0 = phi i32 [ undef, %if.then4 ], [ %or, %if.end8 ]
+  %call7 = tail call i32 @c()
+  br label %if.end8
+
+if.end8: ; preds = %r, %if.end
+  %flags.1 = phi i32 [ %flags.0, %r ], [ undef, %if.end ]
+  %major.1 = phi i32 [ %major.0, %r ], [ undef, %if.end ]
+  %call9 = tail call i32 @i(i32 %flags.1)
+  %or = or i32 %call9, %major.1
+  %and = and i32 %flags.1, 4
+  %tobool10 = icmp eq i32 %call, 0
+  br i1 %tobool10, label %if.end12, label %r
+
+if.end12: ; preds = %if.end8
+  %tobool13 = icmp eq i32 %or, 0
+  br i1 %tobool13, label %if.else, label %if.then14
+
+if.then14: ; preds = %if.end12
+  callbr void asm sideeffect "", "X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@n, %if.then20.critedge))
+          to label %cleanup [label %if.then20.critedge]
+
+if.then20.critedge: ; preds = %if.then14
+  %1 = load i32, i32* @j
+  %conv21 = sext i32 %u to i64
+  %call22 = tail call i32 @k(i32 %1, i64 1, i32* %o, i64 %conv21)
+  br label %cleanup
+
+if.else: ; preds = %if.end12
+  %2 = load i64, i64* null
+  %inc = add i64 %2, 1
+  store i64 %inc, i64* null
+  br label %cleanup
+
+cleanup: ; preds = %if.else, %if.then20.critedge, %if.then14, %entry
+  ret void
+}
+
+declare i32 @c()
+
+declare i32 @l(i32*)
+
+declare i32 @m(i64)
+
+declare i32 @i(i32)
+
+declare i32 @k(i32, i64, i32*, i64)
+
+!0 = !{!"branch_weights", i32 2000, i32 1}
llvm/test/CodeGen/X86/callbr-asm-destinations.ll (new file)
@@ -0,0 +1,15 @@
+; RUN: not llc -mtriple=i686-- < %s 2> %t
+; RUN: FileCheck %s < %t
+
+; CHECK: Duplicate callbr destination
+
+; A test for asm-goto duplicate labels limitation
+
+define i32 @test(i32 %a) {
+entry:
+  %0 = add i32 %a, 4
+  callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail)) to label %fail [label %fail]
+
+fail:
+  ret i32 1
+}
llvm/test/CodeGen/X86/callbr-asm-errors.ll (new file)
@@ -0,0 +1,18 @@
+; RUN: not llc -mtriple=i686-- < %s 2> %t
+; RUN: FileCheck %s < %t
+
+; CHECK: Duplicate callbr destination
+
+; A test for asm-goto duplicate labels limitation
+
+define i32 @test(i32 %a) {
+entry:
+  %0 = add i32 %a, 4
+  callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail), i8* blockaddress(@test, %fail)) to label %normal [label %fail, label %fail]
+
+normal:
+  ret i32 %0
+
+fail:
+  ret i32 1
+}
llvm/test/CodeGen/X86/callbr-asm-outputs.ll (new file)
@@ -0,0 +1,18 @@
+; RUN: not llc -mtriple=i686-- < %s 2> %t
+; RUN: FileCheck %s < %t
+
+; CHECK: error: asm-goto outputs not supported
+
+; A test for asm-goto output prohibition
+
+define i32 @test(i32 %a) {
+entry:
+  %0 = add i32 %a, 4
+  %1 = callbr i32 asm "xorl $1, $1; jmp ${1:l}", "=&r,r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test, %fail)) to label %normal [label %fail]
+
+normal:
+  ret i32 %1
+
+fail:
+  ret i32 1
+}
133
llvm/test/CodeGen/X86/callbr-asm.ll
Normal file
133
llvm/test/CodeGen/X86/callbr-asm.ll
Normal file
@ -0,0 +1,133 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=i686-- -O3 | FileCheck %s

; Tests for using callbr as an asm-goto wrapper

; Test 1 - fallthrough label gets removed, but the fallthrough code that is
; unreachable due to asm ending on a jmp is still left in.
define i32 @test1(i32 %a) {
; CHECK-LABEL: test1:
; CHECK:       # %bb.0: # %entry
; CHECK-NEXT:    movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT:    addl $4, %eax
; CHECK-NEXT:    #APP
; CHECK-NEXT:    xorl %eax, %eax
; CHECK-NEXT:    jmp .Ltmp00
; CHECK-NEXT:    #NO_APP
; CHECK-NEXT:  .LBB0_1: # %normal
; CHECK-NEXT:    xorl %eax, %eax
; CHECK-NEXT:    retl
; CHECK-NEXT:  .Ltmp0: # Block address taken
; CHECK-NEXT:  .LBB0_2: # %fail
; CHECK-NEXT:    movl $1, %eax
; CHECK-NEXT:    retl
entry:
  %0 = add i32 %a, 4
  callbr void asm "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test1, %fail)) to label %normal [label %fail]

normal:
  ret i32 0

fail:
  ret i32 1
}

; Test 2 - callbr terminates an unreachable block, function gets simplified
; to a trivial zero return.
define i32 @test2(i32 %a) {
; CHECK-LABEL: test2:
; CHECK:       # %bb.0: # %entry
; CHECK-NEXT:    xorl %eax, %eax
; CHECK-NEXT:    retl
entry:
  br label %normal

unreachableasm:
  %0 = add i32 %a, 4
  callbr void asm sideeffect "xorl $0, $0; jmp ${1:l}", "r,X,~{dirflag},~{fpsr},~{flags}"(i32 %0, i8* blockaddress(@test2, %fail)) to label %normal [label %fail]

normal:
  ret i32 0

fail:
  ret i32 1
}

; Test 3 - asm-goto implements a loop. The loop gets recognized, but many loop
; transforms fail due to canonicalization having callbr exceptions. Trivial
; blocks at labels 1 and 3 also don't get simplified due to callbr.
define dso_local i32 @test3(i32 %a) {
; CHECK-LABEL: test3:
; CHECK:       # %bb.0: # %entry
; CHECK-NEXT:  .Ltmp1: # Block address taken
; CHECK-NEXT:  .LBB2_1: # %label01
; CHECK-NEXT:    # =>This Loop Header: Depth=1
; CHECK-NEXT:    # Child Loop BB2_2 Depth 2
; CHECK-NEXT:    # Child Loop BB2_3 Depth 3
; CHECK-NEXT:    # Child Loop BB2_4 Depth 4
; CHECK-NEXT:  .Ltmp2: # Block address taken
; CHECK-NEXT:  .LBB2_2: # %label02
; CHECK-NEXT:    # Parent Loop BB2_1 Depth=1
; CHECK-NEXT:    # => This Loop Header: Depth=2
; CHECK-NEXT:    # Child Loop BB2_3 Depth 3
; CHECK-NEXT:    # Child Loop BB2_4 Depth 4
; CHECK-NEXT:    addl $4, {{[0-9]+}}(%esp)
; CHECK-NEXT:  .Ltmp3: # Block address taken
; CHECK-NEXT:  .LBB2_3: # %label03
; CHECK-NEXT:    # Parent Loop BB2_1 Depth=1
; CHECK-NEXT:    # Parent Loop BB2_2 Depth=2
; CHECK-NEXT:    # => This Loop Header: Depth=3
; CHECK-NEXT:    # Child Loop BB2_4 Depth 4
; CHECK-NEXT:    .p2align 4, 0x90
; CHECK-NEXT:  .Ltmp4: # Block address taken
; CHECK-NEXT:  .LBB2_4: # %label04
; CHECK-NEXT:    # Parent Loop BB2_1 Depth=1
; CHECK-NEXT:    # Parent Loop BB2_2 Depth=2
; CHECK-NEXT:    # Parent Loop BB2_3 Depth=3
; CHECK-NEXT:    # => This Inner Loop Header: Depth=4
; CHECK-NEXT:    #APP
; CHECK-NEXT:    jmp .Ltmp10
; CHECK-NEXT:    jmp .Ltmp20
; CHECK-NEXT:    jmp .Ltmp30
; CHECK-NEXT:    #NO_APP
; CHECK-NEXT:  .LBB2_5: # %normal0
; CHECK-NEXT:    # in Loop: Header=BB2_4 Depth=4
; CHECK-NEXT:    #APP
; CHECK-NEXT:    jmp .Ltmp10
; CHECK-NEXT:    jmp .Ltmp20
; CHECK-NEXT:    jmp .Ltmp30
; CHECK-NEXT:    jmp .Ltmp40
; CHECK-NEXT:    #NO_APP
; CHECK-NEXT:  .LBB2_6: # %normal1
; CHECK-NEXT:    movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT:    retl
entry:
  %a.addr = alloca i32, align 4
  store i32 %a, i32* %a.addr, align 4
  br label %label01

label01:                                          ; preds = %normal0, %label04, %entry
  br label %label02

label02:                                          ; preds = %normal0, %label04, %label01
  %0 = load i32, i32* %a.addr, align 4
  %add = add nsw i32 %0, 4
  store i32 %add, i32* %a.addr, align 4
  br label %label03

label03:                                          ; preds = %normal0, %label04, %label02
  br label %label04

label04:                                          ; preds = %normal0, %label03
  callbr void asm sideeffect "jmp ${0:l}; jmp ${1:l}; jmp ${2:l}", "X,X,X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@test3, %label01), i8* blockaddress(@test3, %label02), i8* blockaddress(@test3, %label03))
          to label %normal0 [label %label01, label %label02, label %label03]

normal0:                                          ; preds = %label04
  callbr void asm sideeffect "jmp ${0:l}; jmp ${1:l}; jmp ${2:l}; jmp ${3:l}", "X,X,X,X,~{dirflag},~{fpsr},~{flags}"(i8* blockaddress(@test3, %label01), i8* blockaddress(@test3, %label02), i8* blockaddress(@test3, %label03), i8* blockaddress(@test3, %label04))
          to label %normal1 [label %label01, label %label02, label %label03, label %label04]

normal1:                                          ; preds = %normal0
  %1 = load i32, i32* %a.addr, align 4
  ret i32 %1
}
llvm/test/Transforms/GVN/callbr-loadpre-critedge.ll (new file, 49 lines)
@@ -0,0 +1,49 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt < %s -gvn -S | FileCheck %s

; This test checks that we don't hang trying to split a critical edge in loadpre
; when the control flow uses a callbr instruction.

%struct.pluto = type <{ i8, i8 }>

define void @widget(%struct.pluto** %tmp1) {
; CHECK-LABEL: @widget(
; CHECK-NEXT:  bb:
; CHECK-NEXT:    callbr void asm sideeffect "", "X,X"(i8* blockaddress(@widget, [[BB5:%.*]]), i8* blockaddress(@widget, [[BB8:%.*]]))
; CHECK-NEXT:    to label [[BB4:%.*]] [label [[BB5]], label %bb8]
; CHECK:       bb4:
; CHECK-NEXT:    br label [[BB5]]
; CHECK:       bb5:
; CHECK-NEXT:    [[TMP6:%.*]] = load %struct.pluto*, %struct.pluto** [[TMP1:%.*]]
; CHECK-NEXT:    [[TMP7:%.*]] = getelementptr inbounds [[STRUCT_PLUTO:%.*]], %struct.pluto* [[TMP6]], i64 0, i32 1
; CHECK-NEXT:    br label [[BB8]]
; CHECK:       bb8:
; CHECK-NEXT:    [[TMP9:%.*]] = phi i8* [ [[TMP7]], [[BB5]] ], [ null, [[BB:%.*]] ]
; CHECK-NEXT:    [[TMP10:%.*]] = load %struct.pluto*, %struct.pluto** [[TMP1]]
; CHECK-NEXT:    [[TMP11:%.*]] = getelementptr inbounds [[STRUCT_PLUTO]], %struct.pluto* [[TMP10]], i64 0, i32 0
; CHECK-NEXT:    [[TMP12:%.*]] = load i8, i8* [[TMP11]]
; CHECK-NEXT:    tail call void @spam(i8* [[TMP9]], i8 [[TMP12]])
; CHECK-NEXT:    ret void
;
bb:
  callbr void asm sideeffect "", "X,X"(i8* blockaddress(@widget, %bb5), i8* blockaddress(@widget, %bb8))
          to label %bb4 [label %bb5, label %bb8]

bb4:                                              ; preds = %bb
  br label %bb5

bb5:                                              ; preds = %bb4, %bb
  %tmp6 = load %struct.pluto*, %struct.pluto** %tmp1
  %tmp7 = getelementptr inbounds %struct.pluto, %struct.pluto* %tmp6, i64 0, i32 1
  br label %bb8

bb8:                                              ; preds = %bb5, %bb
  %tmp9 = phi i8* [ %tmp7, %bb5 ], [ null, %bb ]
  %tmp10 = load %struct.pluto*, %struct.pluto** %tmp1
  %tmp11 = getelementptr inbounds %struct.pluto, %struct.pluto* %tmp10, i64 0, i32 0
  %tmp12 = load i8, i8* %tmp11
  tail call void @spam(i8* %tmp9, i8 %tmp12)
  ret void
}

declare void @spam(i8*, i8)
llvm/test/Transforms/GVN/callbr-scalarpre-critedge.ll (new file, 43 lines)
@@ -0,0 +1,43 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt < %s -gvn -S | FileCheck %s

; This test checks that we don't hang trying to split a critical edge in scalar
; PRE when the control flow uses a callbr instruction.

define void @wombat(i64 %arg, i64* %arg1, i64 %arg2, i32* %arg3) {
; CHECK-LABEL: @wombat(
; CHECK-NEXT:  bb:
; CHECK-NEXT:    [[TMP5:%.*]] = or i64 [[ARG2:%.*]], [[ARG:%.*]]
; CHECK-NEXT:    callbr void asm sideeffect "", "X,X"(i8* blockaddress(@wombat, [[BB7:%.*]]), i8* blockaddress(@wombat, [[BB9:%.*]]))
; CHECK-NEXT:    to label [[BB6:%.*]] [label [[BB7]], label %bb9]
; CHECK:       bb6:
; CHECK-NEXT:    br label [[BB7]]
; CHECK:       bb7:
; CHECK-NEXT:    [[TMP8:%.*]] = trunc i64 [[TMP5]] to i32
; CHECK-NEXT:    tail call void @barney(i32 [[TMP8]])
; CHECK-NEXT:    br label [[BB9]]
; CHECK:       bb9:
; CHECK-NEXT:    [[TMP10:%.*]] = trunc i64 [[TMP5]] to i32
; CHECK-NEXT:    store i32 [[TMP10]], i32* [[ARG3:%.*]]
; CHECK-NEXT:    ret void
;
bb:
  %tmp5 = or i64 %arg2, %arg
  callbr void asm sideeffect "", "X,X"(i8* blockaddress(@wombat, %bb7), i8* blockaddress(@wombat, %bb9))
          to label %bb6 [label %bb7, label %bb9]

bb6:                                              ; preds = %bb
  br label %bb7

bb7:                                              ; preds = %bb6, %bb
  %tmp8 = trunc i64 %tmp5 to i32
  tail call void @barney(i32 %tmp8)
  br label %bb9

bb9:                                              ; preds = %bb7, %bb
  %tmp10 = trunc i64 %tmp5 to i32
  store i32 %tmp10, i32* %arg3
  ret void
}

declare void @barney(i32)
llvm/test/Transforms/JumpThreading/callbr-edge-split.ll (new file, 58 lines)
@@ -0,0 +1,58 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt < %s -S -jump-threading | FileCheck %s

; This test used to cause jump threading to try to split an edge of a callbr.

@a = global i32 0

define i32 @c() {
; CHECK-LABEL: @c(
; CHECK-NEXT:  entry:
; CHECK-NEXT:    [[TMP0:%.*]] = load i32, i32* @a
; CHECK-NEXT:    [[TOBOOL:%.*]] = icmp eq i32 [[TMP0]], 0
; CHECK-NEXT:    br i1 [[TOBOOL]], label [[IF_ELSE:%.*]], label [[IF_THEN:%.*]]
; CHECK:       if.then:
; CHECK-NEXT:    [[CALL:%.*]] = call i32 @b()
; CHECK-NEXT:    [[PHITMP:%.*]] = icmp ne i32 [[CALL]], 0
; CHECK-NEXT:    br i1 [[PHITMP]], label [[IF_THEN2:%.*]], label [[IF_END4:%.*]]
; CHECK:       if.else:
; CHECK-NEXT:    callbr void asm sideeffect "", "X"(i8* blockaddress(@c, [[IF_THEN2]]))
; CHECK-NEXT:    to label [[IF_END_THREAD:%.*]] [label %if.then2]
; CHECK:       if.end.thread:
; CHECK-NEXT:    br label [[IF_THEN2]]
; CHECK:       if.then2:
; CHECK-NEXT:    [[CALL3:%.*]] = call i32 @b()
; CHECK-NEXT:    br label [[IF_END4]]
; CHECK:       if.end4:
; CHECK-NEXT:    ret i32 undef
;
entry:
  %0 = load i32, i32* @a
  %tobool = icmp eq i32 %0, 0
  br i1 %tobool, label %if.else, label %if.then

if.then:                                          ; preds = %entry
  %call = call i32 @b() #2
  %phitmp = icmp ne i32 %call, 0
  br label %if.end

if.else:                                          ; preds = %entry
  callbr void asm sideeffect "", "X"(i8* blockaddress(@c, %if.end)) #2
          to label %normal [label %if.end]

normal:                                           ; preds = %if.else
  br label %if.end

if.end:                                           ; preds = %if.else, %normal, %if.then
  %d.0 = phi i1 [ %phitmp, %if.then ], [ undef, %normal ], [ undef, %if.else ]
  br i1 %d.0, label %if.then2, label %if.end4

if.then2:                                         ; preds = %if.end
  %call3 = call i32 @b()
  br label %if.end4

if.end4:                                          ; preds = %if.then2, %if.end
  ret i32 undef
}

declare i32 @b()
@@ -63,14 +63,6 @@ lpad:
   resume { i8*, i32 } zeroinitializer
 }
 
-define i8 @call_with_same_range() {
-; CHECK-LABEL: @call_with_same_range
-; CHECK: tail call i8 @call_with_range
-  bitcast i8 0 to i8
-  %out = call i8 @dummy(), !range !0
-  ret i8 %out
-}
-
 define i8 @invoke_with_same_range() personality i8* undef {
 ; CHECK-LABEL: @invoke_with_same_range()
 ; CHECK: tail call i8 @invoke_with_range()
@@ -84,6 +76,13 @@ lpad:
   resume { i8*, i32 } zeroinitializer
 }
 
+define i8 @call_with_same_range() {
+; CHECK-LABEL: @call_with_same_range
+; CHECK: tail call i8 @call_with_range
+  bitcast i8 0 to i8
+  %out = call i8 @dummy(), !range !0
+  ret i8 %out
+}
+
 declare i8 @dummy();
@@ -3,13 +3,13 @@
 ; CHECK-LABEL: @int_ptr_arg_different
 ; CHECK-NEXT: call void asm
 
+; CHECK-LABEL: @int_ptr_null
+; CHECK-NEXT: tail call void @float_ptr_null()
+
 ; CHECK-LABEL: @int_ptr_arg_same
 ; CHECK-NEXT: %2 = bitcast i32* %0 to float*
 ; CHECK-NEXT: tail call void @float_ptr_arg_same(float* %2)
 
-; CHECK-LABEL: @int_ptr_null
-; CHECK-NEXT: tail call void @float_ptr_null()
-
 ; Used to satisfy minimum size limit
 declare void @stuff()
@@ -291,6 +291,7 @@ static const char *GetCodeName(unsigned CodeID, unsigned BlockID,
       STRINGIFY_CODE(FUNC_CODE, INST_LOADATOMIC)
       STRINGIFY_CODE(FUNC_CODE, INST_STOREATOMIC)
       STRINGIFY_CODE(FUNC_CODE, INST_CMPXCHG)
+      STRINGIFY_CODE(FUNC_CODE, INST_CALLBR)
     }
   case bitc::VALUE_SYMTAB_BLOCK_ID:
     switch (CodeID) {
@@ -23,7 +23,7 @@ syn match llvmType /\<i\d\+\>/
 " The true and false tokens can be used for comparison opcodes, but it's
 " much more common for these tokens to be used for boolean constants.
 syn keyword llvmStatement add addrspacecast alloca and arcp ashr atomicrmw
-syn keyword llvmStatement bitcast br catchpad catchswitch catchret call
+syn keyword llvmStatement bitcast br catchpad catchswitch catchret call callbr
 syn keyword llvmStatement cleanuppad cleanupret cmpxchg eq exact extractelement
 syn keyword llvmStatement extractvalue fadd fast fcmp fdiv fence fmul fpext
 syn keyword llvmStatement fptosi fptoui fptrunc free frem fsub fneg getelementptr