//===- ArgumentPromotion.cpp - Promote by-reference arguments -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This pass promotes "by reference" arguments to be "by value" arguments. In
// practice, this means looking for internal functions that have pointer
// arguments. If it can prove, through the use of alias analysis, that an
// argument is *only* loaded, then it can pass the value into the function
// instead of the address of the value. This can cause recursive simplification
// of code and lead to the elimination of allocas (especially in C++ template
// code like the STL).
//
// This pass also handles aggregate arguments that are passed into a function,
// scalarizing them if the elements of the aggregate are only loaded. Note that
// by default it refuses to scalarize aggregates which would require passing in
// more than three operands to the function, because passing thousands of
// operands for a large array or structure is unprofitable! This limit can be
// configured or disabled, however.
//
// Note that this transformation could also be done for arguments that are only
// stored to (returning the value instead), but this is not currently
// implemented. This case would be best handled when and if LLVM begins
// supporting multiple return values from functions.
//
//===----------------------------------------------------------------------===//
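
// As a rough illustration (a hypothetical example, not drawn from the test
// suite), the pass rewrites IR shaped like
//
//   define internal i32 @callee(i32* %p) {
//     %v = load i32, i32* %p
//     ret i32 %v
//   }
//
// into
//
//   define internal i32 @callee(i32 %p.0.val) {
//     ret i32 %p.0.val
//   }
//
// with each call site updated to load the value and pass it directly.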

#include "llvm/Transforms/IPO/ArgumentPromotion.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/ScopeExit.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/BasicAliasAnalysis.h"
#include "llvm/Analysis/CallGraph.h"
#include "llvm/Analysis/Loads.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/NoFolder.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Use.h"
#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Utils/PromoteMemToReg.h"
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using namespace llvm;

#define DEBUG_TYPE "argpromotion"

STATISTIC(NumArgumentsPromoted, "Number of pointer arguments promoted");
STATISTIC(NumArgumentsDead, "Number of dead pointer args eliminated");

namespace {

struct ArgPart {
  Type *Ty;
  Align Alignment;
  /// A representative guaranteed-executed load or store instruction for use by
  /// metadata transfer.
  Instruction *MustExecInstr;
};

using OffsetAndArgPart = std::pair<int64_t, ArgPart>;

} // end anonymous namespace

static Value *createByteGEP(IRBuilderBase &IRB, const DataLayout &DL,
                            Value *Ptr, Type *ResElemTy, int64_t Offset) {
  // For non-opaque pointers, try to create a "nice" GEP if possible, otherwise
  // fall back to an i8 GEP to a specific offset.
  unsigned AddrSpace = Ptr->getType()->getPointerAddressSpace();
  APInt OrigOffset(DL.getIndexTypeSizeInBits(Ptr->getType()), Offset);
  if (!Ptr->getType()->isOpaquePointerTy()) {
    Type *OrigElemTy = Ptr->getType()->getNonOpaquePointerElementType();
    if (OrigOffset == 0 && OrigElemTy == ResElemTy)
      return Ptr;

    if (OrigElemTy->isSized()) {
      APInt TmpOffset = OrigOffset;
      Type *TmpTy = OrigElemTy;
      SmallVector<APInt> IntIndices =
          DL.getGEPIndicesForOffset(TmpTy, TmpOffset);
      if (TmpOffset == 0) {
        // Try to add trailing zero indices to reach the right type.
        while (TmpTy != ResElemTy) {
          Type *NextTy = GetElementPtrInst::getTypeAtIndex(TmpTy, (uint64_t)0);
          if (!NextTy)
            break;

          IntIndices.push_back(APInt::getZero(
              isa<StructType>(TmpTy) ? 32 : OrigOffset.getBitWidth()));
          TmpTy = NextTy;
        }

        SmallVector<Value *> Indices;
        for (const APInt &Index : IntIndices)
          Indices.push_back(IRB.getInt(Index));

        if (OrigOffset != 0 || TmpTy == ResElemTy) {
          Ptr = IRB.CreateGEP(OrigElemTy, Ptr, Indices);
          return IRB.CreateBitCast(Ptr, ResElemTy->getPointerTo(AddrSpace));
        }
      }
    }
  }

  if (OrigOffset != 0) {
    Ptr = IRB.CreateBitCast(Ptr, IRB.getInt8PtrTy(AddrSpace));
    Ptr = IRB.CreateGEP(IRB.getInt8Ty(), Ptr, IRB.getInt(OrigOffset));
  }
  return IRB.CreateBitCast(Ptr, ResElemTy->getPointerTo(AddrSpace));
}
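
// For example (hypothetical types and offset): with typed pointers, a call
// like createByteGEP(IRB, DL, %p, i32, 8) on a [4 x i32]* first tries the
// "nice" form "getelementptr [4 x i32], [4 x i32]* %p, i64 0, i64 2", while
// with opaque pointers it falls back to "getelementptr i8, ptr %p, i64 8".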

/// DoPromotion - This method actually performs the promotion of the specified
/// arguments, and returns the new function. At this point, we know that it's
/// safe to do so.
static Function *
doPromotion(Function *F, FunctionAnalysisManager &FAM,
            const DenseMap<Argument *, SmallVector<OffsetAndArgPart, 4>>
                &ArgsToPromote) {
  // Start by computing a new prototype for the function, which is the same as
  // the old function, but has modified arguments.
  FunctionType *FTy = F->getFunctionType();
  std::vector<Type *> Params;

  // Attribute - Keep track of the parameter attributes for the arguments
  // that we are *not* promoting. For the ones that we do promote, the
  // parameter attributes are lost.
  SmallVector<AttributeSet, 8> ArgAttrVec;
  AttributeList PAL = F->getAttributes();

  // First, determine the new argument list.
  unsigned ArgNo = 0;
  for (Function::arg_iterator I = F->arg_begin(), E = F->arg_end(); I != E;
       ++I, ++ArgNo) {
    if (!ArgsToPromote.count(&*I)) {
      // Unchanged argument
      Params.push_back(I->getType());
      ArgAttrVec.push_back(PAL.getParamAttrs(ArgNo));
    } else if (I->use_empty()) {
      // Dead argument (dead arguments are always marked as promotable)
      ++NumArgumentsDead;
    } else {
      const auto &ArgParts = ArgsToPromote.find(&*I)->second;
      for (const auto &Pair : ArgParts) {
        Params.push_back(Pair.second.Ty);
        ArgAttrVec.push_back(AttributeSet());
      }
      ++NumArgumentsPromoted;
    }
  }

  Type *RetTy = FTy->getReturnType();

  // Construct the new function type using the new arguments.
  FunctionType *NFTy = FunctionType::get(RetTy, Params, FTy->isVarArg());

  // Create the new function body and insert it into the module.
  Function *NF = Function::Create(NFTy, F->getLinkage(), F->getAddressSpace(),
                                  F->getName());
  NF->copyAttributesFrom(F);
  NF->copyMetadata(F, 0);

  // The new function will have the !dbg metadata copied from the original
  // function. The original function may not be deleted, and !dbg metadata
  // must be unique, so drop it from the old function.
  F->setSubprogram(nullptr);

  LLVM_DEBUG(dbgs() << "ARG PROMOTION: Promoting to:" << *NF << "\n"
                    << "From: " << *F);

  uint64_t LargestVectorWidth = 0;
  for (auto *I : Params)
    if (auto *VT = dyn_cast<llvm::VectorType>(I))
      LargestVectorWidth = std::max(
          LargestVectorWidth, VT->getPrimitiveSizeInBits().getKnownMinSize());

  // Recompute the parameter attributes list based on the new arguments for
  // the function.
  NF->setAttributes(AttributeList::get(F->getContext(), PAL.getFnAttrs(),
                                       PAL.getRetAttrs(), ArgAttrVec));
  AttributeFuncs::updateMinLegalVectorWidthAttr(*NF, LargestVectorWidth);
  ArgAttrVec.clear();

  F->getParent()->getFunctionList().insert(F->getIterator(), NF);
  NF->takeName(F);

  // Loop over all the callers of the function, transforming the call sites to
  // pass in the loaded pointers.
  SmallVector<Value *, 16> Args;
  const DataLayout &DL = F->getParent()->getDataLayout();
  while (!F->use_empty()) {
    CallBase &CB = cast<CallBase>(*F->user_back());
    assert(CB.getCalledFunction() == F);
    const AttributeList &CallPAL = CB.getAttributes();
    IRBuilder<NoFolder> IRB(&CB);

    // Loop over the operands, inserting GEP and loads in the caller as
    // appropriate.
    auto *AI = CB.arg_begin();
    ArgNo = 0;
    for (Function::arg_iterator I = F->arg_begin(), E = F->arg_end(); I != E;
         ++I, ++AI, ++ArgNo) {
      if (!ArgsToPromote.count(&*I)) {
        Args.push_back(*AI); // Unmodified argument
        ArgAttrVec.push_back(CallPAL.getParamAttrs(ArgNo));
      } else if (!I->use_empty()) {
        Value *V = *AI;
        const auto &ArgParts = ArgsToPromote.find(&*I)->second;
        for (const auto &Pair : ArgParts) {
          LoadInst *LI = IRB.CreateAlignedLoad(
              Pair.second.Ty,
              createByteGEP(IRB, DL, V, Pair.second.Ty, Pair.first),
              Pair.second.Alignment, V->getName() + ".val");
          if (Pair.second.MustExecInstr) {
            LI->setAAMetadata(Pair.second.MustExecInstr->getAAMetadata());
            LI->copyMetadata(*Pair.second.MustExecInstr,
                             {LLVMContext::MD_range, LLVMContext::MD_nonnull,
                              LLVMContext::MD_dereferenceable,
                              LLVMContext::MD_dereferenceable_or_null,
                              LLVMContext::MD_align, LLVMContext::MD_noundef,
                              LLVMContext::MD_nontemporal});
          }
          Args.push_back(LI);
          ArgAttrVec.push_back(AttributeSet());
        }
      }
    }

    // Push any varargs arguments on the list.
    for (; AI != CB.arg_end(); ++AI, ++ArgNo) {
      Args.push_back(*AI);
      ArgAttrVec.push_back(CallPAL.getParamAttrs(ArgNo));
    }

    SmallVector<OperandBundleDef, 1> OpBundles;
    CB.getOperandBundlesAsDefs(OpBundles);

    CallBase *NewCS = nullptr;
    if (InvokeInst *II = dyn_cast<InvokeInst>(&CB)) {
      NewCS = InvokeInst::Create(NF, II->getNormalDest(), II->getUnwindDest(),
                                 Args, OpBundles, "", &CB);
    } else {
      auto *NewCall = CallInst::Create(NF, Args, OpBundles, "", &CB);
      NewCall->setTailCallKind(cast<CallInst>(&CB)->getTailCallKind());
      NewCS = NewCall;
    }
    NewCS->setCallingConv(CB.getCallingConv());
    NewCS->setAttributes(AttributeList::get(F->getContext(),
                                            CallPAL.getFnAttrs(),
                                            CallPAL.getRetAttrs(), ArgAttrVec));
    NewCS->copyMetadata(CB, {LLVMContext::MD_prof, LLVMContext::MD_dbg});
    Args.clear();
    ArgAttrVec.clear();

    AttributeFuncs::updateMinLegalVectorWidthAttr(*CB.getCaller(),
                                                  LargestVectorWidth);

    if (!CB.use_empty()) {
      CB.replaceAllUsesWith(NewCS);
      NewCS->takeName(&CB);
    }

    // Finally, remove the old call from the program, reducing the use-count of
    // F.
    CB.eraseFromParent();
  }

  // Since we have now created the new function, splice the body of the old
  // function right into the new function, leaving the old rotting hulk of the
  // function empty.
  NF->splice(NF->begin(), F);

  // We will collect all the newly created allocas to promote them into
  // registers after the following loop.
  SmallVector<AllocaInst *, 4> Allocas;

  // Loop over the argument list, transferring uses of the old arguments over
  // to the new arguments, also transferring over the names as well.
  Function::arg_iterator I2 = NF->arg_begin();
  for (Argument &Arg : F->args()) {
    if (!ArgsToPromote.count(&Arg)) {
      // If this is an unmodified argument, move the name and users over to the
      // new version.
      Arg.replaceAllUsesWith(&*I2);
      I2->takeName(&Arg);
      ++I2;
      continue;
    }

    // There potentially are metadata uses for things like llvm.dbg.value.
    // Replace them with undef, after handling the other regular uses.
    auto RauwUndefMetadata = make_scope_exit(
        [&]() { Arg.replaceAllUsesWith(UndefValue::get(Arg.getType())); });

    if (Arg.use_empty())
      continue;

    // Otherwise, if we promoted this argument, we have to create an alloca in
    // the callee for every promotable part and store each of the new incoming
    // arguments into the corresponding alloca, which gives the old code (in
    // particular any permitted store instructions) a chance to work as before.
    assert(Arg.getType()->isPointerTy() &&
           "Only arguments with a pointer type are promotable");

    IRBuilder<NoFolder> IRB(&NF->begin()->front());

    // Add only the promoted parts, i.e. the entries recorded in ArgsToPromote.
    SmallDenseMap<int64_t, AllocaInst *> OffsetToAlloca;
    for (const auto &Pair : ArgsToPromote.find(&Arg)->second) {
      int64_t Offset = Pair.first;
      const ArgPart &Part = Pair.second;

      Argument *NewArg = I2++;
      NewArg->setName(Arg.getName() + "." + Twine(Offset) + ".val");

      AllocaInst *NewAlloca = IRB.CreateAlloca(
          Part.Ty, nullptr, Arg.getName() + "." + Twine(Offset) + ".allc");
      NewAlloca->setAlignment(Pair.second.Alignment);
      IRB.CreateAlignedStore(NewArg, NewAlloca, Pair.second.Alignment);

      // Collect the alloca to retarget the users to.
      OffsetToAlloca.insert({Offset, NewAlloca});
    }

    auto GetAlloca = [&](Value *Ptr) {
      APInt Offset(DL.getIndexTypeSizeInBits(Ptr->getType()), 0);
      Ptr = Ptr->stripAndAccumulateConstantOffsets(DL, Offset,
                                                   /* AllowNonInbounds */ true);
      assert(Ptr == &Arg && "Not constant offset from arg?");
      return OffsetToAlloca.lookup(Offset.getSExtValue());
    };

    // Clean up the dead instructions (GEPs and BitCasts between the original
    // argument and its users, i.e. loads and stores), and retarget every user
    // to the newly created alloca.
    SmallVector<Value *, 16> Worklist;
    SmallVector<Instruction *, 16> DeadInsts;
    append_range(Worklist, Arg.users());
    while (!Worklist.empty()) {
      Value *V = Worklist.pop_back_val();
      if (isa<BitCastInst>(V) || isa<GetElementPtrInst>(V)) {
        DeadInsts.push_back(cast<Instruction>(V));
        append_range(Worklist, V->users());
        continue;
      }

      if (auto *LI = dyn_cast<LoadInst>(V)) {
        Value *Ptr = LI->getPointerOperand();
        LI->setOperand(LoadInst::getPointerOperandIndex(), GetAlloca(Ptr));
        continue;
      }

      if (auto *SI = dyn_cast<StoreInst>(V)) {
        assert(!SI->isVolatile() && "Volatile operations can't be promoted.");
        Value *Ptr = SI->getPointerOperand();
        SI->setOperand(StoreInst::getPointerOperandIndex(), GetAlloca(Ptr));
        continue;
      }

      llvm_unreachable("Unexpected user");
    }

    for (Instruction *I : DeadInsts) {
      I->replaceAllUsesWith(PoisonValue::get(I->getType()));
      I->eraseFromParent();
    }

    // Collect the allocas for promotion.
    for (const auto &Pair : OffsetToAlloca) {
      assert(isAllocaPromotable(Pair.second) &&
             "By design, only promotable allocas should be produced.");
      Allocas.push_back(Pair.second);
    }
  }

  LLVM_DEBUG(dbgs() << "ARG PROMOTION: " << Allocas.size()
                    << " alloca(s) are promotable by Mem2Reg\n");

  if (!Allocas.empty()) {
    // Our earlier checks have ensured that PromoteMemToReg() will succeed.
    auto &DT = FAM.getResult<DominatorTreeAnalysis>(*NF);
    auto &AC = FAM.getResult<AssumptionAnalysis>(*NF);
    PromoteMemToReg(Allocas, DT, &AC);
  }

  return NF;
}

/// Return true if we can prove that all callers pass in a valid pointer for
/// the specified function argument.
static bool allCallersPassValidPointerForArgument(Argument *Arg,
                                                  Align NeededAlign,
                                                  uint64_t NeededDerefBytes) {
  Function *Callee = Arg->getParent();
  const DataLayout &DL = Callee->getParent()->getDataLayout();
  APInt Bytes(64, NeededDerefBytes);

  // Check if the argument itself is marked dereferenceable and aligned.
  if (isDereferenceableAndAlignedPointer(Arg, NeededAlign, Bytes, DL))
    return true;

  // Look at all call sites of the function. At this point we know we only
  // have direct call sites.
  return all_of(Callee->users(), [&](User *U) {
    CallBase &CB = cast<CallBase>(*U);
    return isDereferenceableAndAlignedPointer(CB.getArgOperand(Arg->getArgNo()),
                                              NeededAlign, Bytes, DL);
  });
}

/// Determine that this argument is safe to promote, and find the argument
/// parts it can be promoted into.
static bool findArgParts(Argument *Arg, const DataLayout &DL, AAResults &AAR,
                         unsigned MaxElements, bool IsRecursive,
                         SmallVectorImpl<OffsetAndArgPart> &ArgPartsVec) {
  // Quick exit for unused arguments.
  if (Arg->use_empty())
    return true;

  // We can only promote this argument if all of its uses are loads at known
  // offsets.
  //
  // Promoting the argument causes it to be loaded in the caller
  // unconditionally. This is only safe if we can prove that either the load
  // would have happened in the callee anyway (i.e., there is a load in the
  // entry block) or the pointer passed in at every call site is guaranteed to
  // be valid. In the former case, invalid loads can happen, but would have
  // happened anyway; in the latter case, invalid loads won't happen. Either
  // way, this prevents us from introducing an invalid load that wouldn't have
  // happened in the original code.
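  //
  // For example (a hypothetical IR sketch, not taken from the test suite):
  //
  //   define internal i32 @f(ptr %p) {
  //   entry:
  //     %v = load i32, ptr %p   ; executes on every call, safe to hoist
  //     ret i32 %v
  //   }
  //
  // Here the load is in the entry block, so hoisting it into the caller is
  // safe. A load that only executes under a condition checked inside @f could
  // be promoted only if every caller passes a pointer known to be valid.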

  SmallDenseMap<int64_t, ArgPart, 4> ArgParts;
  Align NeededAlign(1);
  uint64_t NeededDerefBytes = 0;

  // If this is a byval argument, we also allow store instructions. Only
  // handle arguments with a specified alignment this way; if the alignment is
  // unspecified, the actual alignment of the argument is target-specific.
  bool AreStoresAllowed = Arg->getParamByValType() && Arg->getParamAlign();

  // An end user of a pointer argument is a load or store instruction.
  // Returns std::nullopt if this load or store is not based on the argument,
  // true if we can promote the instruction, and false otherwise.
  auto HandleEndUser = [&](auto *I, Type *Ty,
                           bool GuaranteedToExecute) -> std::optional<bool> {
    // Don't promote volatile or atomic instructions.
    if (!I->isSimple())
      return false;

    Value *Ptr = I->getPointerOperand();
    APInt Offset(DL.getIndexTypeSizeInBits(Ptr->getType()), 0);
    Ptr = Ptr->stripAndAccumulateConstantOffsets(DL, Offset,
                                                 /* AllowNonInbounds */ true);
    if (Ptr != Arg)
      return std::nullopt;

    if (Offset.getSignificantBits() >= 64)
      return false;

    TypeSize Size = DL.getTypeStoreSize(Ty);
    // Don't try to promote scalable types.
    if (Size.isScalable())
      return false;

    // If this is a recursive function and one of the types is a pointer,
    // then promoting it might lead to recursive promotion.
    if (IsRecursive && Ty->isPointerTy())
      return false;

    int64_t Off = Offset.getSExtValue();
    auto Pair = ArgParts.try_emplace(
        Off, ArgPart{Ty, I->getAlign(), GuaranteedToExecute ? I : nullptr});
    ArgPart &Part = Pair.first->second;
    bool OffsetNotSeenBefore = Pair.second;

    // We limit promotion to only promoting up to a fixed number of elements
    // of the aggregate.
    if (MaxElements > 0 && ArgParts.size() > MaxElements) {
      LLVM_DEBUG(dbgs() << "ArgPromotion of " << *Arg << " failed: "
                        << "more than " << MaxElements << " parts\n");
      return false;
    }

    // For now, we only support loading/storing one specific type at a given
    // offset.
    if (Part.Ty != Ty) {
      LLVM_DEBUG(dbgs() << "ArgPromotion of " << *Arg << " failed: "
                        << "accessed as both " << *Part.Ty << " and " << *Ty
                        << " at offset " << Off << "\n");
      return false;
    }

    // If this instruction is not guaranteed to execute, and we haven't seen a
    // load or store at this offset before (or it had lower alignment), then we
    // need to remember that requirement.
    // Note that skipping instructions of previously seen offsets is only
    // correct because we only allow a single type for a given offset, which
    // also means that the number of accessed bytes will be the same.
    if (!GuaranteedToExecute &&
        (OffsetNotSeenBefore || Part.Alignment < I->getAlign())) {
      // We won't be able to prove dereferenceability for negative offsets.
      if (Off < 0)
        return false;

      // If the offset is not aligned, an aligned base pointer won't help.
      if (!isAligned(I->getAlign(), Off))
        return false;

      NeededDerefBytes = std::max(NeededDerefBytes, Off + Size.getFixedSize());
      NeededAlign = std::max(NeededAlign, I->getAlign());
    }

    Part.Alignment = std::max(Part.Alignment, I->getAlign());
    return true;
  };

  // Look for loads and stores that are guaranteed to execute on entry.
  for (Instruction &I : Arg->getParent()->getEntryBlock()) {
    std::optional<bool> Res{};
    if (LoadInst *LI = dyn_cast<LoadInst>(&I))
      Res = HandleEndUser(LI, LI->getType(), /* GuaranteedToExecute */ true);
    else if (StoreInst *SI = dyn_cast<StoreInst>(&I))
      Res = HandleEndUser(SI, SI->getValueOperand()->getType(),
                          /* GuaranteedToExecute */ true);
    if (Res && !*Res)
      return false;

    if (!isGuaranteedToTransferExecutionToSuccessor(&I))
      break;
  }

  // Now look at all loads of the argument. Remember the load instructions
  // for the aliasing check below.
  SmallVector<const Use *, 16> Worklist;
  SmallPtrSet<const Use *, 16> Visited;
  SmallVector<LoadInst *, 16> Loads;
  auto AppendUses = [&](const Value *V) {
    for (const Use &U : V->uses())
      if (Visited.insert(&U).second)
        Worklist.push_back(&U);
  };
  AppendUses(Arg);
  while (!Worklist.empty()) {
    const Use *U = Worklist.pop_back_val();
    Value *V = U->getUser();
    if (isa<BitCastInst>(V)) {
      AppendUses(V);
      continue;
    }

    if (auto *GEP = dyn_cast<GetElementPtrInst>(V)) {
      if (!GEP->hasAllConstantIndices())
        return false;
      AppendUses(V);
      continue;
    }

    if (auto *LI = dyn_cast<LoadInst>(V)) {
      if (!*HandleEndUser(LI, LI->getType(), /* GuaranteedToExecute */ false))
        return false;
      Loads.push_back(LI);
      continue;
    }

    // Stores are allowed for byval arguments. Only stores TO the argument
    // are allowed; all other stores are unknown users.
    auto *SI = dyn_cast<StoreInst>(V);
    if (AreStoresAllowed && SI &&
        U->getOperandNo() == StoreInst::getPointerOperandIndex()) {
      if (!*HandleEndUser(SI, SI->getValueOperand()->getType(),
                          /* GuaranteedToExecute */ false))
        return false;
      continue;
    }

    // Unknown user.
    LLVM_DEBUG(dbgs() << "ArgPromotion of " << *Arg << " failed: "
                      << "unknown user " << *V << "\n");
    return false;
  }

  if (NeededDerefBytes || NeededAlign > 1) {
    // Try to prove the required dereferenceability and alignment at every
    // call site.
    if (!allCallersPassValidPointerForArgument(Arg, NeededAlign,
                                               NeededDerefBytes)) {
      LLVM_DEBUG(dbgs() << "ArgPromotion of " << *Arg << " failed: "
                        << "not dereferenceable or aligned\n");
      return false;
    }
  }

  if (ArgParts.empty())
    return true; // No users, this is a dead argument.

  // Sort parts by offset.
  append_range(ArgPartsVec, ArgParts);
  sort(ArgPartsVec, llvm::less_first());

  // Make sure the parts are non-overlapping.
  int64_t Offset = ArgPartsVec[0].first;
  for (const auto &Pair : ArgPartsVec) {
    if (Pair.first < Offset)
      return false; // Overlap with previous part.

    Offset = Pair.first + DL.getTypeStoreSize(Pair.second.Ty);
  }

  // If store instructions are allowed, the path from the entry of the
  // function to each load may not be free of instructions that potentially
  // invalidate the load, and this is an admissible situation.
  if (AreStoresAllowed)
    return true;

  // Okay, now we know that the argument is only used by load instructions,
  // and it is safe to unconditionally perform all of them. Use alias analysis
  // to check whether the pointer is guaranteed not to be modified from the
  // entry of the function to each of the load instructions.

  // Because there could be many load instructions, remember which blocks we
  // know to be transparent to the load.
  df_iterator_default_set<BasicBlock *, 16> TranspBlocks;

  for (LoadInst *Load : Loads) {
    // Check to see if the load is invalidated from the start of the block to
    // the load itself.
    BasicBlock *BB = Load->getParent();

    MemoryLocation Loc = MemoryLocation::get(Load);
    if (AAR.canInstructionRangeModRef(BB->front(), *Load, Loc, ModRefInfo::Mod))
      return false; // Pointer is invalidated!

    // Now check every path from the entry block to the load for transparency.
    // To do this, we perform a depth first search on the inverse CFG from the
    // loading block.
    for (BasicBlock *P : predecessors(BB)) {
      for (BasicBlock *TranspBB : inverse_depth_first_ext(P, TranspBlocks))
        if (AAR.canBasicBlockModify(*TranspBB, Loc))
          return false;
    }
  }

  // If the path from the entry of the function to each load is free of
  // instructions that potentially invalidate the load, we can make the
  // transformation!
  return true;
}
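
// As an illustration (a hypothetical IR sketch, not an excerpt from the test
// suite), promoting the pointer argument of
//
//   define internal i32 @callee(ptr %p) {
//     %v = load i32, ptr %p
//     ret i32 %v
//   }
//
// rewrites the callee to take the loaded value directly:
//
//   define internal i32 @callee(i32 %p.val) {
//     ret i32 %p.val
//   }
//
// and every call site is rewritten to perform the load and pass the value in.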

/// Check if callers and callee agree on how promoted arguments would be
/// passed.
static bool areTypesABICompatible(ArrayRef<Type *> Types, const Function &F,
                                  const TargetTransformInfo &TTI) {
  return all_of(F.uses(), [&](const Use &U) {
    CallBase *CB = dyn_cast<CallBase>(U.getUser());
    if (!CB)
      return false;

    const Function *Caller = CB->getCaller();
    const Function *Callee = CB->getCalledFunction();
    return TTI.areTypesABICompatible(Caller, Callee, Types);
  });
}

/// PromoteArguments - This method checks the specified function to see if
/// there are any promotable arguments and if it is safe to promote the
/// function (for example, all callers are direct). If it is safe to promote
/// some arguments, it calls the doPromotion method.
static Function *promoteArguments(Function *F, FunctionAnalysisManager &FAM,
                                  unsigned MaxElements, bool IsRecursive) {
  // Don't perform argument promotion for naked functions; otherwise we can
  // end up removing parameters that are seemingly 'not used' as they are
  // referred to in the assembly.
  if (F->hasFnAttribute(Attribute::Naked))
    return nullptr;

  // Make sure that it is local to this module.
  if (!F->hasLocalLinkage())
    return nullptr;

  // Don't promote arguments for variadic functions. Adding, removing, or
  // changing non-pack parameters can change the classification of pack
  // parameters. Frontends encode that classification at the call site in the
  // IR, while in the callee the classification is determined dynamically
  // based on the number of registers consumed so far.
  if (F->isVarArg())
    return nullptr;

  // Don't transform functions that receive inallocas, as the transformation
  // may not be safe depending on the calling convention.
  if (F->getAttributes().hasAttrSomewhere(Attribute::InAlloca))
    return nullptr;

  // First check: see if there are any pointer arguments! If not, quick exit.
  SmallVector<Argument *, 16> PointerArgs;
  for (Argument &I : F->args())
    if (I.getType()->isPointerTy())
      PointerArgs.push_back(&I);
  if (PointerArgs.empty())
    return nullptr;

  // Second check: make sure that all callers are direct callers. We can't
  // transform functions that have indirect callers. Also see if the function
  // is self-recursive.
  for (Use &U : F->uses()) {
    CallBase *CB = dyn_cast<CallBase>(U.getUser());
    // Must be a direct call.
    if (CB == nullptr || !CB->isCallee(&U) ||
        CB->getFunctionType() != F->getFunctionType())
      return nullptr;

    // Can't change the signature of a musttail callee.
    if (CB->isMustTailCall())
      return nullptr;

    if (CB->getFunction() == F)
      IsRecursive = true;
  }

  // Can't change the signature of a musttail caller.
  // FIXME: Support promoting a whole chain of musttail functions.
  for (BasicBlock &BB : *F)
    if (BB.getTerminatingMustTailCall())
      return nullptr;

  const DataLayout &DL = F->getParent()->getDataLayout();
  auto &AAR = FAM.getResult<AAManager>(*F);
  const auto &TTI = FAM.getResult<TargetIRAnalysis>(*F);

  // Check to see which arguments are promotable. If an argument is
  // promotable, add it to ArgsToPromote.
  DenseMap<Argument *, SmallVector<OffsetAndArgPart, 4>> ArgsToPromote;
  for (Argument *PtrArg : PointerArgs) {
    // Replace the sret attribute with noalias. This reduces register pressure
    // by avoiding a register copy.
    if (PtrArg->hasStructRetAttr()) {
      unsigned ArgNo = PtrArg->getArgNo();
      F->removeParamAttr(ArgNo, Attribute::StructRet);
      F->addParamAttr(ArgNo, Attribute::NoAlias);
      for (Use &U : F->uses()) {
        CallBase &CB = cast<CallBase>(*U.getUser());
        CB.removeParamAttr(ArgNo, Attribute::StructRet);
        CB.addParamAttr(ArgNo, Attribute::NoAlias);
      }
    }

    // Check if we can promote the pointer to its value.
    SmallVector<OffsetAndArgPart, 4> ArgParts;

    if (findArgParts(PtrArg, DL, AAR, MaxElements, IsRecursive, ArgParts)) {
      SmallVector<Type *, 4> Types;
      for (const auto &Pair : ArgParts)
        Types.push_back(Pair.second.Ty);

      if (areTypesABICompatible(Types, *F, TTI))
        ArgsToPromote.insert({PtrArg, std::move(ArgParts)});
    }
  }

  // No promotable pointer arguments.
  if (ArgsToPromote.empty())
    return nullptr;

  return doPromotion(F, FAM, ArgsToPromote);
}

PreservedAnalyses ArgumentPromotionPass::run(LazyCallGraph::SCC &C,
                                             CGSCCAnalysisManager &AM,
                                             LazyCallGraph &CG,
                                             CGSCCUpdateResult &UR) {
  bool Changed = false, LocalChange;

  // Iterate until we stop promoting from this SCC.
  do {
    LocalChange = false;

    FunctionAnalysisManager &FAM =
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();

    bool IsRecursive = C.size() > 1;
    for (LazyCallGraph::Node &N : C) {
      Function &OldF = N.getFunction();
      Function *NewF = promoteArguments(&OldF, FAM, MaxElements, IsRecursive);
      if (!NewF)
        continue;
      LocalChange = true;

      // Directly substitute the functions in the call graph. Note that this
      // requires the old function to be completely dead and completely
      // replaced by the new function. It does no call graph updates; it
      // merely swaps out the particular function mapped to a particular node
      // in the graph.
      C.getOuterRefSCC().replaceNodeFunction(N, *NewF);
      FAM.clear(OldF, OldF.getName());
      OldF.eraseFromParent();

      PreservedAnalyses FuncPA;
      FuncPA.preserveSet<CFGAnalyses>();
      for (auto *U : NewF->users()) {
        auto *UserF = cast<CallBase>(U)->getFunction();
        FAM.invalidate(*UserF, FuncPA);
      }
    }

    Changed |= LocalChange;
  } while (LocalChange);

  if (!Changed)
    return PreservedAnalyses::all();

  PreservedAnalyses PA;
  // We've cleared out analyses for deleted functions.
  PA.preserve<FunctionAnalysisManagerCGSCCProxy>();
  // We've manually invalidated analyses for functions we've modified.
  PA.preserveSet<AllAnalysesOn<Function>>();
  return PA;
}