Multiplying raw block frequency with an integer carries a high risk
of overflow.
- Add `BlockFrequency::mul`, which returns a std::optional with the product
or std::nullopt to indicate an overflow.
- Fix two instances where overflow was likely.
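A minimal sketch of the checked multiply, assuming the underlying frequency
is a plain uint64_t (the helper name and signature are illustrative, not the
upstream API):

  #include <cstdint>
  #include <optional>

  // Returns Freq * N, or std::nullopt if the product does not fit
  // in 64 bits.
  std::optional<uint64_t> checkedMul(uint64_t Freq, uint64_t N) {
    // Freq * N overflows iff Freq > UINT64_MAX / N (for N != 0).
    if (N != 0 && Freq > UINT64_MAX / N)
      return std::nullopt;
    return Freq * N;
  }

Callers then handle the overflow case explicitly (e.g. by saturating)
instead of silently wrapping.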
Continuing the patch series to get rid of debug intrinsics [0], instruction
insertion needs to be done with iterators rather than instruction pointers,
so that we can communicate information in the iterator class. This patch
adds an iterator-taking insertBefore method and converts various call sites
to take iterators. These are all sites where such debug-info needs to be
preserved so that a stage2 clang can be built identically; it's likely that
many more will need to be changed in the future.
At this stage, this is just changing the spelling of a few operations,
which will eventually become significant once the debug-info-bearing
iterator is used.
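At a call site, the change of spelling is roughly the following
(illustrative, assuming `InsertPt` is a `BasicBlock::iterator`):

  // Before: position named by a raw Instruction pointer.
  NewI->insertBefore(&*InsertPt);
  // After: pass the iterator itself, which can later carry
  // debug-info state alongside the position.
  NewI->insertBefore(InsertPt);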
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152537
Before eliminating mostly empty blocks, it makes sense to remove dead PHI nodes.
This opens up more opportunities for elimination, and it eliminates dead code itself.
It turned out that this change causes many unit tests to fail; I've updated
some of them, and for another I've disabled this optimization.
The pattern I observed in the failing tests is an infinite loop
without side effects. After the dead PHI node is eliminated, all other
related instructions are removed as well, and the tests stop checking what they are expected to check.
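A minimal sketch of the clean-up step using common LLVM helpers
(illustrative; the actual patch may use a different helper):

  // Drop PHI nodes with no users before trying to merge the
  // mostly empty block away.
  for (PHINode &PN : llvm::make_early_inc_range(BB->phis()))
    if (PN.use_empty())
      PN.eraseFromParent();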
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D158503
Refactor to use BasicBlockUtils functions and make life easier for
a subsequent patch that updates the dominator tree.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D154053
Release the BlockFrequencyInfo and BranchProbabilityInfo results and other
per-function information immediately afterwards, instead of holding
onto the memory until the next `CodeGenPrepare::runOnFunction` call.
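A minimal sketch of the pattern, assuming the results are held in
std::unique_ptr members and `optimizeFunction` stands in for the pass body
(both assumptions):

  bool CodeGenPrepare::runOnFunction(Function &F) {
    BPI = std::make_unique<BranchProbabilityInfo>(F, *LI);
    BFI = std::make_unique<BlockFrequencyInfo>(F, *BPI, *LI);
    bool Changed = optimizeFunction(F);
    // Release the per-function results now rather than keeping the
    // memory alive until the next runOnFunction call.
    BFI.reset();
    BPI.reset();
    return Changed;
  }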
Differential Revision: https://reviews.llvm.org/D152552
Co-authored-by: Erik Hogeman <erik.hogeman@arm.com>
Some transformations in the CodeGenPrepare pass may create and/or delete
basic blocks, but they don't update the LoopInfo, so the LoopInfo may
end up containing dangling pointers and sometimes reused basic blocks,
which leads to "interesting" non-deterministic behaviour.
These transformations do not seem to alter the loop structure of the
function, and updating the loop info is quite straightforward.
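A minimal sketch of the kind of bookkeeping this entails (illustrative):

  // When creating NewBB next to OldBB, register it with LoopInfo.
  if (Loop *L = LI->getLoopFor(OldBB))
    L->addBasicBlockToLoop(NewBB, *LI);
  // When deleting a block, drop it from LoopInfo first so no
  // dangling pointer is left behind.
  LI->removeBlock(DeadBB);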
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D150384
InstCombine tries to swap compare operands to match sub instructions
in order to expose "CSE opportunities". However, it doesn't really
make sense to perform this transform in the middle-end, as we cannot
actually CSE the instructions there.
The backend already performs this fold in
18f5446a45/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (L4236)
at the SDAG level; however, this only works within a single basic block.
To handle cross-BB cases, we do need to handle this in the IR layer.
This patch moves the fold from InstCombine to CGP in the backend,
while keeping the same (somewhat dubious) heuristic.
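A source-level illustration of the pattern (a hypothetical C++ analogue;
the fold itself operates on icmp/sub pairs in IR):

  int cmp_and_diff(int a, int b, bool &Less) {
    // The compare uses (a, b) while the sub uses (b, a); swapping
    // the compare operands lets the backend reuse the flags set by
    // the subtraction instead of emitting a separate comparison.
    Less = a < b;
    return b - a;
  }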
Differential Revision: https://reviews.llvm.org/D152541
If the ZExt can be lowered to a single ZExt to the next power-of-2 type and
the remaining ZExt folded into the user, don't use tbl lowering.
Fixes #62620.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D150482
Since the class `InstructionRemover` manages resources such as dynamically allocated memory, it's generally good practice to either implement a custom copy constructor or disable the default one.
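A minimal sketch of the second option (illustrative; only the copy members
matter here):

  class InstructionRemover {
    void *State = nullptr; // stands in for the owned resource
  public:
    InstructionRemover() = default;
    // Member-wise copying would alias (and later double-free) the
    // owned state, so disable the implicit copy operations.
    InstructionRemover(const InstructionRemover &) = delete;
    InstructionRemover &operator=(const InstructionRemover &) = delete;
  };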
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D151543
While the original motivation for this patch (address space 7 on
AMDGPU) has been reworked and is not presently planned to reach IR
translation, the incorrect (per the spec) handling of index offset
width in IR translation and CodeGenPrepare is likely to trip someone up
(possibly future AMD work, since we now have a p7:160:256:256:32 data
layout), so we convert to the other API now.
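The gist, in terms of the DataLayout API (illustrative):

  // Offset arithmetic should use the index width of the address
  // space; for a fat pointer like p7:160:256:256:32 the index width
  // is 32 bits while the pointer size is 160 bits.
  unsigned IdxBits = DL.getIndexSizeInBits(AddrSpace);
  // ...rather than DL.getPointerSizeInBits(AddrSpace).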
Reviewed By: aemerson, arsenm
Differential Revision: https://reviews.llvm.org/D143526
This is a rework of:
- rG13e77db2df94 (r328395; MVT)
Since `LowLevelType.h` has been restored to `CodeGen`, `MachineValueType.h`
can be restored as well.
Depends on D148767
Differential Revision: https://reviews.llvm.org/D149024
First, in CodeGenPrepare.cpp, line 6891, VectorCond will always be false,
because otherwise the function would already have returned at line 6888.
Second, in SelectionDAGBuilder.cpp, line 5443, getSExtValue() returns the
value as a signed integer, but we keep it in an unsigned Val, which makes
the if condition at line 5452 meaningless.
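A hypothetical reduction of the second problem (not the actual code):

  #include <cstdint>

  bool isNegative(int64_t SExtVal) {
    uint64_t Val = SExtVal; // negative values wrap to large positives
    return Val < 0;         // always false: the check is meaningless
  }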
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D149033
When checking the profitability of folding an address computation
into a memory instruction, the compiler tries to determine the liveness
of the values comprising the address at the point of the memory instruction.
This patch improves the live-variable estimates by also including
the loop invariants which are referenced in the loop body.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143897
The "nested" `AddressingModeMatcher`s in
`AddressingModeMatcher::isProfitableToFoldIntoAddressingMode` are constructed
using the original memory instruction, even though they check whether the
address operand of a differrent memory instructon is foldable. The memory
instruction is used only for a dominance check (when not checking for
profitability), and using the wrong memory instruction does not change the
outcome of the test - if an address is foldable, the dominance test afects which
of the two possible ways to fold is chosen, but this result is discarded.
As an example, in
target triple = "x86_64-linux"
declare i1 @check(i64, i64)
define i32 @f(i1 %cc, ptr %p, ptr %q, i64 %n) {
entry:
  br label %loop
loop:
  %iv = phi i64 [ %i, %C ], [ 0, %entry ]
  %offs = mul i64 %iv, 4
  %c.0 = icmp ult i64 %iv, %n
  br i1 %c.0, label %A, label %fail
A:
  br i1 %cc, label %B, label %C
C:
  %u = phi i32 [ 0, %A ], [ %w, %B ]
  %i = add i64 %iv, 1
  %a.0 = getelementptr i8, ptr %p, i64 %offs
  %a.1 = getelementptr i8, ptr %a.0, i64 4
  %v = load i32, ptr %a.1
  %c.1 = icmp eq i32 %v, %u
  br i1 %c.1, label %exit, label %loop
B:
  %a.2 = getelementptr i8, ptr %p, i64 %offs
  %a.3 = getelementptr i8, ptr %a.2, i64 4
  %w = load i32, ptr %a.3
  br label %C
exit:
  ret i32 -1
fail:
  ret i32 0
}
the dominance test is performed between `%i = ...` and `%v = ...` at the moment
we're checking whether `%a.3 = ...` is foldable.
Using the memory instruction which uses the interesting address is "more
correct", and this change is needed by a future patch.
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D143896
This patch recommits 0827e2fa3fd15b49fd2d0fc676753f11abb60cab after
it was reverted in ed7ada259f665a742561b88e9e6c078e9ea85224. It adds a
workaround for `TargetLowering::AddrMode` no longer being an aggregate
in C++20.
`AArch64TargetLowering::isLegalAddressingMode` has a number of
defects, including accepting an addressing mode which consists of
only an immediate operand, and not checking the offset range for an
addressing mode of the form `1*ScaledReg + Offs`.
This patch fixes the above issues.
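A minimal sketch of the first check, in terms of
`TargetLowering::AddrMode` (illustrative, not the exact patch):

  // An immediate alone is not a legal addressing mode: there must
  // be a base register, a scaled register, or a global to add the
  // offset to.
  if (AM.BaseOffs && !AM.HasBaseReg && AM.Scale == 0 && !AM.BaseGV)
    return false;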
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143895
`AArch64TargetLowering::isLegalAddressingMode` has a number of
defects, including accepting an addressing mode which consists of only
an immediate operand, and not checking the offset range for an
addressing mode of the form `1*ScaledReg + Offs`.
This patch fixes the above issues.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143895
This change initializes the TSI, LI, DT, PSI, and ORE pointer fields of the SelectOptimize class to nullptr.
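A minimal sketch of the fix (the exact member types are assumptions here):

  class SelectOptimize {
    // In-class initializers guarantee the pointers are never read
    // as indeterminate values before the pass sets them up.
    TargetSchedModel *TSI = nullptr;
    LoopInfo *LI = nullptr;
    DominatorTree *DT = nullptr;
    ProfileSummaryInfo *PSI = nullptr;
    OptimizationRemarkEmitter *ORE = nullptr;
  };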
Reviewed By: LuoYuanke
Differential Revision: https://reviews.llvm.org/D148303
... when finding all memory uses for an address and make it a
parameter.
Now that we have avoided the potentially exponential run time of
`FindAllMemoryUses` in D143893, it'd be beneficial to increase the
limit up from 20.
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D143894
The counter of the number of instructions seen in `FindAllMemoryUses`
is reset after returning from a recursive invocation of
`FindAllMemoryUses` to the value it had before the call. In effect,
depending on the shape of the use graph, the function may scan up to
`2^N-1` instructions, where `N` is the scan limit
(`MaxMemoryUsesToScan`). This does not look intuitive or intended.
This patch changes the counting to simply count the scanned
instructions, independent of the shape of the references.
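A minimal sketch of the difference (illustrative signature, not the
upstream one):

  // Passing the budget by reference makes every visited instruction
  // consume it exactly once; a by-value counter is restored on
  // return from recursion, allowing up to 2^N-1 visits in total.
  static bool scanUses(Instruction *I, unsigned &NumSeen,
                       unsigned Limit) {
    for (User *U : I->users()) {
      if (++NumSeen > Limit)
        return false; // scan limit reached, give up
      if (auto *UI = dyn_cast<Instruction>(U))
        if (!scanUses(UI, NumSeen, Limit))
          return false;
    }
    return true;
  }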
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D143893
CodeGenPrepare may optimize memory access modes.
During such an optimization, it might create a new instruction representing the combined value.
If the optimization later fails, the generated value is not removed and remains as a dead instruction.
Normally this isn't a problem, as dead code will be eliminated later.
However, in this case (Issue 58538), the generated instruction may trigger an infinite loop.
The infinite loop involves `sinkCmpExpression`, which tries to optimize the placeholder we generated.
(See the test case detailed in the issue.)
To fix this, we remove the unnecessary placeholder immediately when we abort the optimization:
`AddressingModeCombiner` keeps track of the placeholder, and removes it if it is an inserted placeholder with no uses.
This patch fixes https://github.com/llvm/llvm-project/issues/58538; a test is also included.
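A minimal sketch of the clean-up (hypothetical names; the real logic lives
in `AddressingModeCombiner`):

  // On abort, erase the placeholder if nothing ended up using it,
  // so sinkCmpExpression cannot keep re-optimizing a dead value.
  if (PlaceholderInserted && Placeholder->use_empty())
    Placeholder->eraseFromParent();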
Reviewed By: skatkov
Differential Revision: https://reviews.llvm.org/D147041
For add, if we match the constant edge case, the add isn't used by
the compare, so we shouldn't check for 2 users.
For sub, the compare is not a user of the sub, so the math is
used if the sub has any users.
This causes regressions on RISC-V, which I will address in other patches.
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D146786
I've pulled this change out of D145404 to land it in isolation, because
I'm concerned the code might be more important than the test coverage
suggests (NOTE: the code has no test coverage).
This patch adds AArch64 CodeGen support such that the type can be passed
to and returned from functions, and also adds support for using this type
in load/store operations and PHI nodes.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D136862
This function was added for ARM targets, but aligning global/stack pointer
arguments passed to memcpy/memmove/memset can improve code size and
performance for all targets that don't have fast unaligned accesses.
This adds a generic implementation that adjusts the alignment to pointer
size if unaligned accesses are slow.
Review D134168 suggests that this significantly improves performance on
synthetic benchmarks such as Dhrystone on RV32 as it avoids memcpy() calls.
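A minimal sketch of the adjustment (illustrative; `Obj` stands for the
underlying object of the pointer argument):

  // If unaligned accesses are slow, raise the alignment of a local
  // object passed to memcpy/memmove/memset up to the pointer size
  // so the call can be lowered to aligned accesses.
  Align PrefAlign(DL.getPointerSize());
  if (auto *AI = dyn_cast<AllocaInst>(Obj))
    if (AI->getAlign() < PrefAlign)
      AI->setAlignment(PrefAlign);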
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D134282
LoopUnroll estimates the loop size via getInstructionCost(),
but getInstructionCost() cannot pass a CostKind to getVectorInstrCost().
The same applies to getShuffleCost() calling getBroadcastShuffleOverhead(),
getPermuteShuffleOverhead(), getExtractSubvectorOverhead(),
and getInsertSubvectorOverhead().
To address this, this patch adds a CostKind argument to these
functions.
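The shape of the change for one of the functions (illustrative; defaults
and neighbouring parameters elided):

  InstructionCost getVectorInstrCost(unsigned Opcode, Type *Val,
                                     TTI::TargetCostKind CostKind,
                                     unsigned Index);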
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D142116