Replacing a free extension with two or more extensions needlessly
increases the number of IR instructions without providing any benefit.
It also causes operations to be performed on wider types than necessary.
In some cases, the extra extensions also pessimize codegen (see
bfis-in-loop.ll).
The changes in arm64-codegen-prepare-extload.ll also show that we avoid
promotions that should only be performed in stress mode.
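As an illustrative sketch (not taken from the patch; whether an extension
is free is target-dependent):
define i64 @sketch(i32 %x, i32 %y) {
  %a = add nuw i32 %x, %y
  ; On 64-bit targets this zext is typically free.
  %r = zext i32 %a to i64
  ret i64 %r
}
Promoting the extension through the add would instead produce
  %x.ext = zext i32 %x to i64
  %y.ext = zext i32 %y to i64
  %r = add nuw i64 %x.ext, %y.ext
replacing one free extension with two and performing the add on a wider type.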
PR: https://github.com/llvm/llvm-project/pull/77094
Port CodeGenPrepare to the new pass manager, along with its dependency
BasicBlockSectionsProfileReader
Fixes: #75380
Co-authored-by: Krishna-13-cyber <84722531+Krishna-13-cyber@users.noreply.github.com>
Revert e0c554ad87d18dcbfcb9b6485d0da800ae1338d1 "Port CodeGenPrepare to new pass manager (and BasicBlockSectionsProfil… (#75380)"
Revert #75380 and #77054 as they were breaking EXPENSIVE_CHECKS buildbots: https://lab.llvm.org/buildbot/#/builders/104
Port CodeGenPrepare to the new pass manager, along with its dependency
BasicBlockSectionsProfileReader
Fixes: #64560
Co-authored-by: Krishna-13-cyber <84722531+Krishna-13-cyber@users.noreply.github.com>
Vectors are always bit-packed and don't respect the elements' alignment
requirements. This is different from arrays. This means offsets of
vector GEPs need to be computed differently than offsets of array GEPs.
This PR fixes many places that relied on the incorrect pattern of always
using `DL.getTypeAllocSize(GTI.getIndexedType())`. These are replaced with
`GTI.getSequentialElementStride(DL)`, a new helper function added in this PR.
This changes behavior for GEPs into vectors with element types whose
(bit) size and alloc size differ. This includes two cases:
* Types with a bit size that is not a multiple of a byte, e.g. i1.
GEPs into such vectors are questionable to begin with, as some elements
are not even addressable.
* Overaligned types, e.g. i16 with 32-bit alignment.
Existing tests are unaffected, but a miscompilation of a new test is fixed.
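A minimal sketch of the overaligned case (illustrative; assumes a data
layout that overaligns i16, e.g. `i16:32`):
define ptr @vec_elt(ptr %v) {
  ; Vectors are bit-packed, so element 1 of <4 x i16> is at byte offset 2.
  ; Using the alloc size (4 bytes under i16:32) would incorrectly yield offset 4.
  %p = getelementptr <4 x i16>, ptr %v, i64 0, i64 1
  ret ptr %p
}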
---------
Co-authored-by: Nikita Popov <github@npopov.com>
As part of RemoveDIs, we need instruction insertion to be done with
iterators rather than instruction pointers, so that we can communicate
some debug-info facts about the position. This patch is an entirely
mechanical replacement of Instruction * with BasicBlock::iterator, plus
using insertBefore to insert some instructions because we don't have
iterator-taking constructors yet.
Sadly it's not NFC because it causes dbg.value intrinsics / their
DPValue equivalents to shift location.
It turns out that CodeGenPrepare will skip over consecutive select
instructions, as it knows it can optimise them all at the same time. This
is unfortunate for the RemoveDIs project to remove intrinsic-based
debug-info, because debug-info attached to those skipped instructions
never gets seen by optimizeInst and so is never updated. Add code
to handle debug-info on those skipped instructions manually.
This code would also have been slower when dbg.values were stuffed in
between instructions, but with RemoveDIs it will go faster because the
dbg.values no longer break up the select sequence.
When all the large constant offsets share the same value in bits 12 to 23,
fold
add x8, x0, #2031, lsl #12
add x8, x8, #960
ldr x9, [x8]
ldr x8, [x8, #2056]
into
add x8, x0, #2031, lsl #12
ldr x9, [x8, #960]
ldr x8, [x8, #3016]
CodeGenPrepare needs to support the maintenance of DPValues, the
non-instruction replacement for dbg.value intrinsics. This means there are
a few functions we need to duplicate or replicate the functionality of:
* fixupDbgValue for setting users of sunk addr GEPs,
* The remains of placeDbgValues needs a DPValue implementation for sinking
* Rollback of RAUWs needs to update DPValues
* Rollback of instruction removal needs supporting (see github #73350)
* A few places where we have to use iterators rather than instructions.
There are three places where we have to use the setHeadBit call on
iterators to indicate which portion of debug-info records we're about to
splice around. This is because CodeGenPrepare, unlike other optimisation
passes, is very much concerned with which block an operation occurs in and
where in the block instructions are because it's preparing things to be in
a format that's good for SelectionDAG.
There isn't a large amount of test coverage for debuginfo behaviours in
this pass, hence I've added some more.
Fix the issue by not reusing the zext at all. The code already handles
creation of new zexts if more than one is needed. Always use that
code-path instead of trying to reuse the old zext in some cases.
(Alternatively we could also drop poison-generating flags on the old
zext, but it seems cleaner to not reuse it at all, especially if it's
not always possible anyway.)
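As an illustrative sketch of the hazard (not the exact IR from the issue):
a poison-generating flag such as nneg that was justified for the original
operand need not hold once the reused zext is given a different operand.
define i32 @sketch(i8 %x) {
  %a = and i8 %x, 127
  ; nneg is valid here: %a is known to be non-negative.
  %z = zext nneg i8 %a to i32
  ret i32 %z
}
If the promotion reuses %z but rewires its operand to %x, the nneg flag no
longer holds, and %z becomes poison for negative values of %x.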
Fixes https://github.com/llvm/llvm-project/issues/72046.
For more recent SVE-capable CPUs it is beneficial to use the inc*
instructions to increment a value by vscale (potentially shifted or
multiplied), even in short loops.
This patch tells CodeGenPrepare to sink appropriate vscale calls into the
blocks where they are used so that isel can match them.
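A minimal sketch of the idea (illustrative IR, not from the patch): after
sinking, the vscale computation sits in the same block as its use, so isel
can match an inc*-style increment.
declare i64 @llvm.vscale.i64()

define i64 @count(i64 %n) {
entry:
  ; Computed once outside the loop; sinking moves this next to %iv.next in %loop.
  %vs = call i64 @llvm.vscale.i64()
  %step = shl i64 %vs, 2        ; vscale * 4
  br label %loop

loop:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add i64 %iv, %step
  %done = icmp uge i64 %iv.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret i64 %iv.next
}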
The optimization in CodeGenPrepare where GEPs are unmerged across
indirect branches must respect the types of both GEPs, and their sizes,
when adjusting the indices.
The sample here shows the bug:
https://godbolt.org/z/8e9o5sYPP
The value `%elementValuePtr` addresses the second field of the
`%struct.Blub`. It is therefore a GEP with index 1 and type i8.
The value `%nextArrayElement` addresses the next array element. It is
therefore a GEP with index 1 and type `%struct.Blub`.
Both values point to completely different addresses, even if the indices
are the same, due to the types being different.
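The pattern can be sketched as follows (the exact struct layout is assumed
for illustration):
%struct.Blub = type { i8, [15 x i8] }   ; 16 bytes, second field at byte offset 1 (layout assumed)

define void @sketch(ptr %currentArrayElement) {
  ; Second field of the current element: advances the pointer by 1 byte.
  %elementValuePtr = getelementptr i8, ptr %currentArrayElement, i64 1
  ; Next array element: advances the pointer by 16 bytes, despite also using index 1.
  %nextArrayElement = getelementptr %struct.Blub, ptr %currentArrayElement, i64 1
  ret void
}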
However, after CodeGenPrepare has run, `%nextArrayElement` is a bitcast
from `%elementValuePtr`, meaning both were treated as equal.
The cause for this is that the unmerging optimization does not take
types into consideration.
It sees both GEPs have `%currentArrayElement` as source operand and
therefore tries to rewrite `%nextArrayElement` in terms of
`%elementValuePtr`.
It changes the index to the difference of the two GEPs' indices. As both
indices are `1`, the difference is `0`. With an index of `0`, the GEP is
later replaced with a simple bitcast in CodeGenPrepare.
Before adjusting the indices, the types of the GEPs would have to be
aligned and the indices scaled accordingly for the optimization to be
correct.
Due to the size of the struct being `16` and the `%elementValuePtr`
pointing to offset `1`, the correct index for the unmerged
`%nextArrayElement` would be 15.
I assume this bug emerged from the opaque pointer change, as GEPs like
`%elementValuePtr` that access the struct field based on type i8 did not
naturally occur before.
In light of future migration to ptradd, simply not performing the
optimization if the types mismatch should be sufficient.
The `BlockFrequency` class abstracts `uint64_t` frequency values. Use it
more consistently in various APIs and disable implicit conversion to
make usage explicit.
- Use `BlockFrequency Freq` parameter for `setBlockFreq`,
`getProfileCountFromFreq` and `setBlockFreqAndScale` functions.
- Return `BlockFrequency` in `getEntryFreq()` functions.
- While at it, change some `const BlockFrequency &Freq` parameters to
plain `BlockFrequency Freq`.
- Mark `BlockFrequency(uint64_t)` constructor as explicit.
- Add missing `BlockFrequency::operator!=`.
- Remove `uint64_t BlockFrequency::getMaxFrequency()`.
- Add `BlockFrequency BlockFrequency::max()` function.
Instead of ConstantExpr::getCast() with a fixed opcode, use the
corresponding getXYZ methods. For the one place creating a pointer
bitcast, drop it entirely, as it is redundant with opaque pointers.
Multiplying a raw block frequency by an integer carries a high risk
of overflow.
- Add `BlockFrequency::mul`, returning a `std::optional` with the product
or `nullopt` to indicate an overflow.
- Fix two instances where overflow was likely.
Continuing the patch series to get rid of debug intrinsics [0], instruction
insertion needs to be done with iterators rather than instruction pointers,
so that we can communicate information in the iterator class. This patch
adds an iterator-taking insertBefore method and converts various call sites
to take iterators. These are all sites where such debug-info needs to be
preserved so that a stage2 clang can be built identically; it's likely that
many more will need to be changed in the future.
At this stage, this is just changing the spelling of a few operations,
which will eventually become significant once the debug-info bearing
iterator is used.
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152537
Before eliminating mostly empty blocks, it makes sense to remove dead PHI nodes.
This opens up more opportunities for elimination and also removes dead code itself.
It turned out that this change caused many unit tests to fail; some of them
I've updated, and for another I've disabled this optimization.
The pattern I observed in the tests is an infinite loop without side
effects. As a result, after elimination of the dead PHI node, all other
related instructions are also removed and the tests stop checking what was expected.
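As an illustrative sketch (hypothetical IR, not from the tests): the unused
PHI is the only non-terminator in the block, so removing it first lets the
mostly empty block be eliminated.
define void @sketch(i1 %c, ptr %p) {
entry:
  br i1 %c, label %a, label %b

a:
  br label %merge

b:
  br label %merge

merge:
  ; Dead PHI: it has no uses, but its presence keeps %merge from being
  ; treated as a mostly empty block.
  %dead = phi i32 [ 0, %a ], [ 1, %b ]
  br label %exit

exit:
  store i32 42, ptr %p
  ret void
}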
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D158503
Refactor to use BasicBlockUtils functions and make life easier for
a subsequent patch for updating the dominator tree.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D154053
Release BlockFrequencyInfo and BranchProbabilityInfo results and other
per function information immediately afterwards, instead of holding
onto the memory until the next `CodeGenPrepare::runOnFunction` call.
Differential Revision: https://reviews.llvm.org/D152552
Co-authored-by: Erik Hogeman <erik.hogeman@arm.com>
Some transformations in the CodeGenPrepare pass may create and/or delete
basic blocks, but they don't update the LoopInfo, so the LoopInfo may
end up containing dangling pointers and sometimes reused basic blocks,
which leads to "interesting" non-deterministic behaviour.
These transformations do not seem to alter the loop structure of the
function, and updating the loop info is quite straightforward.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D150384
Change-Id: If8ab3905749ea6be94fbbacd54c5cfab5bc1fba1
InstCombine tries to swap compare operands to match sub instructions
in order to expose "CSE opportunities". However, it doesn't really
make sense to perform this transform in the middle-end, as we cannot
actually CSE the instructions there.
The backend already performs this fold at the SDAG level in
18f5446a45/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (L4236),
but that only works within a single basic block.
To handle cross-BB cases, we do need to handle this in the IR layer.
This patch moves the fold from InstCombine to CGP in the backend,
while keeping the same (somewhat dubious) heuristic.
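A minimal sketch of the fold (illustrative IR): when a sub and a compare
use the same operands in the opposite order, the compare operands (and
predicate) are swapped so that both compute %a - %b and the backend can
share the subtraction.
define i1 @sketch(i32 %a, i32 %b, ptr %p) {
  %diff = sub i32 %a, %b
  store i32 %diff, ptr %p
  ; Before the fold: compares %b < %a, the reverse of the sub's operand order.
  ; After the fold this becomes `icmp sgt i32 %a, %b`, matching the sub.
  %cmp = icmp slt i32 %b, %a
  ret i1 %cmp
}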
Differential Revision: https://reviews.llvm.org/D152541
If the ZExt can be lowered to a single ZExt to the next power-of-2 and
the remaining ZExt folded into the user, don't use tbl lowering.
Fixes #62620.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D150482
Since the class InstructionRemover manages resources such as dynamically allocated memory, it is generally good practice to either implement a custom copy constructor or disable the default one.
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D151543
While the original motivation for this patch (address space 7 on
AMDGPU) has been reworked and is not presently planned to reach IR
translation, the incorrect (per the spec) handling of index offset
width in IR translation and CodeGenPrepare is likely to trip someone
up - possibly AMD in the future, since we now have a p7:160:256:256:32 -
so convert to the other API now.
Reviewed By: aemerson, arsenm
Differential Revision: https://reviews.llvm.org/D143526
This is a rework of:
- rG13e77db2df94 (r328395; MVT)
Since `LowLevelType.h` has been restored to `CodeGen`, `MachineValueType.h`
can be restored as well.
Depends on D148767
Differential Revision: https://reviews.llvm.org/D149024
First, in CodeGenPrepare.cpp at line 6891, VectorCond will always be false,
because otherwise the function would have returned at line 6888.
Second, in SelectionDAGBuilder.cpp at line 5443, getSExtValue() returns the
value as a signed integer, but it is now kept in an unsigned Val, which
makes the if condition at line 5452 meaningless.
Reviewed By: skan
Differential Revision: https://reviews.llvm.org/D149033
When checking the profitability of folding an address computation
into a memory instruction, the compiler tries to determine the liveness,
at the point of the memory instruction, of the values comprising the
address.
This patch improves the live variable estimates by including
the loop invariants which are referenced in the loop body.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143897
The "nested" `AddressingModeMatcher`s in
`AddressingModeMatcher::isProfitableToFoldIntoAddressingMode` are constructed
using the original memory instruction, even though they check whether the
address operand of a differrent memory instructon is foldable. The memory
instruction is used only for a dominance check (when not checking for
profitability), and using the wrong memory instruction does not change the
outcome of the test - if an address is foldable, the dominance test afects which
of the two possible ways to fold is chosen, but this result is discarded.
As an example, in
target triple = "x86_64-linux"

declare i1 @check(i64, i64)

define i32 @f(i1 %cc, ptr %p, ptr %q, i64 %n) {
entry:
  br label %loop

loop:
  %iv = phi i64 [ %i, %C ], [ 0, %entry ]
  %offs = mul i64 %iv, 4
  %c.0 = icmp ult i64 %iv, %n
  br i1 %c.0, label %A, label %fail

A:
  br i1 %cc, label %B, label %C

C:
  %u = phi i32 [ 0, %A ], [ %w, %B ]
  %i = add i64 %iv, 1
  %a.0 = getelementptr i8, ptr %p, i64 %offs
  %a.1 = getelementptr i8, ptr %a.0, i64 4
  %v = load i32, ptr %a.1
  %c.1 = icmp eq i32 %v, %u
  br i1 %c.1, label %exit, label %loop

B:
  %a.2 = getelementptr i8, ptr %p, i64 %offs
  %a.3 = getelementptr i8, ptr %a.2, i64 4
  %w = load i32, ptr %a.3
  br label %C

exit:
  ret i32 -1

fail:
  ret i32 0
}
the dominance test is performed between `%i = ...` and `%v = ...` at the
moment we're checking whether `%a.3 = ...` is foldable.
Using the memory instruction which uses the interesting address is "more
correct", and this change is needed by a future patch.
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D143896
This patch recommits 0827e2fa3fd15b49fd2d0fc676753f11abb60cab after
reverting it in ed7ada259f665a742561b88e9e6c078e9ea85224. Added a
workaround for `TargetLowering::AddrMode` no longer being an aggregate
in C++20.
`AArch64TargetLowering::isLegalAddressingMode` has a number of
defects, including accepting an addressing mode which consists of
only an immediate operand, or not checking the offset range for an
addressing mode in the form `1*ScaledReg + Offs`.
This patch fixes the above issues.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143895
Change-Id: I41a520c13ce21da503ca45019979bfceb8b648fa
`AArch64TargetLowering::isLegalAddressingMode` has a number of
defects, including accepting an addressing mode which consists of only
an immediate operand, or not checking the offset range for an
addressing mode in the form `1*ScaledReg + Offs`.
This patch fixes the above issues.
Reviewed By: dmgreen
Differential Revision: https://reviews.llvm.org/D143895
Change-Id: I756fa21941844ded44f082ac7eea4391219f9851
This change initializes the TSI, LI, DT, PSI, and ORE pointer fields of the SelectOptimize class to nullptr.
Reviewed By: LuoYuanke
Differential Revision: https://reviews.llvm.org/D148303