MachineVerifier ran checkLivenessAtDef() on a subregister def as if it
defined the whole register. The issue showed up when two subregs of the
same reg are used in the same instruction (e.g. inline asm): one as a
"def early-clobber" and the other as a plain "def".
Reviewed By: foad
Differential Revision: https://reviews.llvm.org/D126661
Test for a case we observed after the initial implementation of D129997
landed: a crash while building the ppc64le Linux kernel. There we had
one block with two exits, both to the same successor. Removing one of
the exits corrupted the successor/predecessor lists.
So when we have an INLINEASM_BR, check a few things for each indirect
target:
1. that it exists.
2. that it is listed in our successors.
3. that its predecessor list contains the parent MBB of INLINEASM_BR.
This would have caught the regression discovered after D129997 landed
right after the problematic pass (early-tailduplication), rather than
producing a stack trace in a later pass (regalloc) that doesn't
understand the anomaly and crashes.
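For reference, those three checks condensed into a C++ sketch. The
MachineBasicBlock/MachineOperand queries are real LLVM APIs;
reportError() is a hypothetical stand-in for the verifier's report()
helper:
```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

void reportError(const char *Msg, const MachineInstr &MI); // stand-in

void checkInlineAsmBrTargets(const MachineBasicBlock &MBB,
                             const MachineInstr &MI) {
  for (const MachineOperand &MO : MI.operands()) {
    if (!MO.isMBB())
      continue; // indirect targets are the MBB operands of INLINEASM_BR
    const MachineBasicBlock *Target = MO.getMBB();
    // (1) it exists, and (2) it is listed in our successors.
    if (!Target || !MBB.isSuccessor(Target))
      reportError("INLINEASM_BR indirect target is not a successor", MI);
    // (3) its predecessor list contains the parent MBB.
    else if (!Target->isPredecessor(&MBB))
      reportError("indirect target does not list parent as predecessor", MI);
  }
}
```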
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D130290
There are two different senses in which a block can be "address-taken".
There can be a BlockAddress involved, which means we need to map the
IR-level value to some specific block of machine code. Or there can be
constructs inside a function which involve using the address of a basic
block to implement certain kinds of control flow.
Mixing these together causes a problem: once target-specific passes
start marking arbitrary blocks "address-taken", we can no longer tell
which MachineBasicBlock corresponds to a given BlockAddress.
So split this into two separate bits: one for BlockAddress, and one for
the machine-specific bits.
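A minimal sketch of the resulting interface, assuming the accessor
names from the patch (treat the details as illustrative):
```
#include "llvm/CodeGen/MachineBasicBlock.h"

// A target pass that needs a stable machine-level label now marks only
// the machine-specific bit; blocks carrying a real BlockAddress stay
// unambiguous.
void markIndirectTarget(llvm::MachineBasicBlock &MBB) {
  MBB.setMachineBlockAddressTaken(); // machine-specific sense only
  // setAddressTakenIRBlock(BB) records an IR-level BlockAddress instead,
  // and isIRBlockAddressTaken() queries exactly that bit.
}
```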
Discovered while trying to sort out related stuff on D102817.
Differential Revision: https://reviews.llvm.org/D124697
Verify the LiveStacks analysis after a pass that claims to preserve it,
even if there are no further passes (apart from the verifier itself)
that would use the analysis.
Differential Revision: https://reviews.llvm.org/D129200
1. When checking if a candidate contains a CFI instruction, actually
iterate over all of the instructions (see the sketch below), instead of
stopping halfway through.
2. Make sure copied CFI directives refer to the correct instruction.
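For illustration, the corrected shape of the containment check from
point 1 (a sketch, not the outliner's exact code):
```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"

// Walk *every* instruction in the candidate range; the bug was an early
// exit that only covered part of the range.
bool candidateContainsCFI(llvm::MachineBasicBlock::iterator Begin,
                          llvm::MachineBasicBlock::iterator End) {
  for (auto It = Begin; It != End; ++It)
    if (It->isCFIInstruction())
      return true;
  return false;
}
```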
Fixes https://github.com/llvm/llvm-project/issues/55842
Differential Revision: https://reviews.llvm.org/D126930
The most common situation where G_ASSERT_ZEXT appears for AMDGPU is a
copy from a physical register, which happens to set the actual register
class on the virtual register. After copy coalescing, the assert's
source operand is a vreg with a register class set. The verifier was
strictly rejecting cases where the set class/bank weren't an exact
match. Additionally, RegBankSelect was also expecting a register bank
to be set on the register, not a class.
This is much stricter than what we require for regular copies, so relax
the check. These two cases are now allowed:
1. Source register has either class or bank, and the result does not
2. Source register has a register class, and the result is a register
with a matching bank.
This should avoid needing some kind of special handling to avoid
violating this constraint when folding copies.
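Condensed, the relaxed rule looks like this (a standalone sketch of the
decision logic, not the verifier's actual code; the boolean parameters
model what MachineRegisterInfo reports for the two registers):
```
// BankCoversClass models a bank/class compatibility test.
bool assertZExtRegsCompatible(bool SrcHasClass, bool SrcHasBank,
                              bool DstHasClass, bool DstHasBank,
                              bool BankCoversClass) {
  // Case 1: source has either a class or a bank, the result has neither.
  if ((SrcHasClass || SrcHasBank) && !DstHasClass && !DstHasBank)
    return true;
  // Case 2: source has a register class, the result has a matching bank.
  if (SrcHasClass && DstHasBank && BankCoversClass)
    return true;
  return false; // otherwise keep the strict exact-match rule
}
```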
This reverts commit 7f230feeeac8a67b335f52bd2e900a05c6098f20.
Breaks CodeGenCUDA/link-device-bitcode.cu in check-clang,
and many LLVM tests, see comments on https://reviews.llvm.org/D121169
This wraps up D119053. The two headers are moved as described there,
their file headers and include guards are fixed, all files referencing
the old paths are updated (found with a simple grep through the repo),
and everything is `clang-format`-ed.
Differential Revision: https://reviews.llvm.org/D119876
Shiny new DBG_PHI instructions usually have physical registers as
operands -- however, the machine verifier checks to see whether they're
live, and occasionally this fails. There's a filter that keeps
DBG_VALUE instructions from being verified in this way: expand it to
exempt all debug instructions from liveness checking, which means
DBG_PHIs get treated like DBG_VALUEs. This also future-proofs against
us adding new debug instructions.
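Condensed, the widened filter amounts to (a sketch; isDebugInstr() is
the real MachineInstr query):
```
#include "llvm/CodeGen/MachineInstr.h"

// Skip liveness verification for every debug instruction, not just
// DBG_VALUE; isDebugInstr() also covers DBG_PHI and friends.
bool exemptFromLivenessChecks(const llvm::MachineInstr &MI) {
  return MI.isDebugInstr();
}
```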
Differential Revision: https://reviews.llvm.org/D117891
D112556 added verification that the live interval for a subreg operand
must have subranges. This patch fixes a corner case, where if all subreg
operands for a particular register are undef uses then no subranges
are required. This matches how LiveIntervalCalc would build the live
intervals in the first place, since an undef use is not considered
to read the register.
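Condensed, the verifier condition after this patch looks roughly like
the following (illustrative; the operand and MRI queries are real LLVM
APIs):
```
#include "llvm/CodeGen/MachineOperand.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

// A subreg operand demands subranges only if it actually reads or
// writes the register; an undef use does neither.
bool operandRequiresSubRanges(const llvm::MachineOperand &MO,
                              const llvm::MachineRegisterInfo &MRI) {
  if (!MO.getSubReg() || !MRI.shouldTrackSubRegLiveness(MO.getReg()))
    return false;
  return !(MO.isUse() && MO.isUndef());
}
```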
Before this patch, CodeGen/AMDGPU/no-remat-indirect-mov.mir would fail
with -early-live-intervals:
```
# After Live Interval Analysis
...
*** Bad machine code: Live interval for subreg operand has no subranges ***
- function:    index_vgpr_waterfall_loop
- basic block: %bb.1 (0x6a9a968) [352B;496B)
- instruction: 432B %24:vgpr_32 = V_MOV_B32_e32 undef %18.sub0:vreg_512, implicit $exec, implicit %18:vreg_512, implicit $m0
- operand 1:   undef %18.sub0:vreg_512
```
Differential Revision: https://reviews.llvm.org/D115360
Expanding on D109750.
Since `DBG_VALUE` instructions have their final register validity
determined in `LDVImpl::handleDebugValue`, there is no apparent reason
to immediately prune unused register operands as their defs are erased.
Consequently, this renders
`MachineInstr::eraseFromParentAndMarkDBGValuesForRemoval` moot, and
removing it gains a substantial performance improvement.
The only necessary changes involve making the relevant passes treat
invalid DBG_VALUE vreg uses as valid.
Reviewed By: MatzeB
Differential Revision: https://reviews.llvm.org/D112852
MachineVerifier verified the subranges of a live interval if
they existed, but did not complain if they did not exist.
This patch changes the verifier to complain if there are no
subranges in the live interval for a subreg operand (so long
as MachineRegisterInfo says we should be tracking subreg
liveness for that register). This matches the conditions for
LiveIntervalCalc to create subranges in the first place.
Differential Revision: https://reviews.llvm.org/D112556
TargetPassConfig::addPass takes a "bool verifyAfter" argument which lets
you skip machine verification after a particular pass. Unfortunately
this is used in generic code in TargetPassConfig itself to skip
verification after a generic pass, only because some previous target-
specific pass damaged the MIR on that specific target. This is bad
because problems in one target cause lack of verification for all
targets.
This patch replaces that mechanism with a new MachineFunction property
called "FailsVerification" which can be set by (usually target-specific)
passes that are known to introduce problems. Later passes can reset it
again if they are known to clean up the previous problems.
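In practice that looks like the following sketch (the property name is
real, the wrapper function is illustrative):
```
#include "llvm/CodeGen/MachineFunction.h"

// A target pass known to leave the MIR temporarily invalid opts out:
void markFailsVerification(llvm::MachineFunction &MF) {
  MF.getProperties().set(
      llvm::MachineFunctionProperties::Property::FailsVerification);
}
// A later cleanup pass calls reset() on the same property, and the
// verifier skips functions while the property is set.
```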
Differential Revision: https://reviews.llvm.org/D111397
Based on the reasoning of D53903, register operands of DBG_VALUE are
invariably treated as RegState::Debug operands. This change enforces
this invariant as part of MachineInstr::addOperand so that all passes
emit this flag consistently.
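The enforced invariant boils down to this (a condensed sketch of the
addOperand behavior, not its full code):
```
#include "llvm/CodeGen/MachineInstr.h"

// Every register operand added to a debug instruction carries the
// RegState::Debug flag, regardless of which pass emitted it.
void enforceDebugFlag(llvm::MachineInstr &MI, llvm::MachineOperand &MO) {
  if (MI.isDebugInstr() && MO.isReg())
    MO.setIsDebug();
}
```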
RegState::Debug is inconsistently set on DBG_VALUE registers throughout
LLVM. This runs the risk that a filtering iterator like
MachineRegisterInfo::reg_nodbg_iterator will process these operands
erroneously when they were not parsed from MIR sources.
This issue was observed in the development of the llvm-mos fork which
adds a backend that relies on physical register operands much more than
existing targets. Physical RegUnit 0 has the same numeric encoding as
$noreg (indicating an undef for DBG_VALUE). Allowing debug operands into
the machine scheduler correlates $noreg with RegUnit 0 (i.e. a collision
of register numbers with different zero semantics). Eventually, this
trips an assert, because DBG_VALUE instructions are prohibited from
participating in live register ranges.
Reviewed By: MatzeB, StephenTozer
Differential Revision: https://reviews.llvm.org/D110105
Enable verification of live intervals immediately after computing them
(when -early-live-intervals is used) and fix a problem that this
provokes: currently the verifier insists that a segment that ends at an
early-clobber slot must be followed by another segment starting at the
same slot. But before TwoAddressInstruction runs, the equivalent
condition is: a segment that ends at an early-clobber slot must have its
last use tied to an early-clobber def. That condition is harder to check
here, so for now just disable this check until tied operands have been
rewritten.
Differential Revision: https://reviews.llvm.org/D111065
LiveVariables does not examine the contents of bundles, so
MachineVerifier should not expect it to know about kill flags on
operands of instructions inside a bundle.
With this fix we can enable machine verification after running the
LiveVariables analysis. Doing this does not show any problems in
check-llvm in an LLVM_ENABLE_EXPENSIVE_CHECKS build.
Differential Revision: https://reviews.llvm.org/D110700
The semantics of tail-predicated loops mean that the value of LR at the
point an instruction executes determines the predicate. In other words:
```
mov r3, #3
DLSTP lr, r3        // Start tail predication, lr==3
VADD.s32 q0, q1, q2 // Lanes 0,1 and 2 are updated in q0.
mov lr, #1
VADD.s32 q0, q1, q2 // Only first lane is updated.
```
This means that the value of lr cannot be spilled and re-used in tail
predication regions without potentially altering the behaviour of the
program. More lanes than required could be stored, for example, and in
the case of a gather those lanes might not have been setup, leading to
alignment exceptions.
This patch adds a new lr predicate operand to MVE instructions in order
to keep a reference to the lr that they use as a tail predicate. It
will usually hold zeroreg, meaning not predicated, and is set to the LR
phi value in the MVETPAndVPTOptimisationsPass. This will prevent it
from being spilled anywhere that it needs to be used.
A lot of tests needed updating.
Differential Revision: https://reviews.llvm.org/D107638
Please refer to
https://lists.llvm.org/pipermail/llvm-dev/2021-September/152440.html
(and that whole thread.)
TLDR: the original patch had no prior RFC, yet it had some changes that
really need a proper RFC discussion. It won't be productive to discuss
such an RFC, once it's actually posted, while said patch is already
committed, because that introduces bias towards already-committed
stuff, and the tree is potentially in a broken state meanwhile.
While the end result of the discussion may lead back to the current
design, it may also not.
Therefore I take it upon myself to revert the tree back to the last
known good state.
This reverts commit 4c4093e6e39fe6601f9c95a95a6bc242ef648cd5.
This reverts commit 0a2b1ba33ae6dcaedb81417f7c4cc714f72a5968.
This reverts commit d9873711cb03ac7aedcaadcba42f82c66e962e6e.
This reverts commit 791006fb8c6fff4f33c33cb513a96b1d3f94c767.
This reverts commit c22b64ef66f7518abb6f022fcdfd86d16c764caf.
This reverts commit 72ebcd3198327da12804305bda13d9b7088772a8.
This reverts commit 5fa6039a5fc1b6392a3c9a3326a76604e0cb1001.
This reverts commit 9efda541bfbd145de90f7db38d935db6246dc45a.
This reverts commit 94d3ff09cfa8d7aecf480e54da9a5334e262e76b.
Basically the same as G_LROUND. Handles the llvm.llround family of intrinsics.
Also add a helper function to the MachineVerifier for checking if all of the
(virtual register) operands of an instruction are scalars. Seems like a useful
thing to have.
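Roughly, the helper has this shape (a sketch; treat the exact name and
corner-case handling as illustrative):
```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Every virtual register operand of MI must have a scalar LLT.
bool allVRegOpsScalar(const llvm::MachineInstr &MI,
                      const llvm::MachineRegisterInfo &MRI) {
  for (const llvm::MachineOperand &MO : MI.operands()) {
    if (!MO.isReg() || !MO.getReg().isVirtual())
      continue;
    if (!MRI.getType(MO.getReg()).isScalar())
      return false;
  }
  return true;
}
```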
Differential Revision: https://reviews.llvm.org/D108429
For some reductions like G_VECREDUCE_OR on AArch64, we need to scalarize
completely if the source is <= 64b. This change adds support for that in
the legalizer. If the source has a pow-2 num elements, then we can do
a tree reduction using the scalar operation in the individual elements.
Otherwise, we just create a sequential chain of operations.
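To make the two shapes concrete, here is a scalar-only model of a
G_VECREDUCE_OR legalization using plain integers in place of vregs
(illustrative only; assumes a non-empty input):
```
#include <cstdint>
#include <vector>

uint64_t scalarizedReduceOr(std::vector<uint64_t> Elts) {
  size_t N = Elts.size();
  if ((N & (N - 1)) == 0) {
    // Power-of-2 element count: a log2(N)-deep tree of scalar ORs.
    for (; N > 1; N /= 2)
      for (size_t I = 0; I < N / 2; ++I)
        Elts[I] |= Elts[I + N / 2];
  } else {
    // Otherwise: a sequential chain of scalar ORs.
    for (size_t I = 1; I < N; ++I)
      Elts[0] |= Elts[I];
  }
  return Elts[0];
}
```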
For AArch64, we only need to scalarize if the input is <64b. If it's
greater than 64b then we can first do a fewElements step down to 64b,
taking advantage of vector instructions until we reach the point of
scalarization.
I also had to relax the verifier checks for reductions because the intrinsics
support <1 x EltTy> types, which we lower to scalars for GlobalISel.
Differential Revision: https://reviews.llvm.org/D108276
Add a generic opcode equivalent to the `llvm.isnan` intrinsic +
MachineVerifier support for it.
We need an opcode here because we may want target-specific lowering later on.
Differential Revision: https://reviews.llvm.org/D108222
This was picking a concrete size for a physical register, and
enforcing an exact match on the virtual register's type size. Some
targets add multiple types to a register class, and some are smaller
than the full bit width. For example x86 adds f32 to 128-bit xmm
registers, and AMDGPU adds i16/f16 to 32-bit registers.
It might be better to represent these cases as a copy of the full
register and an extraction of the subpart, but a lot of code assumes
you can directly copy. This will help fix the current usage of the DAG
calling convention infrastructure which is incompatible with how
GlobalISel is now using it.
The API is somewhat cumbersome here, but I just mirrored the existing
functions, except now with LLTs (and allow returning null on failure,
unlike the MVT version). I think the concept of selecting register
classes based on type is flawed to begin with, but I'm trying to keep
this compatible with the existing handling.
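For reference, the shape of the mirrored query (a sketch; treat the
exact name and signature as illustrative):
```
#include "llvm/CodeGen/TargetRegisterInfo.h"

// Like getMinimalPhysRegClass, but keyed on an LLT; returns null when
// no class containing Reg has the requested type size, instead of
// asserting like the MVT version.
const llvm::TargetRegisterClass *
classForPhysRegCopy(const llvm::TargetRegisterInfo &TRI,
                    llvm::MCRegister Reg, llvm::LLT Ty) {
  return TRI.getMinimalPhysRegClassLLT(Reg, Ty);
}
```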
If a virtual register is live in a landing pad, its def must be before
the call causing the exception, or the def must be the statepoint
instruction itself, in which case the def actually represents the
relocation of a GC pointer that is live into the landing pad.
The test shows this check triggering for an option under development,
use-registers-for-gc-values-in-landing-pad, which is off by default
until it is functionally correct.
Reviewers: reames, void, jyknight, nickdesaulniers, efriedma, arsenm, rnk
Reviewed By: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D100525
Darwin platforms for both AArch64 and X86 can provide optimized `bzero()`
routines. In this case, it may be preferable to use `bzero` in place of a
memset of 0.
This adds a G_BZERO generic opcode, similar to G_MEMSET et al. This opcode can
be generated by platforms which may want to use bzero.
To emit the G_BZERO, this adds a pre-legalize combine for AArch64. The
conditions for this are largely a port of the bzero case in
`AArch64SelectionDAGInfo::EmitTargetCodeForMemset`.
The only difference in comparison to the SelectionDAG code is that, when
compiling for minsize, this will fire for all memsets of 0. The original code
notes that it's not beneficial to do this for small memsets; however, using
bzero here will save a mov from wzr. For minsize, I think that it's preferable
to prioritise omitting the mov.
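Condensed, the firing condition is roughly the following (a standalone
sketch; the size threshold below is an assumed placeholder, not the
actual value from the port):
```
#include <cstdint>

// Fire only for memsets of zero; at minsize, fire unconditionally to
// save the 'mov wzr' materialization, otherwise require a large copy.
bool shouldEmitBzero(bool ValueIsZero, uint64_t Size, bool MinSize) {
  if (!ValueIsZero)
    return false;
  const uint64_t Threshold = 256; // assumed illustrative threshold
  return MinSize || Size >= Threshold;
}
```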
This also fixes a bug in the libcall legalization code which would delete
instructions which could not be legalized. It also adds a check to make sure
that we actually get a libcall name.
Code size improvements (Darwin):
- CTMark -Os: -0.0% geomean (-0.1% on pairlocalalign)
- CTMark -Oz: -0.2% geomean (-0.5% on bullet)
Differential Revision: https://reviews.llvm.org/D99358
There is a bunch of similar bitfield extraction code throughout *ISelDAGToDAG.
E.g, ARMISelDAGToDAG, AArch64ISelDAGToDAG, and AMDGPUISelDAGToDAG all contain
code that matches a bitfield extract from an and + right shift.
Rather than duplicating code in the same way, this adds two opcodes:
- G_UBFX (unsigned bitfield extract)
- G_SBFX (signed bitfield extract)
They work like this:
```
%x = G_UBFX %y, %lsb, %width
```
where `lsb` is the least-significant bit of the extraction and `width`
is the number of bits extracted.
This will extract `width` bits from `%y`, starting at `lsb`. G_UBFX zero-extends
the result, while G_SBFX sign-extends the result.
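The semantics, modeled on plain scalars (illustrative; assumes
0 < width < 64):
```
#include <cassert>
#include <cstdint>

uint64_t ubfx(uint64_t Y, unsigned Lsb, unsigned Width) {
  return (Y >> Lsb) & ((UINT64_C(1) << Width) - 1); // zero-extended field
}

int64_t sbfx(uint64_t Y, unsigned Lsb, unsigned Width) {
  uint64_t Field = (Y >> Lsb) & ((UINT64_C(1) << Width) - 1);
  uint64_t Sign = UINT64_C(1) << (Width - 1);
  return (int64_t)((Field ^ Sign) - Sign); // sign-extended field
}

int main() {
  assert(ubfx(0xF0, 4, 4) == 0xF); // bits [7:4] of 0xF0
  assert(sbfx(0xF0, 4, 4) == -1);  // same field, top bit set
  return 0;
}
```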
This should allow us to use the combiner to match the bitfield extraction
patterns rather than duplicating pattern-matching code in each target.
Differential Revision: https://reviews.llvm.org/D98464
- Add a new callback in `TargetInstrInfo` --
`isPCRelRegisterOperandLegal` -- to query whether a pc-relative
register MachineOperand is legal.
- Add a new function to search for a DebugLoc in reverse order
Authors: myhsu, m4yers, glaubitz
Differential Revision: https://reviews.llvm.org/D88386
This adds a G_ASSERT_SEXT opcode, similar to G_ASSERT_ZEXT. This instruction
signifies that an operation was already sign extended from a smaller type.
This is useful for functions with sign-extended parameters.
E.g.
```
define void @foo(i16 signext %x) {
...
}
```
This adds verifier, regbankselect, and instruction selection support for
G_ASSERT_SEXT equivalent to G_ASSERT_ZEXT.
Differential Revision: https://reviews.llvm.org/D96890