199 Commits

Author SHA1 Message Date
Nick Desaulniers
2fe24fdc94
reapply "[TargetInstrInfo] enable foldMemoryOperand for InlineAsm (#70743)" (#72910)
This reverts commit 42204c94ba9fcb0b4b1335e648ce140a3eef8a9d.

It was accidentally backed out.

#20571
#70743
2023-11-21 08:57:23 -08:00
Bill Wendling
42204c94ba Revert "[TargetInstrInfo] enable foldMemoryOperand for InlineAsm (#70743)"
This reverts commit 99ee2db198d86f685bcb07a1495a7115ffc31d7e.

It's causing ICEs in the ARM tests. See the comment here:

99ee2db198
2023-11-19 01:00:46 -08:00
Nick Desaulniers
99ee2db198
[TargetInstrInfo] enable foldMemoryOperand for InlineAsm (#70743)
foldMemoryOperand looks at pairs of instructions (generally a load into a
virtual register followed by a use of that register, or a def of a virtual
register followed by a store) and attempts to combine them. This can reduce
register pressure.

A prior commit added the ability to mark such a MachineOperand as
foldable. In terms of INLINEASM, this means that "rm" was used (rather
than just "r") to denote that the INLINEASM may use a memory operand
rather than a register operand. This effectively undoes decisions made
by the instruction selection framework.  Callers will be added in the
register allocation frameworks. This has been tested with all of the
above (which will come as follow-up patches).
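
An editorial illustration of the kind of operand this enables folding for (not code from the patch; it assumes an x86-64 target and GCC/Clang extended asm):

```cpp
// With "rm" the inline asm operand may be satisfied either by a register or
// directly by a memory reference. Under register pressure, folding the load
// into the asm's memory operand avoids materializing (and possibly spilling)
// a register just to feed the asm.
int consume(int x) {
  int y;
  // "rm": a register is preferred, but a stack slot is acceptable.
  asm("movl %1, %0" : "=r"(y) : "rm"(x));
  return y;
}
```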

Thanks to @topperc, who suggested this at last year's LLVM US Dev Meeting,
and @qcolombet, who confirmed this was the right approach.

Link: https://github.com/llvm/llvm-project/issues/20571
2023-11-17 10:03:14 -08:00
Nick Desaulniers
778a48468b
[InlineAsm] Steal a bit to denote a register is foldable (#70738)
When using the inline asm constraint string "rm" (or "g"), we generally
would like the compiler to choose "r", but it is permitted to choose "m"
if there's register pressure. This is distinct from "r" in which the
register is not permitted to be spilled to the stack.

The decision of which to use must be made at some point.  Currently, the
instruction selection frameworks (ISELs) make the choice, and the
register allocators had better be able to handle the result.

Steal a bit from Storage when using register operands to disambiguate
between the two cases.  Add helpers/getters/setters, and print in MIR
when such a register is foldable.

The getter will later be used by the register allocation frameworks (and
asserted by the ISELs) while the setters will be used by the instruction
selection frameworks.
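
A hypothetical sketch of the bit-stealing idea (invented names and layout, not the actual llvm::InlineAsm flag encoding):

```cpp
#include <cstdint>

// One storage word per inline-asm operand; a previously unused bit records
// whether a register operand may later be folded to a memory reference.
class OperandFlagSketch {
  static constexpr uint32_t MayFoldBit = 1u << 16;
  uint32_t Storage = 0;

public:
  // Existing payload lives in the low 16 bits.
  void setPayload(uint16_t P) { Storage = (Storage & ~uint32_t{0xFFFF}) | P; }
  uint16_t getPayload() const { return Storage & 0xFFFF; }

  // Setter used by instruction selection when "rm" (not just "r") was given.
  void setRegMayBeFolded() { Storage |= MayFoldBit; }
  // Getter queried later by the register allocation frameworks.
  bool regMayBeFolded() const { return (Storage & MayFoldBit) != 0; }
};
```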

Link: https://github.com/llvm/llvm-project/issues/20571
2023-11-03 09:37:07 -07:00
Rahman Lavaee
f70e39ec17
[BasicBlockSections] Apply path cloning with -basic-block-sections. (#68860)
28b9126879
introduced the path cloning format in the basic-block-sections profile.

This PR validates and applies path clonings.
A path cloning is valid if all of these conditions hold:
  1. All bb ids in the path are mapped to existing blocks.
  2. Each pair of consecutive bb ids in the path has a successor relationship
     in the CFG.
  3. The path does not include a block with indirect branches, except
     possibly as the last block.

Applying a path cloning involves cloning all blocks in the path (except
the first one) and setting up their branches. Once all clonings are applied,
the cluster information is used to guide block layout in the modified
function.
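
A rough, generic sketch of the three validity rules (illustrative only, not the BasicBlockSections implementation):

```cpp
#include <map>
#include <set>
#include <vector>

struct BlockInfo {
  std::set<int> Successors;        // CFG successors, by bb id
  bool HasIndirectBranch = false;
};

// Returns true if the cloning path satisfies the three rules above.
bool isValidCloningPath(const std::vector<int> &Path,
                        const std::map<int, BlockInfo> &Blocks) {
  for (size_t I = 0; I < Path.size(); ++I) {
    auto It = Blocks.find(Path[I]);
    if (It == Blocks.end())
      return false;                                      // rule 1
    if (I + 1 < Path.size()) {
      if (!It->second.Successors.count(Path[I + 1]))
        return false;                                    // rule 2
      if (It->second.HasIndirectBranch)
        return false;                                    // rule 3 (non-last)
    }
  }
  return true;
}
```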
2023-10-27 21:49:39 -07:00
Nick Desaulniers
86735a4353
reland [InlineAsm] wrap ConstraintCode in enum class NFC (#66264)
reland [InlineAsm] wrap ConstraintCode in enum class NFC (#66003)

This reverts commit ee643b706be2b6bef9980b25cc9cc988dab94bb5.

Fix up build failures in targets I missed in #66003

Kept as 3 commits so reviewers can more easily see what's changed. Will
squash when merging.

- reland [InlineAsm] wrap ConstraintCode in enum class NFC (#66003)
- fix all the targets I missed in #66003
- fix off by one found by llvm/test/CodeGen/SystemZ/inline-asm-addr.ll
2023-09-13 13:31:24 -07:00
Reid Kleckner
ee643b706b Revert "[InlineAsm] wrap ConstraintCode in enum class NFC (#66003)"
This reverts commit 2ca4d136124d151216aac77a0403dcb5c5835bcd.

Also revert the followup, "[InlineAsm] fix botched merge conflict resolution"

This reverts commit 8b9bf3a9f715ee5dce96eb1194441850c3663da1.

There were SystemZ and Mips build errors, too many to fix forward.
2023-09-13 09:58:02 -07:00
Nick Desaulniers
2ca4d13612
[InlineAsm] wrap ConstraintCode in enum class NFC (#66003)
Similar to
commit 2fad6e69851e ("[InlineAsm] wrap Kind in enum class NFC")

Fix the TODOs added in
commit 93bd428742f9 ("[InlineAsm] refactor InlineAsm class NFC
(#65649)")
2023-09-13 08:48:09 -07:00
Nick Desaulniers
93bd428742
[InlineAsm] refactor InlineAsm class NFC (#65649)
I would like to steal one of these bits to denote whether a kind may be
spilled by the register allocator or not, but I'm afraid to touch any of
this code that uses bitwise operations.

Make flags a first-class type using bitfields, rather than laundering data
around via `unsigned`.
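
For illustration only (made-up field names and widths, not the real definition), the difference looks roughly like:

```cpp
#include <cstdint>

// Before: callers shift and mask a raw word by hand, e.g.
//   unsigned Flag = (NumOps << 3) | KindRegUse;

// After: the same 32 bits, but each field is named and width-checked.
struct AsmOperandFlagSketch {
  uint32_t Kind           : 3;   // register use/def, memory, immediate, ...
  uint32_t NumOperands    : 13;  // how many operands this flag covers
  uint32_t RegMayBeFolded : 1;   // spare bit, available for later reuse
  uint32_t Unused         : 15;
};

static_assert(sizeof(AsmOperandFlagSketch) == sizeof(uint32_t),
              "still packs into the original word");
```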
2023-09-11 09:27:37 -07:00
Nick Desaulniers
2fad6e6985 [InlineAsm] wrap Kind in enum class NFC
Should add some minor type safety to the use of this information, since
there's quite a bit of metadata being laundered through an `unsigned`.

I'm looking to potentially add more bitfields to that `unsigned`, but I
find InlineAsm's big ol' bag of enum values and usage of `unsigned`
confusing, type-unsafe, and un-ergonomic. These can probably be better
abstracted.

I think the lack of static_cast outside of InlineAsm indicates the prior
code smell fixed here.
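
A generic before/after illustration of the type-safety point (names invented for the example):

```cpp
#include <cstdint>

namespace before {
  enum Kind : unsigned { RegUse = 1, RegDef = 2, Mem = 3 };
  inline void process(unsigned kind) { (void)kind; }  // accepts any unsigned
}

namespace after {
  enum class Kind : uint8_t { RegUse = 1, RegDef = 2, Mem = 3 };
  inline void process(Kind kind) { (void)kind; }      // only accepts a Kind
}

void demo() {
  before::process(42);                          // compiles, silently nonsense
  after::process(after::Kind::Mem);             // fine
  // after::process(42);                        // error: no implicit conversion
  after::process(static_cast<after::Kind>(2));  // explicit and greppable
}
```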

Reviewed By: qcolombet

Differential Revision: https://reviews.llvm.org/D159242
2023-08-31 08:54:51 -07:00
Daniel Hoekwater
8c249c44d4 [CodeGen][AArch64] Don't split functions with a red zone on AArch64
Because unconditional branch relaxation on AArch64 grows the stack to
spill a register, splitting a function would cause the red zone to be
overwritten. Explicitly disable MFS (machine function splitting) for such
functions.

Differential Revision: https://reviews.llvm.org/D157127
2023-08-24 21:57:35 +00:00
Kazu Hirata
d85993d28f [llvm] Remove redundant control flow statements (NFC) 2023-08-19 08:07:30 -07:00
Sander de Smalen
bbb95893de [TII] NFCI: Simplify the interface for isTriviallyReMaterializable
Currently `isTriviallyReMaterializable` calls
`isReallyTriviallyReMaterializable` and
`isReallyTriviallyReMaterializableGeneric`. The two interfaces
are confusing, but there are also some real issues with this.

The documentation of this function (see below) suggests that
`isReallyTriviallyReMaterializable` allows the target to override the
default behaviour.

  /// For instructions with opcodes for which the M_REMATERIALIZABLE flag is
  /// set, this hook lets the target specify whether the instruction is actually
  /// trivially rematerializable, taking into consideration its operands.

However, it implements something different. The default behaviour is the
analysis done in `isReallyTriviallyReMaterializableGeneric`, which tests
whether it is safe to rematerialize the MachineInstr.

The result of `isReallyTriviallyReMaterializable` is only considered if
`isReallyTriviallyReMaterializableGeneric` returns `false`.  That means
there is no way to override the default behaviour if
`isReallyTriviallyReMaterializableGeneric` returns true (i.e. it is safe to
rematerialize, but we'd rather not).

By making this a single interface, a target can override it to do either.
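
A simplified sketch of that single-interface shape (hypothetical names, not the actual TargetInstrInfo signatures):

```cpp
struct MachineInstr;  // opaque here; only passed by reference

struct TargetInstrInfoSketch {
  virtual ~TargetInstrInfoSketch() = default;

  // Single entry point: the default is the generic safety analysis.
  virtual bool isTriviallyReMaterializable(const MachineInstr &MI) const {
    return isReMatSafeGeneric(MI);
  }

protected:
  // Stand-in for the generic "is it safe to rematerialize" check.
  static bool isReMatSafeGeneric(const MachineInstr &) { return true; }
};

struct MyTargetInstrInfo : TargetInstrInfoSketch {
  bool isTriviallyReMaterializable(const MachineInstr &MI) const override {
    // The target can now reject cases the generic analysis would accept
    // ("safe, but we'd rather not"), which the old two-function split
    // did not allow.
    if (!isReMatSafeGeneric(MI))
      return false;
    return targetSpecificPolicy(MI);
  }
  static bool targetSpecificPolicy(const MachineInstr &) { return false; }
};
```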

Reviewed By: craig.topper, nemanjai

Differential Revision: https://reviews.llvm.org/D156520
2023-08-07 13:01:06 +00:00
Matt Arsenault
4d42e8b5d1 Reapply "[CodeGen]Allow targets to use target specific COPY instructions for live range splitting"
This reverts commit a496c8be6e638ae58bb45f13113dbe3a4b7b23fd.

The workaround in c26dfc81e254c78dc23579cf3d1336f77249e1f6 should work
around the underlying problem with SUBREG_TO_REG.
2023-07-31 20:15:45 -04:00
Jay Foad
2dcf051259 [CodeGen] Store call frame size in MachineBasicBlock
Record the call frame size on entry to each basic block. This is usually
zero except when a basic block has been split in the middle of a call
sequence.

This simplifies PEI::replaceFrameIndices which previously had to visit
basic blocks in a specific order and had special handling for
unreachable blocks. More importantly it paves the way for an equally
simple implementation of a backwards version of replaceFrameIndices,
which is required to fully convert PrologEpilogInserter to backwards
register scavenging, which is preferred because it does not rely on
accurate kill flags.

Differential Revision: https://reviews.llvm.org/D156113
2023-07-27 10:32:00 +01:00
Vitaly Buka
a496c8be6e Revert "[CodeGen]Allow targets to use target specific COPY instructions for live range splitting"
And dependent commits.

Details in D150388.

This reverts commit 825b7f0ca5f2211ec3c93139f98d1e24048c225c.
This reverts commit 7a98f084c4d121244ef7286bc6503b6a181d446e.
This reverts commit b4a62b1fa546312d882fa12dfdcd015177d66826.
This reverts commit b7836d856206ec39509d42529f958c920368166b.

No conflicts in the code; a few tests had conflicts in autogenerated CHECK lines:
llvm/test/CodeGen/Thumb2/mve-float32regloops.ll
llvm/test/CodeGen/AMDGPU/fix-frame-reg-in-custom-csr-spills.ll

Reviewed By: alexfh

Differential Revision: https://reviews.llvm.org/D156381
2023-07-26 22:13:32 -07:00
Yashwant Singh
b7836d8562 [CodeGen]Allow targets to use target specific COPY instructions for live range splitting
Replacing D143754. Right now live range splitting during register allocation
uses the TargetOpcode::COPY instruction for splitting. For the AMDGPU target
that creates a problem, as we have both vector and scalar copies. Vector
copies perform a copy over a vector register, but only on the lanes (threads)
that are active. This is mostly sufficient; however, we do run into cases
where we have to copy the entire vector register and not just the active-lane
data. One major place where we need that is live range splitting.

Allowing targets to use their own copy instructions (if defined) provides the
flexibility to lower these pseudo instructions to correct MIR.

- Introduce a getTargetCopyOpcode() virtual function and use it to generate
  copies in live range splitting.
- Replace the necessary MI.isCopy() checks with TII.isCopyInstr() in the
  register allocator pipeline.
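
A generic sketch of the hook shape (illustrative enum and class names, not the LLVM API):

```cpp
#include <cstdio>

enum class Opcode { COPY, WHOLE_REGISTER_COPY };

struct TIIHookSketch {
  virtual ~TIIHookSketch() = default;
  // Default: the generic COPY pseudo currently used for splitting.
  virtual Opcode getTargetCopyOpcode() const { return Opcode::COPY; }
};

struct AMDGPULikeTII : TIIHookSketch {
  // A target with lane-masked vector copies can hand back its own pseudo
  // that later lowers to a copy of the entire register.
  Opcode getTargetCopyOpcode() const override {
    return Opcode::WHOLE_REGISTER_COPY;
  }
};

void emitSplitCopy(const TIIHookSketch &TII) {
  Opcode Op = TII.getTargetCopyOpcode();  // previously hard-coded to COPY
  std::printf("split copy uses opcode %d\n", static_cast<int>(Op));
}
```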

Reviewed By: arsenm, cdevadas, kparzysz

Differential Revision: https://reviews.llvm.org/D150388
2023-07-07 22:29:50 +05:30
Yashwant Singh
7aebe4eaaa [CodeGen] Move lowerCopy from expandPostRA to TII
This will allow targets to lower their 'copy' instructions easily.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D152261
2023-07-04 09:04:49 +05:30
Jeremy Morse
7561f9cd5c [DebugInfo][CSInfo] Avoid crash when defining super-regs
In rare situations involving AVX intrinsics, it seems LLVM can be coaxed
into generating copies to arguments that look like this:

    $xmm0 = VMOVAPSrr $xmm1, implicit-def $ymm0
    CALL64 @something ymm0

This particular form of copy implicitly zeros the upper lanes of ymm0,
hence there's an implicit-def for the register in the copy. The X86
implementation of describeLoadedValue doesn't attempt to describe this sort
of copy which causes the generic implementation in
TargetInstrInfo::describeLoadedValue to fire an assertion saying it
expected the target hook to handle it.

Play it safe in the generic implementation and return the "no location /
value" return value, rather than asserting.

Differential Revision: https://reviews.llvm.org/D148626
2023-04-25 14:15:33 +01:00
Daniel Hoekwater
6b62166b4c Account for PATCHABLE instrs in Branch Relaxation
PATCHABLE_* instructions expand to up to 36-byte sleds. Updating the size of
PATCHABLE instructions causes them to be outlined, so we need to add a check
to prevent the outliner from considering basic blocks that contain PATCHABLE
instructions.

Differential Revision: https://reviews.llvm.org/D147982
2023-04-14 16:14:50 -07:00
duk
e8bc77ec08
[MachineOutliner] Fix label outlining regression introduced in D125072
Due to a change in the APIs used to determine which instructions can be outlined, the check for label outlining was never hit. Instead, all labels were considered invisible, which is the opposite of the intended behavior and causes obscure crashes down the line. We now replicate the original behavior more closely, with explicit checks for known-good and known-bad instruction types.

Reviewed by: paquette

Differential Revision: https://reviews.llvm.org/D147178
2023-03-30 14:43:12 -04:00
Felipe de Azevedo Piovezan
6e861d818a [MachineCombiner] Preserve debug instruction number
Each target's `TargetInstrInfo` is responsible for announcing which code
patterns it is able to transform during the MachineCombiner pass.
Currently, these patterns are applied without preserving the debug
instruction number required by the InstrRef implementation of
LiveDebugValues. As such, we've seen a number of examples where debug
information is dropped for variables in InstrRef mode that were
otherwise available in VarLoc mode. This has been observed both in X86
and AArch examples.

This commit is an initial attempt at preserving said numbers by changing
the general (target agnostic) implementation of TargetInstrInfo: the
reassociation pattern must keep the debug number of the "top level"
instruction, i.e., the instruction whose value represents the final
value of the arithmetic expression. Intermediate values must have their
debug number dropped, as they have no equivalent value in the
unoptimized code.
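
A toy illustration of the rule (not the MachineCombiner data structures): after reassociating `(a + b) + c` into `a + (b + c)`, the new root keeps the original debug instruction number and the new intermediate gets none.

```cpp
#include <cstdio>

struct InstrSketch {
  const char *Expr;
  unsigned DebugInstrNum;  // 0 means "no number assigned"
};

int main() {
  InstrSketch OldRoot = {"(a + b) + c", 7};  // value tracked by InstrRef

  // Reassociation builds a fresh intermediate and a fresh root.
  InstrSketch NewInter = {"b + c", 0};       // no equivalent in unoptimized code
  InstrSketch NewRoot  = {"a + (b + c)", OldRoot.DebugInstrNum};  // keeps 7

  std::printf("%s -> debug number %u\n", NewRoot.Expr, NewRoot.DebugInstrNum);
  std::printf("%s -> debug number %u\n", NewInter.Expr, NewInter.DebugInstrNum);
  return 0;
}
```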

Future work is required to update each target's
`TargetInstrInfo::genAlternativeCodeSequence` method.

Differential Revision: https://reviews.llvm.org/D145759
2023-03-13 09:29:30 -04:00
Felipe de Azevedo Piovezan
c0967995d2 [CodeGen] Prevent nullptr deref in genAlternativeCodeSequence
A pointer dereference was added (D141302) above an assert that checks
whether the pointer is null. This commit moves the assert above the
dereference and transforms it into an llvm_unreachable to better express
the intent that certain switch cases should never be reached.

Differential Revision: https://reviews.llvm.org/D145599
2023-03-08 13:41:32 -05:00
Anton Sidorenko
2693efa8a5 [MachineCombiner] Support local strategy for traces
For in-order cores MachineCombiner makes better decisions when the critical path
is calculated only for the current basic block and does not take into account
other blocks from the trace.

This patch adds a virtual method to TargetInstrInfo to allow each target to
decide which strategy to use.

Depends on D140541

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D140542
2023-02-17 13:17:22 +03:00
duk
d61d591411
[MachineOutliner] Make getOutliningType partially target-independent
The motivation behind this patch is to unify some of the outliner logic across architectures. This looks nicer in general and makes fixing [issues like this](https://reviews.llvm.org/D124707#3483805) easier.
There are some notable changes here:
    1. `isMetaInstruction()` is used directly instead of checking for specific meta-instructions like `IMPLICIT_DEF` or `KILL`. This was already done in the RISC-V implementation, but other architectures still did hardcoded checks.
        - As an exception to this, CFI instructions are explicitly delegated to the target because RISC-V has different handling for those.

    2. `isTargetIndex()` checks are replaced with an assert; none of the architectures supported actually use `MO_TargetIndex` at this point in time.

    3. `isCFIIndex()` and `isFI()` checks are also replaced with asserts, since these operands should not exist in [any context](https://reviews.llvm.org/D122635#3447214) at this stage in the pipeline.
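
A simplified, generic sketch of the split described above (made-up types; the real code operates on MachineInstr and the outliner's instruction-type enum):

```cpp
#include <cassert>

enum class OutlineType { Legal, Invisible, Illegal };

struct InstrDescSketch {
  bool IsMeta = false;          // e.g. IMPLICIT_DEF, KILL
  bool IsCFI = false;           // CFI directives: handling differs by target
  bool HasTargetIndex = false;  // should not appear at this stage at all
  bool HasFrameIndex = false;   // likewise
};

struct TargetOutlinerHooks {
  virtual ~TargetOutlinerHooks() = default;
  // Target-specific decisions, e.g. RISC-V's CFI handling.
  virtual OutlineType classify(const InstrDescSketch &) const {
    return OutlineType::Legal;
  }
};

OutlineType getOutliningType(const InstrDescSketch &I,
                             const TargetOutlinerHooks &Target) {
  // Operand kinds that should not exist at this point in the pipeline.
  assert(!I.HasTargetIndex && !I.HasFrameIndex &&
         "unexpected operand kind at outlining time");
  if (I.IsCFI)
    return Target.classify(I);      // delegated to the target
  if (I.IsMeta)
    return OutlineType::Invisible;  // shared, generic check
  return Target.classify(I);        // everything else: ask the target
}
```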

Reviewed by: paquette

Differential Revision: https://reviews.llvm.org/D125072
2023-02-09 14:35:00 -05:00
Jay Foad
768aed1378 [MC] Make more use of MCInstrDesc::operands. NFC.
Change MCInstrDesc::operands to return an ArrayRef so we can easily use
it everywhere instead of the (IMHO ugly) opInfo_begin and opInfo_end.
A future patch will remove opInfo_begin and opInfo_end.

Also use it instead of raw access to the OpInfo pointer. A future patch
will remove this pointer.

Differential Revision: https://reviews.llvm.org/D142213
2023-01-23 11:31:41 +00:00
Craig Topper
e72ca520bb [CodeGen] Remove uses of Register::isPhysicalRegister/isVirtualRegister. NFC
Use isPhysical/isVirtual methods.

Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D141715
2023-01-13 14:38:08 -08:00
Florian Hahn
20ecc07991
[MachineCombiner] Lift same-bb restriction for reassociable ops.
This patch relaxes the restriction that both reassociate operands must
be in the same block as the root instruction.

The comment indicates that the reason for this restriction was that the
operands not in the same block won't have a depth in the trace.

I believe this is outdated; if the operand is in a different block, it
must dominate the current block (otherwise it would need to be a phi),
which in turn means the operand's block must be included in the current
trace, and depths must be available.

There's a test case (no_reassociate_different_block) added in
70520e2f1c5fc4 which shows that we have accurate depths for operands
defined in other blocks.

This allows reassociation of code that computes the final reduction
value after vectorization, among other things.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D141302
2023-01-13 15:32:44 +00:00
Christudasan Devadasan
b5efec4b27 [CodeGen] Additional Register argument to storeRegToStackSlot/loadRegFromStackSlot
With D134950, targets get notified when a virtual register is created and/or
cloned. Targets can take the appropriate action in the delegate callback.
AMDGPU propagates the virtual register flags maintained in the target file
itself. These flags are useful for identifying certain kinds of machine
operands while inserting spill stores and reloads. Since RegAllocFast spills
the physical register itself, there is no way its virtual register can be
mapped back to retrieve the flags. This can be solved by passing the virtual
register as an additional argument. The argument is unused when the spill
interfaces are called from the greedy allocator or the PrologEpilogInserter,
and callers can pass a null register in such cases.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D138656
2022-12-17 11:55:34 +05:30
Fangrui Song
51b685734b [Transforms,CodeGen] std::optional::value => operator*/operator->
value() has undesired exception checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
2022-12-16 23:21:27 +00:00
Anton Sidorenko
f8ed709345 [MachineCombiner] Extend reassociation logic to handle inverse instructions
Machine combiner supports generic reassociation only of associative and
commutative instructions, for example (A + X) + Y => (X + Y) + A. However, we
can extend this generic support to handle patterns like
(X + A) - Y => (X - Y) + A, where `-` is the inverse of `+`.
This patch adds interface functions to process reassociation patterns of
associative/commutative instructions and their inverse variants with minimal
changes in backends.

Differential Revision: https://reviews.llvm.org/D136754
2022-12-07 13:50:28 +03:00
Fangrui Song
b0df70403d [Target] llvm::Optional => std::optional
The updated functions are mostly internal with a few exceptions (virtual functions in
TargetInstrInfo.h, TargetRegisterInfo.h).
To minimize changes to LLVMCodeGen, GlobalISel files are skipped.

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-04 22:43:14 +00:00
Kazu Hirata
998960ee1f [CodeGen] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-02 20:36:08 -08:00
David Green
71609871dd [AArch64][MachineCombiner] Use MIMetadata to copy pcsections metadata to reassociated instructions.
D134260/D138107 exposed that the MachineCombiner was not copying
pcsections metadata where it should. This patch switches the MIBuild
methods to use MIMetadata that can copy the debug loc and pcsections at
the same time.

Differential Revision: https://reviews.llvm.org/D138112
2022-11-16 13:22:48 +00:00
Anton Sidorenko
4431e705cc [NFC] Use forward decl of MachineCombinerPattern enum to reduce dependencies
Differential Revision: https://reviews.llvm.org/D135776
2022-10-13 14:56:14 +01:00
Tim Northover
b586dc21a7 Outliner: add "target-cpu" feature from source function to outlined
The CPU is used to determine which inline asm instructions are allowed, so
needs to be copied across in case the outlined function contains any.
2022-08-02 09:33:29 +01:00
Matt Arsenault
8d0383eb69 CodeGen: Remove AliasAnalysis from regalloc
This was stored in LiveIntervals, but not actually used for anything
related to LiveIntervals. It was only used in one check for whether a load
instruction is rematerializable. I also don't think this was entirely
correct, since it was implicitly assuming constant loads are also
dereferenceable.

Remove this and rely only on the invariant+dereferenceable flags in
the memory operand. Set the flag based on the AA query upfront. This
should have the same net benefit, but has the possible disadvantage of
making this AA query nonlazy.

Preserve the behavior of assuming pointsToConstantMemory implying
dereferenceable for now, but maybe this should be changed.
2022-07-18 17:23:41 -04:00
serge-sans-paille
989f1c72e0 Cleanup codegen includes
This is a (fixed) recommit of https://reviews.llvm.org/D121169

after:  1061034926
before: 1063332844

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D121681
2022-03-16 08:43:00 +01:00
Nico Weber
a278250b0f Revert "Cleanup codegen includes"
This reverts commit 7f230feeeac8a67b335f52bd2e900a05c6098f20.
Breaks CodeGenCUDA/link-device-bitcode.cu in check-clang,
and many LLVM tests, see comments on https://reviews.llvm.org/D121169
2022-03-10 07:59:22 -05:00
serge-sans-paille
7f230feeea Cleanup codegen includes
after:  1061034926
before: 1063332844

Differential Revision: https://reviews.llvm.org/D121169
2022-03-10 10:00:30 +01:00
Kazu Hirata
3a8c51480f [CodeGen] Use = default (NFC)
Identified with modernize-use-equals-default
2022-02-06 10:54:44 -08:00
serge-sans-paille
ffe8720aa0 Reduce dependencies on llvm/BinaryFormat/Dwarf.h
This header is very large (3M lines once expanded) and was included in
locations where DWARF-specific information was not needed.

More specifically, this commit suppresses the dependencies on
llvm/BinaryFormat/Dwarf.h in two headers: llvm/IR/IRBuilder.h and
llvm/IR/DebugInfoMetadata.h. As these headers (esp. the former) are widely used,
this has a decent impact on the number of preprocessed lines generated during
compilation of LLVM, as showcased below.

This is achieved by moving some definitions back to the .cpp file, with no
performance impact implied [0].

As a consequence of that patch, downstream users may need to manually include
some extra files:

llvm/IR/IRBuilder.h no longer includes llvm/BinaryFormat/Dwarf.h
llvm/IR/DebugInfoMetadata.h no longer includes llvm/BinaryFormat/Dwarf.h

In some situations, code may be relying on the fact that
llvm/BinaryFormat/Dwarf.h was including llvm/ADT/Triple.h; this hidden
dependency now needs to be made explicit.

$ clang++ -E  -Iinclude -I../llvm/include ../llvm/lib/Transforms/Scalar/*.cpp -std=c++14 -fno-rtti -fno-exceptions | wc -l
after:   10978519
before:  11245451

Related Discourse thread: https://llvm.discourse.group/t/include-what-you-use-include-cleanup
[0] https://llvm-compile-time-tracker.com/compare.php?from=fa7145dfbf94cb93b1c3e610582c495cb806569b&to=995d3e326ee1d9489145e20762c65465a9caeab4&stat=instructions

Differential Revision: https://reviews.llvm.org/D118781
2022-02-04 11:44:03 +01:00
John Brawn
94843ea7d7 [AArch64] Make machine combiner patterns preserve MIFlags
This is mainly done so that we don't lose the nofpexcept flag once we
start emitting it.

Differential Revision: https://reviews.llvm.org/D118621
2022-02-03 11:58:59 +00:00
Fangrui Song
a6a07a514b [MachineOutliner] Don't outline functions starting with PATCHABLE_FUNCTION_ENTER/FENTRL_CALL
MachineOutliner may outline a "patchable-function-entry" function whose body has
a TargetOpcode::PATCHABLE_FUNCTION_ENTER MachineInstr. This is incorrect because
the special code sequence must stay unchanged to be used at run-time.
Avoid outlining PATCHABLE_FUNCTION_ENTER. While here, avoid outlining FENTRY_CALL too
(which doesn't reproduce currently) to allow phase ordering flexibility.

Fixes #52635

Reviewed By: paquette

Differential Revision: https://reviews.llvm.org/D115614
2021-12-13 13:24:29 -08:00
Mircea Trofin
4afae6f7c7 [NFC] Rename MachineFunction::cloneMachineInstrBundle (coding style) 2021-12-08 21:12:54 -08:00
Ties Stuij
f5f28d5b0c [ARM] Implement BTI placement pass for PACBTI-M
This patch implements a new machine function pass in the ARM backend for
placing BTI instructions. It is similar to the existing AArch64
aarch64-branch-targets pass.

BTI instructions are inserted into basic blocks that:
- Have their address taken
- Are the entry block of a function, if the function has external
  linkage or has its address taken
- Are mentioned in jump tables
- Are exception/cleanup landing pads
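
As a rough summary of these placement rules (an editorial sketch with invented field names, not the pass's actual code):

```cpp
struct BlockDescSketch {
  bool AddressTaken = false;
  bool IsEntryBlock = false;
  bool IsJumpTableTarget = false;
  bool IsEHOrCleanupLandingPad = false;
  bool FuncHasExternalLinkage = false;
  bool FuncAddressTaken = false;
};

// A block gets a BTI if it can be reached indirectly.
bool needsBTI(const BlockDescSketch &B) {
  return B.AddressTaken || B.IsJumpTableTarget || B.IsEHOrCleanupLandingPad ||
         (B.IsEntryBlock && (B.FuncHasExternalLinkage || B.FuncAddressTaken));
}
```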

Each BTI instruction is placed at the beginning of a BB, after the
so-called meta instructions (e.g. exception handler labels).

Each outlining candidate and the outlined function need to agree on whether
BTI placement is enabled. If branch target enforcement is disabled for a
function, the outliner should not covertly enable it by emitting a call to an
outlined function that begins with a BTI.

The cost model of the outliner is adjusted to account for the extra BTI
instructions in the outlined function.

The ARM Constant Islands pass will maintain the count of jump tables that
reference a block. A `BTI` instruction is removed from a block only if the
reference count reaches zero.

PAC instructions in entry blocks are replaced with PACBTI instructions (tests
for this case will be added in a later patch because the compiler currently does
not generate PAC instructions).

The ARM Constant Island pass is adjusted to handle BTI
instructions correctly.

Functions with static linkage that don't have their address taken can
still be called indirectly by linker-generated veneers, and thus their
entry points need to be marked with BTI or PACBTI.

The changes are tested using "LLVM IR -> assembly" tests; jump tables
also have a MIR test. Unfortunately it is not possible to add MIR tests
for exception handling and computed gotos because of MIR parser
limitations.

This patch is part of a series that adds support for the PACBTI-M extension of
the Armv8.1-M architecture, as detailed here:

https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/armv8-1-m-pointer-authentication-and-branch-target-identification-extension

The PACBTI-M specification can be found in the Armv8-M Architecture Reference
Manual:

https://developer.arm.com/documentation/ddi0553/latest

The following people contributed to this patch:

- Mikhail Maltsev
- Momchil Velikov
- Ties Stuij

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D112426
2021-12-01 12:54:05 +00:00
Kazu Hirata
259cd6f893 [llvm] Use range-based for loops (NFC) 2021-11-25 22:17:10 -08:00
Kazu Hirata
d243cbf8ea [llvm] Use isa instead of dyn_cast (NFC) 2021-11-14 19:40:46 -08:00
Stanislav Mekhanoshin
08d7eec06e Revert "Allow rematerialization of virtual reg uses"
Reverted due to two distinct performance regression reports.

This reverts commit 92c1fd19abb15bc68b1127a26137a69e033cdb39.
2021-09-24 10:26:11 -07:00
Stanislav Mekhanoshin
92c1fd19ab Allow rematerialization of virtual reg uses
Currently the isReallyTriviallyReMaterializableGeneric() implementation
prevents rematerialization of any virtual register use on the grounds
that it is not a trivial rematerialization and that we do not want to
extend live ranges.

It appears that the LRE logic does not attempt to extend the live range of
a source register for rematerialization, so that is not an issue.
That is checked in LiveRangeEdit::allUsesAvailableAt().

The only non-trivial aspect of it is accounting for tied defs, which
normally represent a read-modify-write operation and are not rematerializable.

The test for a tied-def situation already exists in the
/CodeGen/AMDGPU/remat-vop.mir,
test_no_remat_v_cvt_f32_i32_sdwa_dst_unused_preserve.

The change has affected ARM/Thumb, Mips, RISCV, and x86. For the targets
where I more or less understand the asm it seems to reduce spilling
(as expected) or be neutral. However, it needs a review by all targets'
specialists.

Differential Revision: https://reviews.llvm.org/D106408
2021-08-24 11:09:02 -07:00