Remove the following intrinsics which can be trivially replaced with an
`addrspacecast`:
* llvm.nvvm.ptr.gen.to.global
* llvm.nvvm.ptr.gen.to.shared
* llvm.nvvm.ptr.gen.to.constant
* llvm.nvvm.ptr.gen.to.local
* llvm.nvvm.ptr.global.to.gen
* llvm.nvvm.ptr.shared.to.gen
* llvm.nvvm.ptr.constant.to.gen
* llvm.nvvm.ptr.local.to.gen
Also, clean up the NVPTX lowering of `addrspacecast`, making it more
concise.
This was reverted to avoid conflicts while reverting #107655. Re-landing
unchanged.
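As a rough, hedged sketch of the replacement (not code from the patch; the NVPTX address-space numbering used below is the conventional one and is an assumption here), a call to e.g. `llvm.nvvm.ptr.gen.to.global` boils down to a single `addrspacecast`:
```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Sketch: what llvm.nvvm.ptr.gen.to.global(ptr %p) reduces to.
// Assumes NVPTX's conventional address-space numbering (0 = generic,
// 1 = global).
static Value *castGenericToGlobal(IRBuilder<> &Builder, Value *GenericPtr) {
  Type *GlobalPtrTy =
      PointerType::get(Builder.getContext(), /*AddressSpace=*/1);
  return Builder.CreateAddrSpaceCast(GenericPtr, GlobalPtrTy, "as.global");
}
```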
This change deprecates the following intrinsics which can be trivially
converted to LLVM funnel-shift intrinsics:
- @llvm.nvvm.rotate.b32
- @llvm.nvvm.rotate.right.b64
- @llvm.nvvm.rotate.b64
This fixes a bug in the previous version (#107655) which flipped the
order of the operands to the PTX funnel shift instruction. In LLVM IR
the high bits are the first arg and the low bits are the second arg,
while in PTX this is reversed.
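For illustration only (an assumed shape of the rewrite, not the patch's code): a rotate is a funnel shift whose two data operands are the same value, and in IR the first operand carries the high bits, so the operand pair has to be swapped when emitting the PTX funnel-shift instruction:
```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"

using namespace llvm;

// Sketch: the deprecated rotates expressed as funnel shifts.
// rotl(x, n) == fshl(x, x, n) and rotr(x, n) == fshr(x, x, n); the first
// operand of fshl/fshr holds the high bits, the second the low bits.
static Value *emitRotateLeft(IRBuilder<> &Builder, Value *X, Value *Amt) {
  return Builder.CreateIntrinsic(Intrinsic::fshl, {X->getType()}, {X, X, Amt});
}

static Value *emitRotateRight(IRBuilder<> &Builder, Value *X, Value *Amt) {
  return Builder.CreateIntrinsic(Intrinsic::fshr, {X->getType()}, {X, X, Amt});
}
```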
When searching for an intrinsic name in a target-specific slice of the
intrinsic name table, skip over the target prefix. For such cases,
currently the first loop iteration in `lookupLLVMIntrinsicByName` does
nothing (i.e., `Low` and `High` stay unchanged and it does not shrink
down the search window), so we can skip this useless first iteration by
skipping over the target prefix.
Remove the following intrinsics which can be trivially replaced with an
`addrspacecast`:
* llvm.nvvm.ptr.gen.to.global
* llvm.nvvm.ptr.gen.to.shared
* llvm.nvvm.ptr.gen.to.constant
* llvm.nvvm.ptr.gen.to.local
* llvm.nvvm.ptr.global.to.gen
* llvm.nvvm.ptr.shared.to.gen
* llvm.nvvm.ptr.constant.to.gen
* llvm.nvvm.ptr.local.to.gen
Also, clean up the NVPTX lowering of `addrspacecast`, making it more
concise.
Remove the following methods:
* Type::getNonOpaquePointerElementType()
* Type::isOpaquePointerTy()
* LLVMContext::supportsTypedPointers()
* LLVMContext::setOpaquePointers()
These were used temporarily during the opaque pointers migration,
and are no longer needed.
This change deprecates the following intrinsics which can be trivially
converted to LLVM funnel-shift intrinsics:
- @llvm.nvvm.rotate.b32
- @llvm.nvvm.rotate.right.b64
- @llvm.nvvm.rotate.b64
Remove the following intrinsics which correspond directly to a bitcast:
- llvm.nvvm.bitcast.f2i
- llvm.nvvm.bitcast.i2f
- llvm.nvvm.bitcast.d2ll
- llvm.nvvm.bitcast.ll2d
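A hedged sketch of the replacement (assumed, not taken from the patch): each of these maps onto a single `bitcast` between a float and the integer of the same width:
```
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Sketch: llvm.nvvm.bitcast.f2i(float %x) and llvm.nvvm.bitcast.d2ll(double %x)
// become plain bitcasts to the same-width integer types.
static Value *bitcastFloatToI32(IRBuilder<> &Builder, Value *F) {
  return Builder.CreateBitCast(F, Builder.getInt32Ty(), "f2i");
}

static Value *bitcastDoubleToI64(IRBuilder<> &Builder, Value *D) {
  return Builder.CreateBitCast(D, Builder.getInt64Ty(), "d2ll");
}
```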
Since the migration from the `@llvm.dbg.value` intrinsic to `#dbg_value`
records, there is no way to retrieve the debug records for an
`Instruction` in the LLVM-C API.
Previously, with debug info intrinsics, retrieving debug info for an
`Instruction` could be done with `LLVMGetNextInstruction`, because the
intrinsic call was also an instruction.
However, to be able to retrieve debug info with the current LLVM, where
debug records are used, the `getDbgRecordRange()` iterator needs to be
exposed.
Add new functions for DbgRecord sequence traversal:
LLVMGetFirstDbgRecord
LLVMGetLastDbgRecord
LLVMGetNextDbgRecord
LLVMGetPreviousDbgRecord
See llvm/docs/RemoveDIsDebugInfo.md and release notes.
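A minimal usage sketch of the new traversal functions, assuming they follow the usual LLVM-C conventions (declarations reachable via `llvm-c/Core.h`, with NULL marking the end of the sequence):
```
#include "llvm-c/Core.h"

// Sketch: walk every debug record attached to a single instruction.
// Assumes the new functions follow the usual LLVM-C iteration pattern of
// returning NULL once the end of the sequence has been reached.
static void visitDbgRecords(LLVMValueRef Inst) {
  for (LLVMDbgRecordRef Rec = LLVMGetFirstDbgRecord(Inst); Rec;
       Rec = LLVMGetNextDbgRecord(Rec)) {
    // Inspect Rec here (e.g. map it back to its source variable/metadata).
  }
}
```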
It is almost always simpler to use {} instead of std::nullopt to
initialize an empty ArrayRef. This patch changes all occurrences I could
find in LLVM itself. In future the ArrayRef(std::nullopt_t) constructor
could be deprecated or removed.
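A small before/after sketch at a hypothetical call site (not from the patch):
```
#include "llvm/ADT/ArrayRef.h"

#include <optional>

using namespace llvm;

static int sum(ArrayRef<int> Vals) {
  int Total = 0;
  for (int V : Vals)
    Total += V;
  return Total;
}

void example() {
  int Old = sum(std::nullopt); // old style: goes through ArrayRef(std::nullopt_t)
  int New = sum({});           // new style: simply an empty ArrayRef
  (void)Old;
  (void)New;
}
```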
During inter-procedural SCCP, also infer attributes on arguments, not
just return values. This allows other non-interprocedural passes to make
use of the information later.
In `User::operator new` a single allocation is created to store the
`User` object itself, "intrusive" operands or a pointer for "hung off"
operands, and the descriptor. After allocation, details about the layout
(number of operands, how the operands are stored, if there is a
descriptor) are stored in the `User` object by setting its fields. The
`Value` and `User` constructors are then very careful not to initialize
these fields so that the values set during allocation can be
subsequently read. However, when the `User` object is returned from
`operator new` [its value is technically "indeterminate" and so reading
a field without first initializing it is undefined behavior (and will be
erroneous in
C++26)](https://en.cppreference.com/w/cpp/language/default_initialization#Indeterminate_and_erroneous_values).
We discovered this issue when trying to build LLVM using MSVC's [`/sdl`
flag](https://learn.microsoft.com/en-us/cpp/build/reference/sdl-enable-additional-security-checks?view=msvc-170)
which clears class fields after allocation (the docs say that this
feature shouldn't be turned on for custom allocators and should only
clear pointers, but that doesn't seem to match the implementation).
MSVC's behavior both with and without the `/sdl` flag is standards
conforming since a program is supposed to initialize storage before
reading from it, thus the compiler implementation changing any values
will never be observed in a well-formed program. The standard also
provides no provisions for making storage bytes not indeterminate by
setting them during allocation or `operator new`.
The fix for this is to create a set of types that encode the layout and
provide these to both `operator new` and the constructor:
* The `AllocMarker` types are used to select which `operator new` to
use.
* `AllocMarker` can then be implicitly converted to an `AllocInfo` which
tells the constructor how the type was laid out.
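The mechanics look roughly like the following heavily simplified sketch; the real `AllocMarker`/`AllocInfo` carry more layout detail, and the names of the helpers below are illustrative only:
```
#include <cstddef>
#include <new>

// Heavily simplified sketch of the pattern: the layout is described once,
// used to size the allocation, and then handed to the constructor so it
// never reads fields that were only written during allocation.
struct AllocInfo {
  unsigned NumOps;    // number of intrusive operands co-allocated after the object
  bool HasDescriptor; // whether a descriptor is co-allocated as well
};

template <unsigned NumOps, bool HasDescriptor> struct AllocMarker {
  // Implicit conversion hands the constructor the same layout information
  // that was used to pick and size the allocation.
  constexpr operator AllocInfo() const { return {NumOps, HasDescriptor}; }
};

struct FakeUser {
  unsigned NumOperands;
  bool HasDescriptor;

  // The marker selects the operator new overload and sizes the allocation...
  template <unsigned N, bool D>
  void *operator new(std::size_t Size, AllocMarker<N, D>) {
    return ::operator new(Size + N * sizeof(void *) +
                          (D ? sizeof(void *) : 0));
  }

  // ...and the constructor is told the layout explicitly instead of reading
  // fields whose values would otherwise be indeterminate.
  explicit FakeUser(AllocInfo Info)
      : NumOperands(Info.NumOps), HasDescriptor(Info.HasDescriptor) {}
};

FakeUser *createUser() {
  constexpr AllocMarker<2, /*HasDescriptor=*/false> Marker{};
  return new (Marker) FakeUser(Marker); // the same marker drives both steps
}
```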
Adds a target codegen info class for DirectX. For now it always translates
the `__hlsl_resource_t` handle to `target("dx.TypedBuffer", i32, 1, 0, 1)`
(`RWBuffer<int>`). More work is needed to determine the actual target
extension type and its parameters based on the resource handle attributes.
Part 1/2 of #95952
This PR adds a DK_Instrumentation entry to DiagnosticKind, and a
DiagnosticInfoInstrumentation class, derived from DiagnosticInfo, for IR
instrumentation reporting.
During the ThinLTO indexing step for one of our large applications, we
create 4 million instances of FunctionSummary.
Changing:
std::vector<EdgeTy> CallGraphEdgeList;
to:
SmallVector<EdgeTy, 0> CallGraphEdgeList;
in FunctionSummary reduces the size of each instance by 8 bytes. The
rest of the patch makes the same change to other places so that the
types stay compatible across function boundaries.
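The saving comes from SmallVector's representation; a hedged sketch of the comparison on a typical 64-bit target (the `std::vector` figure depends on the standard library, and `EdgeTy` below is a stand-in, not the real summary edge type):
```
#include "llvm/ADT/SmallVector.h"

#include <cstdio>
#include <vector>

// Hypothetical stand-in for the summary edge type; only its size matters here.
struct EdgeTy {
  void *Callee;
  unsigned long long Info;
};

int main() {
  // std::vector is typically three pointers (24 bytes on LP64), while
  // SmallVector<T, 0> stores a pointer plus 32-bit size and capacity
  // (16 bytes) and no inline element storage: an 8-byte saving per instance.
  std::printf("std::vector<EdgeTy>:    %zu bytes\n", sizeof(std::vector<EdgeTy>));
  std::printf("SmallVector<EdgeTy, 0>: %zu bytes\n",
              sizeof(llvm::SmallVector<EdgeTy, 0>));
  return 0;
}
```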
The primary motivation is to remove `EntryCount` from `FunctionSummary`.
This frees 8 bytes out of `sizeof(FunctionSummary)` (136 bytes as of
64498c5483).
While I'm at it, this PR cleans up {SummaryBasedOptimizations,
SyntheticCountsPropagation} since they were not used and there are no
plans to invest in them further.
With this patch, the bitcode writer writes a placeholder 0 at the byte
offset of `EntryCount` and the bitcode reader can parse the function entry
count at the correct byte offset. Added a TODO to stop writing
`EntryCount` and bump the bitcode version.
Since IR Types are immutable it makes sense to check them on
construction instead of in the IR Verifier pass.
This patch checks that some TargetExtTypes are well-formed in the sense
that they have the expected number of type parameters and integer
parameters. When called from LLParser it gives a diagnostic message.
When called from anywhere else it just asserts that they are
well-formed.
This handles the edge case where BitWidth is 1 and the increment
produces a value that is not valid at that width, when we just want
wrap-around.
Split out of https://github.com/llvm/llvm-project/pull/80309.
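As a standalone illustration of the edge case (not the patch's code): at bit width 1 the only representable values are 0 and 1, so a naive plain-integer increment of the maximum produces 2, which does not exist at that width, while APInt arithmetic wraps:
```
#include "llvm/ADT/APInt.h"

#include <cassert>
#include <cstdint>

using namespace llvm;

int main() {
  APInt X(/*numBits=*/1, /*val=*/1);      // the maximum value at bit width 1
  uint64_t Naive = X.getZExtValue() + 1;  // 2, which is not valid at width 1
  ++X;                                    // APInt arithmetic wraps back to 0
  assert(Naive == 2 && X.isZero());
  (void)Naive;
  return 0;
}
```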
Change the "fixed encoding" table used for encoding intrinsic
type signature to use 16-bit encoding as opposed to 32-bit.
This results in both space and time improvements. For space,
the total static storage size (in bytes) of this info reduces by 50%:
- Current = 14193*4 (Fixed table) + 16058 + 3 (Long Table) = 72833
- New size = 14193*2 (Fixed table) + 19879 + 3 (Long Table) = 48268.
- Reduction = 50.9%
For time, with the added benchmark, we see a 7.3% speedup in
`GetIntrinsicInfoTableEntries` benchmark. Actual output of the
benchmark in included in the GitHub MR.
When serialising to textual IR, there can be constant Values referred to
by DbgRecords that don't appear anywhere else, and have types hidden
even deeper inside them. Enumerate these when enumerating all types.
Test by Mikael Holmén.
Summary:
This patch proposes new LLVM types for RISCV vector tuples, represented
as `TargetExtType`, which contain both `LMUL` and `NF` (num_fields)
information and keep it all the way down to `SelectionDAG` to match the
corresponding `MVT` (supported in the following patch).
Detail:
Currently we have built-in C types for RISCV vector tuples, e.g.
`vint32m1x2_t`; however, they are represented as structures of scalable
vector types, i.e. `{<vscale x 2 x i32>, <vscale x 2 x i32>}`. This loses
the num_fields (NF) information, as the struct is flattened during
`SelectionDAG`, which makes it impossible to handle inline assembly of
vector tuple types; it also makes the calling-convention handling of
vector tuple types not straightforward and hard to implement in the
allocation code, i.e. `RVVArgDispatcher`.
The LLVM IR for the example above is then represented as
`target("riscv.vector.tuple", <vscale x 8 x i8>, 2)`, in which the first
type parameter is a scalable vector of i8 elements of the equivalent
size, and the following integer parameter is the `NF` of the tuple.
New RISCV-specific vector insert/extract intrinsics,
`llvm.riscv.vector.insert` and `llvm.riscv.vector.extract`, are also added
to handle tuple-type subvector insertion/extraction, since the generic
ones only operate on `VectorType` and not `TargetExtType`.
There are a total of 32 LLVM types added, one for each combination with
`VREGS * NF <= 8`, where `VREGS` is the number of vector registers needed
for each `LMUL` and `NF` is num_fields.
The names of the types are:
```
target("riscv.vector.tuple", <vscale x 1 x i8>, 2) // LMUL = mf8, NF = 2
target("riscv.vector.tuple", <vscale x 1 x i8>, 3) // LMUL = mf8, NF = 3
target("riscv.vector.tuple", <vscale x 1 x i8>, 4) // LMUL = mf8, NF = 4
target("riscv.vector.tuple", <vscale x 1 x i8>, 5) // LMUL = mf8, NF = 5
target("riscv.vector.tuple", <vscale x 1 x i8>, 6) // LMUL = mf8, NF = 6
target("riscv.vector.tuple", <vscale x 1 x i8>, 7) // LMUL = mf8, NF = 7
target("riscv.vector.tuple", <vscale x 1 x i8>, 8) // LMUL = mf8, NF = 8
target("riscv.vector.tuple", <vscale x 2 x i8>, 2) // LMUL = mf4, NF = 2
target("riscv.vector.tuple", <vscale x 2 x i8>, 3) // LMUL = mf4, NF = 3
target("riscv.vector.tuple", <vscale x 2 x i8>, 4) // LMUL = mf4, NF = 4
target("riscv.vector.tuple", <vscale x 2 x i8>, 5) // LMUL = mf4, NF = 5
target("riscv.vector.tuple", <vscale x 2 x i8>, 6) // LMUL = mf4, NF = 6
target("riscv.vector.tuple", <vscale x 2 x i8>, 7) // LMUL = mf4, NF = 7
target("riscv.vector.tuple", <vscale x 2 x i8>, 8) // LMUL = mf4, NF = 8
target("riscv.vector.tuple", <vscale x 4 x i8>, 2) // LMUL = mf2, NF = 2
target("riscv.vector.tuple", <vscale x 4 x i8>, 3) // LMUL = mf2, NF = 3
target("riscv.vector.tuple", <vscale x 4 x i8>, 4) // LMUL = mf2, NF = 4
target("riscv.vector.tuple", <vscale x 4 x i8>, 5) // LMUL = mf2, NF = 5
target("riscv.vector.tuple", <vscale x 4 x i8>, 6) // LMUL = mf2, NF = 6
target("riscv.vector.tuple", <vscale x 4 x i8>, 7) // LMUL = mf2, NF = 7
target("riscv.vector.tuple", <vscale x 4 x i8>, 8) // LMUL = mf2, NF = 8
target("riscv.vector.tuple", <vscale x 8 x i8>, 2) // LMUL = m1, NF = 2
target("riscv.vector.tuple", <vscale x 8 x i8>, 3) // LMUL = m1, NF = 3
target("riscv.vector.tuple", <vscale x 8 x i8>, 4) // LMUL = m1, NF = 4
target("riscv.vector.tuple", <vscale x 8 x i8>, 5) // LMUL = m1, NF = 5
target("riscv.vector.tuple", <vscale x 8 x i8>, 6) // LMUL = m1, NF = 6
target("riscv.vector.tuple", <vscale x 8 x i8>, 7) // LMUL = m1, NF = 7
target("riscv.vector.tuple", <vscale x 8 x i8>, 8) // LMUL = m1, NF = 8
target("riscv.vector.tuple", <vscale x 16 x i8>, 2) // LMUL = m2, NF = 2
target("riscv.vector.tuple", <vscale x 16 x i8>, 3) // LMUL = m2, NF = 3
target("riscv.vector.tuple", <vscale x 16 x i8>, 4) // LMUL = m2, NF = 4
target("riscv.vector.tuple", <vscale x 32 x i8>, 2) // LMUL = m4, NF = 2
```
RFC:
https://discourse.llvm.org/t/rfc-support-riscv-vector-tuple-type-in-llvm/80005
This reverts commit 178fc4779ece31392a2cd01472b0279e50b3a199.
This attribute is not needed now that we are using the lsan-style
ScopedDisabler for disabling this sanitizer.
See #106736 and #106125 for more discussion.
These lists are quite static. Heavy use of macros is undesirable, and
not idiomatic in LLVM, so let's just use the naive switch cases.
Note that the first two fields in the CONSTRAINEDFP property were
utterly unused (aside from a C++ test).
In the same vein as https://github.com/llvm/llvm-project/pull/105551.
Once both changes have landed, we'll be left with _BINARYOP which needs
a bit of additional untangling, and the actual opcode mappings.
These lists are quite static and several of the parameters are actually
constant across all users. Heavy use of macros is undesirable, and not
idiomatic in LLVM, so let's just use the naive switch cases.
I'll probably continue with removing the other property macros. These
two just happened to be the two I actually had to figure out for an
unrelated change.
This patch is part of a set of patches that add an `-fextend-lifetimes`
flag to clang, which extends the lifetimes of local variables and
parameters for improved debuggability. In addition to that flag, the
patch series adds a pragma to selectively disable `-fextend-lifetimes`,
and an `-fextend-this-ptr` flag which functions as `-fextend-lifetimes`
for `this` pointers only. All changes and tests in these patches were
written by Wolfgang Pieb (@wolfy1961), while Stephen Tozer (@SLTozer)
has handled review and merging. The extend-lifetimes flag is intended
eventually to be enabled by `-Og`, as discussed in the RFC
here:
https://discourse.llvm.org/t/rfc-redefine-og-o1-and-add-a-new-level-of-og/72850
This patch implements a new intrinsic instruction in LLVM,
`llvm.fake.use` in IR and `FAKE_USE` in MIR, that takes a single operand
and has no effect other than "using" its operand, to ensure that its
operand remains live until after the fake use. This patch does not emit
fake uses anywhere; the next patch in this sequence causes them to be
emitted from the clang frontend, such that for each variable (or `this`) a
fake.use of that variable's value is inserted at the end of that variable's
scope. This patch covers everything post-frontend, which
is largely just the basic plumbing for a new intrinsic/instruction,
along with a few steps to preserve the fake uses through optimizations
(such as moving them ahead of a tail call or translating them through
SROA).
Co-authored-by: Stephen Tozer <stephen.tozer@sony.com>
Fix #105571, which demonstrates an end() iterator dereference when
performing a non-empty splice to end() from a region that ends at
Src::end().
Rather than calling Instruction::adoptDbgRecords from Dest, create a marker
(which takes an iterator) and absorbDebugValues onto that. The "absorb" variant
doesn't clean up the source marker, which in this case we know is a trailing
marker, so we have to do that manually.
Not quite NFC: fixes the splitBasicBlockBefore case when we split before an
instruction with debug records (but without the headBit set, i.e., we are
splitting before the instruction but after the debug records that come before
it). splitBasicBlockBefore splices the instructions before the split point into
a new block. Prior to this patch, the debug records would get shifted up to the
front of the spliced instructions (as seen in the modified unittest - I believe
the unittest was checking erroneous behaviour). We instead want to leave those
debug records at the end of the spliced instructions.
The functionality of the deleted `else if` branch is covered by the remaining
`if` now that `DestMarker` is set to the trailing marker if `Dest` is `end()`.
Previously the "===" markers were sometimes detached, now we always detach
them and always reattach them.
Note: `deleteTrailingDbgRecords` only "unlinks" the tailing marker from the
block, it doesn't delete anything. The trailing marker is still cleaned up
properly inside the final `if` body with `DestMarker->eraseFromParent();`.
Part 1 of 2 needed for #105571
This patch moves the stepvector intrinsic out of the experimental
namespace.
This intrinsic has existed in LLVM for several years and is widely used.
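For reference, a small sketch of emitting it through IRBuilder (the `CreateStepVector` helper already exists; the example vector type is an arbitrary choice for illustration):
```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Sketch: materialize <0, 1, 2, ...> as a scalable <vscale x 4 x i32> vector
// with the (now non-experimental) stepvector intrinsic.
static Value *emitStepVector(IRBuilder<> &Builder) {
  Type *VecTy = ScalableVectorType::get(Builder.getInt32Ty(), /*MinNumElts=*/4);
  return Builder.CreateStepVector(VecTy, "step");
}
```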