6850 Commits

Author SHA1 Message Date
Mehdi Amini
a04b0d587a
Implement Move-assignment for llvm::Module (NFC) (#117270)
Move-assignment is quite convenient in various situations, and
working around its absence is very convoluted.
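
A minimal sketch of the kind of use this enables (the identifiers are illustrative, not from the patch):

```
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <utility>

using namespace llvm;

void replaceInPlace() {
  LLVMContext Ctx;
  Module Live("live", Ctx);       // module held by some long-lived owner
  Module Staging("staging", Ctx); // e.g. a freshly built replacement

  // With move-assignment available, the owner's Module is replaced in place
  // instead of destroying and re-creating the owning object.
  Live = std::move(Staging);
}
```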
2024-11-22 19:48:34 +01:00
Jay Foad
294c5cb2be
[IR] Add TargetExtType::CanBeLocal property (#99016)
Add a property to allow marking target extension types that cannot be
used in an alloca instruction or byval argument, similar to CanBeGlobal
for global variables.
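
A minimal sketch of how a frontend or verifier check might consult the new property (the type name and query site are illustrative assumptions, not from the patch):

```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// Returns whether a value of this target extension type may be placed in an
// alloca or passed byval; the "spirv.DeviceEvent" name is only an example.
bool mayUseOnStack(LLVMContext &Ctx) {
  TargetExtType *TET = TargetExtType::get(Ctx, "spirv.DeviceEvent");
  return TET->hasProperty(TargetExtType::CanBeLocal);
}
```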

---------

Co-authored-by: Nikita Popov <github@npopov.com>
2024-11-22 10:02:43 +00:00
Nikita Popov
cc70e12ebd [Operator] Truncate large type sizes in GEP calculations
If the size is larger than the index width, truncate it instead
of asserting.

Longer-term we should consider rejecting types larger than the
index size in the verifier, though this is probably tricky in
practice (it's address space dependent, and types are owned by
the context, not the module).

Fixes https://github.com/llvm/llvm-project/issues/116960.
2024-11-21 15:00:43 +01:00
Paul Walker
4872ecf1cc
[LLVM][IR] Teach extractelement folds about constant ConstantInt/FP. (#116793) 2024-11-21 12:39:53 +00:00
Paul Walker
af641ff260
[LLVM][IR] Refactor ConstantFold:FoldBitCast to fully support vector ConstantInt/FP. (#116787)
This fixes the code quality issue reported in
https://github.com/llvm/llvm-project/pull/111149.
2024-11-21 12:05:15 +00:00
Paul Walker
56c091ea71
[LLVM][IR] Use splat syntax when printing ConstantExpr based splats. (#116856)
This brings the printing of scalable vector constant splats in line with
their fixed-length counterparts.
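
A minimal sketch of the observable change, using only core IR APIs (the printed text in the comment is the intent described above, not verified output):

```
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Print a scalable splat constant; with this patch the textual form should
// use the compact "splat (i32 7)" spelling instead of the old
// shufflevector/insertelement ConstantExpr form.
void printScalableSplat(LLVMContext &Ctx) {
  auto *VTy = ScalableVectorType::get(Type::getInt32Ty(Ctx), 4);
  Constant *Splat = ConstantInt::get(VTy, 7);
  Splat->print(outs());
}
```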
2024-11-21 11:21:12 +00:00
Ramkumar Ramachandra
2b5214b9e1
IR: de-duplicate two CmpInst routines (NFC) (#116866)
De-duplicate the functions getSignedPredicate and getUnsignedPredicate,
nearly identical versions of which were present in CmpInst and ICmpInst,
reducing confusion.
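
A minimal sketch of the surviving API, assuming the retained copies are the static helpers on CmpInst:

```
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// Map between the unsigned and signed forms of an integer predicate.
void flipSignedness() {
  CmpInst::Predicate P = CmpInst::ICMP_ULT;
  CmpInst::Predicate SP = CmpInst::getSignedPredicate(P);    // ICMP_SLT
  CmpInst::Predicate UP = CmpInst::getUnsignedPredicate(SP); // ICMP_ULT again
  (void)UP;
}
```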
2024-11-20 09:30:35 +00:00
Jay Foad
497b1ae15f
[IR] Improve check for target extension types disallowed in globals. (#116639)
For target extension types that are not allowed to be used as globals,
also disallow them nested inside array and structure types.
2024-11-19 09:29:01 +00:00
Tom Honermann
dc087d1a33
Avoid undefined behavior in shift operators during constant folding of DIExpressions. (#116466)
Bit shift operations with a shift operand greater than or equal to the bit width
of the (promoted) value type result in undefined behavior according to C++
[expr.shift]p1. This change adds checking for this situation and avoids attempts
to constant fold DIExpressions that would otherwise provoke such behavior.
An existing test that presumably intended to exercise shifts at the UB boundary
has been updated; it now checks for shifts of 64 bits instead of 65. This issue
was reported by a static analysis tool; no actual cases of shift operations that
would result in undefined behavior in practice have been identified.
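
A minimal sketch of the hazard being guarded against (not the actual folder code):

```
#include <cstdint>
#include <optional>

// Per C++ [expr.shift]p1, shifting a 64-bit value by 64 or more bits is
// undefined behavior, so a constant folder must refuse to evaluate it
// rather than compute the shift.
std::optional<uint64_t> foldShl(uint64_t Val, uint64_t Amt) {
  if (Amt >= 64)
    return std::nullopt; // do not fold; Val << Amt would be UB here
  return Val << Amt;
}
```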
2024-11-18 18:32:20 -05:00
Teresa Johnson
9513f2fdf2
[MemProf] Print full context hash when reporting hinted bytes (#114465)
Improve the information printed when -memprof-report-hinted-sizes is
enabled. Now print the full context hash computed from the original
profile, similar to what we do when reporting matching statistics. This
will make it easier to correlate with the profile.

Note that the full context hash must be computed at profile match time
and saved in the metadata and summary, because we may trim the context
during matching when it isn't needed for distinguishing hotness.
Similarly, due to the context trimming, we may have more than one full
context id and total size pair per MIB in the metadata and summary,
which now get a list of these pairs.

Remove the old aggregate size from the metadata and summary support.
One other change from the prior support is that we no longer write the
size information into the combined index for the LTO backends, which
don't use this information; this reduces unnecessary bloat in
distributed index files.
2024-11-15 08:24:44 -08:00
Alex Bradbury
298127dcbe Reapply [IR] Initial introduction of llvm.experimental.memset_pattern (#97583)
Relands 7ff3a9acd84654c9ec2939f45ba27f162ae7fbc3 after regenerating the
test case.

Supersedes the draft PR #94992, taking a different approach following
feedback:
* Lower in PreISelIntrinsicLowering
* Don't require that the number of bytes to set is a compile-time
constant
* Define llvm.memset_pattern rather than llvm.memset_pattern.inline

As discussed in the [RFC
thread](https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496),
the intent is that the intrinsic will be lowered to loops, a sequence of
stores, or libcalls depending on the expected cost and availability of
libcalls on the target. Right now, there's just a single lowering path
that aims to handle all cases. My intent would be to follow up with
additional PRs that add additional optimisations when possible (e.g.
when libcalls are available, when arguments are known to be constant
etc).
2024-11-15 15:21:39 +00:00
Alex Bradbury
0fb8fac5d6 Revert "[IR] Initial introduction of llvm.experimental.memset_pattern (#97583)"
This reverts commit 7ff3a9acd84654c9ec2939f45ba27f162ae7fbc3.

Recent scheduling changes mean tests need to be re-generated. Reverting
to green while I do that.
2024-11-15 14:48:32 +00:00
Nikita Popov
dad9e4a165
[RuntimeLibCalls] Consistently disable unavailable libcalls (#116214)
The logic for marking runtime libcalls unavailable currently duplicates
essentially the same logic for some random subset of targets, where
someone reported an issue and then someone went and fixed the issue for
that specific target only. However, the availability for most of these
is completely target independent. In particular:

 * MULO_I128 is never available in libgcc
 * Various I128 libcalls are not available for 32-bit targets in libgcc
 * powi is never available in MSVCRT

Unify the logic for these, so we don't miss any targets. This fixes
https://github.com/llvm/llvm-project/issues/16778 on AArch64, which is
one of the targets that was previously missed in this logic.
2024-11-15 15:46:28 +01:00
Alex Bradbury
7ff3a9acd8
[IR] Initial introduction of llvm.experimental.memset_pattern (#97583)
Supersedes the draft PR #94992, taking a different approach following
feedback:
* Lower in PreISelIntrinsicLowering
* Don't require that the number of bytes to set is a compile-time
constant
* Define llvm.memset_pattern rather than llvm.memset_pattern.inline

As discussed in the [RFC
thread](https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496),
the intent is that the intrinsic will be lowered to loops, a sequence of
stores, or libcalls depending on the expected cost and availability of
libcalls on the target. Right now, there's just a single lowering path
that aims to handle all cases. My intent would be to follow up with
additional PRs that add additional optimisations when possible (e.g.
when libcalls are available, when arguments are known to be constant
etc).
2024-11-15 14:07:46 +00:00
Graham Hunter
ed5aaddd7b
[IR] Vector extract last active element intrinsic (#113587)
As discussed in #112738, it may be better to have an intrinsic to represent vector element extracts based on mask bits. This intrinsic is for the case of extracting the last active element, if any, or a default value if the mask is all-false.

The target-agnostic SelectionDAG lowering is similar to the IR in #106560.
2024-11-14 17:48:43 +00:00
Ricardo Jesus
e52238b59f
[AArch64] Add @llvm.experimental.vector.match (#101974)
This patch introduces an experimental intrinsic for matching the
elements of one vector against the elements of another.

For AArch64 targets that support SVE2, the intrinsic lowers to a MATCH
instruction for supported fixed and scalable vector types.
2024-11-14 09:00:19 +00:00
Daniel Paoliello
fa0cf3d39e
[llvm][aarch64] Fix Arm64EC name mangling algorithm (#115567)
Arm64EC uses a special name mangling mode that adds `$$h` between the
symbol name and its type. In MSVC's name mangling `@` is used to
separate the name and type BUT it is also used for other purposes, such
as the separator between paths in a fully qualified name.

The original algorithm was quite fragile and made assumptions that
didn't hold true for all MSVC mangled symbols, so instead of trying to
improve this algorithm we are now using the demangler to indicate where
the insertion point should be (i.e., to parse the fully-qualified name
and return the current string offset).

Also fixed `isArm64ECMangledFunctionName` to search for `@$$h` since the
`$$h` must always be after a `@`.

Fixes #115231
2024-11-13 15:35:03 -08:00
Augusto Noronha
67fb2686fb
[DebugInfo] Add a specification attribute to LLVM DebugInfo (#115362)
Add a specification attribute to LLVM DebugInfo, which is analogous
to DWARF's DW_AT_specification. According to the DWARF spec:
"A debugging information entry that represents a declaration that
completes another (earlier) non-defining declaration may have a
DW_AT_specification attribute whose value is a reference to the
debugging information entry representing the non-defining declaration."

This patch allows types to be specifications of other types. This is
used by Swift to represent generic types. For example, given this Swift
program:

```
struct MyStruct<T> {
    let t: T
}

let variable = MyStruct<Int>(t: 43)
```

The Swift compiler emits (roughly) an unsubstituted type for MyStruct<T>:
```
DW_TAG_structure_type
    DW_AT_name	("MyStruct")
    // "$s1w8MyStructVyxGD" is a Swift mangled name roughly equivalent to 
    // MyStruct<T>
    DW_AT_linkage_name	("$s1w8MyStructVyxGD")
    // other attributes here
```
And a specification for MyStruct<Int>:
```
DW_TAG_structure_type
    DW_AT_specification	(<link to "MyStruct">)
    // "$s1w8MyStructVySiGD" is a Swift mangled name equivalent to
    // MyStruct<Int>
    DW_AT_linkage_name	("$s1w8MyStructVySiGD")
    DW_AT_byte_size	(0x08)
    // other attributes here
```
2024-11-13 09:55:37 -08:00
Paul Walker
97298853b4
[LLVM][IR] Teach constant integer binop folds about vector ConstantInts. (#115739)
The existing logic mostly works, with the main changes (sketched below) being:
 * Use getScalarSizeInBits instead of IntegerType::getBitWidth
 * Use ConstantInt::get(Type *, ...) instead of ConstantInt::get(LLVMContext &, ...)
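
A minimal sketch of the two adjustments, with the old spellings in comments (the surrounding fold logic is omitted):

```
#include "llvm/IR/Constants.h"

using namespace llvm;

// Both lines work for scalar and vector integer constants alike.
Constant *buildFoldedResult(Constant *C1, const APInt &ResultVal) {
  // Was: cast<IntegerType>(C1->getType())->getBitWidth()
  unsigned BW = C1->getType()->getScalarSizeInBits();
  (void)BW;
  // Was: ConstantInt::get(Ctx, ResultVal); this form splats for vector types.
  return ConstantInt::get(C1->getType(), ResultVal);
}
```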
2024-11-13 12:33:56 +00:00
Kazu Hirata
4048c64306
[llvm] Remove redundant control flow statements (NFC) (#115831)
Identified with readability-redundant-control-flow.
2024-11-12 10:09:42 -08:00
Nikita Popov
63fb980d50
[IR] Add helper for comparing KnownBits with IR predicate (NFC) (#115878)
Add `ICmpInst::compare()` overload accepting `KnownBits`, similar to the
existing one accepting `APInt`. This is not directly part of KnownBits
(or APInt) for layering reasons.
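
A minimal sketch of how a caller might use it (the optional-returning shape is an assumption based on the description above):

```
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/KnownBits.h"
#include <optional>

using namespace llvm;

// Unlike the APInt overload, the answer may be indeterminate when the known
// bits of A and B do not decide the predicate either way.
void compareKnownValues(Value *A, Value *B, const DataLayout &DL) {
  KnownBits LHS = computeKnownBits(A, DL);
  KnownBits RHS = computeKnownBits(B, DL);
  if (std::optional<bool> Res =
          ICmpInst::compare(LHS, RHS, ICmpInst::ICMP_ULT)) {
    (void)*Res; // the predicate evaluates to *Res for all possible A and B
  }
}
```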
2024-11-12 17:41:08 +01:00
Kazu Hirata
5b19ed8bb4
[llvm] Migrate away from PointerUnion::{is,get,dyn_cast} (NFC) (#115626)
Note that PointerUnion::{is,get,dyn_cast} have been soft deprecated in
PointerUnion.h:

  // FIXME: Replace the uses of is(), get() and dyn_cast() with
  //        isa<T>, cast<T> and the llvm::dyn_cast<T>
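
A minimal sketch of the migration pattern (the element types are illustrative):

```
#include "llvm/ADT/PointerUnion.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Free-function casts replace the soft-deprecated member functions.
void visit(PointerUnion<Instruction *, ConstantExpr *> PU) {
  if (isa<Instruction *>(PU)) {               // was: PU.is<Instruction *>()
    Instruction *I = cast<Instruction *>(PU); // was: PU.get<Instruction *>()
    (void)I;
  } else if (auto *CE = dyn_cast<ConstantExpr *>(PU)) {
    (void)CE;                                 // was: PU.dyn_cast<ConstantExpr *>()
  }
}
```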
2024-11-10 07:24:06 -08:00
Adrian Kuegel
3c3f19ca5e Revert "[NFC][LLVM] Use namespace Intrinsic in Intrinsics.cpp (#114822)"
This reverts commit c2b61fcb3cd4ffa286b24437b7b6d66f0dee6c25.

The Intrinsic namespace contains memcpy, which conflicts with memcpy
from the string.h header.
2024-11-08 14:32:31 +00:00
Hans Wennborg
546066e4f7 Fix DIBuilder::createVariantPart after f6617d65e496
which ended up passing 0 for the Discriminator arg, Discriminator for
the DataLocation arg, etc.

The DICompositeType::get's new NumExtraInhabitants parameter is at the
end, and has a default value, so no change in the caller is necessary.

See comment on https://github.com/llvm/llvm-project/pull/112590
2024-11-08 10:50:36 +01:00
Augusto Noronha
f6617d65e4
[DebugInfo] Add num_extra_inhabitants to debug info (#112590)
An extra inhabitant is a bit pattern that does not represent a valid
value for instances of a given type. The number of extra inhabitants is
the number of those bit configurations.

This is used by Swift to save space when composing types. For example,
because Bool only needs 2 bit patterns to represent all of its values
(true and false), an Optional<Bool> only occupies 1 byte in memory by
using a bit configuration that is unused by Bool. Which bit patterns are
unused is part of the ABI of the language.

Since Swift generics are not monomorphized, by using dynamic libraries
you can have generic types whose size, alignment, etc, are known only
at runtime (which is why this feature is needed).

This patch adds num_extra_inhabitants to LLVM-IR debug info and in DWARF
as an Apple extension.
2024-11-06 15:48:04 -08:00
yingopq
86e4beb702
[MIPS] LLVM data layout give i128 an alignment of 16 for mips64 (#112084)
Fix parts of #102783.
2024-11-06 16:14:30 +01:00
Paul Walker
38fffa630e
[LLVM][IR] Use splat syntax when printing Constant[Data]Vector. (#112548) 2024-11-06 11:53:33 +00:00
Rahul Joshi
c2b61fcb3c
[NFC][LLVM] Use namespace Intrinsic in Intrinsics.cpp (#114822)
Add `using namespace Intrinsic` to Intrinsics.cpp file.
2024-11-05 07:27:51 -08:00
Jay Foad
4831e0aa88
[IR] Disallow recursive types (#114799)
StructType::setBody is the only mechanism that can potentially create
recursion in the type system. Add a runtime check that it is not
actually used to create recursion.

If the check fails, report an error from LLParser, BitcodeReader and
IRLinker. In all other cases assert that the check succeeds.

In the future StructType::setBody will be removed in favor of specifying
the body when the type is created, so any performance hit from this
runtime check will be temporary.
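
A minimal sketch of the pattern that is now rejected (the struct name is illustrative):

```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// A struct whose body contains the struct itself, i.e. %S = type { %S }.
// LLParser, BitcodeReader and IRLinker now report an error for this; a
// direct API call like the one below trips the runtime check instead.
void makeRecursiveType(LLVMContext &Ctx) {
  StructType *ST = StructType::create(Ctx, "S");
  ST->setBody({ST});
}
```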
2024-11-05 09:41:10 +00:00
Rahul Joshi
b5dc7b8fc2
[LLVM] Change LLVMIntrinsicCopyOverloadedName API return type (#114334)
Change the return type of `LLVMIntrinsicCopyOverloadedName` and
`LLVMIntrinsicCopyOverloadedName2` to `char *` instead of `const char *`,
since the returned memory is owned by the caller and we expect the
returned pointer to be passed to `free` to deallocate it (without casting
it back to a non-const pointer).
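
A minimal sketch of the intended usage after the change, treating the parameter list of the existing C API as given:

```
#include "llvm-c/Core.h"
#include <stdlib.h>

// The returned buffer is owned by the caller and can be handed straight to
// free, with no cast away from const required.
void useOverloadedName(LLVMModuleRef Mod, unsigned IntrinsicID,
                       LLVMTypeRef *ParamTypes, size_t NumParams) {
  size_t Len = 0;
  char *Name = LLVMIntrinsicCopyOverloadedName2(Mod, IntrinsicID, ParamTypes,
                                                NumParams, &Len);
  // ... use Name ...
  free(Name);
}
```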
2024-11-04 08:21:01 -08:00
Ramkumar Ramachandra
cd16b077bf
IR: introduce CmpInst::isEquivalence (#111979)
Steal impliesEquivalanceIf{True,False} (sic) from GVN, extend it to
floating-point constant vectors, and account for denormal values.
Since InstCombine also performs GVN-like replacements, introduce
CmpInst::isEquivalence, and remove the corresponding code in GVN, with
the intent of using it in more places.

The code in GVN also has a bad FIXME saying that the optimization may be
valid in the nsz case, but this is not the case.

Alive2 proof: https://alive2.llvm.org/ce/z/vEaK8M
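
A minimal sketch of the intended call site (the exact signature is an assumption based on the description above):

```
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// A GVN-style substitution of one operand for the other is only sound in the
// region where Cmp is known true if the compare is an equivalence relation.
bool canSubstituteOperands(const CmpInst *Cmp) {
  return Cmp->isEquivalence();
}
```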
2024-11-04 15:54:59 +00:00
Kazu Hirata
6f10b65297
[IR] Remove unused includes (NFC) (#114679)
Identified with misc-include-cleaner.
2024-11-02 16:54:06 -07:00
Yingwei Zheng
a77dedcacb
[InstSimplify][InstCombine][ConstantFold] Move vector div/rem by zero fold to InstCombine (#114280)
Previously we folded `div/rem X, C` into `poison` if any element of the
constant divisor `C` is zero or undef. However, this is incorrect when
threading udiv over a vector select:
https://alive2.llvm.org/ce/z/3Ninx5
```
define <2 x i32> @vec_select_udiv_poison(<2 x i1> %x) {
  %sel = select <2 x i1> %x, <2 x i32> <i32 -1, i32 -1>, <2 x i32> <i32 0, i32 1>
  %div = udiv <2 x i32> <i32 42, i32 -7>, %sel
  ret <2 x i32> %div
}
```
In this case, `threadBinOpOverSelect` folds `udiv <i32 42, i32 -7>, <i32
-1, i32 -1>` and `udiv <i32 42, i32 -7>, <i32 0, i32 1>` into
`zeroinitializer` and `poison`, respectively. One solution is to
introduce a new flag indicating that we are threading over a vector
select, but it would require modifying both `InstSimplify` and
`ConstantFold`.

However, this optimization doesn't provide benefits to real-world
programs:

https://dtcxzyw.github.io/llvm-opt-benchmark/coverage/data/zyw/opt-ci/actions-runner/_work/llvm-opt-benchmark/llvm-opt-benchmark/llvm/llvm-project/llvm/lib/IR/ConstantFold.cpp.html#L908

https://dtcxzyw.github.io/llvm-opt-benchmark/coverage/data/zyw/opt-ci/actions-runner/_work/llvm-opt-benchmark/llvm-opt-benchmark/llvm/llvm-project/llvm/lib/Analysis/InstructionSimplify.cpp.html#L1107

This patch moves the fold into InstCombine to avoid breaking numerous
existing tests.

Fixes #114191 and #113866 (only poison-safety issue).
2024-11-01 22:56:22 +08:00
Matt Arsenault
9e8219a78c
IR: Fix verifier missing addrspace mismatch in vector GEPs (#114091) 2024-10-29 19:41:49 -07:00
tf2spi
f23bdbbaff
Add DILabel functions for LLVM-C (#112840)
Addresses #112799
2024-10-28 10:59:53 -07:00
Alex MacLean
fb33af08e4
[NVPTX] Remove nvvm.ldg.global.* intrinsics (#112834)
Remove these intrinsics which can be better represented by load
instructions with `!invariant.load` metadata:

- llvm.nvvm.ldg.global.i
- llvm.nvvm.ldg.global.f
- llvm.nvvm.ldg.global.p
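
A minimal sketch of the replacement pattern the message describes, using IRBuilder:

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

// An ordinary load tagged with !invariant.load carries the same
// "this memory never changes" guarantee the removed intrinsics conveyed.
Value *emitInvariantLoad(IRBuilder<> &B, Type *ValTy, Value *Ptr) {
  LoadInst *LI = B.CreateLoad(ValTy, Ptr, "ldg");
  LI->setMetadata(LLVMContext::MD_invariant_load,
                  MDNode::get(B.getContext(), {}));
  return LI;
}
```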
2024-10-27 16:14:13 -07:00
Kyungwoo Lee
0dd9fdcf83
[StructuralHash] Support Differences (#112638)
This computes a structural hash while allowing certain operands to be
selectively ignored based on a custom function that is provided. Instead
of a single hash value, it now returns FunctionHashInfo, which includes a
hash value, an instruction mapping, and a map that tracks the location of
each ignored operand and its corresponding hash value.

Depends on https://github.com/llvm/llvm-project/pull/112621.
This is a patch for
https://discourse.llvm.org/t/rfc-global-function-merging/82608.
2024-10-26 20:02:05 -07:00
Kyungwoo Lee
1941c5180b Reland (2nd attempt) [StructuralHash] Refactor (#112621)
This is largely NFC, and it prepares for #112638.
 - Use stable_hash instead of uint64_t
 - Rename update* to hash* functions. They compute stable_hash locally and return it.

This is a patch for
https://discourse.llvm.org/t/rfc-global-function-merging/82608.
2024-10-26 16:12:28 -07:00
Kyungwoo Lee
d104b8e827 Revert "Reland [StructuralHash] Refactor (#112621)"
This reverts commit 98ca9a635bd2fb98cee473a9558687a5b522e219.
2024-10-26 13:55:46 -07:00
Kyungwoo Lee
98ca9a635b Reland [StructuralHash] Refactor (#112621)
This is largely NFC, and it prepares for #112638.
 - Use stable_hash instead of uint64_t
 - Rename update* to hash* functions. They compute stable_hash locally and return it.

This is a patch for
https://discourse.llvm.org/t/rfc-global-function-merging/82608.
2024-10-26 12:07:57 -07:00
Kyungwoo Lee
9672375623 Revert "[StructuralHash] Refactor (#112621)"
This reverts commit b667d161f0a9ff6b29cda0ccdb0081610c1e8b8c.
2024-10-26 09:55:21 -07:00
Kyungwoo Lee
b667d161f0
[StructuralHash] Refactor (#112621)
This is largely NFC, and it prepares for #112638.
 - Use stable_hash instead of uint64_t
- Rename update* to hash* functions. They compute stable_hash locally and return it.
 
This is a patch for
https://discourse.llvm.org/t/rfc-global-function-merging/82608.
2024-10-26 09:20:26 -07:00
davidtrevelyan
4102625380
[rtsan][llvm][NFC] Rename sanitize_realtime_unsafe attr to sanitize_realtime_blocking (#113155)
# What

This PR renames the newly-introduced llvm attribute
`sanitize_realtime_unsafe` to `sanitize_realtime_blocking`. Likewise,
sibling variables such as `SanitizeRealtimeUnsafe` are renamed to
`SanitizeRealtimeBlocking`. There are no other functional
changes.


# Why?

- There are a number of problems that can cause a function to be
real-time "unsafe",
- we wish to communicate what problems rtsan detects and *why* they're
unsafe, and
- a generic "unsafe" attribute is, in our opinion, too broad a net -
which may lead to future implementations that need extra contextual
information passed through them in order to communicate meaningful
reasons to users.
- We want to avoid this situation and make the runtime library boundary
API/ABI as simple as possible, and
- we believe that restricting the scope of attributes to names like
`sanitize_realtime_blocking` is an effective means of doing so.

We also feel that the symmetry between `[[clang::blocking]]` and
`sanitize_realtime_blocking` is easier to follow as a developer.

# Concerns

- I'm aware that the LLVM attribute `sanitize_realtime_unsafe` has been
part of the tree for a few weeks now (introduced here:
https://github.com/llvm/llvm-project/pull/106754). Given that it hasn't
been released in version 20 yet, am I correct in considering this to not
be a breaking change?
2024-10-26 13:06:11 +01:00
Gang Chen
4ac0e7e400
[AMDGPU] Add a type for the named barrier (#113614) 2024-10-25 11:24:47 -07:00
ssijaric-nv
14db069468
[InstCombine] Fix a cycle when folding fneg(select) with scalable vector types (#112465)
The two folding operations are causing a cycle for the following case
with scalable vector types:

```
define <vscale x 2 x double> @test_fneg_select_abs(<vscale x 2 x i1> %cond, <vscale x 2 x double> %b) {
  %1 = select <vscale x 2 x i1> %cond, <vscale x 2 x double> zeroinitializer, <vscale x 2 x double> %b
  %2 = fneg fast <vscale x 2 x double> %1
  ret <vscale x 2 x double> %2
}
```

1) fold fneg:  -(Cond ? C : Y) -> Cond ? -C : -Y

2) fold select: (Cond ? -X : -Y) -> -(Cond ? X : Y)

1) results in the following, since '<vscale x 2 x double> zeroinitializer'
passes the check for the immediate constant:

```
%.neg = fneg fast <vscale x 2 x double> zeroinitializer
%b.neg = fneg fast <vscale x 2 x double> %b
%1 = select fast <vscale x 2 x i1> %cond, <vscale x 2 x double> %.neg, <vscale x 2 x double> %b.neg
```

and so we end up going back and forth between 1) and 2).

Attempt to fold scalable vector constants, so that we end up with a
splat instead:

```
define <vscale x 2 x double> @test_fneg_select_abs(<vscale x 2 x i1> %cond, <vscale x 2 x double> %b) {
  %b.neg = fneg fast <vscale x 2 x double> %b
  %1 = select fast <vscale x 2 x i1> %cond, <vscale x 2 x double> shufflevector (<vscale x 2 x double> insertelement (<vscale x 2 x double> poison, double -0.000000e+00, i64 0), <vscale x 2 x double> poison, <vscale x 2 x i32> zeroinitializer), <vscale x 2 x double> %b.neg
  ret <vscale x 2 x double> %1
}
```
2024-10-25 10:47:39 -07:00
Alexander Richardson
305a1ceae3
[DataLayout] Refactor storage of non-integral address spaces
Instead of storing this as a separate array of non-integral pointers,
add it to the PointerSpec class. This will allow for future
simplifications such as splitting the non-integral property into
multiple distinct ones: relocatable (i.e. non-stable representation) and
non-integral representation (i.e. pointers with metadata).

Reviewed By: arsenm

Pull Request: https://github.com/llvm/llvm-project/pull/105734
2024-10-25 10:02:40 -07:00
Jay Foad
90cdc03e7f
[IR] Fix undiagnosed cases of structs containing scalable vectors (#113455)
Type::isScalableTy and StructType::containsScalableVectorType failed to
detect some cases of structs containing scalable vectors because
containsScalableVectorType did not call back into isScalableTy to check
the element types. Fix this, which requires sharing the same Visited set
in both functions. Also change the external API so that callers are
never required to pass in a Visited set, and normalize the naming to
isScalableTy.
2024-10-25 12:56:10 +01:00
Thomas Fransham
b8fddca7bd
[llvm] Support llvm::Any across shared libraries on windows (#108051)
This is part of the effort to support for enabling plugins on windows by
adding better support for building llvm as a DLL. The export macros used
here were added in #96630

Since shared library symbols aren't deduplicated across multiple
libraries on Windows as they are on Linux, we have to explicitly import
and export `Any::TypeId` template instantiations for the uses of
`llvm::Any` in the LLVM codebase to support LLVM Windows shared library
builds.
This change ensures that external code, including LLVM's own tests, can
use PassManager callbacks when LLVM is built as a DLL.

I also removed the only use of llvm::Any for LoopNest, which only existed
in debug code; there also doesn't seem to be any code creating
`Any<LoopNest>`.
2024-10-24 08:07:13 +03:00
Florian Mayer
23b18fa01e
[MTE] Do not allow local aliases to MTE globals (#106280)
With this change and appropriate linker changes
(https://r.android.com/3236256)
AOSP boots with memtag-global throughout the platform.

Without this change, we would sometimes generate PC-relative references
to tagged globals, which then do not have the proper tag.
2024-10-21 17:00:41 -07:00
goldsteinn
69a798a996
Reapply "[Inliner] Propagate more attributes to params when inlining (#91101)" (2nd Attempt) (#112749)
The root cause of the bug was code hanging onto the `range` attr after
changing BitWidth. This was fixed in PR #112633.
2024-10-17 20:28:47 -05:00