To authenticate pointers, CodeGen needs access to the key and
discriminators that were used to sign the pointer. That information is
sometimes known from the context, but not always, which is why `Address`
needs to hold that information.
This patch adds methods and data members to `Address`, which will be
needed in subsequent patches to authenticate signed pointers, and uses
the newly added methods throughout CodeGen. Although this patch isn't
strictly NFC as it causes CodeGen to use different code paths in some
cases (e.g., `mergeAddressesInConditionalExpr`), it doesn't cause any
changes in functionality as it doesn't add any information needed for
authentication.
In addition to the changes mentioned above, this patch introduces class
`RawAddress`, which contains a pointer that we know is unsigned, and
adds several new functions for creating `Address` and `LValue` objects.
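A rough sketch of the shape of these additions (the field and type names here are illustrative, not the patch's exact API):
```
#include <cstdint>
namespace llvm { class Value; class Type; }

// Illustrative only: how the pointer is signed, if at all.
struct PointerAuthInfo {
  unsigned Key = 0;
  llvm::Value *Discriminator = nullptr;
  bool isSigned() const { return Discriminator != nullptr; }
};

class Address {
  llvm::Value *Pointer = nullptr;
  llvm::Type *ElementType = nullptr;
  uint64_t AlignmentInBytes = 0;
  PointerAuthInfo PtrAuth;  // new: signing schema carried with the address

public:
  bool isSigned() const { return PtrAuth.isSigned(); }
};

// New: an address whose pointer is statically known to be unsigned.
class RawAddress {
  llvm::Value *Pointer = nullptr;
  llvm::Type *ElementType = nullptr;
  uint64_t AlignmentInBytes = 0;
};
```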
This patch inserts 1-byte counters instead of 8-byte counters into
LLVM profiles for source-based code coverage. The original idea was
proposed as block-cov for PGO, and this patch repurposes that idea for
coverage: https://groups.google.com/g/llvm-dev/c/r03Z6JoN7d4
The current 8-byte counters mechanism adds counters to minimal regions
and infers the counters in the remaining regions by adding or
subtracting counters. For example, it infers the counter in the if.else
region by subtracting the counters between if.entry and if.then regions
in an if statement. Whenever there is a control-flow merge, it adds the
counters from all the incoming regions. However, we are not going to be
able to infer counters by subtracting two execution counts when using
single-byte counters. Therefore, this patch conservatively inserts
additional counters for the cases where we need to add or subtract
counters.
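For illustration (a hypothetical example, not taken from the patch):
```
// With 8-byte counters, only the entry and "then" regions get physical
// counters; the "else" count is inferred as C0 - C1. Single-byte counters
// only record whether a region executed, so that subtraction is
// meaningless, and a dedicated counter C2 is inserted instead.
int f(int cond) {
  // counter C0: function entry
  if (cond) {
    return 1;  // counter C1: "then" region
  } else {
    return 2;  // 8-byte mode: inferred as C0 - C1; 1-byte mode: counter C2
  }
}
```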
RFC:
https://discourse.llvm.org/t/rfc-single-byte-counters-for-source-based-code-coverage/75685
Clang has a `signed-integer-overflow` sanitizer to catch arithmetic
overflow; however, most of its instrumentation [fails to
apply](https://godbolt.org/z/ee41rE8o6) when `-fwrapv` is enabled; this
is by design.
The Linux kernel enables `-fno-strict-overflow` which implies `-fwrapv`.
This means we are [currently unable to detect signed-integer
wrap-around](https://github.com/KSPP/linux/issues/26). All the while,
the root cause of many security vulnerabilities in the Linux kernel is
[arithmetic overflow](https://cwe.mitre.org/data/definitions/190.html).
To work around this and enhance the functionality of
`-fsanitize=signed-integer-overflow`, we instrument signed arithmetic
even if the signed overflow behavior is defined.
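A small example of the new behavior (the file name and flags in the comment are illustrative):
```
// Build with: clang -fwrapv -fsanitize=signed-integer-overflow wrap.c
#include <limits.h>

int add(int a, int b) {
  // Under -fwrapv this addition is defined to wrap, so it used to be
  // left uninstrumented; it is now instrumented anyway.
  return a + b;
}

int main(void) {
  return add(INT_MAX, 1) == INT_MIN ? 0 : 1;  // wrap-around is reported
}
```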
Co-authored-by: Justin Stitt <justinstitt@google.com>
HLSL supports vector truncation and element conversions as part of
standard conversion sequences. The vector truncation conversion is a C++
second conversion in the conversion sequence. If a vector truncation is
in a conversion sequence, an element conversion may occur after it,
before the standard C++ third conversion.
Vector element conversions can be boolean conversions, floating-point or
integral conversions, or promotions.
[HLSL Draft
Specification](https://microsoft.github.io/hlsl-specs/specs/hlsl.pdf)
---------
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
Instead of only handling vscale x 16 x i1 predicate vectors, handle any
scalable i1 vector whose known minimum element count is divisible by 8.
This is used on RISC-V where we have multiple sizes of predicate
types.
Testing the shift-exponent check with small-width _BitInt values exposed
a bug in ScalarExprEmitter::GetWidthMinusOneValue when using the result
to determine valid exponent sizes. False positives were reported for
some left shifts when width(LHS)-1 > range(RHS), and false negatives
were reported for right shifts when value(RHS) > range(LHS). This patch
caps the maximum value of GetWidthMinusOneValue to fit within
range(RHS), fixing the issue with left shifts; fixes a code generation
bug in EmitShr, fixing the issue with right shifts; and renames the
function to GetMaximumShiftAmount to better reflect the new behaviour.
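A hypothetical reproducer for the left-shift false positive (assumes Clang's _BitInt support and -fsanitize=shift-exponent):
```
typedef unsigned _BitInt(2) u2;  // can hold 0..3

int shl(_BitInt(37) lhs, u2 rhs) {
  // Every rhs value (0..3) is a valid exponent for a 37-bit LHS, but
  // width(LHS)-1 == 36 truncated into the 2-bit RHS type wrapped to 0,
  // so the old bound check flagged perfectly valid shifts.
  return (int)(lhs << rhs);
}
```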
Fixes #80135.
Co-authored-by: Adam Magier <adam.magier@ericsson.com>
Implements P2662R3 (pack indexing):
https://isocpp.org/files/papers/P2662R3.pdf
The feature is exposed as an extension in older language modes.
Mangling is not yet supported; that is something we will have to do before release.
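An illustrative use of the feature:
```
// Pack indexing applies to both type packs and value packs:
template <typename... Ts>
using first_t = Ts...[0];

template <class... Ts>
constexpr auto first(Ts... args) {
  return args...[0];
}

static_assert(first(1, 2, 3) == 1);
static_assert(__is_same(first_t<int, float>, int));
```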
This is a fix for MC/DC issue https://github.com/llvm/llvm-project/issues/78453, in which a ConditionalOperator that evaluates a complex condition was incorrectly updating its global bitmap after visiting its LHS and RHS children. This
was wrong because if the LHS or RHS also evaluates a complex condition, the MC/DC temporary bitmap value gets corrupted. The fix is to ensure that the bitmap is updated prior to visiting the LHS and RHS.
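A reproducer shaped like the ones in the issue (hypothetical):
```
bool f(bool a, bool b, bool c, bool d, bool e, bool g) {
  // The ?: condition and both arms are multi-term boolean expressions, so
  // each tracks its own MC/DC temporary bitmap state. Updating the
  // condition's bitmap only after visiting an arm let the arm's own
  // bookkeeping corrupt the temporary value.
  return (a && b) ? (c && d) : (e && g);
}
```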
Vectors are always bit-packed and don't respect the elements' alignment
requirements. This is different from arrays. This means offsets of
vector GEPs need to be computed differently than offsets of array GEPs.
This PR fixes many places that rely on the incorrect pattern
`DL.getTypeAllocSize(GTI.getIndexedType())`. We replace these with
usages of `GTI.getSequentialElementStride(DL)`,
which is a new helper function added in this PR.
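A minimal sketch of the pattern change (LLVM C++ API, simplified):
```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/GetElementPtrTypeIterator.h"

llvm::TypeSize elementStride(llvm::gep_type_iterator GTI,
                             const llvm::DataLayout &DL) {
  // Old pattern, wrong for bit-packed vector elements:
  //   DL.getTypeAllocSize(GTI.getIndexedType());
  // New helper: alloc size for arrays, bit-packed stride for vectors.
  return GTI.getSequentialElementStride(DL);
}
```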
This changes behavior for GEPs into vectors with element types for which
the (bit) size and alloc size differ. This includes two cases:
* Types with a bit size that is not a multiple of a byte, e.g. i1.
GEPs into such vectors are questionable to begin with, as some elements
are not even addressable.
* Overaligned types, e.g. i16 with 32-bit alignment.
Existing tests are unaffected, but a miscompilation of a new test is fixed.
---------
Co-authored-by: Nikita Popov <github@npopov.com>
When constructing vectors from elements, use poison instead of
undef as the base value. These literals always initialize all
elements (padding the remainder with zero), so that the choice
of base value does not affect semantics.
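A minimal sketch of the construction (LLVM C++ API, simplified; assumes the caller supplies one element per lane):
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

llvm::Value *buildVector(llvm::IRBuilder<> &B, llvm::FixedVectorType *VTy,
                         llvm::ArrayRef<llvm::Value *> Elts) {
  // Start from poison instead of undef; every lane is overwritten below,
  // so the base value never becomes observable.
  llvm::Value *V = llvm::PoisonValue::get(VTy);
  for (unsigned I = 0; I != Elts.size(); ++I)
    V = B.CreateInsertElement(V, Elts[I], I);
  return V;
}
```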
Update all callers to pass through the Address.
For the older builtins such as `__sync_*` and MSVC `_Interlocked*`,
natural alignment of the atomic access is _assumed_. This change
preserves that behavior. It will pass through greater-than-required
alignments, however.
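For example (user code; nothing new here, just the assumption being preserved):
```
long counter;  // naturally (8-byte) aligned on typical 64-bit targets

long bump(void) {
  // __sync_* carries no alignment information, so natural alignment of
  // the atomic access is assumed, as before.
  return __sync_fetch_and_add(&counter, 1);
}
```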
The data size is required for implementing the `memmove` optimization
for `std::copy`, `std::move`, etc. correctly, as well as for replacing
`__compressed_pair` with `[[no_unique_address]]` in libc++. Since the
compiler already knows the data size, we can avoid some complexity by
exposing that information.
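An illustration of data size versus sizeof (the builtin's spelling is not given in this message; the struct is hypothetical):
```
struct S {
  long long a;  // bytes 0..7
  char b;       // byte 8: the data ends at offset 9
};  // sizeof(S) == 16 on typical 64-bit targets because of tail padding

// A [[no_unique_address]] member of another struct may be placed into
// S's tail padding, so memmove-style copies of S must only copy the
// data size (9 bytes here), not sizeof(S).
```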
Adds a new `__builtin_vectorelements()` function which returns the
number of elements for a given vector: either at compile time for
fixed-size vectors, e.g., created via `__attribute__((vector_size(N)))`,
or at runtime via a call to `@llvm.vscale.i32()` for scalable vectors,
e.g., SVE or RISC-V V.
The new builtin follows a path similar to `sizeof()`, as it essentially
does the same thing but for the number of elements in a vector instead
of the number of bytes. This allows us to reuse a lot of the existing
logic to handle types, etc.
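Usage looks like this (the second branch assumes an SVE-enabled target):
```
typedef int v4si __attribute__((vector_size(16)));  // fixed: 4 x i32

int fixed_lanes(void) {
  return __builtin_vectorelements(v4si);  // folds to the constant 4
}

#if defined(__ARM_FEATURE_SVE)
#include <arm_sve.h>
unsigned scalable_lanes(void) {
  return __builtin_vectorelements(svint32_t);  // lowered via @llvm.vscale
}
#endif
```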
A small side addition is `Type::isSizelessVectorType()`, which we need
to distinguish between sizeless vectors (SVE, RISC-V V) and sizeless
types (WASM).
This is the [corresponding
discussion](https://discourse.llvm.org/t/new-builtin-function-to-get-number-of-lanes-in-simd-vectors/73911).
This change addresses PR 55207.
We update the volatility of the LValue by looking at the qualifier of the LHS cast operation, and propagate the RValue's volatility from the CGF data structure.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D157890
For vector * scalar + vector, we emit `fmuladd` directly from clang.
This enables it also for matrix * scalar + matrix.
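For example (uses Clang's matrix extension, -fenable-matrix):
```
typedef float m4x4 __attribute__((matrix_type(4, 4)));

m4x4 scale_add(m4x4 a, float s, m4x4 b) {
  return a * s + b;  // matrix * scalar + matrix now emits llvm.fmuladd
}
```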
rdar://113967122
Differential Revision: https://reviews.llvm.org/D158883
Since we now also have VLSTs for RVV, it is no longer clear what `isVLSTBuiltinType` refers to, so I added an SVE prefix to it.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D158045
OpenCL and HIP have -cl-fp32-correctly-rounded-divide-sqrt and
-fno-hip-correctly-rounded-divide-sqrt. The corresponding fpmath metadata
was only set on fdiv, and not on sqrt. The backend currently
underutilizes sqrt lowering options; since the responsibility is split
between the libraries and the backend, this metadata is needed.
CUDA/NVCC has -prec-div and -prec-sqrt, but clang doesn't appear to be
aiming for compatibility with those. It is unclear whether OpenMP has a
similar control.
* Add `Address::withElementType()` as a replacement for
`CGBuilderTy::CreateElementBitCast`.
* Partial progress towards replacing `CreateElementBitCast`, as it no
longer does what its name suggests. Either replace its uses with
`Address::withElementType()` (see the sketch after this list), or
remove them if they are no longer needed.
* Remove unused parameter 'Name' of `CreateElementBitCast`
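A minimal sketch of the replacement pattern (Clang's internal CodeGen API, simplified):
```
// Before: Address Tmp = Builder.CreateElementBitCast(Addr, Int8Ty);
// After: with opaque pointers no cast instruction is needed; only the
// recorded element type changes.
clang::CodeGen::Address asBytes(clang::CodeGen::Address Addr,
                                llvm::Type *Int8Ty) {
  return Addr.withElementType(Int8Ty);
}
```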
Reviewed By: barannikov88, nikic
Differential Revision: https://reviews.llvm.org/D153196
Pursuant to discussions at
https://discourse.llvm.org/t/rfc-c-23-p1467r9-extended-floating-point-types-and-standard-names/70033/22,
this commit enhances the handling of the __bf16 type in Clang.
- Firstly, it upgrades __bf16 from a storage-only type to an arithmetic
type.
- Secondly, it changes the mangling of __bf16 to DF16b on all
architectures except ARM. This change has been made in
accordance with the finalization of the mangling for the
std::bfloat16_t type, as discussed at
https://github.com/itanium-cxx-abi/cxx-abi/pull/147.
- Finally, this commit extends the existing excess precision support to
the __bf16 type. This applies to hardware architectures that do not
natively support bfloat16 arithmetic.
Appropriate tests have been added to verify the effects of these
changes and ensure no regressions in other areas of the compiler.
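For example, direct __bf16 arithmetic is now accepted (illustrative):
```
__bf16 lerp(__bf16 a, __bf16 b, __bf16 t) {
  // On targets without native bfloat16 arithmetic, excess-precision
  // support evaluates this in float and truncates back to __bf16.
  return a + (b - a) * t;
}
```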
Reviewed By: rjmccall, pengfei, zahiraam
Differential Revision: https://reviews.llvm.org/D150913
Without opaque pointers, this code determined !heapallocsite based
on the innermost cast of the allocation call. With opaque pointers,
the casts no longer generate an instruction, so the outermost cast
is used. Add an explicit check for nested casts to prevent this.
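A hypothetical example of the nested-cast situation (alloc is a stand-in allocation function):
```
struct Widget { int x; };

void *alloc(unsigned long n);  // stand-in allocation function

Widget *make(void) {
  // With opaque pointers, both casts fold away in IR, so the outermost
  // cast used to determine !heapallocsite; the explicit nested-cast
  // check restores the pre-opaque-pointer behavior.
  return (Widget *)(void *)alloc(sizeof(Widget));
}
```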
Differential Revision: https://reviews.llvm.org/D145788
Allows us to handle expressions like -(a * b) + c
Based on the examples from D144366 that gcc seems to get.
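The shape in question (illustrative; contraction still follows the usual FP-contract rules):
```
double neg_mul_add(double a, double b, double c) {
  return -(a * b) + c;  // can now form fmuladd(-a, b, c)
}
```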
Reviewed By: kpn
Differential Revision: https://reviews.llvm.org/D144447
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
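The shape of the rewrite (illustrative; llvm::Optional already accepts std::nullopt, so the replacement is behavior-preserving):
```
#include "llvm/ADT/Optional.h"
#include <optional>

llvm::Optional<int> maybeGet(bool ok) {
  if (!ok)
    return std::nullopt;  // was: return None;
  return 42;
}
```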
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Clang language-level address spaces and LLVM pointer address spaces are
not the same thing (even though they will both have a numeric value of
zero in many cases). LangAS is an enum class to avoid implicit conversions,
but eba69b59d1a30dead07da2c279c8ecfd2b62ba9f avoided the compiler error by
adding a `static_cast<>`. While touching this code, simplify it by using
CreatePointerBitCastOrAddrSpaceCast() which is already a no-op if the types
match.
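A minimal sketch of the simplified form (LLVM C++ API):
```
#include "llvm/IR/IRBuilder.h"

llvm::Value *castPointer(llvm::IRBuilderBase &B, llvm::Value *V,
                         llvm::PointerType *DestTy) {
  // No-op when the types already match; otherwise emits a bitcast or an
  // addrspacecast as appropriate.
  return B.CreatePointerBitCastOrAddrSpaceCast(V, DestTy);
}
```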
This changes the code generation for spir64 to place the globals in
the sycl_global address space, which maps to `addrspace(1)`.
Reviewed By: bader
Differential Revision: https://reviews.llvm.org/D138284
A common postcondition of the various visitor functions in CodeGen is that instructions that do not return any values simply return a nullptr Value as a sentinel. This has not been the case, however, for calls to some builtins returning void, as well as for an initializer expression of the form `void()`. This would then lead to ICEs in CodeGen on code relying on nullptr being returned for void values, which is, e.g., the case for conditional expressions [0].
This patch fixes that by returning nullptr Values for intrinsics known not to return any values as well as for a scalar initializer returning void.
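A minimal reproducer in the spirit of the linked issue (hypothetical):
```
void g();

void f(bool b) {
  // Both arms have type void; the void() arm used to produce a non-null
  // value, tripping the nullptr postcondition in the ?: handling.
  b ? g() : void();
}
```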
Fixes https://github.com/llvm/llvm-project/issues/53127
[0] 266ec801fb/clang/lib/CodeGen/CGExprScalar.cpp (L4849-L4892)
Differential Revision: https://reviews.llvm.org/D136548
`short` will be promoted to `int` in UsualUnaryConversions.
Disable this promotion for HLSL to keep int16_t 16 bits wide.
Reviewed By: aaron.ballman, rjmccall
Differential Revision: https://reviews.llvm.org/D133668
Seeing the wrong instruction for this name in IR is confusing.
Most of the tests are not even checking a subsequent use of
the value, so I just deleted the over-specified CHECKs.
Without this patch, clang does not wrap types written without a keyword
or nested name qualifier in an ElaboratedType node, which goes against
the intent that we should produce an AST which retains enough details to
recover how things are written.
The lack of this sugar is incompatible with the intent of the type printer
default policy, which is to print types as written, but to fall back and print
them fully qualified when they are desugared.
An ElaboratedTypeLoc without keyword / NNS uses no storage by itself, but still
requires pointer alignment due to a pre-existing bug in the TypeLoc buffer
handling.
---
Troubleshooting list to deal with any breakage seen with this patch:
1) The most likely effect one would see from this patch is a change in how
a type is printed. The type printer will, by design and default,
print types as written. There are customization options there, but
not that many, and they mainly apply to how to print a type when we
somehow failed to track how it was written. This patch fixes a
problem where we failed to distinguish between a type
that was written without any elaborated-type qualifiers,
such as 'struct'/'class' tags and namespace specifiers such as 'std::',
and one that has been stripped of any 'metadata' that identifies such:
the so-called canonical types.
Example:
```
namespace foo {
struct A {};
A a;
};
```
If one were to print the type of `foo::a`, prior to this patch, this
would result in `foo::A`. This is how the type printer would have,
by default, printed the canonical type of A as well.
As soon as you add any name qualifiers to A, the type printer would
suddenly start accurately printing the type as written. This patch
will make it print it accurately even when written without
qualifiers, so we will just print `A` for the initial example, as
the user did not really write that `foo::` namespace qualifier.
2) This patch could expose a bug in some AST matcher. Matching types
is harder to get right when there is sugar involved. For example,
if you want to match a type against being a pointer to some type A,
then you have to account for getting a type that is sugar for a
pointer to A, or being a pointer to sugar to A, or both! Usually
you would get the second part wrong, and this would work for a
very simple test where you don't use any name qualifiers, but
you would discover it is broken when you do. The usual fix is to
either use the matcher which strips sugar, which is annoying
to use: for example, if you match an N-level pointer, you have
to put N+1 such matchers in there, at the beginning, the end, and
between all those levels. But in a lot of cases, if the property you
want to match is present in the canonical type, it's easier and faster
to just match on that... This goes with what is said in 1): if
you want to match against the name of a type, and you want
the name string to be something stable, perhaps matching on
the name of the canonical type is the better choice.
3) This patch could expose a bug in how you get the source range of some
TypeLoc. For some reason, a lot of code is using getLocalSourceRange(),
which only looks at the given TypeLoc node. This patch introduces a new
and more common TypeLoc node which contains no source locations itself.
This is not an innovation here, and some other, rarer TypeLoc nodes could
also have this property, but if you use getLocalSourceRange on them, it's not
going to return any valid locations, because it doesn't have any. The right fix
here is to always use getSourceRange() or getBeginLoc/getEndLoc which will dive
into the inner TypeLoc to get the source range if it doesn't find it on the
top level one. You can use getLocalSourceRange if you are really into
micro-optimizations and you have some outside knowledge that the TypeLocs you are
dealing with will always include some source location.
4) This patch could expose a bug in the use of the normal clang type class API: you
have some type and want to see if it is some particular kind, so you try a
`dyn_cast` such as `dyn_cast<TypedefType>`, and that fails because now you have an
ElaboratedType with a TypedefType inside of it, which is what you wanted to match.
Again, like 2), this would usually have been tested poorly with some simple tests with
no qualifications, and would have been broken had there been any other kind of type sugar,
be it an ElaboratedType, a TemplateSpecializationType, or a SubstTemplateTypeParmType.
The usual fix here is to use `getAs` instead of `dyn_cast`, which will look deeper
into the type (see the sketch after this list), or use `getAsAdjusted` when dealing with TypeLocs.
For some reason the API is inconsistent there, and on TypeLocs getAs behaves like a dyn_cast.
5) It could be a bug in this patch perhaps.
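A sketch for item 4 (illustrative):
```
#include "clang/AST/Type.h"

bool isTypedef(clang::QualType T) {
  // dyn_cast<TypedefType>(T.getTypePtr()) fails when T is an
  // ElaboratedType wrapping the TypedefType; getAs looks through sugar.
  return T->getAs<clang::TypedefType>() != nullptr;
}
```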
Let me know if you need any help!
Signed-off-by: Matheus Izvekov <mizvekov@gmail.com>
Differential Revision: https://reviews.llvm.org/D112374