To authenticate pointers, CodeGen needs access to the key and
discriminators that were used to sign the pointer. That information is
sometimes known from the context, but not always, which is why `Address`
needs to carry it.
This patch adds methods and data members to `Address`, which will be
needed in subsequent patches to authenticate signed pointers, and uses
the newly added methods throughout CodeGen. Although this patch isn't
strictly NFC as it causes CodeGen to use different code paths in some
cases (e.g., `mergeAddressesInConditionalExpr`), it doesn't cause any
changes in functionality as it doesn't add any information needed for
authentication.
In addition to the changes mentioned above, this patch introduces class
`RawAddress`, which contains a pointer that we know is unsigned, and
adds several new functions for creating `Address` and `LValue` objects.
This reapplies 8bd1f9116aab879183f34707e6d21c7051d083b6. The commit
broke msan bots because LValue::IsKnownNonNull was uninitialized.
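As a rough sketch of the direction (hypothetical member and method names, not the exact declarations from the patch), the extra state on `Address` might look like:

```cpp
// Hypothetical sketch: Address carries the signing schema next to the
// pointer, so later loads and stores can authenticate it without
// re-deriving the key and discriminator from context.
class Address {
  llvm::Value *Pointer;
  llvm::Type *ElementType;
  CharUnits Alignment;
  // Key and discriminator used to sign Pointer; empty when the pointer
  // is known to be unsigned (cf. the new RawAddress class).
  CGPointerAuthInfo PtrAuthInfo;

public:
  bool isSigned() const { return PtrAuthInfo.isSigned(); }
  // Authenticates (strips the signature from) the pointer if needed.
  llvm::Value *emitRawPointer(CodeGenFunction &CGF) const;
};
```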
This patch inserts 1-byte counters instead of 8-byte counters into
LLVM profiles for source-based code coverage. The original idea was
proposed as block-cov for PGO, and this patch repurposes that idea for
coverage: https://groups.google.com/g/llvm-dev/c/r03Z6JoN7d4
The current 8-byte counter mechanism adds counters to minimal regions
and infers the counters in the remaining regions by adding or
subtracting counters. For example, it infers the counter of the if.else
region of an if statement by subtracting the if.then counter from the
if.entry counter. Whenever there is a control-flow merge, it adds the
counters from all the incoming regions. However, we cannot infer
counters by subtracting two execution counts when using single-byte
counters. Therefore, this patch conservatively inserts additional
counters for the cases where we would otherwise need to add or subtract
counters.
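For illustration, a hedged sketch of the inference scheme on an if statement (not actual instrumentation output):

```cpp
void example(bool cond) {
  if (cond) {  // if.entry: counter C0
    // if.then: counter C1
  } else {
    // if.else: with 8-byte counters this count is inferred as C0 - C1.
    // Single-byte counters only record "executed or not", so C0 - C1 is
    // meaningless and this region now gets its own counter.
  }
}
```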
RFC:
https://discourse.llvm.org/t/rfc-single-byte-counters-for-source-based-code-coverage/75685
Clang has a `signed-integer-overflow` sanitizer to catch arithmetic
overflow; however, most of its instrumentation [fails to
apply](https://godbolt.org/z/ee41rE8o6) when `-fwrapv` is enabled; this
is by design.
The Linux kernel enables `-fno-strict-overflow` which implies `-fwrapv`.
This means we are [currently unable to detect signed-integer
wrap-around](https://github.com/KSPP/linux/issues/26). All the while,
the root cause of many security vulnerabilities in the Linux kernel is
[arithmetic overflow](https://cwe.mitre.org/data/definitions/190.html).
To work around this and enhance the functionality of
`-fsanitize=signed-integer-overflow`, we instrument signed arithmetic
even if the signed overflow behavior is defined.
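For example (a hedged sketch; exact diagnostics vary):

```cpp
// Built with -fwrapv -fsanitize=signed-integer-overflow: the addition
// wraps instead of being undefined behavior, so it was previously left
// uninstrumented; with this change the wrap-around is still reported.
int increment(int x) {
  return x + 1; // increment(INT_MAX) wraps to INT_MIN
}
```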
Co-authored-by: Justin Stitt <justinstitt@google.com>
HLSL supports vector truncation and element conversions as part of
standard conversion sequences. The vector truncation is the second
conversion in the C++ standard conversion sequence. If a vector
truncation is present in a conversion sequence, an element conversion
may occur after it, before the standard C++ third conversion.
Vector element conversions can be boolean conversions, floating-point
or integral conversions, or floating-point or integral promotions.
[HLSL Draft
Specification](https://microsoft.github.io/hlsl-specs/specs/hlsl.pdf)
---------
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
Instead of only handling `vscale x 16 x i1` predicate vectors, handle
any scalable i1 vector whose known minimum element count is divisible
by 8. This is needed on RISC-V, where predicate types come in multiple
sizes.
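A hedged sketch of the relaxed check (simplified; hypothetical helper name):

```cpp
#include "llvm/IR/DerivedTypes.h"

// Accept any scalable i1 vector whose minimum lane count is a multiple
// of 8, rather than only <vscale x 16 x i1>.
bool canHandlePredicate(llvm::VectorType *VT) {
  return llvm::isa<llvm::ScalableVectorType>(VT) &&
         VT->getElementType()->isIntegerTy(1) &&
         VT->getElementCount().getKnownMinValue() % 8 == 0;
}
```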
Testing the shift-exponent check with small-width _BitInt values
exposed a bug in ScalarExprEmitter::GetWidthMinusOneValue when using the
result to determine valid exponent sizes. False positives were reported
for some left shifts when width(LHS)-1 > range(RHS), and false negatives
were reported for right shifts when value(RHS) > range(LHS). This patch
caps the maximum value of GetWidthMinusOneValue so that it fits within
range(RHS), fixing the issue with left shifts; fixes a code generation
error in EmitShr, fixing the issue with right shifts; and renames the
function to GetMaximumShiftAmount to better reflect the new behaviour.
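A hedged reconstruction of the two failure modes (not the exact reproducers from the issue):

```cpp
// Clang accepts _BitInt in C++ as an extension; build with
// -fsanitize=shift-exponent.
void shifts(unsigned _BitInt(37) wide, unsigned _BitInt(2) narrow) {
  (void)(wide << narrow); // width(wide)-1 == 36 is not representable in
                          // 2 bits, so the old bound wrapped and valid
                          // exponents were flagged (false positive).
  (void)(narrow >> 3);    // exponent exceeds width(narrow)-1 == 1, but
                          // the old EmitShr check could miss it
                          // (false negative).
}
```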
Fixes #80135.
Co-authored-by: Adam Magier <adam.magier@ericsson.com>
Implements https://isocpp.org/files/papers/P2662R3.pdf
The feature is exposed as an extension in older language modes.
Mangling is not yet supported; that is something we will have to do before release.
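For reference, the pack-indexing syntax from the paper (a minimal sketch; needs the extension enabled in older language modes):

```cpp
template <typename... Ts>
using First = Ts...[0]; // index into a type pack

template <auto... Vs>
constexpr auto Second = Vs...[1]; // index into a value pack

static_assert(Second<1, 2, 3> == 2);
static_assert(__is_same(First<int, float>, int));
```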
This is a fix for MC/DC issue https://github.com/llvm/llvm-project/issues/78453, in which a ConditionalOperator that evaluates a complex condition was incorrectly updating its global bitmap after visiting its LHS and RHS children. This was wrong because if the LHS or RHS also evaluates a complex condition, the MC/DC temporary bitmap value gets corrupted. The fix is to ensure that the bitmap is updated prior to visiting the LHS and RHS.
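A hedged illustration of the problematic shape (not the exact reproducer from the issue):

```cpp
// The condition, true-arm, and false-arm are each MC/DC decisions, so
// updating the global bitmap only after visiting the arms corrupted the
// temporary bitmap value of the nested decisions.
bool select(bool a, bool b, bool c, bool d, bool e, bool f) {
  return (a && b) ? (c || d) : (e && f);
}
```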
Vectors are always bit-packed and don't respect the elements' alignment
requirements. This is different from arrays. This means offsets of
vector GEPs need to be computed differently than offsets of array GEPs.
This PR fixes many places that relied on an incorrect pattern,
computing the offset with `DL.getTypeAllocSize(GTI.getIndexedType())`.
We replace these with uses of `GTI.getSequentialElementStride(DL)`,
which is a new helper function added in this PR.
This changes behavior for GEPs into vectors with element types whose
(bit) size and alloc size differ. This includes two cases:
* Types with a bit size that is not a multiple of a byte, e.g. i1.
GEPs into such vectors are questionable to begin with, as some elements
are not even addressable.
* Overaligned types, e.g. i16 with 32-bit alignment.
Existing tests are unaffected, but a miscompilation of a new test is fixed.
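A hedged sketch of the corrected offset computation (simplified; the real call sites vary):

```cpp
#include <cstdint>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/GetElementPtrTypeIterator.h"
#include "llvm/IR/Operator.h"

uint64_t byteOffset(const llvm::DataLayout &DL, const llvm::GEPOperator *GEP,
                    llvm::ArrayRef<uint64_t> Indices) {
  uint64_t Offset = 0;
  unsigned I = 0;
  for (auto GTI = llvm::gep_type_begin(GEP), E = llvm::gep_type_end(GEP);
       GTI != E; ++GTI, ++I) {
    if (llvm::StructType *STy = GTI.getStructTypeOrNull()) {
      Offset += DL.getStructLayout(STy)->getElementOffset(Indices[I]);
    } else {
      // Previously: DL.getTypeAllocSize(GTI.getIndexedType()), which
      // overestimates the stride for bit-packed vector elements.
      Offset +=
          Indices[I] * GTI.getSequentialElementStride(DL).getFixedValue();
    }
  }
  return Offset;
}
```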
---------
Co-authored-by: Nikita Popov <github@npopov.com>
When constructing vectors from elements, use poison instead of
undef as the base value. These literals always initialize all
elements (padding the remainder with zero), so that the choice
of base value does not affect semantics.
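A hedged sketch of the pattern (simplified from the CodeGen logic):

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"

llvm::Value *buildVector(llvm::IRBuilderBase &Builder,
                         llvm::FixedVectorType *VTy,
                         llvm::ArrayRef<llvm::Value *> Elts) {
  // Start from poison rather than undef; every lane is written below
  // (the real code zero-fills any remaining lanes), so the base value
  // never leaks into the result.
  llvm::Value *V = llvm::PoisonValue::get(VTy);
  for (unsigned I = 0; I != Elts.size(); ++I)
    V = Builder.CreateInsertElement(V, Elts[I], Builder.getInt32(I));
  return V;
}
```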
Update all callers to pass through the Address.
For the older builtins such as `__sync_*` and MSVC `_Interlocked*`,
natural alignment of the atomic access is _assumed_. This change
preserves that behavior. It will pass through greater-than-required
alignments, however.
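For example (hedged):

```cpp
// __sync_* builtins carry no alignment argument, so the access is
// assumed to be naturally aligned (alignof(long) here); an object with
// greater-than-required alignment is simply passed through.
long counter;

void bump(void) { __sync_fetch_and_add(&counter, 1); }
```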
The data size is required for implementing the `memmove` optimization
for `std::copy`, `std::move` etc. correctly as well as replacing
`__compressed_pair` with `[[no_unique_address]]` in libc++. Since the
compiler already knows the data size, we can avoid some complexity by
exposing that information.
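As a hedged sketch — the message above does not name the builtin, so the `__datasizeof` spelling here is an assumption:

```cpp
// Assuming the builtin is spelled __datasizeof; values shown for a
// typical 64-bit target where int is 4 bytes.
struct S {
  int i;  // bytes 0..3
  char c; // byte 4; bytes 5..7 are tail padding
};

static_assert(sizeof(S) == 8);
static_assert(__datasizeof(S) == 5); // data size excludes tail padding
```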
Adds a new `__builtin_vectorelements()` function which returns the
number of elements for a given vector either at compile-time for
fixed-sized vectors, e.g., created via `__attribute__((vector_size(N)))`
or at runtime via a call to `@llvm.vscale.i32()` for scalable vectors,
e.g., SVE or RISCV V.
The new builtin follows a similar path as `sizeof()`, as it essentially
does the same thing but for the number of elements in a vector instead
of the number of bytes. This allows us to reuse a lot of the existing
logic to handle types etc.
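For example:

```cpp
typedef int int4 __attribute__((vector_size(16)));

// Fixed-size vector: folds to a constant, just like sizeof().
static_assert(__builtin_vectorelements(int4) == 4);

// For a scalable vector type (e.g. an SVE svint32_t), the builtin
// instead emits a runtime @llvm.vscale.i32() call to compute the count.
```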
A small side addition is `Type::isSizelessVectorType()`, which we need
to distinguish between sizeless vectors (SVE, RISCV V) and sizeless
types (WASM).
This is the [corresponding
discussion](https://discourse.llvm.org/t/new-builtin-function-to-get-number-of-lanes-in-simd-vectors/73911).
This change addresses PR 55207.
We update the volatility on the LValue by looking at the LHS cast
operation's qualifier, and propagate the RValue's volatile-ness from the
CGF data structure.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D157890
For vector * scalar + vector, we emit `fmuladd` directly from clang.
This patch enables it for matrix * scalar + matrix as well.
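For example (requires -fenable-matrix):

```cpp
typedef float m4x4 __attribute__((matrix_type(4, 4)));

m4x4 muladd(m4x4 a, m4x4 b, float s) {
  return a * s + b; // now emitted via llvm.fmuladd, matching the
                    // existing vector * scalar + vector handling
}
```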
rdar://113967122
Differential Revision: https://reviews.llvm.org/D158883
Since we now also have VLSTs for RVV, keeping the name `isVLSTBuiltinType` is ambiguous, so I added an SVE prefix to it.
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D158045
OpenCL and HIP have -cl-fp32-correctly-rounded-divide-sqrt and
-fno-hip-correctly-rounded-divide-sqrt. The corresponding fpmath metadata
was only set on fdiv, and not on sqrt. The backend currently
underutilizes sqrt lowering options; since the responsibility is split
between the libraries and the backend, this metadata is needed.
CUDA/NVCC has -prec-div and -prec-sqrt, but clang doesn't appear to be
aiming for compatibility with those. It is unclear whether OpenMP has a
similar control.
* Add `Address::withElementType()` as a replacement for
`CGBuilderTy::CreateElementBitCast`.
* Partial progress towards replacing `CreateElementBitCast`, as it no
longer does what its name suggests. Either replace its uses with
`Address::withElementType()`, or remove them if no longer needed.
* Remove the unused parameter 'Name' of `CreateElementBitCast`.
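A hedged sketch of the replacement described above:

```cpp
// Simplified illustration; with opaque pointers the pointer value never
// changes, so only the element type needs to be rebuilt.
clang::CodeGen::Address coerce(clang::CodeGen::Address Addr,
                               llvm::Type *NewTy) {
  // Previously: Builder.CreateElementBitCast(Addr, NewTy)
  return Addr.withElementType(NewTy);
}
```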
Reviewed By: barannikov88, nikic
Differential Revision: https://reviews.llvm.org/D153196
Pursuant to discussions at
https://discourse.llvm.org/t/rfc-c-23-p1467r9-extended-floating-point-types-and-standard-names/70033/22,
this commit enhances the handling of the __bf16 type in Clang.
- Firstly, it upgrades __bf16 from a storage-only type to an arithmetic
type.
- Secondly, it changes the mangling of __bf16 to DF16b on all
architectures except ARM. This change has been made in
accordance with the finalization of the mangling for the
std::bfloat16_t type, as discussed at
https://github.com/itanium-cxx-abi/cxx-abi/pull/147.
- Finally, this commit extends the existing excess precision support to
the __bf16 type. This applies to hardware architectures that do not
natively support bfloat16 arithmetic.
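For example:

```cpp
// __bf16 is now a full arithmetic type. On targets without native
// bfloat16 arithmetic, excess-precision support may evaluate the sum in
// float and truncate the result back to __bf16.
__bf16 add(__bf16 a, __bf16 b) {
  return a + b; // previously rejected: __bf16 was storage-only
}
```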
Appropriate tests have been added to verify the effects of these
changes and ensure no regressions in other areas of the compiler.
Reviewed By: rjmccall, pengfei, zahiraam
Differential Revision: https://reviews.llvm.org/D150913
Without opaque pointers, this code determined !heapallocsite based
on the innermost cast of the allocation call. With opaque pointers,
the casts no longer generate an instruction, so the outermost cast
is used. Add an explicit check for nested casts to prevent this.
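A hedged illustration of the nested-cast shape (not an exact reproducer):

```cpp
#include <stdlib.h>

struct Foo { int x; };

// With opaque pointers the casts no longer generate IR instructions, so
// the outermost cast would have been used for !heapallocsite here; the
// explicit nested-cast check prevents that.
struct Foo *make(void) {
  return (struct Foo *)(void *)malloc(sizeof(struct Foo));
}
```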
Differential Revision: https://reviews.llvm.org/D145788
Allows us to handle expressions like -(a * b) + c.
Based on the examples from D144366 that gcc seems to get.
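For example (hedged; with FP contraction enabled):

```cpp
float negated_muladd(float a, float b, float c) {
  return -(a * b) + c; // can now be handled, e.g. contracted into an
                       // fmuladd with a negated multiplicand
}
```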
Reviewed By: kpn
Differential Revision: https://reviews.llvm.org/D144447
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
Clang language-level address spaces and LLVM pointer address spaces are
not the same thing (even though they will both have a numeric value of
zero in many cases). LangAS is an enum class to avoid implicit conversions,
but eba69b59d1a30dead07da2c279c8ecfd2b62ba9f avoided the compiler error by
adding a `static_cast<>`. While touching this code, simplify it by using
CreatePointerBitCastOrAddrSpaceCast() which is already a no-op if the types
match.
This changes the code generation for spir64 to place the globals in
the sycl_global address space, which maps to `addrspace(1)`.
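A hedged sketch of the distinction:

```cpp
#include "clang/AST/ASTContext.h"
#include "clang/Basic/AddressSpaces.h"

// LangAS is the source-level (Clang) address space; the target maps it
// to an LLVM IR address-space number, e.g. sycl_global -> 1 on spir64.
unsigned targetAS(const clang::ASTContext &Ctx) {
  return Ctx.getTargetAddressSpace(clang::LangAS::sycl_global);
}
```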
Reviewed By: bader
Differential Revision: https://reviews.llvm.org/D138284
A common post-condition of the various visitor functions in CodeGen is that instructions that do not return any values simply return a nullptr Value as a sentinel. This has not been the case, however, for calls to some builtins returning void, as well as for an initializer expression of the form `void()`. This would then lead to ICEs in CodeGen on code relying on nullptr being returned for void values, which is e.g. the case for conditional expressions [0].
This patch fixes that by returning nullptr Values for intrinsics known not to return any values as well as for a scalar initializer returning void.
Fixes https://github.com/llvm/llvm-project/issues/53127
[0] 266ec801fb/clang/lib/CodeGen/CGExprScalar.cpp (L4849-L4892)
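A hedged reconstruction of the failing shape (not the exact reproducer):

```cpp
// Both arms are void; the conditional-expression lowering expects each
// visitor call to yield nullptr for void values.
void touch(bool c, int *p) {
  c ? __builtin_prefetch(p) : void();
}
```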
Differential Revision: https://reviews.llvm.org/D136548
`short` is promoted to `int` in UsualUnaryConversions.
Disable this promotion for HLSL to keep int16_t as 16-bit.
Reviewed By: aaron.ballman, rjmccall
Differential Revision: https://reviews.llvm.org/D133668
Seeing the wrong instruction for this name in IR is confusing.
Most of the tests are not even checking a subsequent use of
the value, so I just deleted the over-specified CHECKs.