The test for utimes added in #134167 might fail if the file for one test
hasn't been cleaned up by the OS before the second test starts. This
patch makes the tests use different files.
Whether the SDK supports builtin modules is a property of the SDK
itself, and really has nothing to do with the target. This was already
worked around for Mac Catalyst, but there are other, more esoteric
target-to-SDK mappings that aren't handled. Have the SDK
parse its OS out of CanonicalName and use that instead of the target to
determine if builtin modules are supported.
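As a rough sketch of the idea (the helper name and the prefix list are assumptions for illustration, not the actual Clang code), the OS is derived from the SDK's CanonicalName, e.g. "macosx15.0", rather than from the target triple:
```cpp
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"

// Hypothetical sketch: parse the OS out of an SDK CanonicalName such as
// "macosx15.0" by stripping the trailing version, then decide builtin
// module support from the SDK's OS alone. The prefix set shown here is
// illustrative, not exhaustive.
static bool sdkSupportsBuiltinModules(llvm::StringRef CanonicalName) {
  llvm::StringRef OS =
      CanonicalName.take_while([](char C) { return !llvm::isDigit(C); });
  return OS == "macosx" || OS == "iphoneos" || OS == "iphonesimulator";
}
```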
PTX source files are expected to only contain ASCII text
(https://docs.nvidia.com/cuda/parallel-thread-execution/#source-format) and no null terminators.
`ptxas` has so far not enforced this but is moving towards doing so.
This revealed a problem where the null terminator was being printed into
the output file on the MLIR path when emitting PTX directly. Only add the null terminator on the assembly output path for JIT, instead of in the output of `moduleToObject`.
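A minimal sketch of the intended split (function names here are assumptions, not the actual MLIR code): the serialized PTX stays free of the terminator when written to a file, and the `'\0'` is appended only on the path handed to the JIT:
```cpp
#include <string>

// Stand-in for the real serializer; returns PTX text with no trailing '\0'.
static std::string serializeModuleToPtx() {
  return ".version 8.0\n.target sm_90\n";
}

// File output path: ptxas expects pure ASCII with no null terminator.
std::string getPtxForOutputFile() { return serializeModuleToPtx(); }

// JIT path: the consumer treats the buffer as a C string, so append '\0'.
std::string getPtxForJit() {
  std::string Ptx = serializeModuleToPtx();
  Ptx.push_back('\0');
  return Ptx;
}
```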
debugserver isn't saving and restoring the SVE/SME register state around
inferior function calls.
Making arbitrary function calls while in Streaming SVE mode is generally
a poor idea because a NEON instruction may be hit and crash the
expression execution (which is how I missed this), but such calls should
be handled correctly if the user knows they are safe to make.
Re-landing this change after fixing an incorrect behavior on systems
without SME support.
rdar://146886210
We should have had a release note in LLVM 20 about implementing P2165R4
since that is technically an ABI and API break for zip_view. We don't
expect anyone to actually hit the ABI issue, but we've come across some
(fairly small) breakage due to the API change, so this should at least
be mentioned in the release notes.
This PR implements the nonstandard intrinsic `time`.
In addition to running the unit tests, I also double checked that the
example code works by manually compiling and running it.
Resolves #99221
Key points: for the SPIR-V backend, it decomposes into a `dot` followed
by an `add`. A reference sketch of the semantics follows the checklist
below.
- [x] Implement dot2add clang builtin,
- [x] Link dot2add clang builtin with hlsl_intrinsics.h
- [x] Add sema checks for dot2add to CheckHLSLBuiltinFunctionCall in
SemaHLSL.cpp
- [x] Add codegen for dot2add to EmitHLSLBuiltinExpr in CGBuiltin.cpp
- [x] Add codegen tests to clang/test/CodeGenHLSL/builtins/dot2add.hlsl
- [x] Add sema tests to clang/test/SemaHLSL/BuiltIns/dot2add-errors.hlsl
- [x] Create the int_dx_dot2add intrinsic in IntrinsicsDirectX.td
- [x] Create the DXILOpMapping of int_dx_dot2add to 162 in DXIL.td
- [x] Create the dot2add.ll and dot2add_errors.ll tests in
llvm/test/CodeGen/DirectX/
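As a reference for what the builtin computes (a hedged C++ model of the semantics, not the actual codegen), the SPIR-V decomposition into `dot` + `add` evaluates a two-element dot product accumulated into the third operand:
```cpp
// Hedged reference model of dot2add semantics: dot2add(a, b, acc) is
// dot(a, b) + acc for two-element vectors. Half2 is a stand-in type.
struct Half2 { float X, Y; };

float dot2add(Half2 A, Half2 B, float Acc) {
  float Dot = A.X * B.X + A.Y * B.Y; // the `dot` step
  return Dot + Acc;                  // the `add` step
}
```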
After 3801bf6164f570a145e3ebd20cf9114782ae0329, SPIRVAnalysis needs to
include SPIRV.h provided by SPIRVCodegen, but the CodeGen target already
depends on Analysis, so that would cause a circular dependency.
Analysis is a subdirectory of CodeGen, so it makes sense as part of the
main CodeGen target too.
This PR adds the intrinsic `unlink` to flang.
## Test plan
- Added two codegen unit tests and ensured flang-check continues to
pass.
- Manually compiled and ran the example from the documentation.
The initial upstreaming of unary operations left promoted types
unhandled for the unary plus and minus operators. This change implements
support for promoted types and performs a bit of related code cleanup.
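For context, the promotion involved is standard integral promotion: unary `+` and `-` promote operands narrower than `int`, so the result type changes. A small C++ illustration:
```cpp
#include <type_traits>

void promotedUnaryOperands() {
  short S = 1;
  // Unary plus and minus apply integral promotion: the result type of
  // +S and -S is int, not short.
  static_assert(std::is_same_v<decltype(+S), int>);
  static_assert(std::is_same_v<decltype(-S), int>);
}
```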
The ClangIR upstreaming project needs the same logic for
hasBooleanRepresentation() that is currently implemented in the standard
clang codegen. In order to share this code, this change moves the
implementation of this function into the AST Type class.
No functional change is intended by this change. The ClangIR use of this
function will be added separately in a later change.
This PR makes verification of the .debug_names acceleration table
multithreaded. In local testing it improves verification of clang's
.debug_names from four minutes to under a minute.
This PR relies on the current mechanism of extracting DIEs into a vector.
Future improvements can include creating an API to extract one DIE at a
time, or grouping Entries into buckets by CU and extracting them before
the parallel step.
|  | real | user | sys | amem | mmem |
|---|---|---|---|---|---|
| Single thread | 4:12.37 | 246.88 | 3.54 | 0 | 10232004 |
| Multi thread | 0:49.40 | 612.84 | 515.73 | 0 | 11226292 |
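For reference, the threading shape can be expressed with LLVM's existing parallel utilities; a hedged sketch (the entry type and verification callback are placeholders, not the actual verifier code):
```cpp
#include "llvm/Support/Parallel.h"
#include <vector>

// Placeholder for the data extracted per DIE; the real verifier's
// structures differ.
struct Entry { unsigned DieOffset = 0; };

// Stand-in verification callback.
static bool verifyEntry(const Entry &E) { return E.DieOffset != 0; }

void verifyAllEntries(const std::vector<Entry> &Entries) {
  // Each entry is verified independently, so the loop parallelizes safely.
  llvm::parallelForEach(Entries.begin(), Entries.end(),
                        [](const Entry &E) { (void)verifyEntry(E); });
}
```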
When you run lldb without colors (`-X`), the status line looks weird
because it doesn't have a background. You end up with what appears to be
floating text at the bottom of your terminal.
This patch changes the statusline to use the reverse video effect, even
when colors are off. The effect doesn't introduce any new colors and
just inverts the foreground and background color.
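Reverse video is a plain SGR attribute (code 7) that swaps the current foreground and background without naming any color; code 27 turns it off. A minimal sketch:
```cpp
#include <cstdio>

int main() {
  // SGR 7 enables reverse video; SGR 27 disables it. No color codes are
  // emitted, so this works even when colors are turned off.
  std::printf("\x1b[7m statusline text \x1b[27m\n");
  return 0;
}
```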
I considered an alternative approach which changes the behavior of the
`-X` option, so that turning off colors doesn't prevent emitting
non-color related control characters such as bold, underline, and
reverse video. I decided to go with this more targeted fix as (1) nobody
is asking for this more general change and (2) it introduces significant
complexity to plumb this through using a setting and driver flag so that
it can be disabled when running the tests.
Fixes #134112.
If ScalarPH has predecessors, we may need to update its reduction resume
values. If there is a middle block among the predecessors, it must be the
first one. Note that the first predecessor may not be the middle block,
if the middle block doesn't branch to the scalar preheader. In that case,
fixReductionScalarResumeWhenVectorizingEpilog will be a no-op.
In preparation for https://github.com/llvm/llvm-project/pull/106748.
Block a context root from being imported by its callers.
Suppose that happened: its caller (usually a message pump) would inline its copy of the root, and then the root and whatever it calls would be the non-contextually optimized callee versions.
## Problem
When the build ids of the profile and binary do not match, the error
reported by llvm-profdata is `no entries in callstack map after
symbolization`, but the root cause of this problem is the **build id
mismatch**.
## Trigger scenario
For example, when performing `memprof` optimization on `clang`, the raw
profile is collected through `ninja clang`. In addition to running
clang, some other programs are also executed, and these programs
generate raw profiles as well. When `no entries in callstack map after
symbolization` appears during `llvm-profdata merge`, users may
mistakenly attribute it to **instrumentation failure or other causes**,
and will **not immediately realize that the binary and profile do not
match**.
## Changed
Currently, a build id mismatch triggers an assertion, which only fires
in debug builds. Change it to directly return an error when the build id
does not match.
The string used for the intrinsic, "llvm.nvvm.match.any.sync.i32p", was
not the correct one: there was an extra `p` at the end.
Use the NVVM operation instead so we don't duplicate it.
This introduces a new class `UnsignedOrNone`, which models a lightweight
version of `std::optional<unsigned>` but has the same size as
`unsigned`.
This replaces most uses of `std::optional<unsigned>`, as well as similar
schemes using `int` with `-1` as a sentinel.
Besides the smaller size advantage, this is simpler to serialize, as its
internal representation is a single unsigned int as well.
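A minimal sketch of the idea (not the exact upstream definition): reserve one encoding of the `unsigned` range for the empty state, here by storing `value + 1` so that 0 means "none":
```cpp
#include <cassert>

// Sketch only: same size as unsigned, one reserved encoding for "none".
// (In this encoding UINT_MAX itself is not representable.)
class UnsignedOrNone {
  unsigned Rep = 0; // 0 == none; otherwise stores value + 1
public:
  UnsignedOrNone() = default;
  UnsignedOrNone(unsigned V) : Rep(V + 1) {}
  explicit operator bool() const { return Rep != 0; }
  unsigned operator*() const {
    assert(Rep != 0 && "dereferencing empty UnsignedOrNone");
    return Rep - 1;
  }
};

static_assert(sizeof(UnsignedOrNone) == sizeof(unsigned));
```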
ISD::ADDC, ISD::ADDE, ISD::SUBC, and ISD::SUBE are being deprecated in
favor of ISD::UADDO_CARRY and ISD::USUBO_CARRY. This patch lowers UADDO,
UADDO_CARRY, USUBO, and USUBO_CARRY accordingly.
This adds support for all the surface read and write calls to clang. It
extends the pattern used for textures to surfaces too.
I tested this by generating all the various permutations of the calls
and argument types in a Python script, compiling them with both clang
and nvcc, and comparing the generated PTX for equivalence. They all
agree, ignoring register allocation and some places where Clang picks
different memory write instructions. An example kernel is:
```
__global__ void testKernel(cudaSurfaceObject_t surfObj, int x, float2* result) {
*result = surf1Dread<float2>(surfObj, x, cudaBoundaryModeZero);
}
```
---------
Signed-off-by: Austin Schuh <austin.linux@gmail.com>
This reapplies #132522.
Previously casts of scalable m_ImmConstant splats weren't being folded
by ConstantFoldCastOperand, triggering the "Constant-fold of ImmConstant
should not fail" assertion.
There are no changes to the code in this PR; instead, we just needed
#133207 to land first.
A test has been added for the assertion in
llvm/test/Transforms/InstSimplify/vec-icmp-of-cast.ll
@icmp_ult_sext_scalable_splat_is_true.
---------
#118806 fixed an infinite loop in FoldShiftByConstant that could occur
when the shift amount was a ConstantExpr.
However, this meant that FoldShiftByConstant no longer kicked in for
scalable vectors because scalable splats are represented by
ConstantExprs.
This fixes it by allowing scalable splats of non-ConstantExprs in
m_ImmConstant, which also fixes a few other test cases where scalable
splats were being missed.
But I'm also hoping that UseConstantIntForScalableSplat will eventually
remove the need for this.
I noticed this when trying to reverse a combine on RISC-V in #132245,
and saw that the resulting vector and scalar forms were different.
If a block with a single predecessor also had its address taken,
it was getting deleted in this post-inline cleanup step. This would
result in the blockaddress in the resulting function getting deleted
and replaced with inttoptr 1.
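For context, blockaddress constants arise at the source level from the GNU labels-as-values extension, which Clang supports; a small (hedged) example of a function that takes a block's address:
```cpp
// GNU "labels as values" extension: &&target lowers to a blockaddress
// constant referring to the labeled block. If post-inline cleanup deletes
// that block, the constant gets replaced with inttoptr 1.
void f(int N) {
  static void *Addr = &&target;
  if (N)
    goto *Addr; // computed goto keeps the blockaddress live
target:
  return;
}
```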
This fixes one of the bugs that need to be addressed to permit inlining of
functions with blockaddress uses.
At the moment this is not testable (at least without an annoyingly complex
unit test); it is a preparatory fix for future patches. Functions with
blockaddress uses are rejected in isInlineViable, so we don't get this far
with the current InlineFunction uses (some of the existing cases seem to
reproduce this part of the rejection logic, like PartialInliner). This
will be tested in a pending llvm-reduce change.
Prerequisite for #38908
`tensor.insert_slice` needs to have read semantics on its destination
operand. Since it has a return value, its semantics are:
- Copy dest to result.
- Copy source to a subview of the destination.
`tensor.parallel_insert_slice` though has no result. So it does not need
to have read semantics. The op description
[here](a3ac318e5f/mlir/include/mlir/Dialect/Tensor/IR/TensorOps.td (L1524))
also says that it is expected to lower to a `memref.subview`, which does
not have read semantics on the destination (it's just a view).
This patch drops the read semantics for destination of
`tensor.parallel_insert_slice` but also makes the `shared_outs` operands
of `scf.forall` have read semantics. Earlier it would rely indirectly on
read semantics of destination operand of `tensor.parallel_insert_slice`
to propagate the read semantics for `shared_outs`. Now that is specified
more directly.
Fixes #133964
---------
Signed-off-by: MaheshRavishankar <mahesh.ravishankar@gmail.com>
This moves all the common settings of the launch and attach operations
into `lldb_dap::protocol::Configuration`. These common settings can
appear in both `launch` and `attach` requests, which allows us to
isolate the DAP configuration operations into a single common location.
This is split out from #133624.
Reverts llvm/llvm-project#134124
The build is failing again due to a linking error:
[here](https://github.com/llvm/llvm-project/pull/134124#issuecomment-2776370486).
Again, the error was not present locally or in any of the pre-merge
builds; it must have been transitively linked in those build environments...
With AVX512VL targets, use 128/256-bit VPERMV/VPERMV3 nodes when we only need the lower elements.
Reapplied version of #133923 with a fix for a typo in the VPERMV3 mask adjustment.