The component diagnostic headers (i.e. `DiagnosticAST.h` and friends)
all follow the same format, and there are enough of them (and enough
content in them) that updating all of them has become rather tedious (at
least it was for me while working on #132348), so this patch instead
generates all of them (or rather their contents) via TableGen.
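For context, this is roughly the pattern repeated in each of those headers; a simplified sketch only, with hypothetical diagnostic names (the real headers pull the `DIAG` entries from a generated `.inc` file and carry many more macro parameters):
```
// Rough shape of a component diagnostic header such as DiagnosticAST.h.
namespace clang {
namespace diag {
enum {
#define DIAG(ENUM, ...) ENUM,
  // In the real header this is: #include "clang/Basic/DiagnosticASTKinds.inc"
  DIAG(err_example_one, /*...*/ 0)
  DIAG(err_example_two, /*...*/ 0)
#undef DIAG
  NUM_BUILTIN_AST_DIAGNOSTICS
};
} // namespace diag
} // namespace clang
```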
Also, it seems that `%enum_select` currently wouldn’t work in
`DiagnosticCommonKinds.td` because the infrastructure for that was
missing from `DiagnosticIDs.h`; this patch should fix that as well.
This is a follow-up to #132129.
Currently, only `Parser` and `SemaBase` get a `DiagCompat()` helper; I’m
planning to keep refactoring compatibility warnings and add more helpers
to other classes as needed. I also refactored a single parser compat
warning just to make sure everything works properly when diagnostics
across multiple components (i.e. Sema and Parser in this case) are
involved.
Implement all single-multi {BF/F/S/U/SU/US}MOP4{A/S} instructions in
clang and llvm following the ACLE in
https://github.com/ARM-software/acle/pull/381/files.
This PR depends on https://github.com/llvm/llvm-project/pull/127797
This patch updates the semantics of template arguments in intrinsic
names for clarity and ease of use. Previously, template argument numbers
indicated which character in the prototype string determined the final
type suffix, which was confusing—especially for intrinsics using
multiple prototype modifiers per operand (e.g., intrinsics operating on
arrays of vectors). The number had to reference the correct character in
the prototype (e.g., the ‘u’ in “2.u”), making the system cumbersome and
error-prone.
With this patch, template argument numbers now refer to the operand
number that determines the final type suffix, providing a more intuitive
and consistent approach.
Currently arm_neon.h emits C-style casts to perform vector type casts. This
relies on implicit conversion between vector types being enabled, which
is currently deprecated behaviour and will soon disappear. To ensure
NEON code keeps working afterwards, this patch changes all of these
vector type casts into bitcasts.
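As a rough illustration (not the exact code NeonEmitter produces, and assuming the new form is an explicit `__builtin_bit_cast`), the change amounts to something like:
```
// Self-contained stand-ins for the Neon vector typedefs.
typedef __attribute__((vector_size(8))) signed char int8x8_t;
typedef __attribute__((vector_size(8))) unsigned char uint8x8_t;

uint8x8_t cast_example(int8x8_t v) {
  // Before (per the description above): a C-style cast that depends on
  // implicit vector conversions being enabled.
  //   return (uint8x8_t)v;
  // After: an explicit bit cast between same-sized vector types.
  return __builtin_bit_cast(uint8x8_t, v);
}
```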
Co-authored-by: Momchil Velikov <momchil.velikov@arm.com>
We can use *Set::insert_range to collapse:
  for (auto Elem : Range)
    Set.insert(Elem);
down to:
Set.insert_range(Range);
In some cases, we can further fold that into the set declaration.
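A self-contained sketch of the pattern (container and element types are illustrative; the fold into the declaration is shown via the iterator-pair constructor, though the patch may use a different spelling):
```
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SmallVector.h"

void example(const llvm::SmallVector<int> &Range) {
  // Collapse the element-by-element loop into a single range insert.
  llvm::DenseSet<int> Set;
  Set.insert_range(Range);

  // One way to fold the insert into the declaration itself.
  llvm::DenseSet<int> Set2(Range.begin(), Range.end());
  (void)Set2;
}
```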
We have more than 32 extensions in our downstream and had to change this
type from uint32_t to uint64_t.
To simplify our downstream and make the code more flexible, I propose to
make it an array of uint32_t that we can size based on the number of
extensions. I really wanted to use std::bitset, but we have to print the
bits to a .inc file which can't easily be done with std::bitset.
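A minimal sketch of the idea (names and the extension count are illustrative, not taken from the actual code):
```
#include <cstdint>

// Size the word array from the number of extensions.
constexpr unsigned NumExtensions = 70;
constexpr unsigned NumWords = (NumExtensions + 31) / 32;

struct ExtensionBits {
  uint32_t Words[NumWords] = {};

  void set(unsigned Ext) { Words[Ext / 32] |= uint32_t(1) << (Ext % 32); }
  bool test(unsigned Ext) const {
    return Words[Ext / 32] & (uint32_t(1) << (Ext % 32));
  }
};
```
Because the storage is plain uint32_t words, the values can be printed directly into a generated .inc file, which is the part that std::bitset makes awkward.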
DenseSet, SmallPtrSet, SmallSet, SetVector, and StringSet recently
gained C++23-style insert_range. This patch replaces:
Dest.insert(Src.begin(), Src.end());
with:
Dest.insert_range(Src);
This patch does not touch custom begin/end iterators like succ_begin for now.
This is the last item of the OpenACC 3.3 spec. It includes the
implicit-name version of 'routine', plus significant refactorings to
make the two work together. The implicit name version is represented as
an attribute on the function call. This patch also implements the
clauses for the implicit-name version, as well as the A.3.4 warning.
- Make semantic matching case insensitive
- Update tests to reflect that the semantic is printed in all lower case
in error messages
- Add new tests to show case insensitivity

Closes #128063
In arm-neon.h, we insert shufflevectors around each intrinsic when the
target is big-endian, to compensate for the difference between the
ABI-defined memory format of vectors (with the whole vector stored as
one big-endian access) and LLVM's target-independent expectations (with
the lowest-numbered lane in the lowest address). However, this code was
written for the AArch64 ABI, and the AArch32 ABI differs slightly: it
requires that vectors are stored in memory as-if stored with VSTM, which
does a series of 64-bit accesses, instead of the AArch64 VSTR, which
does a single 128-bit access. This means that for AArch32 we need to
reverse the lanes in each 64-bit chunk of the vector, instead of in the
whole vector.
Since there are only a small number of different shufflevector orderings
needed, I've split them out into macros, so that this doesn't need
separate conditions in each intrinsic definition.
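As a rough sketch of the difference (macro names are hypothetical, not the ones used in arm_neon.h), for a 128-bit vector of eight 16-bit lanes:
```
// Stand-in for the Neon typedef so the snippet is self-contained.
typedef __attribute__((vector_size(16))) unsigned short uint16x8_t;

// AArch64 big-endian: reverse the lanes of the whole 128-bit vector.
#define REV_WHOLE_U16X8(v) __builtin_shufflevector(v, v, 7, 6, 5, 4, 3, 2, 1, 0)

// AArch32 big-endian: reverse the lanes within each 64-bit chunk only.
#define REV_CHUNK_U16X8(v) __builtin_shufflevector(v, v, 3, 2, 1, 0, 7, 6, 5, 4)

uint16x8_t rev_for_aarch32_be(uint16x8_t v) { return REV_CHUNK_U16X8(v); }
```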
Use a simpler detection of dynamic library operands, as the
readelf-based one seems to be unreliable (it works on my setup, but not
on the buildbots).
This is a follow-up to #127020
This makes it significantly easier to add new builtin templates, since
you only have to modify two places instead of a dozen or so.
The `BuiltinTemplates.td` could also be extended to generate
documentation from it in the future.
cvise reimplements creduce in Python and bundles clang-delta and other
tools. In my experience, it is generally a more robust reduction tool
that is better maintained. I renamed the script to make it tool-neutral,
which also opens up the possibility that we teach it how to
automatically transition over to llvm-reduce and opt/llc to handle LLVM
backend crashes, but that is potential future work.
Internally, the variable names still say "creduce". I kept using the
verb "reduce" because "vise" is not a verb, but the external facing text
has been updated.
Using LLVM build itself for PGO training is convenient and a great
starting point but it also has several issues:
* The LLVM build implicitly depends on tools other than CMake and the
C/C++ compiler, and if those tools aren't available in PATH, the build
will fail.
* The LLVM build also requires standard headers and libraries which may
not always be available in the default location, requiring an explicit
sysroot.
* Building a single configuration (-DCMAKE_BUILD_TYPE=Release) only
exercises the -O3 pipeline and can pessimize other configurations.
* Building for the host target doesn't exercise all other targets.
* Since LLVMSupport is a static library, this doesn't exercise the
linker (beyond what CMake itself does).
Rather than using the LLVM build, ideally we would provide a more
minimal, purpose-built corpus. While we're working on building such a
corpus, provide a CMake option that lets vendors disable the use of the
LLVM build for PGO training.
Summary:
Some attributes have GNU extensions that share names with Clang
attributes. If these imply the same thing, we can specially declare this
to be an alternate but equivalent spelling. This patch enables this for
`no_sanitize` and provides the infrastructure for more to be added if
needed.
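For illustration only (a sketch based on the description above, not code from the patch), the intent is that spellings like these name the same attribute and behave identically:
```
__attribute__((no_sanitize("address"))) void f(); // GNU __attribute__ spelling
[[clang::no_sanitize("address")]] void g();       // Clang scoped spelling
[[gnu::no_sanitize("address")]] void h();         // GNU scoped spelling, assumed here
                                                  // to be the newly equivalent form
```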
Discussions welcome on whether or not we want to bind ourselves to GNU
behavior, since theoretically it's possible for GNU to silently change
the semantics away from our implementation, but I'm not an expert.
Fixes: https://github.com/llvm/llvm-project/issues/125760
This requires adding support to the general builtins emission for
producing prefixed builtin infos separately from un-prefixed ones, which
is a bit crufty. But we don't currently have any good way of having a more
refined model than a single hard-coded prefix string per TableGen
emission. Something more powerful and/or elegant is possible, but this
is a fairly minimal first step that at least allows factoring out the
builtin prefix for something like X86.
This moves the main builtins and several targets to use nice generated
string tables and info structures rather than X-macros. Even without
obvious prefixes factored out, the resulting tables are significantly
smaller and much cheaper to compile without all the X-macro overhead.
This leaves the X-macros in place for atomic builtins which have a wide
range of uses that don't seem reasonable to fold into TableGen.
As future work, these should move to their own file (whether as X-macros
or just generated patterns) so the AST headers don't have to include all
the data for other builtins.
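For context, a simplified sketch of the X-macro style being replaced (entry names and fields are illustrative; the real macros carry more parameters):
```
// Each expansion bakes several string literals into the translation unit,
// which is what makes the X-macro form comparatively large and slow to compile.
#define BUILTIN(ID, TYPE, ATTRS) {#ID, TYPE, ATTRS},
struct BuiltinEntry { const char *Name, *Type, *Attrs; };
static constexpr BuiltinEntry Builtins[] = {
  BUILTIN(__builtin_example_abs, "ii", "nc")
  BUILTIN(__builtin_example_labs, "LiLi", "nc")
};
#undef BUILTIN
```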
This leverages the sharded structure of the builtins to make it easy to
directly tablegen most of the AArch64 and ARM builtins while still using
X-macros for a few edge cases. It also extracts common prefixes as part
of that.
This makes the string tables for these targets dramatically smaller.
This is especially important as the SVE builtins represent (by far) the
largest string table and largest builtin table across all the targets in
Clang.
Note that PointerUnion::dyn_cast has been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
Literal migration would result in dyn_cast_if_present (see the
definition of PointerUnion::dyn_cast), but this patch uses dyn_cast
because we expect DiagsInPedantic and GroupsInPedantic to be nonnull.
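A minimal sketch of the difference (the union's pointee types here are just for illustration):
```
#include "llvm/ADT/PointerUnion.h"

void example(llvm::PointerUnion<int *, float *> U) {
  // Literal migration of the deprecated member form U.dyn_cast<int *>()
  // would be dyn_cast_if_present, which tolerates a null union:
  //   int *I = llvm::dyn_cast_if_present<int *>(U);
  // Plain dyn_cast is used instead, which asserts that U is non-null:
  if (int *I = llvm::dyn_cast<int *>(U))
    (void)I;
}
```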
In the discussion around #116792, @rjmccall mentioned that ARCMigrate
has been obsoleted and that we could go ahead and remove it from Clang,
so this patch does just that.
Historically, the main example of *very* large string tables used
`EmitCharArray` to work around MSVC limitations with string literals,
but that was switched (without removing the API) in order to consolidate
on a nicer emission primitive.
While the large string table in `IntrinsicsImpl.inc` seems to compile
correctly on MSVC without the workaround in `EmitCharArray` (which this
PR adds back to the nicer emission path), other users have repeatedly
hit this MSVC limitation, as you can see in the discussion on PR
https://github.com/llvm/llvm-project/pull/120534. This PR teaches the
string offset table emission to look at
the size of the table and switch to the char array emission strategy
when the table becomes too large.
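To make the two emission strategies concrete (contents are just an example, not the actual generated table):
```
// Strategy 1: a single string literal. Compiles quickly and stays readable,
// but very large literals run into an MSVC limit.
static constexpr char StrTableLiteral[] = "add\0sub\0mul\0";

// Strategy 2: an explicit char array. No literal-length limit, but slower to
// compile and harder to read or search.
static constexpr char StrTableArray[] = {
    'a', 'd', 'd', '\0', 's', 'u', 'b', '\0', 'm', 'u', 'l', '\0', '\0'};
```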
This workaround does have the downside of making compile times worse
for large string tables, but that appears unavoidable until we can
identify known-good MSVC versions and switch to requiring them for all
LLVM users. It also reduces searchability of the generated string table
-- I looked at emitting a comment with each string, but it is tricky
because the escaping rules for an inline comment are different from
those of a string literal, and there's no real way to turn the string
literal into a comment.
While improving the output in this way, also clean up the output to not
emit an extraneous empty string at the end of the string table, and
update the `StringTable` class to not look for that. It isn't actually
used by anything and is wasteful.
This PR also switches the `IntrinsicsImpl.inc` string tables over to the
new `StringTable` runtime abstraction. I didn't want to do this until
landing the MSVC workaround in case it caused even this example to start
hitting the MSVC bug, but I wanted to switch here so that I could
simplify the API for emitting the string table with the workaround
present. With the two different emission strategies, it's important to
use a very exact syntax, and that seems better encapsulated in the API.
Last but not least, the `SDNodeInfoEmitter` is updated, including its
tests to match the new output.
This PR should unblock landing
https://github.com/llvm/llvm-project/pull/120534 and let us switch all
of Clang's builtins to use string tables. That PR has all the details
motivating the overall effort.
Follow-up patches will try to consolidate the remaining users onto the
single interface, but those at least were easy to separate into
follow-ups and keep this PR somewhat smaller.
This switches them to use the common TableGen layer, extending it to
support the missing features needed by the NVPTX backend.
The biggest thing was to build a TableGen system that computes the
cumulative SM and PTX feature sets the same way the macros did. That's
done with some string concatenation tricks in TableGen, but they worked
out pretty neatly and are very comparable in complexity to the macro
version.
Then the actual defines were mapped over using a very hacky Python
script. It was never productionized or intended to work in the future,
but for posterity:
https://gist.github.com/chandlerc/10bdf8fb1312e252b4a501bace184b66
Last but not least, there was a very odd "bug" in one of the converted
builtins' prototypes in the TableGen model: it didn't handle uses of `Z`
and `U` both as *qualifiers* of a single type, treating `Z` as its own
`int32_t` type. So my hacky Python script converted `ZUi` into two
types, an `int32_t` and an `unsigned int`. This produced a very wrong
prototype. But the tests caught this nicely and I fixed it manually
rather than trying to improve the Python script as it occurred in
exactly one place I could find.
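Spelled out with a hypothetical builtin, the difference between the correct and the mis-converted reading of `ZUi`:
```
#include <cstdint>

// Correct: "ZUi" is one parameter, a 32-bit unsigned integer.
void correct_prototype(uint32_t);

// Wrong (what the script initially produced): "Z" treated as its own int32_t
// parameter, followed by "Ui" as a separate unsigned int parameter.
void wrong_prototype(int32_t, unsigned int);
```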
This should provide direct benefits of allowing future refactorings to
more directly leverage TableGen to express builtins more structurally
rather than textually. It will also make my efforts to move builtins to
string tables significantly more effective for the NVPTX backend where
the X-macro approach resulted in *significantly* less efficient string
tables than other targets due to the long repeated feature strings.
- The FP8 scalar type (`__mfp8`) was described as a vector type
- The FP8 vector types were described/assumed to have integer element
type (the element type ought to be `__mfp8`)
- Add support for `m` type specifier (denoting `__mfp8`) in
`DecodeTypeFromStr` and create builtin function prototypes using that
specifier, instead of `int8_t`
Reimplement Neon FP8 vector types using attribute `neon_vector_type`
instead of having them as builtin types.
This makes it possible to implement FP8 Neon intrinsics without the need
to add special cases for these types when using
`__builtin_shufflevector` or bitcasts (via the C-style cast operator)
between vectors, both of which are used extensively in the generated
code in `arm_neon.h`.
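A rough sketch of what this looks like (assumed spelling; the actual typedefs live in the generated arm_neon.h):
```
// The FP8 Neon vector types become ordinary Neon vectors whose element type
// is the __mfp8 scalar type, rather than dedicated builtin types.
typedef __attribute__((neon_vector_type(8))) __mfp8 mfloat8x8_t;
typedef __attribute__((neon_vector_type(16))) __mfp8 mfloat8x16_t;
```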
This change removes the need to call the clang-bolt target in order to
apply bolt optimizations to clang. Now running `ninja clang` will build
a clang with bolt optimizations, and `ninja check-clang` and `ninja
install-clang` will test and install bolt optimized clang too.
The clang-bolt target has been kept for compatibility reasons, but it is
now just an alias to the clang target.
Also, this new design for applying the bolt optimizations to clang will
be easier to generalize and use to optimize other binaries/libraries in
the project.
---------
Co-authored-by: Amir Ayupov <fads93@gmail.com>
Co-authored-by: Petr Hosek <phosek@google.com>
When generating `arm_neon.h`, NeonEmitter outputs code that
violates strict aliasing rules (C23 6.5 Expressions #7,
C++23 7.2.1 Value category [basic.lval] #11), for example:
bfloat16_t __reint = __p0;
uint32_t __reint1 = (uint32_t)(*(uint16_t *) &__reint) << 16;
__ret = *(float32_t *) &__reint1;
This patch fixes the offending code by replacing it with
a call to `__builtin_bit_cast`.
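For comparison, the same conversion expressed with `__builtin_bit_cast` (a sketch, not necessarily the exact code the emitter now produces):
```
__ret = __builtin_bit_cast(float32_t,
                           (uint32_t)__builtin_bit_cast(uint16_t, __p0) << 16);
```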
This reverts commit c3ba6f378ef80d750e2278560c6f95a300114412.
We are seeing performance regressions of up to 40% on some compilations
with this patch; we will investigate and reland it after fixing the
performance issues.
Previously, they used a hand-rolled Pascal-string encoding different
from all the other string tables produced from TableGen. This moves them
to use the newly introduced runtime abstraction, and enhances that
abstraction to support iterating over the string table as used in this
case.
From what I can tell the Pascal-string encoding isn't critical here to
avoid expensive `strlen` calls, so I think this is a simpler and more
consistent model. But if folks would prefer a Pascal-string style
encoding, I can instead work to switch the `StringTable` abstraction
towards that. It would require some tricky tradeoffs though to make it
reasonably general: either using 4 bytes instead of 1 byte to encode the
size, or having a fallback to `strlen` for long strings.
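To make the tradeoff concrete (contents are illustrative):
```
// Pascal-style: each entry is prefixed by its length, so iteration never
// needs strlen, at the cost of a bespoke encoding.
static constexpr char PascalTable[] = "\003foo\003bar\005quux!";

// Offset-based style (roughly what the StringTable abstraction relies on):
// one flat blob of nul-terminated strings plus a table of offsets into it.
static constexpr char StringBlob[] = "foo\0bar\0quux!\0";
static constexpr unsigned Offsets[] = {0, 4, 8};
```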
…ecord level.
This fixes the incorrect diagnostic emitted when compiling the following
snippet
```
// string_view.h
template<class _CharT>
class basic_string_view;
typedef basic_string_view<char> string_view;
template<class _CharT>
class
__attribute__((__preferred_name__(string_view)))
basic_string_view {
public:
  basic_string_view() {}
};
inline basic_string_view<char> foo()
{
return basic_string_view<char>();
}
// A.cppm
module;
#include "string_view.h"
export module A;
// Use.cppm
module;
#include "string_view.h"
export module Use;
import A;
```
The diagnostic is
```
string_view.h:11:5: error: 'basic_string_view<char>::basic_string_view' from module 'A.<global>' is not present in definition of 'string_view' provided earlier
```
The underlying issue is that deserialization of the `preferred_name`
attribute triggers deserialization of `basic_string_view<char>`, which
triggers the deserialization of the `preferred_name` attribute again
(since it's attached to the `basic_string_view` template).
The deserialization logic is implemented in a way that prevents it from
looping in the literal sense (it detects early on that it has already
seen the `string_view` typedef when trying to start its deserialization
for the second time), but it leaves the typedef deserialization in an
unfinished state. Subsequently, the `string_view`
typedef from the deserialized module cannot be merged with the same
typedef from `string_view.h`, resulting in the above diagnostic.
This PR resolves the problem by delaying the deserialization of the
`preferred_name` attribute until the deserialization of the
`basic_string_view` template is completed. As a result of deferring, the
deserialization of the `preferred_name` attribute no longer needs to
loop, since the type of the `string_view` typedef is already known when
it's deserialized.
This causes us to generate an enum to go along with the select
diagnostic, which makes the lines that emit these diagnostics clearer.
The syntax for this is:
%enum_select<EnumerationName>{%OptionalEnumeratorName{Text}|{Text2}}0
Where the curly braces around the select text are only required if an
enumerator name is provided.
The TableGen here emits this as a normal 'select' to the frontend, which
permits us to reuse all of the existing 'select' infrastructure.
Documentation is the same as well.
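At the emit site, the benefit is that the select alternative can be named instead of passing a bare integer; roughly, with a hypothetical diagnostic and the enum/enumerator names from the syntax above:
```
// Before: the reader has to know which select alternative "0" refers to.
//   Diag(Loc, diag::err_example) << 0;
// After: the generated enumerator documents the choice at the emit site.
//   Diag(Loc, diag::err_example) << diag::EnumerationName::OptionalEnumeratorName;
```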
---------
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
This was an especially challenging escape hatch because it directly
forced the use of a specific X-macro structure and prevented any other
form of TableGen emission.
The problematic feature that motivated this is a case where a builtin's
prototype can't be represented in the mini-language used by TableGen.
Instead of adding a complete custom entry for this, this PR just teaches
the prototype handling to do the same thing the X-macros did in this
case: emit an empty string and let the Clang builtin handling respond
appropriately.
This should produce identical results while preserving all the rest of
the structured representation in the builtin TableGen code.
Replacing the existing streaming-mode function call with an intrinsic
allows us to make further optimisations around it. For example, if it's
called within a function that has a known streaming mode, we can remove
the dead code and avoid the redundant conditional branch.