GDB/MI requires unique names for each child, otherwise it fails with
"Duplicate variable object name". Also, wrapped-container printers were
flattened for cleaner visualization in IDEs and the CLI.
Fixes #62340
This patch implements the forwarding to frozen C++03 headers as
discussed in
https://discourse.llvm.org/t/rfc-freezing-c-03-headers-in-libc. In the
RFC, we initially proposed selecting the right headers from the Clang
driver; however, consensus seemed to steer towards handling this in the
library itself. This patch implements that direction.
At a high level, the changes basically amount to making each public
header look like this:
```
// inside <vector>
#ifdef _LIBCPP_CXX03_LANG
# include <__cxx03/vector>
#else
// normal <vector> content
#endif
```
In most cases, public headers are simple umbrella headers so there isn't
much code in the #else branch. In other cases, the #else branch contains
the actual implementation of the header.
This will allow using the $<LINK_LIBRARY> generator expression in some
of our configurations. We should separately pursue officially bumping
the minimum CMake version across all LLVM so we can use this feature
more widely.
When running a benchmark, also save the benchmark results in a JSON
file. That is cheap to do and useful for comparing benchmark results
between different runs.
It makes more sense to start testing libc++ with the latest compiler and
only then to run the LLDB data formatter tests, since that provides more
signal than starting with the data formatter tests.
This patch refactors the tests around aligned allocation and sized
deallocation to avoid relying on passing the -fsized-deallocation or
-faligned-allocation flags by default. Since both of these features are
enabled by default in >= C++14 mode, it now makes sense to make that
assumption in the test suite.
A notable exception is MinGW and some older compilers, where sized
deallocation is still not enabled by default. We treat that as a "bug"
in the test suite and we work around it by explicitly adding
-fsized-deallocation, but only under those configurations.
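For reference, a minimal self-contained illustration of the
sized-deallocation behavior these tests rely on (the replacement
functions below are only an example, not test-suite code):
```
#include <cstdio>
#include <cstdlib>
#include <new>

// Replace the global allocation functions so we can observe which form of
// operator delete the compiler calls for a plain `delete` expression.
void* operator new(std::size_t size) {
  if (void* p = std::malloc(size))
    return p;
  throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
  std::puts("unsized operator delete");
  std::free(p);
}

void operator delete(void* p, std::size_t) noexcept {
  std::puts("sized operator delete");
  std::free(p);
}

int main() {
  // With sized deallocation enabled (the default in C++14 and later on most
  // compilers, but not e.g. on MinGW without -fsized-deallocation), the
  // sized overload above is called here.
  delete new int(42);
}
```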
This moves the configuration of the CMake options that turn off RTTI,
exceptions and friends to the beginning of the CMake file, where we
configure other optional parts of the library.
This fixes an important bug where we would disable the benchmarks
because these options were not defined yet, leading the build to think
they were set to OFF.
This script does not depend on the generated headers since those are
already special-cased in header_information.py. Change the dependency
list to depend on header_information.py instead. While looking at this
code, also simplify the assignment to `libcxx_root` inside this script.
This patch introduces a new kind of bounded iterator that knows the size
of its valid range at compile-time, as in std::array. This allows computing
the end of the range from the start of the range and the size, which requires
storing only the start of the range in the iterator instead of both the start
and the size (or start and end). The iterator wrapper is otherwise identical
in design to the existing __bounded_iter.
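A simplified sketch of the idea (illustrative only; names are made up
and the details differ from libc++'s actual implementation):
```
#include <cassert>
#include <cstddef>

// The size of the valid range is a template parameter, so the iterator only
// stores the current position and the start of the range; the end of the
// range is recomputed as begin_ + Size instead of being stored.
template <class T, std::size_t Size>
struct static_bounded_iter {
  T* current_; // current position
  T* begin_;   // start of the valid range; the end is begin_ + Size

  T& operator*() const {
    assert(current_ >= begin_ && current_ < begin_ + Size &&
           "out-of-bounds dereference of a bounded iterator");
    return *current_;
  }
  static_bounded_iter& operator++() {
    assert(current_ < begin_ + Size && "incrementing past the end of the range");
    ++current_;
    return *this;
  }
};

int main() {
  int storage[3] = {1, 2, 3};
  static_bounded_iter<int, 3> it{storage, storage};
  assert(*it == 1);
  ++it;
  assert(*it == 2);
}
```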
Since this requires changing the type of the iterators returned by
std::array, this new bounded iterator is controlled by an ABI flag.
As a drive-by, centralize the tests for std::array::operator[] and add
missing tests for OOB operator[] on non-empty arrays.
Fixes #70864
Instead of building the benchmarks separately via CMake and running them
separately from the test suite, this patch merges the benchmarks into
the test suite and handles both uniformly.
As a result:
- It is now possible to run individual benchmarks like we run tests
(e.g. using libcxx-lit), which is a huge quality-of-life improvement.
- The benchmarks will be run under exactly the same configuration as
the rest of the tests, which is a nice simplification. This does
mean that one has to be careful to enable the desired optimization
flags when running benchmarks, but that is easy with e.g.
`libcxx-lit <...> --param optimization=speed`.
- Benchmarks can use the same annotations as the rest of the test
suite, such as `// UNSUPPORTED` & friends.
When running the tests via `check-cxx`, we only compile the benchmarks
because running them would be too time consuming. This introduces a bit
of complexity in the testing setup, and instead it would be better to
allow passing a --dry-run flag to GoogleBenchmark executables, which is
the topic of https://github.com/google/benchmark/issues/1827.
I am not really satisfied with the layering violation of adding the
%{benchmark_flags} substitution to cmake-bridge, however I believe
this can be improved in the future.
The GPU runs these tests using the files built from the `libc` project.
These will be placed in `include/<triple>` and `lib/<triple>`. We use
the `amdhsa-loader` and `nvptx-loader` tools, which are also provided by
`libc`. These launch a kernel called `_start`, which calls `main`, so
we can pretend that GPU programs are normal terminal applications.
We force serial execution here because `llvm-lit` runs way too many
processes in parallel, which has a bad habit of making the GPU drivers
hang or run out of resources. This allows the compilation to be run in
parallel while the jobs themselves are serialized via a file lock.
In the future this can likely be refined to accept user-specified
architectures, or to better handle including the root directory by
exposing that instead of just `include/<triple>/c++/v1/`.
This currently fails ~1% of the tests on AMDGPU and ~3% of the tests on
NVPTX. This will hopefully be reduced further, and later patches can
XFAIL a lot of them once it's down to a reasonable number.
Future support will likely want to allow passing in a custom
architecture instead of simply relying on `-mcpu=native`.
This PR deprecates `<ccomplex>`, `<cstdbool>`, `<ctgmath>`, and
`<ciso646>` in C++17 and "removes" them in C++20 via special
deprecation warnings.
`<cstdalign>` was previously missing from libc++. This PR also adds it,
and then deprecates and "removes" it in the same way.
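For illustration, the "removal" via a deprecation warning can look
roughly like this inside such a header (a sketch; the exact guards and
wording used by libc++ may differ):
```
// Hypothetical sketch of the diagnostics in a <ciso646>-like header.
#include <__config>

#if !defined(_LIBCPP_DISABLE_DEPRECATION_WARNINGS)
#  if _LIBCPP_STD_VER >= 20
#    warning "<ciso646> is removed in C++20; include <version> for library feature-test macros instead."
#  elif _LIBCPP_STD_VER >= 17
#    warning "<ciso646> is deprecated in C++17."
#  endif
#endif
```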
Papers:
- https://wg21.link/P0063R3
- https://wg21.link/P0619R4
Closes #99985.
---------
Co-authored-by: Louis Dionne <ldionne.2@gmail.com>
Around half of the tests are based on the tests from Arthur O'Dwyer's
original implementation of std::flat_map, with modifications and
removals.
Partially implements #105190.
In C++20 mode, `__cpp_lib_optional` and `__cpp_lib_variant` should be
`202106L` due to DR P2231R1.
In C++26 mode, `__cpp_lib_variant` should be bumped to `202306L` due to
P2637R3.
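For illustration, code can key off these values once bumped (a minimal
sketch; the values are the ones mandated by the papers):
```
#include <version>

#if defined(__cpp_lib_optional) && __cpp_lib_optional >= 202106L
// std::optional has the additional constexpr support from P2231R1.
#endif

#if defined(__cpp_lib_variant) && __cpp_lib_variant >= 202306L
// std::variant provides the member visit() added by P2637R3.
#endif

int main() { return 0; }
```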
- Clang 16/17 shouldn't get this bump (as member `visit` requires
explicit object parameters), but it's very tricky to make the bump
conditional. I _hope_ unconditionally bumping in C++26 will be OK for
LLVM 20, when support for Clang 17 is dropped.
Related PRs:
- https://reviews.llvm.org/D102119
- #83335
- #76447
This reverts commit 68c04b0ae62d8431d72d8b47fc13008002ee4387.
This disables the IWYU mapping that caused the failure, since
the headers aren't reachable for now.
This is the first part of the "Freezing C++03 headers" proposal
explained in
https://discourse.llvm.org/t/rfc-freezing-c-03-headers-in-libc/77319/58.
This patch mechanically copies the headers as of the LLVM 19.1 release
into a subdirectory of libc++ so that we can start using these headers
when building in C++03 mode. We are going to be backporting important
changes to that copy of the headers until the LLVM 21 release. After
the LLVM 21 release, only critical bugfixes will be applied to the
C++03 copy of the headers.
This patch only performs a copy of the headers -- these headers are
still unused by the rest of the codebase.
Since we don't generate a full dependency graph of headers, we can
greatly simplify the script that parses the result of --trace-includes.
At the same time, we also unify the mechanism for detecting whether a
header is a public/C compat/internal/etc header with the existing
mechanism in header_information.py.
As a drive-by this fixes the headers_in_modulemap.sh.py test which had
been disabled by mistake because it used its own way of determining
the list of libc++ headers. By consistently using header_information.py
to get that information, problems like this shouldn't happen anymore.
This should also unblock #110303, which was blocked because of
a brittle implementation of the transitive includes check which broke
when the repository was cloned at a path like /path/__something/more.
Implements std::from_chars for float and double.
The implementation uses LLVM-libc to do the real parsing. Since this is
the first time libc++ uses LLVM-libc, there is a bit of additional
infrastructure code. The patch is based on the RFC "Project Hand In
Hand (LLVM-libc/libc++ code sharing)":
https://discourse.llvm.org/t/rfc-project-hand-in-hand-llvm-libc-libc-code-sharing/77701
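A small usage example of the new overloads (assumes a standard library
that provides the floating-point overloads, as libc++ does after this
change):
```
#include <charconv>
#include <cstdio>
#include <system_error>

int main() {
  const char text[] = "3.25e2";
  double value = 0.0;
  // Floating-point std::from_chars (P0067R5); this patch adds the float and
  // double overloads to libc++, with the actual parsing done by shared
  // LLVM-libc code.
  auto result = std::from_chars(text, text + sizeof(text) - 1, value);
  if (result.ec == std::errc())
    std::printf("parsed %f\n", value); // prints 325.000000
}
```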
These headers are not doing anything beyond the system or compiler
provided equivalent headers, so there's no real reason to keep them
around. Reducing the number of C headers we provide in libc++ simplifies
our header layering and reduces the potential for confusion when headers
are layered incorrectly.
Currently, the library-internal feature test macros are only defined if
the feature is not available, and always have the prefix
`_LIBCPP_HAS_NO_`. This patch changes that, so that they are always
defined and have the prefix `_LIBCPP_HAS_` instead. This changes the
canonical use of these macros to `#if _LIBCPP_HAS_FEATURE`, which means
that using an undefined macro (e.g. due to a missing include) is
diagnosed now. While this is rather unlikely currently, a similar change
in `<__configuration/availability.h>` caught a few bugs. This also
improves readability, since it removes the double-negation of `#ifndef
_LIBCPP_HAS_NO_FEATURE`.
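As an illustration of the new convention, using threads support as an
example (assumes building against a libc++ that includes this change):
```
#include <__config> // libc++ configuration; defines the _LIBCPP_HAS_* macros

// Old convention (being phased out): the macro was only defined when the
// feature was missing, so a typo or a missing include silently selected the
// wrong branch:
//   #ifndef _LIBCPP_HAS_NO_THREADS
//   ...
//   #endif

// New convention: the macro is always defined to 0 or 1, so using it while it
// is undefined (e.g. because of a missing include) can be diagnosed.
#if _LIBCPP_HAS_THREADS
#  include <mutex>
#endif

int main() { return 0; }
```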
The current patch only touches the macros defined in `<__config>`. If
people are happy with this approach, I'll make a follow-up PR to also
change the macros defined in `<__config_site>`.
This improves the CI output by providing collapsible sections for
sub-parts of our build.
This was originally opened as #75233.
Co-authored-by: eric <eric@efcs.ca>
We've been increasing the coverage of libc++ LLDB tests in the pre-merge
CI (see https://github.com/llvm/llvm-project/pull/110570). Unfortunately
the tests are spread across different targets. It would be great if we
had a single target that libc++ maintainers could run.
We do this by passing the `libc++` test-category as a parameter to
LLDB's [`dotest` testing
framework](https://lldb.llvm.org/resources/test.html). This will only
run LLDB tests that have been marked as belonging to the `libc++`
category.
This removes the need for macOS nodes in Buildkite. It also moves to the
proper way of testing backdeployment, which is to actually run on the
target OS itself, instead of using packaged dylibs from previous OS
versions and trying to emulate backdeployment with DYLD_LIBRARY_PATH.
As a drive-by change, also fix a few back-deployment annotations that
were incorrect and add support for minor versions in the Lit feature
determining availability from the target triple.
This is a re-application of bc6bd3bc1e9 which was reverted in
f11abac6524 because it broke the Clang pre-commit CI.
Original commit message:
This patch rewrites the modulemap to have fewer top-level modules.
Previously, our modulemap had one top level module for each header in
the library, including private headers. This had the well-known problem
of making compilation times terrible, in addition to being somewhat
against the design principles of Clang modules.
This patch provides almost an order of magnitude compilation time
improvement when building modularized code (certainly subject to
variations). For example, including <ccomplex> without a module cache
went from 22.4 seconds to 1.6 seconds, a 14x improvement.
To achieve this, one might be tempted to simply put all the headers in a
single top-level module. Unfortunately, this doesn't work because libc++
provides C compatibility headers (e.g. stdlib.h) which create cycles
when the C Standard Library headers are modularized too. This is
especially tricky since base systems are usually not modularized: as far
as I know, only Xcode 16 beta contains a modularized SDK that makes this
issue visible. To understand it, imagine we have the following setup:
// in libc++'s include/c++/v1/module.modulemap
module std {
  header stddef.h
  header stdlib.h
}
// in the C library's include/module.modulemap
module clib {
  header stddef.h
  header stdlib.h
}
Now, imagine that the C library's <stdlib.h> includes <stddef.h>,
perhaps as an implementation detail. When building the `std` module,
libc++'s <stdlib.h> header does `#include_next <stdlib.h>` to get the C
library's <stdlib.h>, so libc++ depends on the `clib` module.
However, remember that the C library's <stdlib.h> header includes
<stddef.h> as an implementation detail. Since the header search paths
for libc++ are (and must be) before the search paths for the C library,
the C library ends up including libc++'s <stddef.h>, which means it
depends on the `std` module. That's a cycle.
To solve this issue, this patch creates one top-level module for each C
compatibility header. The rest of the libc++ headers are located in a
single top-level `std` module, with two main exceptions. First, the
module containing configuration headers (e.g. <__config>) has its own
top-level module too, because those headers are included by the C
compatibility headers.
Second, we create a top-level std_core module that contains several
dependency-free utilities used (directly or indirectly) from the __math
subdirectory. This is needed because __math pulls in a bunch of stuff,
and __math is used from the C compatibility header <math.h>.
As a direct benefit of this change, we don't need to generate an
artificial __std_clang_module header anymore to provide a monolithic
`std` module, since our modulemap does it naturally by construction.
A next step after this change would be to look into whether math.h
really needs to include the contents of __math, and if so, whether
libc++'s math.h truly needs to include the C library's math.h header.
Removing either dependency would break this annoying cycle.
Thanks to Eric Fiselier for pointing out this approach during a recent
meeting. This wasn't viable before some recent refactoring, but wrapping
everything (except the C headers) in a large module is by far the
simplest and the most effective way of doing this.
Fixes #86193
This allows catching OOB accesses inside `unique_ptr<T[]>` when the size
of the allocation is known. The size of the allocation can be known when
the unique_ptr has been created with make_unique & friends or when the
type necessitates an array cookie before the allocation.
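A simplified sketch of the technique (illustrative only; libc++'s
actual implementation leverages the information mentioned above rather
than a separate wrapper type):
```
#include <cassert>
#include <cstddef>
#include <memory>

// Illustrative wrapper: remember the element count when the array is created
// through a make_unique-style factory, and check every indexed access.
template <class T>
class checked_array_ptr {
  std::unique_ptr<T[]> ptr_;
  std::size_t size_;

public:
  checked_array_ptr(std::unique_ptr<T[]> p, std::size_t n) : ptr_(std::move(p)), size_(n) {}

  T& operator[](std::size_t i) const {
    assert(i < size_ && "out-of-bounds access through checked_array_ptr");
    return ptr_[i];
  }
};

template <class T>
checked_array_ptr<T> make_checked_array(std::size_t n) {
  return checked_array_ptr<T>(std::make_unique<T[]>(n), n);
}

int main() {
  auto p = make_checked_array<int>(4);
  p[3] = 42;   // in bounds
  // p[4] = 42; // would trip the assertion
}
```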
This is a re-application of 45a09d181, which had been reverted in
f11abac6 due to unrelated CI failures.
This reverts 3 commits:
45a09d1811d5d6597385ef02ecf2d4b7320c37c5
24bc3244d4e221f4e6740f45e2bf15a1441a3076
bc6bd3bc1e99c7ec9e22dff23b4f4373fa02cae3
The GitHub pre-merge CI has been broken since this PR went in. This
change reverts it to see if I can get the pre-merge CI working again.
This allows catching OOB accesses inside `unique_ptr<T[]>` when the size
of the allocation is known. The size of the allocation can be known when
the unique_ptr has been created with make_unique & friends or when the
type necessitates an array cookie before the allocation.
This patch rewrites the modulemap to have fewer top-level modules.
Previously, our modulemap had one top level module for each header in
the library, including private headers. This had the well-known problem
of making compilation times terrible, in addition to being somewhat
against the design principles of Clang modules.
This patch provides almost an order of magnitude compilation time
improvement when building modularized code (certainly subject to
variations). For example, including <ccomplex> without a module cache
went from 22.4 seconds to 1.6 seconds, a 14x improvement.
To achieve this, one might be tempted to simply put all the headers in a
single top-level module. Unfortunately, this doesn't work because libc++
provides C compatibility headers (e.g. stdlib.h) which create cycles
when the C Standard Library headers are modularized too. This is
especially tricky since base systems are usually not modularized: as far
as I know, only Xcode 16 beta contains a modularized SDK that makes this
issue visible. To understand it, imagine we have the following setup:
// in libc++'s include/c++/v1/module.modulemap
module std {
  header stddef.h
  header stdlib.h
}
// in the C library's include/module.modulemap
module clib {
  header stddef.h
  header stdlib.h
}
Now, imagine that the C library's <stdlib.h> includes <stddef.h>,
perhaps as an implementation detail. When building the `std` module,
libc++'s <stdlib.h> header does `#include_next <stdlib.h>` to get the C
library's <stdlib.h>, so libc++ depends on the `clib` module.
However, remember that the C library's <stdlib.h> header includes
<stddef.h> as an implementation detail. Since the header search paths
for libc++ are (and must be) before the search paths for the C library,
the C library ends up including libc++'s <stddef.h>, which means it
depends on the `std` module. That's a cycle.
To solve this issue, this patch creates one top-level module for each C
compatibility header. The rest of the libc++ headers are located in a
single top-level `std` module, with two main exceptions. First, the
module containing configuration headers (e.g. <__config>) has its own
top-level module too, because those headers are included by the C
compatibility headers.
Second, we create a top-level std_core module that contains several
dependency-free utilities used (directly or indirectly) from the __math
subdirectory. This is needed because __math pulls in a bunch of stuff,
and __math is used from the C compatibility header <math.h>.
As a direct benefit of this change, we don't need to generate an
artificial __std_clang_module header anymore to provide a monolithic
`std` module, since our modulemap does it naturally by construction.
A next step after this change would be to look into whether math.h
really needs to include the contents of __math, and if so, whether
libc++'s math.h truly needs to include the C library's math.h header.
Removing either dependency would break this annoying cycle.
Thanks to Eric Fiselier for pointing out this approach during a recent
meeting. This wasn't viable before some recent refactoring, but wrapping
everything (except the C headers) in a large module is by far the
simplest and the most effective way of doing this.
Fixes #86193
We were incorrectly computing whether a FTM has been implemented.
Instead of checking whether any version of the FTM is implemented for
the current Standard, we need to make sure that the correct version of
the FTM has been implemented.
As a drive-by fix, also correctly close the file that we load JSON from,
which was forgotten.