Summary:
Previously, we removed the special handling for the code object version
global. I erroneously thought that this meant we could get rid of this
weird `-Xclang` option. However, the option also emits an LLVM IR module
flag, which then causes linking issues.
Summary:
When we were first porting to COV5, this led to some ABI issues due to
a change in how we looked up the work group size. Bitcode libraries
relied on the builtins to emit code, but this was changed between
versions. This prevented the bitcode libraries, like OpenMP or libc,
from being used for both COV4 and COV5. The solution was to have this
'none' functionality which effectively emitted code that branched off of
a global to resolve to either version.
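As a rough illustration (not the actual device runtime code; the helper
names below are hypothetical), the 'none' mode produced code along these
lines, with `__oclc_ABI_version` being the global whose name was borrowed
from the ROCm Device Library:
```
// Conceptual sketch only: branch on a link-time global to pick the COV4 or
// COV5 way of looking up the work group size.
extern const unsigned __oclc_ABI_version; // resolved at link time

// Hypothetical helpers standing in for the two lookup strategies.
unsigned __workgroup_size_from_implicit_args();   // COV5-style lookup
unsigned __workgroup_size_from_dispatch_packet(); // COV4-style lookup

unsigned __get_workgroup_size_x() {
  if (__oclc_ABI_version >= 500)
    return __workgroup_size_from_implicit_args();
  return __workgroup_size_from_dispatch_packet();
}
```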
This wasn't a great solution because it forced every TU to contain this
variable. The patch in
https://github.com/llvm/llvm-project/pull/131033 removed support for
COV4 from OpenMP, which was the only consumer of this functionality.
Other users like HIP and OpenCL did not use this because they linked the
ROCm Device Library directly, which has its own handling (the name was
borrowed from it, after all).
So, now that we don't need to worry about backward compatibility with
COV4, we can remove this special handling. Users can still emit COV4
code, this simply removes the special handling used to make the OpenMP
device runtime bitcode version agnostic.
In recent versions of Clang, using -std=c++20 (and later) implies LSV
(local submodule visibility) when compiling with modules. This change made
our LSV job redundant with the regular modules job, which uses the latest
Standard.
This patch increases the coverage of our CI without increasing its cost
by pinning the LSV job to use C++17, which normally doesn't use LSV. A
related question is whether we should add coverage for non-LSV builds
using Clang modules.
This patch implements the forwarding to frozen C++03 headers as
discussed in
https://discourse.llvm.org/t/rfc-freezing-c-03-headers-in-libc. In the
RFC, we initially proposed selecting the right headers from the Clang
driver; however, consensus seemed to steer towards handling this in the
library itself. This patch implements that direction.
At a high level, the changes basically amount to making each public
header look like this:
```
// inside <vector>
#ifdef _LIBCPP_CXX03_LANG
# include <__cxx03/vector>
#else
// normal <vector> content
#endif
```
In most cases, public headers are simple umbrella headers so there isn't
much code in the #else branch. In other cases, the #else branch contains
the actual implementation of the header.
This patch introduces a new kind of bounded iterator that knows the size
of its valid range at compile-time, as in std::array. This allows computing
the end of the range from the start of the range and the size, which requires
storing only the start of the range in the iterator instead of both the start
and the size (or start and end). The iterator wrapper is otherwise identical
in design to the existing __bounded_iter.
Since this requires changing the type of the iterators returned by
std::array, this new bounded iterator is controlled by an ABI flag.
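As a rough sketch of the idea (not the actual libc++ class; names and
checks below are illustrative only), such an iterator only needs to carry
its current position and the start of the range, recomputing the end from
the statically known size:
```
#include <cstddef>

// Simplified sketch of a bounded iterator whose range size is a
// compile-time constant, as for std::array<T, N>.
template <class _Iter, std::size_t _Size>
class __static_bounded_iter_sketch {
public:
  __static_bounded_iter_sketch(_Iter __current, _Iter __begin)
      : __current_(__current), __begin_(__begin) {}

  decltype(auto) operator*() const {
    // The end of the range is __begin_ + _Size; it never needs to be stored.
    if (__current_ < __begin_ || __current_ >= __begin_ + _Size)
      __builtin_trap(); // out-of-bounds dereference
    return *__current_;
  }

  __static_bounded_iter_sketch& operator++() {
    ++__current_;
    return *this;
  }

private:
  _Iter __current_; // current position
  _Iter __begin_;   // start of the valid range
};
```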
As a drive-by, centralize the tests for std::array::operator[] and add
missing tests for OOB operator[] on non-empty arrays.
Fixes #70864
Summary:
The GPU runs these tests using the files built from the `libc` project.
These will be placed in `include/<triple>` and `lib/<triple>`. We use
the `amdhsa-loader` and `nvptx-loader` tools, which are also provided by
`libc`. These launch a kernel called `_start`, which calls `main`, so we
can pretend that GPU programs are normal terminal applications.
We force serial execution here because `llvm-lit` runs far too many
processes in parallel, which has a bad habit of making the GPU drivers
hang or run out of resources. This allows the compilation to be run in
parallel while the jobs themselves are serialized via a file lock.
In the future this can likely be refined to accept user-specified
architectures, or better handle including the root directory by exposing
that instead of just `include/<triple>/c++/v1/`.
This currently fails ~1% of the tests on AMDGPU and ~3% of the tests on
NVPTX. This will hopefully be reduced further, and later patches can
XFAIL a lot of them once it's down to a reasonable number.
Future support will likely want to allow passing in a custom
architecture instead of simply relying on `-mcpu=native`.
While these flags are semantically relevant only for C++, we do add them
to CMAKE_REQUIRED_FLAGS if they are detected. All flags in that variable
are used both when testing compilation of C and C++ (and for detecting
libraries, which uses the C compiler driver).
Therefore, to be sure we can safely add the flags to
CMAKE_REQUIRED_FLAGS, we test for the option with the C language.
This should fix compilation with GCC; newer versions of GCC do support
the -nostdlib++ option, but it's only supported by the C++ compiler
driver, not the C driver. (However, many builds of GCC also do accept
the option with the C driver, if GCC was compiled with Ada support
enabled, see [1]. That's why this issue isn't noticed in all
configurations with GCC.)
Clang does support these options in both C and C++ driver modes.
This should fix #90332.
[1]
https://github.com/llvm/llvm-project/issues/90332#issuecomment-2325099254
This patch always defines the cxx_shared, cxx_static & other top-level
targets. However, they are marked as EXCLUDE_FROM_ALL when we don't want
to build them. Simply declaring the targets should do no harm, and it
allows other projects to mention these targets regardless of whether
they end up being built or not.
This patch basically moves the definition of e.g. cxx_shared out of the
`if (LIBCXX_ENABLE_SHARED)` and instead marks it as EXCLUDE_FROM_ALL
conditionally on whether LIBCXX_ENABLE_SHARED is passed. It then does
the same for libunwind and libc++abi targets. I purposefully avoided
reformatting the files (which now have inconsistent indentation) because I
wanted to keep the diff minimal, and I know this is an area of the code
where folks may have downstream diffs. I will re-indent the code
separately once this patch lands.
This is a reapplication of 79ee0342dbf0, which was reverted in
a3539090884c because it broke the TSAN and the Fuchsia builds.
Resolves#77654
Differential Revision: https://reviews.llvm.org/D134221
On Apple platforms, using system-libcxxabi as an ABI library wouldn't
work because we'd try to re-export symbols from libc++abi that the
system libc++abi.dylib might not have. Instead, only re-export those
symbols when we're using the in-tree libc++abi.
This does mean that libc++.dylib won't re-export any libc++abi symbols
when building against the system libc++abi, which could be fixed in
various ways. However, the best solution really depends on the intended
use case, so this patch doesn't try to solve that problem.
As a drive-by, also improve the diagnostic message when the user forgets
to set the LIBCXX_CXX_ABI_INCLUDE_PATHS variable, which would previously
lead to a confusing error.
Closes#104672
This allows catching OOB accesses inside `unique_ptr<T[]>` when the size
of the allocation is known. The size of the allocation can be known when
the unique_ptr has been created with make_unique & friends or when the
type necessitates an array cookie before the allocation.
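As a minimal sketch of the idea (not the libc++ implementation; the class
and members below are purely illustrative), the check amounts to comparing
the index against the element count whenever that count is known:
```
#include <cstddef>

// Illustrative only: a pointer-plus-known-size wrapper whose operator[]
// traps on a provably out-of-bounds access.
template <class _Tp>
struct __checked_array_ptr {
  _Tp* __ptr_ = nullptr;
  std::size_t __size_ = 0; // element count when known, 0 when unknown

  _Tp& operator[](std::size_t __i) const {
    if (__size_ != 0 && __i >= __size_)
      __builtin_trap(); // out-of-bounds access on a known-size allocation
    return __ptr_[__i];
  }
};
```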
This is a re-application of 45a09d181, which had been reverted in
f11abac6 due to unrelated CI failures.
This reverts 3 commits:
45a09d1811d5d6597385ef02ecf2d4b7320c37c5
24bc3244d4e221f4e6740f45e2bf15a1441a3076
bc6bd3bc1e99c7ec9e22dff23b4f4373fa02cae3
The GitHub pre-merge CI has been broken since this PR went in. This
change reverts it to see if I can get the pre-merge CI working again.
This allows catching OOB accesses inside `unique_ptr<T[]>` when the size
of the allocation is known. The size of the allocation can be known when
the unique_ptr has been created with make_unique & friends or when the
type necessitates an array cookie before the allocation.
Summary:
We currently set the ABI to experimental, but we should leave this up to
the user. Primarily, this is because we want the ABI to be compatible with
the user's CPU build, so the default should be used.
Summary:
I initially thought that it would be convenient to automatically link
these libraries like they are for standard C/C++ targets. However, this
created issues when trying to use C++ as a GPU target. This patch moves
the logic to implicitly pass these libraries as part of the offloading
toolchain instead, if they are found. This means that the user needs to
set the target toolchain for the link job for automatic detection, but it
can still be done manually via `-Xoffload-linker -lc`.
We always strive to test libc++ as close as possible to the way we
actually ship it. This was approximated reasonably well by setting up the
minimal driver flags when running the test suite; however, we were
running the test suite against the library located in the build
directory.
This patch improves the situation by installing the library (the
headers, the built library, modules, etc) into a fake location and then
running the test suite against that fake "installation root".
This should open the door to getting rid of the temporary copy of the
headers we make during the build process; however, this is left for a
future improvement.
Note that this adds quite a bit of verbosity whenever running the test
suite because we install the headers beforehand every time. We should be
able to override this to silence it; however, CMake doesn't currently
give us a way to do that; see https://gitlab.kitware.com/cmake/cmake/-/issues/26085.
Summary:
This patch adds a CMake cache config file for the GPU build. This cache
will set the default required options when used from the LLVM runtime
interface or directly. These options pretty much disable everything the
GPU can't handle.
With this and the following patches: #99259, #99243, #99287, and #99333,
we will be able to build `libc++` targeting the GPU with an invocation
like this.
```
$ cmake ../llvm \
-DRUNTIMES_nvptx64-nvidia-cuda_CACHE_FILES=${LLVM_SRC}/../libcxx/cmake/caches/NVPTX.cmake \
-DRUNTIMES_amdgcn-amd-amdhsa_CACHE_FILES=${LLVM_SRC}/../libcxx/cmake/caches/AMDGPU.cmake \
-DRUNTIMES_nvptx64-nvidia-cuda_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx" \
-DRUNTIMES_amdgcn-amd-amdhsa_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx" \
-DLLVM_RUNTIME_TARGETS="amdgcn-amd-amdhsa;nvptx64-nvidia-cuda"
```
This will then install the libraries and headers into the appropriate
locations for use with `clang`.
This speeds up the CI a bit (anecdotally ~10%) for those jobs, and it
also helps ensure that we are clean w.r.t. Clang modules when we disable
some of the carve-outs like no-localization or no-threads.
This patch removes support for generating code coverage information for
libc++, which is unmaintained and which I've never been able to use. It
would be great to have support for generating code coverage in the test
suite; however, this incarnation of it doesn't seem to be properly hooked
into the test suite. Since it gets in the way of making the test suite
more standalone, remove it.
This partially restores parity with the old, since-removed debug build.
We can now re-enable a bunch of the disabled tests. Some things of note:
- `bounded_iter`'s converting constructor has never worked. It needs a
friend declaration to access the other `bounded_iter` instantiation's
private fields (see the sketch after this list).
- The old debug iterators also checked that callers did not try to
compare iterators from different objects. `bounded_iter` does not
currently do this, so I've left those disabled. However, I think we
probably should add those. See
https://github.com/llvm/llvm-project/issues/78771#issuecomment-1902999181
- The `std::vector` iterators are bounded up to capacity, not size. This
makes for a weaker safety check. This is because the STL promises not to
invalidate iterators when appending up to the capacity. Since we cannot
retroactively update all the iterators on `push_back()`, I've instead
sized it to the capacity. This is not as good, but at least will stop
the iterator from going off the end of the buffer.
There was also no test for this, so I've added one in the `std`
directory.
- `std::string` has two ambiguities to deal with. First, I opted not to
size it against the capacity. https://eel.is/c++draft/string.require#4
says iterators are invalidated on any non-const operation. Second, there
is the question of whether the iterator can reach the NUL terminator. The
previous debug
tests and the special-case in https://eel.is/c++draft/string.access#2
suggest no. If either of these causes widespread problems, I figure we
can revisit.
- `resize_and_overwrite.pass.cpp` assumed `std::string`'s iterator
supported `s.begin().base()`, but I see no promise of this in the
standard. GCC also doesn't support this. I fixed the test to use
`std::to_address`.
- `alignof.compile.pass.cpp`'s pointer isn't enough of a real pointer.
(It needs to satisfy `NullablePointer`, `LegacyRandomAccessIterator`,
and `LegacyContiguousIterator`.) `__bounded_iter` seems to instantiate
enough to notice. I've added a few more bits to satisfy it.
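Here is a minimal sketch of the converting-constructor issue mentioned in
the first bullet (simplified names, not the real `__bounded_iter`): without
the friend declaration, the constructor cannot read the other
instantiation's private members.
```
template <class _Iter>
class __bounded_iter_sketch {
public:
  // Converting constructor, e.g. iterator -> const_iterator.
  template <class _OtherIter>
  __bounded_iter_sketch(const __bounded_iter_sketch<_OtherIter>& __other)
      : __current_(__other.__current_),
        __begin_(__other.__begin_),
        __end_(__other.__end_) {}

private:
  // Without this, the constructor above fails to compile when
  // _OtherIter != _Iter, because the members below are private in the
  // other instantiation.
  template <class _OtherIter>
  friend class __bounded_iter_sketch;

  _Iter __current_;
  _Iter __begin_;
  _Iter __end_;
};
```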
Fixes #78805
In order to test libc++ under the "Apple System Library" configuration,
we need to run the tests using DYLD_LIBRARY_PATH. This is required
because libc++ gets an install_name of /usr/lib when built as a system
library, which means that we must override the copy of libc++ used by
the whole process. This effectively reverts 2cf2f1b, which was the wrong
solution for the problem I was having.
Of course, this assumes that the just-built libc++ is sufficient to
replace the system library, which is not actually the case
out-of-the-box. Indeed, the system library contains a few symbols that
are not provided by the upstream library, leading to undefined symbols
when replacing the system library by the just-built one.
To solve this problem, we separately build shims that provide those
missing symbols and we manually link against them when we build
executables in the tests. While this is somewhat brittle, it provides a
localized and unintrusive way to allow testing the Apple system
configuration in an upstream environment, which has been a frequent
request.
libc++'s `std::basic_ios` uses `WEOF` to indicate that the `fill` value is
uninitialized. On some platforms (e.g. AIX and z/OS in 64-bit mode),
`wchar_t` is a 4-byte unsigned type and `wint_t` is also 4 bytes, which
means `WEOF` cannot be distinguished from `WCHAR_MAX` by
`std::char_traits<wchar_t>::eq_int_type()`, so this valid character
value cannot be stored on affected platforms (as the implementation
triggers reinitialization to `widen(' ')`).
This patch introduces a new helper class `_FillHelper` that uses a boolean
variable to indicate whether the fill character has been initialized; this
helper is used by default in libc++ ABI version 2. The patch does not
affect ABI version 1, except on AIX (32- and 64-bit) and z/OS (64-bit),
so that the layout of the implementation is compatible with the current
IBM system-provided libc++.
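A minimal sketch of the approach (the real `_FillHelper` differs in its
exact interface; the names below are illustrative): instead of the `WEOF`
sentinel, an explicit flag records whether the fill character was ever set.
```
#include <string>

// Illustrative only: track fill initialization with a bool rather than a
// sentinel int_type value that may collide with a valid character.
template <class _CharT, class _Traits = std::char_traits<_CharT> >
struct __fill_helper_sketch {
  typename _Traits::int_type __fill_val_{};
  bool __set_ = false;

  void __set_fill(typename _Traits::int_type __c) {
    __fill_val_ = __c;
    __set_ = true;
  }

  template <class _Stream>
  typename _Traits::int_type __get_fill(const _Stream& __ios) {
    if (!__set_)
      __set_fill(_Traits::to_int_type(__ios.widen(' '))); // lazy default
    return __fill_val_;
  }
};
```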
This is a continuation of Phabricator patch
[D124555](https://reviews.llvm.org/D124555). This patch uses a modified
version of the [approach](https://reviews.llvm.org/D124555#3566746)
suggested by @ldionne.
---------
Co-authored-by: Louis Dionne <ldionne.2@gmail.com>
Co-authored-by: David Tenty <daltenty.dev@gmail.com>
This patch makes a few adjustments to the way we run the tests in the
Apple configuration on macOS:
First, we stop using DYLD_LIBRARY_PATH. Using that environment variable
leads to libc++.dylib being replaced by the just-built one for the whole
process, and that assumes compatibility between the system-provided
dylib and the just-built one. Unfortunately, that is not the case
anymore due to typed allocation, which is only available in the system
one. Instead, we want to layer the just-built libc++ on top of the
system-provided one, which seems to be what happens when we set a rpath
instead.
Second, we add a missing XFAIL for a std::print test that didn't work as
expected when building with availability annotations enabled. When we
enable these annotations, std::print falls back to non-Unicode and
non-terminal output, which breaks the test.
This patch removes the two-level backend dispatching mechanism we had in
the PSTL. Instead of selecting both a PSTL backend and a PSTL CPU
backend, we now only select a top-level PSTL backend. This greatly
simplifies the PSTL configuration layer.
While this patch technically removes some flexibility from the PSTL
configuration mechanism because CPU backends are not considered
separately, it opens the door to a much more powerful configuration
mechanism based on chained backends in a follow-up patch.
This is a step towards overhauling the PSTL dispatching mechanism.
Picolibc provides neither the clock_gettime function nor the "rt" library.
check_library_exists was incorrectly detecting the "rt" library due to a
CMake issue present when cross-compiling [1]. This resulted in "chrono.cpp"
trying to link against the "rt" library and the following error:
unable to find library from dependent library specifier: rt
[1] https://gitlab.kitware.com/cmake/cmake/-/issues/18121
Installs the source files of the experimental libc++ modules. These
source files (.cppm) are used by Clang to build the std and
std.compat modules.
The design of this patch is based on a discussion in SG-15 on
2023-12-12 (SG-15 is the ISO C++ Tooling study group):
- The modules are installed at a location that is not known to build
systems and compilers.
- Next to the library there will be a module manifest JSON file.
This JSON file contains the information needed to build the modules from
the library's sources, including the location where the
sources are installed. @ruoso supplied the specification of this JSON
file.
- If possible, the compiler has an option to give the location of the
module manifest file
(https://github.com/llvm/llvm-project/pull/76451).
Currently there is no build system support, but it is expected to be added
in the future.
Fixes: https://github.com/llvm/llvm-project/issues/73089
Revert "Revert #76246 and #76083"
This reverts commit 5c150e7eeba9db13cc65b329b3c3537b613ae61d.
Adds a small fix that should properly disable the tests on Windows.
Unfortunately, the original poster has not provided feedback, and the
original patch did not fail in the LLVM CI infrastructure.
Modules are known to fail on Windows due to non-conformance of the
C library. Currently, not having this patch prevents testing on other
platforms.
These cause test build failures on Windows.
This reverts the following commits:
57ca74843586c9a93c425036c5538aae0a2cfa60
d06ae33ec32122bb526fb35025c1f0cf979f1090
This removes the entire modules testing infrastructure.
The current infrastructure uses CMake to generate the std and std.compat
modules. This requires quite a bit of plumbing. Since
CMake only introduced module support in CMake 3.26, modules have a higher
CMake requirement than the rest of the LLVM project. (The LLVM project
requires 3.20.) The main motivation for this approach was how libc++
generated its modules: every header had its own module partition. This
was changed to improve performance, and now only two modules remain. The
code to build these can be manually crafted.
A follow-up patch will re-enable testing modules, using a different
approach.
I recently came across LIBCXXABI_USE_LLVM_UNWINDER and was surprised to
notice it was disabled by default. Since we build libunwind by default
and ship it in the LLVM toolchain, it would seem to make sense that
libc++ and libc++abi rely on libunwind for unwinding instead of using
the system-provided unwinding library (if any).
Most importantly, using the system unwinder implies that libc++abi is
ABI compatible with that system unwinder, which is not necessarily the
case. Hence, it makes a lot more sense to instead default to using the
known-to-be-compatible LLVM unwinder, and let vendors manually select a
different unwinder if desired.
As a follow-up change, we should probably apply the same default to
compiler-rt.
Differential Revision: https://reviews.llvm.org/D150897
Fixes #77662
rdar://120801778
This patch adds a configuration of the libc++ test suite that enables
optimizations when building the tests. It also adds a new CI
configuration to exercise this on a regular basis. This is added in the
context of [1], which requires building with optimizations in order to
hit the bug.
[1]: https://github.com/llvm/llvm-project/issues/68552
Also add a check in the Python script that the binary given to `--qemu`
actually exists. Otherwise you get a generic Python error:
```
# .---command stderr------------
# | Traceback (most recent call last):
# | File "/home/david.spickett/modules-llvm-project/libcxx/utils/qemu_baremetal.py", line 70, in <module>
# | exit(main())
# | File "/home/david.spickett/modules-llvm-project/libcxx/utils/qemu_baremetal.py", line 66, in main
# | os.execvp(qemu_commandline[0], qemu_commandline)
# | File "/usr/lib/python3.8/os.py", line 568, in execvp
# | _execvpe(file, args)
# | File "/usr/lib/python3.8/os.py", line 610, in _execvpe
# | raise last_exc
# | File "/usr/lib/python3.8/os.py", line 601, in _execvpe
# | exec_func(fullname, *argrest)
# | FileNotFoundError: [Errno 2] No such file or directory
# `-----------------------------
# error: command failed with exit status: 1
```
This generic error only shows up when it tries to run the entire command
later.
For the builder, it's only ever going to use qemu-system-arm, so we error
out at configuration time if it's not there.
When we use the -nostdlib++ flag, we don't need to explicitly link
against compiler-rt, since the compiler already links against it by
default. This simplifies the flags that we need to use when building
with Clang and GCC, and opens the door to further simplifications since
most platforms won't need to detect whether libgcc and libgcc_s are
supported anymore.
Furthermore, on platforms where -nostdlib++ is used, this patch prevents
manually linking compiler-rt *before* other system libraries. For
example, Apple platforms have several compiler-rt symbols defined in
libSystem.dylib. If we manually link against compiler-rt, we end up
overriding the default link order preferred by the compiler and
potentially using the symbols from the clang-provided libclang_rt.a
library instead of the system-provided one.
Note that we don't touch how libunwind links against compiler-rt when it
builds the .so/.a because libunwind currently doesn't use -nodefaultlibs
and we want to avoid rocking the boat too much.
rdar://119506163
There are a few drive-by fixes:
- Since the combination of RTTI disabled and exceptions enabled does not
work, this combination is now prohibited.
- A small NFC change in `<any>` to fix a clang-tidy diagnostic.
The code in the Buildkite configuration is prepared for using the std
module. There are more fixes needed for that configuration, which will
be done in a separate commit.
This patch actually runs the tests for picolibc under an emulator,
removing a few workarounds and increasing coverage.
Differential Revision: https://reviews.llvm.org/D155521
Picolibc is a C Standard Library that is commonly used in embedded
environments. This patch adds initial support for this configuration
along with pre-commit CI. As of this patch, the test suite only builds
the tests and nothing is run. A follow-up patch will make the test suite
actually run the tests.
Differential Revision: https://reviews.llvm.org/D154246
1. Instead of using individual "boolean" macros, have an "enum" macro
`_LIBCPP_HARDENING_MODE`. This avoids issues with macros being
mutually exclusive and makes overriding the hardening mode within a TU
more straightforward (see the sketch after this list).
2. Rename the safe mode to debug-lite.
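As a rough sketch of the mechanism (the mode names and values below are
illustrative, not necessarily the ones used by libc++), an enum-style macro
makes the selected mode a single comparable value rather than a set of
mutually exclusive booleans:
```
// Illustrative values only.
#define _LIBCPP_HARDENING_MODE_NONE 0
#define _LIBCPP_HARDENING_MODE_DEBUG_LITE 1
#define _LIBCPP_HARDENING_MODE_DEBUG 2

// A TU (or the build) picks exactly one mode; a default is provided otherwise.
#ifndef _LIBCPP_HARDENING_MODE
#  define _LIBCPP_HARDENING_MODE _LIBCPP_HARDENING_MODE_NONE
#endif

// Checks are then enabled by comparing against the selected mode.
#if _LIBCPP_HARDENING_MODE >= _LIBCPP_HARDENING_MODE_DEBUG_LITE
#  define _LIBCPP_SKETCH_ASSERT(expr) ((expr) ? (void)0 : __builtin_trap())
#else
#  define _LIBCPP_SKETCH_ASSERT(expr) ((void)0)
#endif
```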
This brings the code in line with the RFC:
https://discourse.llvm.org/t/rfc-hardening-in-libc/73925
Fixes #65101
Android's librt and libpthread functionality is part of libc.{a,so}
instead. The atomic APIs are part of the compiler-rt builtins archive.
Android does have libdl.
Android's libc.so has `__cxa_thread_atexit_impl` starting in API 23, and
the oldest supported API is 21, so continue using feature detection for
that symbol.
These settings need to be declared explicitly for the sake of the fuzzer
library's custom libc++ build `add_custom_libcxx`. That macro builds
libc++ using `-DCMAKE_TRY_COMPILE_TARGET_TYPE=STATIC_LIBRARY`, which
breaks the feature detection.