
Implement P2036R3. Variables captured by copy (explicitly or not) are deduced correctly at the point where we know whether the lambda is mutable, and are ill-formed to refer to before that.

Up until now, the entire lambda declaration up to the start of the body was parsed in the parent scope, so that captures were not available to name lookup. The scoping is changed to have an outer lambda scope, followed by the lambda prototype scope and body. The lambda scope is necessary because there may be a template scope between the start of the lambda (to which we want to attach the captured variables) and the prototype scope. We also need to introduce a declaration context to attach the captured variables to (several parts of Clang assume captures are handled from the call operator's context) before we know the type of the call operator.

The order of operations is as follows:

* Parse the init-captures in the lambda's parent scope.
* Introduce a lambda scope.
* Create the lambda class and call operator.
* Add the init-captures to the call operator's context and to the lambda scope. The variables are not captured yet (because we don't know their type); instead, explicit captures are stored in a temporary map that preserves the order of capture (so that AST dumps have a stable order).
* Set a flag on the LambdaScopeInfo to indicate that we have not yet injected the captures.
* Parse the parameters in the parent context: because lambda mangling recurses into the parent context, we couldn't mangle a lambda attached to the context of a lambda whose type is not yet known.
* Parse the lambda qualifiers. At this point we can switch (for the second time) into the lambda context, unset the flag indicating that we have not parsed the lambda qualifiers, record whether the lambda is mutable, and capture the explicit variables.
* Parse the rest of the lambda type, transform the lambda and the call operator's types, and transform the call operator into a template function decl where necessary. At this point, both captures and parameters can be injected into the body's scope.

When trying to capture an implicit variable, if we are before the qualifiers of a lambda, we need to remember that the variables are still in the parent's context (rather than in the call operator's).

Reviewed By: aaron.ballman, #clang-language-wg, ChuanqiXu

Differential Revision: https://reviews.llvm.org/D119136
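To illustrate the user-visible semantics of the change described above, here is a minimal sketch (not part of the patch; the function and variable names are illustrative, assuming the P2036R3 rules as described in the commit message):

    #include <type_traits>

    void test() {
      float x = 0.f;

      // The trailing return type is parsed after the mutable/qualifier
      // position, so the name x below refers to the by-copy init-capture
      // (an int), not the enclosing float. The lambda is not mutable, so the
      // capture is const inside the call operator and decltype((x)) is
      // const int&.
      auto ok = [x = 1](int) -> decltype((x)) { return x; };
      static_assert(std::is_same_v<decltype(ok(0)), const int&>);

      // Per the commit message, naming a by-copy capture before we know
      // whether the lambda is mutable (e.g. in the parameter list) is
      // ill-formed:
      // auto bad = [x = 1](decltype((x)) p) { return p; };  // error
    }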
The LLVM Compiler Infrastructure
This directory and its sub-directories contain the source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.
The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.
Getting Started with the LLVM System
Taken from here.
Overview
Welcome to the LLVM project!
The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.
C-like languages use the Clang frontend. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode -- and from there into object files, using LLVM.
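For example (a sketch, not part of this README; hello.c stands in for any C source file), Clang can emit LLVM IR or bitcode directly:

    clang -S -emit-llvm hello.c -o hello.ll   # human-readable LLVM IR
    clang -c -emit-llvm hello.c -o hello.bc   # LLVM bitcode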
Other components include: the libc++ C++ standard library, the LLD linker, and more.
Getting the Source Code and Building LLVM
The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.
This is an example work-flow and configuration to get and build the LLVM source:
- Checkout LLVM (including related sub-projects like Clang):

      git clone https://github.com/llvm/llvm-project.git

  Or, on Windows:

      git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git

- Configure and build LLVM and Clang:

      cd llvm-project
      cmake -S llvm -B build -G <generator> [options]
Some common build system generators are:
- Ninja --- for generating Ninja build files. Most llvm developers use Ninja.
- Unix Makefiles --- for generating make-compatible parallel makefiles.
- Visual Studio --- for generating Visual Studio projects and solutions.
- Xcode --- for generating Xcode projects.
Some common options:
- -DLLVM_ENABLE_PROJECTS='...' and -DLLVM_ENABLE_RUNTIMES='...' --- semicolon-separated list of the LLVM sub-projects and runtimes you'd like to additionally build. LLVM_ENABLE_PROJECTS can include any of: clang, clang-tools-extra, cross-project-tests, flang, libc, libclc, lld, lldb, mlir, openmp, polly, or pstl. LLVM_ENABLE_RUNTIMES can include any of: libcxx, libcxxabi, libunwind, compiler-rt, libc, or openmp. Some runtime projects can be specified either in LLVM_ENABLE_PROJECTS or in LLVM_ENABLE_RUNTIMES. For example, to build LLVM, Clang, libcxx, and libcxxabi, use -DLLVM_ENABLE_PROJECTS="clang" -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi".
- -DCMAKE_INSTALL_PREFIX=directory --- Specify for directory the full path name of where you want the LLVM tools and libraries to be installed (default /usr/local). Be careful if you install runtime libraries: if your system uses those provided by LLVM (like libc++ or libc++abi), you must not overwrite your system's copy of those libraries, since that could render your system unusable. In general, using something like /usr is not advised, but /usr/local is fine.
- -DCMAKE_BUILD_TYPE=type --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.
- -DLLVM_ENABLE_ASSERTIONS=On --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).
- cmake --build build [-- [options] <target>] or your build system specified above directly.

  - The default target (i.e. ninja or make) will build all of LLVM.
  - The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order.
  - CMake will generate targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target.
  - Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for make, use the option -j NNN, where NNN is the number of parallel jobs to run. In most cases, you get the best performance if you specify the number of CPU threads you have. On some Unix systems, you can specify this with -j$(nproc).

- For more information see CMake. A combined example of the configure, build, and test commands above is sketched below.
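As a combined sketch of the steps above (the Ninja generator, the project/runtime selection, and the install prefix are illustrative choices, not requirements):

    cmake -S llvm -B build -G Ninja \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_ASSERTIONS=On \
        -DCMAKE_INSTALL_PREFIX=$HOME/llvm-install
    cmake --build build                       # Ninja builds in parallel by default
    cmake --build build --target check-all    # run the regression tests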
Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.
Getting in touch
Join the LLVM Discourse forums, Discord chat, or the #llvm IRC channel on OFTC.
The LLVM project has adopted a code of conduct for participants to all modes of communication within the project.