Sanjoy Das 86771d0b65 Introduce a ConditionallySpeculatable op interface
This patch takes the first step towards a more principled modeling of undefined behavior in MLIR as discussed in the following discourse threads:

 1. https://discourse.llvm.org/t/semantics-modeling-undefined-behavior-and-side-effects/4812
 2. https://discourse.llvm.org/t/rfc-mark-tensor-dim-and-memref-dim-as-side-effecting/65729

This patch in particular does the following:

 1. Introduces a ConditionallySpeculatable OpInterface that dynamically determines whether an Operation can be speculated.
 2. Re-defines `NoSideEffect` to allow undefined behavior, making it necessary but not sufficient for speculation.  Also renames it to `NoMemoryEffect`.
 3. Makes LICM respect the above semantics.
 4. Changes all ops tagged with `NoSideEffect` today to additionally implement ConditionallySpeculatable and mark themselves as always speculatable.  This combined trait is named `Pure` (see the ODS sketch after this list).  This makes this change NFC.
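
As a rough sketch in ODS (the dialect and op names below are hypothetical placeholders, not ops added by this patch), the relationship between these traits looks like:

    include "mlir/Interfaces/SideEffectInterfaces.td"

    // `Pure` bundles `NoMemoryEffect` with "always speculatable": no memory
    // effects, no undefined behavior, no infinite loops, so passes like LICM
    // may hoist the op freely.
    def MyDialect_AddOp : MyDialect_Op<"add", [Pure]>;

    // An op with no memory effects that can still have UB (e.g. an
    // out-of-bounds index) keeps `NoMemoryEffect` and answers "can I be
    // speculated?" dynamically through the new interface.
    def MyDialect_DimOp : MyDialect_Op<"dim",
        [NoMemoryEffect, DeclareOpInterfaceMethods<ConditionallySpeculatable>]>;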

For out of tree dialects:

 1. Replace `NoSideEffect` with `Pure` if the operation does not have any memory effects, undefined behavior or infinite loops.
 2. Replace `NoSideEffect` with `NoMemoryEffect` otherwise (see the sketch below).
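
For illustration, with a hypothetical out-of-tree op (the names are placeholders), the migration would look roughly like:

    // Before this change:
    def Foo_AbsOp : Foo_Op<"abs", [NoSideEffect]>;

    // After, if the op has no memory effects, no UB, and no infinite loops:
    def Foo_AbsOp : Foo_Op<"abs", [Pure]>;

    // After, if the op has no memory effects but may exhibit UB or may not
    // terminate:
    def Foo_DivOp : Foo_Op<"div", [NoMemoryEffect]>;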

The next steps in this process are (I'm proposing to do these in upcoming patches):

 1. Update operations like `tensor.dim`, `memref.dim`, `scf.for`, `affine.for` to implement a correct hook for `ConditionallySpeculatable`.  I'm also happy to update ops in other dialects if the respective dialect owners would like to and can give me some pointers.
 2. Update other passes that speculate operations to consult `ConditionallySpeculatable` in addition to `NoMemoryEffect`.  I could not find any other than LICM on a quick skim, but I could have missed some.
 3. Add some documentation / FAQs detailing the differences between side effects, undefined behavior, and speculatability.

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D135505

The LLVM Compiler Infrastructure

This directory and its sub-directories contain the source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.

The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.

Getting Started with the LLVM System

Taken from the Getting Started with the LLVM System documentation.

Overview

Welcome to the LLVM project!

The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.

C-like languages use the Clang frontend. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode -- and from there into object files, using LLVM.

Other components include: the libc++ C++ standard library, the LLD linker, and more.

Getting the Source Code and Building LLVM

The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.

This is an example work-flow and configuration to get and build the LLVM source:

  1. Checkout LLVM (including related sub-projects like Clang):

    • git clone https://github.com/llvm/llvm-project.git

    • Or, on Windows, git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git

  2. Configure and build LLVM and Clang:

    • cd llvm-project

    • cmake -S llvm -B build -G <generator> [options]

      Some common build system generators are:

      • Ninja --- for generating Ninja build files. Most LLVM developers use Ninja.
      • Unix Makefiles --- for generating make-compatible parallel makefiles.
      • Visual Studio --- for generating Visual Studio projects and solutions.
      • Xcode --- for generating Xcode projects.

      Some common options:

      • -DLLVM_ENABLE_PROJECTS='...' and -DLLVM_ENABLE_RUNTIMES='...' --- semicolon-separated list of the LLVM sub-projects and runtimes you'd like to additionally build. LLVM_ENABLE_PROJECTS can include any of: clang, clang-tools-extra, cross-project-tests, flang, libc, libclc, lld, lldb, mlir, openmp, polly, or pstl. LLVM_ENABLE_RUNTIMES can include any of libcxx, libcxxabi, libunwind, compiler-rt, libc or openmp. Some runtime projects can be specified either in LLVM_ENABLE_PROJECTS or in LLVM_ENABLE_RUNTIMES.

        For example, to build LLVM, Clang, libcxx, and libcxxabi, use -DLLVM_ENABLE_PROJECTS="clang" -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi".

      • -DCMAKE_INSTALL_PREFIX=directory --- Specify for directory the full path name of where you want the LLVM tools and libraries to be installed (default /usr/local). Be careful if you install runtime libraries: if your system uses those provided by LLVM (like libc++ or libc++abi), you must not overwrite your system's copy of those libraries, since that could render your system unusable. In general, using something like /usr is not advised, but /usr/local is fine.

      • -DCMAKE_BUILD_TYPE=type --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

      • -DLLVM_ENABLE_ASSERTIONS=On --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).

    • cmake --build build [-- [options] <target>], or invoke the build system you specified above directly.

      • The default target (i.e. ninja or make) will build all of LLVM.

      • The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order.

      • CMake will generate targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target.

      • Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for make, use the option -j NNN, where NNN is the number of parallel jobs to run. In most cases, you get the best performance if you specify the number of CPU threads you have. On some Unix systems, you can specify this with -j$(nproc).

    • For more information see CMake.

Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.

Getting in touch

Join the LLVM Discourse forums, Discord chat, or the #llvm IRC channel on OFTC.

The LLVM project has adopted a code of conduct for participants to all modes of communication within the project.
