//===-- LLVMContext.cpp - Implement LLVMContext ---------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements LLVMContext, as a wrapper around the opaque
// class LLVMContextImpl.
//
//===----------------------------------------------------------------------===//
#include "llvm/IR/LLVMContext.h"
#include "LLVMContextImpl.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringMap.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/IR/DiagnosticPrinter.h"
#include "llvm/IR/LLVMRemarkStreamer.h"
#include "llvm/Remarks/RemarkStreamer.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
#include <cassert>
#include <cstdlib>
#include <string>
#include <utility>

using namespace llvm;

LLVMContext::LLVMContext() : pImpl(new LLVMContextImpl(*this)) {
  // Create the fixed metadata kinds. This is done in the same order as the
  // MD_* enum values so that they correspond.
  std::pair<unsigned, StringRef> MDKinds[] = {
#define LLVM_FIXED_MD_KIND(EnumID, Name, Value) {EnumID, Name},
#include "llvm/IR/FixedMetadataKinds.def"
#undef LLVM_FIXED_MD_KIND
  };

  for (auto &MDKind : MDKinds) {
    unsigned ID = getMDKindID(MDKind.second);
    assert(ID == MDKind.first && "metadata kind id drifted");
    (void)ID;
  }

  auto *DeoptEntry = pImpl->getOrInsertBundleTag("deopt");
  assert(DeoptEntry->second == LLVMContext::OB_deopt &&
         "deopt operand bundle id drifted!");
  (void)DeoptEntry;

  auto *FuncletEntry = pImpl->getOrInsertBundleTag("funclet");
  assert(FuncletEntry->second == LLVMContext::OB_funclet &&
         "funclet operand bundle id drifted!");
  (void)FuncletEntry;

  auto *GCTransitionEntry = pImpl->getOrInsertBundleTag("gc-transition");
  assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
         "gc-transition operand bundle id drifted!");
  (void)GCTransitionEntry;

  auto *CFGuardTargetEntry = pImpl->getOrInsertBundleTag("cfguardtarget");
  assert(CFGuardTargetEntry->second == LLVMContext::OB_cfguardtarget &&
         "cfguardtarget operand bundle id drifted!");
  (void)CFGuardTargetEntry;

  auto *PreallocatedEntry = pImpl->getOrInsertBundleTag("preallocated");
  assert(PreallocatedEntry->second == LLVMContext::OB_preallocated &&
         "preallocated operand bundle id drifted!");
  (void)PreallocatedEntry;

  auto *GCLiveEntry = pImpl->getOrInsertBundleTag("gc-live");
  assert(GCLiveEntry->second == LLVMContext::OB_gc_live &&
         "gc-live operand bundle id drifted!");
  (void)GCLiveEntry;

  auto *ClangAttachedCall =
      pImpl->getOrInsertBundleTag("clang.arc.attachedcall");
  assert(ClangAttachedCall->second == LLVMContext::OB_clang_arc_attachedcall &&
         "clang.arc.attachedcall operand bundle id drifted!");
  (void)ClangAttachedCall;

  auto *PtrauthEntry = pImpl->getOrInsertBundleTag("ptrauth");
  assert(PtrauthEntry->second == LLVMContext::OB_ptrauth &&
         "ptrauth operand bundle id drifted!");
  (void)PtrauthEntry;

  auto *KCFIEntry = pImpl->getOrInsertBundleTag("kcfi");
  assert(KCFIEntry->second == LLVMContext::OB_kcfi &&
         "kcfi operand bundle id drifted!");
  (void)KCFIEntry;

  auto *ConvergenceCtrlEntry = pImpl->getOrInsertBundleTag("convergencectrl");
  assert(ConvergenceCtrlEntry->second == LLVMContext::OB_convergencectrl &&
         "convergencectrl operand bundle id drifted!");
  (void)ConvergenceCtrlEntry;

  SyncScope::ID SingleThreadSSID =
      pImpl->getOrInsertSyncScopeID("singlethread");
  assert(SingleThreadSSID == SyncScope::SingleThread &&
         "singlethread synchronization scope ID drifted!");
  (void)SingleThreadSSID;

  SyncScope::ID SystemSSID =
      pImpl->getOrInsertSyncScopeID("");
  assert(SystemSSID == SyncScope::System &&
         "system synchronization scope ID drifted!");
  (void)SystemSSID;
}
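
// A minimal sketch of what the constructor above guarantees (illustrative
// client code, not part of this file; Ctx is a client's context): fixed
// metadata kinds are registered in MD_* enum order, so the well-known names
// map directly to their enum values in a fresh context:
//
//   LLVMContext Ctx;
//   assert(Ctx.getMDKindID("dbg") == LLVMContext::MD_dbg);   // ID 0
//   assert(Ctx.getMDKindID("prof") == LLVMContext::MD_prof); // ID 2
//   assert(Ctx.getOperandBundleTagID("deopt") == LLVMContext::OB_deopt);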

LLVMContext::~LLVMContext() { delete pImpl; }

void LLVMContext::addModule(Module *M) {
  pImpl->OwnedModules.insert(M);
}

void LLVMContext::removeModule(Module *M) {
  pImpl->OwnedModules.erase(M);
  pImpl->MachineFunctionNums.erase(M);
}

unsigned LLVMContext::generateMachineFunctionNum(Function &F) {
  Module *M = F.getParent();
  assert(pImpl->OwnedModules.contains(M) && "Unexpected module!");
  return pImpl->MachineFunctionNums[M]++;
}
//===----------------------------------------------------------------------===//
// Recoverable Backend Errors
//===----------------------------------------------------------------------===//

void LLVMContext::setDiagnosticHandlerCallBack(
    DiagnosticHandler::DiagnosticHandlerTy DiagnosticHandler,
    void *DiagnosticContext, bool RespectFilters) {
  pImpl->DiagHandler->DiagHandlerCallback = DiagnosticHandler;
  pImpl->DiagHandler->DiagnosticContext = DiagnosticContext;
  pImpl->RespectDiagnosticFilters = RespectFilters;
}
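
// Usage sketch (illustrative; handleDiag, ClientState, and State are
// hypothetical client code, not LLVM APIs): installing a C-style callback
// routes diagnostics to the client instead of the default printing path in
// diagnose() below:
//
//   static void handleDiag(const DiagnosticInfo &DI, void *Context) {
//     auto *S = static_cast<ClientState *>(Context);
//     // inspect DI.getSeverity(), DI.getKind(), ...
//   }
//   Ctx.setDiagnosticHandlerCallBack(handleDiag, &State);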

void LLVMContext::setDiagnosticHandler(std::unique_ptr<DiagnosticHandler> &&DH,
                                       bool RespectFilters) {
  pImpl->DiagHandler = std::move(DH);
  pImpl->RespectDiagnosticFilters = RespectFilters;
}

void LLVMContext::setDiagnosticsHotnessRequested(bool Requested) {
  pImpl->DiagnosticsHotnessRequested = Requested;
}

bool LLVMContext::getDiagnosticsHotnessRequested() const {
  return pImpl->DiagnosticsHotnessRequested;
}

void LLVMContext::setDiagnosticsHotnessThreshold(
    std::optional<uint64_t> Threshold) {
  pImpl->DiagnosticsHotnessThreshold = Threshold;
}

void LLVMContext::setMisExpectWarningRequested(bool Requested) {
  pImpl->MisExpectWarningRequested = Requested;
}

bool LLVMContext::getMisExpectWarningRequested() const {
  return pImpl->MisExpectWarningRequested;
}

uint64_t LLVMContext::getDiagnosticsHotnessThreshold() const {
  return pImpl->DiagnosticsHotnessThreshold.value_or(UINT64_MAX);
}

void LLVMContext::setDiagnosticsMisExpectTolerance(
    std::optional<uint32_t> Tolerance) {
  pImpl->DiagnosticsMisExpectTolerance = Tolerance;
}

uint32_t LLVMContext::getDiagnosticsMisExpectTolerance() const {
  return pImpl->DiagnosticsMisExpectTolerance.value_or(0);
}

bool LLVMContext::isDiagnosticsHotnessThresholdSetFromPSI() const {
  return !pImpl->DiagnosticsHotnessThreshold.has_value();
}

remarks::RemarkStreamer *LLVMContext::getMainRemarkStreamer() {
  return pImpl->MainRemarkStreamer.get();
}

const remarks::RemarkStreamer *LLVMContext::getMainRemarkStreamer() const {
  return const_cast<LLVMContext *>(this)->getMainRemarkStreamer();
}

void LLVMContext::setMainRemarkStreamer(
    std::unique_ptr<remarks::RemarkStreamer> RemarkStreamer) {
  pImpl->MainRemarkStreamer = std::move(RemarkStreamer);
}

LLVMRemarkStreamer *LLVMContext::getLLVMRemarkStreamer() {
  return pImpl->LLVMRS.get();
}

const LLVMRemarkStreamer *LLVMContext::getLLVMRemarkStreamer() const {
  return const_cast<LLVMContext *>(this)->getLLVMRemarkStreamer();
}

void LLVMContext::setLLVMRemarkStreamer(
    std::unique_ptr<LLVMRemarkStreamer> RemarkStreamer) {
  pImpl->LLVMRS = std::move(RemarkStreamer);
}

DiagnosticHandler::DiagnosticHandlerTy
LLVMContext::getDiagnosticHandlerCallBack() const {
  return pImpl->DiagHandler->DiagHandlerCallback;
}

void *LLVMContext::getDiagnosticContext() const {
  return pImpl->DiagHandler->DiagnosticContext;
}

void LLVMContext::setYieldCallback(YieldCallbackTy Callback, void *OpaqueHandle)
{
  pImpl->YieldCallback = Callback;
  pImpl->YieldOpaqueHandle = OpaqueHandle;
}

void LLVMContext::yield() {
  if (pImpl->YieldCallback)
    pImpl->YieldCallback(this, pImpl->YieldOpaqueHandle);
}
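
// Usage sketch (illustrative; onYield is hypothetical client code): a host
// application can register a callback that LLVM may invoke at safe points via
// yield(), e.g. to drive cooperative multitasking around long compilations:
//
//   static void onYield(LLVMContext *Ctx, void *Handle) {
//     // Do brief bookkeeping; return promptly so compilation can resume.
//   }
//   Ctx.setYieldCallback(onYield, /*OpaqueHandle=*/nullptr);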

void LLVMContext::emitError(const Twine &ErrorStr) {
  diagnose(DiagnosticInfoInlineAsm(ErrorStr));
}

void LLVMContext::emitError(const Instruction *I, const Twine &ErrorStr) {
  assert(I && "Invalid instruction");
  diagnose(DiagnosticInfoInlineAsm(*I, ErrorStr));
}

static bool isDiagnosticEnabled(const DiagnosticInfo &DI) {
  // Optimization remarks are selective. They need to check whether the regexp
  // pattern, passed via one of the -pass-remarks* flags, matches the name of
  // the pass that is emitting the diagnostic. If there is no match, ignore the
  // diagnostic and return.
  //
  // Also, noisy remarks are only enabled if we have hotness information to
  // sort them.
  if (auto *Remark = dyn_cast<DiagnosticInfoOptimizationBase>(&DI))
    return Remark->isEnabled() &&
           (!Remark->isVerbose() || Remark->getHotness());
  return true;
}

const char *
LLVMContext::getDiagnosticMessagePrefix(DiagnosticSeverity Severity) {
  switch (Severity) {
  case DS_Error:
    return "error";
  case DS_Warning:
    return "warning";
  case DS_Remark:
    return "remark";
  case DS_Note:
    return "note";
  }
  llvm_unreachable("Unknown DiagnosticSeverity");
}

void LLVMContext::diagnose(const DiagnosticInfo &DI) {
  if (auto *OptDiagBase = dyn_cast<DiagnosticInfoOptimizationBase>(&DI))
    if (LLVMRemarkStreamer *RS = getLLVMRemarkStreamer())
      RS->emit(*OptDiagBase);

  // If there is a report handler, use it.
  if (pImpl->DiagHandler) {
    if (DI.getSeverity() == DS_Error)
      pImpl->DiagHandler->HasErrors = true;
    if ((!pImpl->RespectDiagnosticFilters || isDiagnosticEnabled(DI)) &&
        pImpl->DiagHandler->handleDiagnostics(DI))
      return;
  }

  if (!isDiagnosticEnabled(DI))
    return;

  // Otherwise, print the message with a prefix based on the severity.
  DiagnosticPrinterRawOStream DP(errs());
  errs() << getDiagnosticMessagePrefix(DI.getSeverity()) << ": ";
  DI.print(DP);
  errs() << "\n";
  if (DI.getSeverity() == DS_Error)
    exit(1);
}

void LLVMContext::emitError(uint64_t LocCookie, const Twine &ErrorStr) {
  diagnose(DiagnosticInfoInlineAsm(LocCookie, ErrorStr));
}
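
// Flow sketch (illustrative; the message text is a made-up example): with no
// custom handler installed, the default handler declines the diagnostic and
// an error falls through to the printing path in diagnose() above:
//
//   Ctx.emitError("invalid operand for inline asm constraint");
//   // prints "error: invalid operand for inline asm constraint" to errs()
//   // and then calls exit(1), since the severity is DS_Error.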
//===----------------------------------------------------------------------===//
// Metadata Kind Uniquing
//===----------------------------------------------------------------------===//

/// Return a unique ID for the specified metadata kind.
unsigned LLVMContext::getMDKindID(StringRef Name) const {
  // If this is new, assign it its ID.
  return pImpl->CustomMDKindNames
      .insert(std::make_pair(Name, pImpl->CustomMDKindNames.size()))
      .first->second;
}
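
// Usage sketch (illustrative; "my.pass.note" is a hypothetical kind name and
// I a hypothetical Instruction *): the returned ID is stable within one
// context, so passes typically look it up once and reuse it when attaching
// metadata:
//
//   unsigned KindID = Ctx.getMDKindID("my.pass.note");
//   I->setMetadata(KindID, MDNode::get(Ctx, {}));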

/// getMDKindNames - Populate client-supplied smallvector using custom
/// metadata name and ID.
void LLVMContext::getMDKindNames(SmallVectorImpl<StringRef> &Names) const {
  Names.resize(pImpl->CustomMDKindNames.size());
  for (StringMap<unsigned>::const_iterator I = pImpl->CustomMDKindNames.begin(),
                                           E = pImpl->CustomMDKindNames.end();
       I != E; ++I)
    Names[I->second] = I->first();
}

void LLVMContext::getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const {
  pImpl->getOperandBundleTags(Tags);
}

StringMapEntry<uint32_t> *
LLVMContext::getOrInsertBundleTag(StringRef TagName) const {
  return pImpl->getOrInsertBundleTag(TagName);
}

uint32_t LLVMContext::getOperandBundleTagID(StringRef Tag) const {
  return pImpl->getOperandBundleTagID(Tag);
}

SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
  return pImpl->getOrInsertSyncScopeID(SSN);
}

void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
  pImpl->getSyncScopeNames(SSNs);
}

std::optional<StringRef> LLVMContext::getSyncScopeName(SyncScope::ID Id) const {
  return pImpl->getSyncScopeName(Id);
}
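
// Usage sketch (illustrative; "agent" is a target-specific scope name used by
// some GPU targets, and RMW is a hypothetical AtomicRMWInst *): named
// synchronization scopes map to stable IDs that atomic instructions carry;
// SyncScope::SingleThread ("singlethread") and SyncScope::System ("") are
// pre-registered by the constructor above:
//
//   SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
//   RMW->setSyncScopeID(AgentSSID);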

void LLVMContext::setGC(const Function &Fn, std::string GCName) {
  pImpl->GCNames[&Fn] = std::move(GCName);
}

const std::string &LLVMContext::getGC(const Function &Fn) {
  return pImpl->GCNames[&Fn];
}

void LLVMContext::deleteGC(const Function &Fn) {
  pImpl->GCNames.erase(&Fn);
}
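
// Sketch of the intended flow (illustrative; F is a hypothetical Function *):
// Function::setGC/getGC forward to the functions above, so the context owns
// each function's GC strategy name:
//
//   F->setGC("statepoint-example"); // stores the name in F's context
//   if (F->hasGC())
//     StringRef Name = F->getGC();  // reads it back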

bool LLVMContext::shouldDiscardValueNames() const {
  return pImpl->DiscardValueNames;
}

bool LLVMContext::isODRUniquingDebugTypes() const { return !!pImpl->DITypeMap; }

void LLVMContext::enableDebugTypeODRUniquing() {
  if (pImpl->DITypeMap)
    return;

  pImpl->DITypeMap.emplace();
}

void LLVMContext::disableDebugTypeODRUniquing() { pImpl->DITypeMap.reset(); }

void LLVMContext::setDiscardValueNames(bool Discard) {
  pImpl->DiscardValueNames = Discard;
}

OptPassGate &LLVMContext::getOptPassGate() const {
  return pImpl->getOptPassGate();
}

void LLVMContext::setOptPassGate(OptPassGate &OPG) {
  pImpl->setOptPassGate(OPG);
}

const DiagnosticHandler *LLVMContext::getDiagHandlerPtr() const {
  return pImpl->DiagHandler.get();
}

std::unique_ptr<DiagnosticHandler> LLVMContext::getDiagnosticHandler() {
  return std::move(pImpl->DiagHandler);
}
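
// Ownership sketch (illustrative; MyHandler is a hypothetical subclass of
// DiagnosticHandler): getDiagnosticHandler() moves the handler out of the
// context, so a client swapping handlers temporarily should reinstall the
// old one afterwards:
//
//   std::unique_ptr<DiagnosticHandler> Old = Ctx.getDiagnosticHandler();
//   Ctx.setDiagnosticHandler(std::make_unique<MyHandler>());
//   // ... compile with MyHandler installed ...
//   Ctx.setDiagnosticHandler(std::move(Old));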

StringRef LLVMContext::getDefaultTargetCPU() {
  return pImpl->DefaultTargetCPU;
}

void LLVMContext::setDefaultTargetCPU(StringRef CPU) {
  pImpl->DefaultTargetCPU = CPU;
}

StringRef LLVMContext::getDefaultTargetFeatures() {
  return pImpl->DefaultTargetFeatures;
}

void LLVMContext::setDefaultTargetFeatures(StringRef Features) {
  pImpl->DefaultTargetFeatures = Features;
}