llvm-project/clang/lib/AST/StmtProfile.cpp

//===---- StmtProfile.cpp - Profile implementation for Stmt ASTs ----------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements the Stmt::Profile method, which builds a unique bit
// representation that identifies a statement/expression.
//
//===----------------------------------------------------------------------===//
#include "clang/AST/ASTContext.h"
#include "clang/AST/DeclCXX.h"
#include "clang/AST/DeclObjC.h"
#include "clang/AST/DeclTemplate.h"
#include "clang/AST/Expr.h"
#include "clang/AST/ExprCXX.h"
#include "clang/AST/ExprObjC.h"
#include "clang/AST/ExprOpenMP.h"
#include "clang/AST/ODRHash.h"
#include "clang/AST/OpenMPClause.h"
#include "clang/AST/StmtVisitor.h"
#include "llvm/ADT/FoldingSet.h"
using namespace clang;
namespace {
class StmtProfiler : public ConstStmtVisitor<StmtProfiler> {
protected:
llvm::FoldingSetNodeID &ID;
bool Canonical;
bool ProfileLambdaExpr;
public:
StmtProfiler(llvm::FoldingSetNodeID &ID, bool Canonical,
bool ProfileLambdaExpr)
: ID(ID), Canonical(Canonical), ProfileLambdaExpr(ProfileLambdaExpr) {}
virtual ~StmtProfiler() {}
void VisitStmt(const Stmt *S);
void VisitStmtNoChildren(const Stmt *S) {
HandleStmtClass(S->getStmtClass());
}
virtual void HandleStmtClass(Stmt::StmtClass SC) = 0;
#define STMT(Node, Base) void Visit##Node(const Node *S);
#include "clang/AST/StmtNodes.inc"
/// Visit a declaration that is referenced within an expression
/// or statement.
virtual void VisitDecl(const Decl *D) = 0;
/// Visit a type that is referenced within an expression or
/// statement.
virtual void VisitType(QualType T) = 0;
/// Visit a name that occurs within an expression or statement.
virtual void VisitName(DeclarationName Name, bool TreatAsDecl = false) = 0;
/// Visit identifiers that are not in Decl's or Type's.
virtual void VisitIdentifierInfo(const IdentifierInfo *II) = 0;
/// Visit a nested-name-specifier that occurs within an expression
/// or statement.
virtual void VisitNestedNameSpecifier(NestedNameSpecifier *NNS) = 0;
/// Visit a template name that occurs within an expression or
/// statement.
virtual void VisitTemplateName(TemplateName Name) = 0;
/// Visit template arguments that occur within an expression or
/// statement.
void VisitTemplateArguments(const TemplateArgumentLoc *Args,
unsigned NumArgs);
/// Visit a single template argument.
void VisitTemplateArgument(const TemplateArgument &Arg);
};
class StmtProfilerWithPointers : public StmtProfiler {
const ASTContext &Context;
public:
StmtProfilerWithPointers(llvm::FoldingSetNodeID &ID,
const ASTContext &Context, bool Canonical,
bool ProfileLambdaExpr)
: StmtProfiler(ID, Canonical, ProfileLambdaExpr), Context(Context) {}
private:
void HandleStmtClass(Stmt::StmtClass SC) override {
ID.AddInteger(SC);
}
void VisitDecl(const Decl *D) override {
ID.AddInteger(D ? D->getKind() : 0);
if (Canonical && D) {
if (const NonTypeTemplateParmDecl *NTTP =
dyn_cast<NonTypeTemplateParmDecl>(D)) {
ID.AddInteger(NTTP->getDepth());
ID.AddInteger(NTTP->getIndex());
ID.AddBoolean(NTTP->isParameterPack());
// C++20 [temp.over.link]p6:
// Two template-parameters are equivalent under the following
// conditions: [...] if they declare non-type template parameters,
// they have equivalent types ignoring the use of type-constraints
// for placeholder types
//
// TODO: Why do we need to include the type in the profile? It's not
// part of the mangling.
VisitType(Context.getUnconstrainedType(NTTP->getType()));
return;
}
if (const ParmVarDecl *Parm = dyn_cast<ParmVarDecl>(D)) {
// The Itanium C++ ABI uses the type, scope depth, and scope
// index of a parameter when mangling expressions that involve
// function parameters, so we will use the parameter's type for
// establishing function parameter identity. That way, our
// definition of "equivalent" (per C++ [temp.over.link]) is at
// least as strong as the definition of "equivalent" used for
// name mangling.
//
// TODO: The Itanium C++ ABI only uses the top-level cv-qualifiers,
// not the entirety of the type.
VisitType(Parm->getType());
ID.AddInteger(Parm->getFunctionScopeDepth());
ID.AddInteger(Parm->getFunctionScopeIndex());
return;
}
if (const TemplateTypeParmDecl *TTP =
dyn_cast<TemplateTypeParmDecl>(D)) {
ID.AddInteger(TTP->getDepth());
ID.AddInteger(TTP->getIndex());
ID.AddBoolean(TTP->isParameterPack());
return;
}
if (const TemplateTemplateParmDecl *TTP =
dyn_cast<TemplateTemplateParmDecl>(D)) {
ID.AddInteger(TTP->getDepth());
ID.AddInteger(TTP->getIndex());
ID.AddBoolean(TTP->isParameterPack());
return;
}
}
ID.AddPointer(D ? D->getCanonicalDecl() : nullptr);
}
void VisitType(QualType T) override {
if (Canonical && !T.isNull())
T = Context.getCanonicalType(T);
ID.AddPointer(T.getAsOpaquePtr());
}
void VisitName(DeclarationName Name, bool /*TreatAsDecl*/) override {
ID.AddPointer(Name.getAsOpaquePtr());
}
void VisitIdentifierInfo(const IdentifierInfo *II) override {
ID.AddPointer(II);
}
void VisitNestedNameSpecifier(NestedNameSpecifier *NNS) override {
if (Canonical)
NNS = Context.getCanonicalNestedNameSpecifier(NNS);
ID.AddPointer(NNS);
}
void VisitTemplateName(TemplateName Name) override {
if (Canonical)
Name = Context.getCanonicalTemplateName(Name);
Name.Profile(ID);
}
};
class StmtProfilerWithoutPointers : public StmtProfiler {
ODRHash &Hash;
public:
StmtProfilerWithoutPointers(llvm::FoldingSetNodeID &ID, ODRHash &Hash)
: StmtProfiler(ID, /*Canonical=*/false, /*ProfileLambdaExpr=*/false),
Hash(Hash) {}
private:
void HandleStmtClass(Stmt::StmtClass SC) override {
if (SC == Stmt::UnresolvedLookupExprClass) {
// Pretend that the name looked up is a Decl due to how templates
// handle some Decl lookups.
ID.AddInteger(Stmt::DeclRefExprClass);
} else {
ID.AddInteger(SC);
}
}
void VisitType(QualType T) override {
Hash.AddQualType(T);
}
void VisitName(DeclarationName Name, bool TreatAsDecl) override {
if (TreatAsDecl) {
// A Decl can be null, so each Decl is preceded by a boolean to
// store its nullness. Add a boolean here to match.
ID.AddBoolean(true);
}
Hash.AddDeclarationName(Name, TreatAsDecl);
}
void VisitIdentifierInfo(const IdentifierInfo *II) override {
ID.AddBoolean(II);
if (II) {
Hash.AddIdentifierInfo(II);
}
}
void VisitDecl(const Decl *D) override {
ID.AddBoolean(D);
if (D) {
Hash.AddDecl(D);
}
}
void VisitTemplateName(TemplateName Name) override {
Hash.AddTemplateName(Name);
}
void VisitNestedNameSpecifier(NestedNameSpecifier *NNS) override {
ID.AddBoolean(NNS);
if (NNS) {
Hash.AddNestedNameSpecifier(NNS);
}
}
};
}
void StmtProfiler::VisitStmt(const Stmt *S) {
assert(S && "Requires non-null Stmt pointer");
VisitStmtNoChildren(S);
for (const Stmt *SubStmt : S->children()) {
if (SubStmt)
Visit(SubStmt);
else
ID.AddInteger(0);
}
}
void StmtProfiler::VisitDeclStmt(const DeclStmt *S) {
VisitStmt(S);
for (const auto *D : S->decls())
VisitDecl(D);
}
void StmtProfiler::VisitNullStmt(const NullStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCompoundStmt(const CompoundStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCaseStmt(const CaseStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitDefaultStmt(const DefaultStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitLabelStmt(const LabelStmt *S) {
VisitStmt(S);
VisitDecl(S->getDecl());
}
void StmtProfiler::VisitAttributedStmt(const AttributedStmt *S) {
VisitStmt(S);
// TODO: maybe visit attributes?
}
void StmtProfiler::VisitIfStmt(const IfStmt *S) {
VisitStmt(S);
VisitDecl(S->getConditionVariable());
}
void StmtProfiler::VisitSwitchStmt(const SwitchStmt *S) {
VisitStmt(S);
VisitDecl(S->getConditionVariable());
}
void StmtProfiler::VisitWhileStmt(const WhileStmt *S) {
VisitStmt(S);
VisitDecl(S->getConditionVariable());
}
void StmtProfiler::VisitDoStmt(const DoStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitForStmt(const ForStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitGotoStmt(const GotoStmt *S) {
VisitStmt(S);
VisitDecl(S->getLabel());
}
void StmtProfiler::VisitIndirectGotoStmt(const IndirectGotoStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitContinueStmt(const ContinueStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitBreakStmt(const BreakStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitReturnStmt(const ReturnStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitGCCAsmStmt(const GCCAsmStmt *S) {
VisitStmt(S);
ID.AddBoolean(S->isVolatile());
ID.AddBoolean(S->isSimple());
VisitExpr(S->getAsmStringExpr());
ID.AddInteger(S->getNumOutputs());
for (unsigned I = 0, N = S->getNumOutputs(); I != N; ++I) {
ID.AddString(S->getOutputName(I));
VisitExpr(S->getOutputConstraintExpr(I));
}
ID.AddInteger(S->getNumInputs());
for (unsigned I = 0, N = S->getNumInputs(); I != N; ++I) {
ID.AddString(S->getInputName(I));
VisitExpr(S->getInputConstraintExpr(I));
}
ID.AddInteger(S->getNumClobbers());
for (unsigned I = 0, N = S->getNumClobbers(); I != N; ++I)
VisitExpr(S->getClobberExpr(I));
ID.AddInteger(S->getNumLabels());
for (auto *L : S->labels())
VisitDecl(L->getLabel());
}
void StmtProfiler::VisitMSAsmStmt(const MSAsmStmt *S) {
// FIXME: Implement MS style inline asm statement profiler.
VisitStmt(S);
}
void StmtProfiler::VisitCXXCatchStmt(const CXXCatchStmt *S) {
VisitStmt(S);
VisitType(S->getCaughtType());
}
void StmtProfiler::VisitCXXTryStmt(const CXXTryStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCXXForRangeStmt(const CXXForRangeStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitMSDependentExistsStmt(const MSDependentExistsStmt *S) {
VisitStmt(S);
ID.AddBoolean(S->isIfExists());
VisitNestedNameSpecifier(S->getQualifierLoc().getNestedNameSpecifier());
VisitName(S->getNameInfo().getName());
}
void StmtProfiler::VisitSEHTryStmt(const SEHTryStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitSEHFinallyStmt(const SEHFinallyStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitSEHExceptStmt(const SEHExceptStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitSEHLeaveStmt(const SEHLeaveStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCapturedStmt(const CapturedStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitSYCLKernelCallStmt(const SYCLKernelCallStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitObjCForCollectionStmt(const ObjCForCollectionStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitObjCAtCatchStmt(const ObjCAtCatchStmt *S) {
VisitStmt(S);
ID.AddBoolean(S->hasEllipsis());
if (S->getCatchParamDecl())
VisitType(S->getCatchParamDecl()->getType());
}
void StmtProfiler::VisitObjCAtFinallyStmt(const ObjCAtFinallyStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitObjCAtTryStmt(const ObjCAtTryStmt *S) {
VisitStmt(S);
}
void
StmtProfiler::VisitObjCAtSynchronizedStmt(const ObjCAtSynchronizedStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitObjCAtThrowStmt(const ObjCAtThrowStmt *S) {
VisitStmt(S);
}
void
StmtProfiler::VisitObjCAutoreleasePoolStmt(const ObjCAutoreleasePoolStmt *S) {
VisitStmt(S);
}
namespace {
class OMPClauseProfiler : public ConstOMPClauseVisitor<OMPClauseProfiler> {
StmtProfiler *Profiler;
/// Process clauses with list of variables.
template <typename T>
void VisitOMPClauseList(T *Node);
public:
OMPClauseProfiler(StmtProfiler *P) : Profiler(P) { }
#define GEN_CLANG_CLAUSE_CLASS
#define CLAUSE_CLASS(Enum, Str, Class) void Visit##Class(const Class *C);
#include "llvm/Frontend/OpenMP/OMP.inc"
void VistOMPClauseWithPreInit(const OMPClauseWithPreInit *C);
void VistOMPClauseWithPostUpdate(const OMPClauseWithPostUpdate *C);
};
void OMPClauseProfiler::VistOMPClauseWithPreInit(
const OMPClauseWithPreInit *C) {
if (auto *S = C->getPreInitStmt())
Profiler->VisitStmt(S);
}
void OMPClauseProfiler::VistOMPClauseWithPostUpdate(
const OMPClauseWithPostUpdate *C) {
VistOMPClauseWithPreInit(C);
if (auto *E = C->getPostUpdateExpr())
Profiler->VisitStmt(E);
}
void OMPClauseProfiler::VisitOMPIfClause(const OMPIfClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getCondition())
Profiler->VisitStmt(C->getCondition());
}
void OMPClauseProfiler::VisitOMPFinalClause(const OMPFinalClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getCondition())
Profiler->VisitStmt(C->getCondition());
}
void OMPClauseProfiler::VisitOMPNumThreadsClause(const OMPNumThreadsClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getNumThreads())
Profiler->VisitStmt(C->getNumThreads());
}
void OMPClauseProfiler::VisitOMPAlignClause(const OMPAlignClause *C) {
if (C->getAlignment())
Profiler->VisitStmt(C->getAlignment());
}
void OMPClauseProfiler::VisitOMPSafelenClause(const OMPSafelenClause *C) {
if (C->getSafelen())
Profiler->VisitStmt(C->getSafelen());
}
void OMPClauseProfiler::VisitOMPSimdlenClause(const OMPSimdlenClause *C) {
if (C->getSimdlen())
Profiler->VisitStmt(C->getSimdlen());
}
void OMPClauseProfiler::VisitOMPSizesClause(const OMPSizesClause *C) {
for (auto *E : C->getSizesRefs())
if (E)
Profiler->VisitExpr(E);
}
void OMPClauseProfiler::VisitOMPPermutationClause(
const OMPPermutationClause *C) {
for (Expr *E : C->getArgsRefs())
if (E)
Profiler->VisitExpr(E);
}
void OMPClauseProfiler::VisitOMPFullClause(const OMPFullClause *C) {}
void OMPClauseProfiler::VisitOMPPartialClause(const OMPPartialClause *C) {
if (const Expr *Factor = C->getFactor())
Profiler->VisitExpr(Factor);
}
void OMPClauseProfiler::VisitOMPAllocatorClause(const OMPAllocatorClause *C) {
if (C->getAllocator())
Profiler->VisitStmt(C->getAllocator());
}
void OMPClauseProfiler::VisitOMPCollapseClause(const OMPCollapseClause *C) {
if (C->getNumForLoops())
Profiler->VisitStmt(C->getNumForLoops());
}
void OMPClauseProfiler::VisitOMPDetachClause(const OMPDetachClause *C) {
if (Expr *Evt = C->getEventHandler())
Profiler->VisitStmt(Evt);
}
void OMPClauseProfiler::VisitOMPNovariantsClause(const OMPNovariantsClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getCondition())
Profiler->VisitStmt(C->getCondition());
}
void OMPClauseProfiler::VisitOMPNocontextClause(const OMPNocontextClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getCondition())
Profiler->VisitStmt(C->getCondition());
}
void OMPClauseProfiler::VisitOMPDefaultClause(const OMPDefaultClause *C) { }
void OMPClauseProfiler::VisitOMPProcBindClause(const OMPProcBindClause *C) { }
void OMPClauseProfiler::VisitOMPUnifiedAddressClause(
const OMPUnifiedAddressClause *C) {}
void OMPClauseProfiler::VisitOMPUnifiedSharedMemoryClause(
const OMPUnifiedSharedMemoryClause *C) {}
void OMPClauseProfiler::VisitOMPReverseOffloadClause(
const OMPReverseOffloadClause *C) {}
void OMPClauseProfiler::VisitOMPDynamicAllocatorsClause(
const OMPDynamicAllocatorsClause *C) {}
void OMPClauseProfiler::VisitOMPAtomicDefaultMemOrderClause(
const OMPAtomicDefaultMemOrderClause *C) {}
void OMPClauseProfiler::VisitOMPSelfMapsClause(const OMPSelfMapsClause *C) {}
void OMPClauseProfiler::VisitOMPAtClause(const OMPAtClause *C) {}
void OMPClauseProfiler::VisitOMPSeverityClause(const OMPSeverityClause *C) {}
void OMPClauseProfiler::VisitOMPMessageClause(const OMPMessageClause *C) {
if (C->getMessageString())
Profiler->VisitStmt(C->getMessageString());
}
void OMPClauseProfiler::VisitOMPScheduleClause(const OMPScheduleClause *C) {
VistOMPClauseWithPreInit(C);
if (auto *S = C->getChunkSize())
Profiler->VisitStmt(S);
}
void OMPClauseProfiler::VisitOMPOrderedClause(const OMPOrderedClause *C) {
if (auto *Num = C->getNumForLoops())
Profiler->VisitStmt(Num);
}
void OMPClauseProfiler::VisitOMPNowaitClause(const OMPNowaitClause *) {}
void OMPClauseProfiler::VisitOMPUntiedClause(const OMPUntiedClause *) {}
void OMPClauseProfiler::VisitOMPMergeableClause(const OMPMergeableClause *) {}
void OMPClauseProfiler::VisitOMPReadClause(const OMPReadClause *) {}
void OMPClauseProfiler::VisitOMPWriteClause(const OMPWriteClause *) {}
void OMPClauseProfiler::VisitOMPUpdateClause(const OMPUpdateClause *) {}
void OMPClauseProfiler::VisitOMPCaptureClause(const OMPCaptureClause *) {}
void OMPClauseProfiler::VisitOMPCompareClause(const OMPCompareClause *) {}
void OMPClauseProfiler::VisitOMPFailClause(const OMPFailClause *) {}
void OMPClauseProfiler::VisitOMPAbsentClause(const OMPAbsentClause *) {}
void OMPClauseProfiler::VisitOMPHoldsClause(const OMPHoldsClause *) {}
void OMPClauseProfiler::VisitOMPContainsClause(const OMPContainsClause *) {}
void OMPClauseProfiler::VisitOMPNoOpenMPClause(const OMPNoOpenMPClause *) {}
void OMPClauseProfiler::VisitOMPNoOpenMPRoutinesClause(
const OMPNoOpenMPRoutinesClause *) {}
void OMPClauseProfiler::VisitOMPNoOpenMPConstructsClause(
const OMPNoOpenMPConstructsClause *) {}
void OMPClauseProfiler::VisitOMPNoParallelismClause(
const OMPNoParallelismClause *) {}
void OMPClauseProfiler::VisitOMPSeqCstClause(const OMPSeqCstClause *) {}
void OMPClauseProfiler::VisitOMPAcqRelClause(const OMPAcqRelClause *) {}
void OMPClauseProfiler::VisitOMPAcquireClause(const OMPAcquireClause *) {}
void OMPClauseProfiler::VisitOMPReleaseClause(const OMPReleaseClause *) {}
void OMPClauseProfiler::VisitOMPRelaxedClause(const OMPRelaxedClause *) {}
void OMPClauseProfiler::VisitOMPWeakClause(const OMPWeakClause *) {}
void OMPClauseProfiler::VisitOMPThreadsClause(const OMPThreadsClause *) {}
void OMPClauseProfiler::VisitOMPSIMDClause(const OMPSIMDClause *) {}
void OMPClauseProfiler::VisitOMPNogroupClause(const OMPNogroupClause *) {}
void OMPClauseProfiler::VisitOMPInitClause(const OMPInitClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPUseClause(const OMPUseClause *C) {
if (C->getInteropVar())
Profiler->VisitStmt(C->getInteropVar());
}
void OMPClauseProfiler::VisitOMPDestroyClause(const OMPDestroyClause *C) {
if (C->getInteropVar())
Profiler->VisitStmt(C->getInteropVar());
}
void OMPClauseProfiler::VisitOMPFilterClause(const OMPFilterClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getThreadID())
Profiler->VisitStmt(C->getThreadID());
}
template<typename T>
void OMPClauseProfiler::VisitOMPClauseList(T *Node) {
for (auto *E : Node->varlist()) {
if (E)
Profiler->VisitStmt(E);
}
}
void OMPClauseProfiler::VisitOMPPrivateClause(const OMPPrivateClause *C) {
VisitOMPClauseList(C);
for (auto *E : C->private_copies()) {
if (E)
Profiler->VisitStmt(E);
}
}
void
OMPClauseProfiler::VisitOMPFirstprivateClause(const OMPFirstprivateClause *C) {
VisitOMPClauseList(C);
VistOMPClauseWithPreInit(C);
for (auto *E : C->private_copies()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->inits()) {
if (E)
Profiler->VisitStmt(E);
}
}
void
OMPClauseProfiler::VisitOMPLastprivateClause(const OMPLastprivateClause *C) {
VisitOMPClauseList(C);
VistOMPClauseWithPostUpdate(C);
for (auto *E : C->source_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->destination_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->assignment_ops()) {
if (E)
Profiler->VisitStmt(E);
}
}
void OMPClauseProfiler::VisitOMPSharedClause(const OMPSharedClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPReductionClause(
const OMPReductionClause *C) {
Profiler->VisitNestedNameSpecifier(
C->getQualifierLoc().getNestedNameSpecifier());
Profiler->VisitName(C->getNameInfo().getName());
VisitOMPClauseList(C);
VistOMPClauseWithPostUpdate(C);
for (auto *E : C->privates()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->lhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->rhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->reduction_ops()) {
if (E)
Profiler->VisitStmt(E);
}
if (C->getModifier() == clang::OMPC_REDUCTION_inscan) {
for (auto *E : C->copy_ops()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->copy_array_temps()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->copy_array_elems()) {
if (E)
Profiler->VisitStmt(E);
}
}
}
void OMPClauseProfiler::VisitOMPTaskReductionClause(
const OMPTaskReductionClause *C) {
Profiler->VisitNestedNameSpecifier(
C->getQualifierLoc().getNestedNameSpecifier());
Profiler->VisitName(C->getNameInfo().getName());
VisitOMPClauseList(C);
VistOMPClauseWithPostUpdate(C);
for (auto *E : C->privates()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->lhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->rhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->reduction_ops()) {
if (E)
Profiler->VisitStmt(E);
}
}
void OMPClauseProfiler::VisitOMPInReductionClause(
const OMPInReductionClause *C) {
Profiler->VisitNestedNameSpecifier(
C->getQualifierLoc().getNestedNameSpecifier());
Profiler->VisitName(C->getNameInfo().getName());
VisitOMPClauseList(C);
VistOMPClauseWithPostUpdate(C);
for (auto *E : C->privates()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->lhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->rhs_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->reduction_ops()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->taskgroup_descriptors()) {
if (E)
Profiler->VisitStmt(E);
}
}
void OMPClauseProfiler::VisitOMPLinearClause(const OMPLinearClause *C) {
VisitOMPClauseList(C);
VistOMPClauseWithPostUpdate(C);
for (auto *E : C->privates()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->inits()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->updates()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->finals()) {
if (E)
Profiler->VisitStmt(E);
}
if (C->getStep())
Profiler->VisitStmt(C->getStep());
if (C->getCalcStep())
Profiler->VisitStmt(C->getCalcStep());
}
void OMPClauseProfiler::VisitOMPAlignedClause(const OMPAlignedClause *C) {
VisitOMPClauseList(C);
if (C->getAlignment())
Profiler->VisitStmt(C->getAlignment());
}
void OMPClauseProfiler::VisitOMPCopyinClause(const OMPCopyinClause *C) {
VisitOMPClauseList(C);
for (auto *E : C->source_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->destination_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->assignment_ops()) {
if (E)
Profiler->VisitStmt(E);
}
}
void
OMPClauseProfiler::VisitOMPCopyprivateClause(const OMPCopyprivateClause *C) {
VisitOMPClauseList(C);
for (auto *E : C->source_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->destination_exprs()) {
if (E)
Profiler->VisitStmt(E);
}
for (auto *E : C->assignment_ops()) {
if (E)
Profiler->VisitStmt(E);
}
}
void OMPClauseProfiler::VisitOMPFlushClause(const OMPFlushClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPDepobjClause(const OMPDepobjClause *C) {
if (const Expr *Depobj = C->getDepobj())
Profiler->VisitStmt(Depobj);
}
void OMPClauseProfiler::VisitOMPDependClause(const OMPDependClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPDeviceClause(const OMPDeviceClause *C) {
if (C->getDevice())
Profiler->VisitStmt(C->getDevice());
}
void OMPClauseProfiler::VisitOMPMapClause(const OMPMapClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPAllocateClause(const OMPAllocateClause *C) {
if (Expr *Allocator = C->getAllocator())
Profiler->VisitStmt(Allocator);
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPNumTeamsClause(const OMPNumTeamsClause *C) {
VisitOMPClauseList(C);
VistOMPClauseWithPreInit(C);
}
void OMPClauseProfiler::VisitOMPThreadLimitClause(
const OMPThreadLimitClause *C) {
VisitOMPClauseList(C);
VistOMPClauseWithPreInit(C);
}
void OMPClauseProfiler::VisitOMPPriorityClause(const OMPPriorityClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getPriority())
Profiler->VisitStmt(C->getPriority());
}
void OMPClauseProfiler::VisitOMPGrainsizeClause(const OMPGrainsizeClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getGrainsize())
Profiler->VisitStmt(C->getGrainsize());
}
void OMPClauseProfiler::VisitOMPNumTasksClause(const OMPNumTasksClause *C) {
VistOMPClauseWithPreInit(C);
if (C->getNumTasks())
Profiler->VisitStmt(C->getNumTasks());
}
void OMPClauseProfiler::VisitOMPHintClause(const OMPHintClause *C) {
if (C->getHint())
Profiler->VisitStmt(C->getHint());
}
void OMPClauseProfiler::VisitOMPToClause(const OMPToClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPFromClause(const OMPFromClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPUseDevicePtrClause(
const OMPUseDevicePtrClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPUseDeviceAddrClause(
const OMPUseDeviceAddrClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPIsDevicePtrClause(
const OMPIsDevicePtrClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPHasDeviceAddrClause(
const OMPHasDeviceAddrClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPNontemporalClause(
const OMPNontemporalClause *C) {
VisitOMPClauseList(C);
for (auto *E : C->private_refs())
Profiler->VisitStmt(E);
}
void OMPClauseProfiler::VisitOMPInclusiveClause(const OMPInclusiveClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPExclusiveClause(const OMPExclusiveClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPUsesAllocatorsClause(
const OMPUsesAllocatorsClause *C) {
for (unsigned I = 0, E = C->getNumberOfAllocators(); I < E; ++I) {
OMPUsesAllocatorsClause::Data D = C->getAllocatorData(I);
Profiler->VisitStmt(D.Allocator);
if (D.AllocatorTraits)
Profiler->VisitStmt(D.AllocatorTraits);
}
}
void OMPClauseProfiler::VisitOMPAffinityClause(const OMPAffinityClause *C) {
if (const Expr *Modifier = C->getModifier())
Profiler->VisitStmt(Modifier);
for (const Expr *E : C->varlist())
Profiler->VisitStmt(E);
}
void OMPClauseProfiler::VisitOMPOrderClause(const OMPOrderClause *C) {}
void OMPClauseProfiler::VisitOMPBindClause(const OMPBindClause *C) {}
void OMPClauseProfiler::VisitOMPXDynCGroupMemClause(
const OMPXDynCGroupMemClause *C) {
VistOMPClauseWithPreInit(C);
if (Expr *Size = C->getSize())
Profiler->VisitStmt(Size);
}
void OMPClauseProfiler::VisitOMPDoacrossClause(const OMPDoacrossClause *C) {
VisitOMPClauseList(C);
}
void OMPClauseProfiler::VisitOMPXAttributeClause(const OMPXAttributeClause *C) {
}
void OMPClauseProfiler::VisitOMPXBareClause(const OMPXBareClause *C) {}
} // namespace
void
StmtProfiler::VisitOMPExecutableDirective(const OMPExecutableDirective *S) {
VisitStmt(S);
OMPClauseProfiler P(this);
  for (const OMPClause *C : S->clauses())
    if (C)
      P.Visit(C);
}
void StmtProfiler::VisitOMPCanonicalLoop(const OMPCanonicalLoop *L) {
VisitStmt(L);
}
void StmtProfiler::VisitOMPLoopBasedDirective(const OMPLoopBasedDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPLoopDirective(const OMPLoopDirective *S) {
VisitOMPLoopBasedDirective(S);
}
void StmtProfiler::VisitOMPMetaDirective(const OMPMetaDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPParallelDirective(const OMPParallelDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPSimdDirective(const OMPSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPLoopTransformationDirective(
const OMPLoopTransformationDirective *S) {
VisitOMPLoopBasedDirective(S);
}
void StmtProfiler::VisitOMPTileDirective(const OMPTileDirective *S) {
VisitOMPLoopTransformationDirective(S);
}
void StmtProfiler::VisitOMPStripeDirective(const OMPStripeDirective *S) {
VisitOMPLoopTransformationDirective(S);
}
void StmtProfiler::VisitOMPUnrollDirective(const OMPUnrollDirective *S) {
VisitOMPLoopTransformationDirective(S);
}
void StmtProfiler::VisitOMPReverseDirective(const OMPReverseDirective *S) {
VisitOMPLoopTransformationDirective(S);
}
void StmtProfiler::VisitOMPInterchangeDirective(
const OMPInterchangeDirective *S) {
VisitOMPLoopTransformationDirective(S);
}
void StmtProfiler::VisitOMPForDirective(const OMPForDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPForSimdDirective(const OMPForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPSectionsDirective(const OMPSectionsDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPSectionDirective(const OMPSectionDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPScopeDirective(const OMPScopeDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPSingleDirective(const OMPSingleDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPMasterDirective(const OMPMasterDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPCriticalDirective(const OMPCriticalDirective *S) {
VisitOMPExecutableDirective(S);
VisitName(S->getDirectiveName().getName());
}
void
StmtProfiler::VisitOMPParallelForDirective(const OMPParallelForDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelForSimdDirective(
const OMPParallelForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelMasterDirective(
const OMPParallelMasterDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPParallelMaskedDirective(
const OMPParallelMaskedDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPParallelSectionsDirective(
const OMPParallelSectionsDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTaskDirective(const OMPTaskDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTaskyieldDirective(const OMPTaskyieldDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPBarrierDirective(const OMPBarrierDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTaskwaitDirective(const OMPTaskwaitDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPAssumeDirective(const OMPAssumeDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPErrorDirective(const OMPErrorDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTaskgroupDirective(const OMPTaskgroupDirective *S) {
VisitOMPExecutableDirective(S);
if (const Expr *E = S->getReductionRef())
VisitStmt(E);
}
void StmtProfiler::VisitOMPFlushDirective(const OMPFlushDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPDepobjDirective(const OMPDepobjDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPScanDirective(const OMPScanDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPOrderedDirective(const OMPOrderedDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPAtomicDirective(const OMPAtomicDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetDirective(const OMPTargetDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetDataDirective(const OMPTargetDataDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetEnterDataDirective(
const OMPTargetEnterDataDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetExitDataDirective(
const OMPTargetExitDataDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetParallelDirective(
const OMPTargetParallelDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetParallelForDirective(
const OMPTargetParallelForDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTeamsDirective(const OMPTeamsDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPCancellationPointDirective(
const OMPCancellationPointDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPCancelDirective(const OMPCancelDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTaskLoopDirective(const OMPTaskLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTaskLoopSimdDirective(
const OMPTaskLoopSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPMasterTaskLoopDirective(
const OMPMasterTaskLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPMaskedTaskLoopDirective(
const OMPMaskedTaskLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPMasterTaskLoopSimdDirective(
const OMPMasterTaskLoopSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPMaskedTaskLoopSimdDirective(
const OMPMaskedTaskLoopSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelMasterTaskLoopDirective(
const OMPParallelMasterTaskLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelMaskedTaskLoopDirective(
const OMPParallelMaskedTaskLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelMasterTaskLoopSimdDirective(
const OMPParallelMasterTaskLoopSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelMaskedTaskLoopSimdDirective(
const OMPParallelMaskedTaskLoopSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPDistributeDirective(
const OMPDistributeDirective *S) {
VisitOMPLoopDirective(S);
}
void OMPClauseProfiler::VisitOMPDistScheduleClause(
const OMPDistScheduleClause *C) {
VistOMPClauseWithPreInit(C);
if (auto *S = C->getChunkSize())
Profiler->VisitStmt(S);
}
void OMPClauseProfiler::VisitOMPDefaultmapClause(const OMPDefaultmapClause *) {}
void StmtProfiler::VisitOMPTargetUpdateDirective(
const OMPTargetUpdateDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPDistributeParallelForDirective(
const OMPDistributeParallelForDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPDistributeParallelForSimdDirective(
const OMPDistributeParallelForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPDistributeSimdDirective(
const OMPDistributeSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetParallelForSimdDirective(
const OMPTargetParallelForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetSimdDirective(
const OMPTargetSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTeamsDistributeDirective(
const OMPTeamsDistributeDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTeamsDistributeSimdDirective(
const OMPTeamsDistributeSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTeamsDistributeParallelForSimdDirective(
const OMPTeamsDistributeParallelForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTeamsDistributeParallelForDirective(
const OMPTeamsDistributeParallelForDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsDirective(
const OMPTargetTeamsDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsDistributeDirective(
const OMPTargetTeamsDistributeDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsDistributeParallelForDirective(
const OMPTargetTeamsDistributeParallelForDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsDistributeParallelForSimdDirective(
const OMPTargetTeamsDistributeParallelForSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsDistributeSimdDirective(
const OMPTargetTeamsDistributeSimdDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPInteropDirective(const OMPInteropDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPDispatchDirective(const OMPDispatchDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPMaskedDirective(const OMPMaskedDirective *S) {
VisitOMPExecutableDirective(S);
}
void StmtProfiler::VisitOMPGenericLoopDirective(
const OMPGenericLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTeamsGenericLoopDirective(
const OMPTeamsGenericLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetTeamsGenericLoopDirective(
const OMPTargetTeamsGenericLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPParallelGenericLoopDirective(
const OMPParallelGenericLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitOMPTargetParallelGenericLoopDirective(
const OMPTargetParallelGenericLoopDirective *S) {
VisitOMPLoopDirective(S);
}
void StmtProfiler::VisitExpr(const Expr *S) {
VisitStmt(S);
}
void StmtProfiler::VisitConstantExpr(const ConstantExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitDeclRefExpr(const DeclRefExpr *S) {
VisitExpr(S);
if (!Canonical)
VisitNestedNameSpecifier(S->getQualifier());
VisitDecl(S->getDecl());
if (!Canonical) {
ID.AddBoolean(S->hasExplicitTemplateArgs());
if (S->hasExplicitTemplateArgs())
VisitTemplateArguments(S->getTemplateArgs(), S->getNumTemplateArgs());
}
}
void StmtProfiler::VisitSYCLUniqueStableNameExpr(
const SYCLUniqueStableNameExpr *S) {
VisitExpr(S);
VisitType(S->getTypeSourceInfo()->getType());
}
void StmtProfiler::VisitPredefinedExpr(const PredefinedExpr *S) {
VisitExpr(S);
ID.AddInteger(llvm::to_underlying(S->getIdentKind()));
}
void StmtProfiler::VisitOpenACCAsteriskSizeExpr(
const OpenACCAsteriskSizeExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitIntegerLiteral(const IntegerLiteral *S) {
VisitExpr(S);
S->getValue().Profile(ID);
QualType T = S->getType();
if (Canonical)
T = T.getCanonicalType();
ID.AddInteger(T->getTypeClass());
if (auto BitIntT = T->getAs<BitIntType>())
BitIntT->Profile(ID);
else
ID.AddInteger(T->castAs<BuiltinType>()->getKind());
}
void StmtProfiler::VisitFixedPointLiteral(const FixedPointLiteral *S) {
VisitExpr(S);
S->getValue().Profile(ID);
ID.AddInteger(S->getType()->castAs<BuiltinType>()->getKind());
}
void StmtProfiler::VisitCharacterLiteral(const CharacterLiteral *S) {
VisitExpr(S);
ID.AddInteger(llvm::to_underlying(S->getKind()));
ID.AddInteger(S->getValue());
}
void StmtProfiler::VisitFloatingLiteral(const FloatingLiteral *S) {
VisitExpr(S);
S->getValue().Profile(ID);
ID.AddBoolean(S->isExact());
ID.AddInteger(S->getType()->castAs<BuiltinType>()->getKind());
}
void StmtProfiler::VisitImaginaryLiteral(const ImaginaryLiteral *S) {
VisitExpr(S);
}
void StmtProfiler::VisitStringLiteral(const StringLiteral *S) {
VisitExpr(S);
ID.AddString(S->getBytes());
ID.AddInteger(llvm::to_underlying(S->getKind()));
}
void StmtProfiler::VisitParenExpr(const ParenExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitParenListExpr(const ParenListExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitUnaryOperator(const UnaryOperator *S) {
VisitExpr(S);
ID.AddInteger(S->getOpcode());
}
void StmtProfiler::VisitOffsetOfExpr(const OffsetOfExpr *S) {
VisitType(S->getTypeSourceInfo()->getType());
unsigned n = S->getNumComponents();
for (unsigned i = 0; i < n; ++i) {
const OffsetOfNode &ON = S->getComponent(i);
ID.AddInteger(ON.getKind());
switch (ON.getKind()) {
case OffsetOfNode::Array:
// Expressions handled below.
break;
case OffsetOfNode::Field:
VisitDecl(ON.getField());
break;
case OffsetOfNode::Identifier:
VisitIdentifierInfo(ON.getFieldName());
break;
case OffsetOfNode::Base:
// These nodes are implicit, and therefore don't need profiling.
break;
}
}
VisitExpr(S);
}
void
StmtProfiler::VisitUnaryExprOrTypeTraitExpr(const UnaryExprOrTypeTraitExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getKind());
if (S->isArgumentType())
VisitType(S->getArgumentType());
}
void StmtProfiler::VisitArraySubscriptExpr(const ArraySubscriptExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitMatrixSubscriptExpr(const MatrixSubscriptExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitArraySectionExpr(const ArraySectionExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitOMPArrayShapingExpr(const OMPArrayShapingExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitOMPIteratorExpr(const OMPIteratorExpr *S) {
VisitExpr(S);
for (unsigned I = 0, E = S->numOfIterators(); I < E; ++I)
VisitDecl(S->getIteratorDecl(I));
}
void StmtProfiler::VisitCallExpr(const CallExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitMemberExpr(const MemberExpr *S) {
VisitExpr(S);
VisitDecl(S->getMemberDecl());
if (!Canonical)
VisitNestedNameSpecifier(S->getQualifier());
ID.AddBoolean(S->isArrow());
}
void StmtProfiler::VisitCompoundLiteralExpr(const CompoundLiteralExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->isFileScope());
}
void StmtProfiler::VisitCastExpr(const CastExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitImplicitCastExpr(const ImplicitCastExpr *S) {
VisitCastExpr(S);
ID.AddInteger(S->getValueKind());
}
void StmtProfiler::VisitExplicitCastExpr(const ExplicitCastExpr *S) {
VisitCastExpr(S);
VisitType(S->getTypeAsWritten());
}
void StmtProfiler::VisitCStyleCastExpr(const CStyleCastExpr *S) {
VisitExplicitCastExpr(S);
}
void StmtProfiler::VisitBinaryOperator(const BinaryOperator *S) {
VisitExpr(S);
ID.AddInteger(S->getOpcode());
}
void
StmtProfiler::VisitCompoundAssignOperator(const CompoundAssignOperator *S) {
VisitBinaryOperator(S);
}
void StmtProfiler::VisitConditionalOperator(const ConditionalOperator *S) {
VisitExpr(S);
}
void StmtProfiler::VisitBinaryConditionalOperator(
const BinaryConditionalOperator *S) {
VisitExpr(S);
}
void StmtProfiler::VisitAddrLabelExpr(const AddrLabelExpr *S) {
VisitExpr(S);
VisitDecl(S->getLabel());
}
void StmtProfiler::VisitStmtExpr(const StmtExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitShuffleVectorExpr(const ShuffleVectorExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitConvertVectorExpr(const ConvertVectorExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitChooseExpr(const ChooseExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitGNUNullExpr(const GNUNullExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitVAArgExpr(const VAArgExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitInitListExpr(const InitListExpr *S) {
if (S->getSyntacticForm()) {
VisitInitListExpr(S->getSyntacticForm());
return;
}
VisitExpr(S);
}
void StmtProfiler::VisitDesignatedInitExpr(const DesignatedInitExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->usesGNUSyntax());
for (const DesignatedInitExpr::Designator &D : S->designators()) {
if (D.isFieldDesignator()) {
ID.AddInteger(0);
VisitName(D.getFieldName());
continue;
}
if (D.isArrayDesignator()) {
ID.AddInteger(1);
} else {
assert(D.isArrayRangeDesignator());
ID.AddInteger(2);
}
ID.AddInteger(D.getArrayIndex());
}
}
// Seems that if VisitInitListExpr() only works on the syntactic form of an
// InitListExpr, then a DesignatedInitUpdateExpr is not encountered.
void StmtProfiler::VisitDesignatedInitUpdateExpr(
const DesignatedInitUpdateExpr *S) {
llvm_unreachable("Unexpected DesignatedInitUpdateExpr in syntactic form of "
"initializer");
}
void StmtProfiler::VisitArrayInitLoopExpr(const ArrayInitLoopExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitArrayInitIndexExpr(const ArrayInitIndexExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitNoInitExpr(const NoInitExpr *S) {
llvm_unreachable("Unexpected NoInitExpr in syntactic form of initializer");
}
void StmtProfiler::VisitImplicitValueInitExpr(const ImplicitValueInitExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitExtVectorElementExpr(const ExtVectorElementExpr *S) {
VisitExpr(S);
VisitName(&S->getAccessor());
}
void StmtProfiler::VisitBlockExpr(const BlockExpr *S) {
VisitExpr(S);
VisitDecl(S->getBlockDecl());
}
void StmtProfiler::VisitGenericSelectionExpr(const GenericSelectionExpr *S) {
VisitExpr(S);
for (const GenericSelectionExpr::ConstAssociation Assoc :
S->associations()) {
QualType T = Assoc.getType();
if (T.isNull())
ID.AddPointer(nullptr);
else
VisitType(T);
VisitExpr(Assoc.getAssociationExpr());
}
}
void StmtProfiler::VisitPseudoObjectExpr(const PseudoObjectExpr *S) {
VisitExpr(S);
for (PseudoObjectExpr::const_semantics_iterator
i = S->semantics_begin(), e = S->semantics_end(); i != e; ++i)
// Normally, we would not profile the source expressions of OVEs.
if (const OpaqueValueExpr *OVE = dyn_cast<OpaqueValueExpr>(*i))
Visit(OVE->getSourceExpr());
}
void StmtProfiler::VisitAtomicExpr(const AtomicExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getOp());
}
void StmtProfiler::VisitConceptSpecializationExpr(
const ConceptSpecializationExpr *S) {
VisitExpr(S);
VisitDecl(S->getNamedConcept());
for (const TemplateArgument &Arg : S->getTemplateArguments())
VisitTemplateArgument(Arg);
}
void StmtProfiler::VisitRequiresExpr(const RequiresExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getLocalParameters().size());
for (ParmVarDecl *LocalParam : S->getLocalParameters())
VisitDecl(LocalParam);
ID.AddInteger(S->getRequirements().size());
for (concepts::Requirement *Req : S->getRequirements()) {
if (auto *TypeReq = dyn_cast<concepts::TypeRequirement>(Req)) {
ID.AddInteger(concepts::Requirement::RK_Type);
ID.AddBoolean(TypeReq->isSubstitutionFailure());
if (!TypeReq->isSubstitutionFailure())
VisitType(TypeReq->getType()->getType());
} else if (auto *ExprReq = dyn_cast<concepts::ExprRequirement>(Req)) {
ID.AddInteger(concepts::Requirement::RK_Compound);
ID.AddBoolean(ExprReq->isExprSubstitutionFailure());
if (!ExprReq->isExprSubstitutionFailure())
Visit(ExprReq->getExpr());
// C++2a [expr.prim.req.compound]p1 Example:
// [...] The compound-requirement in C1 requires that x++ is a valid
// expression. It is equivalent to the simple-requirement x++; [...]
// We therefore do not profile isSimple() here.
ID.AddBoolean(ExprReq->getNoexceptLoc().isValid());
const concepts::ExprRequirement::ReturnTypeRequirement &RetReq =
ExprReq->getReturnTypeRequirement();
if (RetReq.isEmpty()) {
ID.AddInteger(0);
} else if (RetReq.isTypeConstraint()) {
ID.AddInteger(1);
Visit(RetReq.getTypeConstraint()->getImmediatelyDeclaredConstraint());
} else {
assert(RetReq.isSubstitutionFailure());
ID.AddInteger(2);
}
} else {
ID.AddInteger(concepts::Requirement::RK_Nested);
auto *NestedReq = cast<concepts::NestedRequirement>(Req);
ID.AddBoolean(NestedReq->hasInvalidConstraint());
if (!NestedReq->hasInvalidConstraint())
Visit(NestedReq->getConstraintExpr());
}
}
}
static Stmt::StmtClass DecodeOperatorCall(const CXXOperatorCallExpr *S,
UnaryOperatorKind &UnaryOp,
BinaryOperatorKind &BinaryOp,
unsigned &NumArgs) {
switch (S->getOperator()) {
case OO_None:
case OO_New:
case OO_Delete:
case OO_Array_New:
case OO_Array_Delete:
case OO_Arrow:
case OO_Conditional:
case NUM_OVERLOADED_OPERATORS:
llvm_unreachable("Invalid operator call kind");
case OO_Plus:
if (NumArgs == 1) {
UnaryOp = UO_Plus;
return Stmt::UnaryOperatorClass;
}
BinaryOp = BO_Add;
return Stmt::BinaryOperatorClass;
case OO_Minus:
if (NumArgs == 1) {
UnaryOp = UO_Minus;
return Stmt::UnaryOperatorClass;
}
BinaryOp = BO_Sub;
return Stmt::BinaryOperatorClass;
case OO_Star:
if (NumArgs == 1) {
UnaryOp = UO_Deref;
return Stmt::UnaryOperatorClass;
}
BinaryOp = BO_Mul;
return Stmt::BinaryOperatorClass;
case OO_Slash:
BinaryOp = BO_Div;
return Stmt::BinaryOperatorClass;
case OO_Percent:
BinaryOp = BO_Rem;
return Stmt::BinaryOperatorClass;
case OO_Caret:
BinaryOp = BO_Xor;
return Stmt::BinaryOperatorClass;
case OO_Amp:
if (NumArgs == 1) {
UnaryOp = UO_AddrOf;
return Stmt::UnaryOperatorClass;
}
BinaryOp = BO_And;
return Stmt::BinaryOperatorClass;
case OO_Pipe:
BinaryOp = BO_Or;
return Stmt::BinaryOperatorClass;
case OO_Tilde:
UnaryOp = UO_Not;
return Stmt::UnaryOperatorClass;
case OO_Exclaim:
UnaryOp = UO_LNot;
return Stmt::UnaryOperatorClass;
case OO_Equal:
BinaryOp = BO_Assign;
return Stmt::BinaryOperatorClass;
case OO_Less:
BinaryOp = BO_LT;
return Stmt::BinaryOperatorClass;
case OO_Greater:
BinaryOp = BO_GT;
return Stmt::BinaryOperatorClass;
case OO_PlusEqual:
BinaryOp = BO_AddAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_MinusEqual:
BinaryOp = BO_SubAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_StarEqual:
BinaryOp = BO_MulAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_SlashEqual:
BinaryOp = BO_DivAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_PercentEqual:
BinaryOp = BO_RemAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_CaretEqual:
BinaryOp = BO_XorAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_AmpEqual:
BinaryOp = BO_AndAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_PipeEqual:
BinaryOp = BO_OrAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_LessLess:
BinaryOp = BO_Shl;
return Stmt::BinaryOperatorClass;
case OO_GreaterGreater:
BinaryOp = BO_Shr;
return Stmt::BinaryOperatorClass;
case OO_LessLessEqual:
BinaryOp = BO_ShlAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_GreaterGreaterEqual:
BinaryOp = BO_ShrAssign;
return Stmt::CompoundAssignOperatorClass;
case OO_EqualEqual:
BinaryOp = BO_EQ;
return Stmt::BinaryOperatorClass;
case OO_ExclaimEqual:
BinaryOp = BO_NE;
return Stmt::BinaryOperatorClass;
case OO_LessEqual:
BinaryOp = BO_LE;
return Stmt::BinaryOperatorClass;
case OO_GreaterEqual:
BinaryOp = BO_GE;
return Stmt::BinaryOperatorClass;
case OO_Spaceship:
BinaryOp = BO_Cmp;
return Stmt::BinaryOperatorClass;
case OO_AmpAmp:
BinaryOp = BO_LAnd;
return Stmt::BinaryOperatorClass;
case OO_PipePipe:
BinaryOp = BO_LOr;
return Stmt::BinaryOperatorClass;
case OO_PlusPlus:
UnaryOp = NumArgs == 1 ? UO_PreInc : UO_PostInc;
NumArgs = 1;
return Stmt::UnaryOperatorClass;
case OO_MinusMinus:
UnaryOp = NumArgs == 1 ? UO_PreDec : UO_PostDec;
NumArgs = 1;
return Stmt::UnaryOperatorClass;
case OO_Comma:
BinaryOp = BO_Comma;
return Stmt::BinaryOperatorClass;
case OO_ArrowStar:
BinaryOp = BO_PtrMemI;
return Stmt::BinaryOperatorClass;
case OO_Subscript:
return Stmt::ArraySubscriptExprClass;
case OO_Call:
return Stmt::CallExprClass;
case OO_Coawait:
UnaryOp = UO_Coawait;
return Stmt::UnaryOperatorClass;
}
llvm_unreachable("Invalid overloaded operator expression");
}
#if defined(_MSC_VER) && !defined(__clang__)
#if _MSC_VER == 1911
// Work around https://developercommunity.visualstudio.com/content/problem/84002/clang-cl-when-built-with-vc-2017-crashes-cause-vc.html
// MSVC 2017 update 3 miscompiles this function, and a clang built with it
// will crash in stage 2 of a bootstrap build.
#pragma optimize("", off)
#endif
#endif
void StmtProfiler::VisitCXXOperatorCallExpr(const CXXOperatorCallExpr *S) {
if (S->isTypeDependent()) {
// Type-dependent operator calls are profiled like their underlying
// syntactic operator.
//
// An operator call to operator-> is always implicit, so just skip it. The
// enclosing MemberExpr will profile the actual member access.
if (S->getOperator() == OO_Arrow)
return Visit(S->getArg(0));
UnaryOperatorKind UnaryOp = UO_Extension;
BinaryOperatorKind BinaryOp = BO_Comma;
unsigned NumArgs = S->getNumArgs();
Stmt::StmtClass SC = DecodeOperatorCall(S, UnaryOp, BinaryOp, NumArgs);
ID.AddInteger(SC);
for (unsigned I = 0; I != NumArgs; ++I)
Visit(S->getArg(I));
if (SC == Stmt::UnaryOperatorClass)
ID.AddInteger(UnaryOp);
else if (SC == Stmt::BinaryOperatorClass ||
SC == Stmt::CompoundAssignOperatorClass)
ID.AddInteger(BinaryOp);
else
assert(SC == Stmt::ArraySubscriptExprClass || SC == Stmt::CallExprClass);
return;
}
VisitCallExpr(S);
ID.AddInteger(S->getOperator());
}
void StmtProfiler::VisitCXXRewrittenBinaryOperator(
const CXXRewrittenBinaryOperator *S) {
// If a rewritten operator were ever to be type-dependent, we should profile
// it following its syntactic operator.
assert(!S->isTypeDependent() &&
"resolved rewritten operator should never be type-dependent");
ID.AddBoolean(S->isReversed());
VisitExpr(S->getSemanticForm());
}
#if defined(_MSC_VER) && !defined(__clang__)
#if _MSC_VER == 1911
#pragma optimize("", on)
#endif
#endif
void StmtProfiler::VisitCXXMemberCallExpr(const CXXMemberCallExpr *S) {
VisitCallExpr(S);
}
void StmtProfiler::VisitCUDAKernelCallExpr(const CUDAKernelCallExpr *S) {
VisitCallExpr(S);
}
void StmtProfiler::VisitAsTypeExpr(const AsTypeExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXNamedCastExpr(const CXXNamedCastExpr *S) {
VisitExplicitCastExpr(S);
}
void StmtProfiler::VisitCXXStaticCastExpr(const CXXStaticCastExpr *S) {
VisitCXXNamedCastExpr(S);
}
void StmtProfiler::VisitCXXDynamicCastExpr(const CXXDynamicCastExpr *S) {
VisitCXXNamedCastExpr(S);
}
void
StmtProfiler::VisitCXXReinterpretCastExpr(const CXXReinterpretCastExpr *S) {
VisitCXXNamedCastExpr(S);
}
void StmtProfiler::VisitCXXConstCastExpr(const CXXConstCastExpr *S) {
VisitCXXNamedCastExpr(S);
}
void StmtProfiler::VisitBuiltinBitCastExpr(const BuiltinBitCastExpr *S) {
VisitExpr(S);
VisitType(S->getTypeInfoAsWritten()->getType());
}
void StmtProfiler::VisitCXXAddrspaceCastExpr(const CXXAddrspaceCastExpr *S) {
VisitCXXNamedCastExpr(S);
}
void StmtProfiler::VisitUserDefinedLiteral(const UserDefinedLiteral *S) {
VisitCallExpr(S);
}
void StmtProfiler::VisitCXXBoolLiteralExpr(const CXXBoolLiteralExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->getValue());
}
void StmtProfiler::VisitCXXNullPtrLiteralExpr(const CXXNullPtrLiteralExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXStdInitializerListExpr(
const CXXStdInitializerListExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXTypeidExpr(const CXXTypeidExpr *S) {
VisitExpr(S);
if (S->isTypeOperand())
VisitType(S->getTypeOperandSourceInfo()->getType());
}
void StmtProfiler::VisitCXXUuidofExpr(const CXXUuidofExpr *S) {
VisitExpr(S);
if (S->isTypeOperand())
VisitType(S->getTypeOperandSourceInfo()->getType());
}
void StmtProfiler::VisitMSPropertyRefExpr(const MSPropertyRefExpr *S) {
VisitExpr(S);
VisitDecl(S->getPropertyDecl());
}
void StmtProfiler::VisitMSPropertySubscriptExpr(
const MSPropertySubscriptExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXThisExpr(const CXXThisExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->isImplicit());
ID.AddBoolean(S->isCapturedByCopyInLambdaWithExplicitObjectParameter());
}
void StmtProfiler::VisitCXXThrowExpr(const CXXThrowExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXDefaultArgExpr(const CXXDefaultArgExpr *S) {
VisitExpr(S);
VisitDecl(S->getParam());
}
void StmtProfiler::VisitCXXDefaultInitExpr(const CXXDefaultInitExpr *S) {
VisitExpr(S);
VisitDecl(S->getField());
}
void StmtProfiler::VisitCXXBindTemporaryExpr(const CXXBindTemporaryExpr *S) {
VisitExpr(S);
VisitDecl(
const_cast<CXXDestructorDecl *>(S->getTemporary()->getDestructor()));
}
void StmtProfiler::VisitCXXConstructExpr(const CXXConstructExpr *S) {
VisitExpr(S);
VisitDecl(S->getConstructor());
ID.AddBoolean(S->isElidable());
}
void StmtProfiler::VisitCXXInheritedCtorInitExpr(
const CXXInheritedCtorInitExpr *S) {
VisitExpr(S);
VisitDecl(S->getConstructor());
}
void StmtProfiler::VisitCXXFunctionalCastExpr(const CXXFunctionalCastExpr *S) {
VisitExplicitCastExpr(S);
}
void
StmtProfiler::VisitCXXTemporaryObjectExpr(const CXXTemporaryObjectExpr *S) {
VisitCXXConstructExpr(S);
}
void
StmtProfiler::VisitLambdaExpr(const LambdaExpr *S) {
if (!ProfileLambdaExpr) {
// Do not recursively visit the children of this expression. Profiling the
// body would result in unnecessary work, and is not safe to do during
// deserialization.
VisitStmtNoChildren(S);
// C++20 [temp.over.link]p5:
// Two lambda-expressions are never considered equivalent.
VisitDecl(S->getLambdaClass());
return;
}
CXXRecordDecl *Lambda = S->getLambdaClass();
for (const auto &Capture : Lambda->captures()) {
ID.AddInteger(Capture.getCaptureKind());
if (Capture.capturesVariable())
VisitDecl(Capture.getCapturedVar());
}
// Profiling the body of the lambda may be dangerous during deserialization.
// So we'd like only to profile the signature here.
ODRHash Hasher;
// FIXME: We can't get the operator call easily by
// `CXXRecordDecl::getLambdaCallOperator()` if we're in deserialization.
// So we have to do something raw here.
for (auto *SubDecl : Lambda->decls()) {
FunctionDecl *Call = nullptr;
if (auto *FTD = dyn_cast<FunctionTemplateDecl>(SubDecl))
Call = FTD->getTemplatedDecl();
else if (auto *FD = dyn_cast<FunctionDecl>(SubDecl))
Call = FD;
if (!Call)
continue;
Hasher.AddFunctionDecl(Call, /*SkipBody=*/true);
}
ID.AddInteger(Hasher.CalculateHash());
}
void
StmtProfiler::VisitCXXScalarValueInitExpr(const CXXScalarValueInitExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXDeleteExpr(const CXXDeleteExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->isGlobalDelete());
ID.AddBoolean(S->isArrayForm());
VisitDecl(S->getOperatorDelete());
}
void StmtProfiler::VisitCXXNewExpr(const CXXNewExpr *S) {
VisitExpr(S);
VisitType(S->getAllocatedType());
VisitDecl(S->getOperatorNew());
VisitDecl(S->getOperatorDelete());
ID.AddBoolean(S->isArray());
ID.AddInteger(S->getNumPlacementArgs());
ID.AddBoolean(S->isGlobalNew());
ID.AddBoolean(S->isParenTypeId());
ID.AddInteger(llvm::to_underlying(S->getInitializationStyle()));
}
void
StmtProfiler::VisitCXXPseudoDestructorExpr(const CXXPseudoDestructorExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->isArrow());
VisitNestedNameSpecifier(S->getQualifier());
ID.AddBoolean(S->getScopeTypeInfo() != nullptr);
if (S->getScopeTypeInfo())
VisitType(S->getScopeTypeInfo()->getType());
ID.AddBoolean(S->getDestroyedTypeInfo() != nullptr);
if (S->getDestroyedTypeInfo())
VisitType(S->getDestroyedType());
else
VisitIdentifierInfo(S->getDestroyedTypeIdentifier());
}
void StmtProfiler::VisitOverloadExpr(const OverloadExpr *S) {
VisitExpr(S);
VisitNestedNameSpecifier(S->getQualifier());
VisitName(S->getName(), /*TreatAsDecl*/ true);
ID.AddBoolean(S->hasExplicitTemplateArgs());
if (S->hasExplicitTemplateArgs())
VisitTemplateArguments(S->getTemplateArgs(), S->getNumTemplateArgs());
}
void
StmtProfiler::VisitUnresolvedLookupExpr(const UnresolvedLookupExpr *S) {
VisitOverloadExpr(S);
}
void StmtProfiler::VisitTypeTraitExpr(const TypeTraitExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getTrait());
ID.AddInteger(S->getNumArgs());
for (unsigned I = 0, N = S->getNumArgs(); I != N; ++I)
VisitType(S->getArg(I)->getType());
}
void StmtProfiler::VisitArrayTypeTraitExpr(const ArrayTypeTraitExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getTrait());
VisitType(S->getQueriedType());
}
void StmtProfiler::VisitExpressionTraitExpr(const ExpressionTraitExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getTrait());
VisitExpr(S->getQueriedExpression());
}
void StmtProfiler::VisitDependentScopeDeclRefExpr(
const DependentScopeDeclRefExpr *S) {
VisitExpr(S);
VisitName(S->getDeclName());
VisitNestedNameSpecifier(S->getQualifier());
ID.AddBoolean(S->hasExplicitTemplateArgs());
if (S->hasExplicitTemplateArgs())
VisitTemplateArguments(S->getTemplateArgs(), S->getNumTemplateArgs());
}
void StmtProfiler::VisitExprWithCleanups(const ExprWithCleanups *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXUnresolvedConstructExpr(
const CXXUnresolvedConstructExpr *S) {
VisitExpr(S);
VisitType(S->getTypeAsWritten());
ID.AddInteger(S->isListInitialization());
}
void StmtProfiler::VisitCXXDependentScopeMemberExpr(
const CXXDependentScopeMemberExpr *S) {
ID.AddBoolean(S->isImplicitAccess());
if (!S->isImplicitAccess()) {
VisitExpr(S);
ID.AddBoolean(S->isArrow());
}
VisitNestedNameSpecifier(S->getQualifier());
VisitName(S->getMember());
ID.AddBoolean(S->hasExplicitTemplateArgs());
if (S->hasExplicitTemplateArgs())
VisitTemplateArguments(S->getTemplateArgs(), S->getNumTemplateArgs());
}
void StmtProfiler::VisitUnresolvedMemberExpr(const UnresolvedMemberExpr *S) {
ID.AddBoolean(S->isImplicitAccess());
if (!S->isImplicitAccess()) {
VisitExpr(S);
ID.AddBoolean(S->isArrow());
}
VisitNestedNameSpecifier(S->getQualifier());
VisitName(S->getMemberName());
ID.AddBoolean(S->hasExplicitTemplateArgs());
if (S->hasExplicitTemplateArgs())
VisitTemplateArguments(S->getTemplateArgs(), S->getNumTemplateArgs());
}
void StmtProfiler::VisitCXXNoexceptExpr(const CXXNoexceptExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitPackExpansionExpr(const PackExpansionExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitSizeOfPackExpr(const SizeOfPackExpr *S) {
VisitExpr(S);
if (S->isPartiallySubstituted()) {
auto Args = S->getPartialArguments();
ID.AddInteger(Args.size());
for (const auto &TA : Args)
VisitTemplateArgument(TA);
} else {
VisitDecl(S->getPack());
ID.AddInteger(0);
}
}
void StmtProfiler::VisitPackIndexingExpr(const PackIndexingExpr *E) {
VisitExpr(E);
VisitExpr(E->getPackIdExpression());
VisitExpr(E->getIndexExpr());
}
void StmtProfiler::VisitSubstNonTypeTemplateParmPackExpr(
const SubstNonTypeTemplateParmPackExpr *S) {
VisitExpr(S);
VisitDecl(S->getParameterPack());
VisitTemplateArgument(S->getArgumentPack());
}
void StmtProfiler::VisitSubstNonTypeTemplateParmExpr(
const SubstNonTypeTemplateParmExpr *E) {
// Profile exactly as the replacement expression.
Visit(E->getReplacement());
}
void StmtProfiler::VisitFunctionParmPackExpr(const FunctionParmPackExpr *S) {
VisitExpr(S);
VisitDecl(S->getParameterPack());
ID.AddInteger(S->getNumExpansions());
for (FunctionParmPackExpr::iterator I = S->begin(), E = S->end(); I != E; ++I)
VisitDecl(*I);
}
void StmtProfiler::VisitMaterializeTemporaryExpr(
const MaterializeTemporaryExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCXXFoldExpr(const CXXFoldExpr *S) {
VisitExpr(S);
ID.AddInteger(S->getOperator());
}
void StmtProfiler::VisitCXXParenListInitExpr(const CXXParenListInitExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCoroutineBodyStmt(const CoroutineBodyStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCoreturnStmt(const CoreturnStmt *S) {
VisitStmt(S);
}
void StmtProfiler::VisitCoawaitExpr(const CoawaitExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitDependentCoawaitExpr(const DependentCoawaitExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitCoyieldExpr(const CoyieldExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitOpaqueValueExpr(const OpaqueValueExpr *E) {
VisitExpr(E);
}
void StmtProfiler::VisitTypoExpr(const TypoExpr *E) {
VisitExpr(E);
}
void StmtProfiler::VisitSourceLocExpr(const SourceLocExpr *E) {
VisitExpr(E);
}
void StmtProfiler::VisitEmbedExpr(const EmbedExpr *E) { VisitExpr(E); }
void StmtProfiler::VisitRecoveryExpr(const RecoveryExpr *E) { VisitExpr(E); }
void StmtProfiler::VisitObjCStringLiteral(const ObjCStringLiteral *S) {
VisitExpr(S);
}
void StmtProfiler::VisitObjCBoxedExpr(const ObjCBoxedExpr *E) {
VisitExpr(E);
}
void StmtProfiler::VisitObjCArrayLiteral(const ObjCArrayLiteral *E) {
VisitExpr(E);
}
void StmtProfiler::VisitObjCDictionaryLiteral(const ObjCDictionaryLiteral *E) {
VisitExpr(E);
}
void StmtProfiler::VisitObjCEncodeExpr(const ObjCEncodeExpr *S) {
VisitExpr(S);
VisitType(S->getEncodedType());
}
void StmtProfiler::VisitObjCSelectorExpr(const ObjCSelectorExpr *S) {
VisitExpr(S);
VisitName(S->getSelector());
}
void StmtProfiler::VisitObjCProtocolExpr(const ObjCProtocolExpr *S) {
VisitExpr(S);
VisitDecl(S->getProtocol());
}
void StmtProfiler::VisitObjCIvarRefExpr(const ObjCIvarRefExpr *S) {
VisitExpr(S);
VisitDecl(S->getDecl());
ID.AddBoolean(S->isArrow());
ID.AddBoolean(S->isFreeIvar());
}
void StmtProfiler::VisitObjCPropertyRefExpr(const ObjCPropertyRefExpr *S) {
VisitExpr(S);
if (S->isImplicitProperty()) {
VisitDecl(S->getImplicitPropertyGetter());
VisitDecl(S->getImplicitPropertySetter());
} else {
VisitDecl(S->getExplicitProperty());
}
if (S->isSuperReceiver()) {
ID.AddBoolean(S->isSuperReceiver());
VisitType(S->getSuperReceiverType());
}
}
void StmtProfiler::VisitObjCSubscriptRefExpr(const ObjCSubscriptRefExpr *S) {
VisitExpr(S);
VisitDecl(S->getAtIndexMethodDecl());
VisitDecl(S->setAtIndexMethodDecl());
}
void StmtProfiler::VisitObjCMessageExpr(const ObjCMessageExpr *S) {
VisitExpr(S);
VisitName(S->getSelector());
VisitDecl(S->getMethodDecl());
}
void StmtProfiler::VisitObjCIsaExpr(const ObjCIsaExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->isArrow());
}
void StmtProfiler::VisitObjCBoolLiteralExpr(const ObjCBoolLiteralExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->getValue());
}
void StmtProfiler::VisitObjCIndirectCopyRestoreExpr(
const ObjCIndirectCopyRestoreExpr *S) {
VisitExpr(S);
ID.AddBoolean(S->shouldCopy());
}
void StmtProfiler::VisitObjCBridgedCastExpr(const ObjCBridgedCastExpr *S) {
VisitExplicitCastExpr(S);
ID.AddBoolean(S->getBridgeKind());
}
void StmtProfiler::VisitObjCAvailabilityCheckExpr(
const ObjCAvailabilityCheckExpr *S) {
VisitExpr(S);
}
void StmtProfiler::VisitTemplateArguments(const TemplateArgumentLoc *Args,
unsigned NumArgs) {
ID.AddInteger(NumArgs);
for (unsigned I = 0; I != NumArgs; ++I)
VisitTemplateArgument(Args[I].getArgument());
}
void StmtProfiler::VisitTemplateArgument(const TemplateArgument &Arg) {
// Mostly repetitive with TemplateArgument::Profile!
ID.AddInteger(Arg.getKind());
switch (Arg.getKind()) {
case TemplateArgument::Null:
break;
case TemplateArgument::Type:
VisitType(Arg.getAsType());
break;
case TemplateArgument::Template:
case TemplateArgument::TemplateExpansion:
VisitTemplateName(Arg.getAsTemplateOrTemplatePattern());
break;
case TemplateArgument::Declaration:
VisitType(Arg.getParamTypeForDecl());
// FIXME: Do we need to recursively decompose template parameter objects?
VisitDecl(Arg.getAsDecl());
break;
case TemplateArgument::NullPtr:
VisitType(Arg.getNullPtrType());
break;
case TemplateArgument::Integral:
VisitType(Arg.getIntegralType());
Arg.getAsIntegral().Profile(ID);
    break;
case TemplateArgument::StructuralValue:
VisitType(Arg.getStructuralValueType());
// FIXME: Do we need to recursively decompose this ourselves?
Arg.getAsStructuralValue().Profile(ID);
break;
case TemplateArgument::Expression:
Visit(Arg.getAsExpr());
break;
case TemplateArgument::Pack:
for (const auto &P : Arg.pack_elements())
VisitTemplateArgument(P);
break;
}
}
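// VisitTemplateArgument above hashes the kind tag (`ID.AddInteger(Arg.getKind())`)
// before any per-kind payload, so arguments of different kinds can never yield
// the same bit stream even when their payload bits coincide. A self-contained
// sketch of this tag-then-payload scheme; `NodeID`, `ArgKind`, and `Arg` are
// hypothetical stand-ins, not clang's types:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy stand-in for llvm::FoldingSetNodeID.
struct NodeID {
  std::vector<uint64_t> Bits;
  void AddInteger(uint64_t V) { Bits.push_back(V); }
  bool operator==(const NodeID &O) const { return Bits == O.Bits; }
};

// Hypothetical discriminated argument; both kinds carry a 64-bit payload.
enum class ArgKind : uint64_t { Integral, Declaration };
struct Arg {
  ArgKind Kind;
  uint64_t Payload;
};

void Profile(NodeID &ID, const Arg &A) {
  ID.AddInteger(static_cast<uint64_t>(A.Kind)); // discriminator first
  ID.AddInteger(A.Payload);                     // then the per-kind payload
}
```

// Two arguments profile equal exactly when both the kind and the payload
// match, mirroring how the switch above keeps distinct kinds distinct.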
namespace {
class OpenACCClauseProfiler
: public OpenACCClauseVisitor<OpenACCClauseProfiler> {
StmtProfiler &Profiler;
public:
OpenACCClauseProfiler(StmtProfiler &P) : Profiler(P) {}
void VisitOpenACCClauseList(ArrayRef<const OpenACCClause *> Clauses) {
for (const OpenACCClause *Clause : Clauses) {
// TODO OpenACC: When we have clauses with expressions, we should
// profile them too.
Visit(Clause);
}
}
void VisitClauseWithVarList(const OpenACCClauseWithVarList &Clause) {
for (auto *E : Clause.getVarList())
Profiler.VisitStmt(E);
}
#define VISIT_CLAUSE(CLAUSE_NAME) \
void Visit##CLAUSE_NAME##Clause(const OpenACC##CLAUSE_NAME##Clause &Clause);
#include "clang/Basic/OpenACCClauses.def"
};
/// Nothing to do here, there are no sub-statements.
void OpenACCClauseProfiler::VisitDefaultClause(
const OpenACCDefaultClause &Clause) {}
void OpenACCClauseProfiler::VisitIfClause(const OpenACCIfClause &Clause) {
assert(Clause.hasConditionExpr() &&
"if clause requires a valid condition expr");
Profiler.VisitStmt(Clause.getConditionExpr());
}
void OpenACCClauseProfiler::VisitCopyClause(const OpenACCCopyClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitLinkClause(const OpenACCLinkClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitDeviceResidentClause(
const OpenACCDeviceResidentClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitCopyInClause(
const OpenACCCopyInClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitCopyOutClause(
const OpenACCCopyOutClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitCreateClause(
const OpenACCCreateClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitHostClause(const OpenACCHostClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitDeviceClause(
const OpenACCDeviceClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitSelfClause(const OpenACCSelfClause &Clause) {
if (Clause.isConditionExprClause()) {
if (Clause.hasConditionExpr())
Profiler.VisitStmt(Clause.getConditionExpr());
} else {
for (auto *E : Clause.getVarList())
Profiler.VisitStmt(E);
}
}
void OpenACCClauseProfiler::VisitFinalizeClause(
const OpenACCFinalizeClause &Clause) {}
void OpenACCClauseProfiler::VisitIfPresentClause(
const OpenACCIfPresentClause &Clause) {}
void OpenACCClauseProfiler::VisitNumGangsClause(
const OpenACCNumGangsClause &Clause) {
for (auto *E : Clause.getIntExprs())
Profiler.VisitStmt(E);
}
void OpenACCClauseProfiler::VisitTileClause(const OpenACCTileClause &Clause) {
for (auto *E : Clause.getSizeExprs())
Profiler.VisitStmt(E);
}
void OpenACCClauseProfiler::VisitNumWorkersClause(
const OpenACCNumWorkersClause &Clause) {
assert(Clause.hasIntExpr() && "num_workers clause requires a valid int expr");
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitCollapseClause(
const OpenACCCollapseClause &Clause) {
assert(Clause.getLoopCount() && "collapse clause requires a valid int expr");
Profiler.VisitStmt(Clause.getLoopCount());
}
void OpenACCClauseProfiler::VisitPrivateClause(
const OpenACCPrivateClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitFirstPrivateClause(
const OpenACCFirstPrivateClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitAttachClause(
const OpenACCAttachClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitDetachClause(
const OpenACCDetachClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitDeleteClause(
const OpenACCDeleteClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitDevicePtrClause(
const OpenACCDevicePtrClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitNoCreateClause(
const OpenACCNoCreateClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitPresentClause(
const OpenACCPresentClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitUseDeviceClause(
const OpenACCUseDeviceClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitVectorLengthClause(
const OpenACCVectorLengthClause &Clause) {
assert(Clause.hasIntExpr() &&
"vector_length clause requires a valid int expr");
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitAsyncClause(const OpenACCAsyncClause &Clause) {
if (Clause.hasIntExpr())
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitDeviceNumClause(
const OpenACCDeviceNumClause &Clause) {
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitDefaultAsyncClause(
const OpenACCDefaultAsyncClause &Clause) {
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitWorkerClause(
const OpenACCWorkerClause &Clause) {
if (Clause.hasIntExpr())
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitVectorClause(
const OpenACCVectorClause &Clause) {
if (Clause.hasIntExpr())
Profiler.VisitStmt(Clause.getIntExpr());
}
void OpenACCClauseProfiler::VisitWaitClause(const OpenACCWaitClause &Clause) {
if (Clause.hasDevNumExpr())
Profiler.VisitStmt(Clause.getDevNumExpr());
for (auto *E : Clause.getQueueIdExprs())
Profiler.VisitStmt(E);
}
/// Nothing to do here, there are no sub-statements.
void OpenACCClauseProfiler::VisitDeviceTypeClause(
const OpenACCDeviceTypeClause &Clause) {}
void OpenACCClauseProfiler::VisitAutoClause(const OpenACCAutoClause &Clause) {}
void OpenACCClauseProfiler::VisitIndependentClause(
const OpenACCIndependentClause &Clause) {}
void OpenACCClauseProfiler::VisitSeqClause(const OpenACCSeqClause &Clause) {}
void OpenACCClauseProfiler::VisitNoHostClause(
const OpenACCNoHostClause &Clause) {}
void OpenACCClauseProfiler::VisitGangClause(const OpenACCGangClause &Clause) {
for (unsigned I = 0; I < Clause.getNumExprs(); ++I) {
Profiler.VisitStmt(Clause.getExpr(I).second);
}
}
void OpenACCClauseProfiler::VisitReductionClause(
const OpenACCReductionClause &Clause) {
VisitClauseWithVarList(Clause);
}
void OpenACCClauseProfiler::VisitBindClause(const OpenACCBindClause &Clause) {
assert(false && "not implemented... what can we do about our expr?");
}
} // namespace
void StmtProfiler::VisitOpenACCComputeConstruct(
const OpenACCComputeConstruct *S) {
// VisitStmt handles children, so the AssociatedStmt is handled.
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCLoopConstruct(const OpenACCLoopConstruct *S) {
// VisitStmt handles children, so the Loop is handled.
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCCombinedConstruct(
const OpenACCCombinedConstruct *S) {
// VisitStmt handles children, so the Loop is handled.
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCDataConstruct(const OpenACCDataConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCEnterDataConstruct(
const OpenACCEnterDataConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCExitDataConstruct(
const OpenACCExitDataConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCHostDataConstruct(
const OpenACCHostDataConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCWaitConstruct(const OpenACCWaitConstruct *S) {
// VisitStmt covers 'children', so the exprs inside of it are covered.
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCCacheConstruct(const OpenACCCacheConstruct *S) {
// VisitStmt covers 'children', so the exprs inside of it are covered.
VisitStmt(S);
}
void StmtProfiler::VisitOpenACCInitConstruct(const OpenACCInitConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCShutdownConstruct(
const OpenACCShutdownConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCSetConstruct(const OpenACCSetConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCUpdateConstruct(
const OpenACCUpdateConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitOpenACCAtomicConstruct(
const OpenACCAtomicConstruct *S) {
VisitStmt(S);
OpenACCClauseProfiler P{*this};
P.VisitOpenACCClauseList(S->clauses());
}
void StmtProfiler::VisitHLSLOutArgExpr(const HLSLOutArgExpr *S) {
VisitStmt(S);
}
void Stmt::Profile(llvm::FoldingSetNodeID &ID, const ASTContext &Context,
                   bool Canonical, bool ProfileLambdaExpr) const {
StmtProfilerWithPointers Profiler(ID, Context, Canonical, ProfileLambdaExpr);
Profiler.Visit(this);
}
void Stmt::ProcessODRHash(llvm::FoldingSetNodeID &ID,
class ODRHash &Hash) const {
StmtProfilerWithoutPointers Profiler(ID, Hash);
Profiler.Visit(this);
}