[CSSPGO][llvm-profgen] Context-sensitive profile data generation
This stack of changes introduces the `llvm-profgen` utility, which generates a profile data file from given perf script data files for sample-based PGO. It is part of (though not limited to) the CSSPGO work. Specifically, to support context-sensitive profiles with or without pseudo probes, it implements a series of functionalities including perf trace parsing, instruction symbolization, LBR stack/call frame stack unwinding, pseudo probe decoding, etc. High throughput is achieved through multiple levels of sample aggregation, and a compatible output format is generated in one stop at the end. Please refer to https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s for the CSSPGO RFC.
This change adds context-sensitive profile data generation to llvm-profgen. With simultaneous sampling of the LBR and the call stack, we can identify the leaf of an LBR sample with the calling context from the stack sample. During the process of deriving fall-through paths from LBR entries, we unwind the LBR by replaying all the calls and returns (including implicit calls/returns due to inlining) backwards on top of the sampled call stack. The state of the call stack as we unwind through the LBR then always represents the calling context of the current fall-through path.
We have two types of virtual unwinding: 1) LBR unwinding and 2) linear range unwinding.
Specifically, each LBR entry can be classified as a call, a return, or a regular branch. LBR unwinding replays the operation by pushing, popping, or switching the leaf frame of the call stack. Since the initial call stack is the most recently sampled one, the replay is done in anti-execution order, i.e. for the regular case, pop the call stack when the LBR entry is a call and push a frame onto the call stack when it is a return. After each LBR entry is processed, we also need to align with the next entry by walking through the instructions from the previous entry's target to the current entry's source, which we call linear unwinding. Since the instructions in a linear range can come from different functions due to inlining, linear unwinding splits the range and records counters for the sub-ranges that share the same inline context.
With each fall-through path from LBR unwinding, we aggregate each sample into counters by calling context and eventually generate a full context-sensitive profile (without relying on inlining) to drive the compiler's PGO/FDO.
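To make the replay concrete, below is a minimal sketch of the loop described above; the `LBREntry` struct and the `recordRange` helper are simplified stand-ins, not the actual `VirtualUnwinder` code.

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for the real llvm-profgen types.
struct LBREntry {
  uint64_t Source; // branch source address
  uint64_t Target; // branch target address
  bool IsCall;
  bool IsReturn;
};

// Hypothetical helper: attribute the fall-through range [RangeStart, RangeEnd]
// to the calling context currently represented by CallStack. Aggregation into
// per-context counters is elided in this sketch.
static void recordRange(const std::vector<uint64_t> &CallStack,
                        uint64_t RangeStart, uint64_t RangeEnd) {
  (void)CallStack;
  (void)RangeStart;
  (void)RangeEnd;
}

// Replay LBR entries (most recent first) backwards on top of the sampled
// call stack, whose leaf frame is at the back of the vector.
static void unwind(std::vector<uint64_t> &CallStack,
                   const std::vector<LBREntry> &LBRStack) {
  for (const LBREntry &Branch : LBRStack) {
    // Linear unwinding: instructions from this branch's target up to the
    // current leaf address fall through under the current calling context.
    // (Splitting the range on inlined-frame boundaries is omitted here.)
    recordRange(CallStack, Branch.Target, CallStack.back());

    // LBR unwinding in anti-execution order:
    if (Branch.IsCall) {
      // A call grew the stack in execution order, so undo it by popping the
      // leaf frame; the caller frame underneath holds the call-site address.
      CallStack.pop_back();
    } else if (Branch.IsReturn) {
      // A return shrank the stack, so undo it by re-entering the callee at
      // the return instruction (the branch source).
      CallStack.push_back(Branch.Source);
    } else {
      // Regular branch: only the leaf frame moves, back to the branch source.
      CallStack.back() = Branch.Source;
    }
  }
}
```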
A breakdown of noteworthy changes:
- Added `HybridSample` class as the abstraction of a perf sample that includes both the LBR stack and the call stack
* Extended `PerfReader` to auto-detect whether the input perf script output contains CS profile data, then do the parsing. Multiple `HybridSample`s are extracted
* Sped up processing by aggregating `HybridSample`s into `AggregatedSamples`
* Added `VirtualUnwinder` that consumes aggregated `HybridSample`s and implements unwinding of calls, returns, and linear paths that contain implicit calls/returns from inlining. Range and branch counters are aggregated by the calling context.
Here the calling context is a string; each context frame is a pair of function name and callsite location info, and the whole context looks like `main:1 @ foo:2 @ bar`.
* Added `ProfileGenerator` that accumulates counters by unfolding ranges or mapping branch targets, then generates context-sensitive function profiles including function body samples, inferred callee head samples, and callsite target samples, and finally records them into the ProfileMap.
* Leveraged LLVM's built-in writer (`SampleProfWriter`) to support the different serialization formats
- Used `getCanonicalFnName` for callee names and names from the ELF section
- Added regression tests for both unwinding and profile generation
Test Plan:
ninja & ninja check-llvm
Reviewed By: hoy, wenlei, wmi
Differential Revision: https://reviews.llvm.org/D89723
2020-10-19 12:55:59 -07:00

//===-- ProfileGenerator.h - Profile Generator -----------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_TOOLS_LLVM_PROGEN_PROFILEGENERATOR_H
#define LLVM_TOOLS_LLVM_PROGEN_PROFILEGENERATOR_H
[CSSPGO][llvm-profgen] Context-sensitive global pre-inliner
This change sets up a framework in llvm-profgen to estimate inline decisions and adjust the context-sensitive profile based on them. We call it a global pre-inliner in llvm-profgen.
It serves two purposes:
1) Since the context profile for a context that is not inlined will be merged into the base profile, if we estimate that a context will not be inlined, we can merge its context profile in the output to save profile size.
2) For ThinLTO, when a context involving functions from different modules is not inlined, we can't merge function profiles across modules, leading to suboptimal post-inline count quality. By estimating some inline decisions, we can adjust/merge context profiles beforehand as a mitigation.
The compiler's inline heuristic uses inline cost, which is not available in llvm-profgen. But since inline cost is closely related to size, we can get an estimate from function size in debug info. Because the size we have in llvm-profgen is the final size, it could even be more accurate than the compiler's inline cost estimation.
This change only has the framework, with a few TODOs left for follow-up patches for a complete implementation:
1) We need to retrieve the size of each function/inlinee from debug info for the inlining estimation. Currently we use the number of samples in a profile as a placeholder for size estimation.
2) Currently the thresholds use the values of the sample loader inliner. But they need to be tuned, since the size here is fully optimized machine code size, rather than an inline cost based on not-yet-fully-optimized IR.
Differential Revision: https://reviews.llvm.org/D99146
2021-03-05 07:50:36 -08:00
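As a rough illustration of the idea (not the actual `CSPreInliner` implementation; the struct, thresholds, and the use of sample count as a size proxy below are stand-ins following the TODOs above):

```cpp
#include <cstdint>

// Hypothetical sketch: decide whether a context is likely to be inlined,
// using the profile's total samples as a placeholder for function size
// (a TODO in the patch is to use real sizes from debug info).
struct ContextProfileInfo {
  uint64_t TotalSamples;  // stand-in for callee size
  uint64_t CallsiteCount; // samples at the inlined callsite
};

bool shouldPreInline(const ContextProfileInfo &Info, uint64_t HotThreshold,
                     uint64_t SizeLimit) {
  // Hot enough to be worth inlining and small enough to fit the size budget.
  // If not, the context profile would be merged into the base profile.
  return Info.CallsiteCount >= HotThreshold && Info.TotalSamples <= SizeLimit;
}
```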

#include "CSPreInliner.h"
#include "ErrorHandling.h"
#include "PerfReader.h"
#include "ProfiledBinary.h"
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/ProfileData/SampleProfWriter.h"
#include <memory>
#include <unordered_set>

using namespace llvm;
using namespace sampleprof;

namespace llvm {
namespace sampleprof {

using ProbeCounterMap =
    std::unordered_map<const MCDecodedPseudoProbe *, uint64_t>;

// This base class is for profile generation of sample-based PGO. We reuse all
// structures relating to function profiles and profile writers as seen in
// /ProfileData/SampleProf.h.
class ProfileGeneratorBase {
public:
  ProfileGeneratorBase(ProfiledBinary *Binary) : Binary(Binary){};
  ProfileGeneratorBase(ProfiledBinary *Binary,
                       const ContextSampleCounterMap *Counters)
      : Binary(Binary), SampleCounters(Counters){};
  ProfileGeneratorBase(ProfiledBinary *Binary,
                       const SampleProfileMap &&Profiles)
      : Binary(Binary), ProfileMap(std::move(Profiles)){};

  virtual ~ProfileGeneratorBase() = default;

  static std::unique_ptr<ProfileGeneratorBase>
  create(ProfiledBinary *Binary, const ContextSampleCounterMap *Counters,
         bool profileIsCS);
  static std::unique_ptr<ProfileGeneratorBase>
  create(ProfiledBinary *Binary, SampleProfileMap &ProfileMap,
         bool profileIsCS);

  virtual void generateProfile() = 0;
  void write();

  static uint32_t
  getDuplicationFactor(unsigned Discriminator,
                       bool UseFSD = ProfileGeneratorBase::UseFSDiscriminator) {
    return UseFSD ? 1
                  : llvm::DILocation::getDuplicationFactorFromDiscriminator(
                        Discriminator);
  }

  static uint32_t
  getBaseDiscriminator(unsigned Discriminator,
                       bool UseFSD = ProfileGeneratorBase::UseFSDiscriminator) {
    return UseFSD ? Discriminator
                  : DILocation::getBaseDiscriminatorFromDiscriminator(
                        Discriminator, /* IsFSDiscriminator */ false);
  }

  static bool UseFSDiscriminator;

protected:
  // Use SampleProfileWriter to serialize profile map
  void write(std::unique_ptr<SampleProfileWriter> Writer,
             SampleProfileMap &ProfileMap);

  /*
  For each region boundary point, mark if it is begin or end (or both) of
  the region. Boundary points are inclusive. Log the sample count as well
  so we can use it when we compute the sample count of each disjoint region
  later. Note that there might be multiple ranges with different sample
  count that share same begin/end point. We need to accumulate the sample
  count for the boundary point for such case, because for the example
  below,
  |<--100-->|
  |<------200------>|
  A         B         C
  sample count for disjoint region [A,B] would be 300.
  */
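  // Continuing the example above: A is a begin point shared by both ranges
  // and accumulates 100 + 200 = 300, B ends the 100 range and C ends the 200
  // range, so the disjoint region [A,B] gets 300 while the remainder of the
  // 200 range past B only gets 200.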
  void findDisjointRanges(RangeSample &DisjointRanges,
                          const RangeSample &Ranges);

  // Go through each address from range to extract the top frame probe by
  // looking up in the Address2ProbeMap
  void extractProbesFromRange(const RangeSample &RangeCounter,
                              ProbeCounterMap &ProbeCounter,
                              bool FindDisjointRanges = true);

  // Helper function for updating body sample for a leaf location in
  // FunctionProfile
  void updateBodySamplesforFunctionProfile(FunctionSamples &FunctionProfile,
                                           const SampleContextFrame &LeafLoc,
                                           uint64_t Count);

  void updateFunctionSamples();

  void updateTotalSamples();

  void updateCallsiteSamples();

[llvm-profgen] Filter out ambiguous cold profiles during profile generation (#81803)
For the built-in local initialization functions (`__cxx_global_var_init`, `__tls_init` prefixes), there can be multiple versions of the function in the final binary, e.g. for `__cxx_global_var_init`, which is a wrapper around global variable ctors, the compiler may assign suffixes like `__cxx_global_var_init.N` for different ctors.
However, during profile generation we call `getCanonicalFnName` to canonicalize the names, which strips the suffixes. Therefore, samples from different functions query the same profile (just `__cxx_global_var_init`) and the counts are merged. As the functions are essentially different, the entries of the merged profile are ambiguous. In sample loading, the IR of each version would be attributed to the merged entries, which is inaccurate. This is especially harmful for fuzzy profile matching: it sees multiple callsites (from different functions) but uses them to match one callsite, which misleads the matching and reports a lot of false positives.
Hence, we filter these profiles out of the profile map at profile generation time. They are all cold functions, so there is no performance impact.
2024-02-16 14:29:24 -08:00
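A rough sketch of the prefix check this implies; the free-standing helper below is illustrative only, though the prefix list mirrors the `FuncPrefixsToFilter` member declared later in this class.

```cpp
#include <cstring>

// Illustrative sketch: a profile is considered ambiguous (and filtered out)
// if its function name starts with one of the known local-init prefixes.
static constexpr const char *FuncPrefixsToFilter[] = {"__cxx_global_var_init",
                                                      "__tls_init"};

static bool isAmbiguousInitName(const char *FuncName) {
  for (const char *Prefix : FuncPrefixsToFilter)
    if (std::strncmp(FuncName, Prefix, std::strlen(Prefix)) == 0)
      return true;
  return false;
}
```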

  void filterAmbiguousProfile(SampleProfileMap &Profiles);

  bool filterAmbiguousProfile(FunctionSamples &FS);

  StringRef getCalleeNameForAddress(uint64_t TargetAddress);

  void computeSummaryAndThreshold(SampleProfileMap &ProfileMap);

  void calculateBodySamplesAndSize(const FunctionSamples &FSamples,
                                   uint64_t &TotalBodySamples,
                                   uint64_t &FuncBodySize);

  double calculateDensity(const SampleProfileMap &Profiles);

  void calculateAndShowDensity(const SampleProfileMap &Profiles);

  void showDensitySuggestion(double Density);

  void collectProfiledFunctions();

  bool collectFunctionsFromRawProfile(
      std::unordered_set<const BinaryFunction *> &ProfiledFunctions);

  // Collect profiled Functions for llvm sample profile input.
  virtual bool collectFunctionsFromLLVMProfile(
      std::unordered_set<const BinaryFunction *> &ProfiledFunctions) = 0;

  // List of function prefixes to filter out.
  static constexpr const char *FuncPrefixsToFilter[] = {"__cxx_global_var_init",
                                                        "__tls_init"};

  // Thresholds from profile summary to answer isHotCount/isColdCount queries.
  uint64_t HotCountThreshold;

  uint64_t ColdCountThreshold;

  ProfiledBinary *Binary = nullptr;

  std::unique_ptr<ProfileSummary> Summary;

[CSSPGO] Split context string to deduplicate function name used in the context.
Currently context strings contain a lot of duplicated function names, which significantly increases the profile size. This change splits the context into a series of {name, offset, discriminator} tuples, so function names used in the context can be replaced by an index into the name table, which significantly reduces the size consumed by the context.
A follow-up improvement made in the compiler and profiling tools is to avoid reconstructing full context strings, which is time- and memory-consuming. Instead, a context vector of `StringRef` is adopted to represent the full context in all scenarios. As a result, the previously prevalent profile map, which was keyed by `StringRef`, is now engineered as an unordered map keyed by `SampleContext`. `SampleContext` is reshaped to use an `ArrayRef` to represent a full context for CS profiles. For non-CS profiles, it falls back to using a `StringRef` to represent a contextless function name. Both the `ArrayRef` and `StringRef` objects are underpinned by real array and string objects that are stored in producer buffers. For the compiler, they are maintained by the sample reader. For llvm-profgen, they are maintained in `ProfiledBinary` and `ProfileGenerator`. Full context strings are generated only for debugging and printing.
When it comes to the profile format, nothing has changed for the text format, though internally a CS context is implemented as a vector. The extbinary format is only changed for CS profiles, with an additional `SecCSNameTable` section which stores all full contexts logically in the form of `vector<int>`, with each element being an offset pointing into `SecNameTable`. All occurrences of contexts elsewhere are redirected to use offsets into `SecCSNameTable`.
Testing
This is a no-diff change in terms of code quality and profile content (for the text profile).
For our internal large service (aka ads), profile generation time is cut in half, with a 20x smaller string-based extbinary format generated.
The compile time of ads is reduced by 25%.
Differential Revision: https://reviews.llvm.org/D107299
2021-08-25 11:40:34 -07:00
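For illustration, a minimal sketch of the idea under simplified, hypothetical type names (not the exact `SampleContext` layout): instead of the string `main:1 @ foo:2 @ bar`, the context is held as a vector of frames indexing into a shared name table.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical, simplified context representation.
struct ContextFrame {
  uint32_t NameIndex;     // index into the shared function name table
  uint32_t LineOffset;    // callsite line offset within the caller
  uint32_t Discriminator; // callsite discriminator
};

// `main:1 @ foo:2 @ bar` becomes three frames referencing the name table,
// so each function name is stored once instead of once per context string.
using SampleContextFrames = std::vector<ContextFrame>;

// Full context strings are only reconstructed for debugging and printing.
std::string toString(const SampleContextFrames &Context,
                     const std::vector<std::string> &NameTable) {
  std::string Str;
  for (std::size_t I = 0; I < Context.size(); ++I) {
    if (I)
      Str += " @ ";
    Str += NameTable[Context[I].NameIndex];
    if (I + 1 < Context.size()) // the leaf frame has no callsite location
      Str += ":" + std::to_string(Context[I].LineOffset);
  }
  return Str;
}
```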

  // Used by SampleProfileWriter
  SampleProfileMap ProfileMap;

  const ContextSampleCounterMap *SampleCounters = nullptr;
};

class ProfileGenerator : public ProfileGeneratorBase {

public:
  ProfileGenerator(ProfiledBinary *Binary,
                   const ContextSampleCounterMap *Counters)
      : ProfileGeneratorBase(Binary, Counters){};
  ProfileGenerator(ProfiledBinary *Binary, const SampleProfileMap &&Profiles)
      : ProfileGeneratorBase(Binary, std::move(Profiles)){};
  void generateProfile() override;

private:
  void generateLineNumBasedProfile();
  void generateProbeBasedProfile();
  RangeSample preprocessRangeCounter(const RangeSample &RangeCounter);

[llvm-profdata] Do not create numerical strings for MD5 function names read from a Sample Profile. (#66164)
This is phase 2 of the MD5 refactoring on Sample Profile following https://reviews.llvm.org/D147740.
In the previous implementation, when an MD5 Sample Profile is read, the reader first converts the MD5 values to strings, then creates a `StringRef` as if the numerical strings were regular function names, and later on IPO transformation passes perform string comparison over these numerical strings for profile matching. This is inefficient since it causes many small heap allocations.
In this patch I created a class `ProfileFuncRef` that is similar to `StringRef` but can represent a hash value directly without any conversion, and it is more efficient (I will attach some benchmark results later) when used in associative containers.
`ProfileFuncRef` guarantees that the same function name in string form or in MD5 form has the same hash value, which also fixes a few issues in IPO passes where function matching/lookup only checks the function name string and returns a no-match if the profile is MD5.
When testing on an internal large profile (> 1 GB, with more than 10 million functions), the full profile load time is reduced from 28 sec to 25 sec on average, and reading the function offset table from 0.78s to 0.7s.
2023-10-17 17:09:39 -04:00
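A minimal sketch of that hashing invariant, with simplified stand-in names (the FNV hash below stands in for the MD5-based name hash; this is not the actual `ProfileFuncRef` code):

```cpp
#include <cstdint>
#include <string>
#include <utility>

// Stand-in name hash (the real implementation hashes names with MD5).
inline uint64_t hashFuncName(const std::string &Name) {
  uint64_t H = 1469598103934665603ull; // FNV-1a offset basis
  for (unsigned char C : Name) {
    H ^= C;
    H *= 1099511628211ull;
  }
  return H;
}

// Hypothetical sketch of an identifier that holds either a readable name or
// only a precomputed hash (as read from an MD5 profile, i.e. the value
// hashFuncName would produce for that name). Both forms of the same function
// therefore hash identically and land in the same bucket of associative
// containers, without converting the hash back into a numerical string.
class FuncRef {
  std::string Name;  // empty when only the hash is known
  uint64_t HashCode; // always valid

public:
  explicit FuncRef(std::string N)
      : Name(std::move(N)), HashCode(hashFuncName(Name)) {}
  explicit FuncRef(uint64_t Hash) : HashCode(Hash) {}

  uint64_t hash() const { return HashCode; }
  bool operator==(const FuncRef &Other) const {
    return HashCode == Other.HashCode;
  }
};
```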

  FunctionSamples &getTopLevelFunctionProfile(FunctionId FuncName);

  // Helper function to get the leaf frame's FunctionProfile by traversing the
  // inline stack and meanwhile it adds the total samples for each frame's
  // function profile.
  FunctionSamples &
  getLeafProfileAndAddTotalSamples(const SampleContextFrameVector &FrameVec,
                                   uint64_t Count);

  void populateBodySamplesForAllFunctions(const RangeSample &RangeCounter);
  void
  populateBoundarySamplesForAllFunctions(const BranchSample &BranchCounters);
  void
  populateBodySamplesWithProbesForAllFunctions(const RangeSample &RangeCounter);
  void populateBoundarySamplesWithProbesForAllFunctions(
      const BranchSample &BranchCounters);

  void postProcessProfiles();
  void trimColdProfiles(const SampleProfileMap &Profiles,
                        uint64_t ColdCntThreshold);
  bool collectFunctionsFromLLVMProfile(
      std::unordered_set<const BinaryFunction *> &ProfiledFunctions) override;
};

class CSProfileGenerator : public ProfileGeneratorBase {
public:
  CSProfileGenerator(ProfiledBinary *Binary,
                     const ContextSampleCounterMap *Counters)
      : ProfileGeneratorBase(Binary, Counters){};
  CSProfileGenerator(ProfiledBinary *Binary, SampleProfileMap &Profiles)
      : ProfileGeneratorBase(Binary), ContextTracker(Profiles, nullptr){};

[CSSPGO] Load context profile for external functions in PreLink and populate ThinLTO import list
For ThinLTO's prelink compilation, we need to put external inline candidates into an import list attached to the function's entry count metadata. This enables ThinLink to treat such cross-module callees as hot in the summary index, and later helps postlink import them for profile-guided cross-module inlining.
For AutoFDO, the import list is retrieved by traversing the nested inlinee functions. For CSSPGO, since the profile is flattened, a few things need to happen for this to work:
- When loading an input profile in extended binary format, we need to load all child context profiles whose parent is in the current module, so the context trie for the current module includes potential cross-module inlinees.
- In order to make the above happen, we need to know whether the input profile is a CSSPGO profile before starting to read function profiles, hence a flag is added to the profile summary section.
- When searching for cross-module inline candidates, we need to walk through the context trie instead of the nested inlinee profiles (callsite samples of an AutoFDO profile).
- Now that we have more accurate counts with CSSPGO, we switched to using entry count instead of total count to decide if an external callee is potentially beneficial to inline. This makes it consistent with how we determine whether a call target is a potential inline candidate.
Differential Revision: https://reviews.llvm.org/D98590
2021-03-13 13:55:28 -08:00
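A minimal sketch of the candidate test described in the last bullet, using hypothetical types (not the actual sample loader code):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical stand-ins: a summary-index GUID and a context profile node
// that knows its callee's entry samples and whether it is defined elsewhere.
struct ContextCallee {
  uint64_t GUID;
  uint64_t EntrySamples;
  bool IsExternal; // defined in another module
};

// Sketch of populating a prelink import list: with CSSPGO's flattened and
// more accurate counts, entry samples (not total samples) decide whether an
// external callee is worth importing for cross-module inlining.
void populateImportList(const std::vector<ContextCallee> &Callees,
                        uint64_t HotEntryThreshold,
                        std::unordered_set<uint64_t> &ImportGUIDs) {
  for (const ContextCallee &C : Callees)
    if (C.IsExternal && C.EntrySamples >= HotEntryThreshold)
      ImportGUIDs.insert(C.GUID);
}
```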

  void generateProfile() override;
  // Trim the context stack at a given depth.
  template <typename T>
  static void trimContext(SmallVectorImpl<T> &S, int Depth = MaxContextDepth) {
    if (Depth < 0 || static_cast<size_t>(Depth) >= S.size())
      return;
    std::copy(S.begin() + S.size() - static_cast<size_t>(Depth), S.end(),
              S.begin());
    S.resize(Depth);
  }
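A quick illustration of what `trimContext` does to a context stack (illustrative only; plain `StringRef` frames are used for readability, while the element type in practice is whatever frame type the caller stores):

```c++
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
// Assumes llvm-profgen's ProfileGenerator.h is on the include path.
#include "ProfileGenerator.h"
using namespace llvm;

void trimContextExample() {
  // Outermost caller first, leaf frame last.
  SmallVector<StringRef, 8> Stack = {"main", "foo", "bar", "baz"};
  // Keep only the two innermost frames.
  sampleprof::CSProfileGenerator::trimContext(Stack, /*Depth=*/2);
  // Stack is now {"bar", "baz"}.
}
```

Note that if `MaxContextDepth` is left negative (meaning no depth limit), the guard at the top of `trimContext` makes the default call a no-op.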
[CSSPGO][llvm-profgen] Compress recursive cycles in calling context
This change compresses the context string by removing cycles caused by recursive functions for CS profile generation. Removing recursion cycles is a way to normalize the calling context, which helps sample aggregation and also makes context promotion deterministic.
For the implementation, we recognize adjacent repeated frames as cycles and deduplicate them through multiple rounds of iteration.
For example, consider an input context string stack:
["a", "a", "b", "c", "a", "b", "c", "b", "c", "d"]
The first iteration removes all adjacent repeated frames of size 1:
["a", "b", "c", "a", "b", "c", "b", "c", "d"]
The second iteration removes all adjacent repeated frames of size 2:
["a", "b", "c", "a", "b", "c", "d"]
So in the end we get the compressed output:
["a", "b", "c", "d"]
Compression is called in two places: once on a sample's context key right after unwinding, and once on the eventual context string id in the ProfileGenerator.
Added a switch `compress-recursion` to control the maximum size of duplicated frames; the default of -1 means no size limit.
Added unit tests and a regression test for this.
Differential Revision: https://reviews.llvm.org/D93556
  // Remove adjacent repeated context sequences up to a given sequence length,
  // -1 means no size limit. Note that repeated sequences are identified based
  // on the exact call site, this is finer granularity than function recursion.
  template <typename T>
  static void compressRecursionContext(SmallVectorImpl<T> &Context,
                                       int32_t CSize = MaxCompressionSize) {
    uint32_t I = 1;
    uint32_t HS = static_cast<uint32_t>(Context.size() / 2);
    uint32_t MaxDedupSize =
        CSize == -1 ? HS : std::min(static_cast<uint32_t>(CSize), HS);
    auto BeginIter = Context.begin();
    // Use an in-place algorithm to save memory copy
    // End indicates the end location of current iteration's data
    uint32_t End = 0;
    // Deduplicate from length 1 to the max possible size of a repeated
    // sequence.
    while (I <= MaxDedupSize) {
      // This is a linear algorithm that deduplicates adjacent repeated
      // sequences of size I. The deduplication detection runs on a sliding
      // window whose size is 2*I, and it keeps sliding the window to
      // deduplicate the data inside. Once duplication is detected, deduplicate
      // it by skipping the right half of the window; otherwise just copy the
      // new data back by appending it at the End pointer (for the next
      // iteration).
      //
      // For example:
      // Input: [a1, a2, b1, b2]
      // (Indices are added to distinguish the same char; the original input is
      // [a, a, b, b], and the size of the dedup window is 2 (I = 1) at the
      // beginning.)
      //
      // 1) The initial status is a dummy window [null, a1]; just copy the
      // right half of the window (End = 0), then slide the window.
      // Result: [a1], a2, b1, b2 (End points to the element right before ];
      // after ] is the data of the previous iteration.)
      //
      // 2) Next window is [a1, a2]. Since a1 == a2, duplication happened, so
      // skip the right half of the window and only slide the window.
      // Result: [a1], a2, b1, b2
      //
      // 3) Next window is [a2, b1]; copy the right half of the window (b1 is
      // new) to End and slide the window.
      // Result: [a1, b1], b1, b2
      //
      // 4) Next window is [b1, b2]; same as 2), skip b2.
      // Result: [a1, b1], b1, b2
      // After resize, it will be [a, b].

      // Use pointers like below to do comparison inside the window
      //    [a b c a b c]
      //     |   | |   | |
      //     LeftBoundary Left Right Left+I Right+I
      // A duplication is found if Left < LeftBoundary.

      int32_t Right = I - 1;
      End = I;
      int32_t LeftBoundary = 0;
      while (Right + I < Context.size()) {
        // To avoid scanning a part of a sequence repeatedly, find the common
        // suffix of the two halves in the window. The common suffix will serve
        // as the common prefix of the next possible pair of duplicate
        // sequences. The non-common part will be ignored and never scanned
        // again.

        // For example:
        // Input: [a, b1], c1, b2, c2
        // I = 2
        //
        // 1) For the window [a, b1, c1, b2], the non-common-suffix of the
        // right part is 'c1'; copy it and only slide the window 1 step.
        // Result: [a, b1, c1], b2, c2
        //
        // 2) Next window is [b1, c1, b2, c2], so duplication happened.
        // Result after resize: [a, b, c]

        int32_t Left = Right;
        while (Left >= LeftBoundary && Context[Left] == Context[Left + I]) {
          // Find the longest suffix inside the window. When this stops, Left
          // points at the diverging point in the current sequence.
          Left--;
        }

        bool DuplicationFound = (Left < LeftBoundary);
        // Don't need to recheck the data before Right
        LeftBoundary = Right + 1;
        if (DuplicationFound) {
          // Duplication found, skip right half of the window.
          Right += I;
        } else {
          // Copy the non-common-suffix part of the adjacent sequence.
          std::copy(BeginIter + Right + 1, BeginIter + Left + I + 1,
                    BeginIter + End);
          End += Left + I - Right;
          // Only slide the window by the size of non-common-suffix
          Right = Left + I;
        }
      }
      // Don't forget the remaining part that hasn't been scanned.
      std::copy(BeginIter + Right + 1, Context.end(), BeginIter + End);
      End += Context.size() - Right - 1;
      I++;
      Context.resize(End);
      MaxDedupSize = std::min(static_cast<uint32_t>(End / 2), MaxDedupSize);
    }
  }
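Tying this back to the example in the commit message above, a minimal illustration of the compression (illustrative only; real contexts carry callsite locations, while plain strings are used here for readability):

```c++
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"
// Assumes llvm-profgen's ProfileGenerator.h is on the include path.
#include "ProfileGenerator.h"
using namespace llvm;

void compressRecursionExample() {
  SmallVector<StringRef, 16> Ctx = {"a", "a", "b", "c", "a",
                                    "b", "c", "b", "c", "d"};
  // I = 1 removes the adjacent "a","a"; I = 2 removes the repeated "b","c"
  // pair; I = 3 removes the repeated "a","b","c" triple.
  sampleprof::CSProfileGenerator::compressRecursionContext(Ctx, /*CSize=*/-1);
  // Ctx is now {"a", "b", "c", "d"}.
}
```

With a non-negative `CSize`, only repeated sequences up to that length are collapsed.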
private:
  void generateLineNumBasedProfile();

  FunctionSamples *getOrCreateFunctionSamples(ContextTrieNode *ContextNode,
                                              bool WasLeafInlined = false);

  // Look up or create the ContextTrieNode for the context; the FunctionSamples
  // is created inside this function.
  ContextTrieNode *getOrCreateContextNode(const SampleContextFrames Context,
                                          bool WasLeafInlined = false);

  // For profiled-only functions, compute their inline context function byte
  // size on demand, which is used by the pre-inliner.
  void computeSizeForProfiledFunctions();
[CSSPGO][llvm-profgen] Context-sensitive global pre-inliner
This change sets up a framework in llvm-profgen to estimate inline decisions and adjust the context-sensitive profile based on them. We call it a global pre-inliner in llvm-profgen.
It serves two purposes:
1) Since the context profile for a not-inlined context will be merged into the base profile, if we estimate that a context will not be inlined, we can merge the context profile in the output to save profile size.
2) For ThinLTO, when a context involving functions from different modules is not inlined, we can't merge function profiles across modules, leading to suboptimal post-inline count quality. By estimating some inline decisions, we can adjust/merge context profiles beforehand as a mitigation.
The compiler's inline heuristic uses inline cost, which is not available in llvm-profgen. But since inline cost is closely related to size, we can get an estimate through function size from debug info. Because the size we have in llvm-profgen is the final size, it could even be more accurate than the inline cost estimation in the compiler. A rough sketch of the shape of such a size-based decision is shown after this note.
This change only contains the framework, with a few TODOs left for follow-up patches to complete the implementation:
1) We need to retrieve the size of each function/inlinee from debug info for the inlining estimation. Currently we use the number of samples in a profile as a placeholder for the size estimate.
2) Currently the thresholds use the values of the sample loader inliner. They need to be tuned, since the size here is fully optimized machine code size instead of an inline cost based on not-yet-fully-optimized IR.
Differential Revision: https://reviews.llvm.org/D99146
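The sketch below shows the general shape of such a size-based pre-inline decision. Everything in it is hypothetical: the function name, parameters, and threshold values are made up for illustration and are not the thresholds or API of llvm-profgen's pre-inliner.

```c++
#include <cstdint>

// Hypothetical knobs; real values would come from (to-be-tuned) sample loader
// inliner style thresholds.
struct PreInlineParams {
  uint64_t HotCallsiteCount = 3000; // assumed hotness cutoff
  uint64_t SizeBudget = 500;        // assumed callee size budget
};

// Decide whether a callee context is likely to be inlined. CalleeSize may be a
// placeholder (e.g. number of samples) until real sizes come from debug info.
static bool shouldPreInline(uint64_t CallsiteCount, uint64_t CalleeSize,
                            const PreInlineParams &P) {
  if (CallsiteCount < P.HotCallsiteCount)
    return false; // cold callsite: not worth inlining
  return CalleeSize <= P.SizeBudget;
}
```

Contexts predicted as not inlined would then have their profiles merged into the callee's base profile to shrink the output, per purpose 1) above.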
  // Post processing for profiles before writing them out, such as merging
  // and trimming cold profiles, and running the preinliner on profiles.
  void postProcessProfiles();

  void populateBodySamplesForFunction(FunctionSamples &FunctionProfile,
                                      const RangeSample &RangeCounters);
  void populateBoundarySamplesForFunction(ContextTrieNode *CallerNode,
                                          const BranchSample &BranchCounters);

  void populateInferredFunctionSamples(ContextTrieNode &Node);

  void updateFunctionSamples();
  void generateProbeBasedProfile();
[CSSPGO] Split context string to deduplicate function name used in the context.
Currently context strings contain a lot of duplicated function names, and that significantly increases the profile size. This change splits the context into a series of {name, offset, discriminator} tuples, so function names used in the context can be replaced by an index into the name table, which significantly reduces the size consumed by the context.
A follow-up improvement made in the compiler and profiling tools is to avoid reconstructing full context strings, which is time- and memory-consuming. Instead, a context vector of `StringRef` is adopted to represent the full context in all scenarios. As a result, the previously prevalent profile map, which was keyed by `StringRef`, is now an unordered map keyed by `SampleContext`. `SampleContext` is reshaped to use an `ArrayRef` to represent a full context for the CS profile. For non-CS profiles, it falls back to using a `StringRef` to represent a contextless function name. Both the `ArrayRef` and `StringRef` objects are underpinned by real array and string objects stored in producer buffers. For the compiler, they are maintained by the sample reader. For llvm-profgen, they are maintained in `ProfiledBinary` and `ProfileGenerator`. Full context strings are generated only for debugging and printing.
When it comes to the profile format, nothing has changed for the text format, though internally a CS context is implemented as a vector. The extbinary format is only changed for the CS profile, with an additional `SecCSNameTable` section which stores all full contexts logically in the form of `vector<int>`, with each element being an offset that points into `SecNameTable`. All occurrences of contexts elsewhere are redirected to use the offset into `SecCSNameTable`.
Testing
This is a no-diff change in terms of code quality and profile content (for the text profile).
For our internal large service (aka ads), profile generation time is cut in half, with a 20x smaller string-based extbinary format generated.
The compile time of ads dropped by 25%.
Differential Revision: https://reviews.llvm.org/D107299
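To make the representation concrete, here is a rough sketch of what a split context frame could look like; the field and type names are invented for illustration and do not match the exact LLVM definitions (`SampleContextFrame`, `SampleContext`, etc.).

```c++
#include "llvm/ADT/SmallVector.h"
#include <cstdint>
using namespace llvm;

// One frame of a calling context: the function is referenced by an index into
// the profile name table instead of an embedded string.
struct ContextFrameSketch {
  uint32_t NameTableIndex; // index into SecNameTable
  uint32_t LineOffset;     // callsite line offset from the function start
  uint32_t Discriminator;  // callsite discriminator
};

// A full context such as `main:1 @ foo:2 @ bar` becomes a vector of frames;
// SecCSNameTable stores such vectors of offsets into SecNameTable.
using ContextSketch = SmallVector<ContextFrameSketch, 8>;
```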
  // Fill in function body samples from probes
  void populateBodySamplesWithProbes(const RangeSample &RangeCounter,
                                     const AddrBasedCtxKey *CtxKey);

  // Fill in boundary samples for a call probe
  void populateBoundarySamplesWithProbes(const BranchSample &BranchCounter,
                                         const AddrBasedCtxKey *CtxKey);
  ContextTrieNode *
  getContextNodeForLeafProbe(const AddrBasedCtxKey *CtxKey,
                             const MCDecodedPseudoProbe *LeafProbe);

  // Helper function to get FunctionSamples for the leaf probe
  FunctionSamples &
  getFunctionProfileForLeafProbe(const AddrBasedCtxKey *CtxKey,
                                 const MCDecodedPseudoProbe *LeafProbe);
  void convertToProfileMap(ContextTrieNode &Node,
                           SampleContextFrameVector &Context);

  void convertToProfileMap();

  void computeSummaryAndThreshold();

  bool collectFunctionsFromLLVMProfile(
      std::unordered_set<const BinaryFunction *> &ProfiledFunctions) override;

  void initializeMissingFrameInferrer();

  // Given an input `Context`, output `NewContext` with inferred missing tail
  // call frames.
  void inferMissingFrames(const SmallVectorImpl<uint64_t> &Context,
                          SmallVectorImpl<uint64_t> &NewContext);

  ContextTrieNode &getRootContext() { return ContextTracker.getRootContext(); };

  // The container holding the FunctionSamples used by the context trie.
  std::list<FunctionSamples> FSamplesList;

  // Underlying context table that serves the sample profile writer.
  std::unordered_set<SampleContextFrameVector, SampleContextFrameHash> Contexts;

  SampleContextTracker ContextTracker;

  bool IsProfileValidOnTrie = true;
public:
  // Deduplicate adjacent repeated context sequences up to a given sequence
  // length. -1 means no size limit.
  static int32_t MaxCompressionSize;

  static int MaxContextDepth;
};
} // end namespace sampleprof
} // end namespace llvm
#endif
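Finally, for orientation, a minimal sketch of how the class above could be driven (illustrative only: it assumes a `ProfiledBinary` and a `ContextSampleCounterMap` already populated by the perf reader and virtual unwinder, it omits writer setup and error handling, and `generateCSProfile` is a made-up wrapper name, not llvm-profgen's actual driver code):

```c++
// Assumes llvm-profgen's ProfileGenerator.h (and its dependencies) are
// available on the include path.
#include "ProfileGenerator.h"
using namespace llvm::sampleprof;

void generateCSProfile(ProfiledBinary *Binary,
                       const ContextSampleCounterMap *Counters) {
  CSProfileGenerator Generator(Binary, Counters);
  // Turns the unwound range/branch counters into context-sensitive
  // FunctionSamples recorded in the generator's profile map.
  Generator.generateProfile();
}
```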