The comment about the stripping of suffixes when creating the indexed
MemProf profile was partially incorrect, as we do not strip ".__uniq."
suffixes by default (by design). Update the comment accordingly.
Normally SampleContext does not allow using an empty StringRef to
construct an object; this is to prevent bugs when reading the profile.
However, an empty name may be emitted for a function whose name is
intentionally set to empty, or by a bug in the remapper that returns an
empty string. Regardless, converting it to FunctionId first will prevent
the assert, and that assert check is unnecessary anyway, which will be
addressed in another patch.
Refactor this function to take a callback for each decoded string, rename it, and change it to a static function in the .cpp file. Move its (sole) caller's definition from the header to the .cpp file.
- This is split out from https://github.com/llvm/llvm-project/pull/66825 to minimize the diff of that big PR.
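A minimal sketch of the callback-based shape, with a hypothetical name and a plain separator standing in for the real (possibly compressed) encoding:

  #include <functional>
  #include <string>

  // Hypothetical stand-in for the refactored decoder: hand each decoded
  // string to the callback and stop on the first failure.
  static bool forEachDecodedString(
      const std::string &Blob,
      const std::function<bool(const std::string &)> &Callback) {
    size_t Pos = 0;
    while (Pos < Blob.size()) {
      size_t End = Blob.find('\1', Pos);
      if (End == std::string::npos)
        End = Blob.size();
      if (!Callback(Blob.substr(Pos, End - Pos)))
        return false; // Propagate the callback's failure.
      Pos = End + 1;
    }
    return true;
  }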
Part 1 of 3. This includes the LLVM back-end processing and profile
reading/writing components. compiler-rt changes are included.
Differential Revision: https://reviews.llvm.org/D138846
- For instance, `collectPGOFuncNameStrings` is reused a lot in https://github.com/llvm/llvm-project/pull/66825 to collect the compressed vtable names, and at some of the added call sites it is confusing to see 'func' when the context clearly shows the strings are not function names. The function currently just takes a list of strings as input, so rename it to `collectGlobalObjectNameStrings`.
- Do the rename in a standalone patch since the method is used in non-LLVM codebases. It's easier to roll back this NFC in case the rename in those codebases takes longer.
This is phase 2 of the MD5 refactoring on Sample Profile following
https://reviews.llvm.org/D147740
In the previous implementation, when an MD5 sample profile is read, the
reader first converts the MD5 values to strings, then creates StringRefs
as if the numerical strings were regular function names, and later on
IPO transformation passes perform string comparisons over these
numerical strings for profile matching. This is inefficient since it
causes many small heap allocations.
In this patch I created a class `ProfileFuncRef` that is similar to
`StringRef` but can represent a hash value directly without any
conversion, and it is more efficient (I will attach some benchmark
results later) when used in associative containers.
ProfileFuncRef guarantees that the same function name in string form or in
MD5 form has the same hash value, which also fixes a few issues in IPO
passes where function matching/lookup only checks the function name
string and returns a no-match if the profile is MD5.
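A hedged sketch of the core idea (not the actual class): hold either the name or its MD5 value and hash both forms identically, so lookups agree regardless of the profile format.

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/MD5.h"
  #include <cstdint>

  // Sketch only: the real class has more state and integration points.
  class FuncRefSketch {
    llvm::StringRef Name; // Empty when only the MD5 value is available.
    uint64_t HashCode;    // MD5-based hash, identical for both forms.

  public:
    explicit FuncRefSketch(llvm::StringRef Name)
        : Name(Name), HashCode(llvm::MD5Hash(Name)) {}
    explicit FuncRefSketch(uint64_t MD5Value) : HashCode(MD5Value) {}

    llvm::StringRef getName() const { return Name; }
    // Containers key on this, so "foo" and MD5Hash("foo") collide as intended.
    uint64_t getHashCode() const { return HashCode; }
    bool operator==(const FuncRefSketch &RHS) const {
      return HashCode == RHS.HashCode;
    }
  };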
When testing on a large internal profile (> 1 GB, with more than 10
million functions), the full profile load time is reduced from 28 sec to
25 sec on average, and reading the function offset table goes from 0.78s to 0.7s.
Note that llvm::support::endianness has been renamed to
llvm::endianness while becoming an enum class. This patch replaces
{big,little,native} with llvm::endianness::{big,little,native}.
This patch completes the migration to llvm::endianness and
llvm::endianness::{big,little,native}. I'll post a separate patch to
remove the migration helpers in llvm/Support/Endian.h:
using endianness = llvm::endianness;
constexpr llvm::endianness big = llvm::endianness::big;
constexpr llvm::endianness little = llvm::endianness::little;
constexpr llvm::endianness native = llvm::endianness::native;
Note that llvm::support::endianness has been renamed to
llvm::endianness while becoming an enum class as opposed to an
enum. This patch replaces support::{big,little,native} with
llvm::endianness::{big,little,native}.
propagateCounts computes unmeasured arc counts (see
commit b9d086693b5baebc477793af0d86a447bae01b6f).
In an x86-64 build using -O3 -fno-omit-frame-pointer, propagateCounts uses 80
bytes per stack frame. If a function contains 1e5 basic blocks on a tree path
(Kirchhoff's circuit law optimization), the used stack space will be 8MB (the
default ulimit -s in many configurations). (In a -O0 build, a stack frame costs
224 bytes.) 1e5 is ample for most configurations. However, for library users
using threads (e.g. in RPC handlers), a remaining thread stack of 64KiB allows
just 819 stack frames, which is too limited.
Switch to an iterative form to avoid stack overflow issues. The iterative form
matches other iterative functions in this file
(https://reviews.llvm.org/D93073).
Alternative to #68455
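A minimal sketch of the general technique, assuming the work is a post-order walk over a tree of nodes (hypothetical types and names, not the actual propagateCounts code): the recursion is replaced by an explicit heap-allocated worklist, so the traversal depth is no longer bounded by the thread stack.

  #include <cstdint>
  #include <utility>
  #include <vector>

  struct Node {
    uint64_t Count = 0;
    std::vector<Node *> Children;
  };

  // Iterative post-order traversal using an explicit stack of
  // (node, next-child-index) pairs.
  uint64_t sumCountsIteratively(Node *Root) {
    if (!Root)
      return 0;
    uint64_t Total = 0;
    std::vector<std::pair<Node *, size_t>> Stack{{Root, 0}};
    while (!Stack.empty()) {
      auto &[N, NextChild] = Stack.back();
      if (NextChild < N->Children.size()) {
        // Descend into the next unvisited child.
        Stack.emplace_back(N->Children[NextChild++], 0);
      } else {
        // All children done: perform the post-order work for this node.
        Total += N->Count;
        Stack.pop_back();
      }
    }
    return Total;
  }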
Note that llvm::support::endianness has been renamed to
llvm::endianness while becoming an enum class as opposed to an enum.
This patch replaces llvm::support::{big,little,native} with
llvm::endianness::{big,little,native}.
Now that llvm::support::endianness has been renamed to
llvm::endianness, we can use the shorter form. This patch replaces
support::endianness with llvm::endianness.
Now that llvm::support::endianness has been renamed to
llvm::endianness, we can use the shorter form. This patch replaces
support::endianness::{big,little,native} with
llvm::endianness::{big,little,native}.
Now that llvm::support::endianness has been renamed to
llvm::endianness, we can use the shorter form. This patch replaces
llvm::support::endianness with llvm::endianness.
With the recent redefinition of llvm::endianness::native, it is equal
to either llvm::endianness::big or llvm::endianness::little depending
on the host endianness. Since getHostEndianness just returns
llvm::endianness::native, this patch removes the function and
"constant propagates" llvm::endianness:native.
- This function looks up MD5ToNameMap to return a name for a given MD5
value. https://github.com/llvm/llvm-project/pull/66825 adds the MD5 hashes
of global variable names to this map, so rename the methods and update the comments.
This causes stack overflows for real-world coverage reports.
Tested with build/bin/llvm-lit -a llvm/test/tools/llvm-cov
Co-authored-by: Qing Shen <qingshen@google.com>
This reverts commit 32db121b29f78e4c41116b2a8f1c730f9522b202 and subsequent commits.
This causes a time regression in llvm-cov even with debug info correlation off.
This function uses a 16-byte buffer to encode two 64-bit values. However,
the encoding method requires up to 10 bytes to encode one 64-bit value, so
the encoded data may actually require 20 bytes in total.
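Assuming the encoding in question is (U)LEB128, each output byte carries 7 payload bits, so a 64-bit value needs up to ceil(64 / 7) = 10 bytes and two values need up to 20. A minimal sketch:

  #include <cstddef>
  #include <cstdint>

  // ULEB128-style encoding: 7 payload bits per byte, high bit marks "more".
  static size_t encodeULEB128Sketch(uint64_t Value, uint8_t *Out) {
    size_t N = 0;
    do {
      uint8_t Byte = Value & 0x7f;
      Value >>= 7;
      if (Value != 0)
        Byte |= 0x80; // More bytes follow.
      Out[N++] = Byte;
    } while (Value != 0);
    return N; // At most 10 for a 64-bit value.
  }

  // A correctly sized buffer for two 64-bit values:
  //   uint8_t Buffer[20];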
Based on disassembly and profiling results, it looks like the costs of
std::error_code and llvm::ErrorOr<> are non-trivial, as they involve
virtual function calls that are not optimized away at -O3.
This patch moves the error handling logic into the conditional branch used
for error checking, so a std::error_code is generated only on an actual
error rather than falling through to the default case that generates
sampleprof_error::success.
In the next patch I am looking to change the API to use a plain old
enum for error checking instead, because based on profiling results
error handling related code accounts for 7-8% of the profile loading
time even when there is no error at all.
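A minimal sketch of the pattern with hypothetical names (not the actual reader code): the error object is constructed only on the failing path, while the success path returns a default-constructed error_code.

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <system_error>

  // Hypothetical reader step: copy a u64 out of a byte buffer.
  static std::error_code readU64Sketch(const uint8_t *&Ptr, const uint8_t *End,
                                       uint64_t &Value) {
    if (static_cast<std::size_t>(End - Ptr) < sizeof(uint64_t)) {
      // Error path: only here do we pay for constructing a real error code.
      return std::make_error_code(std::errc::invalid_argument);
    }
    std::memcpy(&Value, Ptr, sizeof(uint64_t));
    Ptr += sizeof(uint64_t);
    // Success path: a default-constructed error_code, no enum conversion.
    return std::error_code();
  }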
This patch fixes:
llvm/lib/ProfileData/Coverage/CoverageMapping.cpp:219:5: error:
default label in switch which covers all enumeration values
[-Werror,-Wcovered-switch-default]
This causes stack overflows for real-world coverage reports.
Ran $ build/bin/llvm-lit -a llvm/test/tools/llvm-cov locally and it passed.
Co-authored-by: Qing Shen <qingshen@google.com>
This seems to cause Clang to crash; see comments on the code review. Reverting
until the problem can be investigated.
> Part 1 of 3. This includes the LLVM back-end processing and profile
> reading/writing components. compiler-rt changes are included.
>
> Differential Revision: https://reviews.llvm.org/D138846
This reverts commit a50486fd736ab2fe03fcacaf8b98876db77217a7.
Part 1 of 3. This includes the LLVM back-end processing and profile
reading/writing components. compiler-rt changes are included.
Differential Revision: https://reviews.llvm.org/D138846
Debug info correlation is an option in the InstrProfiling pass, which is used by
both IR instrumentation and front-end instrumentation. So Clang coverage can
also benefit from the binary size savings it provides.
Reviewed By: ellis
Differential Revision: https://reviews.llvm.org/D157913
The current llvm-cov error message for kind `coveragemap_error::malformed` just gives the issue kind without any reason for what caused the issue. This patch is aimed at improving the llvm-cov error message to help identify what caused the issue.
Reviewed By: MaskRay
Closes: https://github.com/llvm/llvm-project/pull/65264
Add a MemProfReader base class which can be used directly where
symbolization and processing of a raw profile are unnecessary.
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D159141
Canonicalize the function name (strip suffixes, etc.) to ensure that
function name suffixes added by late-stage passes do not cause
mismatches when MemProf profile data is consumed.
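A hedged sketch of the idea with a hypothetical helper (not the actual MemProf matching code): drop a late-stage suffix such as ".llvm.<hash>" so the IR name lines up with the name recorded in the profile; which suffixes are stripped is a policy choice (e.g. ".__uniq." may be kept on purpose).

  #include <string>

  // Hypothetical helper: strip a ".llvm.*" suffix appended by late passes.
  static std::string canonicalNameSketch(const std::string &Name) {
    std::string::size_type Pos = Name.find(".llvm.");
    return Pos == std::string::npos ? Name : Name.substr(0, Pos);
  }

  // canonicalNameSketch("foo.llvm.1234") == "foo", so it matches the
  // profiled name "foo" even after the suffix was added.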
Reviewed By: tejohnson
Differential Revision: https://reviews.llvm.org/D159132
`llvm-cov convert-for-testing` only functions properly when the input binary contains a single source file. When the binary has multiple source files, a `Malformed coverage data` error occurs when the generated .covmapping is read back. This is because the testing format lacks support for indicating the size of its file records, and the current implementation just assumes there is only one record in it. This patch fixes the problem by introducing a new testing format version.
Changes to the code:
- Add a new format version. The version number is stored in the last 8 bytes of the original magic number field to be backward-compatible.
- Output a LEB128 number before the file records section to indicate its size in the new version (see the sketch after this list).
- Change the format parsing code correspondingly.
- Update the document to formalize the testing format.
- Additionally, fix the bug when converting COFF binaries.
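A minimal sketch of the size prefix mentioned above, assuming LLVM's ULEB128 helpers and using hypothetical writer/reader names:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/LEB128.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cstdint>

  // Writer side: prefix the file records section with its byte size.
  void writeFileRecordsSketch(llvm::raw_ostream &OS, llvm::StringRef Records) {
    llvm::encodeULEB128(Records.size(), OS);
    OS << Records;
  }

  // Reader side: decode the prefix to learn where the records end.
  uint64_t readFileRecordsSizeSketch(const uint8_t *Buf, unsigned &PrefixLen) {
    return llvm::decodeULEB128(Buf, &PrefixLen);
  }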
Reviewed By: phosek, gulfem
Differential Revision: https://reviews.llvm.org/D156611