Scanning functions without CFG information, as well as detecting
authentication oracles, requires introducing more classes related to
register state analysis. To make the future code easier to understand,
rename several classes beforehand.
To detect authentication oracles, one has to query the properties of the
*output* operands of authentication instructions *after* the instruction
is executed. This requires adding another analysis that iterates over
the instructions in reverse order, along with a corresponding state class.
As the defining characteristic of the existing `State` class is that it
stores the properties of the source register operands *before* the
instruction's execution, rename it to `SrcState` and rename
`PacRetAnalysis` to `SrcSafetyAnalysis`.
Apply minor adjustments to the debug output along the way.
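The forward/backward split described above can be illustrated with a minimal,
self-contained sketch; every type and name below is a hypothetical stand-in
that mirrors the idea, not BOLT's actual classes:

```cpp
#include <vector>

struct Insn {
  unsigned SrcReg, DstReg; // simplified: one source and one destination
};

struct SrcState {}; // per-register facts that hold *before* an instruction
struct DstState {}; // per-register facts that hold *after* an instruction

void runSrcSafetyAnalysis(const std::vector<Insn> &Insns) {
  SrcState S;
  for (auto It = Insns.begin(); It != Insns.end(); ++It) {
    // Query S about It->SrcReg here (state before execution), then apply the
    // instruction's effect to S.
  }
}

void runDstSafetyAnalysis(const std::vector<Insn> &Insns) {
  DstState S;
  for (auto It = Insns.rbegin(); It != Insns.rend(); ++It) {
    // Query S about It->DstReg here (state after execution), then propagate
    // the state backwards through the instruction.
  }
}
```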
In addition to authenticated pointers, consider the contents of a
register safe if the register was
* written by a PC-relative address computation
* updated by an arithmetic instruction whose input address is safe (see
  the sketch below)
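A minimal sketch of this rule, with a hypothetical enum and parameters
standing in for BOLT's real register-state tracking:

```cpp
// Hypothetical classification of the last write to a register; not BOLT's
// actual data structures.
enum class WriteKind { Authenticated, PCRelativeAddress, Arithmetic, Other };

bool isRegisterContentSafe(WriteKind LastWrite, bool ArithmeticInputWasSafe) {
  switch (LastWrite) {
  case WriteKind::Authenticated:     // authenticated pointer
  case WriteKind::PCRelativeAddress: // e.g. result of ADR/ADRP
    return true;
  case WriteKind::Arithmetic:        // e.g. ADD on an already-safe address
    return ArithmeticInputWasSafe;
  case WriteKind::Other:
    return false;
  }
  return false;
}
```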
TLS relocations may not have a valid BOLT symbol associated with them.
While symbolizing the operand, we were checking the symbol value, and
since there was no symbol, the check resulted in a crash.
Handle the TLS case while performing operand symbolization on AArch64.
In 96e5ee2, I inadvertently broke the way non-trivial symbol references
got updated from non-optimized code. The breakage was a consequence of
`getTargetSymbol(MCExpr *)` not returning a symbol when the parameter
was a binary expression. Fix `getTargetSymbol()` to cover such cases.
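A hedged sketch of the idea, using standard LLVM MC classes (an
illustration, not necessarily the exact code of the fix): look through a
binary expression such as `Symbol + Addend` and take the symbol from its
left-hand side.

```cpp
#include "llvm/MC/MCExpr.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

// Illustrative only: return the symbol referenced by Expr, looking through
// binary expressions that carry the symbol on the left-hand side and a
// constant addend on the right, e.g. "foo + 4".
static const MCSymbol *getTargetSymbolSketch(const MCExpr *Expr) {
  if (const auto *SymExpr = dyn_cast<MCSymbolRefExpr>(Expr))
    return &SymExpr->getSymbol();
  if (const auto *BinExpr = dyn_cast<MCBinaryExpr>(Expr))
    return getTargetSymbolSketch(BinExpr->getLHS());
  return nullptr;
}
```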
In lite mode, we only emit code for a subset of functions while
preserving the original code in .bolt.org.text. This requires updating
code references in non-emitted functions to ensure that:
* Non-optimized versions of the optimized code never execute.
* Function pointer comparison semantics is preserved.
On x86-64, we can update code references in-place using "pending
relocations" added in scanExternalRefs(). However, on AArch64, this is
not always possible due to address range limitations and linker address
"relaxation".
There are two types of code-to-code references: control transfer (e.g.,
calls and branches) and function pointer materialization.
AArch64-specific control transfer instructions are covered by #116964.
For function pointer materialization, simply changing the immediate
field of an instruction is not always sufficient. In some cases, we need
to modify a pair of instructions, e.g. to undo linker relaxation by
converting a NOP+ADR pair back into an ADRP+ADD sequence.
To achieve this, we use the instruction patch mechanism instead of
pending relocations. Instruction patches are emitted via the regular MC
layer, just like regular functions. However, they have a fixed address
and do not have an associated symbol table entry. This allows us to make
more complex changes to the code, ensuring that function pointers are
correctly updated. Such a mechanism should also be portable to RISC-V
and other architectures.
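Conceptually, an instruction patch pairs a pinned address with the
instructions to emit there through the MC layer; the struct below is a
hypothetical illustration rather than BOLT's actual patch class:

```cpp
#include "llvm/MC/MCInst.h"
#include <cstdint>
#include <vector>

// Hypothetical illustration of what a patch carries: a fixed address in the
// original text and the replacement instructions (e.g. ADRP+ADD in place of a
// relaxed NOP+ADR). It is emitted like a regular function body, but pinned to
// this address and without a symbol table entry of its own.
struct InstructionPatchSketch {
  uint64_t Address;
  std::vector<llvm::MCInst> Instructions;
};
```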
To summarize, for AArch64, we extend the scanExternalRefs() process to
undo linker relaxation and use instruction patches to partially
overwrite unoptimized code.
In preparation for implementing support for the detection of
non-protected call instructions, refine the definition of the state
computed for each register by the data-flow analysis.
Explicitly marking the registers that are known to be trusted at
function entry is crucial for finding non-protected calls. In addition,
it fixes less common false negatives for pac-ret, such as `ret x1` in
the `f_nonx30_ret_non_auted` test case.
In preparation for adding more gadget kinds to detect, streamline
issue reporting.
Rename the classes representing issue reports. In particular, rename the
`Annotation` base class to `Report`, as it no longer has anything to do
with "annotations" in `MCPlus` terms. Remove references to "return
instructions" from variable names and report messages and use generic
terms instead. Rename `NonPacProtectedRetAnalysis` to `PAuthGadgetScanner`.
Remove `GeneralDiagnostic` as a separate class and make `GenericReport`
(formerly `GenDiag`) store `std::string Text` directly. Remove the unused
`operator=` and `operator==` methods, as `Report`s are created on the
heap and referenced via `shared_ptr`s.
Introduce the `GadgetKind` class; currently, it only wraps a `const char *`
description to display to the user. This description is intended to be a
per-gadget-kind constant (or one of a few hard-coded constants), so there
is no need to store it in a `std::string` field in each report instance.
To handle both free-form `GenericReport`s and statically allocated
messages without unnecessary overhead, move the printing of the report
header to the base class (and take the message argument as a `StringRef`).
Factor out the code for mapping from physical registers to consecutive
array indexes.
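The kind of mapping being factored out can be pictured with a small,
hypothetical helper (not the actual class introduced by the change):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: translate sparse physical register numbers into dense,
// consecutive indexes, so per-register state can live in a plain array.
class RegToIndexMapSketch {
  std::vector<int> RegToIndex; // -1 means the register is not tracked
  std::size_t NumTracked = 0;

public:
  explicit RegToIndexMapSketch(std::size_t NumRegs) : RegToIndex(NumRegs, -1) {}

  void track(unsigned Reg) {
    if (RegToIndex[Reg] == -1)
      RegToIndex[Reg] = static_cast<int>(NumTracked++);
  }

  int indexOf(unsigned Reg) const { return RegToIndex[Reg]; }
  std::size_t size() const { return NumTracked; }
};
```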
Introduce helper functions to print instructions and registers to
prevent mixing of analysis logic and implementation details of debug
output.
Remove the debug printing from `Gadget::generateReport`, as it does not
add important information beyond what is already printed in the report
itself.
Create entry points for addresses referenced by dynamic relocations and
allow getNewFunctionOrDataAddress to map addresses inside functions. By
adding addresses referenced by dynamic relocations as entry points, this
patch fixes an issue where BOLT fails on code using computed gotos.
This also fixes a mapping issue with the bugfix from this PR:
https://github.com/llvm/llvm-project/pull/117766.
When the linker relaxes a GOT load, it changes an ADRP+LDR instruction
pair into ADRP+ADD. It is relatively straightforward to detect and
symbolize the second instruction in the disassembler. However, it is not
always possible to properly symbolize the ADRP instruction without
looking at the second instruction. Hence, we have the FixRelaxationPass
that adjusts the operand of ADRP by looking at the corresponding ADD.
This PR tries to properly symbolize ADRP earlier in the pipeline, i.e.
in AArch64MCSymbolizer. This change makes it easier to adjust the
instruction once we add AArch64 support in `scanExternalRefs()`.
Additionally, we get the benefit of seeing proper operands while
observing the function state prior to running FixRelaxationPass.
To disambiguate the operand of ADRP that has a GOT relocation against
it, we look at the contents/value of the operand. If it contains an
address of a page that is valid for GOT, we assume that the operand
wasn't modified by the linker and leave it up to FixRelaxationPass to do
a proper adjustment. If the page referenced by ADRP cannot point to GOT,
then it's an indication that the linker has modified the operand and we
substitute the operand with a non-GOT reference to the symbol.
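The disambiguation can be summarized with a self-contained sketch (the
constants and names below are hypothetical; the surrounding symbolizer
plumbing is omitted): ADRP materializes `page(PC) + (imm << 12)`, so we check
whether that page can fall inside the GOT.

```cpp
#include <cstdint>

// Return true if the page computed by ADRP may point into the GOT, in which
// case the GOT reference is kept and FixRelaxationPass does the adjustment;
// otherwise the operand is assumed to have been modified by the linker.
bool adrpMayTargetGOT(uint64_t InstAddress, int64_t PageDelta,
                      uint64_t GOTStart, uint64_t GOTEnd) {
  constexpr uint64_t PageMask = ~uint64_t(0xFFF);
  const uint64_t TargetPage =
      (InstAddress & PageMask) + (static_cast<uint64_t>(PageDelta) << 12);
  return TargetPage >= (GOTStart & PageMask) &&
         TargetPage <= (GOTEnd & PageMask);
}
```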
When an indirect branch instruction is decoded, the analyzeIndirectBranch
method is asked whether it matches a well-known code pattern. On AArch64,
the only special pattern that is detected is a jump table, emitted as a
branch to the sum of a constant base address and a variable offset.
Therefore, if `Inst.getOpcode()` is one of `AArch64::BRA*`, Inst cannot
belong to such a jump table pattern, and we return early.
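A sketch of the early return; the opcode names come from LLVM's AArch64
backend, while the surrounding `analyzeIndirectBranch` logic and its return
type are abbreviated here:

```cpp
#include "llvm/MC/MCInst.h"
// AArch64::BRAA and friends are the AArch64 backend's generated opcode
// values (AArch64GenInstrInfo.inc).

static bool isAuthenticatedIndirectBranch(const llvm::MCInst &Inst) {
  switch (Inst.getOpcode()) {
  case AArch64::BRAA:
  case AArch64::BRAB:
  case AArch64::BRAAZ:
  case AArch64::BRABZ:
    return true; // cannot be part of the jump table pattern: bail out early
  default:
    return false;
  }
}
```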
These tests have been failing since:
commit 1cfca53b9f2eadbf864b85995ec7f819d7f29b5e
Author: Arthur Eubanks <aeubanks@google.com>
Date: Wed Mar 12 16:20:13 2025 -0700
This patch works around the failures by removing some FileCheck
directives. Hopefully, BOLT folks can chime in and commit a proper fix.
Add two additional profile quality stats for CG (call graph) and CFG
(control flow graph) flow conservations besides the CFG discontinuity
stats introduced in #109683. The two new stats quantify how different
"in-flow" is from "out-flow" in the following cases where they should be
equal. The smaller the reported stats, the better the flow conservation
is.
CG flow conservation: for each function that is not a program entry, the
number of times the function is called according to CG ("in-flow")
should be equal to the number of times the transition from an entry
basic block of the function to another basic block within the function
is recorded ("out-flow").
CFG flow conservation: for each basic block that is not a function entry
or exit, the number of times the transition into this basic block from
another basic block within the function is recorded ("in-flow") should
be equal to the number of times the transition from this basic block to
another basic block within the function is recorded ("out-flow").
Use `-v=1` for more detailed bucketed stats, and use `-v=2` to dump
functions / basic blocks with bad flow conservation.
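One plausible way to quantify a per-function (CG) or per-block (CFG)
conservation gap is sketched below; this is an illustration, not necessarily
the exact formula behind the reported stats.

```cpp
#include <algorithm>
#include <cstdint>

// Relative gap between in-flow and out-flow: 0.0 means perfectly conserved,
// values close to 1.0 mean one side dwarfs the other.
double flowConservationGap(uint64_t InFlow, uint64_t OutFlow) {
  const uint64_t MaxFlow = std::max(InFlow, OutFlow);
  if (MaxFlow == 0)
    return 0.0; // no recorded flow, nothing to compare
  const uint64_t Diff = InFlow > OutFlow ? InFlow - OutFlow : OutFlow - InFlow;
  return static_cast<double>(Diff) / static_cast<double>(MaxFlow);
}
```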
A BOLT-instrumented binary today has a segment that is readable (R),
writeable (W), and executable (X), which the Android system won't load
due to its WX attribute. Such an RWX segment is produced because BOLT
performs two-step linking: first for everything in the updated or
rewritten input binary, and next for the runtime library. Each linking
step lays out sections in the order of RX sections, followed by RO
sections, and then RW sections. So we could end up with the RW section
`.bolt.instr.counters` surrounded by a number of RO and RX sections; a
new text segment was then formed by including all RX sections, which
pulled in the RW section in the middle and hence produced the RWX
segment. One way to fix this is to separate the RW `.bolt.instr.counters`
section into its own segment by a) assigning regular page-aligned
starting addresses to `.bolt.instr.counters` and to the section that
follows it, and b) creating two extra program headers accordingly.
When processing BOLTed binaries with BAT section, we used to
indiscriminately use `BAT->getFallthroughsInTrace` to record
fall-throughs, even if the function is not covered by BAT.
Fix that by using non-BAT CFG-based `getFallthroughsInTrace` if the
function is not in BAT.
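A sketch of the selection logic with hypothetical types (only
`getFallthroughsInTrace` is named by the change itself): consult BAT only for
functions it covers, otherwise fall back to the CFG-based implementation.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

using FallthroughSketch = std::pair<uint64_t, uint64_t>; // (start, end)

struct FallthroughSourceSketch {
  virtual ~FallthroughSourceSketch() = default;
  virtual bool covers(uint64_t FuncAddress) const = 0;
  virtual std::vector<FallthroughSketch>
  getFallthroughsInTrace(uint64_t From, uint64_t To) const = 0;
};

std::vector<FallthroughSketch>
collectFallthroughs(const FallthroughSourceSketch *BAT,
                    const FallthroughSourceSketch &CFGBased,
                    uint64_t FuncAddress, uint64_t From, uint64_t To) {
  // Only trust BAT for functions it actually translates.
  if (BAT && BAT->covers(FuncAddress))
    return BAT->getFallthroughsInTrace(From, To);
  return CFGBased.getFallthroughsInTrace(From, To);
}
```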
Test Plan: updated bolt-address-translation-yaml.test
BOLT used to mark multi-entry functions non-simple in non-relocation
mode with the reasoning that we can't move them due to potentially
undetected references. However, this reasoning does not apply in
aggregation mode, as BOLT does not perform optimizations there.
Relax this constraint in the case of an aggregation job.
Test Plan: added entry-point-fallthru.s
... which is caused by a seemingly recent change in BOLT's basic block
calculation, where function calls now appear to end basic blocks. I don't
have a pointer to the commit that caused this change; I'll be looking for
that later. For now, I'm trying to get the regression tests passing
again.
Traces are triplets of branch source, target, and fall-through end (next
branch).
Traces simplify the differentiation of fall-throughs into local- and
external-origin ones, which improves performance over a profile with
undifferentiated fall-throughs by eliminating profile discontinuity in
call-to-continuation fall-throughs. This makes it possible to avoid
converting return profile into call-to-continuation profile, which may
introduce statistical biases.
The existing format makes provisions for local- (F) and external- (f)
origin fall-throughs, but the profile producer needs to know function
boundaries. BOLT has that information readily available, so providing
the origin branch of a fall-through is a functional replacement of the
fall-through kind (f or F). This also has the effect of combining
branches and fall-throughs into a single record.
As traces subsume other pre-aggregated profile kinds, BOLT may drop
support for them soon. Users of the pre-aggregated profile format are
advised to migrate to the trace format.
Test Plan: Updated callcont-fallthru.s
When printing disassembly of a function with constant islands, include
the island info in the dump.
At the moment, only print islands in the pre-CFG state, including
islands that are interleaved with instructions.
A few tests generate a statically-linked position-independent executable
with `-nostdlib -Wl,--unresolved-symbols=ignore-all -pie` (`%clang`) and
test PLT handling. (--unresolved-symbols=ignore-all suppresses undefined
symbol errors and serves as a convenience hack.)
This relies on an unguaranteed linker behavior: a statically-linked PIE
does not necessarily generate PLT entries.
While current lld generates a PLT entry, it will change to suppress the
PLT entry to simplify internal handling and improve consistency.
(The behavior is not consistent in GNU ld: some ports generate a .dynsym
entry while others don't, and while most seem to generate a PLT entry,
some ports use a weird `R_*_NONE` relocation.)
When a function has an indirect branch with unknown control flow, we
preserve nops in order to keep all instruction offsets (from the start
of the function) the same in case the indirect branch is used by a
PC-relative jump table. However, when we know the control flow of the
function, we should be able to safely remove nops.
The code for jump table detection on AArch64 asserts liberally whenever
the input instruction sequence does not match the expected pattern. As a
result, BOLT fails to process binaries with such sequences instead of
ignoring functions with unknown control flow.
Remove asserts in analyzeIndirectBranchFragment() and mark indirect
jumps as instructions with unknown control flow instead.
This fixes the following tests, which fail when clang is compiled with
`-DENABLE_LINKER_BUILD_ID=ON`:
BOLT :: AArch64/check-init-not-moved.s
BOLT :: X86/dwarf5-dwarf4-types-backward-forward-cross-reference.test
BOLT :: X86/dwarf5-locexpr-referrence.test
- **Reapply "[BOLT] Add --pad-funcs-before=func:n (#117924)"**
- **[BOLT] Fix --pad-funcs{,-before} state misinteraction**
When --pad-funcs-before was introduced, it brought in a bug whereby
whichever of the two options was parsed first could influence the other.
Ensure that each has its own state, and test that they don't interact in
this manner by checking how the `_subsequent` symbol moves when both
arguments are supplied with different padding values.
Fixed by having a separate function (and static state) for each of
before/after.
14dcf8214f9c66172d17c1cfaec6aec0030748e0 introduced a subtle bug with
the static `FunctionPadding` map.
If either `opts::FunctionPadSpec` or `opts::FunctionPadBeforeSpec` is
set, the map is populated with the respective spec on the first
invocation of `BinaryEmitter::emitFunction`. Subsequent invocations then
pick up the padding from the map irrespective of whether
`opts::FunctionPadSpec` or `opts::FunctionPadBeforeSpec` is passed as the
parameter.
This breaks an internal test, hence reverting the patch.
This patch makes sure that `BinaryContext::printInstruction` prints the
preferred disassembly. Preferred disassembly only gets printed when
there are no annotations on the MCInst. Therefore, this patch
temporarily removes the annotations before printing the instruction.
A few examples of before and after on AArch64 instructions are as
follows:
```
BEFORE AFTER
(preferred disassembly)
ret x30 ret
orr x30, xzr, x0 mov x30, x0
hint #29 autiasp
hint #12 autia1716
```
Clearly, the preferred disassembly is easier for developers to read, and
is the disassembly that tools should be printing.
This patch is motivated as part of future work on the
llvm-bolt-binary-analysis tool, making sure that the reports it prints
do use preferred disassembly.
This patch was cherry-picked from
https://github.com/kbeyls/llvm-project/tree/bolt-gadget-scanner-prototype.
In the current patch, this only affects existing RISC-V test cases.
It will also improve the test cases in future patches that introduce a
binary analysis for llvm-bolt-binary-analysis checking for correct
application of pac-ret (pointer authentication on return addresses).
Identical Code Folding (ICF) folds functions that are identical into one
function and updates symbol addresses to the new address. This reduces
the size of a binary, but can lead to problems, for example when
function pointers are compared. Such comparisons can appear either
explicitly in the code or in IR generated by optimization passes like
Indirect Call Promotion (ICP). After ICF, what used to be two different
addresses becomes the same address, which can lead to a different code
path being taken.
This is where safe ICF comes in. The linker (LLD) implements it using
the address-significance table (the `.llvm_addrsig` section) generated by
clang: if a symbol is listed in it, or an object does not have this
section at all, its symbols are not folded. BOLT does not have the
information regarding which objects lack this section, so it can't reuse
this mechanism.
This implementation scans the code sections and conservatively marks
function symbols as unsafe. It treats symbols as unsafe if they are used
in a non-control-flow instruction. It also scans through the data
relocation sections and does the same for relocations that reference a
function symbol. The latter handles the case where a function pointer is
stored in a local or global variable, etc. If a relocation address points
within a vtable, these symbols are skipped.
The current implementation of `lookupStubFromGroup` is incorrect. The
function is intended to find and return the closest stub using
`lower_bound`, which identifies the first element in a sorted range that
is not less than a specified value. However, if such an element is not
found within `Candidates` and the list is not empty, the function
returns `nullptr`. Instead, it should check whether the last element
satisfies the condition.
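A self-contained sketch of the corrected lookup (the types, fields, and the
exact "closeness" condition are hypothetical stand-ins for BOLT's real ones):
when `lower_bound` finds nothing at or after the address, fall back to
checking the last candidate instead of returning `nullptr` right away.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct StubSketch {
  uint64_t Address;
};

const StubSketch *lookupClosestStub(const std::vector<StubSketch> &Candidates,
                                    uint64_t Address, uint64_t MaxDistance) {
  if (Candidates.empty())
    return nullptr;
  auto It = std::lower_bound(
      Candidates.begin(), Candidates.end(), Address,
      [](const StubSketch &S, uint64_t Addr) { return S.Address < Addr; });
  if (It == Candidates.end())
    It = std::prev(It); // previously: returned nullptr here
  const uint64_t Distance =
      It->Address > Address ? It->Address - Address : Address - It->Address;
  return Distance <= MaxDistance ? &*It : nullptr;
}
```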
The key address in the static keys jump table was incorrectly encoded as
an absolute value instead of a PC-relative one, causing incorrect
interpretation of the "likely" property of the key.
merge-fdata used to consider the misprediction count as part of the
"signature", i.e. the aggregation key. This prevented it from collapsing
profile lines
with different misprediction counts, which resulted in duplicate
`(from, to)` pairs with different misprediction and execution counts.
Fix that by splitting out misprediction count and accumulating it
separately.
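A simplified sketch of the change (the container and field names are
illustrative, not merge-fdata's actual ones): the misprediction count is no
longer part of the key and is accumulated alongside the execution count.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

struct CountsSketch {
  uint64_t Execs = 0;
  uint64_t Mispreds = 0;
};

// The key is only the (from, to) pair; both counts are accumulated.
using BranchKey = std::pair<std::string, std::string>;

void addSample(std::map<BranchKey, CountsSketch> &Profile,
               const std::string &From, const std::string &To,
               uint64_t Execs, uint64_t Mispreds) {
  CountsSketch &C = Profile[{From, To}];
  C.Execs += Execs;
  C.Mispreds += Mispreds; // no longer part of the aggregation key
}
```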
Test Plan: updated bolt/test/merge-fdata-lbr-mode.test
Eliminate splitting the buffer into lines, and use `std::getline`
directly. Simplify no_lbr and boltedcollection handling as well.
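A minimal sketch of the approach (`parseLine` is a hypothetical per-line
handler): read the file with `std::getline` instead of loading the whole
buffer and splitting it into lines.

```cpp
#include <fstream>
#include <string>

void parseLine(const std::string &Line); // hypothetical per-line handler

void readFdata(const std::string &Filename) {
  std::ifstream Input(Filename);
  std::string Line;
  while (std::getline(Input, Line))
    parseLine(Line);
}
```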
Test Plan: For a large fdata file (200MB), the fstream version is ~10%
faster.
Add support to BOLT for DW_OP_GNU_push_tls_address, so that a
DW_TAG_variable with this opcode in DW_AT_location will now appear in the
debug names acceleration table. Although it is not in the DWARF 5 spec,
it is similar to DW_OP_form_tls_address. Without this support,
llvm-dwarfdump --verify --debug-names reports errors.
When a basic sample profile is gathered without LBR info, the generated
profile contains a "no-lbr" tag in the first line of the fdata file.
This PR fixes merge-fdata to recognize and save this tag to the output
file.
This patch adds a requirement for a non-root user in
unreadable-profile.test. This test fails if run as a root user (like in
a container without explicitly changing the user), which can lead to
some CI test failures.