This change implements pseudo probe encoding and emission for CSSPGO. Please see the RFC here for more context: https://groups.google.com/g/llvm-dev/c/1p1rdYbL93s

Pseudo probes are in the form of intrinsic calls on IR/MIR but they do not turn into any machine instructions. Instead they are emitted into the binary as a piece of data in standalone sections. The probe-specific sections do not need to be loaded into memory at execution time, so they incur no runtime overhead.

**ELF object emission**

The binary data to emit are organized as two ELF sections, i.e., the `.pseudo_probe_desc` section and the `.pseudo_probe` section. The `.pseudo_probe_desc` section stores a function descriptor for each function and the `.pseudo_probe` section stores the actual probes, each of which corresponds to an IR basic block or an IR function callsite. A function descriptor is stored as module-level metadata during compilation and is serialized into the object file during object emission.

Both the probe descriptors and pseudo probes can be emitted into a separate ELF section per function to leverage the linker for deduplication. A `.pseudo_probe` section shares the same COMDAT group with the function code, so that when the function is dead, the probes are dead and disposed of too. In contrast, a `.pseudo_probe_desc` section has its own COMDAT group. This is because even if a function is dead, its probes may have been inlined into other functions and its descriptor is still needed by the profile generation tool.

The format of the `.pseudo_probe_desc` section looks like:

```
.section .pseudo_probe_desc,"",@progbits
.quad 6309742469962978389   // Func GUID
.quad 4294967295            // Func Hash
.byte 9                     // Length of func name
.ascii "_Z5funcAi"          // Func name
.quad 7102633082150537521
.quad 138828622701
.byte 12
.ascii "_Z8funcLeafi"
.quad 446061515086924981
.quad 4294967295
.byte 9
.ascii "_Z5funcBi"
.quad -2016976694713209516
.quad 72617220756
.byte 7
.ascii "_Z3fibi"
```

For each `.pseudo_probe` section, the encoded binary data consists of a single function record corresponding to an outlined function (i.e., a function with a code entry in the `.text` section). A function record has the following format:

```
FUNCTION BODY (one for each outlined function present in the text section)
    GUID (uint64)
        GUID of the function
    NPROBES (ULEB128)
        Number of probes originating from this function.
    NUM_INLINED_FUNCTIONS (ULEB128)
        Number of callees inlined into this function, aka number of
        first-level inlinees
    PROBE RECORDS
        A list of NPROBES entries. Each entry contains:
          INDEX (ULEB128)
          TYPE (uint4)
            0 - block probe, 1 - indirect call, 2 - direct call
          ATTRIBUTE (uint3)
            reserved
          ADDRESS_TYPE (uint1)
            0 - code address, 1 - address delta
          CODE_ADDRESS (uint64 or ULEB128)
            code address or address delta, depending on ADDRESS_TYPE
    INLINED FUNCTION RECORDS
        A list of NUM_INLINED_FUNCTIONS entries describing each of the
        inlined callees. Each record contains:
          INLINE SITE
            GUID of the inlinee (uint64)
            ID of the callsite probe (ULEB128)
          FUNCTION BODY
            A FUNCTION BODY entry describing the inlined function.
```
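As a reading aid (not the in-tree encoder or decoder), here is a minimal C++ sketch of consuming one PROBE RECORD in the layout above. The packing of the 4-bit TYPE, 3-bit ATTRIBUTE, and 1-bit ADDRESS_TYPE fields into a single byte, low bits first, is an assumption of this sketch rather than a statement of the exact on-disk bit order:

```
#include <cstdint>

// One decoded PROBE RECORD from the function record layout above.
struct ProbeRecord {
  uint64_t Index;      // ULEB128 probe index
  uint8_t Type;        // 0 - block probe, 1 - indirect call, 2 - direct call
  uint8_t Attribute;   // reserved
  bool IsAddressDelta; // ADDRESS_TYPE: 0 - code address, 1 - address delta
  uint64_t Address;    // absolute code address, or delta from the previous probe
};

// Minimal unsigned LEB128 decoder; advances P past the encoded value.
uint64_t readULEB128(const uint8_t *&P) {
  uint64_t Value = 0;
  unsigned Shift = 0;
  uint8_t Byte;
  do {
    Byte = *P++;
    Value |= uint64_t(Byte & 0x7f) << Shift;
    Shift += 7;
  } while (Byte & 0x80);
  return Value;
}

// Consume one probe record.  The single-byte packing of TYPE/ATTRIBUTE/
// ADDRESS_TYPE (low bits first) and the ULEB128 encoding of the address
// delta are assumptions of this sketch.
ProbeRecord readProbeRecord(const uint8_t *&P) {
  ProbeRecord R;
  R.Index = readULEB128(P);
  uint8_t Packed = *P++;
  R.Type = Packed & 0x0f;                  // TYPE (uint4)
  R.Attribute = (Packed >> 4) & 0x07;      // ATTRIBUTE (uint3)
  R.IsAddressDelta = (Packed >> 7) & 0x01; // ADDRESS_TYPE (uint1)
  if (R.IsAddressDelta) {
    R.Address = readULEB128(P); // delta from the previous probe's address
  } else {
    R.Address = 0;              // full code address, little-endian assumed
    for (int I = 0; I < 8; ++I)
      R.Address |= uint64_t(*P++) << (8 * I);
  }
  return R;
}
```

Decoding a whole FUNCTION BODY would read the GUID, NPROBES, and NUM_INLINED_FUNCTIONS fields first, consume the probe records, and then recurse into the inlined function records.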
To support building a context-sensitive profile, probes from inlinees are grouped by their inline contexts. An inline context is logically a call path through which a callee function lands in a caller function. The probe emitter builds an inline tree, in the form of a trie, based on the debug metadata of each outlined function. The tree root is the outlined function itself and each tree edge stands for a callsite where inlining happens. Pseudo probes originating from an inlinee function are stored in a tree node, and the tree path from the root down to that node is the inline context of those probes. The emission happens on the whole tree top-down recursively; probes of a tree node are emitted together with their direct parent edge. Since a pseudo probe corresponds to a real code address, for size savings the address is encoded as a delta from the previous probe, except for the first probe. Variable-sized integer encoding (LEB128) is used for address deltas and probe indices.

**Assembling**

Alternatively, pseudo probes can be printed as assembly directives. This keeps the assembly readable and provides a view of how optimizations and pseudo probes affect each other, which is especially helpful for diff-time assembly analysis. A pseudo probe directive has the following operands in order: function GUID, probe index, probe type, probe attributes and inline context. The directive is generated by the compiler and can be parsed by the assembler to form an encoded `.pseudo_probe` section in the object file. An example assembly listing looks like:

```
foo2: # @foo2
# %bb.0: # %bb0
	pushq	%rax
	testl	%edi, %edi
	.pseudoprobe	837061429793323041 1 0 0
	je	.LBB1_1
# %bb.2: # %bb2
	.pseudoprobe	837061429793323041 6 2 0
	callq	foo
	.pseudoprobe	837061429793323041 3 0 0
	.pseudoprobe	837061429793323041 4 0 0
	popq	%rax
	retq
.LBB1_1: # %bb1
	.pseudoprobe	837061429793323041 5 1 0
	callq	*%rsi
	.pseudoprobe	837061429793323041 2 0 0
	.pseudoprobe	837061429793323041 4 0 0
	popq	%rax
	retq
# -- End function
	.section	.pseudo_probe_desc,"",@progbits
	.quad	6699318081062747564
	.quad	72617220756
	.byte	3
	.ascii	"foo"
	.quad	837061429793323041
	.quad	281547593931412
	.byte	4
	.ascii	"foo2"
```

With inlining turned on, the assembly may look different around %bb2 with an inlined probe:

```
# %bb.2: # %bb2
	.pseudoprobe	837061429793323041 3 0
	.pseudoprobe	6699318081062747564 1 0 @ 837061429793323041:6
	.pseudoprobe	837061429793323041 4 0
	popq	%rax
	retq
```

**Disassembling**

We have a disassembling tool (llvm-profgen) that can display disassembly alongside pseudo probes. So far it only supports ELF executables. An example disassembly looks like:

```
00000000002011a0 <foo2>:
  2011a0: 50                    push   rax
  2011a1: 85 ff                 test   edi,edi
[Probe]:  FUNC: foo2  Index: 1  Type: Block
  2011a3: 74 02                 je     2011a7 <foo2+0x7>
[Probe]:  FUNC: foo2  Index: 3  Type: Block
[Probe]:  FUNC: foo2  Index: 4  Type: Block
[Probe]:  FUNC: foo   Index: 1  Type: Block  Inlined: @ foo2:6
  2011a5: 58                    pop    rax
  2011a6: c3                    ret
[Probe]:  FUNC: foo2  Index: 2  Type: Block
  2011a7: bf 01 00 00 00        mov    edi,0x1
[Probe]:  FUNC: foo2  Index: 5  Type: IndirectCall
  2011ac: ff d6                 call   rsi
[Probe]:  FUNC: foo2  Index: 4  Type: Block
  2011ae: 58                    pop    rax
  2011af: c3                    ret
```
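The function names in the `[Probe]` lines above are recovered by looking up each probe's owning GUID in the `.pseudo_probe_desc` section. As a reading aid only (this is not the llvm-profgen implementation), here is a minimal C++ sketch of walking the descriptor records shown earlier, assuming little-endian data and a single-byte name length as in the example:

```
#include <cstdint>
#include <cstring>
#include <string>
#include <unordered_map>

// Build a GUID -> function name map from the raw bytes of a
// .pseudo_probe_desc section.  Per-descriptor layout, as in the example
// above: uint64 GUID, uint64 hash, one-byte name length, then the name
// bytes (not null-terminated).
std::unordered_map<uint64_t, std::string>
parseProbeDescSection(const uint8_t *Data, size_t Size) {
  std::unordered_map<uint64_t, std::string> GuidToName;
  const uint8_t *P = Data;
  const uint8_t *End = Data + Size;
  while (End - P >= 17) {       // 8 (GUID) + 8 (hash) + 1 (name length)
    uint64_t Guid;
    std::memcpy(&Guid, P, 8);   // the hash at P + 8 is not needed for lookup
    uint8_t NameLen = P[16];
    P += 17;
    if (End - P < NameLen)
      break;                    // malformed record; stop
    GuidToName.emplace(
        Guid, std::string(reinterpret_cast<const char *>(P), NameLen));
    P += NameLen;
  }
  return GuidToName;
}
```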
Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D91878

//===---------------------------------------------------------------------===//

Common register allocation / spilling problem:

        mul lr, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        ldr r4, [sp, #+52]
        mla r4, r3, lr, r4

can be:

        mul lr, r4, lr
        mov r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

and then "merge" mul and mov:

        mul r4, r4, lr
        str r4, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

It also increases the likelihood that the store may become dead.

//===---------------------------------------------------------------------===//

bb27 ...
        ...
        %reg1037 = ADDri %reg1039, 1
        %reg1038 = ADDrs %reg1032, %reg1039, %noreg, 10
    Successors according to CFG: 0x8b03bf0 (#5)

bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>

Note that ADDri is not a two-address instruction. However, its result %reg1037
is an operand of the PHI node in bb76 and its operand %reg1039 is the result of
the PHI node. We should treat it as two-address code and make sure the ADDri is
scheduled after any node that reads %reg1039.

//===---------------------------------------------------------------------===//

Use local info (i.e. register scavenger) to assign it a free register to allow
reuse:

        ldr r3, [sp, #+4]
        add r3, r3, #3
        ldr r2, [sp, #+8]
        add r2, r2, #2
        ldr r1, [sp, #+4]  <==
        add r1, r1, #1
        ldr r0, [sp, #+4]
        add r0, r0, #2

//===---------------------------------------------------------------------===//

LLVM aggressively hoists common subexpressions out of loops. Sometimes this has
negative side effects:

        R1 = X + 4
        R2 = X + 7
        R3 = X + 15

        loop:
        load [i + R1]
        ...
        load [i + R2]
        ...
        load [i + R3]

Suppose there is high register pressure; R1, R2, and R3 can be spilled. We need
to implement proper re-materialization to handle this:

        R1 = X + 4
        R2 = X + 7
        R3 = X + 15

        loop:
        R1 = X + 4  @ re-materialized
        load [i + R1]
        ...
        R2 = X + 7  @ re-materialized
        load [i + R2]
        ...
        R3 = X + 15 @ re-materialized
        load [i + R3]

Furthermore, with re-association, we can enable sharing:

        R1 = X + 4
        R2 = X + 7
        R3 = X + 15

        loop:
        T = i + X
        load [T + 4]
        ...
        load [T + 7]
        ...
        load [T + 15]

//===---------------------------------------------------------------------===//

It's not always a good idea to choose rematerialization over spilling. If all
the load / store instructions would be folded then spilling is cheaper because
it won't require new live intervals / registers. See 2003-05-31-LongShifts for
an example.

//===---------------------------------------------------------------------===//

With a copying garbage collector, derived pointers must not be retained across
collector safe points; the collector could move the objects and invalidate the
derived pointer. This is bad enough in the first place, but safe points can
crop up unpredictably. Consider:

        %array = load { i32, [0 x %obj] }** %array_addr
        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
        %old = load %obj** %nth_el
        %z = div i64 %x, %y
        store %obj* %new, %obj** %nth_el

If the i64 division is lowered to a libcall, then a safe point will (must)
appear for the call site. If a collection occurs, %array and %nth_el no longer
point into the correct object.

The fix for this is to copy address calculations so that dependent pointers
are never live across safe point boundaries. But the loads cannot be copied
like this if there was an intervening store, so it may be hard to get right.

Only a concurrent mutator can trigger a collection at the libcall safe point.
So single-threaded programs do not have this requirement, even with a copying
collector. Still, LLVM optimizations would probably undo a front-end's careful
work.
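To make "copy address calculations" concrete, the following C++ fragment is a
hedged, source-level illustration of the same hazard and fix; ObjArray,
divisionLibcall, and NewObj are hypothetical stand-ins for the IR values above,
and a real front end or pass would perform this rewrite on the IR itself.

    struct Obj;

    struct ObjArray {
      int Size;
      Obj *Elems[1]; // stands in for the trailing [0 x %obj] array above
    };

    Obj *NewObj = nullptr; // the %new value being stored (placeholder)

    // Stands in for the i64 division that is lowered to a libcall, i.e. a
    // potential safe point at which a copying collector may move *ArrayAddr.
    long divisionLibcall(long X, long Y) { return X / Y; }

    void updateNthElement(ObjArray **ArrayAddr, int N, long X, long Y) {
      Obj **NthEl = &(*ArrayAddr)->Elems[N]; // derived pointer, computed early
      Obj *Old = *NthEl;                     // corresponds to the %old load
      (void)Old;

      long Z = divisionLibcall(X, Y);        // safe point: objects may move
      (void)Z;

      // Unsafe under a copying collector: NthEl may now point into the old
      // copy of the array.
      //   *NthEl = NewObj;

      // The fix described above: redo the address calculation after the safe
      // point so that no derived pointer is live across it.
      Obj **Fresh = &(*ArrayAddr)->Elems[N];
      *Fresh = NewObj;
    }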
//===---------------------------------------------------------------------===//

The ocaml frametable structure supports liveness information. It would be good
to support it.

//===---------------------------------------------------------------------===//

The FIXME in ComputeCommonTailLength in BranchFolding.cpp needs to be
revisited. The check is there to work around a misuse of directives in inline
assembly.

//===---------------------------------------------------------------------===//

It would be good to detect collector/target compatibility instead of silently
doing the wrong thing.

//===---------------------------------------------------------------------===//

It would be really nice to be able to write patterns in .td files for copies,
which would eliminate a bunch of explicit predicates on them (e.g. no side
effects). Once this is in place, it would be even better to have tblgen
synthesize the various copy insertion/inspection methods in TargetInstrInfo.

//===---------------------------------------------------------------------===//

Stack coloring improvements:

1. Do proper LiveStacks analysis on all stack objects including those which are
   not spill slots.
2. Reorder objects to fill in gaps between objects.
   e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4 => 4, 1, 1, 1, 1, 4, 4

//===---------------------------------------------------------------------===//

The scheduler should be able to sort nearby instructions by their address. For
example, in an expanded memset sequence it's not uncommon to see code like
this:

        movl $0, 4(%rdi)
        movl $0, 8(%rdi)
        movl $0, 12(%rdi)
        movl $0, 0(%rdi)

Each of the stores is independent, and the scheduler is currently making an
arbitrary decision about the order.

//===---------------------------------------------------------------------===//

Another opportunity in this code is that the $0 could be moved to a register:

        movl $0, 4(%rdi)
        movl $0, 8(%rdi)
        movl $0, 12(%rdi)
        movl $0, 0(%rdi)

This would save substantial code size, especially for longer sequences like
this. It would be easy to have a rule telling isel to avoid matching MOV32mi
if the immediate has more than some fixed number of uses. It's more involved
to teach the register allocator how to do late folding to recover from
excessive register pressure.
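A rough C++ sketch of the kind of use-count rule suggested above, purely
illustrative: the threshold and the byte counts (approximate x86-64 encoding
sizes) are assumptions of this sketch, and this is not the actual isel or
SelectionDAG interface.

    // Decide whether an immediate that feeds NumStores independent stores
    // should be materialized into a register once, instead of matching a
    // store-of-immediate (e.g. MOV32mi) for every store.  Numbers are
    // illustrative only.
    bool shouldMaterializeImmInReg(unsigned NumStores) {
      const unsigned MinUses = 2;          // don't tie up a register for few uses
      const unsigned ImmStoreBytes = 7;    // e.g. movl $imm32, disp8(%reg), approx.
      const unsigned RegStoreBytes = 3;    // e.g. movl %reg, disp8(%reg), approx.
      const unsigned MaterializeBytes = 2; // e.g. xorl %eax, %eax for zero
      if (NumStores < MinUses)
        return false;
      return MaterializeBytes + NumStores * RegStoreBytes <
             NumStores * ImmStoreBytes;
    }

Whether such a rule would live in isel or in a later size-optimization pass is
a separate question; as noted above, teaching the register allocator to fold
the stores back when register pressure becomes excessive is the harder part.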