The function `CGDebugInfo::EmitFunctionDecl` is supposed to create a
declaration -- never a _definition_ -- of a subprogram. This is made
evident by the fact that the SPFlags never have the "Declaration" bit
set by that function.

However, when `EmitFunctionDecl` calls `DIBuilder::createFunction`, it
still tries to fill the "Declaration" argument by passing it the result
of `getFunctionDeclaration(D)`. This queries an internal cache of
previously created declarations and, on most code paths, returns
nullptr; all is good. However, as reported in [0], there are
pathological cases in which we attempt to recreate a declaration, so
the cache query succeeds, resulting in a subprogram declaration whose
declaration field points to another declaration. Through a series of
RAUWs, the declaration field ends up pointing to the SP itself.
Self-referential MDNodes can't be `unique`, which causes the verifier
to fail (declarations must be `unique`).

We can argue that the caller should check the cache first, but that is
not a correctness issue (declarations are `unique` anyway). The bug is
that `CGDebugInfo::EmitFunctionDecl` should always pass `nullptr` to
the declaration argument of `DIBuilder::createFunction`, expressing the
fact that declarations don't point to other declarations. AFAICT there
is no reasonable meaning for a declaration that points to another
declaration. This looks a lot like a copy-paste mistake that has
survived for ~10 years, since other places in this file make the exact
same call almost token-by-token.

I've tested this by compiling LLVMSupport with and without the patch,
at O2 and O0, and comparing the dwarfdump of the lib. The dumps are
identical modulo the decl_file/producer/comp_dir attributes.

[0]: https://github.com/llvm/llvm-project/issues/59241

Differential Revision: https://reviews.llvm.org/D143921
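
For illustration, a minimal before/after sketch of the call site; the
surrounding argument names here are assumptions chosen for readability,
not the exact code in CGDebugInfo.cpp:

    // Before: a cached declaration may be passed as the Decl argument,
    // so a declaration can end up pointing at another declaration.
    DISubprogram *SP = DBuilder.createFunction(
        FDContext, Name, LinkageName, Unit, LineNo, STy, ScopeLine,
        Flags, SPFlags, TParamsArray.get(),
        getFunctionDeclaration(D));

    // After: a declaration never points at another declaration.
    // (DIBuilder::createFunction defaults this parameter to nullptr,
    // so the argument could also simply be dropped.)
    DISubprogram *SP = DBuilder.createFunction(
        FDContext, Name, LinkageName, Unit, LineNo, STy, ScopeLine,
        Flags, SPFlags, TParamsArray.get(),
        /*Decl=*/nullptr);
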
IRgen optimization opportunities.

//===---------------------------------------------------------------------===//

The common pattern of
--
short x; // or char, etc
(x == 10)
--
generates a zext/sext of x which can easily be avoided (sketched at the
end of this file).

//===---------------------------------------------------------------------===//

Bitfield accesses can be shifted to simplify masking and sign
extension. For example, if the bitfield width is 8 and it is
appropriately aligned then it is a lot shorter to just load the char
directly (also sketched at the end of this file).

//===---------------------------------------------------------------------===//

It may be worth avoiding creation of allocas for formal arguments for
the common situation where the argument is never written to or has its
address taken. The idea would be to begin generating code by using the
argument directly, and if its address is taken or it is stored to, then
generate the alloca and patch up the existing code.

In theory, the same optimization could be a win for block-local
variables as long as the declaration dominates all statements in the
block.

NOTE: The main case we care about this for is -O0 -g compile-time
performance, and in that scenario we will need to emit the alloca
anyway currently to emit proper debug info. So this is blocked on being
able to emit debug information which refers to an LLVM temporary, not
an alloca.

//===---------------------------------------------------------------------===//

We should try to avoid generating basic blocks which only contain
jumps. At -O0, this penalizes us all the way from IRgen (malloc and
instruction overhead) down through code generation and assembly time.

On 176.gcc:expr.ll, it looks like over 12% of basic blocks are just
direct branches!

//===---------------------------------------------------------------------===//
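
To make the zext/sext note above concrete, here is a sketch; the IR in
the comment is illustrative of typical unoptimized output, not verbatim
clang output:

    short x;
    /* ... */
    if (x == 10) { /* ... */ }  /* x is promoted to int by the usual
                                   arithmetic conversions */

    /* IRgen currently emits something like:
         %0    = load i16, ptr %x.addr
         %conv = sext i16 %0 to i32
         %cmp  = icmp eq i32 %conv, 10
       but since the constant 10 fits in a short, the comparison could
       be done at the narrow width, avoiding the extension:
         %cmp  = icmp eq i16 %0, 10                                  */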
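
Likewise, a sketch for the bitfield note; the exact layout and IR
depend on the target ABI, and a little-endian, byte-aligned layout is
assumed here:

    struct S {
      unsigned a : 8;   /* 8 bits wide, starts at a byte boundary */
      unsigned b : 24;
    };

    /* A naive access of s.a loads the whole 32-bit storage unit and
       masks off the other fields:
         %0 = load i32, ptr %s
         %a = and i32 %0, 255
       Because the field occupies exactly one aligned byte, a single
       byte load of that byte produces the same value:
         %a = load i8, ptr %s                                        */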