I have been looking to simplify parsing logic and improve the interfaces
so that they are both easier to use and harder to abuse. To be specific,
I am referring to functions such as `OptionArgParser::ToBoolean`: I
would like to go from its current interface to something more like
`llvm::Expected<bool> ToBoolean(llvm::StringRef option_arg)`.
Through working on that, I encountered 2 inconveniences:
1. Option parsing code is not uniform. Every function writes a slightly
different error message, so incorporating an error message from the
`ToBoolean` implementation is going to be laborious as I figure out what
exactly needs to change or stay the same.
2. Changing the interface of `ToBoolean` would require a global atomic
change across all of the Command code. This would be quite frustrating
to do because of the non-uniformity of our existing code.
To address these frustrations, I think it would be easiest to first
standardize the error reporting mechanism when parsing options in
commands. I do so by introducing `CreateOptionParsingError` which will
create an error message of the shape:
Invalid value ('${option_arg}') for -${short_value} ('${long_value}'):
${additional_context}
Concretely, it would look something like this:
(lldb) breakpoint set -n main -G yay
error: Invalid value ('yay') for -G (auto-continue): Failed to parse as
boolean
After this, updating the interfaces for parsing the values themselves
should become simpler. Because this can be adopted incrementally, it can
be done over time instead of all at once as a giant, difficult-to-review
change. As an illustration of what I am proposing, I've changed exactly
one function to use the new helper.
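For concreteness, here is a rough sketch of the helper I have in mind; the exact parameter list and the use of `createStringError` are illustrative, not the final signature:
```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"

// Illustrative sketch only -- the parameter list and error plumbing are
// placeholders, not the final API.
llvm::Error CreateOptionParsingError(llvm::StringRef option_arg,
                                     char short_option,
                                     llvm::StringRef long_option,
                                     llvm::StringRef additional_context) {
  return llvm::createStringError(
      llvm::inconvertibleErrorCode(),
      "Invalid value ('%s') for -%c ('%s'): %s", option_arg.str().c_str(),
      short_option, long_option.str().c_str(),
      additional_context.str().c_str());
}
```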
This is a follow-on to:
https://github.com/llvm/llvm-project/pull/82085
The completer for register names was missing from the argument table. I
somehow missed that the only register completer test was x86_64, so that
test broke.
I added the completer into the right slot in the argument table, and
added a small completions test that just uses the alias register names.
If we end up having a platform that doesn't define register names, we'll
have to skip this test there, but it should add a sniff test for
register completion that will run most everywhere.
This reverts commit 21631494b068d9364b8dc8f18e59adee9131a0a5.
Reverted because of greendragon failure:
******************** TEST 'lldb-api :: functionalities/completion/TestCompletion.py' FAILED ********************
Script:
Most commands were adding argument completion handling by themselves,
resulting in a lot of unnecessary boilerplate. In many cases, this could
be done generically given the argument definition and the entries in the
g_argument_table.
I'm going to address this in a couple of passes. In this first pass, I
added handling for commands that have only one argument list, with one
argument type, either single or repeated, and changed all the commands
that are of this sort (and don't have other bits of business in their
completers).
I also added some missing connections between arg types and completions
to the table, and added a RemoteFilename and RemotePath to use in places
where we were using the Remote completers. Those arguments used to say
they were "files" but they were in fact remote files.
I also added a module arg type to use where we were using the module
completer. In that case, we should call the argument module.
Updates:
- The previous patch changed the default behavior to not load dwos in
`DWARFUnit`
~~`SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info =
false);`~~
`SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info = true);`
- This broke some lldb-shell tests (see
https://green.lab.llvm.org/green/view/LLDB/job/as-lldb-cmake/16273/)
- TestDebugInfoSize.py
- with symbol on-demand, by default statistics dump only reports
skeleton debug info size
- `statistics dump -f` will load all dwos. debug info = skeleton debug
info + all dwo debug info
Currently, running `statistics dump` will trigger lldb to load debug info
that's not yet loaded (e.g. dwo files). This results in a delay in the
command return, which can be disruptive.
This patch also adds a new option, `--load-all-debug-info`, which asks
statistics to dump all possible debug info and forces loading any debug
info that is not yet loaded.
I was refactoring something else but ran into this function. It was
somewhat confusing to read through and understand, but it boils down to
two steps:
- First we try `OptionArgParser::ToBoolean`. If that works, then we're
good to go.
- Second, we try `llvm::to_integer` to see if it's an integer. If it
parses to 0 or 1, we're good.
- Failing either of the steps above means we cannot parse it into a
bool.
Instead of having an integer out param and a bool return value, the
interface is better served with a `std::optional<bool>`: either it parses
into true or false, or you get back nothing (`std::nullopt`).
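As a self-contained sketch of that shape (this mirrors the logic described above; it is not the actual lldb implementation):
```cpp
#include <charconv>
#include <optional>
#include <string_view>

// Sketch: parse a boolean the way described above -- words first, then 0/1.
std::optional<bool> ParseBooleanSketch(std::string_view arg) {
  // Step 1: the usual spellings (what OptionArgParser::ToBoolean handles).
  if (arg == "true" || arg == "yes" || arg == "on")
    return true;
  if (arg == "false" || arg == "no" || arg == "off")
    return false;
  // Step 2: accept an integer, but only if it is exactly 0 or 1.
  int value = 0;
  auto [ptr, ec] = std::from_chars(arg.data(), arg.data() + arg.size(), value);
  if (ec == std::errc() && ptr == arg.data() + arg.size() &&
      (value == 0 || value == 1))
    return value == 1;
  return std::nullopt; // Not parseable as a boolean.
}
```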
This allows you to specify options and arguments and their definitions
and then have lldb handle the completions, help, etc. in the same way
that lldb does for its parsed commands internally.
This feature has some design considerations as well as the code, so I've
also set up an RFC, but I did this one first and will put the RFC
address in here once I've pushed it...
Note, the lldb "ParsedCommand interface" doesn't actually do all the
work that it should. For instance, saying the type of an option that has
a completer doesn't automatically hook up the completer, and ditto for
argument values. We also do almost no work to verify that the arguments
match their definition, or do auto-completion for them. This patch
allows you to make a command that's bug-for-bug compatible with built-in
ones, but I didn't want to stall it on getting the auto-command checking
to work all the way correctly.
As an overall design note, my primary goal here was to make an interface
that worked well in the script language. For that I needed, for
instance, to have a property-based way to get all the option values that
were specified. It was much more convenient to do that by making a
fairly bare-bones C interface to define the options and arguments of a
command, and set their values, and then wrap that in a Python class
(installed along with the other bits of the lldb python module) which
you can then derive from to make your new command. This approach will
also make it easier to experiment.
See the file test_commands.py in the test case for examples of how this
works.
I get a small but fairly steady stream of crash reports which I can only
explain by ValueObjectPrinter trying to access its m_valobj field, and
finding it NULL. I have never been able to reproduce any of these, and
the reports show a state too long after the fact to know what went
wrong.
I've read through this section of lldb a bunch of times trying to figure
out how this could happen, but haven't ever found anything actually
wrong that could cause this. OTOH, ValueObjectPrinter is somewhat sloppy
about how it handles the ValueObject it is printing.
a) lldb allows you to make a ValueObjectPrinter with a null incoming
ValueObject. However, there's no affordance to set the ValueObject in
the Printer after the fact, and it doesn't really make sense to do that.
So I changed the ValueObjectPrinter APIs to take a ValueObject
reference, rather than a pointer. All the places that make
ValueObjectPrinters already check the non-null status of their
ValueObjects before making the ValueObjectPrinter, so sadly, I didn't
find the bug, but this will enforce the intent.
b) The next step in printing the ValueObject is deciding which of the
associated DynamicValue/SyntheticValue we are actually printing (based
on the use_dynamic and use_synthetic settings in the original
ValueObject). This was put in a pointer by GetMostSpecializedValue, but
most of the printer code just accessed the pointer, and it was hard to
reason out whether we were guaranteed to always call this before using
m_valobj. So far as I could see we always do (sigh, didn't find the bug
there either), but this was way too hard to reason about.
In fact, we figure out once which ValueObject we're going to print and
don't change that through the life of the printer. So I changed this to
both set the "most specialized value" in the constructor, and then to
always access it through GetMostSpecializedValue(). That makes it easier
to reason about the use of this ValueObject as well.
This is an NFC change, all it does is make the code easier to reason
about.
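A simplified sketch of the resulting shape, using a stand-in ValueObject type (the real classes carry far more state):
```cpp
// Stand-in for lldb_private::ValueObject, just to show the shape.
struct ValueObjectSketch {
  // In lldb this would consult the use_dynamic/use_synthetic settings.
  ValueObjectSketch &GetMostSpecialized() { return *this; }
};

class ValueObjectPrinterSketch {
public:
  // (a) Taking a reference makes "printer without a ValueObject"
  // unrepresentable.
  explicit ValueObjectPrinterSketch(ValueObjectSketch &valobj)
      // (b) Decide once, at construction, which associated value we print.
      : m_most_specialized(valobj.GetMostSpecialized()) {}

  // All printing code goes through this accessor instead of a nullable
  // member that might not have been set yet.
  ValueObjectSketch &GetMostSpecializedValue() { return m_most_specialized; }

private:
  ValueObjectSketch &m_most_specialized;
};
```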
Adding command interpreter statistics to the "statistics dump" command so
that we can track command usage frequency for telemetry purposes.
This is useful for answering questions like which lldb commands are used
most frequently across all our users.
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch the
entire doubleword (8 bytes). The new default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there's interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte watchpoint at 0x1280 and another at 0x1300. But it could be done
with a 16-byte watchpoint at 0x12e0 and a 128-byte watchpoint at 0x1300,
which would have fewer false positives/private stops. As we try refining
this, it's helpful to have a collection of tests to make sure things
don't regress.
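For illustration, here is a naive, self-contained sketch of the power-of-2 covering idea; the real WatchpointAlgorithms::PowerOf2Watchpoints also honors the maximum supported region size and may split other requests differently:
```cpp
#include <cstdint>
#include <vector>

struct RegionSketch {
  uint64_t addr; // aligned start of the hardware-watchable region
  uint64_t size; // power-of-2 size
};

// Cover [addr, addr + size) with aligned, equal-sized power-of-2 regions.
std::vector<RegionSketch> CoverWithPowerOf2(uint64_t addr, uint64_t size) {
  // Smallest power of 2 that is >= the requested size.
  uint64_t granule = 1;
  while (granule < size)
    granule <<= 1;
  // Emit consecutive granule-aligned regions until the request is covered.
  std::vector<RegionSketch> regions;
  uint64_t cur = addr & ~(granule - 1);
  const uint64_t end = addr + size;
  while (cur < end) {
    regions.push_back({cur, granule});
    cur += granule;
  }
  return regions;
}

// CoverWithPowerOf2(0x12e0, 120) yields {0x1280, 128} and {0x1300, 128},
// matching the two-watchpoint example described above.
```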
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal: especially when we may be
watching a larger memory range than the user requested, if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also, one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
Re-landing this patch after fixing two undefined behaviors in
WatchpointAlgorithms found by UBSan and by failures on different
CI bots.
rdar://108234227
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch the
entire doubleword (8 bytes). The new default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there's interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte watchpoint at 0x1280 and another at 0x1300. But it could be done
with a 16-byte watchpoint at 0x12e0 and a 128-byte watchpoint at 0x1300,
which would have fewer false positives/private stops. As we try refining
this, it's helpful to have a collection of tests to make sure things
don't regress.
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal: especially when we may be
watching a larger memory range than the user requested, if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also, one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
rdar://108234227
`FindBreakpointID` takes a BreakpointID and a pointer to a size_t (so you
can get position information). It returns a bool to indicate whether the
id was found in the list or not.
There are 2 callers of this currently and neither one actually uses the
position information, so I removed it. After that, I renamed it to
Contains to more accurately reflect the intent. Additionally, I changed
the argument type from a reference to a value (because BreakpointID is
just a wrapper around 2 integers, copies are cheap).
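A minimal sketch of the resulting interface, with a stand-in BreakpointID (the real class wraps a breakpoint id and a location id):
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct BreakpointIDSketch {
  int32_t break_id = 0;
  int32_t location_id = 0;
  friend bool operator==(BreakpointIDSketch lhs, BreakpointIDSketch rhs) {
    return lhs.break_id == rhs.break_id && lhs.location_id == rhs.location_id;
  }
};

class BreakpointIDListSketch {
public:
  // Pass-by-value is fine: a BreakpointID is just two integers.
  bool Contains(BreakpointIDSketch bp_id) const {
    return std::find(m_ids.begin(), m_ids.end(), bp_id) != m_ids.end();
  }

private:
  std::vector<BreakpointIDSketch> m_ids;
};
```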
Follow-up to #69422.
This PR puts all the highlighting settings into a single struct for
easier handling.
Co-authored-by: Talha Tahir <talha.tahir@10xengineers.ai>
This fixes:
```
[6083/7449] Building CXX object tools\lldb\source\Commands\CMakeFiles\lldbCommands.dir\CommandObjectFrame.cpp.obj
C:\git\llvm-project\lldb\source\Commands\CommandObjectFrame.cpp(497) : warning C4715: 'CommandObjectFrameVariable::ScopeRequested': not all control paths return a value
```
Store a SupportFile, rather than a FileSpec, in LineEntry. This commit
works towards having the SourceManager operate on SupportFiles so that it
can (1) validate the Checksum and (2) materialize the content of inline
source information.
BreakpointIDList does not need to know about CommandReturnObject.
BreakpointIDList::FindAndReplaceIDRanges is the last place that uses it
in BreakpointIDList.
Instead of passing in a CommandReturnObject, it now returns an
llvm::Error. The callsite uses the Error to populate the
CommandReturnObject as needed.
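A self-contained sketch of that caller-side pattern, with a stand-in for CommandReturnObject (the real lldb types and argument lists are more involved):
```cpp
#include <string>
#include <utility>

#include "llvm/Support/Error.h"

// Stand-in for lldb_private::CommandReturnObject.
struct CommandResultSketch {
  bool succeeded = false;
  std::string error_message;
};

// The callee reports failure via llvm::Error instead of writing into a
// CommandReturnObject it should not know about.
llvm::Error FindAndReplaceIDRangesSketch(bool input_is_valid) {
  if (!input_is_valid)
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "invalid breakpoint id range");
  return llvm::Error::success();
}

// The caller translates the error into the command result as needed.
void CallerSketch(CommandResultSketch &result, bool input_is_valid) {
  if (llvm::Error err = FindAndReplaceIDRangesSketch(input_is_valid)) {
    result.error_message = llvm::toString(std::move(err));
    result.succeeded = false;
    return;
  }
  result.succeeded = true;
}
```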
This abstraction is leaky and BreakpointIDList does not need to know
about CommandReturnObject.
Additionally, setting the CommandReturnObject inout param to a success
state does very little. The function returns immediately if the input
ArrayRef is empty, and reading
CommandObjectMultiwordBreakpoint::VerifyIDs more closely, the input is
always empty if the previous call to
BreakpointIDList::FindAndReplaceIDRanges failed. If the call was
successful, then the CommandReturnObject is already in a success state.
I have opted to remove the function altogether and inline the
functionality where it was used.
LLVM supports the DWARF 5 linetable extension for storing source files
inline in DWARF. This is particularly useful for compiler-generated
source code. This implementation tries to materialize them as temporary
files lazily, so SBAPI clients don't need to be aware of them.
rdar://110926168
This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.
I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
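A tiny before/after illustration:
```cpp
#include "llvm/ADT/StringRef.h"

bool IsDashOption(llvm::StringRef arg) {
  // Before: arg.startswith("-")
  return arg.starts_with("-"); // After: matches the std::string_view spelling.
}
```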
This patch revives the effort to get this Phabricator patch into
upstream:
https://reviews.llvm.org/D137900
This patch was accepted before in Phabricator, but I found some
-gsimple-template-names issues that are fixed in this patch.
A fixed-up version of the description from the original patch follows.
This patch started off trying to fix Module::FindFirstType() as it
sometimes didn't work. The issue was the SymbolFile plug-ins didn't do
any filtering of the matching types they produced, and they only looked
up types using the type basename. This means if you have two types with
the same basename, your type lookup can fail when only looking up a
single type. We would ask Module::FindFirstType to look up "Foo::Bar"
and it would ask the symbol file to find only 1 type matching the
basename "Bar", and then we would filter out any matches that didn't
match "Foo::Bar". So if the SymbolFile found "Foo::Bar" first, then it
would work, but if it found "Baz::Bar" first, it would return only that
type and it would be filtered out.
Discovering this issue led me to think of the patch Alex Langford did a
few months ago for finding functions, where he allowed
SymbolFile objects to make sure something fully matched before parsing
the debug information into an AST type and other LLDB types. So this
patch aimed to allow type lookups to also be much more efficient.
As LLDB has been developed over the years, we added more ways to do type
lookups. These functions have lots of arguments. This patch aims to make
one API that needs to be implemented that serves all previous lookups:
- Find a single type
- Find all types
- Find types in a namespace
This patch introduces a `TypeQuery` class that contains all of the state
needed to perform the lookup, and which is powerful enough to perform all
of the type searches that used to be in our API. It contains a vector of
CompilerContext objects that can fully or partially specify the lookup
that needs to take place.
If you just want to look up all types with a matching basename,
regardless of the containing context, you can specify just a single
CompilerContext entry that has a name and a CompilerContextKind mask of
CompilerContextKind::AnyType.
Or you can fully specify the exact context to use when doing lookups
like: CompilerContextKind::Namespace "std"
CompilerContextKind::Class "foo"
CompilerContextKind::Typedef "size_type"
This change expands on the clang modules code that already used a
vector<CompilerContext> of items, but it modifies it to work with
expression type lookups, which have contexts, or user lookups, where
users query for types. The clang modules type lookup is still an option
that can be enabled on the `TypeQuery` objects.
This mirrors the most recent addition of type lookups that took a
vector<CompilerContext> that allowed lookups to happen for the
expression parser in certain places.
Prior to this we had the following APIs in Module:
```
void Module::FindTypes(
    ConstString type_name, bool exact_match, size_t max_matches,
    llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
    TypeList &types);

void Module::FindTypes(
    llvm::ArrayRef<CompilerContext> pattern, LanguageSet languages,
    llvm::DenseSet<lldb_private::SymbolFile *> &searched_symbol_files,
    TypeMap &types);

void Module::FindTypesInNamespace(ConstString type_name,
                                  const CompilerDeclContext &parent_decl_ctx,
                                  size_t max_matches, TypeList &type_list);
```
The new Module API is much simpler. It gets rid of all three above
functions and replaces them with:
```
void FindTypes(const TypeQuery &query, TypeResults &results);
```
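A hedged usage sketch: only `FindTypes(const TypeQuery &, TypeResults &)` is taken from the description above; the header paths and the `TypeQuery` construction shown here are assumptions for illustration.
```cpp
#include "lldb/Core/Module.h"
#include "lldb/Symbol/Type.h"

// Look up a type by (possibly scoped) name. Exact-match behavior, clang
// module matching, and language filtering would be configured on the query.
void FindTypeByNameSketch(lldb_private::Module &module) {
  lldb_private::TypeQuery query("foo::Bar"); // assumed constructor
  lldb_private::TypeResults results;
  module.FindTypes(query, results);
  // `results` now holds any matching types, up to the query's max matches.
}
```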
The `TypeQuery` class contains all of the needed settings:
- The vector<CompilerContext> allows efficient lookups in the symbol
file classes since they can look at basename matches and only realize
fully matching types. Before this, any basename that matched was fully
realized, only to be removed later by code outside of the SymbolFile
layer, which could cause many types to be realized when they didn't need
to be.
- Whether the lookup is exact or not. If not exact, then the compiler
context only needs to match the bottom-most (deepest) items of the
specified context; otherwise it must match exactly.
- Whether the compiler context match is for clang modules or not. Clang
module matches include a Module compiler context kind that allows types
to be matched only from certain modules; these matches are not needed
when doing user type lookups.
- An optional list of languages that limits the search to only those
languages
The `TypeResults` object contains all state required to do the lookup
and store the results:
- The max number of matches
- The set of SymbolFile objects that have already been searched
- The matching type list for any matches that are found
The benefits of this approach are:
- Simpler API, and only one API to implement in SymbolFile classes
- Replaces FindTypesInNamespace, which used a CompilerDeclContext as a
way to limit the search; this only worked if the TypeSystem matched
the current symbol file's type system, so you couldn't use it to look up
a type in another module
- Fixes a serious bug in our FindFirstType functions where, if we were
searching for "foo::bar" and we found a "baz::bar" first, the basename
would match and we would only fetch 1 type using the basename, only to
drop it from the matching list and return no results
This patch renames {starts,ends}with to {starts,ends}_with for
consistency with std::{string,string_view}::{starts,ends}_with in
C++20. Since there are only a handful of occurrences, this patch
skips the deprecation phase and simply renames them.
Fixes https://github.com/llvm/llvm-project/issues/57372
Some work had already been done on this previously. A PR was generated
but it remained in review:
https://reviews.llvm.org/D136462
In short, the previous approach was the following: change the symbol
names (colorizing the searched part) -> print them -> restore the symbol
names to their original form.
The reviewers suggested that instead of changing the symbol table, this
colorization should be done in the dump functions themselves. Our strategy
involves passing the searched regex pattern to the existing dump
functions responsible for printing information about the searched
symbol. This pattern is propagated until it reaches the line in the dump
functions responsible for displaying symbol information on screen.
At this point, we've introduced a new function called
"PutCStringColorHighlighted," which takes the searched pattern, a prefix and suffix,
and the text and applies colorization to highlight the pattern in the
output. This approach aims to streamline the symbol search process to
improve readability of search results.
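A self-contained sketch of the idea; the real PutCStringColorHighlighted in lldb takes the pattern, prefix, suffix, and text described above, and its exact signature may differ from this stand-alone version:
```cpp
#include <cstdio>
#include <regex>
#include <string>

// Wrap every match of `pattern` in prefix/suffix (e.g. ANSI color codes) so
// it stands out in the output, and print everything else unchanged.
void PutCStringColorHighlightedSketch(const std::string &text,
                                      const std::regex &pattern,
                                      const std::string &prefix,  // "\x1b[31m"
                                      const std::string &suffix) { // "\x1b[0m"
  std::string highlighted =
      std::regex_replace(text, pattern, prefix + "$&" + suffix);
  std::fputs(highlighted.c_str(), stdout);
}
```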
Co-authored-by: José L. Junior <josejunior@10xengineers.ai>
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or consider a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1
byte at 0x1003: these must both be served by a single WatchpointResource
on that doubleword at 0x1000 on a 64-bit target; if two hardware
watchpoint registers were used to track these separately, one of them may
not be hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported; the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different-sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
(cherry picked from commit fc6b72523f3d73b921690a713e97a433c96066c6)
...and follow ups.
As it has caused test failures on Linux Arm and AArch64:
https://lab.llvm.org/buildbot/#/builders/96/builds/49126
https://lab.llvm.org/buildbot/#/builders/17/builds/45824
```
lldb-shell :: Subprocess/clone-follow-child-wp.test
lldb-shell :: Subprocess/fork-follow-child-wp.test
lldb-shell :: Subprocess/vfork-follow-child-wp.test
```
This reverts commit a6c62bf1a4717accc852463b664cd1012237d334,
commit a0a1ff3ab40e347589b4e27d8fd350c600526735 and commit
fc6b72523f3d73b921690a713e97a433c96066c6.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or consider a Watchpoint on 1 byte at 0x1002 and a second Watchpoint on 1
byte at 0x1003: these must both be served by a single WatchpointResource
on that doubleword at 0x1000 on a 64-bit target; if two hardware
watchpoint registers were used to track these separately, one of them may
not be hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
value: <y>" for Read watchpoints. I could make an argument for printing
"Watchpoint <n> hit: current value <x>", but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported; the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
different-sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
Fixes [#68035](https://github.com/llvm/llvm-project/issues/68035), where
an inconsistency in the order of "Process launched" and "Process
stopped" messages occurs during `process launch`.
The fix involves adjusting the message output sequence in
`CommandObjectProcessLaunch::DoExecute` within
`source/Commands/CommandObjectProcess.cpp`. This ensures "Process
launched" consistently precedes "Process stopped" when executing
commands with the '-o' flag, i.e., non-interactive mode.
Upon implementing this change, two tests failed:
`lldb/test/Shell/Breakpoint/jit-loader_jitlink_elf.test` and
`lldb/test/Shell/Breakpoint/jit-loader_rtdyld_elf.test`. These failures
were expected as they relied on the previous, now-corrected message
order. Updating these tests to align with the new message sequence is
part of this PR's scope.
These error messages are written in a way that makes sense to an lldb
developer, but not to an end user who asks lldb to run on a compressed
corefile or whatever. Simplify the messages.
This patch changes the interface of
StructuredData::Array::GetItemAtIndexAsString to return a
`std::optional<llvm::StringRef>` instead of taking an out parameter.
More generally, this commit serves as a proposal that we change all of
the sibling APIs (`GetItemAtIndexAs`) to do the same thing. The reason
this isn't one giant patch is that it is rather unwieldy changing just
one of these, so if this is approved, I will do all of the other ones as
individual follow-ups.
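A before/after sketch of a call site (hedged: the header path is assumed, and the old-style call is shown only in a comment):
```cpp
#include <optional>

#include "lldb/Utility/StructuredData.h"
#include "llvm/ADT/StringRef.h"

void UseFirstItemSketch(lldb_private::StructuredData::Array &array) {
  // Old style: out parameter plus a bool return value.
  //   llvm::StringRef value;
  //   if (array.GetItemAtIndexAsString(0, value)) { /* use value */ }

  // New style: presence and value travel together in one std::optional.
  if (std::optional<llvm::StringRef> value = array.GetItemAtIndexAsString(0)) {
    (void)*value; // use *value
  }
}
```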
This completes the conversion of LocateSymbolFile into a SymbolLocator
plugin. The only remaining function is DownloadSymbolFileAsync which
doesn't really fit into the plugin model, and therefore moves into the
SymbolLocator class, while still relying on the plugins to do the
underlying work.
This builds on top of the work started in c3a302d to convert
LocateSymbolFile to a SymbolLocator plugin. This commit moves
DownloadObjectAndSymbolFile.
Often, we only care about the split-dwarf files that have failed to
load. This can be useful when diagnosing binaries with many separate
debug info files where only some have errors.
```
(lldb) help image dump separate-debug-info
List the separate debug info symbol files for one or more target modules.
Syntax: target modules dump separate-debug-info <cmd-options> [<filename> [<filename> [...]]]
Command Options Usage:
target modules dump separate-debug-info [-ej] [<filename> [<filename> [...]]]
-e ( --errors-only )
Filter to show only debug info files with errors.
-j ( --json )
Output the details in JSON format.
This command takes options and free-form arguments. If your arguments
resemble option specifiers (i.e., they start with a - or --), you must use
' -- ' between the end of the command options and the beginning of the
arguments.
'image' is an abbreviation for 'target modules'
```
I updated the following tests
```
# on Linux
bin/lldb-dotest -p TestDumpDwo
# on Mac
bin/lldb-dotest -p TestDumpOso
```
This change applies to both the table and JSON outputs.
---------
Co-authored-by: Tom Yang <toyang@fb.com>
Add the ability to get a C++ vtable ValueObject from another
ValueObject.
This patch adds the ability to ask a ValueObject for a ValueObject that
represents the virtual function table for a C++ class. If the
ValueObject is not a C++ class with a vtable, a valid ValueObject value
will be returned that contains an appropriate error. If it is successful,
a valid ValueObject that represents the vtable will be returned. The
ValueObject that is returned will have a name that matches the demangled
value for a C++ vtable mangled name like "vtable for <class-name>". It
will have N children, one for each virtual function pointer. Each
child's value is the function pointer itself, the summary is the
symbolication of this function pointer, and the type will be a valid
function pointer from the debug info if there is debug information
corresponding to the virtual function pointer.
The vtable SBValue will have the following:
- SBValue::GetName() returns "vtable for <class>"
- SBValue::GetValue() returns a string representation of the vtable
address
- SBValue::GetSummary() returns NULL
- SBValue::GetType() returns a type appropriate for a uintptr_t type for
the current process
- SBValue::GetLoadAddress() returns the load address of the vtable
- SBValue::GetValueAsUnsigned(...) returns the vtable address
- SBValue::GetNumChildren() returns the number of virtual function
pointers in the vtable
- SBValue::GetChildAtIndex(...) returns a SBValue that represents a
virtual function pointer
The child SBValue objects that represent a virtual function pointer have
the following values:
- SBValue::GetName() returns "[%u]" where %u is the vtable function
pointer index
- SBValue::GetValue() returns a string representation of the virtual
function pointer
- SBValue::GetSummary() returns a symbolicated representation of the
virtual function pointer
- SBValue::GetType() returns the function prototype type if there is
debug info, or a generic function prototype if there is no debug info
- SBValue::GetLoadAddress() returns the address of the virtual function
pointer
- SBValue::GetValueAsUnsigned(...) returns the virtual function pointer
- SBValue::GetNumChildren() returns 0
- SBValue::GetChildAtIndex(...) returns invalid SBValue for any index
Examples of using this API via python:
```
(lldb) script vtable = lldb.frame.FindVariable("shape_ptr").GetVTable()
(lldb) script vtable
vtable for Shape = 0x0000000100004088 {
[0] = 0x0000000100003d20 a.out`Shape::~Shape() at main.cpp:3
[1] = 0x0000000100003e4c a.out`Shape::~Shape() at main.cpp:3
[2] = 0x0000000100003e7c a.out`Shape::area() at main.cpp:4
[3] = 0x0000000100003e3c a.out`Shape::optional() at main.cpp:7
}
(lldb) script c = vtable.GetChildAtIndex(0)
(lldb) script c
(void ()) [0] = 0x0000000100003d20 a.out`Shape::~Shape() at main.cpp:3
```
[lldb] Part 2 of 2 - Refactor `CommandObject::DoExecute(...)` to return
`void` instead of ~~`bool`~~
Justifications:
- The code doesn't ultimately apply the `true`/`false` return values.
- The methods already pass around a `CommandReturnObject`, typically
with a `result` parameter.
- Each command return object already contains:
- A more precise status
- The error code(s) that apply to that status
Part 1 refactors the `CommandObject::Execute(...)` method.
- See
[https://github.com/llvm/llvm-project/pull/69989](https://github.com/llvm/llvm-project/pull/69989)
rdar://117378957
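A sketch of the post-refactor shape (the command name, help text, and option handling are simplified placeholders; real commands carry more boilerplate):
```cpp
#include "lldb/Interpreter/CommandObject.h"
#include "lldb/Interpreter/CommandReturnObject.h"
#include "lldb/Utility/Args.h"
#include "lldb/lldb-enumerations.h"

class CommandObjectExampleSketch : public lldb_private::CommandObjectParsed {
public:
  CommandObjectExampleSketch(lldb_private::CommandInterpreter &interpreter)
      : CommandObjectParsed(interpreter, "example-sketch",
                            "Illustrates the void-returning DoExecute.") {}

protected:
  // No bool return value: success/failure lives entirely in `result`.
  void DoExecute(lldb_private::Args &command,
                 lldb_private::CommandReturnObject &result) override {
    if (command.GetArgumentCount() == 0) {
      result.AppendError("this command requires at least one argument");
      return;
    }
    result.SetStatus(lldb::eReturnStatusSuccessFinishResult);
  }
};
```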
[lldb] Part 1 of 2 - Refactor `CommandObject::Execute(...)` to return
`void` instead of ~~`bool`~~
Justifications:
- The code doesn't ultimately apply the `true`/`false` return values.
- The methods already pass around a `CommandReturnObject`, typically
with a `result` parameter.
- Each command return object already contains:
- A more precise status
- The error code(s) that apply to that status
Part 2 refactors the `CommandObject::DoExecute(...)` method.
- See
[https://github.com/llvm/llvm-project/pull/69991](https://github.com/llvm/llvm-project/pull/69991)
rdar://117378957
The `po` alias now matches the behavior of the `expression` command when
it can apply a Fix-It to an expression.
Modifications:
- Added `m_fixed_expression` to the `CommandObjectDWIMPrint` class as a
`protected` member that stores the post-Fix-It expression, just like in
the `CommandObjectExpression` class.
- Converted messages to present tense.
- Added test cases that confirm a Fix-It for a C++ expression for both
`po` and `expressions`.
rdar://115317419
The `po` alias now matches the behavior of the `expression` command when
it can apply a Fix-It to an expression.
Modifications:
- Added `m_fixed_expression` to the `CommandObjectDWIMPrint` class as a
`protected` member that stores the post-Fix-It expression, just like in
the `CommandObjectExpression` class.
- Converted messages to present tense.
- Added test cases that confirm a Fix-It for a C++ expression for both
`po` and `expressions`.
rdar://115317419
Co-authored-by: Pete Lawrence <plawrence@apple.com>
## Description
This pull request adds a new `stop-at-user-entry` option to LLDB
`process launch` command, allowing users to launch a process and pause
execution at the entry point of the program (for C-based languages,
`main` function).
## Motivation
This option provides a convenient way to begin debugging a program by
launching it and breaking at the desired entry point.
## Changes Made
- Added `stop-at-user-entry` option to `Options.td` and the
corresponding case in `CommandOptionsProcessLaunch.cpp` (short option is
'm')
- Implemented the `GetUserEntryPointName` method in the Language plugins
available at the moment.
- Declared the `CreateBreakpointAtUserEntry` method in the Target API.
- Created a Shell test for the command:
`command-process-launch-user-entry.test`.
## Usage
`process launch --stop-at-user-entry` or `process launch -m` launches
the process and pauses execution at the entry point of the program.
Prior to this the command would simply crash when run on a running
process.
Of the three register commands, "info" was the only one missing these
requirements. On some level it makes sense because you're not going to
read a value or modify anything, but practically I think lldb assumes
any time you're going to access register related stuff, the process
should be paused.
I noticed this debugging with a remote gdb stub, so I've recreated that
scenario using attach in a new test case.