Most commands were adding argument completion handling by themselves,
resulting in a lot of unnecessary boilerplate. In many cases, this could
be done generically given the argument definition and the entries in the
g_argument_table.
I'm going to address this in a couple of passes. In this first pass, I
added handling for commands that have only one argument list, with one
argument type, either single or repeated, and changed all the commands
that are of this sort (and don't have other bits of business in their
completers).
I also added some missing connections between arg types and completions
to the table, and added a RemoteFilename and RemotePath to use in places
where we were using the Remote completers. Those arguments used to say
they were "files" but they were in fact remote files.
I also added a module arg type to use where we were using the module
completer. In that case, we should call the argument "module".
Updates:
- The previous patch changed the default behavior to not load dwos in
  `DWARFUnit`:
  ~~`SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info = false);`~~
  `SymbolFileDWARFDwo *GetDwoSymbolFile(bool load_all_debug_info = true);`
- This broke some lldb-shell tests (see
  https://green.lab.llvm.org/green/view/LLDB/job/as-lldb-cmake/16273/):
  - TestDebugInfoSize.py
    - With symbols on-demand, by default `statistics dump` only reports
      skeleton debug info size.
    - `statistics dump -f` will load all dwos; debug info = skeleton debug
      info + all dwo debug info.
Currently, running `statistics dump` will trigger lldb to load debug info
that's not yet loaded (e.g. dwo files). This results in a delay in the
command's return, which can be disruptive.
This patch also adds a new option, `--load-all-debug-info`, asking
statistics to dump all possible debug info, which will force loading all
available debug info if it is not yet loaded.
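For example, a usage sketch of the new option:
```
(lldb) statistics dump --load-all-debug-info
```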
The Python documentation [1] says that `PyImport_AppendInittab` should
be called before `Py_Initialize()`. Starting with Python 3.12, this is
enforced with a fatal error:
Fatal Python error: PyImport_AppendInittab: PyImport_AppendInittab()
may not be called after Py_Initialize()
This commit ensures we only modify the table of built-in modules if
Python hasn't been initialized. For Python embedded in LLDB, that means
this happens exactly once, before the first call to `Py_Initialize`;
subsequent calls become no-ops. However, when lldb is imported in an
existing Python interpreter, Python will have already been initialized,
but by definition, the lldb module will already have been loaded, so
it's safe to skip adding it (again).
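A minimal sketch of the guard, assuming a `_lldb` module name and the SWIG-generated `PyInit__lldb` entry point used by lldb's Python bindings:
```cpp
#include <Python.h>

extern "C" PyObject *PyInit__lldb(void); // SWIG-generated module init (assumed name)

static void AddLLDBModuleToInittabIfNeeded() {
  // Modifying the inittab is only legal before the interpreter exists;
  // on Python >= 3.12, doing it after Py_Initialize() is a fatal error.
  if (!Py_IsInitialized())
    PyImport_AppendInittab("_lldb", PyInit__lldb);
}
```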
This fixes #70453.
[1] https://docs.python.org/3.12/c-api/import.html#c.PyImport_AppendInittab
This patch attempts to fix lookup in class template specializations.
The first fixed problem is that during type lookup, `DeclContextGetName`
has been dropping template arguments, so when such a name was compared
against a name in `DW_AT_name`, which contains template arguments, false
mismatches occurred.
The second fixed problem is that LLDB's printing policy hasn't been
matching Clang's printing policy when it comes to integral non-type
template arguments. This again caused some false mismatches during type
lookup, because Clang puts e.g. `3U` in debug info for class
specializations, but LLDB has been expecting just `3`. This patch brings
the printing policy in line with what Clang does.
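For illustration (a hypothetical type), this is the kind of specialization where the names diverged:
```cpp
// Clang emits DW_AT_name "Foo<3U>" for this specialization (the integral
// non-type argument keeps its unsigned suffix), while lldb's old printing
// policy produced "Foo<3>", so the two names never compared equal.
template <unsigned N> struct Foo { int data[N]; };

int main() {
  Foo<3U> f; // debug info names this type "Foo<3U>"
  return sizeof(f.data) == 3 * sizeof(int) ? 0 : 1;
}
```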
Per discussions from https://github.com/llvm/llvm-project/pull/81026, it
was decided that having a class that manages a map of progress reports
would be beneficial in order to categorize them. This class is a part of
the overall `Progress` class and utilizes a map that keeps a count of
how many times a progress report category has been sent. This would be
used with the new debugger broadcast bit added in
https://github.com/llvm/llvm-project/pull/81169 so that a user listening
with that bit will receive grouped progress reports.
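A rough sketch of the idea (illustrative only, not the actual `Progress` code): the map keys on the report's category and counts how many reports in that category are active.
```cpp
#include <cstdint>
#include <mutex>
#include <string>
#include <unordered_map>

class ProgressManager {
public:
  // The first report in a category begins a grouped progress event.
  void Increment(const std::string &category) {
    std::lock_guard<std::mutex> guard(m_mutex);
    m_categories[category]++;
  }
  // The last report in a category ends the grouped progress event.
  void Decrement(const std::string &category) {
    std::lock_guard<std::mutex> guard(m_mutex);
    if (--m_categories[category] == 0)
      m_categories.erase(category);
  }

private:
  std::mutex m_mutex;
  std::unordered_map<std::string, uint64_t> m_categories;
};
```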
Adds a comment to indicate intention of a piece of value printing code.
I was initially surprised to see this code (distilled for emphasis):
```cpp
if (str.empty()) {
  if (style == eValueObjectRepresentationStyleValue)
    str = GetSummaryAsCString();
  else if (style == eValueObjectRepresentationStyleSummary)
    str = GetValueAsCString();
}
```
My first thought was "is this a bug?", but I realized it was likely intentional. This
change adds a comment to indicate yes, this is intentional.
On arm64 machines, when there is a hardware breakpoint or watchpoint
set, and lldb has instruction-stepped a thread, and then done a
Process::Resume, we will sometimes receive an extra "instruction step
completed" mach exception and the pc has not advanced. From a user's
perspective, they hit Continue and lldb stops again at the same spot.
From the testsuite's perspective, this has been a constant source of
testsuite failures for any test using hardware watchpoints and
breakpoints, the arm64 CI bots seem especially good at hitting this
issue.
Jim and I have been slowly looking at this for a few months now, and
finally I decided to try to detect this situation in lldb and silently
resume the process again when it happens.
We were already detecting this "got an insn-step finished mach exception
but this thread was not instruction stepping" combination in
StopInfoMachException where we take the mach exception and create a
StopInfo object for it. We had a lot of logging we used to understand
the failure as it was hit on the bots in assert builds.
This patch adds a new case to `Thread::GetPrivateStopInfo()` to call the
StopInfo's (new) `IsContinueInterrupted()` method. In
StopInfoMachException, where we previously had logging for assert
builds, I now note it in an ivar, and when
`Thread::GetPrivateStopInfo()` asks if this has happened, we check for
the combination of events where this comes up: we have a hardware
breakpoint or watchpoint, we were not instruction stepping this thread
but got an insn-step mach exception, and the pc is the same as the
previous stop's pc. And in that case, `Thread::GetPrivateStopInfo()` returns no
StopInfo -- indicating that this thread would like to resume execution.
The `Thread` object has two StackFrameLists, `m_curr_frames_sp` and
`m_prev_frames_sp`. When a thread resumes execution, we move
`m_curr_frames_sp` in to `m_prev_frames_sp` and when it stops executing,
we use `m_prev_frames_sp` to seed the new `m_curr_frames_sp` if most of
the stack is the same as before.
In this same location, I now save the Thread's RegisterContext::GetPC
into an ivar, `m_prev_framezero_pc`. StopInfoMachException needs this
information to check all of the conditions I outlined above for
`IsContinueInterrupted`.
This has passed exhaustive testing and we do not have any testsuite
failures for hardware watchpoints and breakpoints due to this kernel bug
with the patch in place. In focusing on these tests for thousands of
runs, I have found two other uncommon race conditions for the
TestConcurrent* tests on arm64. TestConcurrentManyBreakpoints.py (which
uses no hardware watchpoint/breakpoints) will sometimes only have 99
breakpoints when it expects 100, and any of the concurrent tests using
the shared harness (I've seen it in
TestConcurrentWatchBreakDelay.py,
TestConcurrentTwoBreakpointsOneSignal.py,
TestConcurrentSignalDelayWatch.py) can fail when the test harness checks
that there is only one thread still running at the end, and it finds two
-- one of them under pthread_exit / pthread_terminate. Both of these
failures happen on github main without my changes, and with my changes -
they are unrelated race conditions in these tests, and I'm sure I'll be
looking into them at some point if they hit the CI bots with frequency.
On my computer, these failures happen roughly 0.3-0.5% of the time, but
the CI bots do have different timing.
I was refactoring something else but ran into this function. It was
somewhat confusing to read through and understand, but it boils down to
the following:
- First we try `OptionArgParser::ToBoolean`. If that works, then we're
good to go.
- Second, we try `llvm::to_integer` to see if it's an integer. If it
parses to 0 or 1, we're good.
- Failing both of the steps above means we cannot parse it into a
bool.
Instead of having an integer out param and a bool return value, the
interface is better served with an optional<bool> -- Either it parses
into true or false, or you get back nothing (nullopt).
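A minimal sketch of the reshaped interface (the helper name is hypothetical), assuming lldb's existing `OptionArgParser::ToBoolean(llvm::StringRef, bool, bool *)` and `llvm::to_integer` helpers:
```cpp
#include "lldb/Interpreter/OptionArgParser.h"
#include "llvm/ADT/StringExtras.h"
#include <cstdint>
#include <optional>

using namespace lldb_private;

static std::optional<bool> ParseBoolean(llvm::StringRef option_arg) {
  bool parse_error = false;
  const bool value =
      OptionArgParser::ToBoolean(option_arg, false, &parse_error);
  if (!parse_error)
    return value; // step 1: a recognized true/false spelling
  int64_t as_int = 0;
  if (llvm::to_integer(option_arg, as_int) && (as_int == 0 || as_int == 1))
    return as_int == 1; // step 2: the integers 0 and 1
  return std::nullopt;  // neither step worked: not parseable as a bool
}
```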
This commit changes DebugNamesDWARFIndex so that it now overrides
`GetFullyQualifiedType` and attempts to use DW_IDX_parent, when
available, to speed up such queries. When this type of information is
not available, the base-class implementation is used.
With this commit, we now achieve the 4x speedups reported in [1].
[1]:
https://discourse.llvm.org/t/rfc-improve-dwarf-5-debug-names-type-lookup-parsing-speed/74151/38
This allows you to specify options and arguments and their definitions
and then have lldb handle the completions, help, etc. in the same way
that lldb does for its parsed commands internally.
This feature has some design considerations as well as the code, so I've
also set up an RFC, but I did this one first and will put the RFC
address in here once I've pushed it...
Note, the lldb "ParsedCommand interface" doesn't actually do all the
work that it should. For instance, saying the type of an option that has
a completer doesn't automatically hook up the completer, and ditto for
argument values. We also do almost no work to verify that the arguments
match their definition, or do auto-completion for them. This patch
allows you to make a command that's bug-for-bug compatible with built-in
ones, but I didn't want to stall it on getting the auto-command checking
to work all the way correctly.
As an overall design note, my primary goal here was to make an interface
that worked well in the script language. For that I needed, for
instance, to have a property-based way to get all the option values that
were specified. It was much more convenient to do that by making a
fairly bare-bones C interface to define the options and arguments of a
command, and set their values, and then wrap that in a Python class
(installed along with the other bits of the lldb python module) which
you can then derive from to make your new command. This approach will
also make it easier to experiment.
See the file test_commands.py in the test case for examples of how this
works.
The algorithm to find the DW_OP_entry_value requires you to find the
nearest non-inlined frame. It did that by counting the number of stack
frames so that it could use that as a loop stopper.
That is unnecessary and inefficient. Unnecessary because GetFrameAtIndex
will return a null frame when you step past the oldest frame, so you
already have the "got to the end" signal without counting all the stack
frames.
And counting all the stack frames can be expensive.
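Conceptually, the search can just walk frames until the list runs out; a sketch (hypothetical helper name, not the exact lldb code):
```cpp
#include "lldb/Target/StackFrame.h"
#include "lldb/Target/Thread.h"

// Find the nearest non-inlined frame without precomputing the frame count;
// a null frame signals we have walked past the oldest frame.
lldb::StackFrameSP FindNearestNonInlinedFrame(lldb_private::Thread &thread) {
  for (uint32_t idx = 0;; ++idx) {
    lldb::StackFrameSP frame_sp = thread.GetStackFrameAtIndex(idx);
    if (!frame_sp)
      return {}; // reached the end of the stack: no more frames
    if (!frame_sp->IsInlined())
      return frame_sp; // the nearest non-inlined frame
  }
}
```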
I get a small but fairly steady stream of crash reports which I can only
explain by ValueObjectPrinter trying to access its m_valobj field, and
finding it NULL. I have never been able to reproduce any of these, and
the reports show a state too long after the fact to know what went
wrong.
I've read through this section of lldb a bunch of times trying to figure
out how this could happen, but haven't ever found anything actually
wrong that could cause this. OTOH, ValueObjectPrinter is somewhat sloppy
about how it handles the ValueObject it is printing.
a) lldb allows you to make a ValueObjectPrinter with a null incoming
ValueObject. However, there's no affordance to set the ValueObject in
the printer after the fact, and it doesn't really make sense to do that.
So I changed the ValueObjectPrinter APIs to take a ValueObject
reference, rather than a pointer. All the places that make
ValueObjectPrinters already check the non-null status of their
ValueObjects before making the ValueObjectPrinter, so sadly, I didn't
find the bug, but this will enforce the intent.
b) The next step in printing the ValueObject is deciding which of the
associated DynamicValue/SyntheticValue we are actually printing (based
on the use_dynamic and use_synthetic settings in the original
ValueObject). This was put in a pointer by GetMostSpecializedValue, but
most of the printer code just accessed the pointer, and it was hard to
reason out whether we were guaranteed to always call this before using
m_valobj. So far as I could see, we always do (sigh, didn't find the bug
there either), but this was way too hard to reason about.
In fact, we figure out once which ValueObject we're going to print and
don't change that through the life of the printer. So I changed this to
both set the "most specialized value" in the constructor, and then to
always access it through GetMostSpecializedValue(). That makes it easier
to reason about the use of this ValueObject as well.
This is an NFC change, all it does is make the code easier to reason
about.
LLDB has a setting (symbols.enable-background-lookup) that calls
dsymForUUID on a background thread for images as they appear in the
current backtrace. Originally, the laziness of only looking up symbols
for images in the backtrace existed to bring the number of dsymForUUID
calls down to a manageable number.
Users have requested the same functionality but blocking. This gives
them the same user experience as enabling dsymForUUID globally, but
without the massive upfront cost of having to download all the images,
the majority of which they'll likely not need.
This patch renames the setting to have a more generic name
(symbols.auto-download) and changes its values from a boolean to an
enum. Users can now specify "off", "background" and "foreground". The
default remains "off" although I'll probably change that in the near
future.
Refactors logic in `ParseInternal` that was previously calling
`GetFormatFromCString` twice, once with `partial_match_ok` set to false,
and the second time set to true.
With this change, lldb formats (i.e. `%@`, `%S`, etc.) are checked first.
If a format is not one of those, then `GetFormatFromCString` is called
once, and now always checks for partial matches.
This formatter
https://github.com/llvm/llvm-project/pull/78609
was originally passing the signed seconds (which can refer to times in
the past) with an unsigned printf formatter, and had tests that expected
to see negative values from the printf which always failed on macOS. I'm
not clear how they ever passed on any platform.
Fix the printf to print seconds as a signed value, and re-enable the
tests.
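For illustration, a self-contained demonstration of the bug class (not the formatter's actual code):
```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
  int64_t seconds = -86400; // one day before the epoch
  // The unsigned conversion wraps the negative value to a huge number...
  printf("wrong: %" PRIu64 "\n", (uint64_t)seconds);
  // ...while the signed conversion prints the negative value the tests expect.
  printf("right: %" PRId64 "\n", seconds);
  return 0;
}
```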
mach-o object files never have separate debug info, and in a big app
there can be quite a large number of object files, so even a few stats
per object file can slow launches considerably.
This patch avoids this search for Mach-o symbol files of object type.
I don't have a way to test this; the only effect is that we no longer do
a bunch of stats that weren't going to do any good anyway.
A user found a crash when they would do code like:
```
(lldb) script
>>> target = lldb.SBTarget()
>>> lldb.debugger.SetSelectedTarget(target)
```
We were not checking if the target was valid, and it caused a crash.
debugserver on arm64 devices can manage both Byte Address Select
watchpoints (1-8 bytes) and MASK watchpoints (8 bytes-2 gigabytes). This
adds a SupportedWatchpointTypes key to the QSupported response from
debugserver with a list of these, so lldb can take full advantage of
them when creating larger regions with a single hardware watchpoint.
Also add documentation for this, and two other lldb extensions, to the
lldb-gdb-remote.txt documentation.
Re-enable TestLargeWatchpoint.py on Darwin systems when testing with the
in-tree built debugserver. I can remove the "in-tree built debugserver"
requirement in the future when this new key is handled by an Xcode
debugserver.
We have a Python script that needs to locate the coredump path during
debugging so that we can retrieve certain metadata files associated with
it. Currently, there is no API for this.
This patch adds a new `SBProcess::GetCoreFile()` to retrieve target dump
file spec used for dump debugging. Note: this is different from the main
executable module spec. To achieve this, the patch hoists m_core_file
into PostMortemProcess for sharing.
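A usage sketch of the new API (the paths are hypothetical):
```cpp
#include "lldb/API/SBDebugger.h"
#include "lldb/API/SBFileSpec.h"
#include "lldb/API/SBProcess.h"
#include "lldb/API/SBTarget.h"
#include <cstdio>

void PrintCorePath(lldb::SBDebugger &debugger) {
  lldb::SBTarget target = debugger.CreateTarget("/path/to/binary");
  lldb::SBProcess process = target.LoadCore("/path/to/core.dmp");
  // New in this patch: the file spec of the core file being debugged.
  lldb::SBFileSpec core_spec = process.GetCoreFile();
  printf("core file: %s/%s\n", core_spec.GetDirectory(),
         core_spec.GetFilename());
}
```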
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
Adding command interpreter statistics into the "statistics dump" command
so that we can track command usage frequency for telemetry purposes.
This is useful to answer questions like which lldb commands are used
most frequently across all our users.
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
The implementation of `FormatCache::Entry
&FormatCache::GetEntry(ConstString)` is effectively a duplication of
`std::map::operator[]`. This change deletes `GetEntry` and replaces its
use with `operator[]`.
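For illustration, the find-or-insert behavior `operator[]` already provides (a generic demo, not the lldb types):
```cpp
#include <map>
#include <string>

struct Entry {
  int hits = 0;
};

int main() {
  std::map<std::string, Entry> cache;
  // operator[] default-constructs and inserts an Entry when the key is
  // missing -- the same find-or-insert that GetEntry reimplemented by hand.
  cache["int"].hits++;
  cache["int"].hits++;
  return cache["int"].hits == 2 ? 0 : 1;
}
```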
Reverted due to an internally discovered lld crash caused by the
underlying StringMap changes, which turned out to be an existing lld bug
that got tickled by the StringMap changes. That's addressed in
dee8786f70a3d62b639113343fa36ef55bdbad63 so let's have another go with
this change.
Original commit message:
lldb was rehashing the string 3 times (once to determine which StringMap
to use, once to query the StringMap, once to insert) on insertion (twice
on successful lookup).
This patch allows lldb to benefit from hash improvements in LLVM
(from djbHash to xxh3).
Though further changes would be needed to cache this value to disk - we
shouldn't rely on the StringMap hash remaining the same in the future,
and this value should not be serialized to disk. If we want to cache
this value, StringMap should take a hashing template parameter to allow
for a fixed hash to be requested.
This reverts commit 5bc1adff69315dcef670e9fcbe04067b5d5963fb.
Effectively reapplying the original 2e197602305be18b963928e6ae024a004a95af6d.
When using split DWARF with .dwp files we had an issue where sometimes
the DWO file within the .dwp file would be parsed _before_ the skeleton
compile unit. The DWO file expects to always be able to get a link back
to the skeleton compile unit. Prior to this fix, the skeleton compile
unit backlink would only get set if the unit headers for the main
executable had been parsed _and_ the unit DIE was parsed in that
DWARFUnit. This patch ensures that we can always get the skeleton
compile unit for a DWO file by adding a function:
```
DWARFCompileUnit *DWARFUnit::GetSkeletonUnit();
```
Prior to this fix DWARFUnit had some unsafe accessors that were used to
store two different things:
```
void *DWARFUnit::GetUserData() const;
void DWARFUnit::SetUserData(void *d);
```
This was used by SymbolFileDWARF to cache the `lldb_private::CompileUnit
*` for a SymbolFileDWARF and was also used to store the `DWARFUnit *`
for SymbolFileDWARFDwo. This patch clears up this unsafe usage by adding
two separate accessors and ivars for this:
```
lldb_private::CompileUnit *DWARFUnit::GetLLDBCompUnit() const { return m_lldb_cu; }
void DWARFUnit::SetLLDBCompUnit(lldb_private::CompileUnit *cu) { m_lldb_cu = cu; }
DWARFCompileUnit *DWARFUnit::GetSkeletonUnit();
void DWARFUnit::SetSkeletonUnit(DWARFUnit *skeleton_unit);
```
This will stop anyone from calling `void *DWARFUnit::GetUserData()
const;` and casting the value to an incorrect value.
A crash could occur in `SymbolFileDWARF::GetCompUnitForDWARFCompUnit()`
when the `non_dwo_cu`, which is a backlink to the skeleton compile unit,
was not set and was NULL. There is an assert() in the code, but if
asserts are disabled the program will crash anyway, because the code
looked like:
```
if (dwarf_cu.IsDWOUnit()) {
  DWARFCompileUnit *non_dwo_cu =
      static_cast<DWARFCompileUnit *>(dwarf_cu.GetUserData());
  assert(non_dwo_cu);
  return non_dwo_cu->GetSymbolFileDWARF().GetCompUnitForDWARFCompUnit(
      *non_dwo_cu);
}
```
This is now fixed by calling `DWARFUnit::GetSkeletonUnit()`, which will
correctly always get the skeleton compile unit for a DWO file regardless
of whether the skeleton unit headers have been parsed or whether the
skeleton unit DIE has been parsed yet.
To implement the ability to get the skeleton compile units, I added code
to DWARFDebugInfo.cpp/.h that makes a map of DWO ID -> skeleton
DWARFUnit * that gets filled in for DWARF5 when the unit headers are
parsed. `DWARFUnit::GetSkeletonUnit()` will end up parsing the unit
headers of the main executable to fill in this map if that hasn't
already been done. For DWARF4 and earlier we maintain a separate map
that gets filled in only for any DWARF4 compile units that have
DW_AT_dwo_id or DW_AT_gnu_dwo_id attributes. This is more expensive, so
it is done lazily and in a thread-safe manner. This allows us to be as
efficient as possible when using DWARF5 and also be backward compatible
with DWARF4 + split DWARF.
There was also an issue that stopped type lookups from succeeding in
`DWARFDIE SymbolFileDWARF::GetDIE(const DIERef &die_ref)`, which
directly accessed the `m_dwp_symfile` ivar without calling the accessor
function that could end up needing to locate and load the .dwp file.
This was fixed by calling the `SymbolFileDWARF::GetDwpSymbolFile()`
accessor to ensure we always get a valid value back if we can find the
.dwp file. Prior to this fix, behavior depended on which APIs were
called: if any API that loaded the .dwp file had been called, things
worked fine, but they might not if no API had caused it to get loaded.
When we had valid debug info indexes and the lldb index cache was
enabled, this issue would show up more often.
I modified an existing test case to test that all of this works
correctly and doesn't crash.
`statistics dump` command relies on `SymbolFile::GetDebugInfoSize()` to
get total debug info size.
The current implementation is missing debug info for split DWARF
scenarios, which require getting debug info from separate dwo/dwp files.
This patch fixes this issue for split dwarf by parsing debug info from
dwp/dwo.
New yaml tests are added.
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch
the entire doubleword (8 bytes). The newly default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
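To make the power-of-2 behavior concrete, here is a minimal sketch of covering a request with a single naturally aligned power-of-2 region (not the actual WatchpointAlgorithms code, which can also split a request across several resources):
```cpp
#include <cstdint>
#include <cstdio>

struct Region {
  uint64_t addr;
  uint64_t size;
};

// Smallest power-of-2 sized, size-aligned region containing [addr, addr+size).
Region CoverWithPowerOf2(uint64_t addr, uint64_t size) {
  uint64_t region_size = 1;
  while (true) {
    uint64_t base = addr & ~(region_size - 1); // align down to region_size
    if (addr + size <= base + region_size)
      return {base, region_size};
    region_size <<= 1; // double until the request fits
  }
}

int main() {
  // Watching bytes 3-5 of a granule rounds up to the whole doubleword.
  Region r = CoverWithPowerOf2(0x1003, 3);
  printf("0x%llx, %llu bytes\n", (unsigned long long)r.addr,
         (unsigned long long)r.size); // prints: 0x1000, 8 bytes
}
```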
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enabling this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there's interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two 128-byte watchpoints today,
one at 0x1280 and one at 0x1300. But it could be done with a 16-byte
watchpoint at 0x12e0 and a 128-byte one at 0x1300, which would have fewer
false positives/private stops. As we try refining this one, it's helpful
to have a collection of tests to make sure things don't regress.
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet. I might try
that as a standalone patch, I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal, especially when we may be
watching a larger memory range than the user requested: if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
Re-landing this patch after fixing two undefined behaviors in
WatchpointAlgorithms found by UBSan and by failures on different
CI bots.
rdar://108234227
The purpose of m_being_created in these classes was to prevent
broadcasting an event related to these Breakpoints during the creation
of the breakpoint (i.e. in the constructor). In Breakpoint and
Watchpoint, m_being_created had no effect. That is to say, removing it
does not change behavior.
However, BreakpointLocation does still use m_being_created. In the
constructor, `SetThreadID` is called, which broadcasts an event only if
`m_being_created` is false. Instead of having this logic be roundabout,
the constructor now calls `SetThreadIDInternal`, which actually changes
the thread ID. `SetThreadID` also calls `SetThreadIDInternal`, in
addition to broadcasting a changed event.
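A self-contained illustration of the pattern (a hypothetical class, not the BreakpointLocation code):
```cpp
#include <cstdint>
#include <cstdio>

class Location {
public:
  // The constructor uses the internal setter directly, so no event is
  // broadcast while the object is still being created.
  explicit Location(uint64_t tid) { SetThreadIDInternal(tid); }

  // The public setter changes the ID and then always broadcasts.
  void SetThreadID(uint64_t tid) {
    SetThreadIDInternal(tid);
    BroadcastChanged();
  }

private:
  void SetThreadIDInternal(uint64_t tid) { m_tid = tid; }
  void BroadcastChanged() {
    printf("thread ID changed to %llu\n", (unsigned long long)m_tid);
  }
  uint64_t m_tid = 0;
};

int main() {
  Location loc(1);    // constructed silently
  loc.SetThreadID(2); // prints the change event
}
```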