This is an ongoing series of commits that are reformatting our Python
code. Reformatting is done with `black` (23.1.0).
If you end up having problems merging this commit because you have made
changes to a Python file, the best way to handle that is to run `git
checkout --ours <yourfile>` and then reformat it with `black`.
RFC: https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style
Differential revision: https://reviews.llvm.org/D151460
arm64e platforms.
On arm64e-capable Apple platforms, the system libraries are always
arm64e, but applications often are arm64. When a target is created
from a file, LLDB recognizes it as an arm64 target, but debugserver will
still (technically correctly) report the process as being arm64e. For
consistency, set the target to arm64 here.
rdar://92248684
Differential Revision: https://reviews.llvm.org/D133069
When opening core files (and also in some other situations) we could end
up with two vdso modules. This could happen because the vdso module is
very special, and over the years, we have accumulated various ways to
load it.
In D10800, we added one mechanism for loading it, which took the form of
a generic load-from-memory capability. Unfortunately, loading an ELF
file from memory is not possible (because the loader never loads the
entire file), and our attempts to do so were causing crashes. So, in
D34352, we
partially reverted D10800 and implemented a custom mechanism specific to
the vdso.
Unfortunately, enough of D10800 remained such that, under the right
circumstances, it could end up loading a second (non-functional) copy of
the vdso module. This happened when the process plugin did not support
the extended MemoryRegionInfo query (added in D22219 to work around a
different bug), which meant that the loader plugin was not able to
recognise that the linux-vdso.so.1 module (this is what the loader calls
it) is in fact the same as the [vdso] module (the name used in
/proc/$PID/maps) we loaded before. This typically happened in a core
file, as they don't store this kind of information.
This patch fixes the issue by completing the revert of D10800 -- the
memory loading code is removed completely. It also reduces the scope of
the hackaround introduced in D22219 -- it isn't completely sound and is
only relevant for fairly old (but still supported) versions of Android.
I moved the memory loading logic into the wasm dynamic loader, which has
since appeared and relies on this feature (it even has a test). As
far as I can tell loading wasm modules from memory is possible and
reliable. MachO memory loading is not affected by this patch, as it uses
a completely different code path.
Since the scenarios/patches I described came without test cases, I have
created two new gdb-client test cases for them. They're not
particularly readable, but right now, this is the best way we can
simulate the behavior (bugs) of a particular dynamic linker.
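For illustration, a skeletal gdb-client test looks roughly like the
following. This is not one of the tests added here; the class names and
the exact responder hooks are my assumptions about lldbsuite's
gdbclientutils infrastructure.

    from lldbsuite.test.gdbclientutils import MockGDBServerResponder
    from lldbsuite.test.lldbgdbclient import GDBRemoteTestBase

    class MyResponder(MockGDBServerResponder):
        # Pretend to be a stub which does not implement the extended
        # memory region query (an empty reply means "unsupported").
        def qMemoryRegionInfo(self, addr):
            return ""

    class TestVdsoHandling(GDBRemoteTestBase):
        def test_no_duplicate_vdso(self):
            self.server.responder = MyResponder()
            target = self.dbg.CreateTarget("")
            process = self.connect(target)
            # ...assertions about the module list would go here...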
Differential Revision: https://reviews.llvm.org/D122660
This adds a new platform class, whose job is to enable running
(debugging) executables under qemu.
(For general information about qemu, I recommend reading the RFC thread
on lldb-dev
<https://lists.llvm.org/pipermail/lldb-dev/2021-October/017106.html>.)
This initial patch implements the necessary boilerplate as well as the
minimal amount of functionality needed to actually be able to do
something useful (which, in this case, means debugging a fully statically
linked executable).
The knobs necessary to emulate dynamically linked programs, as well as
to control other aspects of qemu operation (the emulated cpu, for
instance) will be added in subsequent patches. Same goes for the ability
to automatically bind to the executables of the emulated architecture.
Currently only two settings are available:
- architecture: the architecture that we should emulate
- emulator-path: the path to the emulator
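As a sketch of how these settings would be exercised (the full setting
names, the "qemu-user" platform name and the paths are assumptions made
for illustration):

    import lldb

    dbg = lldb.SBDebugger.Create()
    # Configure the two knobs described above (assumed setting names).
    dbg.HandleCommand(
        "settings set platform.plugin.qemu-user.architecture arm64")
    dbg.HandleCommand(
        "settings set platform.plugin.qemu-user.emulator-path /usr/bin/qemu-aarch64")
    # Then select the platform before creating a target.
    dbg.HandleCommand("platform select qemu-user")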
Even though this patch is relatively small, it is not without subtleties
that are worth calling out explicitly:
- named sockets: qemu supports TCP and unix socket connections, both of
them in the "forward connect" mode (qemu listening, lldb connecting).
Forward TCP connections are impossible to realise in a race-free way,
which is why I chose unix sockets: their longer, more structured names
guarantee that there are no collisions between concurrent connection
attempts (see the sketch after this list).
- the above means that this code will not work on Windows. I don't think
that's an issue since user-mode qemu does not support Windows anyway.
- Right now, I am leaving the code enabled for Windows, but maybe it
would be better to disable it (OTOH, disabling it means Windows
developers can't check that they don't break it).
- qemu-user also does not support macOS, so one could contemplate
disabling it there too. However, macOS does support named sockets, so
one can even run the (mock) qemu tests there, and I think it'd be a
shame to lose that.
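To make the race-freedom argument concrete, here is a minimal,
self-contained sketch (plain Python, not the actual platform code) of
the forward-connect handshake over a freshly created unix socket path:

    import os
    import socket
    import tempfile

    # The socket name lives in a directory this session just created, so
    # no concurrent session can collide with it.
    socket_dir = tempfile.mkdtemp(prefix="qemu-gdb-")
    socket_path = os.path.join(socket_dir, "stub.sock")

    # The listening side (qemu, in the real setup) binds the unique path...
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(socket_path)
    listener.listen(1)

    # ...and the connecting side (lldb) reaches it by the same name. With
    # forward TCP, the chosen port could be grabbed by an unrelated
    # process between selection and bind; a fresh unix path cannot.
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(socket_path)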
Differential Revision: https://reviews.llvm.org/D114509
This is a preparatory commit to enable mocking of qemu startup. That
will involve running the mock server in a separate process, so there's
no need for multithreading.
Initialization is moved from the start function into the constructor
(which can then take an actual socket instead of a class), and the run
method is made public.
Depends on D114156.
Differential Revision: https://reviews.llvm.org/D114157
We were using the closing of the client socket as a way to terminate the
handler thread, but this kind of concurrent access to the same socket is not
safe. It also complicates running the handler without a dedicated thread
(next patch).
Instead, here I add an explicit way for a packet handler to request
termination. Waiting for lldb to terminate the connection would almost
be sufficient, but in the pty test we want to keep the pty open so we
can examine its state. The ability to disconnect at an arbitrary point may
be useful for testing other aspects of lldb functionality as well.
The way this works is that now each packet handler can optionally return
a list of responses (instead of just one). One of those responses (it
only makes sense for it to be the last one) can be a special
RESPONSE_DISCONNECT object, which triggers a disconnection (via a new
TerminateConnectionException).
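A rough sketch of what such a handler can look like (the responder class
and the choice of the D/detach packet here are purely illustrative; the
list-of-responses and RESPONSE_DISCONNECT behaviour are the mechanism
described above):

    from lldbsuite.test.gdbclientutils import MockGDBServerResponder

    class DisconnectingResponder(MockGDBServerResponder):
        # Acknowledge the detach packet, then ask the mock server to drop
        # the connection: return a list of responses whose last element
        # is the special RESPONSE_DISCONNECT marker.
        def D(self, packet):
            return ["OK", MockGDBServerResponder.RESPONSE_DISCONNECT]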
As the mock server now cleans up the connection whenever it disconnects,
the pty test needs to explicitly dup(2) the descriptors in order to
inspect the post-disconnect state.
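The dup(2) trick itself is nothing lldb-specific; in Python it amounts to
something like this (standalone illustration, not the test's actual code):

    import os
    import pty

    # Duplicate the pty descriptors before the connection (and with it
    # the original descriptors) is torn down, so their state can still be
    # inspected afterwards.
    primary_fd, secondary_fd = pty.openpty()
    saved_primary = os.dup(primary_fd)
    saved_secondary = os.dup(secondary_fd)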
Differential Revision: https://reviews.llvm.org/D114156
This infrastructure has proven its worth, so give it a more
prominent place.
My immediate motivation for this is the desire to reuse this
infrastructure for qemu platform testing, but I believe this move makes
sense independently of that. Moving this code to the packages tree will
allow us to add more structure to the gdb client tests -- currently they
are all crammed into the same test folder as that was the only way they
could access this code.
I'm splitting the code into two parts while moving it. The first one
contains just the generic gdb protocol wrappers, while the other one
contains the unit test glue. The reason for that is that for qemu
testing, I need to run the gdb code in a separate process, so I will
only be using the first part there.
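In terms of imports, the split roughly corresponds to the following
(module and class names reflect my understanding of the layout and
should be treated as assumptions):

    # Generic gdb-remote protocol wrappers: usable on their own, e.g.
    # from a separate server process.
    from lldbsuite.test.gdbclientutils import MockGDBServer, MockGDBServerResponder

    # Unit-test glue: only needed by tests running inside the lldb test suite.
    from lldbsuite.test.lldbgdbclient import GDBRemoteTestBase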
Differential Revision: https://reviews.llvm.org/D113893