Jacob Lalonde f869d6efee
[LLDB][Minidump]Update MinidumpFileBuilder to read and write in chunks (#129307)
I recently received an internal error report that LLDB was OOM'ing when
creating a Minidump. In my 64-bit refactor we made the decision to
acquire buffers the size of the largest memory region so we could read
all of its contents in one call. This made error handling very simple
(and the coding simpler for me!), but it had the trade-off of large
allocations when huge pages were enabled.

This patch is one I've had on the back burner for a while: we can read
and write the Minidump memory regions in discrete chunks, as we
already do when writing to disk (see the sketch after the next
paragraph).

I had to refactor the error handling a bit, but the behavior remains
the same: we make a best-effort attempt to read as much of the memory
region as possible, but fail immediately if we hit an error writing to
disk. I did not add new tests for this because our existing test suite
is quite good, but I did manually verify that a few Minidumps could not
read beyond the red_zone.
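
For illustration, here is a minimal standalone sketch of that chunked
loop. The names (`WriteRegionInChunks`, `ReadMemoryFn`, `kMaxChunkSize`)
are hypothetical stand-ins, not the actual MinidumpFileBuilder API; the
sketch only shows the shape of the best-effort-read / fail-fast-write
behavior under the 128 MB chunk cap described here.

```
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <functional>
#include <vector>

// Hypothetical stand-in for the process-read plumbing; returns the
// number of bytes actually read (0 on failure).
using ReadMemoryFn =
    std::function<size_t(uint64_t addr, void *buf, size_t len)>;

// Maximum bytes held in memory at once (the 128 MB chunk cap).
constexpr size_t kMaxChunkSize = 128 * 1024 * 1024;

// Copy one memory region into the minidump in discrete chunks.
// Best effort on reads: a short or failed read ends the region early.
// Fail fast on writes: any stream error aborts immediately.
bool WriteRegionInChunks(ReadMemoryFn read_memory, uint64_t addr,
                         size_t region_size, std::ofstream &out) {
  // Allocate min(region, chunk cap) instead of the full region size.
  std::vector<uint8_t> buffer(std::min(region_size, kMaxChunkSize));
  size_t remaining = region_size;
  while (remaining > 0) {
    const size_t to_read = std::min(remaining, buffer.size());
    const size_t bytes_read = read_memory(addr, buffer.data(), to_read);
    if (bytes_read == 0)
      break; // best effort: keep what we got, stop reading this region
    out.write(reinterpret_cast<const char *>(buffer.data()), bytes_read);
    if (!out)
      return false; // disk write failed: abort immediately
    addr += bytes_read;
    remaining -= bytes_read;
    if (bytes_read < to_read)
      break; // partial read: the region is truncated from here on
  }
  return true;
}
```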

```
(lldb) reg read $sp
     rsp = 0x00007fffffffc3b0
(lldb) p/x 0x00007fffffffc3b0 - 128
(long) 0x00007fffffffc330
(lldb) memory read 0x00007fffffffc330
0x7fffffffc330: 60 c3 ff ff ff 7f 00 00 60 cd ff ff ff 7f 00 00  `.......`.......
0x7fffffffc340: 60 c3 ff ff ff 7f 00 00 65 e6 26 00 00 00 00 00  `.......e.&.....
(lldb) memory read 0x00007fffffffc329
error: could not parse memory info (Success!)
```

I'm not sure how to quantify the memory improvement other than to note
that we previously allocated a buffer the size of the largest region
regardless of how much we actually read, so a 2 GB unreadable region
would cause a 2 GB allocation even if we were only reading 4096 KB. Now
we allocate the smaller of the range size and the 128 MB max chunk
size.
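
Using the hypothetical names from the sketch above, the per-region
allocation is now bounded like this:

```
// Allocate min(region size, chunk cap) rather than the full region:
// a 2 GB region now needs only a 128 MB buffer.
const size_t buf_size = std::min<size_t>(region_size, kMaxChunkSize);
```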
2025-04-08 09:47:52 -07:00