[cdt-dev] DSF GdbMemoryBlock endianness detection proposal
- From: John Dallaway <john@xxxxxxxxxxxxxxx>
- Date: Mon, 05 Mar 2012 09:29:19 +0000
- Delivered-to: email@example.com
- User-agent: Thunderbird 184.108.40.206 (X11/20111109)
I've been looking at the DSF-GDB memory block implementation with a view
to presenting memory words with correct endianness in the Memory view by
default. First, a couple of observations:
a) The current implementation of MIDataReadMemoryInfo.parseMemoryLines()
always creates MemoryByte objects with the default flags (including
ENDIANESS_KNOWN and excluding BIG_ENDIAN). This is clearly not correct
when the target is big endian.
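To make observation (a) concrete, here is a minimal stand-in for the MemoryByte flag handling as parseMemoryLines() effectively applies it today. The class and flag values below are illustrative, not the real org.eclipse.debug.core.model.MemoryByte API:

```java
// Hypothetical stand-in for MemoryByte flag handling; the numeric flag
// values are illustrative assumptions, not the Eclipse API's own values.
public class MemoryByteFlags {
    public static final byte READABLE        = 0x01;
    public static final byte WRITABLE        = 0x02;
    public static final byte ENDIANESS_KNOWN = 0x04; // Eclipse's own spelling
    public static final byte BIG_ENDIAN      = 0x08;

    // Default flags as parseMemoryLines() uses them today: endianness is
    // marked "known", but BIG_ENDIAN is never set.
    public static byte defaultFlags() {
        return (byte) (READABLE | WRITABLE | ENDIANESS_KNOWN);
    }

    public static boolean isBigEndian(byte flags) {
        return (flags & ENDIANESS_KNOWN) != 0 && (flags & BIG_ENDIAN) != 0;
    }
}
```

With these defaults every byte reads as little endian, regardless of the target.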
b) The CDT traditional memory rendering code uses the endianness of the
zeroth byte of a MemoryBlock to determine whether to default to a big
endian or little endian presentation. This behaviour seems reasonable.
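The selection rule in (b) can be sketched as follows, again with illustrative flag values rather than the real Eclipse constants:

```java
// Sketch of the rendering default described above: inspect the flags of
// the zeroth byte of the block to pick the presentation. The flag values
// are illustrative assumptions.
public class EndiannessDefault {
    static final byte ENDIANESS_KNOWN = 0x04;
    static final byte BIG_ENDIAN      = 0x08;

    /** Returns true for a big-endian default presentation; falls back to
     *  little endian when byte zero's endianness is unknown. */
    public static boolean defaultBigEndian(byte[] blockFlags) {
        if (blockFlags.length == 0) {
            return false;
        }
        byte b0 = blockFlags[0];
        return (b0 & ENDIANESS_KNOWN) != 0 && (b0 & BIG_ENDIAN) != 0;
    }
}
```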
My proposal is to implement alternative DSF MIDataReadMemory and
MIDataReadMemoryInfo constructors that take a MemoryByte flag as a
parameter. This flag would then be used to set the MemoryByte object
endianness correctly. The original behaviour would be preserved when
using the original constructors, so there would be no compatibility issues.
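The overloading pattern I have in mind looks roughly like this (the class and field names are simplified stand-ins for MIDataReadMemory/MIDataReadMemoryInfo, not the real signatures):

```java
// Minimal sketch of the proposed constructor overloading; names and flag
// values are illustrative stand-ins, not the actual DSF classes.
public class ReadMemoryInfo {
    static final byte ENDIANESS_KNOWN = 0x04; // illustrative values
    static final byte BIG_ENDIAN      = 0x08;
    static final byte DEFAULT_FLAGS   = ENDIANESS_KNOWN;

    private final byte wordFlags;

    // Original constructor: behaviour unchanged, default flags applied,
    // so existing callers are unaffected.
    public ReadMemoryInfo() {
        this(DEFAULT_FLAGS);
    }

    // New constructor: the caller supplies the MemoryByte flags to stamp
    // on every parsed byte, e.g. with BIG_ENDIAN set for a big-endian target.
    public ReadMemoryInfo(byte wordFlags) {
        this.wordFlags = wordFlags;
    }

    public byte getWordFlags() {
        return wordFlags;
    }
}
```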
The target endianness could be retrieved at the DSF MIMemory service
level using a GDB "show endian" command just before the first memory
block read and then passed to MIDataReadMemory using the new
constructor. The endianness information could be cached within the DSF
MIMemory object so that it would be retrieved only once per session.
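The caching side of this could be as simple as the sketch below. The queryTargetEndianness() helper is hypothetical; in the real service it would issue GDB's "show endian" command and parse the reply:

```java
// Sketch of per-session caching of the target endianness. The
// queryTargetEndianness() helper is a hypothetical stand-in for the
// MI-level "show endian" round trip.
public class EndiannessCache {
    private Boolean bigEndian; // null until the first memory block read

    public boolean isTargetBigEndian() {
        if (bigEndian == null) {
            bigEndian = queryTargetEndianness(); // at most once per session
        }
        return bigEndian;
    }

    // Stand-in for sending "show endian" and parsing GDB's reply, e.g.
    // "The target endianness is set automatically (currently little endian)".
    protected boolean queryTargetEndianness() {
        String reply =
            "The target endianness is set automatically (currently little endian)";
        return reply.contains("big endian");
    }
}
```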
Would a patch along these lines be acceptable to the CDT committers?