While testing the new macros for working with 48 bit containers,
I faced a weird problem:
32 + 16: 0x2ef6e8da 0x79e60000
48: 0xffffe8da + 0x79e60000
All the bits starting from the 32nd were getting set to 1 in 9 out of
10 cases.
The debug output showed:
p[0]: 0x00002e0000000000
p[1]: 0x00002ef600000000
p[2]: 0xffffffffe8000000
p[3]: 0xffffffffe8da0000
p[4]: 0xffffffffe8da7900
p[5]: 0xffffffffe8da79e6
The value becomes garbage after the third OR, i.e. on `p[2] << 24`.
When the 31st bit is 1 and there's no explicit cast to an unsigned
type, `p[2] << 24` is evaluated as a signed int and gets sign-extended
when widened to 64 bits for the OR, so `e8000000` becomes
`ffffffffe8000000` and corrupts the result.
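
For illustration, here is a minimal userspace sketch of the same
promotion pitfall; the u64/u8 typedefs and the 0xe8 test byte are
stand-ins for this demo, not kernel code:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef uint8_t u8;

int main(void)
{
	u8 b = 0xe8;

	/* b promotes to signed int, so b << 24 sets bit 31; widening
	 * that negative int to u64 sign-extends it. */
	u64 bad  = b << 24;
	u64 good = (u64)b << 24;

	printf("bad:  0x%016llx\n", (unsigned long long)bad);
	printf("good: 0x%016llx\n", (unsigned long long)good);
	return 0;
}

This reproduces the `ffffffffe8000000` pattern from the debug output
above, while the casted variant keeps the upper half zeroed.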
Cast @p[2] to u64 as well to avoid this. Now:
32 + 16: 0x7ef6a490 0xddc10000
48: 0x7ef6a490 + 0xddc10000
p[0]: 0x00007e0000000000
p[1]: 0x00007ef600000000
p[2]: 0x00007ef6a4000000
p[3]: 0x00007ef6a4900000
p[4]: 0x00007ef6a490dd00
p[5]: 0x00007ef6a490ddc1
Fixes: c0086d34552f ("asm-generic: introduce be48 unaligned accessors")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Link: https://lore.kernel.org/r/20220412215220.75677-1-alobakin@pm.me
Signed-off-by: Jens Axboe <axboe@kernel.dk>
 static inline u64 __get_unaligned_be48(const u8 *p)
 {
-	return (u64)p[0] << 40 | (u64)p[1] << 32 | p[2] << 24 |
+	return (u64)p[0] << 40 | (u64)p[1] << 32 | (u64)p[2] << 24 |
 	       p[3] << 16 | p[4] << 8 | p[5];
 }
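
As a sanity check, the fixed helper can be exercised in a standalone
round trip; the __put_unaligned_be48() store below is reconstructed
for this test and is an assumption, not part of the patch:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef uint8_t u8;

/* Fixed accessor, as in the hunk above. */
static inline u64 __get_unaligned_be48(const u8 *p)
{
	return (u64)p[0] << 40 | (u64)p[1] << 32 | (u64)p[2] << 24 |
	       p[3] << 16 | p[4] << 8 | p[5];
}

/* Big-endian 48-bit store, reconstructed here for the round trip. */
static inline void __put_unaligned_be48(const u64 val, u8 *p)
{
	p[0] = val >> 40;
	p[1] = val >> 32;
	p[2] = val >> 24;
	p[3] = val >> 16;
	p[4] = val >> 8;
	p[5] = val;
}

int main(void)
{
	u8 buf[6];
	u64 v = 0x7ef6a490ddc1ULL;	/* value from the log above */

	__put_unaligned_be48(v, buf);
	/* With the cast in place, bit 31 of the intermediate result
	 * no longer leaks into bits 32-47. */
	printf("0x%012llx\n",
	       (unsigned long long)__get_unaligned_be48(buf));
	return 0;
}

This prints 0x7ef6a490ddc1, matching the `48:` line in the fixed
output above.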