I was recently writing C++ code (which I don’t do much any more) for a little microcontroller to talk to an Allegro A6281 LED driver IC. The chip takes one 32-bit word to set the driver PWM duty cycle, and a different 32-bit word to set some other properties. None of the subfields in those words align on 8-bit boundaries (note the MSBit is on the right):

The four bytes of this word are sent over a SPI bus to the LED driver IC.

I had initially tried to implement this with bit fields in C++ (well, I tried a union of the two word layouts, but I could never get that to pack into four bytes):

struct Color {
    uint8_t     ignore  :   1;
    uint8_t     address :   1;
    uint16_t    red     :   10;
    uint16_t    green   :   10;
    uint16_t    blue    :   10;
} __attribute__((packed));

I was worried about endianness issues too, but figured I could solve those. Unfortunately, it turns out C/C++ don’t guarantee that bits are packed tightly or in order (which forces me to question the point of them at all), and no combination of pragmas I could find would tell the compiler to do the right thing.

Sure, this could be accomplished with shifting and masking, but in an embedded environment, the low-level performance of splitting bits mid-word and copying them to other arbitrary bits in another word might be suboptimal.

I wonder if Swift couldn’t be enhanced with a more useful notion of bit fields. I’d love to be able to declare a struct as fitting into a particular number of bytes, define its arbitrary bit fields in order, assign to and read those bit fields’ values, and finally get the individual bytes of the structure out in a specific order (this would mean specifying the endianness of the resulting layout in some way, either on the type or when reading/writing).
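In the meantime, the workaround is to do the packing by hand. Here is a minimal Swift sketch of that manual shift-and-mask approach; the field offsets, the handling of the two control bits, and the byte order are illustrative assumptions, not the actual A6281 layout:

struct ManualColor {
    var red: UInt16   // only the low 10 bits are used
    var green: UInt16 // only the low 10 bits are used
    var blue: UInt16  // only the low 10 bits are used

    // Pack the fields into one 32-bit word by hand.
    // Offsets here are assumed for illustration, not taken from the datasheet.
    var packedWord: UInt32 {
        var word: UInt32 = 0
        word |= UInt32(blue  & 0x3FF)        // bits 0...9
        word |= UInt32(green & 0x3FF) << 10  // bits 10...19
        word |= UInt32(red   & 0x3FF) << 20  // bits 20...29
        // the two control bits (address/ignore) would occupy bits 30 and 31
        return word
    }

    // The four bytes to clock out over SPI, most significant byte first
    // (the wire order actually required would come from the datasheet).
    var spiBytes: [UInt8] {
        let w = packedWord
        return [UInt8(truncatingIfNeeded: w >> 24),
                UInt8(truncatingIfNeeded: w >> 16),
                UInt8(truncatingIfNeeded: w >> 8),
                UInt8(truncatingIfNeeded: w)]
    }
}

Even in this tiny case it is easy to get an offset or mask wrong, which is exactly the burden a built-in bit-field feature could lift.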

Expressing this at a high level would make it easier for the developer, obviously, but would also make it much easier for the compiler to apply any special bit-level instructions the target architecture might have. If the programmer is forced to write their own masking and shifting, it might make it impossible to take advantage of such instructions.

This would have benefits for memory-mapped IO as well, as many embedded control registers are broken up into bit fields.

1 Like

I'd try:

struct Color {
    uint32_t    ignore  :   1;
    uint32_t    address :   1;
    uint32_t    red     :   10;
    uint32_t    green   :   10;
    uint32_t    blue    :   10;
};

without the packed attribute.

yeah, IIRC bitfields were never portable and when portability was required I always resorted to manual shifts and masks.

I vaguely remember seeing a macro-based approach here where you basically specify the offset and size (in bits?) of each field of a struct.
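Not a macro, but the same offset-plus-width idea can be sketched in Swift with a couple of helpers (the names and the example field below are made up for illustration):

extension UInt32 {
    // Read `width` bits starting at bit `offset`.
    func bits(at offset: Int, width: Int) -> UInt32 {
        (self >> offset) & ((1 << width) - 1)
    }

    // Return a copy with the `width` bits at `offset` replaced by `value`.
    func settingBits(at offset: Int, width: Int, to value: UInt32) -> UInt32 {
        let mask: UInt32 = ((1 << width) - 1) << offset
        return (self & ~mask) | ((value << offset) & mask)
    }
}

// Example: a hypothetical 10-bit "green" field at bit offset 10.
let word = UInt32(0).settingBits(at: 10, width: 10, to: 0x2AB)
let green = word.bits(at: 10, width: 10)   // 0x2AB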

2 Likes

What does a C/C++ bitfield do that shifting and masking does not that could impact performance? (Answer: absolutely nothing--code generated from bitfields can pretty much only be worse than explicit shifting and masking. The only good reason to use bitfields is if the format specification is given in terms of one.)

It is interesting to think about ways in which Swift can do better than C[++] bitfields. See Swift MMIO's Bitfield for the beginnings of one take on that.

4 Likes

FWIW, many years ago when I did embedded stuff, it was pretty common for compilers to do a mediocre job on explicit bit-shifting code. e.g. they might emit a word load followed by integer rotation and then bitmask, when the target ISA has a perfectly good single instruction to load & mask (and maybe even shift too).

One might hope that compilers are smarter these days, but, you never know.

If one's going to rely on the compiler being smart, then presumably the best way to rely on those smarts is to use the language features (like bitfield structs) dedicated to them.

Perhaps more importantly, though, doing manual bit-shifting and masking is error-prone and verbose. It's obviously much better to just directly access members, and let the compiler worry about that stuff.

(yes, I know, only mostly - in some cases reads are volatile etc, but then strictly-speaking you have to use assembly to ensure the correct behaviour there anyway)

1 Like

I wasn't looking for help with the C++ code. Using uint32_t didn't change the behavior.

It's irrelevant what C/C++ does. I'm talking about what Swift could do. And I disagree with the assertion that “code generated from bitfields can pretty much only be worse than explicit shifting and masking.” There could be a single instruction that masks, shifts, and even flips endianness all at once. But at the very least, the compiler could take that burden off me.

Indeed! Fortunately compilers have absolutely zero issue doing this. For instance, if I use the Swift MMIO bitfield mechanism and do, say:

func getSomeBits(_ x: Int) -> Int {
    x[bits: 11 ..< 23]
}

swiftc -O generates:

_$s4bits11getSomeBitsyS2iF:
  ubfx	x0, x0, #11, #12 // extract 12 bits beginning from bit 11
  ret
6 Likes

Well, you mentioned you had trouble packing that into 4 bytes (and unions?!), and I merely suggested how to do it properly (and more easily) with C/C++ bit fields, without relying on the "packed" attribute.

And if such an instruction exists, I'd expect it to be used by the compiler for my manual shift + mask implementation!

This part I agree with. Maybe not necessarily in the compiler, now that we can achieve this functionality with user-defined macros.

2 Likes

+1, the compiler does a great job of optimizing this. Swift-MMIO includes FileCheck tests to ensure MMIO operations are maximally reduced by the compiler. This even includes optimizing through type-safe accessors:

@Register(bitWidth: 16)
struct R16 {
  @ReadWrite(bits: 0..<1, as: Bool.self)
  var lo: LO
  @ReadWrite(bits: 15..<16, as: Bool.self)
  var hi: HI
}
let r16 = Register<R16>(unsafeAddress: 0x1000)

public func main16() {
  // CHECK-LABEL: void @"$s4main6main16yyF"()
  r16.modify {
    $0.lo = false
    $0.hi = true
  }
  // CHECK: %0 = load volatile i16
  // CHECK-NEXT: %1 = and i16 %0, 32766
  // CHECK-NEXT: %2 = or i16 %1, -32768
  // CHECK-NEXT: store volatile i16 %2
}

// another example:
public func main16() {
  // CHECK-LABEL: void @"$s4main6main16yyF"()
  r16.modify {
    $0.lo = true
    $0.hi = true
  }
  // CHECK: %0 = load volatile i16
  // CHECK-NEXT: %1 = or i16 %0, -32767
  // CHECK-NEXT: store volatile i16 %1
}
4 Likes

Many years ago, most embedded compilers were written from scratch by a small team working at the chip manufacturer. Those compilers always had lots of funny little extensions and an exciting relationship with the language standard and, indeed, with the very concept of language semantics. These days, people mostly just add new backends to long-established compiler frameworks instead of reimplementing the C parser for the fifth time. Usually the backend doesn’t even know that a particular memory access was part of a bit-field. It’s just a different world.

3 Likes

What is providing the comments in that code block you posted? I guess I better dive into Swift-MMIO!

If you're referring to the CHECK statements, those are how you can write assertions using FileCheck (an LLVM tool). This test asserts that the Swift code optimizes to specific LLVM IR.

e.g. I'll intersperse comments into an example to explain:

public func main16() {
  // x: Assert we are in the function with the mangled name "$s4main6main16yyF"
  // CHECK-LABEL: void @"$s4main6main16yyF"()
  r16.modify {
    $0.lo = true
    $0.hi = true
  }
  // x: Assert the function ~starts with a volatile load - value %0 (1)
  // CHECK: %0 = load volatile i16
  // x: Assert the following IR op is _exactly_ 1 bitwise `or` which sets both bits #0 and #15 to `1` - value %1
  // CHECK-NEXT: %1 = or i16 %0, -32767
  // x: Assert the next IR instruction volatile stores back the modified value %1
  // CHECK-NEXT: store volatile i16 %1
}

(1): LLVM Language Reference Manual — LLVM 19.0.0git documentation

3 Likes

Right, I realized after posting that that was something you added to the file. Thanks for the explanation of how it’s used.

2 Likes

Is this the correct link? It links to a section entitled “Volatile Memory Accesses.”