Right shift on signed types is not well defined

The shift operators (<< and >>) move the bits of a word to the left or to the right. From such a description it does not follow directly what should happen to the bits at the word boundaries. There are several commonly used strategies:

  • logical:
    Bits that go beyond the word boundary are dropped and the new positions are filled with zeroes.
  • ones:
    Bits that go beyond the word boundary are dropped and the new positions are filled with ones.
  • arithmetic:
    1. Shift is `logical’ for positive values.
    2. For negative values right shift is `ones’ and
    3. left shift is `logical’ but always sets the highest order bit (sign bit) to 1.
  • circular: Bits that go beyond the word boundary are reinserted at the other end.

`Arithmetic’ shift takes its name from the fact that it implements an integer multiplication or division by a power of two.
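As a small illustration, here is a sketch of that correspondence; the last line assumes a two’s complement platform whose compiler happens to implement an arithmetic right shift on signed int, which, as we will see, is not guaranteed:

#include <stdio.h>

int main(void) {
  /* left shift multiplies by a power of two */
  printf("%d\n", 3 << 2);    /* 12 */
  /* right shift of a positive value divides by a power of two */
  printf("%d\n", 12 >> 2);   /* 3 */
  /* if the implementation uses an arithmetic shift, a negative value is
     divided rounding toward negative infinity: -13 >> 2 gives -4 here,
     whereas -13 / 4 is -3 since C99; the result below is
     implementation-defined */
  printf("%d\n", -13 >> 2);
}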

For unsigned integer types C prescribes that the shift operators are `logical’. So, e.g., (~0U >> 1) results in a word of all ones except for the highest order bit, which is 0. The picture darkens when it comes to signed types: here the compiler implementor may choose between a `logical’ and an `arithmetic’ shift. Basically this means that the use of the right shift operator on signed values is not portable unless very special care is taken. We can detect which shift is implemented with the simple expression ((~0 >> 1) < 0):

  • If the shift is `logical’ the highest order bit of the left side of the comparison is 0 so the result is positive.
  • If the shift is `arithmetic’ the highest order bit of the left side is 1 so the result is negative.
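Wrapped into a small test, this could look as follows; a minimal sketch, the macro name is just for illustration:

#include <stdio.h>

/* evaluates to 1 if signed right shift is arithmetic, 0 if it is logical */
#define SHIFT_IS_ARITHMETIC ((~0 >> 1) < 0)

int main(void) {
  if (SHIFT_IS_ARITHMETIC)
    printf("signed right shift is arithmetic\n");
  else
    printf("signed right shift is logical\n");
}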

Observe in particular that in the case of an arithmetic shift (~0 >> 1) == ~0. So this operator has two fixed points in that case, 0 and -1. If we want a portable shift we may choose the following operation:

#define LOGSHIFTR(x, c) (((x) >> (c)) & ~(~0 << (sizeof(int)*CHAR_BIT - (c))))

This produces a mask with the correct number of 1’s in the low order bits and performs a bitwise AND with the result of the compiler’s shift. Observe:

  • This supposes that x is of type int; a type-independent definition would be much more complicated.
  • c is evaluated twice, so don’t pass an expression with side effects.
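A short usage sketch of the macro (CHAR_BIT requires limits.h; the value in the comment assumes a 32-bit int):

#include <limits.h>
#include <stdio.h>

#define LOGSHIFTR(x, c) (((x) >> (c)) & ~(~0 << (sizeof(int)*CHAR_BIT - (c))))

int main(void) {
  /* for 32-bit int this prints 2147483647 (0x7FFFFFFF), regardless of
     whether the compiler shifts signed values logically or arithmetically */
  printf("%d\n", LOGSHIFTR(-1, 1));
}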

Here is a C99 program to test your compiler.

#include <limits.h>
#include <stdio.h>

/* prototypes declared extern, so that the inline definitions below
   also emit external definitions (C99 inline model) */
extern
int logshiftr(int x, unsigned c);

extern
int arishiftr(int x, unsigned c);

/* a word with the c highest order bits set to 1 */
#define HIGHONES(c) ((signed)(~(unsigned)0 << (sizeof(signed)*CHAR_BIT - (c))))
/* a word with the c highest order bits set to 0 */
#define HIGHZEROS(c) (~HIGHONES(c))

/* right shift that always fills the vacated high order bits with 0 */
inline
int logshiftr(int x, unsigned c) {
  return (x >> c) & HIGHZEROS(c);
}

/* right shift that always replicates the sign bit into the vacated bits */
inline
int arishiftr(int x, unsigned c) {
  return logshiftr(x, c) ^ (x < 0 ? HIGHONES(c) : 0);
}

int main(int argc, char* argv[]) {
  int b = argc > 1 ? argc : 0;
  int val[11u] = { b, b + 1, b - 1, b + 2, b - 2, b + 3, b - 3, b + 4, b - 4, b + 5, b - 5};
  printf("shift\tvalue\t>>\tlogical\tarith\n");
  for (unsigned sh = 1; sh < 3; ++sh)
    for (unsigned i = 0; i < 11u; ++i)
      printf("%+d\t%+d\t%+d\t%+d\t%+d\n",
             sh,
             val[i],
             (val[i] >> sh),
             logshiftr(val[i], sh),
             arishiftr(val[i], sh));
}

How many bits has a byte?

I recently stumbled upon this seemingly silly question when trying to write a C macro that depends on the width of a type.

So everybody knows the short answer, 8, as is also expressed in the commonly used French word for byte: `octet’. But, surprisingly to me, the long answer is more complicated than that: it depends on the historical context and on the norms that you apply, and even then you have to dig through a lot of text to get to it.

C has a definition of the type char, and the language specification basically uses the terms char and byte interchangeably.

Historically, there have been platforms whose chars (and thus bytes) had a width different from 8; in particular, some early computers coded printable characters with only 6 bits and had a word size of 36 bits. Later, other manufacturers found it more convenient to make 16-bit words the least addressable unit. C90 didn’t want to exclude such platforms and just stated

The number of bits in a char is defined in the macro CHAR_BIT
CHAR_BIT can be any value but must be at least 8

and even C99 still just states:

A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to 2^CHAR_BIT - 1.
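That relation is easy to inspect on a given platform; a minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void) {
  /* UCHAR_MAX must equal 2^CHAR_BIT - 1, i.e. all CHAR_BIT bits set */
  printf("CHAR_BIT  = %d\n", CHAR_BIT);
  printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);
}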

But then, on the page for the include file stdint.h it states

The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

So far so good: if there is an int8_t, we can deduce that sizeof(int8_t) must be 1 and CHAR_BIT must be 8, since an object of exactly 8 bits with no padding must occupy one byte, and a byte has at least 8 bits. But then the POSIX standard says

The following types are required:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t

This forces CHAR_BIT to be 8, and it basically also implies that, at least for the small-width types, the representation must be two’s complement on any POSIX-compatible platform.
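So on a POSIX system a compile-time check along the following lines should never trigger; a minimal sketch, the error message is just illustrative:

#include <limits.h>
#include <stdint.h>

/* POSIX requires int8_t, which in turn forces bytes to be octets */
#if CHAR_BIT != 8
# error "bytes on this platform are not octets"
#endif

int8_t probe; /* will not compile where int8_t is not provided */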

More reading:

  • Some forum discussion
  • The POSIX specifications of stdint.h and limits.h