In C99, §6.5.3.4:

2 The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type. ... 4 The value of the result is implementation-defined, and its type (an unsigned integer type) is size_t, defined in <stddef.h> (and other headers).
In C++14, §5.3.3:

1 The sizeof operator yields the number of bytes in the object representation of its operand. ... The result of sizeof applied to any other fundamental type (3.9.1) is implementation-defined.
The only guaranteed values are sizeof(char), sizeof(unsigned char), and sizeof(signed char), each of which is one.
However, "the number of bytes in the object representation" seems pretty iron-clad to me. For example, in C99 §6.2.6.1:
4 Values stored in non-bit-field objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes. ...
So why is it implementation-defined if it seems pretty defined?
Many of you seem to be misinterpreting my question. I never claimed that:
A) The sizes of types are defined or the same on all systems,
B) implementation-defined means it can return "random values"
What I'm getting at here is that n × CHAR_BIT is a fixed formula. The formula itself can't change between implementations. Yes, an int may be 4 bytes or 8 bytes. I get that. But across all implementations, the value must be n × CHAR_BIT.