Posted 19 February 2015 - 01:14 AM
To quote The C Programming Language, 2nd Edition by Brian Kernighan and Dennis Ritchie (p. 36):
"...int will normally be the natural size for a particular machine."
This means that the size of the int data type is determined by the natural word size, i.e. the architecture, of the CPU running the code.
"short is often 16 bits, long 32 bits, and int either 16 or 32 bits."
This says that the short data type is usually 16 bits (2 bytes) and the long data type is usually 32 bits (4 bytes). An int, however, can be the same size as either a short or a long, which is where things get a bit confusing for people who aren't familiar with low-level programming. (Strictly speaking, the C standard only guarantees minimums: short and int must be at least 16 bits, and long at least 32 bits.)
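If you want to see what your own machine uses, here's a quick sketch in standard C (just sizeof and <limits.h>) that prints the sizes for whatever compiler and platform you build it on:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof reports each type's size in bytes on this compiler/platform */
        printf("short: %zu bytes (SHRT_MAX = %d)\n",  sizeof(short), SHRT_MAX);
        printf("int:   %zu bytes (INT_MAX  = %d)\n",  sizeof(int),   INT_MAX);
        printf("long:  %zu bytes (LONG_MAX = %ld)\n", sizeof(long),  LONG_MAX);
        return 0;
    }

On a typical desktop you'll usually see 2 / 4 / 4 or 2 / 4 / 8 bytes, but only those minimum ranges are actually promised.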
You might notice that long is 32 bits. You've probably heard "32-bit" used to describe a CPU's architecture (usually 32-bit or 64-bit these days), and that word size is what typically determines the width of the int data type.
You see, back when C was first introduced, most machines ran on 16-bit or 32-bit CPUs, which is why int can be either 16 or 32 bits in C. Interestingly, on most modern 64-bit systems int stays at 32 bits, and it's long that varies: 64 bits on 64-bit Linux and OS X, but still 32 bits on 64-bit Windows. Whether that matters in your situation depends on what you're doing (it usually doesn't), but just remember that the sizes of these types depend on the platform.
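If the exact size ever does matter (file formats, network protocols, that sort of thing), C99 added fixed-width types in <stdint.h> that are the same on every architecture. A minimal sketch, assuming a C99 compiler:

    #include <stdio.h>
    #include <inttypes.h>   /* fixed-width types plus printf macros for them */

    int main(void)
    {
        int16_t a = 32767;                /* exactly 16 bits on every platform */
        int32_t b = 2147483647;           /* exactly 32 bits on every platform */
        int64_t c = 9223372036854775807;  /* exactly 64 bits on every platform */

        printf("a = %" PRId16 ", b = %" PRId32 ", c = %" PRId64 "\n", a, b, c);
        return 0;
    }

With those, the underlying CPU can be 16-, 32-, or 64-bit and your variables keep the same width.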
If you need any additional clarification, feel free to let me know what you don't understand. Platform-dependent type sizes can definitely be confusing at first.