
What is the effect of primitive data types on the nature of a programming language?


1 reply to this topic

#1 radhika12

  • Members
  • 9 posts
  • OFFLINE
  • Local time:01:41 PM

Posted 30 January 2015 - 12:32 AM

Why are languages that have primitive data types in them considered not to be purely object-oriented? Why is it said that any language that depends on such primitive data types involves the machine architecture, and therefore cannot be called a pure object-oriented programming language?

I am unable to understand how the size of a primitive data type is actually decided by the underlying hardware. This confuses me, because it is said that the size of primitive data types depends entirely on the system architecture. So how are these primitive data types represented in the system? In what form do they become part of the architecture? And how does an instruction, when executed, know whether it is operating on a float data type or an int data type? Are there some codes defined for such data types, according to which the whole execution takes place? Please guide me; I am very confused.

Edit: Topic moved from Internal Hardware to the more appropriate forum. ~ Animal



#2 Taikoh

  • Members
  • 63 posts
  • OFFLINE
  • Gender:Female
  • Location:In front of a laptop
  • Local time:03:11 AM

Posted 19 February 2015 - 01:14 AM

To quote The C Programming Language, 2nd Edition by Brian Kernighan and Dennis Ritchie (p. 36):

"...int will normally be the natural size for a particular machine."

 

This means that the size of the int data type is determined by the natural word size of the CPU that the code is compiled for.
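You can see this for yourself with the sizeof operator, which reports how many bytes the compiler picked for each type. Here's a minimal sketch -- the values in the comments are what a typical modern 64-bit desktop would print, but the whole point is that your machine may print something different:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof reports, in bytes, the width the compiler chose for
         * each type on this particular machine. */
        printf("short: %zu bytes\n", sizeof(short)); /* typically 2 */
        printf("int:   %zu bytes\n", sizeof(int));   /* typically 4 */
        printf("long:  %zu bytes\n", sizeof(long));  /* 4 or 8, platform-dependent */
        return 0;
    }

Compile and run that on a 32-bit system and then on a 64-bit one, and you will likely see different numbers for long -- that's the architecture dependence in action.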

"short is often 16 bits, long 32 bits, and int either 16 or 32 bits."

 

This is saying that the short data type is usually 16 bits (2 bytes) and the long data type 32 bits (4 bytes), while an int can be the same width as either one. This is where things get a bit confusing for people who aren't familiar with low-level programming.
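To be precise, the C standard doesn't fix exact sizes at all: it only guarantees minimum ranges (short and int must hold at least 16-bit values, long at least 32-bit values) and requires that short is never wider than int, which is never wider than long. The standard <limits.h> header tells you what your platform actually provides -- here's a small sketch:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* These macros expand to the real limits on the machine the
         * program was compiled for; the standard only promises the
         * minimums SHRT_MAX >= 32767, INT_MAX >= 32767, and
         * LONG_MAX >= 2147483647. */
        printf("SHRT_MAX = %d\n",  SHRT_MAX);
        printf("INT_MAX  = %d\n",  INT_MAX);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }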

 

You might notice that a long is often 32 bits. You may have heard "32-bit" when referring to a CPU's architecture--usually 32-bit or 64-bit. That number is the CPU's word size, and it is what the compiler uses to decide how wide an int should be.

 

You see, back when C was first introduced, most computers ran on 16-bit or 32-bit CPUs, which is why an int can be 16-bit OR 32-bit (at least in the C programming language). I'm not sure whether 32-bit vs 64-bit will matter in your situation (it usually doesn't), but just remember that the size of certain data types depends on the CPU's architecture.
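By the way, if you ever need an integer whose size does not change from one architecture to the next, C99 added fixed-width types in the standard <stdint.h> header. A quick sketch (the PRId32/PRId64 macros come from <inttypes.h> and expand to the right printf format for the current platform):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* int32_t and int64_t are exactly 32 and 64 bits wide on every
         * platform that provides them, whatever the CPU's word size. */
        int32_t a = 2147483647;            /* largest 32-bit signed value */
        int64_t b = 9223372036854775807;   /* largest 64-bit signed value */

        printf("a = %" PRId32 "\n", a);
        printf("b = %" PRId64 "\n", b);
        return 0;
    }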


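As for the other part of your question -- how an instruction "knows" whether it is operating on a float or an int -- the short answer is that it doesn't. Memory just holds bits, with no type tags attached. The compiler looks at the declared types at compile time and emits different machine instructions accordingly (an integer add for ints, a floating-point add for floats, and so on). Here's a small sketch that shows the same bits viewed both ways; it assumes 32-bit IEEE 754 floats and a 32-bit unsigned int, which holds on essentially all modern desktop hardware:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = 1.0f;
        unsigned int bits;

        /* Copy the raw bytes of the float into an integer variable.
         * The bytes themselves don't change -- only the declared type
         * through which we look at them does. */
        memcpy(&bits, &f, sizeof bits);

        /* The same 32 bits, printed two ways: the compiler emitted a
         * floating-point conversion for the first line and an integer
         * one for the second. The bits carry no type information. */
        printf("as a float:  %f\n", f);        /* 1.000000 */
        printf("as raw bits: 0x%08x\n", bits); /* 0x3f800000 under IEEE 754 */
        return 0;
    }

So there is no code attached to a value at run time saying "this is a float"; the distinction lives entirely in the instructions the compiler chose to emit.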
If you need any additional clarification, please feel free to let me know what you don't understand. Data types whose sizes vary from one platform to another can definitely be confusing at first.  :thumbup2:
