
Primitive

In computer science, a primitive is a fundamental data type that cannot be broken down into a simpler data type. For example, an integer is a primitive data type, while an array, which can store multiple values, is not.

Some programming languages support more data types than others, and not all languages implement data types the same way. However, most high-level languages share several common primitives.

Java, for instance, has eight primitive data types:

  1. boolean – a single true or false value (conceptually requires only one bit)
  2. byte – 8-bit signed integer (-128 to 127)
  3. short – 16-bit signed integer (-32,768 to 32,767)
  4. int – 32-bit signed integer (-2³¹ to 2³¹ - 1)
  5. long – 64-bit signed integer (-2⁶³ to 2⁶³ - 1)
  6. float – 32-bit floating point number
  7. double – 64-bit floating point number
  8. char – 16-bit Unicode character

The string data type is generally considered non-primitive since it is stored as an array of characters.
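
As a rough illustration, the sketch below declares a variable of each of Java's eight primitive types, plus a String for contrast. The class name and the specific values are arbitrary examples chosen for this sketch.

    // Primitives.java – one variable of each of Java's eight primitive types
    public class Primitives {
        public static void main(String[] args) {
            boolean flag = true;                // true or false
            byte smallNumber = 100;             // -128 to 127
            short mediumNumber = 30000;         // -32,768 to 32,767
            int count = 2000000000;             // -2^31 to 2^31 - 1
            long bigCount = 9000000000L;        // -2^63 to 2^63 - 1 (note the L suffix)
            float ratio = 3.14f;                // 32-bit floating point (note the f suffix)
            double precise = 3.141592653589793; // 64-bit floating point
            char letter = 'A';                  // a single 16-bit Unicode character

            String greeting = "Hello";          // non-primitive: a String is an object
            System.out.println(flag + " " + count + " " + greeting);
        }
    }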

The primitives supported by a programming language are sometimes called "built-in data types." They store their values directly in memory, whereas non-primitive data types store references to values rather than the values themselves. Examples of non-primitive Java data types include arrays and classes.
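
To make this concrete, here is a minimal sketch of the difference, using illustrative variable names: copying a primitive copies the value itself, while copying an array variable only copies a reference to the same underlying object.

    // ValueVsReference.java – primitives hold values, arrays hold references
    public class ValueVsReference {
        public static void main(String[] args) {
            // Primitive: assignment copies the value.
            int a = 10;
            int b = a;             // b receives its own copy of 10
            b = 20;
            System.out.println(a); // prints 10 – a is unaffected

            // Non-primitive: assignment copies the reference, not the data.
            int[] first = {1, 2, 3};
            int[] second = first;         // both variables refer to the same array
            second[0] = 99;
            System.out.println(first[0]); // prints 99 – the change is visible through first
        }
    }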

Updated: May 23, 2019
