Precision (computer science)

From Wikipedia, the free encyclopedia
Revision as of 13:59, 13 November 2004

In computer science, precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in binary bits, but sometimes in decimal digits. See precision (arithmetic).
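As a worked example of the relationship between the two units: a quantity stored in n binary bits can take 2^n distinct values, so 32 bits of precision correspond to 2^32 = 4,294,967,296 values, or roughly 9 to 10 decimal digits (log10(2^32) is approximately 9.63).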

In Java, one of the few programming languages with data types of standardized precision, the following precisions are defined for the standard integer numerical types of the language. The ranges given are for signed integer values represented in standard two's complement form; an n-bit two's complement integer can represent values from -2^(n-1) to 2^(n-1) - 1.


Precision of Java data types

Type name   Precision (binary bits)   Range
byte         8                        -128 to +127
short       16                        -32,768 to +32,767
int         32                        -2,147,483,648 to +2,147,483,647
long        64                        -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
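As an illustrative check, the values in the table can be printed from the constants that Java's wrapper classes expose. This is a minimal sketch, assuming a modern Java SE environment (the SIZE constants and printf date from Java 5, after this article revision); the class name IntegerPrecision is arbitrary:

public class IntegerPrecision {
    public static void main(String[] args) {
        // Each wrapper class exposes the type's bit width (SIZE) and its
        // two's complement bounds (MIN_VALUE and MAX_VALUE).
        System.out.printf("byte  %2d bits: %,d to %,d%n",
                Byte.SIZE, Byte.MIN_VALUE, Byte.MAX_VALUE);
        System.out.printf("short %2d bits: %,d to %,d%n",
                Short.SIZE, Short.MIN_VALUE, Short.MAX_VALUE);
        System.out.printf("int   %2d bits: %,d to %,d%n",
                Integer.SIZE, Integer.MIN_VALUE, Integer.MAX_VALUE);
        System.out.printf("long  %2d bits: %,d to %,d%n",
                Long.SIZE, Long.MIN_VALUE, Long.MAX_VALUE);
    }
}

Running the program reproduces the table above, with each maximum one less in magnitude than the corresponding minimum, as the two's complement formula predicts.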