{{More citations needed|date=March 2007}}


In [[computer science]], the '''precision''' of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to [[precision (arithmetic)|precision in mathematics]], which describes the number of digits that are used to express a value.
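
As a rule of thumb, the two measures are related by
:<math>\text{decimal digits} \approx n \log_{10} 2 \approx 0.301\,n</math>
for an ''n''-bit binary [[significand]]; a 24-bit significand, for example, corresponds to roughly 7 significant decimal digits.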


Some of the standardized precision formats are:
* [[Half-precision floating-point format]]
* [[Single-precision floating-point format]]
* [[Double-precision floating-point format]]
* [[Quadruple-precision floating-point format]]
* [[Octuple-precision floating-point format]]


Of these, octuple-precision format is rarely used. The single- and double-precision formats are most widely used and supported on nearly all platforms. The use of half-precision format has been increasing, especially in the field of [[machine learning]], since many machine learning algorithms are inherently error-tolerant.
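
As a rough illustration of the difference between two of these formats, the following sketch stores the same fraction in Java's <code>float</code> (single precision) and <code>double</code> (double precision) types; the class name is only illustrative, and the other formats in the list are not native Java types.

<syntaxhighlight lang="java">
/**
 * Minimal sketch: the same value held in IEEE 754 single precision
 * (Java float, 24-bit significand, about 7 decimal digits) and in
 * double precision (Java double, 53-bit significand, about 15-16 digits).
 */
public class PrecisionDemo {
    public static void main(String[] args) {
        float  f = 1.0f / 3.0f;  // single precision
        double d = 1.0 / 3.0;    // double precision

        System.out.println("float : " + f); // prints 0.33333334
        System.out.println("double: " + d); // prints 0.3333333333333333
    }
}
</syntaxhighlight>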
{| width="60%" border="1" cellpadding="5" cellspacing="0" align="center"
|+'''Precision of java data types'''
|-
! width="10%" style="background:#efefef;" | Type name
! width="10%" style="background:#ffdead;" | Precision (binary bits)
| width="40%" style="background:#efefef;" | Range
|-
| <code>byte</code>
| 8
| -128 to +127
|-
| <code>short</code>
| 16
| -32,768 to 32,767
|-
| <code>int</code>
| 32
| -2,147,483,648 to 2,147,483,647
|-
| <code>long</code>
| 64
| -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
|}
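
Because these ranges are fixed, integer arithmetic whose result falls outside them silently wraps around in two's-complement fashion, as the following sketch illustrates (class name illustrative):

<syntaxhighlight lang="java">
/** Minimal sketch: exceeding a Java integer type's range wraps around. */
public class IntegerRangeDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);      // prints  2147483647
        System.out.println(Integer.MAX_VALUE + 1);  // prints -2147483648 (wraps to the minimum)

        System.out.println(Long.MAX_VALUE);         // prints  9223372036854775807
        System.out.println(Long.MAX_VALUE + 1);     // prints -9223372036854775808 (wraps to the minimum)

        // byte and short operands are promoted to int before arithmetic,
        // so a cast is needed to observe the 8-bit wraparound:
        byte b = (byte) (Byte.MAX_VALUE + 1);
        System.out.println(b);                      // prints -128
    }
}
</syntaxhighlight>

Since Java 8, <code>Math.addExact</code> and related methods can be used when such an overflow should raise an <code>ArithmeticException</code> instead of wrapping silently.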


==Rounding error==
{{further|Floating point}}


Precision is often the source of [[rounding error]]s in [[computation]]. The finite number of bits used to store a number often causes some loss of accuracy. An example would be storing ''sin(0.1)'' in the IEEE single-precision floating-point format. The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).
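
A minimal sketch of this example in Java (class name illustrative): the value is computed in double precision, narrowed to single precision, and the difference between the two stored values shows the rounding error introduced by the narrower format.

<syntaxhighlight lang="java">
/** Minimal sketch: rounding error from storing sin(0.1) in single precision. */
public class RoundingErrorDemo {
    public static void main(String[] args) {
        double d = Math.sin(0.1);  // double-precision result, approximately 0.09983341664682815
        float  f = (float) d;      // nearest single-precision value

        double error = d - f;      // the subtraction itself is done in double precision
        System.out.println("double: " + d);
        System.out.println("float : " + f);
        System.out.println("error : " + error); // magnitude at most about 4e-9 (half a float ulp near 0.1)
    }
}
</syntaxhighlight>

Carrying out the comparison in double precision keeps the subtraction itself from introducing further error of its own.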



== See also ==
* [[Arbitrary-precision arithmetic]]
* [[Precision (arithmetic)]]
* [[Extended precision]]
* [[Granularity]]
* [[IEEE 754]] (IEEE floating point standard)
* [[Integer (computer science)]]
* [[Significant figures]]
* [[Truncation]]
* [[Approximate computing]]

==References==
{{Reflist|60em}}


[[Category:Computer data]]
[[Category:Approximations]]

