Talk:Methods of computing square roots



Ibn al-Banna

  • Surprising that user:pgk removed the section on the Ibn al-Banna method of computing square roots. It's good if a reason is also provided for an edit done on the page. (LATER: the user has emailed me giving the reason for his edit: bad formatting)
Actually, this method appears already in the article, under the section "Babylonian Method". Do you have any reference that it is Ibn al-Banna who invented the method? This seems questionable. Also, notice that the first time you edited the article, a large portion of it was deleted - See this diff. My guess is that you have used an external text editing program, and something went bad with the copying back and forth. You should also use edit summaries when editing articles. -- Meni Rosenfeld (talk) 13:50, 21 June 2006 (UTC)[reply]
  • I agree that something definitely went wrong while editing the article the first time. As for the reference, it is mentioned in the book 'A History of Algorithms: from the Pebble to the Microchip' by Barbin and Borowczyk. Maybe we should change the heading title of the Babylonian Method to the Ibn-Al Banna Method? Chem1
No, "Babylonian method" is a very well known name for this, and Ibn-Al Banna came much, much later. You might want to add a reference to him in the Babylonian section (and remove the redundant information). I am wondering why he is notable for discovering something that had already been discovered. Could you provide a quote from your source? - Rainwarrior 15:38, 21 June 2006 (UTC)[reply]
I've had a look in said book at Amazon reader, and what you are probably referring to is from page 202 (in the edition I looked at), which says "The method was described in the 13th century by the mathematician Ibn al-Banna". But the method mentioned is not the x := (x + r/x) / 2 method, which is clearly described in the text as being known much before al-Banna (for example, it was known to Theon of Alexandria around 370 AD). So, please remove the irrelevant content from the Ibn al-Banna article, and be more careful when making edits in the future. -- Meni Rosenfeld (talk) 19:19, 21 June 2006 (UTC)[reply]
  • I have removed the mentioning of this method from the Ibn al-Banna article. Also, I have removed the section from the Methods of computing square roots article and have added a line to the Babylonian section about al-Banna describing a similar method.
I would say based on the descriptions given that it is the same method, not merely similar. But, this isn't important. Do you know any more about Ibn al-Banna? His biography page has very little information so far... - Rainwarrior 01:23, 22 June 2006 (UTC)[reply]


  • I have found some stuff on the Internet here and here. Is it OK to mention this?
Yes, it is okay to add this information to the Ibn al-Banna article, as long as you don't copy it word for word (otherwise it's a copyright violation) and provide links to these sources, so that the information can be checked. About al-Banna's method: It is difficult to say for sure, according to the text in the book, what exactly is the algorithm - It gives an equation which should be solved, but doesn't specify the way to solve it - but my impression is that this is not the Babylonian method, but rather something that is reminiscent of the long division-like method. -- Meni Rosenfeld (talk) 14:23, 22 June 2006 (UTC)[reply]
I am currently trying to understand the method given in the book and attributed to al-Banna. Hopefully I will come up with a 'layman' explanation for this method.
al-Banna's method seems to be the same as the "long-division-like algorithm" already described in this article, not the 0.5(x+r/x) one. I think, in the course of editing, his section got moved around or otherwise mis-associated. --Jay (Histrion) (talkcontribs) 20:59, 22 June 2006 (UTC)[reply]
  • Have a look here. This is the algorithm for the method used by al-Banna. NOTE: this link opens up search results in the Amazon Reader. Click Next at the bottom of the page and click link # 12 on the resulting page. Pages 206-207 describe in detail what the 'root, not a root' method is. Cheers!

Duplication?

A lot of the algorithms in this article are variations on each other; for instance, the N + d/2N method is just the first iteration of Newton's algorithm; and the section listed under "Finding square roots using mental arithmetic" is really just the long-division-like algorithm without writing it out in a long division format. Can we either remove some of the redundancy, or, perhaps more usefully, tie the related algorithms together better? --Jay (Histrion) (talkcontribs) 16:27, 22 June 2006 (UTC)[reply]

I agree, at least the section under "Quadratic equation method" can be deleted completely, because its explanation is way too complicated and the result basically boils down to the Babylonian formula. It's obvious if you look closely at the given formula sqrt(r) = N + d/(N + sqrt(r)), which is iterated to approximate sqrt(r). N is an initial guess (the same as x0 in the Babylonian method) and d = r - N^2. If you substitute sqrt(r) with x (just another name) and N with x (since N is an approximation of x), the formula reads x = x + (r - x^2)/(x + x), which reduces to x = (x + r/x)/2, the Babylonian formula. Besides, the formula - due to the use of the constants N and d - converges more slowly. If nobody objects, I'll remove the section Catskineater 19:44, 1 September 2006 (UTC)[reply]
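For readability, here is the substitution described above written out step by step (same notation, with d = r - N^2):

sqrt(r) = N + d/(N + sqrt(r))

Replacing sqrt(r) by x on both sides and using N ≈ x:

x ≈ x + (r - x^2)/(x + x) = x + (r - x^2)/(2x) = (x^2 + r)/(2x) = (x + r/x)/2

which is the Babylonian formula.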

I think you could delete any or all of this article, except the Babylonian method which is the only method worth using. In any case, the article is much too long. JRSpriggs 05:32, 2 September 2006 (UTC)[reply]

Spring cleaning!

Yes, the article has been in a bad state for too long. I am currently in the middle of some major changes to the article, mostly trimming down some irrelevant information. I hope to be finished by tomorrow. Feedback welcome. -- Meni Rosenfeld (talk) 15:42, 8 September 2006 (UTC)[reply]

The article seems to be using "r" for the number whose square root is desired. Since "root" begins with "r", this is counter-intuitive. I suggest we pick another symbol for that number! And perhaps we could use "r" for the root itself? JRSpriggs 07:22, 9 September 2006 (UTC)[reply]

I'm not sure this is necessary, but I won't object to it. Which letter do you have in mind? As for using "r" for the root, I'm rather against it - it will be clearer if we explicitly write the square root every time. -- Meni Rosenfeld (talk) 08:53, 9 September 2006 (UTC)[reply]

How about using S for the square whose root is desired? Do we need a consistent convention for the iterative approximations to the root? JRSpriggs 09:52, 10 September 2006 (UTC)[reply]

Fine with me. The article currently uses xn for the iterations wherever it is applicable. -- Meni Rosenfeld (talk) 10:55, 10 September 2006 (UTC)[reply]

Finished, more or less

The one section I don't know what to make of is Approximations that depend on IEEE representation (the last one). It looks like a useful section, but it could be written better, and I don't have the knowledge to do it. If anyone with a better understanding of IEEE can take a look at it, that would be great. Also, adding references would be good, though I don't know of any good sources. -- Meni Rosenfeld (talk) 14:45, 9 September 2006 (UTC)[reply]

I've been trying to find a source for it for a while to no avail. I understand IEEE but I'd have to take a close look at it to understand what's going on (it's kind of a reverse engineering problem). What we have now describes how to do it (maybe, I haven't been able to verify that it works), but not why it works. I think it's probably a valid technique, and is certainly of use to graphics programmers who may need a quick way to normalize a vector, but I haven't taken the time to look at it yet. - Rainwarrior 23:28, 9 September 2006 (UTC)[reply]
If you are referring to fastsqrt, it is better to motivate the discussion without referring to log2. The explanation about computing the biased exponent for the square root needs clarification, especially because the right-shifting of the LSB of the exponent into the MSB of the trailing significand is important when motivating why the resulting significand approximates that for sqrt(x).
The basic idea is to let x = 1.t*2^e, where e is the unbiased exponent and t is the trailing significand. If e is even, say e = 2*p, then sqrt(x) = sqrt(1.t)*2^p. The number 1.t represents 1+t, and sqrt(1+t) is approximately 1+t/2 (use the first two terms of the Taylor series). The biased exponent is odd, so the subtraction of 1 produces an even number and bit 23 is a zero. The right-shift takes the trailing significand t[22]..t[0] and makes it 0t[22]...t[1], so the significand is (approximately) 1.0t, which represents 1+t/2.
If e is odd, say e = 2*p+1, then sqrt(x) = sqrt(2(1.t))*2^p. The number 2(1.t) represents 2+2*t, and sqrt(2+2*t) is approximately 3/2+t/2. The biased exponent is even, so the subtraction of 1 produces an odd number and bit 23 is a 1. The right-shift takes the trailing significand t[22]...t[0] and makes it 1t[22]...t[1], so the significand is (approximately) 1.1t, which represents 3/2+t/2.
If instead you are referring to the inverse square root, the Wikipedia page for the inverse square root has a history of the algorithm that is more detailed than on the square root page (and the two pages are not quite in agreement). Daveeberly (talk) 06:15, 14 July 2010 (UTC)[reply]
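For readers trying to follow the above, here is a minimal sketch of the kind of bit-level trick being discussed, assuming IEEE 754 single precision. It is the generic "halve the exponent field" approximation, not necessarily the exact code in the article, and the function name is just for illustration:

#include <stdint.h>
#include <string.h>

/* Rough sqrt for positive, normal floats: halve the biased exponent and let the
   shifted-out exponent bit fall into the significand, as motivated above. */
float approx_sqrt(float x)
{
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* reinterpret the IEEE 754 bits */
    i = (i >> 1) + (127u << 22);     /* halve the exponent, restore half the bias */
    memcpy(&x, &i, sizeof x);
    return x;                        /* within a few percent of sqrt(x) for x > 0 */
}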

Digit by digit method is not fully explained.

As currently explained in the article, the method is not described in enough detail for one to apply it.

You could find a more detailed explanation: http://www.mathpath.org/Algor/squareroot/algor.square.root.htm —Preceding unsigned comment added by Muttley.meen (talkcontribs)

Care to explain which detail was missing? Often, providing excessive details makes it harder to apply the method. That's what examples are for. -- Meni Rosenfeld (talk) 11:29, 16 September 2006 (UTC)[reply]
I hope the edit which I did on 15 Sept 2006 made it clearer. Is there still a problem? JRSpriggs 03:09, 18 September 2006 (UTC)[reply]
Sorry for the late reply. It looks better now. As for the example, yes, most of the time an example is better, but the definition of the algorithm is also needed. -- Muttley.meen (talk)

I just found this page. I read of the DUPLEX METHOD of finding the square root in a book on Vedic mathematics. It is a precise algorithm. Geniuses can do it mentally. It is written like a long division problem, with a subtraction of the duplex from the dividend prior to subtracting the product of the root digit times the divisor (double the first digit-group's square root). More later. A cube root algorithm is also given! Vedic Mathematics, by Jagadguru Swami Sri Bharati Krishna Tirthaji Maharaja Sankaracarya (1884-1960), 1965, 1978. Larry R. Holmgren 06:02, 26 February 2007 (UTC)[reply]

The author earned these titles and accolades. He traveled from India to New York to take seven Masters Degrees by examination. He passed with highest honors.

I shall need help in setting up the long division bracket for examples of roots of perfect squares and irrational roots. Larry R. Holmgren 07:27, 26 February 2007 (UTC)[reply]

I added duplex examples and a 5-digit root example. Would another example be of value? The square root of a non-perfect square or of Pi, in two treatments? Larry R. Holmgren 03:55, 27 February 2007 (UTC)[reply]

To Larry: There is no reason to put "~~~~" into your edit summaries. Please use either "&middot;" or "&times;" to indicate multiplication rather than "x". Please use "<sup>2</sup>" to indicate a square rather than "*2" (or "**2" which would be more conventional). JRSpriggs 06:47, 27 February 2007 (UTC)[reply]

Why is it called the "Babylonian method"?

I added three notes related to the Babylonian method for computing square roots. Is the ancient Babylonian method essentially the same as the method that is now called the "Babylonian method"? --Jtir 20:20, 12 October 2006 (UTC)[reply]

Basically, it is not known how the Babylonians computed square roots, although there are informed conjectures. I still haven't discovered the origin of the term "Babylonian method" in modern usage. I put my conclusions in a note here: Square root of 2#Notes and simplified the note in this article. --Jtir 18:58, 15 October 2006 (UTC)[reply]

"Inverse square root" eh?

Why does inverse square root redirect here? —Preceding unsigned comment added by 75.177.191.14 (talkcontribs)

Because of the section Methods of computing square roots#Iterative methods for reciprocal square roots. "Inverse square root" is the same as "reciprocal square root". JRSpriggs 10:45, 4 December 2006 (UTC)[reply]

Whilst it is unstable, surely if you just start from a very small number then it will always converge? (155.198.115.46 (talk) 16:25, 27 November 2008 (UTC))[reply]

If by small, you mean "close to zero", then yes it will converge, but very slowly. JRSpriggs (talk) 07:49, 28 November 2008 (UTC)[reply]

Continued fractions

JRSpriggs, perhaps you can explain:

  1. What made you modify the equation for ? It is now more cumbersome and less correct, since what we know is , not .
  2. What were you trying to accomplish with the example? Is it to demonstrate why the method works? In this case, it would be better to provide an actual proof. Is it to show how to use the method? In this case, don't you think it should be kept as simple as possible - sticking to the actual method given, which doesn't use any square roots except for intsqrt at the very beginning?

-- Meni Rosenfeld (talk) 18:07, 5 December 2006 (UTC)[reply]

I could not understand the method given (before my example) until I ignored it and worked out the continued fractions for several square roots from scratch. The given method is completely un-intuitive and no hint of justification or proof is given. "Simplicity" is less important than allowing people to understand what the method is and why it works. One does not normally remember exactly most of the mathematical formulas or facts which one studies. One just remembers the general approach and then figures it out again as needed. That would be impossible for the given method without an example, such as mine, or some other kind of proof. If you compare the formula which I enhanced with my example, you will see that the floor (integer part) of the expression containing the square root is actually what one uses in the example. The fact that one can replace the square root with its floor within that expression is very useful, but not immediately obvious. So the intermediate expression which I added is necessary to make it clear. JRSpriggs 05:04, 6 December 2006 (UTC)[reply]

Again, when you say you didn't understand, did you not understand how to use the method, or why it works? In the former case, the method was really as simple as it gets, I don't know what else to say. In the latter case, perhaps an explanation of how to use the method simply and efficiently should be separated from an explanation of why it works and how to perform the calculation without having the formulae readily available? -- Meni Rosenfeld (talk) 07:07, 6 December 2006 (UTC)[reply]
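For readers who, like the commenters above, want to re-derive the expansion from scratch, here is a minimal sketch of the standard recurrence for the periodic continued fraction of sqrt(S). This is the textbook formulation rather than necessarily the exact method debated above; S = 114 is just an example radicand:

#include <stdio.h>
#include <math.h>

/* Standard recurrence for the continued fraction of sqrt(S), S not a perfect
   square: a0 = floor(sqrt(S)), then
     m' = d*a - m,   d' = (S - m'*m')/d,   a' = (a0 + m')/d'   (integer division). */
int main(void)
{
    int S = 114;                                  /* example radicand */
    int a0 = (int)floor(sqrt((double)S));
    int m = 0, d = 1, a = a0;
    printf("sqrt(%d) = [%d;", S, a0);
    for (int k = 0; k < 12; ++k) {
        m = d * a - m;
        d = (S - m * m) / d;
        a = (a0 + m) / d;
        printf(" %d", a);
    }
    printf(" ...]\n");                            /* [10; 1, 2, 10, 2, 1, 20, ...] */
    return 0;
}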

Credit for Quake3 InvSqrt() algorithm

Why did someone with IP "69.181.250.222" (on December 21st, 2006 at 02:34) say that Greg Walsh created the InvSqrt() algorithm from Quake3? Not only should this be cited, but from what I hear, no one knows who created it. Read that external link about the Quake3 algorithm to see what I mean. I think this either needs to be cited or removed. 65.110.239.80

Ok, to answer my own question, after the (main) Quake3 article (that is in the external links) was published, Greg Walsh contacted the author and owned up to creating that algorithm. I added the link to the follow up story in the external links section, but I still believe this fact needs to be cited. I don't know if this link then belongs in both the references or just the external links or both, so I would like someone else to make that decision with the information I have found. 65.110.239.80
Just make sure it is there. If you put it in the wrong section, someone will move it to the right one. JRSpriggs 13:19, 28 January 2007 (UTC)[reply]
Me again with a different IP. I added the link to the external links section. 129.186.16.56 21:07, 29 January 2007 (UTC)[reply]
It seems like the external links are 404ed. 66.59.156.141 13:57, 13 April 2007 (UTC)[reply]

I'd always thought John Carmack invented that inverse square root approximation. A good analysis of the algorithm can be found here: [1]. AlphaPyro 18:02, 17 May 2007 (UTC)[reply]

Citations of source code used?

Does anyone know where the different, more complex snippets on this page were taken from? E.g. the Mul/Div-Free Algorithm.

The "faster" mul/div-free algorithm is from C. C. Woo's book "The Fundamental Operations in Bead Arithmetic", China Lace Co, Hong Kong, adapted to binary. See [2]. A Google search for "integer square root algorithm" will turn up a lot of stuff. Martinwguy 09:16, 7 September 2007 (UTC)[reply]

Separate article for integer Square Root algorithms

Would a separate article be appropriate for integer square root algorithms? There seem to be many of them out there, all very different. Martinwguy 09:15, 7 September 2007 (UTC)[reply]

I think they are better placed here. -- Meni Rosenfeld (talk) 11:18, 7 September 2007 (UTC)[reply]

Babylonian method

Repeat steps 2 and 3 until the desired accuracy is achieved

Let's say I want the iteration to stop when the relative (or absolute) error is below some required value. Is there an easy test I can perform?--Malcohol 13:36, 30 November 2007 (UTC)[reply]

The true root will lie between xn and S/xn. So when their absolute difference is less than the required value, then xn (or better still xn+1) is close enough. JRSpriggs 20:45, 30 November 2007 (UTC)[reply]
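A minimal sketch of this stopping test (illustrative only; assumes S > 0, uses an arbitrary positive initial guess, and the function name is just for illustration):

#include <math.h>

/* Iterate the Babylonian average until |x - S/x| drops below the tolerance. */
double babylonian_sqrt(double S, double tol)
{
    double x = S > 1.0 ? S : 1.0;        /* any positive initial guess works */
    while (fabs(x - S / x) > tol)
        x = 0.5 * (x + S / x);           /* the true root lies between x and S/x */
    return 0.5 * (x + S / x);            /* return x_{n+1}, the better estimate */
}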

Convergence: The Babylonian method only converges for S ≥ 0. For S < 0 the series is chaotic, never converges, and is highly sensitive to both x0 and S. If you guess x0 < 0, the series will converge to the negative root [3]. Pulu (talk) 11:10, 8 March 2011 (UTC)[reply]

I modified the lead to make it clear that this article only deals with the principal square root of nonnegative real number. JRSpriggs (talk) 20:03, 8 March 2011 (UTC)[reply]

Pell's equation

In this section, it's stated that:

  • In either case, is a rational approximation satisfying

In article Liouville number it's said that:

  • In number theory, a Liouville number is a real number x with the property that, for any positive integer n, there exist integers p and q with q > 1 such that 0 < |x − p/q| < 1/q^n.

I think it should be stated that √S is not a Liouville number, because it satisfies that property for n = 2 but not for all n (does it fail for n = 3?). Albmont (talk) 18:29, 5 August 2008 (UTC)[reply]

Hmm, it doesn't have much to do with methods of computing square roots, so the article on square root seems a more natural location. However, the fact that √S is an algebraic number (which is a bit hidden in that article; perhaps that could be improved) and Liouville numbers cannot be algebraic implies that square roots are not Liouville numbers, so I'm not sure it should be mentioned at that article either.
The square root of 6 is 2.44949… so
However, what is true is that there are only finitely many pairs (p, q) such that |√6 − p/q| < 1/q^n for any fixed n > 2
(the irrationality measure is 2). On the other hand, a Liouville number has infinitely many such pairs. -- Jitse Niesen (talk) 20:17, 5 August 2008 (UTC)[reply]
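(A concrete check of the n = 3 question, using the square root of 6 mentioned above: x = 49, y = 20 satisfies x^2 − 6y^2 = 2401 − 2400 = 1, and 49/20 = 2.45 while √6 ≈ 2.449490. The error |√6 − 49/20| ≈ 0.00051 is smaller than 1/q^2 = 1/400 = 0.0025 but larger than 1/q^3 = 1/8000 ≈ 0.000125, so this particular approximation already fails the Liouville condition for n = 3.)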

Trigonometric Solutions

http://math.exeter.edu/rparris/peanut/cordic.pdf (page 16) gives a very weird way of computing square roots using trig functions -- anonymous

The exeter link seems dead.

The cordic cited method, I think (and will check), actually gives twice the square root and the error is copied from Table IV in the cited paper that is wrong. When Cordic is run in hyperbolic vectoring mode it computes sqrt(x^2 - y^2), so for x=s+1 and y=s-1 we get sqrt(s^2 + 2s + 1 - s^2 + 2s - 1) = sqrt(4s). David.J.Greaves (talk) 19:33, 22 August 2017 (UTC).[reply]
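(A quick numeric check of the factor-of-two claim above: for s = 9, x = s+1 = 10 and y = s−1 = 8, so sqrt(x^2 − y^2) = sqrt(100 − 64) = 6 = 2·sqrt(9), i.e. twice the square root of s.)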


We do have an article about CORDIC, which mentions computing square roots but does not give further details. I added a sentence to this article with a link to CORDIC. I don't have an idea whether CORDIC is still used (or was ever used) to compute square roots, but it would be good to have some more details. -- Jitse Niesen (talk) 14:42, 8 December 2008 (UTC)[reply]

There are several methods of computing a square root using trigonometry. For example:

root = cos( asin( (n-1)/(n+1) ) ) * (n+1)/2

or

root = tan( acos( (n-1)/(n+1) ) ) * (n-1)/2

Should these methods be added to the article and/or to Square root? [1] RareEntity (talk) 03:14, 28 January 2016 (UTC)[reply]
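(For reference, the first identity can be verified directly: cos(asin(u)) = sqrt(1 − u^2), so with u = (n−1)/(n+1),

cos( asin( (n-1)/(n+1) ) ) * (n+1)/2 = sqrt( (n+1)^2 − (n−1)^2 ) / 2 = sqrt(4n)/2 = sqrt(n).)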

Do you have a reliable source? We don't usually bother including ideas from every amateur web page. And is this really a method of computing? Or just a cute identity? Dicklyon (talk) 05:14, 28 January 2016 (UTC)[reply]
Using transcendental functions to calculate a much simpler algebraic function, √n, seems like burning diamonds to conserve coal. It is not efficient.
Also, these formulas would not work unless n > 1, and they would not work well unless n ≈ 3+2√2 ≈ 5.8. JRSpriggs (talk) 04:43, 29 January 2016 (UTC)[reply]
The first one I listed is an identity derived from Euclid's Elements Book 2, Prop 14. RareEntity (talk) 06:24, 17 February 2016 (UTC)[reply]

References

About A two-variable iterative method

It is not clear where the pair of equations comes from, although reverse-engineering them is not too difficult. However, it is simple enough to show that a[n+1] = (3*S*a[n] - a[n]^3)/(2*S), which makes the algorithm equivalent to applying Newton's method to F(y) = 1/y^2 - 1/S (the a[i] are the iterates, with a[0] = S). Daveeberly (talk) 06:15, 14 July 2010 (UTC)[reply]
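A quick numerical check of the equivalence noted above (illustrative sketch only; the iteration converges to sqrt(S) when the start value lies in (0, sqrt(3*S)), so with a[0] = S that means 0 < S < 3):

#include <stdio.h>
#include <math.h>

/* Iterate a' = (3*S*a - a^3)/(2*S), i.e. Newton's method on F(y) = 1/y^2 - 1/S. */
int main(void)
{
    double S = 2.0;
    double a = S;                              /* a[0] = S, as in the comment above */
    for (int n = 1; n <= 8; ++n) {
        a = (3.0 * S * a - a * a * a) / (2.0 * S);
        printf("a[%d] = %.12f\n", n, a);
    }
    printf("sqrt(S) = %.12f\n", sqrt(S));      /* 1.414213562373 */
    return 0;
}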

Last edits

In the hope of collaboration: the current version means nothing to laymen. Wikipedia is written for the general public, not professionals. I believe the current explanation given is absolutely nonsensical to the common reader, and a much simpler explanation is easily possible. If I have confused one variable, that variable can be changed. The useful technical information (geometric mean vs arithmetic mean) is best nested into the prose, as most people will not understand what either is. - ʄɭoʏɗiaɲ τ ¢ 02:27, 8 August 2010 (UTC)[reply]

The established text contains this point:
2. Let xn+1 be the average of xn and S / xn (using the arithmetic mean to approximate the geometric mean).
One large problem with that is the typography relating to the '/' which requires a fair bit of confidence to be interpreted as "divides". The following may be more accessible:
2. Let xn+1 be the average of xn and the result from dividing S by xn (this uses the arithmetic mean to approximate the geometric mean).
Johnuniq (talk) 04:40, 8 August 2010 (UTC)[reply]
Regarding Floydian's remark about Wikipedia being written for the general public and not professionals, I do think one has to be a little careful about making quite such sweeping statements. For example Wikipedia quite properly has a very useful article on Lie groups but it's certainly not written for the general public. Realistically we have to expect that the target audience varies with the nature of the article. Computing square roots (by implication to arbitrary precision) strikes me as an essentially technical matter and in this case, beyond providing the customary lead designed to instruct the widest possible audience, one really isn't at pains to address the general public. The Square root article is a different matter but again we can overstate the case even there. FightingMac (talk) 04:13, 17 April 2011 (UTC)[reply]

Merging proposal

It seems to me that the scope of this article is methods for numerical calculation of square roots, and that the sections on continued fractions and Pell's equation do not belong here. On the other hand, the example expressed as a periodic continued fraction would be good in the article periodic continued fraction. Any opposing views? Isheden (talk) 20:52, 7 December 2011 (UTC)[reply]

None from me. In fact, I just made Pell's equation a subsection of the Continued fractions group (it was a separate section) to simplify merging the whole thing if appropriate. I have also removed one GCF that, after more careful analysis, proved inappropriate. — Glenn L (talk) 02:05, 18 January 2012 (UTC)[reply]
I see that the Pell's equation section has been separated from the Continued fractions section; no problem there. However, I restored the GCF subsection since I don't see it as redundant, as the example given appears unique to Wikipedia. — Glenn L (talk) 04:28, 16 June 2016 (UTC)[reply]

Original research

This article seems to largely ignore several common algorithms used in numerical analysis, instead presenting original research stuff like the high/low method. This article should be checked against standard references in numerical analysis, as well as the Wikipedia articles numerical analysis and root-finding algorithm, in this case for solving the equation x^2 - r = 0, where r is the radicand. Methods that cannot be found in the literature in this field comprise original research and should be removed. Isheden (talk) 10:17, 15 January 2012 (UTC)[reply]

I would not mind if someone took out the section Methods of computing square roots#High/low method. I refrain from taking it out myself for fear of inciting an edit-war (with its author) unnecessarily. Similarly for Methods of computing square roots#A two-variable iterative method and some of the others. However, I think we need to keep at least the sections: Rough estimation, Babylonian method, Digit-by-digit calculation, Iterative methods for reciprocal square roots, and Negative or complex square. JRSpriggs (talk) 06:01, 16 January 2012 (UTC)[reply]
First, regarding the High/low method, I don't know who was the original author. However, I think it's safe to replace it with the bisection method, which has known convergence properties. With the high/low method, in principle you need to calculate nine intermediate points (e.g. 4.41, 4.42, 4.43, 4.44, 4.45, 4.46, 4.47, 4.48, 4.49) to determine which ones are too high/too low, and which ones are "close". With the bisection method, you just evaluate one intermediate point and there is a clear rule for discarding one interval. Isheden (talk) 11:29, 16 January 2012 (UTC)[reply]
The bisection method would be better (when using a calculator) than the high/low method, but it is dominated by the Babylonian method which is the best method for calculators which have addition and division.
Digit-by-digit is the best method for low precision calculations by hand.
Using the reciprocal square-root is best when your calculator has subtraction and multiplication, but not division.
Rough estimation is necessary to initialize the other methods in an efficient way.
Simply applying various root-finding algorithms in the case of the square-root does not (to my way of thinking) add anything not already in those articles. At most, we could mention that they can be so applied. JRSpriggs (talk) 14:58, 17 January 2012 (UTC)[reply]
It's hard to know what's OR and what isn't as the article is so lacking in references. The digit-by-digit method given is perhaps a fairly well-known one (taught to me long before WP existed), so clearly isn't OR, but there's no indication of its name or origin, and no references have been given for the method. I would imagine that whoever added it to WP was recalling it from memory. I'll see what I can find for this one when I've a bit more time. — Smjg (talk) 11:22, 28 July 2017 (UTC)[reply]

Fast inverse square root attribution

The main article states "no conclusive proof of authorship exists", while this one attributes it without a doubt to Gary Tarolli. I'd be inclined to believe the main article, but either one or the other is definitely wrong, since they contradict each other. --Lohoris (talk) 12:23, 20 March 2012 (UTC)[reply]

The supporting references (for Tarolli) are external links to a website which does not appear to me to be a reliable source. So the information should probably be removed from this article. JRSpriggs (talk) 13:55, 20 March 2012 (UTC)[reply]

Requested move

I think the title "Methods of computing square roots" makes it unnecessarily difficult to find this page, since someone interested in computation of square roots would not expect that the page title begins with a general term such as "methods". A more natural title would be "Square root computation". Are there any objections to moving this page to Square root computation? Isheden (talk) 08:40, 27 March 2012 (UTC)[reply]

"Square root computation" is ambiguous. It could mean the same thing as "Methods of computing square roots", but it could also mean a computation which uses square roots in some way. For that reason and because of the very many existing mental and textual links to the article by its current title, I would strongly oppose a change in its name. I have made your suggested title into a redirect to make it easier for people who think like you to find this article. JRSpriggs (talk) 12:24, 27 March 2012 (UTC)[reply]
Then I would suggest "Square root computation methods". Of course all links to the present article name would be taken care of with a redirect to the new article name. I can't see that anyone would think of typing "methods" as the first word if they're looking for an article like this. Isheden (talk) 21:51, 31 March 2012 (UTC)[reply]

Complexity ?

I'm disappointed but almost even more surprised that the word "complexity" does not occur a single time on that page... On Computational_complexity_of_mathematical_operations it is written that the complexity of sqrt is that of multiplication, but that's IMO far from obvious. Some details here would have been nice. — MFH:Talk 19:34, 11 November 2012 (UTC)[reply]

I compare Methods of computing square roots#Iterative methods for reciprocal square roots with Division algorithm#Newton–Raphson division, and conclude that the complexity of extracting a square root is only slightly greater than the complexity of division. But division is significantly harder than multiplication since it requires repeated multiplication, of the order of the logarithm of the number of digits times. JRSpriggs (talk) 07:16, 12 November 2012 (UTC)[reply]
Not true, because when performing Newton-Raphson approximation, we do not have to use the final desired precision. Instead we can use just enough precision at each iteration, and since the precision roughly doubles on each iteration, the total time complexity is asymptotically bounded by that of the final iteration. Therefore all arithmetic operations that can be reduced by Newton-Raphson approximation to addition and multiplication will have the same time complexity as multiplication. Lim Wei Quan (talk) 06:50, 13 April 2013 (UTC)[reply]
To Lim Wei Quan: Thank you for pointing that out.
If we just look at the final stage of division, then it would be about three multiplications: (1) D by X_i, (2) X_i by (2 - D X_i), and (3) the reciprocal of the denominator by the numerator.
If we just look at the final stage of extracting a square-root, then it would be about four multiplications: (1) square x_n, (2) S by that square, (3) x_n/2 by (3 - S x_n^2), and (4) S by the square-root of its reciprocal.
Is that correct? JRSpriggs (talk) 19:57, 13 April 2013 (UTC)[reply]
Or would we be better off going back to the Babylonian method and just using the three multiplications of the final division of S by X_n? JRSpriggs (talk) 20:01, 13 April 2013 (UTC)[reply]

Vedic Mathematics Claims Need Much Better Sources

I see the article as a whole is already flagged for lacking reliable sources. The specific claim that the "Vedic mathematics" method described in the article is "ancient" is not backed up by a reliable source. Many claims like this are found in many articles on mathematical topics in Wikipedia, and all either need better sourcing or to be edited out. -- WeijiBaikeBianji (talk, how I edit) 03:37, 1 February 2013 (UTC)[reply]

I'm fine with removing it and/or moving it temporarily to the article's talk page (here) until sufficient sourcing is provided. Vedic math is unfortunately a POV-prone subject with a lot of dubious claims from questionable sources, hence I suggest removing it wherever it shows up without appropriate sourcing. That's the only way to discourage the POV warriors and to keep our articles in a proper state.--Kmhkmh (talk) 02:40, 4 February 2013 (UTC)[reply]

First sentence of the lead

The first sentence presently reads "In numerical analysis, a branch of mathematics, there are several methods for calculating the principal square root of a nonnegative real number". This is in my view really awkward and seems to originate in an attempt to make the article title "Methods of computing square roots" the subject of the first sentence. The real subject of this article is square root algorithms, and "methods for calculating the principal square root of a nonnegative real number" is just a description/definition of this subject. Thus, since "square root algorithm" is a widely accepted name for the subject, a more logical way of structuring the first sentence would be "square root algorithms are methods for calculating the principal square root of a nonnegative real number", or even better in singular. Further information on how to write the first sentence can be found in WP:LEAD, particularly WP:FIRSTSENTENCE. Isheden (talk) 12:13, 28 March 2013 (UTC)[reply]

Regarding the section "Rough estimation"

With S expressed in scientific notation as a·10^(2n) where 1 ≤ a < 100, an estimate for √S is 3·10^n. The factor three is used because it approximates the geometric mean of the lowest and highest possible values of the square root with the given number of digits: √(√1·√100) = √10 ≈ 3.16.

Expressed as a binary number, we have S = a·2^(2n) where 1 ≤ a < 4, and an estimate for √S is 2^n, since the geometric mean of the lowest and highest possible values of the square root is √(√1·√4) = √2 ≈ 1.4, which is close to one.

Slightly improved estimates can be obtained by expressing S in normalized scientific notation as a·10^b where 1 ≤ a < 10, and treating the cases b = 2n (even) and b = 2n + 1 (odd) differently. Then the estimates are 2·10^n in the even case and 6·10^n in the odd case.

In the binary case, writing S = a·2^b with 1 ≤ a < 2, we obtain the analogous pair of estimates for b even and b odd.

To discuss: Is it relevant in practice to treat the odd and even cases differently? Is the difference more relevant in the decimal case than for binary numbers? Isheden (talk) 14:13, 13 July 2013 (UTC)[reply]

Since , I think your odd and even cases give the same end result as the binary estimate described in the article. Gandalf61 (talk) 15:22, 13 July 2013 (UTC)[reply]
In the example in the article, S has 17 binary digits, so the odd case above yields 2^9 = 512 if I'm not mistaken. It also seems reasonable that 512 should be a better initial guess than 256 if 600 is supposed to be better than 300. Isheden (talk) 15:59, 13 July 2013 (UTC)[reply]
b is one less than the number of digits. S has 17 binary digits because it lies between 2^16 and 2^17 so b = 16 and your method gives an estimate of 2^8 = 256, the same as the method in the article. The actual square root of S is about 354. Gandalf61 (talk) 16:19, 13 July 2013 (UTC)[reply]
OK, so then we actually have the even case which is equivalent to the formula above. Sorry for the confusion. Isheden (talk) 16:45, 13 July 2013 (UTC)[reply]

What I was actually mainly thinking of is the relevance of treating even and odd number of digits separately in the decimal case. For the example given, starting with 300 instead of 600 actually yields the answer to six significant figures in one iteration less. While this is surely just a coincidence, the question is if the extra effort is justified? Isheden (talk) 17:03, 13 July 2013 (UTC)[reply]

The idea of the rough estimate is to get an estimate of the square root of S using a method which is significantly easier than doing even merely one iteration of the Babylonian method. This means that the calculation can depend only on the order of magnitude of S, i.e. the number of digits to the left of the decimal point (when S≥1) or the number of zeroes immediately to the right of the decimal point (when S<1).
The estimates chosen were the ones which minimized the absolute value of the relative error (see Methods of computing square roots#Convergence) in the worst cases while preserving the simplicity of the method by limiting the estimate to less than one significant digit. If we used a single estimate of √10·10^n ≈ 3.16·10^n for the entire range instead of breaking it into two parts, then the relative error in the worst case would be 2.16 rather than 1.0 for the method with 2 or 6. This would require roughly one extra iteration of the Babylonian method to fix.
One should not expect the estimates used with binary to be the same as those used with decimal since the ranges being approximated are not the same ranges. JRSpriggs (talk) 08:26, 14 July 2013 (UTC)[reply]
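A minimal sketch of the two-case decimal estimate (the "2 or 6" rule discussed above); illustrative only, written for S ≥ 1, with function names chosen just for illustration:

#include <stdio.h>
#include <math.h>

/* Rough decimal estimate: write S = a*10^b with 1 <= a < 10; use 2*10^(b/2)
   when b is even and 6*10^((b-1)/2) when b is odd. Assumes S >= 1. */
double rough_estimate(double S)
{
    int b = (int)floor(log10(S));              /* exponent in normalized notation */
    if (b % 2 == 0)
        return 2.0 * pow(10.0, b / 2);
    return 6.0 * pow(10.0, (b - 1) / 2);
}

int main(void)
{
    printf("%g\n", rough_estimate(125348.0));  /* prints 600; sqrt(125348) is about 354 */
    return 0;
}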
I have reworked the decimal and binary cases, skipping the coarse estimate over the entire range based on your comment. Now it should be clear to the reader how the estimates are calculated and how they can be modified if required. By the way, why is the absolute value not contained in the relative error formula? Isheden (talk) 21:48, 14 July 2013 (UTC)[reply]
Your most recent changes have made the section more consistent and clear. Thank you.
I did not put an absolute value into my version of the definition because if I had, then xn could not be expressed as a function of εn (unless we assume that xn≥√S) which would interfere with deriving the formula for εn+1.
Also, |εn| is the wrong way to equate over-estimates with under-estimates. Better would be |ln(xn/√S)|, but I did not want to use the transcendental function ln in an article targeted to a middle school audience. Notice that if y=S/x, then (y+(S/y))/2=(x+(S/x))/2. So, if x were an under-estimate of √S, then the equivalent over-estimate is S/x. If we replace that in the formula for relative error, we get −ε/(1+ε). So we could use the larger of ε and −ε/(1+ε), except for the fact that this function is not invertible. JRSpriggs (talk) 08:54, 15 July 2013 (UTC)[reply]

Regarding Vedic Method

Is it just me, or are there others who think that the Vedic method could be bettered by some exposition? While the algorithmic side is substantial, I am at a loss as to why it even works. (Manoguru (talk) 03:56, 17 December 2013 (UTC))[reply]

I am of the opinion that less would be better, as it isn't very notable; it is just someone's idea in a book. Dmcq (talk) 10:29, 17 December 2013 (UTC)[reply]
Your idea of notability is quite strange, since every algorithm is from someone's book or paper. I myself think that it is a remarkably easier algorithm for digit-by-digit calculation compared to the usual long division method. My elementary school math would have been much easier if this method had been taught. It exploits the fact that it is easier to multiply two single digit numbers than multiple digit numbers. Today I read up a few external links related to this method, and here is my summary of it. We proceed as with the digit-by-digit calculation by assuming that we want to express a number N as a square of the sum of n positive numbers as
The right side of this expression can be expanded as
Define divisor as and the duplex for a sequence of m numbers as
Now the computation can proceed by recursively guessing the values of so as that
such that for all , with initialization Here the is the dividend. When the algorithm terminates and the sum of s give the square root.(Manoguru (talk) 12:43, 18 December 2013 (UTC))[reply]
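(For readers unfamiliar with the term, the duplex of a digit sequence is conventionally defined as twice the sum of products of digits equidistant from the two ends, plus the square of the middle digit when the length is odd. A minimal sketch of that conventional definition, which may differ in notation from the formulas above:)

/* Duplex of a digit sequence d[0..m-1]: twice the sum of products of digits
   equidistant from the ends, plus the square of the middle digit when m is odd.
   Example: duplex(7,3,5) = 2*7*5 + 3*3 = 79. */
int duplex(const int d[], int m)
{
    int sum = 0;
    for (int i = 0; i < m / 2; ++i)
        sum += 2 * d[i] * d[m - 1 - i];
    if (m % 2 == 1)
        sum += d[m / 2] * d[m / 2];
    return sum;
}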
Notability is determined by third party sources, not by the source book or paper. Have a look at Vedic Mathematics (book), it seems to be mostly controversy about 'Vedic' and people pushing it as that and the use in schools has been under the impression it helps nationalism rather than anything to do with its merits as maths. It doesn't have much in the way of notability as maths. Dmcq (talk) 15:00, 18 December 2013 (UTC)[reply]
I agree with your sentiments. I too think that the title of Vedic is a misnomer and it is best to label the method simply as the duplex method.(Manoguru (talk) 04:16, 19 December 2013 (UTC))[reply]
Personally I see very little point in going into the system in detail. I mean, why do you want to learn off by heart a quicker way of doing square roots by hand? Does it confer greater understanding? You said you didn't find it obvious. Is it useful? I think it is worth showing students how to do things by hand, but learning a more complicated algorithm which is faster is not a real gain when they'd use calculators if they really needed to know the result without error. I think it should just be described in the sort of detail given for the other methods; this isn't a how-to manual. Dmcq (talk) 15:12, 18 December 2013 (UTC)[reply]
The flaw in this argument is that all the algorithms are done by hand before people can trust them for machine implementation. If hand computation yields faster results, then so will the machine implementation. For instance, the digit-by-digit square root method for binary strings is often implemented in computers. Also, for much of history, the Babylonian method was done by hand. That being said, I agree with the last point that you make. A concise description of the duplex method is not given, and one example would have been sufficient. But I don't dare make the edits myself, lest I incite edit wars. (Manoguru (talk) 04:16, 19 December 2013 (UTC))[reply]
Mainly it is the amount of space that is given to it that I feel is undue. And even at that it has very little about the method, just examples. If a couple of the examples were replaced by a better explanation that would be an overall gain I think. An encyclopaedia should tell about things, not train you in doing them. Dmcq (talk) 12:53, 19 December 2013 (UTC)[reply]
I have chopped out all the examples except the first. One properly explained example of the square root of a 6 digit number might be okay instead, but three long examples without proper explanation are not. Dmcq (talk) 09:11, 22 December 2013 (UTC)[reply]

Another algorithm

There is another algorithm which runs very fast using only shifts, increments, decrements, additions and subtractions, and works with any base. This algorithm has been in use successfully for almost 40 years in nuclear instrumentation. The algorithm and an example are published at http://www.king-cart.com/square-root.html

Marshall Dudley2602:306:BC45:8A30:290E:C082:979:2365 (talk) 16:58, 25 February 2015 (UTC)[reply]
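(For comparison, a minimal sketch of the classic shift-and-subtract binary integer square root, which likewise uses only shifts, additions and subtractions; it is not claimed to be the exact algorithm published at the link above, and the function name is just for illustration.)

#include <stdint.h>

/* Classic shift-and-subtract integer square root (binary digit-by-digit).
   Returns floor(sqrt(n)) using only shifts, additions and subtractions. */
uint32_t isqrt32(uint32_t n)
{
    uint32_t root = 0;
    uint32_t bit = 1u << 30;          /* highest power of 4 that fits in 32 bits */
    while (bit > n)
        bit >>= 2;
    while (bit != 0) {
        if (n >= root + bit) {
            n -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return root;
}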

So, how does this compare to the Odd-integer methods? As used by operators of hand-cranked decimal calculators or even electrically-cranked ones such as Friden.NickyMcLean (talk) 04:12, 26 February 2015 (UTC)[reply]

Are We Going to Add This to the Main Page?

While playing around with numbers, I discovered this, but if you're reading this, feel free to check this method: To approximate the square root of a positive integer n, start with a fraction that has 1 as the numerator and 0 as the denominator. For each subsequent fraction, the numerator is the numerator of the previous fraction plus n times the denominator of the previous fraction, and the denominator is the sum of the numerator and denominator of the previous fraction. Although the fraction will never exactly reach the square root of n, it will keep coming closer. ---- — Preceding unsigned comment added by 2601:2C1:C003:EF7A:58A9:3D2:9198:302A (talk) 01:22, 13 February 2016 (UTC)[reply]

I noticed that your contribution was removed from the article because wikipedia is not meant for publishing own research. In any case, if for some n your method converges, it will indeed converge to the square root of n. Bye, Bob.v.R (talk) 22:07, 13 February 2016 (UTC)[reply]
Sounds a lot like continued fraction expansion. Manoguru (talk) 01:38, 4 October 2016 (UTC)[reply]
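A quick check of the rule described above (a transcription for illustration, not the poster's own code): p' = p + n*q, q' = p + q, starting from p/q = 1/0. For n = 2 the successive fractions are 1/1, 3/2, 7/5, 17/12, 41/29, ... and approach sqrt(2):

#include <stdio.h>

/* Iterate p' = p + n*q, q' = p + q from p/q = 1/0 and watch p/q approach sqrt(n). */
int main(void)
{
    long long p = 1, q = 0, n = 2;
    for (int k = 0; k < 10; ++k) {
        long long np = p + n * q;
        long long nq = p + q;
        p = np;
        q = nq;
        printf("%lld/%lld = %.9f\n", p, q, (double)p / q);
    }
    return 0;                          /* quotients converge to 1.414213562... */
}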

Note: 20p + x is simply twice p, with the digit x appended to the right.

Is 20p twice p? I'm not sure if this is a typo or a profound insight that I fail to grasp. Fotoguzzi (talk) 21:16, 16 February 2016 (UTC)[reply]

Recent rewrite of lede

Imho, the rewrite of the lede per 02:37, 19 August 2016‎ by Esquivalience is no improvement, but just adds imprecision.
I am prepared to discuss this, but did not want to go through manual revert. Purgy (talk) 05:54, 19 August 2016 (UTC)[reply]

This matter is settled since 19 August 2016. -Purgy (talk) 05:51, 4 October 2016 (UTC)[reply]

No mention of complexity?

I'm really surprised that there is no mention of the computational complexity of the various methods. The rate of convergence per iteration is meaningless without information about the amount of work that each iteration takes. NathanZook (talk) 20:12, 22 February 2017 (UTC)[reply]

If you have a reliable source for any of those complexities, we invite you to add that information. JRSpriggs (talk) 02:02, 24 February 2017 (UTC)[reply]

so which method is the FASTEST!

so which method is the FASTEST!

It would seem like that information should be on this page!

2601:14F:8006:2580:1948:BAF5:8A41:1D72 (talk) 13:23, 4 May 2017 (UTC)[reply]

simple argument for sqrt(S) = average(S/x, x)

A simple argument for the iteration for sqrt(S) would be that the correct answer, given a good guess x_n, lies between x_n and S/x_n (since x_n · (S/x_n) = S, one of the two factors is below sqrt(S) and the other above, unless they are equal). So why not choose the average as x_{n+1} and repeat?

(this method might also converge for many other functions)