algorithms.used

============================================================================
This file describes the algorithms used in the MAPM.         Sept 24, 1999
============================================================================

ADD:             There is nothing particularly interesting about the ADD
                 function. If the signs of the numbers are the same, continue
                 with the add routine. If they are opposite, change one of
                 them and call the subtract function. In order to perform the
                 add, the number with the smaller exponent is scaled so the
                 exponents match, then the byte-wise add is performed.

SUBTRACT:        There is nothing particularly interesting about the SUBTRACT
                 function either. If the signs of the numbers are the same,
                 continue with the subtract routine. If they are opposite,
                 change one of them and call the add function. In order to
                 perform the subtraction, the number with the smaller
                 exponent is scaled so the exponents match, then the
                 byte-wise subtract is performed.

MULTIPLY:        The multiply function uses the simple O(n^2) method for
                 small input numbers and a 'fast' algorithm for large
                 numbers. The following algorithm is used in the fast
                 multiply routine (sometimes called the divide-and-conquer
                 technique).

                 Assume we have 2 numbers (a & b) with 2N digits.

                 Let :  a = (2^N) * A1 + A0 ,  b = (2^N) * B1 + B0

                 where 'A1' is the 'most significant half' of 'a' and 'A0'
                 is the 'least significant half' of 'a'. Same for B1 and B0.

                 Now use the identity :

                 ab = (2^2N + 2^N)*A1*B1 + 2^N*(A1-A0)*(B0-B1) + (2^N + 1)*A0*B0

                 The original problem of multiplying 2 (2N) digit numbers
                 has been reduced to 3 multiplications of N digit numbers
                 plus some additions, subtractions, and shifts.

                 The fast multiplication algorithm used here uses the above
                 identity in a recursive process. Knuth in "The Art of
                 Computer Programming" shows that this algorithm has a
                 significantly faster run time when N is large.

                 If N = 1000, the traditional multiplication algorithm will
                 require N^2 multiplications = 1,000,000.

                 The new algorithm is proportional to N ^ (log(3) / log(2)),
                 or approx N ^ 1.585. So if N = 1000, the new algorithm will
                 only require 56,871 multiplications.

                 (A sketch of one level of this identity appears after the
                 RANDOM NUMBERS entry below.)

DIVIDE:          Used Knuth's division algorithm from 'The Art of Computer
                 Programming, Volume 2' with a slight modification. I
                 determine right in step D3 whether q-hat is too big, so
                 step D6 is unnecessary.

INTEGER_DIVIDE:  Calls the divide function with a few extra decimal places
                 of precision and then truncates the result to the nearest
                 integer.

RANDOM NUMBERS:  Used Knuth's The Art of Computer Programming, Volume 2 as
                 the basis. Assuming the random number is X, compute
                 (using all integer math) :

                 X = (a * X + c) MOD m

                 From Knuth:

                 'm' should be large, at least 2^30 : we use 1.0E+15.
                 'a' should be between .01m and .99m and not have a simple
                 pattern. 'a' should not have any large factors in common
                 with 'm' and (since 'm' is a power of 10) if 'a' MOD 200
                 = 21 then all 'm' different possible values will be
                 generated before 'X' starts to repeat.

                 We use 'a' = 649378126517621.  'a' MOD 200 = 21 and 'a' is
                 also a prime number. The file mapm_rnd.c contains many
                 potential multipliers commented out that are all prime and
                 meet 'a' MOD 200 = 21.

                 There are few restrictions on 'c' except that 'c' can have
                 no factor in common with 'm', hence we set 'c' = 'a'.

                 On the first call, the system time is used to initialize X.
                 (Sketch below.)
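
The following is a minimal C sketch of one level of the MULTIPLY identity
above, using a 16-bit split of ordinary 32-bit integers in place of MAPM's
internal byte strings. The name dc_mul32 and the fixed split point are
assumptions of this sketch, not MAPM code; MAPM applies the same identity
recursively to its large digit arrays.

    #include <stdint.h>
    #include <stdio.h>

    /* One level of the divide-and-conquer identity from the MULTIPLY
     * entry, on 32-bit integers split into 16-bit halves (illustrative
     * sketch, not MAPM code).  Three half-size multiplications replace
     * the four of the schoolbook method.                              */
    static uint64_t dc_mul32(uint32_t a, uint32_t b)
    {
        uint32_t a1 = a >> 16, a0 = a & 0xFFFFu;   /* a = 2^16*A1 + A0 */
        uint32_t b1 = b >> 16, b0 = b & 0xFFFFu;   /* b = 2^16*B1 + B0 */

        uint64_t p_hi = (uint64_t)a1 * b1;         /* A1*B1            */
        uint64_t p_lo = (uint64_t)a0 * b0;         /* A0*B0            */
        int64_t  p_md = ((int64_t)a1 - (int64_t)a0) *
                        ((int64_t)b0 - (int64_t)b1);   /* may be < 0   */

        /* ab = (2^32 + 2^16)*A1*B1 + 2^16*(A1-A0)*(B0-B1) + (2^16 + 1)*A0*B0
         *    = (A1*B1 << 32)
         *      + ((A1*B1 + (A1-A0)*(B0-B1) + A0*B0) << 16)
         *      + A0*B0                                                */
        uint64_t mid = (uint64_t)((int64_t)(p_hi + p_lo) + p_md);

        return (p_hi << 32) + (mid << 16) + p_lo;
    }

    int main(void)
    {
        uint32_t a = 3000000019u, b = 2147483647u;
        printf("%llu\n", (unsigned long long)dc_mul32(a, b));
        printf("%llu\n", (unsigned long long)((uint64_t)a * b));  /* same */
        return 0;
    }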
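
And a sketch of the RANDOM NUMBERS recurrence with the constants quoted
above. A 128-bit intermediate product (unsigned __int128, a GCC/Clang
extension) stands in for MAPM's integer arithmetic, since a * X does not
fit in 64 bits; the seeding shown here is only illustrative.

    #include <stdio.h>
    #include <time.h>

    /* X = (a*X + c) MOD m with the constants from the text above.
     * This is a sketch with native integers, not MAPM's own code.  */
    #define LCG_M 1000000000000000ULL        /* m = 1.0E+15          */
    #define LCG_A 649378126517621ULL         /* prime, a MOD 200 = 21 */
    #define LCG_C LCG_A                      /* c = a, coprime to m  */

    static unsigned long long lcg_x;         /* current state X      */

    static void lcg_seed(void)               /* sketch: seed from time */
    {
        lcg_x = (unsigned long long)time(NULL) % LCG_M;
    }

    static double lcg_next(void)             /* uniform in [0, 1)    */
    {
        lcg_x = (unsigned long long)
                (((unsigned __int128)LCG_A * lcg_x + LCG_C) % LCG_M);
        return (double)lcg_x / (double)LCG_M;
    }

    int main(void)
    {
        lcg_seed();
        for (int i = 0; i < 5; i++)
            printf("%.15f\n", lcg_next());
        return 0;
    }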
SQUARE ROOT:     Used Newton's method. The following iteration was used
                 (sketch below) :

                 X(n+1) = 0.5 * [ X(n) + N / X(n) ]

EXP:             The exp function uses a modification of the Taylor series.
                 The gist of the algorithm was borrowed from David H.
                 Bailey's 'MPFUN' package. This is a multiple precision
                 floating point package for Fortran. Mr. Bailey's
                 description for EXP follows verbatim :

                 "
                 Exp (t) = (1 + r + r^2 / 2! + r^3 / 3! + r^4 / 4! ...) ^ q * 2 ^ n

                 where q = 256, r = t' / q, t' = t - n Log(2) and where n
                 is chosen so that -0.5 Log(2) < t' <= 0.5 Log(2).
                 Reducing t mod Log(2) and dividing by 256 insures that
                 -0.001 < r <= 0.001, which accelerates convergence in the
                 above series.
                 "

                 In the MAPM implementation, we set n = 0 but use a
                 different 'q' :

                 if |input| < 1          :  use q = 256
                 if 1 <= |input| < 100   :  use q = 4096
                 if |input| >= 100       :  use q = 65536

                 We set n = 0 because we need an efficient way to compute
                 2 ^ n. It's possible that 'n' could be quite large, > 1000.
                 Note that we can't use pow since pow calls the exp
                 function. (Note: now that we have 'm_apm_integer_pow', we
                 could implement the real method, possibly in the future.)
                 (Sketch below.)

POW:             Calls the EXP function. The formula used is :

                 X^Y = e^A   where A = Y * log(X)

INTEGER POW:     Compute X^N when N is an integer. Used Knuth's algorithm
                 from The Art of Computer Programming, Volume 2
                 (sketch below).

                 A1 : Y = 1, Z = X
                 A2 : is N even/odd? and then N = N / 2
                      if N was even, goto step A5
                 A3 : Y = Z * Y
                 A4 : if N == 0, terminate with Y as the answer
                 A5 : Z = Z * Z
                 A6 : goto A2

LOG:             The series expansion for log converges very slowly, hence
                 we will call the EXP function and use Newton's method.
                 The following iteration was used (sketch below) :

                 X(n+1) = X(n) - 1 + N / e^X(n)

LOG10:           Calls the LOG function. The formula used is :

                 log10(x) = A * log(x)   where A = log10(e)  [0.43429...]

SIN:             The sin function uses the traditional series expansion,
                 though not right away. Two translations are performed
                 first and the series expansion is done in the third step.
                 The reason for the manipulations is to minimize the
                 magnitude of the number passed to the series expansion.
                 The smaller that number is, the faster the series will
                 converge to the precision desired. (Sketch below.)

                 Step 1 : Limit the input argument to +/- PI.

                 Step 2 : Use the multiple angle identity for sin(5x).
                          We will use this recursively 3 times, i.e. we
                          will multiply the input angle by 1 / (5*5*5).

                 sin(5x) == 16 * sin^5(x) - 20 * sin^3(x) + 5 * sin(x)

                 [Step 2 yields a worst case |x| = ~0.0251]

                 Step 3 : Traditional series expansion :

                 sin(x) == x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ...
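
A sketch of the SQUARE ROOT iteration, with plain C doubles standing in for
MAPM's decimal arithmetic. The frexp() starting guess and the stopping test
are choices of this sketch, not of MAPM.

    #include <math.h>
    #include <stdio.h>

    /* SQUARE ROOT sketch: Newton iteration X <- 0.5 * (X + N / X).
     * Assumes n > 0.  Illustrative only, not MAPM code.            */
    static double newton_sqrt(double n)
    {
        int e2;
        frexp(n, &e2);                       /* n = f * 2^e2         */
        double x = ldexp(1.0, e2 / 2);       /* rough sqrt(n) seed   */

        for (int i = 0; i < 60; i++) {
            double next = 0.5 * (x + n / x);
            if (fabs(next - x) <= 1e-15 * next)
                return next;
            x = next;
        }
        return x;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", newton_sqrt(2.0), sqrt(2.0));
        return 0;
    }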
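
A sketch of the EXP reduction described above: pick q from the magnitude of
the input, sum the series for exp(t/q), then raise the partial result to the
q-th power. Since q is a power of two, the sketch does that by squaring
log2(q) times; doubles stand in for MAPM's decimal arithmetic and the term
count is only illustrative.

    #include <math.h>
    #include <stdio.h>

    /* EXP sketch: exp(t) = (series of exp(t/q)) ^ q, with q chosen
     * from |t| as in the table above.  Not MAPM code.              */
    static double exp_by_reduction(double t)
    {
        double a = fabs(t);
        int squarings = (a < 1.0) ? 8 : (a < 100.0) ? 12 : 16;
        double r = t / (double)(1 << squarings);   /* q = 2^squarings */

        double sum = 1.0, term = 1.0;        /* 1 + r + r^2/2! + ...  */
        for (int k = 1; k <= 30; k++) {
            term *= r / k;
            sum  += term;
            if (fabs(term) < 1e-18 * fabs(sum))
                break;
        }
        while (squarings-- > 0)              /* (e^r)^q by squaring   */
            sum *= sum;
        return sum;
    }

    int main(void)
    {
        printf("%.10g  %.10g\n", exp_by_reduction(1.0),  exp(1.0));
        printf("%.10g  %.10g\n", exp_by_reduction(-7.5), exp(-7.5));
        return 0;
    }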
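
A sketch of the INTEGER POW steps A1..A6, with a plain double base instead
of an M_APM value. The N == 0 guard is an addition of the sketch, since the
steps above assume N >= 1.

    #include <stdio.h>

    /* INTEGER POW sketch following Knuth's steps A1..A6 above.      */
    static double integer_pow(double x, unsigned long n)
    {
        if (n == 0)
            return 1.0;                      /* sketch-only guard    */
        double y = 1.0, z = x;               /* A1 : Y = 1, Z = X    */
        for (;;) {
            int was_odd = (int)(n & 1UL);    /* A2 : is N even/odd?  */
            n >>= 1;                         /*      N = N / 2       */
            if (was_odd) {
                y = z * y;                   /* A3 : Y = Z * Y       */
                if (n == 0)
                    return y;                /* A4 : terminate with Y */
            }
            z = z * z;                       /* A5 : Z = Z * Z       */
        }                                    /* A6 : goto A2         */
    }

    int main(void)
    {
        printf("%.0f\n", integer_pow(3.0, 13));   /* 1594323         */
        printf("%.6f\n", integer_pow(1.5, 10));   /* 57.665039       */
        return 0;
    }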
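
A sketch of the LOG iteration. Doubles stand in for M_APM values, libm's
exp() stands in for the EXP routine above, and the frexp()-based starting
guess is purely an assumption of this sketch, not a description of how
MAPM seeds the iteration.

    #include <math.h>
    #include <stdio.h>

    /* LOG sketch: Newton's method on e^X = N, X <- X - 1 + N / e^X.
     * Assumes n > 0.  Illustrative only, not MAPM code.            */
    static double newton_log(double n)
    {
        int e2;
        double f = frexp(n, &e2);            /* n = f * 2^e2, f in [0.5, 1) */
        double x = (f - 1.0) + e2 * 0.6931471805599453;  /* rough log(n) */

        for (int i = 0; i < 50; i++) {
            double next = x - 1.0 + n / exp(x);
            if (fabs(next - x) <= 1e-15 * (fabs(next) + 1.0))
                return next;
            x = next;
        }
        return x;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", newton_log(10.0), log(10.0));
        return 0;
    }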
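
A sketch of the three SIN steps; doubles and libm's fmod() stand in for
MAPM's arithmetic and its argument reduction.

    #include <math.h>
    #include <stdio.h>

    /* SIN sketch: limit the angle to +/- PI, scale by 1/125, run the
     * Taylor series, then apply sin(5x) = 16 sin^5 x - 20 sin^3 x + 5 sin x
     * three times.  Illustrative only, not MAPM code.               */
    static double sin_by_reduction(double x)
    {
        const double pi = 3.14159265358979323846;

        /* Step 1 : limit the input argument to +/- PI               */
        x = fmod(x, 2.0 * pi);
        if (x >  pi) x -= 2.0 * pi;
        if (x < -pi) x += 2.0 * pi;

        /* Step 2 (scaling) : work with x / 125, worst case ~0.0251  */
        double r = x / 125.0;

        /* Step 3 : Taylor series  r - r^3/3! + r^5/5! - ...         */
        double s = r, term = r;
        for (int k = 1; k <= 10; k++) {
            term *= -(r * r) / ((2 * k) * (2 * k + 1));
            s += term;
        }

        /* Step 2 (identity) : undo the division by 125 three times  */
        for (int i = 0; i < 3; i++)
            s = ((16.0 * s * s - 20.0) * s * s + 5.0) * s;
        return s;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", sin_by_reduction(1.0),  sin(1.0));
        printf("%.15f  %.15f\n", sin_by_reduction(-4.2), sin(-4.2));
        return 0;
    }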
COS:             The cos function is very similar to sin.

                 Step 1 : Limit the input argument to +/- PI.

                 Step 2 : Use the multiple angle identity for cos(4x).
                          We will use this recursively 3 times, i.e. we
                          will multiply the input angle by 1 / (4*4*4).

                 cos(4x) == 8 * [ cos^4(x) - cos^2(x) ] + 1

                 [Step 2 yields a worst case |x| = ~0.0491]

                 Step 3 : Traditional series expansion :

                 cos(x) == 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ...

TAN:             Compute cos(x), then compute sin(x) = sqrt(1 - cos(x)^2).
                 Finally, tan(x) = sin(x) / cos(x). Note that we need to
                 keep track of the sign for the 'sin' after the sqrt,
                 since that result will always be positive.
                 (Sketch below.)

ARC_SIN:         Since the 'arc' family of functions all converge very
                 slowly, we will call the sin/cos functions and iterate
                 using Newton's method. We actually just call the cos
                 function and determine the sin value, as was done for
                 TAN. The following iteration was used (sketch below) :

                 X(n+1) = X(n) - [ sin(X(n)) - N ] / cos(X(n))

                 If we need the arcsin(x) and the magnitude of 'x' is
                 > 0.85, use the following identities :

                 For x > 0 : arcsin(x) =  arccos(sqrt(1 - x^2))
                 For x < 0 : arcsin(x) = -arccos(sqrt(1 - x^2))

                 The reason to switch to the arccos is as follows. As 'x'
                 approaches +/- 1.0, the arcsin(x) approaches +/- PI / 2.
                 The cosine of +/- PI / 2 is zero. Hence, as we get closer
                 and closer to 1, the denominator term in the above
                 iteration gets closer and closer to zero. The numerator
                 also tends towards zero, but having the denominator also
                 tending towards zero can considerably lengthen the number
                 of iterations needed to calculate a given precision level.

ARC_COS:         The arccos function is similar to the arcsin for the same
                 reasons. The following iteration was used :

                 X(n+1) = X(n) + [ cos(X(n)) - N ] / sin(X(n))

                 If we need the arccos(x) and the magnitude of 'x' is
                 > 0.85, use the following identities :

                 For x > 0 : arccos(x) =  arcsin(sqrt(1 - x^2))
                 For x < 0 : arccos(x) =  PI - arcsin(sqrt(1 - x^2))

                 The reason for switching to the arcsin is as follows
                 (see the arcsin function above). As 'x' approaches +1.0,
                 the arccos(x) approaches zero and the sine of zero is
                 zero. As 'x' approaches -1.0, the arccos(x) approaches
                 PI and the sine of PI is zero. Hence, as we get closer
                 and closer to 1, the denominator term in the above
                 iteration gets closer and closer to zero. The numerator
                 also tends towards zero, but having the denominator also
                 tending towards zero can considerably lengthen the number
                 of iterations needed to calculate a given precision level.

ARC_TAN:         Use the identity :

                 arctan(x) == arcsin( x / sqrt(1 + x^2) )

ARC_TAN2:        Handle the special cases, x and/or y == 0. Call the
                 arctan function with (y / x) as the argument. Adjust the
                 returned value based on the signs of y, x.
                 (Sketch of ARC_TAN and ARC_TAN2 below.)

============================================================================
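
A sketch of the TAN entry: recover sin(x) from cos(x) and restore the sign
that the square root loses. libm's cos() stands in for MAPM's own cosine,
and the range reduction shown is only the part needed to know the sign of
sin(x).

    #include <math.h>
    #include <stdio.h>

    /* TAN sketch: sin(x) = sqrt(1 - cos(x)^2) with the sign fixed up,
     * then tan(x) = sin(x) / cos(x).  Illustrative only.            */
    static double tan_from_cos(double x)
    {
        const double pi = 3.14159265358979323846;

        /* reduce to (-PI, PI] so sin(x) has the same sign as x      */
        x = fmod(x, 2.0 * pi);
        if (x >   pi) x -= 2.0 * pi;
        if (x <= -pi) x += 2.0 * pi;

        double c = cos(x);
        double s = sqrt(1.0 - c * c);        /* always non-negative  */
        if (x < 0.0)                         /* sin < 0 on (-PI, 0)  */
            s = -s;
        return s / c;
    }

    int main(void)
    {
        printf("%.12f  %.12f\n", tan_from_cos(1.2),  tan(1.2));
        printf("%.12f  %.12f\n", tan_from_cos(-2.5), tan(-2.5));
        return 0;
    }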
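
A sketch of the ARC_SIN iteration with the switch to the arccos identity for
|N| > 0.85. libm's sin(), cos() and acos() stand in for MAPM's own routines.

    #include <math.h>
    #include <stdio.h>

    /* ARC_SIN sketch: Newton's method on sin(X) = N,
     * X <- X - (sin(X) - N) / cos(X).  Assumes |n| <= 1.            */
    static double arcsin_newton(double n)
    {
        if (fabs(n) > 0.85) {                /* keep cos(X) away from 0 */
            double a = acos(sqrt(1.0 - n * n));
            return (n > 0.0) ? a : -a;
        }
        double x = n;                        /* starting guess       */
        for (int i = 0; i < 50; i++) {
            double next = x - (sin(x) - n) / cos(x);
            if (fabs(next - x) <= 1e-15 * (fabs(next) + 1.0))
                return next;
            x = next;
        }
        return x;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", arcsin_newton(0.5),  asin(0.5));
        printf("%.15f  %.15f\n", arcsin_newton(-0.9), asin(-0.9));
        return 0;
    }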
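
A sketch of ARC_TAN and ARC_TAN2: the arcsin identity for arctan, and a
wrapper that handles the zero cases and fixes the quadrant from the signs
of y and x. libm's asin() stands in for the arcsin routine above, and the
exact values returned for the special cases follow the usual atan2
convention, which is an assumption of this sketch.

    #include <math.h>
    #include <stdio.h>

    /* ARC_TAN sketch: arctan(x) = arcsin( x / sqrt(1 + x^2) )       */
    static double arctan_from_arcsin(double x)
    {
        return asin(x / sqrt(1.0 + x * x));
    }

    /* ARC_TAN2 sketch: special cases, then arctan(y/x) with a
     * quadrant adjustment based on the signs of y and x.            */
    static double arctan2_sketch(double y, double x)
    {
        const double pi = 3.14159265358979323846;

        if (x == 0.0 && y == 0.0) return 0.0;    /* undefined; pick 0 */
        if (x == 0.0) return (y > 0.0) ? pi / 2.0 : -pi / 2.0;
        if (y == 0.0) return (x > 0.0) ? 0.0      :  pi;

        double a = arctan_from_arcsin(y / x);
        if (x < 0.0)                             /* wrong quadrant:   */
            a += (y > 0.0) ? pi : -pi;           /* shift by +/- PI   */
        return a;
    }

    int main(void)
    {
        printf("%.15f  %.15f\n", arctan_from_arcsin(2.0), atan(2.0));
        printf("%.15f  %.15f\n", arctan2_sketch(-1.0, -1.0), atan2(-1.0, -1.0));
        return 0;
    }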
