NUMERICAL METHODS
Numerical Algorithms in PL/I.
"Is this the Room of Answers?" -- Gulliver's Travels [the film]
- Eigenvalues and Eigenvectors. Several algorithms are
  available (more appear later in this list):
- Reduction of a matrix to co-diagonal form by
eliminations, using
Strachey and Francis' algorithm.
- Bauer's Romberg integration.
- Fairweather's Modified
Romberg quadrature.
- Filon Quadrature,
by Chase and Fosdick.
- Integration by the Trapezoidal Rule, with Romberg extrapolation,
single precision, and
double precision.
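The routines above are in PL/I; as a language-neutral illustration of the technique, here is a minimal Python sketch of the trapezoidal rule with Romberg extrapolation (the function and parameter names are my own, not those of the linked routines):

```python
import math

def romberg(f, a, b, levels=10):
    """Trapezoidal rule with Romberg (Richardson) extrapolation.

    R[k][0] is the composite trapezoidal estimate with 2**k panels;
    each further column cancels error terms of order h**2, h**4, ...
    """
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, levels):
        h *= 0.5
        # Trapezoidal refinement: add only the new midpoints of this level.
        total = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        R[k][0] = 0.5 * R[k - 1][0] + h * total
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]
```

For a smooth integrand such as sin on [0, pi], a few extrapolation levels already give near machine precision.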
- Fresnel sine and cosine integrals,
  incorporating ACM Algorithms 89 and 90.
- Integration of some oscillating tails,
  by J. Lyness and G. Hines.
- Herndon's algorithm to adjust the inverse
of a matrix when an element is perturbed.
- Rutishauser's algorithm
inverts a positive definite symmetric matrix.
- Wells' Matrix division.
- Spath's Square root of a
positive definite matrix.
- W. Eberling's algorithm for
matrix exponentiation
is based on the method of M. L. Liou to evaluate e**At.
- Pfann and Straka:
Determinant evaluation.
- Boothroyd's In-situ transpose of
a matrix held as a vector.
- Brown's solution of non-linear equations.
- Kerner's Roots of a Polynomial.
- Perry: Rational roots of a polynomial
having integer coefficients.
- Roots of a complex polynomial,
single precision and
double precision.
- Bach's Complex roots of a
complex transcendental equation using the downhill
method.
- Invert a matrix, single precision,
and Invert a matrix, double precision.
- Busing and Levi's algorithm to
invert a symmetric matrix.
- Thatcher: Crout with pivoting.
- Means, Standard Deviation, Sums of Cross Products of
deviations, and Correlation Coefficients,
single precision and
double precision.
- T-statistics on means of population,
double precision.
- Businger & Golub:
Singular Value Decomposition of a complex matrix.
- Linear Least Squares using Householder transformation,
single precision, and
double precision.
- One Integration Step for a system of Partial Differential
Equations,
single precision, and
double precision.
- Differential Equations.
- Kutta-Merson algorithm
to integrate a differential equation.
- Bessel Functions
- Fast Fourier Transform.
- AS 117: Fast Fourier transform
of series of arbitrary length, using the CHIRP-Z transform.
- Jones' Spline fitter.
- Jones' Interpolation.
- Procedures for the Gamma function.
- Solve a system of sparse equations
using the method of M. Shacham and E. Kehat. This code
was originally published in PL/I, and requires a
Gauss equation solver.
The code includes a sample function FUN and main program
for solving example 3.
- Solve a system of linear equations, obtaining exact solutions,
using John Boothroyd's algorithm.
This uses integer arithmetic, and the results are expressed
as the ratios of two integers.
It is worth considering this algorithm afresh, in view of
the ability of PL/I to perform integer arithmetic with an
accuracy of 31 decimal digits.
Also available is a sample
test program.
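Boothroyd's algorithm itself is not reproduced here; the sketch below merely illustrates the idea of exact solutions expressed as ratios of two integers, using Python's rational arithmetic with plain Gaussian elimination (a row interchange avoids zero pivots):

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly over the rationals by Gaussian elimination;
    the answers come back as Fraction objects, i.e. ratios of two
    integers, as in the exact-solution routines described above."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        # Swap in the first row with a non-zero entry in this column.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x
```

With exact rationals there is no rounding error at all, though intermediate numerators and denominators can grow quickly.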
- Jo Ann Howell's program uses residue arithmetic to
obtain exact solutions
from a system of linear equations.
The interface has been considerably simplified (fewer arguments
are required).
This PL/I code uses integer arithmetic - either 31 decimal
digits or 31 binary digits for variables, depending on the
user's choice.
- Solves a system of
tri-diagonal equations of a kind
common in boundary-value problems, using D. J. Evans' fast
algorithm. In this system, the sub-diagonal, diagonal, and
super-diagonal values are constant.
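Evans' fast algorithm is not reproduced here; for illustration, the ordinary Thomas forward-sweep/back-substitution solves the same class of system, specialized to a constant sub-diagonal, diagonal, and super-diagonal:

```python
def tridiag_const(sub, diag, sup, d):
    """Solve a tridiagonal system whose sub-diagonal, diagonal, and
    super-diagonal entries are each constant (the situation handled
    by Evans' algorithm).  This sketch uses the standard Thomas
    sweep, not Evans' method itself."""
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = sup / diag
    dp[0] = d[0] / diag
    for i in range(1, n):
        m = diag - sub * cp[i - 1]
        cp[i] = sup / m
        dp[i] = (d[i] - sub * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The constant-coefficient case (e.g. sub = -1, diag = 2, sup = -1 from a second-difference operator) is exactly the kind that arises in boundary-value problems.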
- Solve a system of linear Diophantine equations, using
Blankinship's algorithm.
This requires
Blankinship's triangulation algorithm.
- Blankinship's algorithm for
matrix triangulation uses integer arithmetic.
Decimal arithmetic is used, employing 15 digit precision.
Precision up to 31 decimal digits can be obtained, if required.
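The idea behind Blankinship's method can be shown on the simplest case, the extended gcd of two integers: row-reduce the integer matrix [[a, 1, 0], [b, 0, 1]] until one leading entry is zero (a Python sketch, not the PL/I triangulation routine above):

```python
def blankinship_gcd(a, b):
    """Extended gcd in the style of Blankinship's matrix scheme:
    repeatedly subtract an integer multiple of one row from the
    other; when a leading entry reaches zero, the surviving row
    reads [g, x, y] with a*x + b*y = g."""
    r1, r2 = [a, 1, 0], [b, 0, 1]
    while r1[0] != 0 and r2[0] != 0:
        if abs(r1[0]) >= abs(r2[0]):
            q = r1[0] // r2[0]
            r1 = [u - q * v for u, v in zip(r1, r2)]
        else:
            q = r2[0] // r1[0]
            r2 = [u - q * v for u, v in zip(r2, r1)]
    g, x, y = r1 if r1[0] != 0 else r2
    return g, x, y
```

All arithmetic is on integers, which is why the method extends naturally to exact triangulation of integer matrices and to linear Diophantine systems.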
- Least squares algorithms.
- Smooth surface fitting routine.
This subroutine fits a smooth surface of a single-valued bivariate
function z = z(x,y) to a set of input data points given at input
grid points in an x-y plane. It generates a set of output grid
points by equally dividing the x and y coordinates in each interval
between a pair of input grid points, interpolates the z value for
the x and y values of each output grid point, and generates a set
of output points consisting of input data points and the
interpolated points.
The method is based on a piece-wise function composed of a set of
bicubic polynomials in x and y. Each polynomial is applicable to a
rectangle of the input grid in the x-y plane. Each polynomial is
determined locally.
- Dave Jones'
routine smoothes the input function F by fitting a least
squares polynomial through successive sets of points of F and
resamples F at these points using the polynomials found.
- Allan MacLeod's procedure calculates the value of the
Cosine-integral defined as
COSINT(x) = Gamma + Ln(x) + Integral (0 to x) [cos(t)-1]/t dt
where Gamma is Euler's constant.
The code uses rational approximations with a maximum accuracy of
20 significant figures.
- Allan MacLeod's program calculates the value of the
Sine-integral defined as
SININT(x) = Integral (0 to x) sin(t)/t dt
The code uses rational approximations with the coefficients
given to 20 significant figures of accuracy.
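MacLeod's rational approximations are not reproduced here; the two integrals can be checked numerically by straightforward Simpson quadrature on the definitions just given (both integrands are smooth once their limiting values at t = 0 are supplied):

```python
import math

GAMMA = 0.5772156649015329   # Euler's constant

def cosint(x, n=2000):
    """Ci(x) = gamma + ln(x) + integral 0..x of (cos t - 1)/t dt,
    by Simpson's rule (n must be even)."""
    g = lambda t: (math.cos(t) - 1.0) / t if t != 0.0 else 0.0
    h = x / n
    s = g(0.0) + g(x) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))
    return GAMMA + math.log(x) + s * h / 3.0

def sinint(x, n=2000):
    """Si(x) = integral 0..x of sin(t)/t dt, by Simpson's rule."""
    f = lambda t: math.sin(t) / t if t != 0.0 else 1.0
    h = x / n
    s = f(0.0) + f(x) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return s * h / 3.0
```

This gives roughly double-precision accuracy for moderate x, whereas the rational approximations above are both faster and uniformly accurate.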
- Real logarithmic integral.
- Solution of a Toeplitz system of
equations.
- Reduction of a matrix by eliminations.
- Jacobi polynomials F = pn(x) defined
by Rodrigues' formula, for n in the range 0 to 25.
- ACM Algorithm 619:
Automatic Numerical Inversion of the Laplace Transform,
by Robert Piessens and Rudi Huysmans.
- ACM Algorithm 627:
A PL/I Subroutine for Solving Volterra Integral Equations,
by John M. Bownds and Lee Appelbaum.
- ACM Algorithm 629: An Integral Equation Program For Laplace's
Equation in Three Dimensions.
- ACM Algorithm 682:
Talbot's Method for the Numerical Inversion
of Laplace Transforms,
by Almerico Murli and Mariarosaria Rizzardi.
- Kendall E. Atkinson's program computes
Exponential Integrals E(N,Z) of a Complex Argument
Z by Quadrature on (T**(N-1)*Exp(-T)/(Z+T))/(N-1)! from T=0 to
T=Infinity.
- Heuristic algorithm to pick N rows of
X out of NCAND to maximize the determinant of X'X,
using the Fedorov exchange algorithm,
by Alan Miller and Nam Nguyen.
- Random Numbers
- In 2010, George Marsaglia produced a super random number
generator, MWC4691, producing 64-bit pseudo-random numbers
having an immense period, >10**45000, and high speed.
Says George:
"There are few RNGs that do as well as this one
on tests of randomness and are comparable in even one
of the categories: period, speed, simplicity---not to
mention comparable in all of them".
The unsigned 32-bit integer version.
The signed 32-bit integer version.
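MWC4691 itself is not reproduced here; this minimal Python sketch only illustrates the multiply-with-carry recurrence that Marsaglia's family of generators is built on (the multiplier below is arbitrary, chosen for illustration; real generators use multipliers selected with care):

```python
def mwc_stream(seed_x=123456789, seed_c=362436, a=698769069, n=5):
    """Generic lag-1 multiply-with-carry generator:
    t = a*x + c; new x = t mod 2**32; new carry c = t >> 32.
    Returns the first n 32-bit outputs.  Illustrative only -
    this is NOT MWC4691, whose lag and multiplier differ."""
    x, c = seed_x, seed_c
    out = []
    for _ in range(n):
        t = a * x + c
        x = t & 0xFFFFFFFF   # low 32 bits become the new state/output
        c = t >> 32          # high bits become the carry
        out.append(x)
    return out
```

The same seeds always reproduce the same stream, which is what makes these generators portable across machines and languages.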
- George Marsaglia and Wai Wan Tsang have produced a 64-bit
universal random number generator. It produces
uniform [0,1) variates directly, as 64-bit floating-point
numbers, without conversion from integers. The resultant
numbers have a very long period (2**202 or 10**61)
with excellent randomness. The generator produces the
same random numbers in IEEE machines, regardless of
computer language used.
This code comes with a driver program.
- George Marsaglia's portable random integer generator
uses 32-bit integer arithmetic.
It is fast, and has a period greater than 10**36.
- George Marsaglia's portable random integer generator
uses 64-bit integer arithmetic.
It is fast, and has a period of about 2**250 (about
10**75).
- In November 2009, George Marsaglia produced a portable
super random integer generator, having an incredibly long
period -- greater than 10**402575. It uses
64-bit integer arithmetic of PL/I.
- This file is the same algorithm as for the 64-bit
super random integer generator directly above, presented as a
PL/I PACKAGE.
- Also in November 2009, George Marsaglia produced a portable
super random integer generator, having an incredibly long
period -- greater than 10**402575. It uses
32-bit integer arithmetic of PL/I.
- Wagner's algorithm for
Extracting Phrases in a Space-Optimal Fashion.
The algorithm PARSE computes and prints a minimum-space form of a textual
message, MS. The minimization is performed over all possible "parses" of MS
into sequences of phrase references and character strings. Each phrase
reference represents one of a finite collection, P, of phrases. The
collection, P, must be selected before PARSE is applied.
- L. B. Smith's Computation of Chebyshev
  series coefficients.
- G. Ehrlich's four Combinatorial
  algorithms.
- Ginsberg & Zaborowski's algorithm for the
Dilogarithm function of a real
argument to 18 digits.
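The 18-digit rational approximations are not reproduced here; for |x| <= 1 the dilogarithm can also be evaluated from its defining series, as this simple Python stand-in shows:

```python
def dilog(x, tol=1e-18):
    """Dilogarithm Li2(x) = sum over k >= 1 of x**k / k**2,
    valid for |x| <= 1; a direct-summation stand-in for the
    rational approximations of Ginsberg & Zaborowski."""
    term, k, total = x, 1, 0.0
    while abs(term) / (k * k) > tol:
        total += term / (k * k)   # add x**k / k**2
        k += 1
        term *= x
    return total
```

For arguments outside [-1, 1] the published algorithm applies reflection identities first; the plain series would not converge there.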
Two of the great names in numerical methods - James Wilkinson
of the National Physical Laboratory, U.K., and C. Reinsch of the
Mathematical Institute of TH, München, Germany - combined to assemble
a masterful suite of numerical procedures in
Handbook for Automatic Computation, Volume II,
Linear Algebra.
The algorithms cover linear equations, least squares, linear
programming, and eigenvalue and eigenvector problems.
Note: in some of the following procedures, the original comment
documentation and description of the parameters - which were
missing - have been restored. As well, the original procedure
names have been restored. The routines have not been re-tested.
Most of these routines have active links; the remainder
will be added as soon as they have been converted.
- Cholesky symmetric decomposition of a
positive definite matrix.
- Cholesky symmetric decomposition of a
positive definite matrix, using compact storage.
(Otherwise, it's the same as the previous item.)
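The Handbook routines are in PL/I (with a compact-storage variant); the dense form of the Cholesky factorization they compute can be sketched in a few lines of Python:

```python
import math

def cholesky(A):
    """Cholesky decomposition of a symmetric positive definite
    matrix: returns lower-triangular L with L * L^T = A.
    A plain dense sketch; the routines above also offer a
    compact-storage (packed lower triangle) variant."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry; A must be positive definite for
                # the quantity under the square root to be positive.
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

For example, A = [[4, 2], [2, 3]] factors as L = [[2, 0], [1, sqrt(2)]].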
- LDLt symmetric decomposition of a positive
definite matrix.
- Iterative refinement of the solution
of a positive definite system.
- Inversion of positive definite matrices by
the Gauss-Jordan method. Two procedures are provided:
gjdef1 takes an n x n matrix as input. gjdef2
takes the lower triangle stored in compact form as an
n(n+1)/2 vector.
- Symmetric decomposition of positive definite
band matrices.
- The Conjugate gradient method
for solving a system of linear equations that have
symmetric positive definite coefficients.
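A minimal Python sketch of the conjugate gradient iteration (dense lists for illustration only; the Handbook routine is the definitive PL/I version):

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Conjugate gradient iteration for A x = b, where A is
    symmetric positive definite.  Stops when the squared residual
    norm falls below tol."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]          # residual b - A x, with starting guess x = 0
    p = r[:]          # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic the method terminates in at most n steps; in practice it is used iteratively on large sparse systems.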
- The solution of symmetric and unsymmetric
band equations & the calculation of eigenvectors of
band matrices.
- The solution of a real
system of linear equations by the Crout method.
- The solution of a complex
system of linear equations by the Crout method.
- Linear least squares solutions by
Householder transformations. This algorithm can also be
used for systems of linear equations of n equations in
n unknowns, and for matrix inversion.
- Elimination with weighted row
combinations.
Four procedures are provided for solving least squares problems,
and for solving n equations in n unknowns,
and for matrix inversion. Choose the one best suited to
your application:
ORTHOLIN1 is for solving for X a least
squares problem AX = B, where A is an n × m
matrix, X is an m × k matrix, and B is an
n × k matrix.
ORTHOLIN2 is used when B is a single column.
ORTHO1 is used for matrix inversion.
ORTHO2 is for matrix inversion, and economises on storage.
- Singular value decomposition and
least squares solutions.
- Simplex method based on triangular
decompositions.
- The Jacobi method for a real
  symmetric matrix reduces it to diagonal form by successive plane rotations.
- Householder's tri-diagonalization
of a symmetric matrix.
Several algorithms are provided, some requiring the
matrix in compressed form, stored as a vector.
IBM's Scientific Subroutine Library. Most if not all of these
procedures are available. See
a list of their titles.
For the procedures, contact the undersigned.
STATISTICAL ALGORITHMS
These algorithms were originally published in
Journal of Royal Statistical Society (Series C):
Applied Statistics.
Alan Miller converted them to Fortran 90, and I have converted
those to PL/I.
- M. J. R. Healy's algorithm,
AS 6: Given a symmetric matrix a
of order n as lower triangle,
calculates an upper triangle, u, such that
uprime * u = a.
a must be positive semi-definite.
- P. R. Freeman's algorithm, AS 7:
Forms as a lower triangle, a generalised inverse
of the positive semi-definite symmetric matrix a of
order n, stored as lower triangle.
A test program exercises both AS 6 and AS 7.
- AS 27: Calculate the upper tail area under
Student's t-distribution,
by G. A. R. Taylor.
- AS 60: Calculate the
eigenvalues and eigenvectors of a real symmetric matrix,
by D. N. Sparks & A. D. Todd.
- AS 63: Log of the beta function (includes
  log of the gamma function), by
  K. L. Majumder & G. P. Bhattacharjee.
- AS 66:
Evaluates the tail area of the standardised normal curve
from x to infinity or from minus infinity to x,
by I. D. Hill.
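Hill's polynomial approximations are not reproduced here; the same tail area can be obtained from the complementary error function, as this Python stand-in shows:

```python
import math

def alnorm(x, upper=True):
    """Tail area of the standardised normal curve - the quantity
    AS 66 computes - here via math.erfc rather than Hill's
    approximations.  upper=True gives P(Z > x); upper=False
    gives P(Z <= x)."""
    # P(Z > x) = erfc(x / sqrt(2)) / 2
    p = 0.5 * math.erfc(x / math.sqrt(2.0))
    return p if upper else 1.0 - p
```

For example, the upper tail beyond x = 1.96 is about 0.025, the familiar two-sided 5% critical value.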
- AS 91:
Evaluates the percentage points of the chi-squared
probability distribution function, by
D. J. Best & D. E. Roberts.
- AS 110:
Lp-norm fit of a straight line by extension of Schlossmacher's method,
particularly for 1 <= p <= 2, by
V. A. Sposito, W. J. Kennedy, and J. E. Gentle.
- J. Barnard's algorithm, AS 126:
Computes the probability of the normal range given t, the
upper limit of integration, and n, the sample size.
- R. D. Armstrong and M. K. Tung's algorithm,
AS 132:
Fit Y = ALPHA + BETA.X + error
- AS 135:
Min-Max estimates for a linear multiple regression problem,
by R. D. Armstrong and D. S. Tung.
- AS 136:
Divides M points in N-dimensional space into K clusters so that
the within-cluster sum of squares is minimized, by
J. A. Hartigan & M. A. Wong.
- R. E. Lund's algorithm, AS 152:
Cumulative hypergeometric probabilities
(Replaces AS 59, and incorporates AS R86.)
- AS 154:
Algorithm for exact maximum likelihood estimation of
autoregressive-moving average models by means of Kalman filtering,
by G. Gardner, A. C. Harvey, & G. D. A. Phillips.
- AS 155:
Distribution of a linear combination of non-central chi-squared
random variables, by R. B. Davies.
- AS 157:
The Runs-Up and Runs-Down Test.
- AS 177:
Exact calculation of Normal Scores.
- AS 181:
Calculates the Shapiro-Wilk W test and its significance level.
- AS 190:
Evaluates the probability from 0 to q for a studentized range
having v degrees of freedom and r samples.
- AS 192:
Computes approximate significance points of a Pearson curve with given
first four moments, or first three moments and left or right boundary.
- AS 205:
Enumerates all R*C contingency tables with given row totals N(I)
and column totals M(J) and calculates the hypergeometric
probability of each table.
- AS 207:
Fitting a generalized log-linear model
to fully or partially classified frequencies.
- AS 217:
Does the dip calculation for an ordered vector X using the
greatest convex minorant and the least concave majorant, skipping
through the data using the change points of these distributions.
- AS 227:
Generates all possible N-bit binary codes, and applies a user's
procedure for each code generated.
- AS 239: Incomplete Gamma Integral.
- AS 241:
Produces the normal deviate Z corresponding to a given lower
tail area of P;
Z is accurate to about 1 part in 10**7.
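AS 241's rational approximations are not reproduced here; as a slow but simple stand-in, the same deviate can be found by bisection on the erfc-based normal CDF:

```python
import math

def ppnd(p, tol=1e-12):
    """Normal deviate z with P(Z <= z) = p, by bisection on the
    CDF Phi(z) = erfc(-z / sqrt(2)) / 2.  AS 241 instead uses
    direct rational approximations, which are much faster."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie strictly between 0 and 1")
    cdf = lambda z: 0.5 * math.erfc(-z / math.sqrt(2.0))
    lo, hi = -40.0, 40.0          # Phi is vanishingly close to 0 / 1 here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, p = 0.975 gives z close to 1.95996, the two-sided 5% point.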
- AS 245: Logarithm of the gamma function.
- AS 260:
Computes the C.D.F. for the distribution of the square of the
multiple correlation coefficient with parameters X, IP, N, and RHO2.
X is the sample value of R**2.
IP is the number of predictors, including 1 for the constant if one is
being fitted.
N is the number of cases.
RHO2 is the population value of the squared multiple correlation
coefficient (often set = 0).
- AS 261:
Computes the quantile of the distribution of the square of the
sample multiple correlation coefficient for given number of
random variables M, sample size SIZE, square of the population
multiple correlation coefficient RHO2, and lower tail area P.
- AS 275:
Computes the noncentral chi-square distribution function with positive
real degrees of freedom f and nonnegative noncentrality parameter theta.
- AS 282:
High breakdown regression and multivariate estimation,
by D. M. Hawkins.
Calculates the least median of squares regression, minimum volume
ellipsoid, and associated statistics.
- AS 285:
Finds the probability that a normally distributed random
N-vector with mean 0 and covariance COVAR falls
in the area enclosed by the external user-defined function F,
by S. L. Lohr.
- A. J. Miller's AS 290:
Generate a rectangular 2-D grid of variance ratios from which to plot
confidence regions for two parameters using Halperin's method.
If the model is linear in all parameters other than the two selected,
the confidence regions are exact; otherwise they are approximate and the
user should test the sensitivity of the confidence regions to variation
in the other parameters.
- AS 295: Alan Miller's & Nam Nguyen's
Heuristic algorithm to pick N rows of X out of NCAND to maximize
the determinant of X'X, using the Fedorov exchange algorithm.
(This is a modified version of the published algorithm.)
- AS 298:
Hybrid minimization routine using simulated annealing,
by S. P. Brooks.
- AS 304:
Fisher's non-parametric randomization test for two small
independent random samples, by L. E. Richards and J. Byrd.
- AS 310:
Computes the cumulative distribution function of a
non-central beta random variable, by R. Chattamvelli and R. Shanmugam.
- AS 319:
  Function minimization without derivatives.
R. A. Vowels, 25 February 2006; last updated 3 December 2010.