Numerical Analysis

Archive for July, 2010

Help needed in finding the exact amount of "US taxpayer subsidization" the "Iraq and Afghan insurgency" has gotten …

I need help finding the exact amount of "US taxpayer
subsidization" the "Iraq and Afghan insurgency" has gotten.
The math is probably 90% of the way there, but by no means complete.

This is purely a "recreational math pursuit" (no political judgments made).

The 2D spreadsheet format must be retained; this is a General Ledger
accounting-formatted computation.
No complex math, just accounting addition and subtraction, with averages
where necessary.
I initially got $90 USD per insurgent, but I saw many flaws in the way the
computation was reached.
Stick to XLS, but I can also work with plain Lotus 1-2-3 files without any problem.

Happy numerical analysis.

: )

posted by admin in Uncategorized and have Comment (1)

Errors in "Computer Approximations" by Hart et al.

I know that the book in the subject line is largely of historical interest
these days, but mostly for my own edification I have been trying to
reproduce in math software the minimax rational approximations from the
classic 1960s-era literature on the subject. Of particular interest to
me are Cody’s several papers and the book in question.

I have access to an original 1968 university library copy. I understand
that there was a corrected 1978 edition, which I can’t find. I do know
that in the copy I am using, the section that deals with the
approximation of the gamma function is just riddled with errors.
Indeed, the reported approximation intervals seem wrong in one
table (i.e. [0,1] vs. the correct [2,3]), and in many of the
coefficient lists in the back of the book the reported figures make no
sense. I am able to replicate the reported errors when using the
correct interval mentioned in the text, but not with the printed
coefficients. The whole thing seems a bit of a mess, and I am hoping
that widely published errata were circulated shortly after publication,
since users would have caught a lot of this stuff back then. I have
tested a couple of the coefficient lists that I think are wrong, and the
resulting rational functions don’t estimate the gamma function remotely
closely, on either the reported interval or any other one for that
matter. So I know I am not imagining this. I am guessing that
coefficients for some other approximations were printed where the
correct ones for the gamma on [2,3] should be. This is no small
typo; it affects several whole pages of the book.
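
For anyone attempting the same replication, a small harness like the following makes it quick to check whether a candidate (p, q) coefficient pair reproduces a reported maximum error. This is only a sketch; it assumes coefficient lists are given lowest-order first, which is a convention I chose here, not necessarily the book's.

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial with coefficients in ascending order (c0 + c1*x + ...)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def max_abs_error(p, q, f, a, b, samples=2001):
    """Largest |p(x)/q(x) - f(x)| over a dense sample of [a, b]."""
    worst = 0.0
    for i in range(samples):
        x = a + (b - a) * i / (samples - 1)
        worst = max(worst, abs(horner(p, x) / horner(q, x) - f(x)))
    return worst
```

With the trivial approximation p = [1.0], q = [1.0] on [2, 3] this reports an error of 1.0 (gamma increases from 1 to 2 there), which is a quick sanity check before plugging in coefficient lists from the book.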

I am really glad that one can use relevant packages in Mathematica or
Maple to rapidly estimate minimax rational approximations of functions
these days. If this were 1973 and I really needed to depend on the Hart
work, I would probably be most frustrated.

Apart from this, the Hart work was a deeply impressive contribution to
the fields of numerical approximation and computer science in its day.
If anyone here knows anything about its history, whether errata were
published, and where one can find them, I would be very obliged. I know
that nowadays computers are so fast that rational approximations of
functions have become obsolete in most cases (e.g., why use Cody’s
rational approximations for erf and erfc when you can compute the
continued fractions or series expansions directly and quickly?), but I
would like to have my intellectual curiosity satisfied.

best regards,

Les Wright

posted by admin in Uncategorized and have Comments (6)

Statistical Challenges and Advances in Brain Science

Call for submissions on Statistical Challenges and Advances in Brain Science

Statistics and probability theory play key roles in cutting-edge brain
research. Leading examples include non-linear time series analysis for
studying brain dynamics using electro/magnetoencephalograms, and random
field theory in analyzing 3D datasets. This special issue will highlight
statistical and probabilistic topics related to all aspects of brain
science, including, but not limited to, computational neuronal modeling
and structural and functional neuroimaging.

Submissions preferably will provide either methodological or theoretical
advances in the statistical aspects of brain science or demonstrate
applications of statistical techniques in neuroscience. Priority will be
given to papers that are assessed to be, statistically and scientifically
speaking, the most innovative, comprehensive, and of interest to a wide
readership.

Papers submitted to the special issue will be reviewed according to the
regular procedure of the journal. Accepted papers will appear in a single
issue of Statistica Sinica, scheduled for 2008. Please contact John Aston
for questions on the suitability of your paper(s).
Submissions must be made online through the journal site at


Please use the LaTeX article template, also available at the above site,
for preparing your manuscript submission. The deadline for submission is
January 31, 2007. Authors wishing to receive email alerts as the deadline
approaches may contact the editorial assistant, Karen Li (Karen at

Guest Editors:

John Aston, Academia Sinica
Emery Brown, MIT/Massachusetts General Hospital
Keith Worsley, McGill University
Yingnian Wu, UCLA

**Statistica Sinica endeavors to meet the needs of statisticians faced
with a rapidly changing world. It publishes significant and original
articles that promote the principled use of statistics along with related
theory and methods in quantitative studies, essential to modern
technologies and sciences.**

posted by admin in Uncategorized and have No Comments

number of the beast

Hi there,
by accident I found an algorithm with a quite funny result.

I told my computer to sum up square roots, starting with i = 1 and stopping at n.
In LaTeX code:  $ \sum_{i=1}^n \sqrt{i} $

The result is quite funny if the upper limit n is 10^(2k) (k = 1, 2, 3, 4, …):

Result (n = 10^1): 19.3060005260357
Result (n = 10^2): 661.462947103148
Result (n = 10^3): 21065.833110879
Result (n = 10^4): 666616.459197108
Result (n = 10^5): 21081692.7461519
Result (n = 10^6): 666666166.458841
Result (n = 10^7): 21081849486.4393
Result (n = 10^8): 666666661666.612

As you can see, I found the ‘number of the beast’ (666).

Does one of you have an explanation for this funny repeating result?
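
For what it's worth, the pattern is consistent with the Euler–Maclaurin expansion $\sum_{i=1}^{N}\sqrt{i} \approx \frac{2}{3}N^{3/2} + \frac{1}{2}\sqrt{N} + \zeta(-\frac{1}{2})$: when N is an even power of ten, the leading term $\frac{2}{3}N^{3/2}$ is $0.666\ldots \times 10^{3k+1}$, which supplies the repeating sixes. A quick sketch checking this (the constant $\zeta(-1/2) \approx -0.2078862$ is taken from tables; note the posted values appear to come from sums that stop at n − 1, i.e. an exclusive upper bound):

```python
import math

ZETA_MINUS_HALF = -0.207886224977  # zeta(-1/2), value taken from tables

def sqrt_sum(N):
    """Direct sum of sqrt(i) for i = 1..N."""
    return sum(math.sqrt(i) for i in range(1, N + 1))

def euler_maclaurin(N):
    """Leading terms of the asymptotic expansion of sqrt_sum(N)."""
    return (2.0 / 3.0) * N ** 1.5 + 0.5 * math.sqrt(N) + ZETA_MINUS_HALF
```

Here sqrt_sum(99) reproduces the posted 661.462947103148, and euler_maclaurin(99) matches it to about three decimal places; the 2/3 factor is where the sixes come from.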

Best regards

contact me via

posted by admin in Uncategorized and have Comments (3)

Solving Symmetric Positive-Definite Matrix

I have a linear system  A*X=B where I need to solve for X.
A is an NxN symmetric positive definite matrix which is formed by
    A = c1*v1*v1′ + c2*v2*v2′ + … + cm*vm*vm’
where ci are positive scalars
and vi are Nx1 column matrices
and vi’ is the transpose of vi
and m>N.
Note that the vi are general; they are not necessarily normalized or
orthogonal, etc.

I could simply perform a Cholesky decomposition of A and then
solve A*X=B.  But can my knowledge of the A=SUM(ci*vi*vi')
decomposition be used to reduce the computation in solving A*X=B?
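
As a baseline for comparison, here is a pure-Python sketch of the direct route (assemble A from the rank-one terms, then Cholesky plus two triangular solves); any structure-exploiting alternative would need to beat roughly the N^3/3 flops of the factorization:

```python
import math

def solve_spd(vs, cs, b):
    """Assemble A = sum_i c_i * v_i * v_i' and solve A x = b via Cholesky."""
    n = len(b)
    # Assemble the symmetric matrix A from the rank-one terms.
    A = [[sum(c * v[r] * v[s] for c, v in zip(cs, vs)) for s in range(n)]
         for r in range(n)]
    # Cholesky factorization A = L L'.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    # Forward solve L y = b.
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back solve L' x = y.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

One remark on the structure: writing V = [sqrt(c1)*v1 … sqrt(cm)*vm] gives A = V*V', and a QR factorization of V' yields the Cholesky factor directly without forming A. That is more numerically stable (it avoids squaring the condition number), though for m much larger than N it is not cheaper than the direct route above.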


posted by admin in Uncategorized and have Comments (5)

Help with a point of intersection

I need to find a point of intersection of the following equations:
x-2y= – 8
x^2 + 4y^2 = 32

First I need to write both in the form "y = …".

I know how to express the first equation, but not the second one.

The first one goes like:
x - 2y = -8
2y = x + 8
y = (1/2)x + 4

Help me with the second one …
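
One route that avoids the square root entirely is substitution: from the first equation x = 2y - 8, so (2y - 8)^2 + 4y^2 = 32, a quadratic in y. A small sketch of that route:

```python
import math

def intersect_line_ellipse():
    """Intersect x - 2y = -8 with x^2 + 4y^2 = 32 by substituting x = 2y - 8."""
    # (2y - 8)^2 + 4y^2 = 32  ->  8y^2 - 32y + 32 = 0  ->  y^2 - 4y + 4 = 0
    a, b, c = 1.0, -4.0, 4.0
    disc = b * b - 4 * a * c          # zero here: the line is tangent
    y = (-b + math.sqrt(disc)) / (2 * a)
    x = 2 * y - 8
    return x, y
```

The discriminant turns out to be zero, so the line is tangent to the ellipse and touches it at the single point (-4, 2). If you do want the explicit "y = …" form of the second equation, it is y = ±sqrt(32 - x^2)/2.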

posted by admin in Uncategorized and have No Comments

Discretization and the variational principle


I’m trying to get a proper FD discretization of the radial Schroedinger
equation [-1/2 * d^2/dr^2 + V(r)] * f(r) = E * f(r) with f(0)=0 on a
_non-uniform_ mesh. Using a 3-point approximation for the differential
operator works quite well, but I expect to get better results if I discretize
the underlying variational principle, i.e. the Lagrangian (as proposed in
S.E. Koonin’s "Computational Physics").

I already found some examples of this procedure in the literature, but only for
grids with equidistant collocation points. Do you know any papers or books
where the whole discretization is done explicitly for non-uniform grids?
Unfortunately my own efforts have not been very successful so far (the results
were worse than the direct discretization) …
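
In case a concrete baseline helps: discretizing the Lagrangian with linear finite elements gives, on any mesh, the generalized eigenproblem K f = E M f with tridiagonal K (kinetic energy) and M (consistent mass), assembled element by element. The sketch below is my own construction, not Koonin's, and takes V = 0 so the exact ground state E_1 = pi^2/2 on [0, 1] with f(0) = f(1) = 0 is available for checking; it finds the lowest eigenvalue by inverse power iteration with a tridiagonal solve.

```python
import math

def ground_state(nodes):
    """Lowest eigenvalue of -1/2 f'' = E f, f = 0 at both endpoints, via
    linear finite elements on an arbitrary (possibly non-uniform) mesh."""
    n = len(nodes) - 1                   # number of elements
    m = n - 1                            # interior unknowns
    Kd, Ko = [0.0] * m, [0.0] * (m - 1)  # stiffness: diagonal / off-diagonal
    Md, Mo = [0.0] * m, [0.0] * (m - 1)  # consistent mass matrix
    for e in range(n):                   # element e joins nodes e and e+1
        h = nodes[e + 1] - nodes[e]
        i, j = e - 1, e                  # interior indices of those nodes
        if i >= 0:
            Kd[i] += 1.0 / (2.0 * h); Md[i] += h / 3.0
        if j < m:
            Kd[j] += 1.0 / (2.0 * h); Md[j] += h / 3.0
        if i >= 0 and j < m:
            Ko[i] -= 1.0 / (2.0 * h); Mo[i] += h / 6.0

    def tri_solve(d, o, b):              # Thomas algorithm, symmetric tridiagonal
        dd, bb = d[:], b[:]
        for i in range(1, len(d)):
            w = o[i - 1] / dd[i - 1]
            dd[i] -= w * o[i - 1]
            bb[i] -= w * bb[i - 1]
        x = [0.0] * len(d)
        x[-1] = bb[-1] / dd[-1]
        for i in range(len(d) - 2, -1, -1):
            x[i] = (bb[i] - o[i] * x[i + 1]) / dd[i]
        return x

    def tri_mul(d, o, v):                # tridiagonal matrix-vector product
        r = [d[i] * v[i] for i in range(len(d))]
        for i in range(len(d) - 1):
            r[i] += o[i] * v[i + 1]
            r[i + 1] += o[i] * v[i]
        return r

    f = [1.0] * m                        # inverse power iteration on K f = E M f
    for _ in range(50):
        f = tri_solve(Kd, Ko, tri_mul(Md, Mo, f))
        s = math.sqrt(sum(c * c for c in f))
        f = [c / s for c in f]
    num = sum(a * b for a, b in zip(f, tri_mul(Kd, Ko, f)))
    den = sum(a * b for a, b in zip(f, tri_mul(Md, Mo, f)))
    return num / den                     # Rayleigh quotient
```

On a graded mesh like nodes = [(i/60)**1.5 for i in range(61)] this lands well within a percent of pi^2/2 ≈ 4.9348; a non-zero V(r) would just add V-weighted element integrals to the stiffness assembly.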

Greetings, Stefan

posted by admin in Uncategorized and have Comment (1)

approximation by means of simplest fractions

Let f(x) be a continuous function on [a,b], and let
\rho_n (x) = \sum\limits_{k=1}^n \frac{1}{x-t_k}
be the simplest fraction of best approximation for f(x),
i.e. E_n(f) = \max |f(x) - \rho_n (x)| on [a,b], where E_n is the smallest deviation for f(x)
and the t_k are some real numbers.
THEN there exist more than n points of Chebyshev alternation (i.e. >= n+1 points), i.e. there exist points
a <= x_1 < x_2 < … < x_{n+1} <= b : \rho_n (x_j) - f(x_j) = (-1)^{j+1}*E_n(f) for j = 1, …, n+1.

Help me to prove this theorem or offer ideas.


P.S.: Chebyshev proved that there exists a polynomial P_n(x) of best approximation for f(x) with >= n+2
points of Chebyshev alternation!

posted by admin in Uncategorized and have Comments (14)

Problem solving A*X = B

Hi all,

let me introduce myself and my problem, as it’s the first time I’m
posting here…

I’m working in machine vision applications dedicated to the inspection
of different 3D objects. The problem is that I reach a system of
equations of the form:

A*X = B

where X is a 12-element vector corresponding to the elements of a 3×4
projective transformation matrix, composed of R, a 3×3 rotation matrix,
and T, a 3×1 translation vector.

The system should have one solution, but when I solve it using SVD, the
solution I obtain is not coherent, since the submatrix corresponding to
the obtained rotation matrix is not orthogonal.

Should I use this solution to begin some iterative algorithm to find a
real solution?

Any other idea of how I should impose some condition on the system so as
to get an appropriate solution?

Thanks in advance! Best regards,


posted by admin in Uncategorized and have Comments (9)

Zagier's polynomials

How do you compute Zagier’s polynomials P(n, m) for integers 0 <= m <= n?


posted by admin in Uncategorized and have No Comments