Friday, 20 June 2008

Looking Back

Several methods have been proposed to calculate the LP coefficients; the fact that no clear winner has emerged shows that it's a tricky task. My own choice was casual. I remember being in the library (it was probably 1991), browsing the latest issue of a chemistry journal (I don't remember which), where I probably read the expression "Linear Prediction" for the first time in my life. What I remember with certainty are the names of the authors: Gesmar and Led. They were discussing the application of a novel algorithm ("Fast Linear Prediction"). Curious to see it in action, I wrote to the duo asking for the source code. They replied that the code was available for sale, and the price turned out to be higher than my monthly salary. What they were selling, in practice, was the FORTRAN source code of a program written for the VAX: a program with a single routine. I had no FORTRAN compiler, no VAX machine and no idea of a practical application of LP that could justify the expense. Since the algorithm had actually been invented by someone else, a mathematician called George Cybenko, I wrote to him instead. Unfortunately, at that time he was recovering from a broken leg (skiing). When he returned to work, after months, he sent a fax. From that moment on he began collaborating with me and, fax after fax, helped me translate his algorithm into my language (in that period I had a Pascal compiler). I spent a lot of time first understanding the algorithm, then writing it, finally optimizing it for speed on my computer. Enough reasons to remain affectionate and faithful to it for years, yet now I am almost convinced it represents a second-best choice.
Do you remember the synthetic spectrum I used to demonstrate the properties of zero-filling? It contains 10 exponentially decaying sinusoids, with identical line-widths, not-too-different intensities and well-separated frequencies. I used Cybenko's algorithm to calculate 10 prediction coefficients, then told the computer to prolong the FID with those coefficients. The synthetic FID is perfect: the part to the right of the vertical red line has been predicted correctly. So far, so good. If, however, I repeat the calculation with 11 coefficients instead of 10, it's a sudden disaster:

This time there is no need for the red line! What is the cause of the disaster? The lack of noise! When there is a minimal amount of noise, it neutralizes the excessive number of coefficients. When the number of coefficients grows further (e.g. > 40) the disaster is almost guaranteed and there is no excuse: I avoid using this algorithm with more than 24 coefficients. The alternative is to use a generic, and therefore slower, algorithm to solve the same system of equations. A popular choice, probably the most celebrated method of linear algebra, is singular value decomposition (SVD). With SVD I get the same perfect result whether I calculate 10, 11 or even 100 coefficients. There's more: I also get the ordered list of the so-called singular values. For example, if I ask the computer to compute 20 LP coefficients, it can also give me the following singular values (a sketch of the whole calculation follows the list):

01 227399.55548
02 212850.09660
03 208405.44529
04 171681.01548
05 154987.64513
06 120658.53137
07 104391.19112
08 99881.44088
09 83773.77349
10 64064.17610
11 0.00547
12 0.00540
13 0.00536
14 0.00535
15 0.00532
16 0.00527
17 0.00519
18 0.00512
19 0.00503
20 0.00500
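
This is not the program I actually use, but the SVD route is easy to sketch with modern tools. The following Python/NumPy fragment is only an illustration of the recipe described above, with invented frequencies, intensities and sizes: build the forward-prediction equations from a noiseless synthetic FID, solve them by least squares (NumPy's lstsq works through an SVD and reports the singular values), then prolong the FID with the coefficients.

    import numpy as np

    # Synthetic, noiseless FID: a handful of exponentially decaying complex
    # sinusoids (the post used 10; these frequencies/intensities are invented).
    n_points = 256
    t = np.arange(n_points)
    freqs = np.array([0.05, 0.11, 0.17, 0.23, 0.31])   # cycles per point
    amps  = np.array([1.0, 0.8, 1.2, 0.9, 1.1])
    decay = 0.01                                       # identical line-widths
    fid = (amps[:, None] * np.exp((2j * np.pi * freqs[:, None] - decay) * t)).sum(axis=0)

    m = 20        # number of LP coefficients
    known = 128   # points treated as "measured"; the rest will be predicted

    # Forward LP equations:  fid[n] = a[0]*fid[n-1] + ... + a[m-1]*fid[n-m]
    A = np.array([fid[n - 1::-1][:m] for n in range(m, known)])
    b = fid[m:known]

    # Least-squares solution; lstsq uses an SVD internally and also returns
    # the singular values of the system matrix (rcond is the relative cut-off).
    a, _, rank, sing_vals = np.linalg.lstsq(A, b, rcond=1e-8)
    print("effective rank:", rank)                     # 5 signals -> rank 5 here
    print("singular values:", np.round(sing_vals, 5))

    # Prolong the FID with the coefficients
    extended = list(fid[:known])
    for n in range(known, n_points):
        past = np.asarray(extended[-1:-m - 1:-1])      # last m points, newest first
        extended.append(np.dot(a, past))
    extended = np.asarray(extended)
    print("max prediction error:", np.max(np.abs(extended - fid)))

Even with more coefficients than signals, this least-squares/SVD solution reproduces the noiseless FID, which is exactly the behaviour described above.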

I have not explained what a singular value is because I am ignorant myself. Even an ignoramus, however, understands that SVD offers the welcome bonus of measuring the number of signals contained in an unknown FID. A useful option of SVD is that the computer, after detecting that the problem can be described by a smaller number of values, can ignore the rest. This option is ON by default. I supposed that this self-regulation was the cause of the stability of SVD, so I checked what happens when the option is disabled. Surprise: nothing changes! (and nothing remains for me to speculate about).
From this single, unrepresentative test, it's clear that SVD is far better, but I haven't the slightest idea why. (For completeness: SVD can cut down the number of singular values, but not the number of LP coefficients; this is something you need to do on your own).
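
If you prefer to control the truncation yourself, the explicit route looks more or less like this. The fragment below is only a sketch under the same assumptions as the previous one (the function name and the tolerance are my own inventions): compute the SVD, invert only the significant singular values, and note that the length of the coefficient vector never changes.

    import numpy as np

    def lp_coefficients_svd(A, b, n_keep=None, tol=1e-8):
        """Solve the LP equations A a = b with an explicit SVD, keeping only
        the n_keep largest singular values (or those above tol * s[0] when
        n_keep is None). Only the effective rank is reduced; the number of
        LP coefficients stays the same."""
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        if n_keep is None:
            n_keep = int(np.sum(s > tol * s[0]))     # automatic detection
        s_inv = np.zeros_like(s)
        s_inv[:n_keep] = 1.0 / s[:n_keep]            # invert the kept values only
        # Truncated pseudo-inverse:  a = V diag(s_inv) U^H b
        return Vh.conj().T @ (s_inv * (U.conj().T @ b))

    # Usage, with A and b built as in the previous fragment:
    #   a = lp_coefficients_svd(A, b)              # automatic truncation
    #   a = lp_coefficients_svd(A, b, n_keep=20)   # keep everything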
There are also plenty of methods to save the day when the coefficients don't work, and they don't depend on how you extracted them. When I introduced my "theory" of LP, two days ago, I explained that the LP coefficients of the isolated signals (related to frequency and line-width) are different from the LP coefficients of the full spectrum, but can be recalculated from the latter. It is enough... (!) to solve a single equation of the nth degree (where n is the number of coefficients). Assuming the task were that simple, once you have the roots of the equation you can verify the only condition we know: the ratio α = I₂ / I₁ must be < 1 in the case of forward LP and > 1 in the case of backward LP. (Cf. my old post for the meaning of the symbols.)
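
Numerically, the rooting step is just a call to a polynomial root finder. Here is a minimal sketch with the same conventions as the fragments above (if I read my own old notation correctly, the α of the condition is the magnitude of each root, i.e. the ratio between consecutive samples of the corresponding isolated signal):

    import numpy as np

    def lp_roots(a):
        """Roots of the characteristic polynomial
              z^m - a1*z^(m-1) - ... - am = 0
        built from the forward LP coefficients a = [a1, ..., am]."""
        poly = np.concatenate(([1.0], -np.asarray(a)))
        return np.roots(poly)

    # Usage, with the coefficient vector 'a' from the first fragment:
    #   alpha = np.abs(lp_roots(a))
    #   print("roots violating the forward condition:", np.sum(alpha > 1.0))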
Everybody agrees on this condition. No theory, unfortunately, says what we should do when it is not satisfied (and that happens incredibly often!). In the case of forward LP it is possible to substitute α with its reciprocal. The consequence is that the FID decays even faster than it would naturally. The result is a hybrid between LP and zero-filling, and it can do no harm. In the case of backward LP, instead, the same trick makes the FID grow (backward) more than it should, with invariably catastrophic results.
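
The reflection trick for forward LP can be sketched in the same style (an illustration, not the code of any particular program): every root that falls outside the unit circle is replaced by its conjugate reciprocal, which keeps the frequency but turns growth into decay, and the coefficients are rebuilt from the corrected roots.

    import numpy as np

    def reflect_bad_roots_forward(a):
        """Replace every root with |z| > 1 by 1/conj(z) (same frequency,
        reciprocal magnitude), then rebuild the forward LP coefficients
        from the corrected roots."""
        poly = np.concatenate(([1.0], -np.asarray(a)))
        roots = np.roots(poly)
        bad = np.abs(roots) > 1.0
        roots[bad] = 1.0 / np.conj(roots[bad])   # reflect inside the unit circle
        new_poly = np.poly(roots)                # monic polynomial from the roots
        return -new_poly[1:]                     # back to a = [a1, ..., am]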
That's why I prefer forward LP.
