

Has anyone run across "moving least squares" code in .m, .cc, .f, etc.
formats that would be adaptable for use with octave? Apparently, moving
least squares can be used to create approximating curves for
interpolation/approximation much in the same way that spline curves can be
used. In some situations, the moving least squares approach can reduce the
need to solve large matrices, which is normally associated with the curve
fit process.
A Google of "moving least squares" shows several papers, but no code that I
can find. Any ideas?
Joe



On 11/18/05, Joe Koski <[hidden email]> wrote:
> Has anyone run across "moving least squares" code in .m, .cc, .f, etc.
> formats that would be adaptable for use with octave? Apparently, moving
> least squares can be used to create approximating curves for
> interpolation/approximation much in the same way that spline curves can be
> used. In some situations, the moving least squares approach can reduce the
> need to solve large matrices, which is normally associated with the curve
> fit process.
>
> A Google of "moving least squares" shows several papers, but no code that I
> can find. Any ideas?
>
Would Kalman filter do the same / better?
> Joe
>
Dmitri.




On Nov 18, 2005, at 5:27 PM, Dmitri A. Sergatskov wrote:
> On 11/18/05, Joe Koski <[hidden email]> wrote:
>> Has anyone run across "moving least squares" code in .m, .cc, .f, etc.
>> formats that would be adaptable for use with octave? Apparently, moving
>> least squares can be used to create approximating curves for
>> interpolation/approximation much in the same way that spline curves can
>> be used. In some situations, the moving least squares approach can
>> reduce the need to solve large matrices, which is normally associated
>> with the curve fit process.
>>
>> A Google of "moving least squares" shows several papers, but no code
>> that I can find. Any ideas?
>>
>
> Would Kalman filter do the same / better?
Savitzky-Golay is a moving least-squares filter. You can use it for
curve smoothing if the data are equally spaced.
- Paul
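
(For anyone who has not seen the connection spelled out: the Savitzky-Golay weights are simply the least-squares solution of a local polynomial fit on an equally spaced window. The following is a minimal Octave sketch of that idea, not Paul's code; the window half-width h and the polynomial order p are illustrative choices.)

% Savitzky-Golay smoothing weights via a local least-squares fit (equally spaced data)
h = 5;  p = 2;                    % window is 2*h+1 points, local fit of order p
t = (-h:h)';                      % local, equally spaced abscissa
A = zeros(2*h+1, p+1);
for k = 0:p
  A(:, k+1) = t .^ k;             % polynomial basis on the window
end
C = A \ eye(2*h+1);               % least-squares coefficients for each unit impulse on the window
w = C(1, :);                      % constant-term row = weights that return the fitted value at t = 0

% apply to a noisy test signal (the first and last h samples are left untouched in this sketch)
x  = linspace(0, 4*pi, 200)';
y  = sin(x) + 0.1 * randn(size(x));
ys = y;
for i = h+1 : length(y)-h
  ys(i) = w * y(i-h:i+h);
end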



I have been using the attached function for data smoothing and
calculation of smoothed derivatives for non-equally spaced data. It
uses moving least-squares polynomial fits of the given data. If I
remember it right, the Savitzky-Golay algorithm does the same, only
that the procedure is nicely vectorized for equally spaced data.
My code is terrible:
- the function arguments x and y are ordered the wrong way round
- the calculation is slow because it is not vectorized (does
  anybody know a way to vectorize it?)
- the treatment of the vector edges can probably be improved a lot
But the function has been working for me for quite some time now. And
you can certainly use it to see if the algorithm does what you need.
regards
Thorsten
function dydx = deriv(x1, y1, nl, nr, m, ld, edge)
% deriv - calculates numerically smoothed derivatives
%
% calling syntax:
%
%   dydx = deriv(x, y, nl, nr, m, ld, edge);
%
% where x    is a vector of x-values
%       y    is a vector of y-values (y(i) = f(x(i)))
%       nl   is the number of leftward data points
%            used for smoothing
%       nr   is the number of rightward data points
%       m    is the order of the polynomials fitted
%            to the data
%       ld   is the order of derivative to be calculated
%            (default is 1). For ld=0, the function
%            itself is smoothed
%       edge is a flag that determines the way the
%            routine deals with the edges of the
%            vector y.
%            edge = 1 : the last full polynomial
%                       based on the values y(1:nl+nr+1)
%                       and y(length(y)-nl-nr:length(y))
%                       respectively is used to calculate
%                       all the values dydx(1:nl) and
%                       dydx(length(y)-nl-nr-1:length(y))
%                       respectively.
%            edge = 2 : the window used for fitting is
%                       linearly shrunk and shifted as
%                       the edges are approached
%                       (default is 2)
%       dydx is assigned the ld-th derivative of y(x)
%
if nargin < 7
  edge = 2;
  if nargin < 6
    ld = 1;
  end;
end;
if m < ld
  m = ld;
end;
faktor = gamma(ld + 1);
xmax = max(abs(x1));
ymax = max(abs(y1));
x = x1 / xmax;
y = y1 / ymax;
ly = length(x);
dydx = zeros(size(x));
for i = 1+nl : ly-nr
  xx = x(i-nl:i+nr) - x(i);
  yy = y(i-nl:i+nr);
  pp = polyfit(xx, yy, m);
  dydx(i) = faktor * pp(m+1-ld);
end;
if edge == 1
  xx = x(1:(nl + nr + 1)) - x(nl+1);
  yy = y(1:(nl + nr + 1));
  pp = polyfit(xx, yy, m);
  dpp = polydiff(pp, ld);   % polydiff: helper that differentiates the polynomial ld times (not a stock Octave function)
  dydx(1:nl) = polyval(dpp, xx(1:nl));
  xx = x((ly - (nl + nr)) : ly) - x(ly - nr);
  yy = y((ly - (nl + nr)) : ly);
  pp = polyfit(xx, yy, m);
  dpp = polydiff(pp, ld);
  dydx((ly - nr + 1) : ly) = polyval(dpp, xx(nl+2:nl+1+nr));
elseif edge == 2
  nn = nl + nr + 1;
  if nl < 2
    nnn = m * 2;
  else
    nnn = round(linspace(m * 2, nl + nr, nl));
  end
  for i = 1:nl
    xx = x(1:nnn(i)) - x(i);
    yy = y(1:nnn(i));
    pp = polyfit(xx, yy, m);
    dydx(i) = faktor * pp(m+1-ld);
  end;
  if nr < 2
    nnn = m * 2;
  else
    nnn = round(linspace(nl + nr, m * 2, nr));
  end
  il = ly - nnn + 1;
  for i = 1:nr
    xx = x(il(i):ly) - x(ly - nr + i);
    yy = y(il(i):ly);
    pp = polyfit(xx, yy, m);
    dydx(ly - nr + i) = faktor * pp(m+1-ld);
  end;
end;
if ld == 0
  dydx = dydx * ymax;
else
  dydx = dydx * ymax / xmax ^ ld;
end
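
A quick usage sketch (my own example call, not part of Thorsten's post), assuming x and y are column vectors of the same length; with the default edge treatment (edge = 2) the polydiff helper is never called:

x    = sort(10 * rand(200, 1));           % non-equally spaced abscissa
y    = sin(x) + 0.05 * randn(size(x));    % noisy samples
dydx = deriv(x, y, 5, 5, 2, 1);           % smoothed first derivative, 5 points each side, quadratic fits
ysm  = deriv(x, y, 5, 5, 2, 0);           % ld = 0 returns the smoothed function itself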




On Sat, Nov 19, 2005 at 02:15:58PM +0100, Thorsten Meyer wrote:
> My code is terrible:
>
> * the function arguments x and y are ordered the wrong way round
> * the calculation is slow because it is not vectorized (does anybody
> know a way to vectorize it?)
> * the treatment of the vector edges can probably be improved a lot
>
> But the function has been working for me for quite some time now. And
> you can certainly use it to see if the algorithm does what you need.
You can look at the loess/lowess or SiZer algorithms.
For short data sets I found smooth from the WAFO toolbox (GPL) useful.
The thread author asked about MLS without saying what the area of interest is :(
From googling I found that the exact term MLS is mostly used in surface approximation
(from a cloud of points). I also found one reference saying that the Kalman filter is
part of MLS (in a GPS context) ...
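
To make the loess/lowess pointer concrete, here is a bare-bones lowess-style smoother in Octave (a tricube-weighted local linear fit at every point). It is my own sketch, not the WAFO smooth function; x and y are assumed to be column vectors of equal length.

function ys = lowess_sketch(x, y, f)
  % f is the span: the fraction of the data used in each local fit
  n  = numel(x);
  k  = max(3, ceil(f * n));                        % number of neighbours per local fit
  ys = zeros(size(y));
  for i = 1:n
    d = abs(x - x(i));
    [ds, idx] = sort(d);
    j = idx(1:k);                                  % the k nearest neighbours of x(i)
    w = (1 - (d(j) / max(ds(k), eps)) .^ 3) .^ 3;  % tricube weights
    A = [ones(k, 1), x(j) - x(i)];                 % local linear model centred at x(i)
    W = diag(w);
    b = (A' * W * A) \ (A' * W * y(j));            % weighted least squares
    ys(i) = b(1);                                  % fitted value at x(i)
  end
endfunction
% e.g.  ys = lowess_sketch(x, y, 0.3);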



Re: Moving Least Squares, Anyone?
I tried Thorsten’s code on another problem, and it works as advertised. Yes, the edges are an issue, but that’s the case with most filtering approaches. Also, the code does cause noticeable pauses during execution. I do like the trick of using the 0th derivative to smooth the function itself.
OK, I got many suggestions (SavitzkyGolay, Kalman filters, etc.), but I should clarify what I’m after. There is a recent paper by Blakely (http://www.cscamm.umd.edu/publications/, Christopher D. Blakely "A Fast Empirical Mode Decomposition Technique for Nonstationary Nonlinear Time Series") that describes a new approach to Empirical Mode Decomposition (EMD). I’m trying to implement the contents of Blakely’s paper in octave.
EMD is a means of decomposing a signal after the fact into a series of “intrinsic mode functions” that can be summed together to recover the original signal. The method was described in Norden Huang’s paper (http://www.inrialpes.fr/is2/people/pgoncalv/pub/emdeurasip03.pdf). The original approach uses cubic splines to connect the signal envelopes. The upper envelope connects all the maxima of the signal, and the lower envelope connects all the minima. A signal that is the mean of the upper and lower envelopes is repeatedly calculated and subtracted from the data until there is no change. I have tried the approach on several experimental data sets, and, once the rough edges are knocked off, I think it will be a good tool to add to the signal analysis arsenal. Among other things, it allows time-resolved frequency spectra to be calculated in a manner better than I’ve seen with wavelet transforms. Huang’s paper lists many other advantages.
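
To make the sifting step concrete, here is a rough Octave sketch of one envelope-mean pass using interp1's cubic splines. The extrema handling and end conditions are deliberately simplified, and this is my own illustration, not code from Huang's or Blakely's papers; x and y are column vectors with x strictly increasing.

function h = sift_once(x, y)
  % one EMD sifting pass: subtract the mean of the spline envelopes from the signal
  imax = find(y(2:end-1) > y(1:end-2) & y(2:end-1) > y(3:end)) + 1;   % interior maxima
  imin = find(y(2:end-1) < y(1:end-2) & y(2:end-1) < y(3:end)) + 1;   % interior minima
  imax = [1; imax; numel(y)];              % crude end treatment: include the end points
  imin = [1; imin; numel(y)];
  upper = interp1(x(imax), y(imax), x, "spline");   % cubic-spline upper envelope
  lower = interp1(x(imin), y(imin), x, "spline");   % cubic-spline lower envelope
  h = y - (upper + lower) / 2;             % remove the local mean
endfunction

Real EMD repeats this pass until a stopping criterion is met, subtracts the resulting intrinsic mode function from the signal, and starts again on the residue.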
Blakely has an approach that uses “moving least-squares,” as developed by Fasshauer, to replace the cubic splines. Blakely says that moving least-squares converts the least-squares curve fit into a Lagrangian multiplier problem, which should be near and dear to the hearts of physicists everywhere. This may be a case of applied mathematicians plowing the same field that the signal analysis people plowed twenty or more years ago, but with different results, and the approach sounds interesting. I also like the looks of the envelopes calculated with moving least-squares. They seem to have fewer over- and undershoots of the extrema than cubic splines.
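
For what it is worth, the plain weighted-local-fit form of 1-D moving least squares (which, as I understand it, produces the same approximant as the Lagrange-multiplier formulation Fasshauer uses) is only a few lines of Octave. The function name, the Gaussian weight and its width sigma are my own illustrative choices, not anything taken from Blakely's or Fasshauer's papers.

function s = mls1d(xd, yd, xe, m, sigma)
  % evaluate a degree-m moving least-squares approximant to (xd, yd) at the points xe
  s = zeros(size(xe));
  for k = 1:numel(xe)
    t = xd(:) - xe(k);                      % shift so the constant term is the value at xe(k)
    w = exp(-(t / sigma) .^ 2);             % Gaussian weight: nearby data dominate the fit
    A = zeros(numel(t), m + 1);
    for j = 0:m
      A(:, j+1) = t .^ j;                   % local polynomial basis
    end
    W = diag(w);
    c = (A' * W * A) \ (A' * W * yd(:));    % weighted least-squares normal equations
    s(k) = c(1);                            % fitted value at xe(k)
  end
endfunction
% e.g.  s = mls1d(x, y, linspace(min(x), max(x), 300)', 2, 0.5);

Each evaluation point solves only a small (m+1)-by-(m+1) system, which is where the "no large matrices" advantage comes from.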
My question is specific. Has anyone seen an implementation of Fasshauer’s moving least-squares in a language (Fortran, C, etc.) suitable for use with octave? I could then modify the octave EMD code to replace the cubic spline routines and give the approach a try. I contacted Blakely, but he says that he did all his proof-of-concept coding in Java. He’s also now working on his PhD thesis, and this was only a summer diversion.
Joe



Re: Moving Least Squares, Anyone?
OK, I just tried the references and there are problems. Sorry. The Huang paper has disappeared from the internet. I have a .pdf if anyone needs it. The next best reference on EMD is a .pdf of the paper by Rilling. You can get it by googling "On Empirical Mode Decomposition and its Algorithms".
Joe



Re: Moving Least Squares, Anyone?
Is this the original (96 page!) paper you were referring to?
http://www.keck.ucsf.edu/%7eschenk/Huang_etal98.pdf
There seems to be some other related material at
http://techtransfer.gsfc.nasa.gov/HHT/
I might also be interested in any work someone may have already started here... I'll have to do some reading.



On Mon, Nov 21, 2005 at 01:22:38PM -0700, Joe Koski wrote:
> My question is specific. Has anyone seen an implementation of Fasshauer’s
> moving leastsquares in a language (Fortran, C, etc.) suitable for use with
> octave? I could then modify the octave EMD code to replace the cubic spline
> routines and give the approach a try. I contacted Blakely, but he says that he
> did all his proof-of-concept coding in Java. He’s also now working on his PhD
> thesis, and this was only a summer diversion.
If you can get hold of his Java code, it might be easy to convert it
to dynamically loadable C++ functions -- depending on whether he used
external libraries, and on how complicated his classes are.
Sorry, I hope someone else has a more useful solution.
Stéfan



Re: Moving Least Squares, Anyone?
Ben,
Yes, that’s the original paper in the Proc. of the Royal Soc.
Yes, Huang is at NASA.
Joe
on 11/21/05 2:14 PM, Hall, Benjamin at [hidden email] wrote:
Is this the original (96 page!) paper you were referring to?
http://www.keck.ucsf.edu/%7eschenk/Huang_etal98.pdf
There seems to be some other related material at
http://techtransfer.gsfc.nasa.gov/HHT/
I might also be interested in any work someone may have already started here...I'll have to do some reading


