thanks for your suggestion, looping wasn't that obvious to me. I now need about the same time as gnuplot: roughly 3 minutes for 500,000 entries.

> Hi,

>

> Did you already try the obvious way of reducing memory consumption--namely processing parts of 'x' in a loop? For instance such that each iteration processes 50 or so x-values?

>

> for k = 1:400
>   i = (k-1)*50 + (1:50);
>   outdata(i) = sum(gauss(x(i), inputdata(:,1), inputdata(:,2), 0.8), 1);
> end

>

> You would obviously need to make the magic constants 50 and 400 something that's more adaptable, but this would be my first approach to reducing the overall memory requirements.
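In that spirit, a self-contained sketch of the chunked evaluation (the inputdata here is a toy three-line spectrum, the chunk size of 50 is an arbitrary tuning knob, and the loop bound handles a final partial chunk so the magic constant 400 disappears):

```octave
% Sketch: evaluate the Gaussian sum in chunks of x to limit peak memory.
% inputdata is a toy line list; chunk is an arbitrary tuning constant.
gauss = @(x, x0, Int, sigma) Int ./ (sigma .* sqrt (2*pi)) .* exp (-0.5 * ((x - x0) ./ sigma).^2);
inputdata = [1000, 1.0; 2000, 2.0; 3000, 0.5];  % [position, intensity] per line
x = linspace (0, 4000, 20000);
chunk = 50;                                     % tune to available memory
outdata = zeros (1, numel (x));
for k = 1:chunk:numel (x)
  i = k : min (k + chunk - 1, numel (x));       % final chunk may be shorter
  outdata(i) = sum (gauss (x(i), inputdata(:,1), inputdata(:,2), 0.8), 1);
end
```

Each iteration only broadcasts an M-by-chunk matrix (M = number of lines), so peak memory scales with the chunk size rather than with numel(x).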

>

>

> Regards,

>

> Bård Skaflestad

> SINTEF Digital, Mathematics & Cybernetics

> Computational Geosciences group

>

> -----Original Message-----

> From: Help-octave <help-octave-bounces+bard.skaflestad=[hidden email]> On Behalf Of Maxim Gawrilow
> Sent: Wednesday, August 28, 2019 2:58 PM
> To: [hidden email]
> Subject: Perform Gauss-broadening on line spectrum

>

> Dear Octave users,

>

> I have created a bunch of spectral lines and would like to simulate the Gaussian broadened spectrum of these. The spectral lines are a Nx2 matrix where the first column contains the position and the second the intensity of each line. What I want is to place a Gaussian at each position, scale it with the appropriate intensity and calculate the sum.

> In theory, this code would do the job:

>

> gauss = @(x,x0,Int,sigma) ...
>         Int./(sigma.*sqrt(2*pi)).*exp(-0.5*((x-x0)./sigma).^2);
> x = linspace (0, 4000, 20000);
> outdata = sum(gauss(x,inputdata(:,1),inputdata(:,2),0.8),1);
> exportdata = [x', outdata'];

>

> However, inputdata contains quite a few lines: at least half a million, sometimes even several dozen million entries. Execution of said code results in "out of memory or dimension too large for Octave's index type". Therefore I'd need a different approach.

> I am able to get the correct result in gnuplot with

>

> plot "inputdata" smooth kdensity bandwidth 0.8

>

> but this is extremely slow. Already for half a million entries it takes 5 minutes for a single plot, and it will obviously take hours for cases with dozens of millions of entries.

>

> Does anyone have an idea for speeding up this process? I guess convolution with a Gaussian would be a correct approach as well, but I didn't find any function that suits my case (conv only takes equidistant vectors, and I'm a bit at a loss at understanding kernel_density from econometrics).
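Since the x grid from linspace is equidistant, conv can in fact be made to work: snap each line to its nearest grid point, accumulate the intensities into a binned vector, and convolve once with a sampled Gaussian kernel. A self-contained sketch (toy inputdata; the 5-sigma kernel truncation is an arbitrary cutoff, and all line positions are assumed to lie inside [0, 4000]):

```octave
% Sketch: bin line intensities onto the equidistant grid, then convolve
% once with a Gaussian kernel. inputdata is a toy line list.
inputdata = [1000, 1.0; 2000, 2.0; 3000, 0.5];  % [position, intensity] per line
x = linspace (0, 4000, 20000);
dx = x(2) - x(1);
sigma = 0.8;
idx = round (inputdata(:,1) / dx) + 1;          % nearest grid bin per line
binned = accumarray (idx, inputdata(:,2), [numel(x), 1]);
t = (-ceil (5*sigma/dx) : ceil (5*sigma/dx)) * dx;   % kernel support, +/- 5 sigma
kern = exp (-0.5 * (t / sigma).^2) / (sigma * sqrt (2*pi));
outdata = conv (binned, kern(:), "same");
```

The trade-off is a positional error of up to dx/2 from the snapping, but the cost becomes O(N*K) in the grid size N and kernel width K, essentially independent of how many million lines inputdata holds once they are binned.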

>

> With kind regards, Maxim

>

> --

> Maxim Gawrilow

> Georg-August-Universität Göttingen

> Institut für Physikalische Chemie

> AG Suhm

>

> Tammannstr. 6

> 37077 Göttingen

>

>

>
