> From: Evan Thomas <[hidden email]>
> Organization: Depts of Anatomy & Cell Biology
> and of Physiology, University of Melbourne
> To: [hidden email]
> Subject: Slow performance on Linux
> Sender: [hidden email]
>
> I have recently installed octave-1.1.1 on a PC-pentium/133 and a Sun
> SPARC station 2, both with 32Mb memory. Published benchmark figures and
> my own experience would suggest that the PC should be significantly
> faster than the Sun. However the opposite is true. In fact, for the
> calculations I've been doing, the Linux machine is unusable (almost as
> if it wasn't using the floating point unit(?)).
> Is this what I should expect or have I configured/done something wrong?
> The system is Linux kernel 1.3.72 on top of a Debian 0.93R6
> distribution. I've tried both the binary and source distributions of
> octave.
> Thanks, Evan.
> Evan Thomas
> Department of Anatomy & Cell Biology
> University of Melbourne
> Parkville, 3052
> ph: 9344-5849 fax: 9344-5818
You may be interested in some lsode running-time results on my system.
I am running Linux version 1.2.1 (gcc version 2.5.8)
#3 Sun Mar 19 08:17:38 CST 1995
on a 90 MHz Pentium (Zenith) with 16M RAM.
The octave version is 1.1.1 .
I have loaded a binary version of octave and immediately started using
it, not having the time or inclination to optimize octave.
For example, I have a file, .octaverc, in my home directory which
contains, among other commands, one that sets the variable LOADPATH.
Yet, when octave is running on one virtual screen, if I run the command
cat /proc/<octave_process_id>/environment |tr "\000" "\n"
on another, the variable name, LOADPATH, does not appear.
I am using lsode in an octave script to integrate Duffing's equation
in a computationally intensive way.
The first line of my script file contains:
#! /usr/local/bin/octave -qf
and I run the script as a background process on a Linux virtual screen
(not from an Xwindows xterm, as I do when I want graphical output.)
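(Concretely, the invocation is something like the following;
duffing_scan is just a stand-in name for my script file, which begins
with the #! line quoted above:)

```shell
# Hypothetical invocation: make the octave script executable and run
# it in the background, capturing its output in a log file.
chmod +x duffing_scan
nohup ./duffing_scan > duffing_scan.log 2>&1 &
```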
The integration runs over 10 neighboring parameter points.
Each integration cycle runs over one harmonic forcing cycle.
The script asks lsode to return results over each cycle at 4096
points. The lsode_options 'rel' and 'abs' are set to 1e-12.
(In the example I quote below, I think the requirement of 4096 points
per cycle controls, not the accuracy requirement; i.e., that lsode
doesn't use more than 4096 points per cycle to attain the required
accuracy.)
In the example, each parameter point includes 180 forcing cycles.
Thus, the script calls lsode 180 times at each parameter point.
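(In outline, the driver loop looks something like this -- a sketch,
not my actual script; x0, T, and the parameter handling are
placeholders, and the tolerance option names are written out the way
the octave documentation spells them:)

```octave
1;
x0 = [1; 0];                       # stand-in initial state
T  = 2*pi;                         # stand-in forcing period
lsode_options ("relative tolerance", 1e-12);
lsode_options ("absolute tolerance", 1e-12);
system ("date");
for p = 1:10                       # 10 neighboring parameter points
  x = x0;
  for k = 1:180                    # 180 forcing cycles per point
    t = linspace ((k - 1)*T, k*T, 4096);  # 4096 output points per cycle
    X = lsode ("duffing", x, t);
    x = X(rows (X), :).';          # restart from the end of the cycle
  endfor
  # ... write ~80 lines of results for this parameter point ...
  system ("date");
endfor
```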
When it finishes at each point, it writes about 80 lines to a result
file. At the beginning and end of the script,
and after finishing each set of 180 cycles, the system command
'date' is issued. Thus, there is a record of how long the calculation
takes. I often run the script files overnight. However, the two
examples below, which used almost identical scripts, were run while I
was working at my desk, hardly using the computer at all.
(I know this was the case for the second run. For the first, I guess
it was the case because the during-lunch intervals were about the
same as the after-lunch intervals.)
In the first run, the sorted, five-lowest elapsed times
per 180 cycles were
14:18, 14:22, 14:24, 14:28, 14:29, ... (min:sec).
For the second run, the first command of the script was
ignore_function_time_stamp = 'all'.  Otherwise, the two scripts were
identical. The corresponding sorted, five-lowest elapsed times were
14:56, 14:58, 15:02, 15:07, 15:08, ... (min:sec).
I don't think the differences are practically significant.
The script calls a file, duffing.m, to evaluate derivatives,
say 4 times per interval times 4096 intervals / cycle times 180 cycles,
or about 3e6 times in each 15 minutes, or about 3,000 times per second.
Could this calling be what takes the time?
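(For reference, a hypothetical duffing.m of roughly the right shape --
not my actual file, and the parameter values are placeholders. lsode
calls it as xdot = f (x, t), and each such call is a full interpreted
function invocation, which is why the call count seems relevant:)

```octave
function xdot = duffing (x, t)
  # Duffing's equation, x'' + d*x' + a*x + b*x^3 = g*cos(w*t),
  # written as a first-order system for lsode.
  d = 0.2;  a = -1;  b = 1;  g = 0.3;  w = 1;   # placeholder values
  xdot = zeros (2, 1);
  xdot(1) = x(2);
  xdot(2) = g*cos (w*t) - d*x(2) - a*x(1) - b*x(1)^3;
endfunction
```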
Stevens Institute of Technology
Hoboken, NJ 07030