For this task I've been using rpm vs throttle position, via ELM and logging in PCM Scan.
I can write a script which basically watches the values and fills each entry block with an average value.
So I have, say, 25rpm blocks from 2250rpm to 2750rpm; I drive at 2500rpm in 5th, log the throttle input value, and put it in the right block. The table is then averaged over time, and I take the central blocks around 2500rpm (say 8 each side of 2500rpm) and plot a little graph of average throttle vs rpm.
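For anyone who wants to script something similar, here's a rough sketch of the binning/averaging step. The log parsing itself is tool-specific, so this just assumes you've already got the samples as (rpm, throttle) pairs; the function name and values are made up:

```python
# Sketch: average throttle per 25rpm block between 2250 and 2750rpm.
# Assumes `samples` is a list of (rpm, throttle) pairs already parsed
# from the logger output - the parsing is specific to your tool.

def bin_throttle(samples, lo=2250, hi=2750, width=25):
    n_bins = (hi - lo) // width
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for rpm, throttle in samples:
        if lo <= rpm < hi:
            i = int((rpm - lo) // width)  # which 25rpm block this sample lands in
            sums[i] += throttle
            counts[i] += 1
    # Average each block; None where no samples landed.
    return [s / c if c else None for s, c in zip(sums, counts)]

# Made-up samples around 2500rpm:
blocks = bin_throttle([(2510, 14.2), (2512, 14.4), (2620, 15.1)])
```

Each run then gives you one row of per-block averages to compare against the other runs.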
I then do that three times over a route that I can drive smoothly (say a quiet 5 mile run of motorway or dual carriageway) and go the same direction along it each time.
I then make my change, and then run again.
Since per-run correlations are well within 1-2%, any significant change should stand out if you average the three runs...
If the average of the after runs combines into one line that sits within the range of the original three runs, then you can probably argue it's statistical error you are looking at.
If the average after is better than the average before, then maybe it is better. But I'd want to do 5 before/after runs to try to remove as much error as possible and resolve the actual result.
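That "does the after line sit inside the before spread" check is easy to script too. A minimal sketch, with made-up numbers, where each run is a list of per-block average throttle values over the same route:

```python
# Sketch: what fraction of rpm blocks have the averaged 'after' line
# sitting inside the min/max spread of the 'before' runs?
# Each run is a list of per-block average throttle values
# (same blocks, same route). All numbers below are made up.

def after_within_before_spread(before_runs, after_runs):
    n = len(before_runs[0])
    after_avg = [sum(r[i] for r in after_runs) / len(after_runs)
                 for i in range(n)]
    inside = 0
    for i in range(n):
        lo = min(r[i] for r in before_runs)
        hi = max(r[i] for r in before_runs)
        if lo <= after_avg[i] <= hi:
            inside += 1
    return inside / n  # high fraction -> change is probably just noise

frac = after_within_before_spread(
    [[14.0, 14.2], [14.4, 14.6], [14.2, 14.4]],  # three 'before' runs
    [[14.1, 14.3], [14.3, 14.5]],                # 'after' runs
)
```

If most blocks come back inside the before range, you haven't shown anything beyond run-to-run scatter.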
Since throttle and rpm are logged at a high resolution (1000 intervals for throttle iirc, and 500 intervals between 2250-2750rpm say), then you can get pretty good quality data.
The downside is that the more you test, the more you may just see natural variance: wind changing, air temp changing, tyres hotter/cooler on the road, the fuel tank going down meaning less tyre drag, etc etc...
That is why, if you can't see an average improvement bigger than the error range evident on any given 'before' run, chances are you won't get results that stand out within the error your measurement technique offers.
Ie, your improvement might just be a few runs that were preferential.
Doing 6 runs before/after might improve things, but this is where you get into statistical analysis and clever averaging of the before/after runs; otherwise your stats method might just introduce more error and make spotting improvements even harder.
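One simple way to do that statistical comparison, if you're comfortable with it, is a Welch's t statistic on the per-run mean throttle values (one number per run). This is my suggestion rather than anything from the method above, and the run values are made up:

```python
# Sketch: Welch's t statistic on per-run mean throttle values.
# Six made-up 'before' runs vs six made-up 'after' runs.
# A large |t| suggests the before/after difference is bigger
# than the run-to-run scatter; near zero suggests noise.
from math import sqrt
from statistics import mean, variance

def welch_t(before, after):
    m1, m2 = mean(before), mean(after)
    v1, v2 = variance(before), variance(after)  # sample variances
    return (m1 - m2) / sqrt(v1 / len(before) + v2 / len(after))

t = welch_t([14.3, 14.5, 14.4, 14.6, 14.4, 14.5],
            [14.1, 14.2, 14.0, 14.2, 14.1, 14.3])
```

With only 5-6 runs a side you'd still want a healthy |t| (well above 2) before claiming a real change.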
Needless to say, there isn't much you can get from these cars bar the obvious stuff: drive slower, drive smoother, better tyres, clean aero surfaces, good oils etc... even a remap struggles to boost steady state economy in all my tests... the only improvement a remap seems to offer is 'driving style' improvements in the transient phases.
But I'm testing lots of stuff actively.
I might do a test on this orange plug next time I do some testing if my test subject is happy to. Probably do some in January!
Dave
(28-12-2012, 02:18 PM)Poodle Wrote: I was going to test at 60 over 20 miles, as that is how I've done my mpg testing before, so I'll have some easily comparable data. I wanted to ask you about recommended data logging progs actually; I've got PP2K, Galetto and KWP leads, so anything that works with those would be great.
Is that just doing brim to brim method?
All these techniques are open to error really. Brim to brim is OK as a rough guide but can still give decent-sized errors for all sorts of reasons.
I've taken the usual 'claimed' economy boost remaps and not seen anything in steady state driving. My error is ~3%, and if there is an improvement it's in the 0-3% range hehe...
However, customers always say their mpg is better, and so I guess that is noticed in transient driving benefits, ie, better gear choice, better use of revs/torque combo with more power on tap etc...
But that is REALLY tough to measure hehe... all you can really do is do brim to brim and drive as you feel fit.
In my car I saw no benefit on brim to brim mpg but then other people do on essentially the same remap files.
Dave