Single-trip fuel economy tests are notoriously difficult to do properly, and they require a level of precision that’s often beyond the means of most fleets. - Photo: Jim Park


Fuel economy testing is notoriously difficult to do properly. Naturally, if you plan to invest some time and energy evaluating a fuel-saving device, you hope to get results you can trust. But can you trust the results you get?

If you wanted to compare a classic long-nose tractor to one of today’s more streamlined models, the results of an A to B comparison would be obvious. But if you try evaluating a product or technology that promises small gains, say 1-2%, chances are there would be more noise in your test results than the gains the product could provide. In other words, if the gains were there, you probably wouldn’t notice them in a small-scale test. If you couldn’t prove the effectiveness of the product, would you bother buying it? If you didn’t buy it, how much money might you be leaving on the table?

Consider this astonishing example. Mesilla Valley Transportation saved $290,000 in the first year using EcoFlaps mud flaps instead of the traditional solid rubber mud flaps on its fleet of 1,500 tractors and 6,000 trailers. That figure doesn’t include the up-front cost of the product, but the product will go on saving the company nearly $300,000 each year for as long as they are in service.


Fuel economy tests always require a test truck and a baseline truck so accurate comparisons can be made. - Photo: Jim Park


MVT’s in-house fuel-economy testing company, Mesilla Valley Transportation Solutions, tested EcoFlaps in a controlled environment and found they reduced fuel consumption by 0.89 gallons per 1,000 miles. In real-world conditions, the result was 0.80 gallons per 1,000 miles. Expressed as a percentage, the savings were about 0.77%, based on MVT’s 9-mpg fleet-average fuel economy.
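For readers who want to check the conversion, a savings expressed in gallons per 1,000 miles translates to a percentage once you know the fleet-average fuel economy. A minimal sketch (the small gap between these figures and the stated ~0.77% presumably reflects rounding in the published numbers):

```python
# Back-of-envelope: convert a savings of X gallons per 1,000 miles
# into a percentage, given a fleet-average fuel economy in mpg.

def savings_percent(gal_saved_per_1000_mi: float, fleet_mpg: float) -> float:
    """Percentage fuel savings relative to baseline consumption."""
    baseline_gal_per_1000_mi = 1000.0 / fleet_mpg  # gallons burned per 1,000 miles
    return 100.0 * gal_saved_per_1000_mi / baseline_gal_per_1000_mi

# MVT's figures at a 9-mpg fleet average
controlled = savings_percent(0.89, 9.0)   # ~0.80%
real_world = savings_percent(0.80, 9.0)   # ~0.72%
print(f"controlled: {controlled:.2f}%  real-world: {real_world:.2f}%")
```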

Even with diligent monitoring, almost no do-it-yourself in-service fleet fuel economy test would reveal a number that small.

“The fleets with the best in-service testing that we’ve seen have not been able to measure less than 1%,” says Daryl Bear, chief operating officer and lead engineer for Mesilla Valley Transportation Solutions. “But once MVT measured EcoFlaps in a proper control test, and had reliable numbers, they made the decision in a day to install them fleet-wide. The fuel savings of just 1% will save more in a year than we’ll spend on testing in five years.”

This poses a real dilemma for fleets. They want to make a wise investment in a fuel-saving technology, but how can they be sure of what they are buying without testing? In-service testing is one method, but it can be notoriously imprecise.

“One of the issues with in-service tests is the number of variables that cannot be controlled the way they can be on a test track,” explains Jan Michaelsen, FPInnovations’ PIT Group leader. “We try to control every variable, removing as much as we can from driver technique, environmental variations, etc., which can all impact fuel consumption.”

PIT Group testing activities under its ISO/IEC 17025 accreditation include fuel consumption testing according to SAE J1321 and TMC Type II (RP 1102A), SAE J1526 and TMC Type III (RP 1103A), and EPA SmartWay test methods.
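The core idea behind SAE J1321-style (TMC Type II) comparisons is a test-to-control fuel ratio measured before and after the device is fitted. The sketch below shows only that ratio math, with hypothetical fuel numbers; the actual procedure adds warm-up protocols, multiple runs, and run-validity criteria:

```python
# Simplified sketch of the test/control (T/C) ratio math behind
# SAE J1321-style Type II comparisons. Fuel figures are hypothetical.

def fuel_saved_percent(test_base, ctrl_base, test_mod, ctrl_mod):
    """Percent fuel saved, from fuel used by the test and control trucks
    in the baseline segment and the modified (device-fitted) segment."""
    tc_base = test_base / ctrl_base  # T/C ratio before the device is fitted
    tc_mod = test_mod / ctrl_mod     # T/C ratio with the device fitted
    return 100.0 * (tc_base - tc_mod) / tc_base

# Hypothetical: both trucks burn 50 gal in the baseline segment;
# with the device, the test truck burns 49 gal vs the control's 50.
print(round(fuel_saved_percent(50, 50, 49, 50), 2))  # ~2.0
```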

In-Service Testing

Two factors can help improve the results a fleet might get with an in-service test: time and sample size. The longer you run the test, the more variables like weather, traffic, and driver influence will even out. Michaelsen says a minimum of two months is needed, preferably when similar weather conditions exist.

“The best time, depending on the climate, is late spring to early summer, or late summer to early fall, when the weather is fairly stable,” he says.

Cold air, for example, is denser than hot air, so any aerodynamic performance improvement will appear greater if you test in 40- to 60-degree temperatures than in 80- to 90-degree temperatures, when the air is less dense. Spring and fall testing is more likely to fall between those extremes.
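The density effect is easy to quantify, since aerodynamic drag scales directly with air density and density follows the ideal-gas law. A rough sketch at standard sea-level pressure:

```python
# Why temperature matters for aero testing: drag is proportional to air
# density, and dry-air density follows the ideal-gas law rho = P / (R * T).

R_AIR = 287.05    # specific gas constant for dry air, J/(kg*K)
P_SEA = 101325.0  # standard sea-level pressure, Pa

def air_density(temp_f: float, pressure_pa: float = P_SEA) -> float:
    """Dry-air density in kg/m^3 at a given Fahrenheit temperature."""
    temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15
    return pressure_pa / (R_AIR * temp_k)

cool, hot = air_density(50), air_density(85)
print(f"{cool:.3f} vs {hot:.3f} kg/m^3 -> {100 * (cool / hot - 1):.1f}% denser")
```

At 50°F the air is roughly 7% denser than at 85°F, so a device's drag savings measured on a cool day will look noticeably bigger than the same device measured in summer heat.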


Testing aerodynamic devices for fuel economy gains is a game of millimeters in a world of miles. The gains can be almost impossible to spot among all the variability and data noise. - Photo: Jim Park


“It really does make a difference,” Michaelsen adds. “The longer the test runs, say six months to a year, the more reliable your results will be.”

The size of the test fleet matters, too. For the same averaging reasons, a larger sample size will shake out the variables more effectively.  

“If you’re testing with one truck, a difference of less than 5% is almost impossible to see,” warns Michaelsen. “Even with a large population you will have a hard time getting anything reliable under 2% or 3%. There’s just that much variability.”

In addition to the test trucks, you will also need a group of baseline trucks — a portion of the fleet that doesn’t get the new technology, so you have a before and after comparison. The baseline group also helps track the environmental influences, assuming that if the test truck and the baseline truck experience similar weather, the impact on each truck should be similar.

“You should really have a baseline portion of the fleet that remains unchanged from when you install a new technology, you know, before adoption and after adoption, so that you have a portion of your fleet that you can compare to what hasn’t been changed,” he advises.

Many fleets start out with the best intentions, but the heightened level of diligence with the test fleet is difficult to maintain. For example, basic maintenance such as tire pressure checks must be performed daily. A tire change can disqualify a truck from the test fleet. And a fairly high level of cooperation is required between operations and dispatch to ensure the trucks and drivers are kept on the same loads and routes for the duration of the test. All that is a lot to ask on top of the usual operational pressures.

On top of all that, there’s a serious time lag between the beginning of the test and the fleet-wide adoption of a successfully tested technology. If it doesn’t test successfully, you start a new test with a different fuel-saving product.

“If you’re unable to measure a device with an in-service test, you have to find another way to do it, or you just leave those savings on the table,” says MVTS’ Bear, who has 19 years of engineering experience in motorsports testing and advanced R&D. “For example, we recently evaluated some SmartWay-approved fuel-efficient tires and found a difference of 8 gallons of fuel over 1,000 miles between the brands tested. That’s a $2,000-a-year savings, but even a sophisticated fleet would have trouble seeing that with an in-service test.”
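Annualizing Bear's tire example shows where a figure like $2,000 can come from. The annual mileage and fuel price below are illustrative assumptions, not numbers from the article:

```python
# Rough annualization of the tire example: 8 gallons saved per 1,000 miles.
# ANNUAL_MILES and FUEL_PRICE are illustrative assumptions only.

GAL_SAVED_PER_1000_MI = 8.0
ANNUAL_MILES = 100_000   # assumed miles per truck per year
FUEL_PRICE = 2.50        # assumed diesel price, $/gal

annual_savings = GAL_SAVED_PER_1000_MI * (ANNUAL_MILES / 1000) * FUEL_PRICE
print(f"${annual_savings:,.0f} per truck per year")  # $2,000
```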

Call the Professionals

Both MVTS and PIT Group do for-hire testing. Both companies can provide fuel test data on practically any product on the market at a cost that can often be earned back in a year or less, depending on the fleet and the technology.

MVTS can do full testing either at its test track or at your location, with your truck and the technology of your choice, Bear told HDT. “The cost varies, but if we do a test at the fleet, at their location, it can be around $20,000 to $30,000 for up to three tests, and often the device suppliers will help with the cost.

“That’s obviously a tough number for a smaller fleet, but we have some options for them too; a program called Real World Fuel Saving Analysis, where we compare your fleet to data we have already collected on various devices,” he says.

MVTS takes the fleet’s data, such as duty cycle, where they run, percentage of on-highway miles versus urban duty cycles, and a lot of other information, and then builds a digital model to predict the fuel savings the fleet will get with a particular technology.

“It’s purely science,” says Bear. “We’re able to get the fleet a valid real-world savings number from a four-hour test.

“The smaller the fleet, the tougher it is. But if a fleet really wants some good information and is willing to invest, it pays for itself. It’s kind of like going to college. If you’re going to use it for the rest of your career, then it’s worth doing as early as possible.”

PIT Group takes a different approach. Fleets can buy into the group, become partners in the consortium, if you will. The cost of joining the PIT Group is approximately $38 per power unit at the highest price point, with a minimum charge based on 100 power units, or $3,765 per year. Partners can suggest technologies they would like to have tested, or just wait for data to emerge on other tests as they are completed. Once you’re a partner, you have access to the test results, good and bad.

“The top-notch fleets actually have people working in quality improvement and tracking fuel. Those people are paid a salary and the company sees some benefit from that,” Michaelsen says. “For smaller fleets that can’t afford to have someone like that on staff, we try to be that person for them.”   




