When I need to evaluate how long a commercial scrubber will run on a single battery charge, I follow a practical, repeatable plan that balances lab-style measurement with on-site realism. Over the years I've tested dozens of machines — from compact walk-behind scrubbers to large ride-ons — and I’ve learned that the difference between an advertised runtime and real-world performance often comes down to how you test and which variables you control. Below I share a step-by-step method I use to produce reliable run-time figures you can trust for procurement, scheduling, and battery management.
Why a formal test matters
Manufacturers often quote runtimes under ideal conditions (low brush pressure, low solution flow, constant speed, no headland turns). In everyday use you get varying speeds, frequent turns, different floor types, and varying operator behaviour — all of which can drastically change battery drain. A consistent test protocol lets you compare models apples-to-apples and also predict how many charges you’ll need per shift.
What you’ll need before you start

- The machine under test, with its battery fully charged on the manufacturer's recommended charger
- A stopwatch or timer
- A way to read battery state: the onboard percentage display, a voltage readout, or a multimeter
- A tape measure or measuring wheel to lay out and measure the test circuit
- A spreadsheet (or notebook) for logging readings during the run
- The same operator for every run, to keep behaviour consistent
Step-by-step test protocol
Follow this procedure to generate repeatable results:
1. Charge the battery fully using the recommended charger and allow the battery to rest as the manufacturer advises (some chemistries need a short rest after charging). Check that the brush and squeegee are in good condition, and fill the solution and recovery tanks to the levels you would normally start with.
2. Decide and record the settings you'll use: brush pressure, brush speed, solution flow rate, and travel speed. For a fair comparison, use the same settings across different machines. For example, medium brush pressure, medium solution flow, and a steady travel speed that reflects a realistic cleaning pace.
3. Mark out a route that mimics your cleaning environment: include straight runs, turns, and areas with different floor surfaces if possible. Measure the circuit length precisely so you can calculate the distance covered during the test.
4. Begin with a fully charged battery, start the timer, and operate the scrubber continuously on the test circuit at the predefined settings. Keep operator behaviour consistent; having the same person control speed and turns reduces variability.
5. Every 10–15 minutes (or every circuit), note the remaining battery percentage (if available), voltage reading, elapsed time, and distance covered. Also record any change in performance such as slower brush speed, reduced traction, or warning lights. If the machine has an onboard runtime estimator, log that too; it is useful to compare predicted against actual. (A minimal logging sketch follows this list.)
6. Decide beforehand what constitutes “run out”: when the machine shuts down, when performance becomes unacceptable (reduced brush speed, poor water pickup), or when a low-battery alarm triggers. I usually record both the time of the first low-battery alarm and the time to automatic shut-off.
7. Run at least three cycles to the defined end-point to get an average. Batteries and machines vary from run to run; averaging reduces noise in your data.
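To make the interval log and the distance calculation concrete, here is a minimal sketch in Python of how readings can be kept during a run and distance derived from laps of the measured circuit. The field names, circuit length, and readings are illustrative, not taken from any particular machine:

```python
# Illustrative interval log for one run. Readings are entered by hand from
# the machine's display (or a multimeter); the values below are examples only.
from dataclasses import dataclass
from typing import Optional

CIRCUIT_LENGTH_M = 230  # measured length of one lap of the test circuit (example)

@dataclass
class IntervalReading:
    elapsed_min: float           # minutes since the run started
    laps_completed: int          # full circuits completed so far
    battery_pct: Optional[int]   # None if the machine has no percentage display
    voltage: float               # pack voltage from the display or a multimeter
    notes: str = ""              # slower brush, reduced traction, warning lights...

    @property
    def distance_m(self) -> float:
        # Distance covered so far, from laps of the measured circuit
        return self.laps_completed * CIRCUIT_LENGTH_M

run_log = [
    IntervalReading(15, 3, 82, 25.1),
    IntervalReading(30, 6, 64, 24.7),
    IntervalReading(45, 9, 47, 24.3, "brush noticeably slower on turns"),
]

for r in run_log:
    print(f"{r.elapsed_min:5.0f} min  {r.distance_m:6.0f} m  "
          f"{r.battery_pct}%  {r.voltage:.1f} V  {r.notes}")
```

At the end of each run, the last entry plus the alarm and shut-off times become one summary row in the spreadsheet described in the next section.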
How to record and present results
Use a simple spreadsheet with columns for test number, date/time, battery percentage at start, settings used, elapsed time to alarm/shutdown, distance covered, battery percentage at end, and notes on performance. Here’s a basic example table structure I use:
| Test | Start % | Brush Pressure | Speed (m/min) | Elapsed to Alarm (min) | Elapsed to Shutdown (min) | Distance Covered (m) | Notes |
|---|---|---|---|---|---|---|---|
| 1 | 100 | Medium | 40 | 85 | 92 | 3680 | Normal performance, low-battery alarm at 85 min |
| 2 | 100 | Medium | 40 | 83 | 90 | 3600 | Consistent with Test 1 |
How to interpret the data
From the recorded runs, calculate averages and, if you want a measure of consistency, the standard deviation. The primary metrics I look at are:

- Average elapsed time to the first low-battery alarm
- Average elapsed time to automatic shutdown
- Average distance covered per charge (from the measured circuit length)
- The spread between runs (standard deviation), which shows how repeatable the result is
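To turn the recorded rows into those figures, here is a minimal sketch in Python. The first two rows mirror the example table above; the third is an illustrative extra run, since I average at least three:

```python
# Summary calculations across runs. Rows are (minutes to alarm,
# minutes to shutdown, metres covered).
from statistics import mean, stdev

tests = [
    (85, 92, 3680),
    (83, 90, 3600),
    (86, 93, 3710),  # illustrative third run
]

alarm = [t[0] for t in tests]
shutdown = [t[1] for t in tests]
distance = [t[2] for t in tests]

print(f"Mean time to alarm:     {mean(alarm):.1f} min (sd {stdev(alarm):.1f})")
print(f"Mean time to shutdown:  {mean(shutdown):.1f} min (sd {stdev(shutdown):.1f})")
print(f"Mean distance/charge:   {mean(distance):.0f} m")
```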
Remember that onboard displays often give optimistic numbers. I rely on observed runtime rather than theoretical estimators for scheduling work and quoting to clients.
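For scheduling, the observed average is easy to turn into a charges-per-shift figure. A minimal sketch follows, assuming a hypothetical shift length and changeover time; all three input numbers are examples, not measurements or recommendations:

```python
# Rough shift planning from the observed (not estimated) runtime.
# Shift length, changeover time, and runtime are example figures.
import math

observed_runtime_min = 84      # averaged time to the low-battery alarm
scrubbing_min_per_shift = 6 * 60
changeover_min = 15            # time lost per battery swap or opportunity charge

charges_needed = math.ceil(scrubbing_min_per_shift / observed_runtime_min)
downtime_min = (charges_needed - 1) * changeover_min

print(f"Charges needed per shift: {charges_needed}")
print(f"Changeover downtime:      {downtime_min} min")
```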
Factors that commonly skew runtimes
These are things I always check because they have outsized effects:

- Floor surface type: rougher or dirtier floors load the brush and drive motors harder
- Brush pressure and solution flow settings
- Travel speed and how many turns the route forces
- Operator behaviour, especially inconsistent speed
- Battery age and condition, and whether it was fully charged and rested before the run
- Brush and squeegee condition
Tips to improve battery life in real operations

- Charge with the manufacturer's recommended charger and allow the advised rest period before use
- Use the lowest brush pressure and solution flow that still give an acceptable cleaning result
- Keep brushes and squeegees in good condition so the motors aren't working against worn consumables
- Train operators to keep a steady pace and avoid unnecessary stop-start driving
- Where the schedule allows, recharge at the low-battery alarm rather than routinely running to automatic shut-off
Reporting to stakeholders
When I deliver test results to managers or procurement teams I include:

- The test settings, circuit description, and who operated the machine
- Averaged times to the low-battery alarm and to shutdown, with the spread between runs
- Distance covered per charge
- How the observed figures compare with the manufacturer's quoted runtime
- What this means in practice: how many charges or battery swaps a typical shift will need
- Any performance notes, such as reduced brush speed or poor water pickup near the end of a charge
With a clear, repeatable testing routine you can move beyond vendor claims and build realistic schedules and purchasing decisions. Accurate battery runtime data reduces downtime, helps choose the right model, and saves money — both in operational efficiency and by preventing premature battery replacement. If you want, I can provide a downloadable checklist or a blank spreadsheet template of the protocol I use to make it easy to start testing on your own site.