I recently led a project that forced me to rethink how we validate battery degradation for lithium scrubber packs and how we schedule swaps so busy sites never face unexpected downtime over weekends. Scrubber fleets—especially battery-powered floor cleaners—are mission-critical for many clients, and a failed pack on a Friday evening can mean a messy Monday and reputational damage. In this case study I’ll walk through the practical steps I used to measure degradation rates, build a reliable predictive model, and implement swap schedules that avoid weekend failures while keeping costs and spare inventory under control.
Why this matters
Battery degradation affects run time, charge cycles, and ultimately operational availability. If you underestimate degradation you’ll either overstock spares (tying up capital) or risk in-service failures. For lithium scrubber packs—typically lithium-ion modules from manufacturers like LG Chem, Samsung SDI, or smaller OEM modules—the degradation pattern is predictable but influenced by charge practices, depth of discharge (DoD), temperature, and charger quality. The goal was to move from reactive replacements to scheduled, data-driven swaps.
Initial data collection: what I measured and why
Before changing anything, I collected baseline data from a mixed fleet (Nilfisk, Tennant and a few OEM-branded machines) across three sites: retail, corporate office and hospitality. Key metrics I logged:

- Cycle count (from the BMS where available)
- Measured capacity versus nominal
- Average depth of discharge (DoD)
- Pack temperature during operation and charging
- Charger behaviour and charge duration
- Daily run-time per machine
We pulled historical service logs from our CAFM and maintenance reports for the previous 12 months to correlate failures and downtime with usage patterns. In some cases the BMS could provide detailed cycle and voltage logs; where it couldn’t, we installed low-cost data loggers on a representative sample of scrubbers for a 6-week period.
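To give a flavour of the correlation step, here is a minimal pandas sketch that joins CAFM failure records to logger-derived usage data. The file and column names are illustrative assumptions, not our actual schema:

```python
import pandas as pd

# Hypothetical exports: service history from the CAFM, and cycle counts
# aggregated from the BMS / data loggers (one row per machine).
failures = pd.read_csv("cafm_service_log.csv", parse_dates=["failure_date"])
usage = pd.read_csv("logger_cycles.csv")

# Join failures to usage so each failure record carries that machine's
# cycle count and average weekly usage.
merged = failures.merge(usage, on="machine_id", how="left")

# Failures per site against average weekly cycles: a first look at
# whether heavy usage tracks downtime.
summary = (merged.groupby("site")
                 .agg(failures=("failure_date", "count"),
                      avg_weekly_cycles=("weekly_cycles", "mean")))
print(summary)
```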
Establishing an empirical degradation curve
Manufacturers publish cycle-life estimates (e.g., 2,000 cycles to 80% capacity), but real-world conditions differ. I derived an empirical degradation curve using measured capacity vs cycle count. Steps I followed:

- Run a controlled discharge test on each sampled pack at regular intervals to measure actual capacity
- Record the cycle count at each test from the BMS or data logger
- Plot capacity against cycle count and fit a curve through the points
- Compare the fitted curve against the manufacturer's published cycle life
Example table we used internally:
| Cycle Count | Measured Capacity (% of Nominal) |
|---|---|
| 0–100 | 100–98% |
| 300 | 95% |
| 600 | 90% |
| 1200 | 82–85% |
From this data the fade was steepest early in life (roughly 1.6% capacity loss per 100 cycles over the first 600 cycles), settling to roughly 0.8–1.1% per 100 cycles thereafter, which matched similar field studies for commercial lithium packs used in light industrial equipment.
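As a minimal sketch, the curve derivation can be reproduced in a few lines of numpy. The data points below are just the mid-points of the table above, not our full dataset:

```python
import numpy as np

# Mid-points of the measured capacity table above (cycles, % of nominal).
cycles = np.array([50, 300, 600, 1200])
capacity = np.array([99.0, 95.0, 90.0, 83.5])

# Fade between successive measurements, in % capacity lost per 100 cycles.
fade = -np.diff(capacity) / np.diff(cycles) * 100
for i, rate in enumerate(fade):
    print(f"{cycles[i]}-{cycles[i + 1]} cycles: {rate:.2f}% per 100 cycles")

# Linear fit over the flatter tail (>= 600 cycles) to project end of life.
slope, intercept = np.polyfit(cycles[2:], capacity[2:], 1)
cycles_to_80 = (80.0 - intercept) / slope
print(f"Projected cycles to an 80% capacity threshold: {cycles_to_80:.0f}")
```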
Adjusting for real usage: Depth of Discharge and temperature
Capacity fade is not only a function of cycles. Two modifiers we applied:

- Depth of discharge: packs routinely run down to deep discharge accrue more wear per cycle than packs kept to shallow discharges, so deep-cycled packs had their cycle counts weighted upwards
- Temperature: packs operated or charged in hot back-of-house areas fade faster, so cycles accrued at elevated temperatures were also weighted more heavily
By adjusting cycle counts with these factors we produced an "effective cycle" metric that better predicted remaining capacity.
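A sketch of that adjustment is below. The specific weighting factors are illustrative placeholders, not our calibrated values; in practice they should be fitted against your own capacity measurements:

```python
def effective_cycles(raw_cycles: float, avg_dod: float, avg_temp_c: float) -> float:
    """Weight a raw cycle count by depth of discharge and temperature.

    Factor values are illustrative; calibrate against measured capacity.
    """
    # Deeper discharges stress the cells more: with these numbers a 100% DoD
    # cycle counts as ~1.3 effective cycles, a 50% DoD cycle as ~0.9.
    dod_factor = 0.5 + 0.8 * avg_dod

    # Heat accelerates fade: add ~2% effective wear per degree above 25 C.
    temp_factor = 1.0 + max(0.0, avg_temp_c - 25.0) * 0.02

    return raw_cycles * dod_factor * temp_factor

# Example: 600 logged cycles at 80% average DoD in a 30 C plant room.
print(effective_cycles(600, avg_dod=0.8, avg_temp_c=30.0))  # ~752 effective cycles
```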
Defining a service threshold and safety margin
Operationally, I needed a practical rule: replace a pack before it reduces run-time below the minimum required for a single shift plus buffer. Steps:

- Establish the minimum run-time each site needs from a single charge
- Convert that run-time into energy using the machine's average draw
- Express the energy as a percentage of nominal pack capacity and add a safety buffer (we used 15%)
- Read off the cycle count at which the degradation curve crosses that threshold, and schedule the swap before it
Example: if a machine needs 3 hours and average draw is 400 W, required energy is 1.2 kWh. For a 2 kWh nominal battery that's 60% of capacity. If the degradation curve predicts reaching 75% capacity at X cycles, we schedule replacement before X to keep the 15% buffer.
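The rule reduces to a small helper. This sketch simply encodes the worked example above; the function name is mine, not from our tooling:

```python
def replacement_capacity_pct(shift_hours: float, avg_draw_w: float,
                             nominal_kwh: float, buffer_pct: float = 15.0) -> float:
    """Minimum capacity (% of nominal) at which a pack still covers one
    shift plus the safety buffer; schedule the swap before the pack
    degrades below this level."""
    required_kwh = shift_hours * avg_draw_w / 1000.0
    required_pct = required_kwh / nominal_kwh * 100.0
    return required_pct + buffer_pct

# Example from the text: 3 h shift, 400 W average draw, 2 kWh nominal pack.
print(replacement_capacity_pct(3, 400, 2.0))  # 60% + 15% buffer = 75%
```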
Swap scheduling to avoid weekend failures
With thresholds defined, I built swap windows prioritised by day-of-week risk:

- Packs predicted to cross their threshold in the coming weeks were flagged in a weekly risk report
- Flagged packs were swapped in a Tuesday-to-Thursday window, highest-risk machines first
- No planned swaps on Fridays or Mondays
Why mid-week? Swapping on a Monday risks missing an early-week failure; Friday swaps risk end-of-week failures going unnoticed. Mid-week swaps give at least 48–72 hours for verification charging and monitoring before the weekend.
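In code, the window selection is a simple date walk-back. This sketch assumes the degradation model yields a predicted threshold-crossing date and leaves 48 hours of verification headroom, per the rationale above:

```python
from datetime import date, timedelta

MIDWEEK = {1, 2, 3}  # Tuesday, Wednesday, Thursday (Monday == 0)

def schedule_swap(predicted_crossing: date) -> date:
    """Latest mid-week day that still leaves ~48 h of verification
    charging and monitoring before the pack is predicted to cross
    its capacity threshold."""
    candidate = predicted_crossing - timedelta(days=2)  # 48 h headroom
    while candidate.weekday() not in MIDWEEK:
        candidate -= timedelta(days=1)
    return candidate

# Example: a pack predicted to drop below threshold on a Saturday.
print(schedule_swap(date(2024, 6, 15)))  # -> 2024-06-13 (a Thursday)
```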
Operationalising the plan: tools and processes
To make this repeatable I integrated the plan into our maintenance workflow:

- New CAFM fields for cycle count, last capacity test and predicted swap date
- A weekly day-of-week risk report for each site
- Standard swap checklists for technicians, covering verification charging and post-swap monitoring
We used a mix of vendor tools (Tennant and Nilfisk diagnostics) and simple cloud-based spreadsheets linked to our CAFM. Where available, we leveraged OEM telematics to automate cycle counts and alarms.
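Where telematics exposed cycle counts, the alarm amounted to a margin check against the replacement point on the curve; a sketch with illustrative thresholds:

```python
# Simple alarm check run weekly against telematics/CAFM exports.
# The margin is illustrative; derive it from your own degradation curve.
ALERT_MARGIN_CYCLES = 150  # flag packs this many effective cycles early

def needs_swap_flag(effective_cycle_count: float,
                    replacement_cycles: float) -> bool:
    """True when a pack is within the alert margin of its scheduled
    replacement point, i.e. should go into the next mid-week window."""
    return effective_cycle_count >= replacement_cycles - ALERT_MARGIN_CYCLES

print(needs_swap_flag(1400, replacement_cycles=1500))  # True: schedule a swap
```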
Cost and spare inventory considerations
One of the positive outcomes of a data-driven schedule was optimisation of spare inventory. Rather than holding 20% extra packs, we reduced spares by staging swaps and aligning replacements with predictable windows. The trade-off is investing in better monitoring (BMS/data loggers) and disciplined reporting.
To justify the monitoring cost I modelled two scenarios: reactive replacement (higher spare stock, urgent weekend callouts) vs predictive replacement (monitoring cost + planned swaps). For clients with high-cost downtime or many sites, predictive replacement paid back within 6–12 months due to reduced emergency call-outs, fewer lost shifts, and extended life from improved charging discipline.
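Here's a back-of-envelope version of that comparison. Every figure below is a placeholder; substitute your own call-out, pack and monitoring costs:

```python
# Rough annual cost of a maintenance strategy for a scrubber fleet.
# All figures are placeholders, not our clients' actual costs.
def annual_cost(packs: int, failures_per_pack: float, callout_cost: float,
                spare_ratio: float, pack_cost: float,
                monitoring_per_pack: float = 0.0) -> float:
    emergency = packs * failures_per_pack * callout_cost  # urgent call-outs
    spares_capital = packs * spare_ratio * pack_cost      # capital tied up
    monitoring = packs * monitoring_per_pack              # loggers/telematics
    return emergency + spares_capital + monitoring

fleet = 40
reactive = annual_cost(fleet, failures_per_pack=0.5, callout_cost=600,
                       spare_ratio=0.20, pack_cost=1200)
predictive = annual_cost(fleet, failures_per_pack=0.05, callout_cost=600,
                         spare_ratio=0.10, pack_cost=1200,
                         monitoring_per_pack=150)
print(f"Reactive:   {reactive:,.0f} per year")
print(f"Predictive: {predictive:,.0f} per year")
```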
Lessons learned and practical tips
Implementing this approach across our clients reduced weekend battery failures by over 90% in the first three months and improved overall fleet availability. If you want templates for the CAFM fields, the weekly risk report, or the swap checklists we used, I can share them.