Data hygiene is a term straight outta the early aughts that’s become as relevant (again) as the 1990s QR code in the post-pandemic economy of 2021. Data needs to be spotlessly clean, now more than ever. Which makes all this remote data gathering more than unsettling.
PYMNTS’ February 2021 Digital Consumer Onboarding Tracker®, done in collaboration with Melissa, notes, “Businesses’ data-gathering efforts hinge on smooth onboarding procedures, and getting the process right can yield crucial insights into customers’ preferences as soon as they sign up for services. Getting it wrong can turn potential customers away or result in data quality issues that hamper relationships from the start. One recent study revealed that up to 30 percent of consumers’ issues with businesses stem from subpar onboarding experiences, revealing that firms have work to do to smooth out these pain points.”
Data duplication (“dupes”) has long been the bane of data-driven marketers, and it still is. It’s a perfect example of why virtually all companies, regardless of size or industry, should be taking steps now to cleanse their data as digital-first everything gets fully underway.
Up To 10 Percent Of All Data Is Affected
Advanced systems running sophisticated code can lull companies into believing they have the data basics covered when they don’t. There’s a reason to single out data duplication: it’s a big problem.
Bud Walker, chief strategy officer at address and identity verification solutions provider Melissa, told PYMNTS that “up to 10 percent of the average business’s customer information is duplicated,” leaving many companies to lose money by resending packages mailed to incorrect addresses or losing customers by shipping them marketing materials more than once — or not at all. Walker said that businesses can take a proactive approach to cutting down on these errors by leveraging geolocation data and tracking telephone numbers.
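To make the phone-number approach concrete, here is a minimal sketch (not Melissa’s actual product logic — the records and helper names are hypothetical) of how a telephone number can be reduced to a normalization key so that differently formatted entries for the same customer collide and get flagged as dupes:

```python
import re

def phone_key(raw: str) -> str:
    """Reduce a phone number to digits only so formatting variants collide."""
    digits = re.sub(r"\D", "", raw)
    # Assumption: drop a leading US country code so "+1 (212) 555-0100"
    # matches "212-555-0100".
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

# Hypothetical customer records for illustration only.
records = [
    {"name": "A. Smith", "phone": "+1 (212) 555-0100"},
    {"name": "Alice Smith", "phone": "212-555-0100"},
    {"name": "Bob Jones", "phone": "646-555-0199"},
]

seen: dict[str, dict] = {}
dupes = []
for rec in records:
    key = phone_key(rec["phone"])
    if key in seen:
        dupes.append((seen[key], rec))  # flag for human review, don't silently drop
    else:
        seen[key] = rec

print(len(dupes))  # one duplicate pair flagged
```

In practice the key would combine several signals (phone, geolocation, normalized address) rather than phone alone, and flagged pairs would be reviewed rather than deleted outright.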
Per the new Digital Consumer Onboarding Tracker®, “Geolocation data and telephone number tracking offer two potential solutions to fighting data duplication, but there is another technology that can prove useful: artificial intelligence (AI). Other research suggests that as much as 30 percent of companies’ master data files contain duplicated information, costing them money and time as they work to sort through inaccurate data. AI could reduce data duplication by analyzing large amounts of customer information and identifying and flagging misspellings.”
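The misspelling-flagging idea doesn’t require a full AI stack to demonstrate. A rough sketch — assuming hypothetical customer strings and a similarity threshold that would need tuning against labeled examples — can use simple fuzzy string matching to surface near-duplicate records that exact matching would miss:

```python
from difflib import SequenceMatcher

def normalize(s: str) -> str:
    """Lowercase and collapse whitespace so trivial differences don't count."""
    return " ".join(s.lower().split())

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two normalized strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Hypothetical customer records: the first two are the same person
# with a misspelled name and an abbreviated street.
customers = [
    "Jonathan Q. Pemberton, 42 Elm Street",
    "Jonathon Q Pemberton, 42 Elm St.",
    "Maria Delgado, 9 Harbor Way",
]

THRESHOLD = 0.8  # assumption: would be tuned on a labeled sample of known dupes

flagged = [
    (a, b)
    for i, a in enumerate(customers)
    for b in customers[i + 1:]
    if similarity(a, b) >= THRESHOLD
]

for a, b in flagged:
    print("possible duplicate:", a, "<->", b)
```

Production systems replace the pairwise comparison (which scales quadratically) with blocking or trained matching models, but the principle — score similarity, flag above a threshold, let a human confirm — is the same.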
Bad Data Will Cost You Customers
It’s an age of online ordering and home delivery. That entire proposition rests on accurate customer data — correct address for starters — and it’s surprising how many companies are still roughing it when it comes to absolute data integrity.
“Companies that do not have access to quality data risk losing not only customers frustrated by receiving the wrong — or duplicate — materials but also significant revenues due to these inefficiencies,” according to the latest Digital Consumer Onboarding Tracker®.
The Tracker adds, “Only 45 percent of 2,165 data and analytics decision-makers surveyed in a [recent] study reported consistently using rigorous quality checks to ensure their data’s accuracy; 60 percent admitted they lack confidence in their data; and only 10 percent believed they excel at managing data quality.”