Study Shows Highly Integrated Digital Financial Services Accounts Go Long Way with Consumers

Digital Banking

About half of consumers say their digital financial services accounts are “highly integrated,” meaning it’s relatively easy in most cases to transact, make payments and see transaction histories across accounts and channels, according to new PYMNTS research.

Of course, if about half of consumers find their accounts highly integrated, the other half don't. Consumers who frequently use multiple channels to access their accounts are more likely than the average consumer to want more integration from their providers, according to our research.

In “The Future Of Authentication In Financial Services: Engaging Consumers Across Channels And Devices,” a PYMNTS and Entersekt collaboration, we surveyed 2,719 U.S. consumers to examine how financial institutions can deepen relationships and build trust with consumers by ensuring consistent cross-channel access across different devices.

More than half of consumers (56%) say their digital financial services accounts are “very” or “completely” integrated. Bridge millennials are the most likely to say they feel this way, at 65%, with millennials just behind them at 64%.

More than three-fifths (62%) of consumers are satisfied with their providers’ current integration levels, but consumers who use multiple channels to access their financial accounts tend to be less satisfied than the average consumer.

Seventeen percent of these multichannel consumers say their current levels of integration fall short of what they need, compared with an average of 13% across all consumers.

Our research also found that a financial institution offering its customers a positive experience can earn their trust more quickly than one that doesn’t. More than half (53%) of the accountholders we surveyed said their financial services providers’ ability to deliver “integrated, consistent, cross-device experiences” boosts their trust in them.

That means integration can give providers the chance to deepen their relationships with accountholders — or destroy them.

Other key findings in “The Future Of Authentication In Financial Services: Engaging Consumers Across Channels And Devices” include:

  • Seven out of 10 bank account holders use online accounts, and nearly one-third of surveyed consumers have accounts with online-only banks. Meanwhile, 39% report having an account at a traditional retail or commercial bank with a physical branch.
  • One-quarter of consumers prefer to access their digital financial accounts across several devices. Bridge millennials and millennials are the most likely to do so, with 37% of bridge millennials and 36% of millennials toggling between different means of accessing accounts.
  • Mobile apps have become critical tools for finance, with 36% of consumers with digital financial services accounts saying they primarily access these accounts using apps on their mobile devices.

Tech Giants Push Back at a Crucial Time for the EU AI Act

The EU AI Act has been lauded as the most comprehensive set of regulations on artificial intelligence on the planet. But it is a set of general principles without details for implementation.

The real work comes with the Code of Practice for general-purpose AI models, which details the compliance requirements for AI companies.

“Many outside Europe have stopped paying attention to the EU AI Act, deeming it a done deal. This is a terrible mistake. The real fight is happening right now,” wrote Laura Caroli, a senior fellow at the Wadhwani AI Center, for the Center for Strategic and International Studies.

The code of practice will undergo three drafts before being finalized at the end of April. These voluntary requirements take effect in August.

However, the third draft was supposed to be released Feb. 17 but has been delayed, with indications that it won’t be out for a month, Risto Uuk, head of EU policy and research at the Future of Life Institute, told PYMNTS. The advocacy group’s president is MIT professor Max Tegmark.

Uuk believes the draft’s delay was due to pressure from the tech industry. Particularly tricky are rules for AI models that pose a systemic risk, which apply to 10 to 15 of the biggest models created by OpenAI, Google, Meta, Anthropic, xAI and others, he added.

Tech Companies Push Back

Big tech companies are boldly challenging EU regulations, believing they will have the support of the Trump administration, according to the Financial Times (FT). Meta has dispatched tech lobbyists in the EU to water down the AI Act.

The FT also said Meta refused to sign the code of practice, which is a voluntary compliance agreement, while Google’s Kent Walker, president of global affairs, told Politico that the code of practice was a “step in the wrong direction” at a time when Europe wants to be more competitive.

“Certain big technology companies are coming out saying they either will not sign this code of practice unless it is changed according to what they want,” Uuk said.

One issue of contention is how copyrighted material is used for training. Another is the requirement to have an independent third party assess models for risks, according to Uuk.

Tech companies have complained that the code goes beyond the EU AI Act’s requirements, Uuk said. But he noted that many of them already follow these practices in collaboration with the U.K. AI Safety Institute and others, and already release their technical reports publicly.

Uuk said there’s concern that the EU will weaken the safety provisions because of tech companies’ opposition. He also noted that the new European Commission administration that took office last December leans toward cutting red tape, simplifying rules and increasing innovation.

What Happened After the AI Pause Letter?

The Future of Life Institute is perhaps best known in AI circles as the organization that circulated an open letter in March 2023 calling for a six-month moratorium on advanced AI models until safety protocols were developed. Signatories included Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and AI pioneers Yoshua Bengio and Stuart Russell, among others.

Did the letter work? Uuk said there was no pause in AI development, and the fast pace of AI advancements has continued.

Moreover, “Many of these [AI] companies have not increased their safety work, which the ‘pause’ letter called for,” he added. “The pause was not just for the sake of pausing, but you would use it to increase AI safety work, and this work, arguably, in many cases, has not happened.”

In May 2024, OpenAI dissolved its AI safety team days after the resignations of its two AI safety leaders: Chief Scientist Ilya Sutskever and safety co-leader Jan Leike, who posted on X that OpenAI didn’t prioritize AI safety.

One silver lining is that while tech companies continued to build, regulatory action has gained momentum globally, Uuk said. He noted the following:

  • The EU AI Act became the world’s first comprehensive AI regulation. It was adopted in March 2024.
  • South Korea adopted the Basic AI Act last December, mirroring the EU’s framework.
  • China has been introducing AI governance policies while Brazil is working on its own AI Act.
  • The U.S. remains fragmented, with states introducing their own laws.

However, Uuk was disappointed about this year’s AI Action Summit in Paris, following the first two in the U.K. and South Korea.

Unlike past safety summits, the Paris summit leaned toward promoting AI innovation. “There was barely any discussion of safety,” Uuk said.