Fraud is constantly evolving, so why shouldn’t digital identity protection? The National Institute of Standards and Technology recently updated its Digital Identity Guidelines to include authentication — and the scope goes well beyond passwords. PYMNTS’ Karen Webster checks in with Sunil Madhu, chief executive officer of Socure, and Paul Grassi, senior standards and technology advisor, Trusted Identities Group at the National Institute of Standards and Technology, to get a sense of what’s outdated and what works.
The more business is done online, the more fraud makes inroads. The more individuals leave a trail of information across websites, the more tempting the targets are for hackers to make off with everything from Social Security numbers to health care information.
That old password you have? The one referencing your favorite Journey song? Your security question with your dog’s name? They ain’t cutting it.
To that end, the National Institute of Standards and Technology (NIST) has just updated its guidelines — spanning four volumes — on digital identity services. The official name is NIST Special Publication 800-63. The drafts were finalized after 74,000 unique visitors took a look and weighed in with suggestions for overhauling the previous guidelines (this is the third iteration and the first in four years).
The overarching theme is to mitigate risk when offering individuals access to government services — keeping their personal data safe and keeping the agency safe from a data breach — and the conduits to getting there have, perhaps, more moving parts than you might expect.
Assurance of such digital safety has several components — among them identity proofing, authenticators and federation assertions, all governed by risk analysis. In this case, the guidelines target federal agencies, offering up required and suggested practices. Each agency has jurisdiction over the levels of assurance it employs; the governing authorities are FISMA and OMB Circular A-130.
In an interview with PYMNTS’ Karen Webster, Sunil Madhu, chief executive officer of Socure, and Paul Grassi, senior standards and technology advisor of the Trusted Identities Group at the National Institute of Standards and Technology, weighed in on the reasons for creating a new guideline set … and why the time is right, right now.
Grassi noted that through the Trusted Identities Group, which was born in the previous presidential administration, “we’ve been very focused on supporting the market to deliver secure and privacy-enhancing solutions and the … government as an early adopter leveraging market-based solutions rather than building our own. We learned a lot … it was obvious that [previous] guidelines,” which have worked well over the past 10 years, needed changes. The old model of levels of assurance, ranked from little confidence to high confidence, he continued, was not working across all use cases.
Identity proofing, said Grassi, remains a “squishy target for hackers, predominantly using data that is already public,” and thus existing guidelines were not effective in keeping the bad guys out. The technical side of the authenticator credentials — the ones we all use to log in to websites, from social media to commerce juggernauts — has seen innovation, especially in mobile. The general theme here is to use an architecture and processes to shepherd digital identities. Once a user has been proofed, and issued an authenticator, identities can be federated, a characteristic that allows that identity to work across separate organizations. Using the same identity data across a continuum of networks, as Webster noted, can have many applications, both now and in the future.
But herein lie challenges alongside opportunity: Organizations that may need information on individuals’ identities also need to establish the level of assurance they are willing to accept. They also, as Grassi explained, may need to partner with identity providers to meet, and often exceed, that level of assurance.
“Part of the problem,” Madhu told Webster, stems from the old assumption that there exists a “unified level of assurance,” a single score of authentication. But now the shift, as per NIST, is to suss out whether and how much proofing and authentication must be applied to an identity given a specific context.
Explaining the shift a bit more broadly, Madhu said, “We have silos of information that would … [house] the place that you authenticated at originally, that enrolled you.” And that agency, typically, served as the identity provider. Now the user takes that identity with them to a service provider or other entity, and the question arises as to just how much should be revealed.
In response to Webster’s assertion that there needs to be at least some core attributes that ascertain someone is who they say they are, Grassi stated that those core attributes are “use case driven” and that while a strong digital identity needs a trusted anchor back to something — that something does not have to be a predefined set of attributes across all services, in the guise of, say, a Social Security number. That’s “a pretty darn weak anchor,” he said.
In contrast, a more robust anchor can take root in certification programs and in standards, he said, where organizations can say they establish identities and manage data using clearly defined guidelines (such as those offered through the recently debuted NIST documents). Thus, information provided by a user has been confirmed across a variety of validation and verification processes.
The real exciting part, said Grassi, is that “now when I want to use that identity in certain use cases, I can be in control of what data is being shared and with” whom.
Accessing a service that only needs to know a user is over a certain age threshold — with the date of birth already on file with a provider that has proofed the individual — can be answered with a binary “yes or no, they are over 21” check. And if the service wants to have a full deck of information, such as a date of birth along with a home address, “that transaction can be rejected by the user.” Conversely, he continued, “if the user wants to share more than necessary to access the service … they can. It’s their prerogative.”
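The age-threshold scenario Grassi describes can be sketched in a few lines. This is an illustrative model, not code from the NIST guidelines; the identity record and function names are hypothetical:

```python
from datetime import date

# Hypothetical record held by an identity provider that has already
# proofed this user. The relying service never sees this data directly.
PROOFED_IDENTITY = {"name": "Pat Example", "date_of_birth": date(1990, 6, 15)}

def assert_over_age(identity: dict, threshold_years: int, today: date) -> bool:
    """Answer an age-threshold query without revealing the date of birth."""
    dob = identity["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= threshold_years

# The service learns only a yes/no answer — never the birth date itself.
print(assert_over_age(PROOFED_IDENTITY, 21, date(2017, 7, 1)))  # True
```

The design point is that the boolean crosses the wire while the underlying attribute stays with the provider — exactly the minimal-disclosure posture the guidelines encourage.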
The possession of a secure private key, said Grassi, “establishes with a high level of confidence that I am continually … who I say I am.” The important trust element here is how tightly bound the proofed identity is to that private key. The guidelines don’t force an architecture — for example, keys stored in devices (like a “storage container,” as Webster noted) or credentials stored in the cloud — but allow for both. Identity credentials can be in the cloud, illustrated Grassi, but can also be housed in a cryptographic store in a mobile phone (oh, and it’s quite difficult for the bad guys to steal everyone’s data this way). “Mobile solutions can put users in the middle of a transaction where they are only sharing what they need to share, reducing the ability for an identity provider to be able to track your online behavior,” he noted.
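Proving possession of a key without ever transmitting it typically works by challenge-response. A minimal sketch follows; real deployments would use an asymmetric key pair held in a device’s secure element, but an HMAC over a provisioned secret stands in here so the example runs with only the standard library:

```python
import hashlib
import hmac
import secrets

# Key provisioned to the device at enrollment (stand-in for a private key
# in a phone's cryptographic store, per the architecture described above).
device_key = secrets.token_bytes(32)

def respond(challenge: bytes, key: bytes) -> bytes:
    # The device proves possession of the key without ever sending it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(respond(challenge, key), response)

challenge = secrets.token_bytes(16)            # fresh nonce per login attempt
response = respond(challenge, device_key)
print(verify(challenge, response, device_key))  # True
```

Because each challenge is a fresh nonce, a captured response cannot be replayed against a later login — the property that makes key possession a stronger continual claim of identity than a static password.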
In a world where the internet looms ever connected, in payments and commerce, as Webster posited, Alexa and other interactions with speakers and the like offer a knotty scenario. It’s not the device, theoretically, that stores the credential. It’s the skill connected to the app connected to the cloud that stores the credential. “IoT is a paradigm that we are just beginning to address,” said Grassi. But, he added, “we understand the unique challenges of IoT and are actively working with industry to determine what guidelines and best practices are needed. There are standards, like OAuth (open authorization), that exist that mitigate the concern that ‘the Alexa skill has your credentials,’ because with OAuth, credentials are not shared. I am hoping these standards can work with IoT so we don’t need to create something new.”
Biometrics may be a buzzword. But according to Grassi and Madhu, biometrics offers no panacea in the quest for digital identities that prove foolproof and hack-proof.
Simply put, the pair explained, biometrics offer great promise, but they are not all created equal. “There’s a balance that has to be walked when we talk about biometrics,” said Grassi. They are not a secret. They can be lifted, they can be forged and they can be compromised because they are not private.
One salve: to create parameters around the use of biometrics to guard against the fact that they are not secret. “My goal,” said Grassi, “is that someone that has my fingerprint will never be able to use it to authenticate remotely, but that I can still log in. So, there are requirements around performance and security, to include presentation attack detection (often referred to as spoof detection); we have requirements about devices being trusted as well, since sensor technology is key to meeting our requirements. By the time [hackers] try and access my accounts, I’m likely shutting down access anyway because I realize I am no longer in control of my phone.”
Webster queried the pair about the highest order threats that lurk for governments and specifically for federal agencies. Said Grassi, “It’s not sexy, but the biggest vulnerability is still passwords.”
He cited the Verizon data breach report that showed 81 percent of attacks were due to weak credentials, typically a password. “We’ve also flagged SMS as restricted, even though it has some level of usage, ubiquity and user acceptance.” Some multifactor is better than no multifactor efforts, but, as Grassi noted, high-risk data needs higher levels of defense in place.
Password files on a server should be resistant to offline attacks, he said, making a hack less than rewarding to fraudsters. NIST has put strong requirements in place for data protection. Organizations should also reject passwords that are known to have been breached, he told Webster. The password is the weakest method of authentication, said Madhu, but consumers are stuck with them for a while (even if there’s a level of what Grassi said is fatigue in place).
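The two controls Grassi describes — rejecting known-breached passwords and storing passwords so a stolen file resists offline attack — can be sketched together. This is an illustrative sketch, not the NIST-specified implementation; the breach list, iteration count and function names are assumptions for the example:

```python
import hashlib
import os

# Illustrative stand-in for a corpus of previously breached passwords.
BREACHED = {"password1", "letmein", "dontstopbelievin"}

def acceptable(password: str) -> bool:
    # Reject short passwords and any password known from prior breaches.
    return len(password) >= 8 and password.lower() not in BREACHED

def store(password: str) -> tuple:
    # A salted, iterated hash (PBKDF2 here) makes a stolen password file
    # expensive to attack offline, instead of a plaintext or fast-hash file.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest
```

The salt ensures identical passwords hash differently across accounts, and the iteration count raises the per-guess cost for an attacker holding the file — which is what makes the hack “less than rewarding.”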