Vlog: Defeating Deepfakes
Join Michael Cichon, CMO at 1Kosmos, and Vikram Subramaniam, VP of Solutions, as they delve into the alarming rise of deepfakes. Discover how this evolving technology threatens security and authentication measures, and explore 1Kosmos’ innovative solutions to combat this growing threat. Stay informed and safeguard your digital identity.
Michael Cichon:
Hello everybody, this is Michael Cichon. I’m the Chief Marketing Officer at 1Kosmos, here with Vikram Subramaniam, our VP of Solutions, to talk about deepfakes. Vik, it’s great to have you, and welcome to the vlog. Deepfakes have been in the news lately, and it’s frightening news. In one recent case in Asia Pacific, they were used to con a company out of $25 million across several payments. Closer to home, we’ve heard stories of deepfakes calling people on the phone: folks answer, and their friends, family, or loved ones appear to be in distress and asking to be rescued. Some callers claim to have kidnapped people and demand ransom. Quite devastating for anybody on the receiving end. So this phenomenon of deepfakes, how did we get here? What are they?
Vikram Subramaniam:
Yeah, imagine that Mission Impossible is becoming real, and we are experiencing it in our everyday lives. I’d only seen it in the movies, where you’ve got large command centers taking small voice clips of different things and playing them back to trick the bad guys into thinking it’s one of their own. Now regular people are facing it. Essentially, with AI and freely available software these days, software that takes advantage of the chips in our computers and generally available cloud and internet, you can produce video, you can produce voice, you can produce anything that looks or feels real to the person on the other side of a virtual line. I’m thinking of a phone line, I’m thinking of a laptop. This is where sometimes I just need to shake your hand to know if you are real, right?
Michael Cichon:
Yeah. Do you know I’m real right now? Is this not a deepfake? Is this somebody pretending to be me?
Vikram Subramaniam:
Exactly. Exactly right. That’s where the world is going, but these are essentially deepfakes, the same things we have seen in the movies: people wearing masks, creating videos by morphing someone’s face onto someone else, or sounding just like someone else. It’s crazy what’s out there with deepfakes.
Michael Cichon:
Yeah, the evolution has been stunning. We’ve talked generally about artificial intelligence, but over the last 20 years we’ve seen it evolve from the ability to recognize patterns deep in data, to a predictive science, and now to something approaching what looks like thinking, where computers can draw inferences without being told to do so. As for the deepfake’s lineage, how we got here: essentially, what we have is two chess players, if you will. One adversarial network creates a likeness and presents it to another adversarial engine to be graded for similarity to a sample, and that iteration runs at the speed of light, producing synthetic media, whether that’s a voice, a video, or even texting patterns, the way somebody might text a friend or chat with somebody. That can also be deepfaked. Before I get into training a deepfake attack: five years ago, to use this technology, you had to be one of the world’s leading data scientists.
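The two-player loop Michael describes, a generator producing a likeness and an adversary grading it against a sample, can be sketched in miniature. This toy Python example is illustrative only (the “real data” is just a number, and the update rule is a simplification of gradient training): a generator keeps whatever variations fool the critic, which is the same feedback loop that makes synthetic media progressively more convincing.

```python
import random

def discriminator_score(sample: float, real_mean: float = 5.0) -> float:
    """Toy 'discriminator': grades a sample by its closeness to the real
    data (here the real data is just the number 5.0). Score in (0, 1]."""
    return 1.0 / (1.0 + abs(sample - real_mean))

def train_generator(steps: int = 2000, lr: float = 0.05, seed: int = 0) -> float:
    """Toy 'generator': starts far from the real data and keeps any random
    variation the discriminator scores higher -- the adversarial feedback
    loop behind GAN-style synthetic media."""
    rng = random.Random(seed)
    guess = 0.0
    for _ in range(steps):
        candidate = guess + rng.uniform(-1.0, 1.0)       # propose a variation
        if discriminator_score(candidate) > discriminator_score(guess):
            guess += lr * (candidate - guess)            # keep what fools the critic
    return guess
```

After training, `train_generator()` lands close to the “real” value of 5.0 without ever being told that value, only how well each attempt scored.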
Vikram Subramaniam:
Yeah, data scientists, researcher, videographer, one of those special effects companies, right? Yeah.
Michael Cichon:
Right. Now, I just did a simple search the other day using ChatGPT, and it brought me half a dozen tools with which I could probably create a deepfake video within an hour. Deepfakes trained on humans are one issue. We saw that with the $25 million that was taken, and certainly to the human eye these deepfakes are very hard to detect. So in terms of humans, we’ve got to train people. When we start to move millions of dollars around, we have to have policies, procedures, and oversight to prevent fraud at that scale. But I think more on the sinister side, over the past six or seven years, authentication using live biometrics has taken hold. This is a step up from what we used to do. Instead of a thumbprint or a Face ID on a phone, we’re now logging in with essentially a live selfie, and that’s a step up in security.
But talk a little bit, if you will, about the way deepfakes can be used to attack biometric authentication. Some context before you start: companies moved into biometric authentication to get rid of passwords, and a live selfie had been pretty hard to fake. A live selfie can also be verified against a driver’s license, so you could have that direct connection from the live selfie to an offline credential and prove identity. But now deepfakes are attacking that. So can you talk a little bit about how deepfakes are being trained on authentication in cyber attacks?
Vikram Subramaniam:
Yeah, absolutely. You put it correctly: we want convenience. We want to look at something and have it open. We have that happening on our phones, we have that happening at the immigration gates, and now we obviously want it with our enterprise systems. Until now, if you needed to fool face detection software, you would present a picture or a video of the person you wanted to impersonate. In the early days, of course, that couldn’t be detected. But technology has improved over multiple decades, and it can now prevent presentation attacks, where you’re actually presenting something to the camera. If what’s captured is a picture of a picture, from a screen, from a video, any of those things, the software knows it’s not a live person, and the face is matched only after that check passes.
So presentation attacks, yes, are evolving a little, but for the most part a lot of companies, and our solution, can prevent them. But because more and more companies are moving toward facial recognition and people are remote, impersonation has become big. Like you mentioned, there’s technology out there to create videos and pictures that look like you from freely available content. You’ve got public content; this vlog is going to be public. People can take small snippets of you, take your voice, take your face, create a short video, and very likely try to fool the system. Until now, though, you had to hold up another device to show that video to the camera, and the moment you did, between the camera and the other objects in frame, the software could detect that it wasn’t a live person and that someone was attempting an attack.
But with deepfakes, the thing is that they are utilizing what we call injection attacks. Injection is basically fooling the entire piece of software. The software depends on capturing a picture from the camera, and it trusts that the pictures are coming from the camera and that the camera’s features captured them. With injection, that trust is gone. Injection attacks go directly into the video stream that comes in from the camera and goes out to the server. So we’ve got different kinds of attacks.
We’ve got JavaScript attacks, we’ve got multiple-cam or substitute-cam attacks, anything that can inject itself into the stream. With the advent of this technology, different scripts are available, different pieces of software like virtual cams, which allow you to present multiple cameras and inject a video through them. And the capturing software is none the wiser; it thinks there is actually a person in front of the camera. So that’s where deepfakes are, but obviously there is a way to prevent them. Michael, I know you’ve been doing some research on deepfakes. Are there other things you have noticed people doing out there?
Michael Cichon:
Okay, well, let’s talk about this a little bit. You talk about presentation attacks. What are the defenses against presentation attacks?
Vikram Subramaniam:
Sure. So this is why, when we do a login at 1Kosmos, we verify multiple things. We verify the liveness of the user and also do a face match, and it’s a one-to-one match, not a one-to-many match, which means we are verifying exactly whether the user is who they say they are. Before any of that, we verify that the user is a live person. There are a couple of ways to do that. One is to have the user follow specific instructions on the screen, like, “Turn left, turn right, smile.”
You know what I mean? I used to say, and I think it’s still valid, that we’re the happiest MFA out there. We make people smile. When people smile on cue, we know it’s an actual human who’s able to follow instructions. Along with that, we verify all of the other parameters of the picture we captured of the person. We can capture the picture at a random interval and then determine whether this is a live person or not.
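The “turn left, turn right, smile” flow Vik describes is a challenge-response pattern. A minimal sketch with hypothetical names (a real system would classify gestures from video frames; here the observed list stands in for that classifier’s output):

```python
import secrets

# Gesture vocabulary the server can ask for.
CHALLENGES = ["turn_left", "turn_right", "smile", "blink"]

def issue_challenges(n: int = 3) -> list[str]:
    """Pick a random, unpredictable gesture sequence so a pre-recorded
    video cannot anticipate what will be asked."""
    return [secrets.choice(CHALLENGES) for _ in range(n)]

def verify_active_liveness(issued: list[str], observed: list[str]) -> bool:
    """Pass only if every requested gesture was performed, in order."""
    return observed == issued
```

The security comes from unpredictability: because the sequence is chosen at request time, a replayed clip has no way to match it.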
Michael Cichon:
Okay, awesome. So what you’re describing there is active liveness detection?
Vikram Subramaniam:
Correct, that’s active liveness detection. Along with that, the technology has evolved, and we have evolved, to where we have brought out passive liveness. With passive liveness, we can determine the liveness of a person just from a selfie. There are a couple of attributes of the picture we can use to determine that the person is live, from the very fact that they’re looking at the screen as the selfie is taken. We are not waiting for them to upload a picture or anything like that. We get what active liveness gives us, without the need to spoil the user experience.
Michael Cichon:
Okay. Okay. All right. So then, on the injection side, we’re monitoring for what? Virtual cameras? External cameras?
Vikram Subramaniam:
So see, the thing is, on the injection side, you’re injecting into the stream. That means something needs to sit on the client, at the place where the picture or video is getting captured, to determine liveness, because that’s exactly where all of these other pieces of software are sitting. This piece of code, this SDK that sits at the browser layer or on the client, is able to determine: “Do you have multiple cameras out there? Are you using something that is plugged into the USB? Or is it something virtual?”
It also prevents anything from injecting random pieces of information into the stream going out to our server. What that means is that cracking in there is going to be highly improbable, and obviously there are different avenues to keep improving the technology. But right now, we are able to determine, at the point where the picture is getting captured, “Hey, is there an injection attack happening?” And this can be done even on a mobile device. So no problem.
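One signal such a client-side check can use is the camera’s reported metadata. A hedged sketch in Python (the label list and the label-matching heuristic are illustrative, not 1Kosmos’s actual SDK logic; real detection combines many signals):

```python
# Labels commonly reported by popular virtual-camera software.
SUSPECT_LABELS = ("obs virtual", "manycam", "virtual cam", "droidcam")

def flag_suspect_devices(devices: list[dict]) -> list[str]:
    """Return labels of video inputs that look like virtual cameras.
    Each dict mirrors the kind/label fields a browser's
    enumerateDevices() call would report."""
    flagged = []
    for device in devices:
        if device.get("kind") != "videoinput":
            continue  # only camera inputs matter here
        label = device.get("label", "")
        if any(s in label.lower() for s in SUSPECT_LABELS):
            flagged.append(label)
    return flagged
```

Label matching alone is easy to evade (an attacker can rename a device), which is why it would only be one input among several, alongside stream integrity checks.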
Michael Cichon:
Okay, all right. So when we talk about identity verification at the very highest levels of assurance, we’re essentially talking about an enrollment process where somebody presents themselves to a camera and that likeness is verified against an offline credential, something like a state-issued ID, a driver’s license, a passport. For this to happen at the point of enrollment, we’re doing several things, correct? We’re doing passive liveness, we’re doing active liveness, we’re checking for injection attacks, and then, when somebody presents a credential, we’re also inspecting that credential for signs of manipulation: has the photograph been manipulated? Is it a valid credential, checked, for example, against the state databases that contain driver’s license information? Only when it passes all those tests, all those inspections, has somebody successfully registered. Once that enrollment process is completed, folks use this biometric to log in. So again, at the point where an access request is being made, we’re inspecting for signs of manipulation: liveness, passive and active, and then the injection attacks.
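The all-or-nothing enrollment gate Michael outlines can be sketched as follows; the check names are hypothetical stand-ins for the steps he lists, not an actual 1Kosmos API:

```python
from dataclasses import dataclass

@dataclass
class EnrollmentEvidence:
    passive_liveness_ok: bool
    active_liveness_ok: bool
    no_injection_detected: bool
    document_authentic: bool        # tamper / validity check on the ID
    face_matches_document: bool     # selfie vs. ID photo, one-to-one

def enroll(evidence: EnrollmentEvidence) -> bool:
    """High-assurance enrollment succeeds only if every check passes;
    a single failed inspection rejects the attempt."""
    return all(vars(evidence).values())
```

The design point is conjunction: liveness, injection detection, and document inspection are independent defenses, and a deepfake must beat all of them at once.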
Vikram Subramaniam:
This is where authentication is changing for the enterprise and constantly changing for customers. The entire space is evolving, because enrollment used to be: hey, you have a username and password, and you get in. At most you do an MFA. Now enrollment can happen really quickly, self-service, just by utilizing your credential. That’s powerful: you’ve already got a credential, you can do this remotely, and the consuming side, whether it’s an enterprise or a customer-facing piece of software, can trust what is happening. From then on, as a customer, like you mentioned, I have the ability to authenticate using my face, and we are able to say confidently, “You know what, you’re a live person, and I know who you are, Vikram. Welcome.” Wouldn’t you like that experience?
Michael Cichon:
I certainly would. So at 1Kosmos, we talk and we have talked a long time about certifications, and for some, that’s a snoozer topic, “Why are we talking about certifications?” So what’s the role of certifications in this? It’s one thing for any number of companies to claim that, “We do this stuff,” but what proves that we do this stuff?
Vikram Subramaniam:
Right, yeah, I think you’re right. Even the clients where we have implemented have said, “Okay, you know what? I really want to test out your software. I want to bring a picture in there. I want to put a mask on. I want to put a video out there, and I’ll test it.” And we say, “All right, go test it. But that’s exactly why we do the certifications.” We do this because there are third parties who go through and try to break the system, and we do pen testing. So it’s very important. It’s something our clients can take to their auditors and say, “Hey, I’ve got a certified system which has been through the wringer, and now I can cut my timeline short, implement it, get my users actually using it, and focus on what’s important for improving the business.”
Michael Cichon:
Got it. So in terms of these certifications, just real briefly, ISO has a set of presentation attack detection specifications. That’s a set of certifications where the system has to be rigorously tested to show it cannot be fooled, for example, by a 3D mask or by a mask with the eyes cut out. Then there are the NIST certifications, NIST 800-63, which is what certifies the biometric. Talk a little bit about NIST.
Vikram Subramaniam:
Yeah, NIST has the FRVT certification, which they’re now calling FRTE. And of course, there’s the organization iBeta, which is dedicated to testing biometrics. So we go through all of these agencies to make sure that we are not only passing their tests but also complying with the standards and regulations that govern the use of biometrics. It’s one thing to be able to capture a biometric and do it correctly, but can you do it in a privacy-preserving way? I think that differentiates us.
Michael Cichon:
All right. That brings up another thing. Okay. But real briefly, what’s the UK version, the UK…
Vikram Subramaniam:
DIATF.
Michael Cichon:
Right? So that’s there. FIDO2 is there. I know I’m forgetting a couple, but you mentioned privacy. A biometric is a pretty personal piece of information, along with my birthday and Social Security number, everything, right? So talk about privacy. What’s the big deal?
Vikram Subramaniam:
Yeah, I know we’ve been on the topic of figuring out liveness, whether you’re a live person and everything. But once you capture a biometric, where are you storing it? How are you storing it? This is where the 1Kosmos architecture comes into play: the private key is always in the possession of the user. It could be protected by a PIN, or it could be a private key stored within their device. Or, really, what we are able to do is calculate a key right at the edge by using your face. Your face is your key; that is what opens the door. It is really amazing technology: we are able to do that and manage your wallet for you. And think about what a wallet is.
It is just like your physical wallet, where you put different things. You can put in your face, you can put in your fingerprint, you can put in all the other biometrics, along with your FIDO credential and your smart card credential. If you want to put in your legacy password, you can put it in there. It’s simply a wallet that is in your possession and can be opened only by using certain keys.
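The “your face is your key” idea can be sketched as key derivation at the edge. This is a deliberate simplification with hypothetical names: raw biometric captures vary between readings, so real deployments rely on fuzzy extractors or secure enclaves rather than hashing a template directly.

```python
import hashlib
import hmac
import os

def derive_key(template: bytes, salt: bytes) -> bytes:
    """Derive a symmetric key at the edge from a (hypothetically stable)
    biometric template; the template itself never leaves the device."""
    return hashlib.pbkdf2_hmac("sha256", template, salt, 200_000)

def open_wallet(stored_tag: bytes, template: bytes, salt: bytes) -> bool:
    """The wallet opens only when the freshly derived key matches the tag
    bound to it at enrollment (constant-time comparison)."""
    return hmac.compare_digest(stored_tag, derive_key(template, salt))

# Enrollment: bind the wallet to a key derived from the user's face.
salt = os.urandom(16)
tag = derive_key(b"enrolled-face-template", salt)   # placeholder template bytes
```

The point of the sketch is that nothing server-side holds the biometric: only a derived tag exists, and only the live user can regenerate the matching key.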
Michael Cichon:
Yeah, I love that. Vik, that means two things to me. It means that, as a consumer, I control the data. If I want to delete the data, I delete it. If I want to use it, I determine at the point of use exactly what information I’m going to share, what I feel comfortable sharing. The other thing it means, for the IT and security people, is that introducing a better way to authenticate doesn’t introduce another vulnerability, namely a honeypot of customer information. Why have it when you don’t have to? Eliminating that honeypot is important to me, and I think it should be important to IT and security people. Have we left anything out? This feels like a pretty good conversation. One thing: we do have a new white paper coming out. It’s called Biometric Authentication Versus the Threat of Deepfakes. It’ll be on our website shortly, so look out for that. Any parting shots, Vik, any stone unturned on deepfakes?
Vikram Subramaniam:
Yeah, I would just say, look out for them. Just as it was with phishing emails earlier, now with deepfakes, make sure what you’re dealing with is real. Do your due diligence, and if you need help, talk to 1Kosmos.
Michael Cichon:
Absolutely. Yeah. I see deepfakes as kind of the latest salvo from the cybercrime community, attacking one of the creators of value on the digital side, and that creator of value is getting workers quick access to systems while plugging the holes, so that account takeover doesn’t lead to data breach or ransomware. There’s been a lot of news about ransomware recently. So yeah, some really interesting developments, and once again, here at 1Kosmos, we’re trying to stay ahead of that with our innovative solutions. So Vik, thanks for spending time with me today. Very much appreciate your time.
Vikram Subramaniam:
Absolutely. Thanks, Michael. That was great.