
Kaarel Kotkas, CEO and Founder of Veriff – Interview Series


Kaarel Kotkas is the CEO and Founder of Veriff and serves as the strategic thinker and visionary behind the company. He leads Veriff’s team in staying ahead of fraud and competition in the rapidly changing field of online identification. Known for his energy and enthusiasm, Kotkas encourages the team to uphold integrity in the digital world. In 2023, he was recognized in the EU Forbes 30 Under 30, and in 2020, he was named the EY Entrepreneur of the Year in Estonia. Nordic Business Report has also included him among the 25 most influential young entrepreneurs in Northern Europe.

Veriff is a global identity verification company that helps online businesses reduce fraud and comply with regulations. Using AI, Veriff automatically verifies identities by analyzing various technological and behavioral indicators, including facial recognition.

What inspired you to found Veriff, and what challenges did you face in building an AI-powered fraud prevention platform?

My motivation for Veriff came from witnessing firsthand how easy it was for people online to pretend to be someone else. At 14, while buying biodegradable string on eBay for my family’s farm, I effortlessly bypassed PayPal’s 18+ age restriction with a touch of Photoshop, changing the birth year on a copy of my identity document.

I continued to see the problem of online users misrepresenting their identity to pass age checks and other security measures. It was due to these experiences that I came up with the idea for Veriff.

As for challenges, a year after founding the company, we gave our team the weekend off. That same day, we deployed a bug fix that fully interrupted our monitoring capabilities, and we didn’t notice that the service had shut itself down until Saturday morning. Come Monday morning, I had to meet face-to-face with our biggest customer, who had lost thousands of dollars in revenue. I was transparent in that meeting and explained the mistakes on our end. We shook hands and went back to work. What I learned is that, as founders and business leaders, we must expect and prepare for challenges. Transparency is key to building trust. And a track record of overcoming challenges can prove even more valuable, because it shows you can tackle problems and are resilient.

With deepfakes becoming more sophisticated, especially in political settings, what do you think are the most significant risks they pose to elections and democracy?

This election season, the integrity of the voting process is in jeopardy. AI can analyze vast amounts of data to identify voter preferences and trends, enabling campaigns to tailor messages and target voters with the issues they care most about. Bad actors are well equipped to create false narratives of candidates performing actions they never did or making statements they never said, thus damaging their reputations and misleading voters.

To date, we have seen deepfakes of celebrities endorsing presidential candidates and a fake Biden robocall. While technology does exist to help distinguish between AI-generated content and the real deal, it’s not viable to implement broadly at scale. With the high stakes and election credibility on the line, something must be done to preserve public trust. The future growth of the digital economy and its fight against digital fraud centers around proven identities and authentic and verified online accounts.

Deepfakes can manipulate not only images but also voices. Do you believe one medium is more dangerous than the other when it comes to deceiving voters?

In general, especially in the U.S. context of elections, both should be treated equally as threats to democracy. Our most recent report, the Veriff Fraud Index 2024: Part 2, found that 74% of respondents in the US are worried about AI and deepfakes impacting elections.

The evolution of AI has turbocharged the threat to security, not only in the US but around the globe, during this year’s elections. Whether it be deepfake images, AI-generated voices in robocalls trying to skew voter opinions, or fabricated videos of candidates, all of these provoke warranted concern.

Let’s look at the bigger picture here. When there are lots of data points available, it’s easier to assess the “threat level.” A single image might not be enough to tell if it’s fraudulent, but a video provides more clues, especially if it has audio. Adding details like the device used, location, or who recorded the video increases confidence in its authenticity. Fraudsters always try to limit the scope of information because it makes it easier to manipulate. I view robocalls as more dangerous than deepfakes because creating fake audio is easier than generating high-quality fake videos. Plus, using LLMs makes it possible to adjust fake audio during calls, making it even more convincing.
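To make this idea concrete, here is a minimal sketch of how corroborating signals might be combined into a single authenticity confidence score. The signal names and weights are illustrative assumptions for this article, not Veriff’s actual scoring model.

```python
# Hypothetical sketch of the "more data points, more confidence" idea.
# Signal names and weights are illustrative assumptions only.

SIGNAL_WEIGHTS = {
    "image": 0.15,             # a single still image carries little evidence on its own
    "video": 0.25,             # motion reveals artifacts that stills hide
    "audio": 0.20,             # lip-sync and voice consistency add another check
    "device_metadata": 0.20,   # known capture device and session details
    "location": 0.10,          # plausible geolocation for the asserted identity
    "recorder_identity": 0.10, # a verified account behind the recording
}

def authenticity_confidence(available_signals: set[str]) -> float:
    """Return a 0..1 confidence score based on which corroborating signals exist."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items()
               if name in available_signals)

# A lone image scores low; a video with audio, device data, and a known
# recorder scores much higher. Fraudsters try to keep the signal set small.
print(authenticity_confidence({"image"}))                              # 0.15
print(authenticity_confidence({"video", "audio", "device_metadata",
                               "recorder_identity"}))                  # 0.75
```

The point of the sketch is simply that every additional, independent signal a fraudster would have to fake raises the bar for a convincing forgery.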

Given the upcoming elections, what should governments and election commissions be most concerned about regarding AI-driven disinformation?

Governments and election commissions need to understand the potential scope of deepfake capabilities, including how sophisticated and far more convincing these instances of fraud have become. Deepfakes are especially effective when deployed against enterprises with disjointed and inconsistent identity management processes and poor cybersecurity, making it more critical today to implement robust security measures or have a layered approach to security.

Still, there is no one-size-fits-all solution, so a coordinated, multi-faceted approach is key. This could include robust and comprehensive checks on asserted identity documents, counter-AI to identify manipulation of incoming images, especially concerning remote voting, and, most importantly, identifying the creators of deepfakes and fraudulent content at the source. The responsibility of verifying votes lies with governments and electoral commissions, as well as technology and identity providers.

What role can AI and identity verification technologies like Veriff play in countering the impact of deepfakes on elections and political campaigns?

AI is a threat and an opportunity. Nearly 78% of U.S. decision-makers have seen an increase in the use of AI in fraudulent attacks over the past year. On the flip side, nearly 79% of CEOs use AI and ML in fraud prevention. In a time when fraud is on the rise, prevention strategies must be holistic – no single tool can combat such a multifaceted threat. Still, AI and identity verification can empower businesses and users with a multilayered stack that brings in biometrics, identity verification, crosslinking, and other solutions to get ahead of fraudsters.

At Veriff, we use our own AI-powered technology to build our deepfake detection capabilities. This means our tools learn and improve every time we encounter a deepfake. Taking large amounts of data and searching for patterns that have appeared before to determine future outcomes relies on both automated technologies and human knowledge and intelligence. Humans have a better understanding of context and can identify anomalies, creating a feedback loop that enhances the AI models. Combining different insights and expertise into a comprehensive approach to identity verification and deepfake detection has allowed Veriff and its customers to stay ahead of the curve.
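To illustrate the feedback loop described here, below is a minimal, hypothetical sketch of a human-in-the-loop review flow: the model auto-decides clear cases, routes ambiguous sessions to human reviewers, and stores confirmed labels for the next round of model training. The class names, thresholds, and methods are assumptions for illustration and do not describe Veriff’s internal systems.

```python
# Hypothetical human-in-the-loop feedback pattern: model scores a session,
# humans review edge cases, and confirmed labels feed the next model update.

from dataclasses import dataclass, field

@dataclass
class Session:
    session_id: str
    model_score: float               # model's deepfake probability for this session
    human_label: bool | None = None  # True = confirmed deepfake, set by a reviewer

@dataclass
class FeedbackLoop:
    review_threshold_low: float = 0.3
    review_threshold_high: float = 0.8
    training_queue: list[Session] = field(default_factory=list)

    def route(self, session: Session) -> str:
        """Auto-decide clear cases; send ambiguous ones to human review."""
        if session.model_score >= self.review_threshold_high:
            return "auto_reject"
        if session.model_score <= self.review_threshold_low:
            return "auto_approve"
        return "human_review"

    def record_review(self, session: Session, is_deepfake: bool) -> None:
        """Store the reviewer's decision so it can be used to retrain the model."""
        session.human_label = is_deepfake
        self.training_queue.append(session)

loop = FeedbackLoop()
s = Session("abc123", model_score=0.55)
if loop.route(s) == "human_review":
    loop.record_review(s, is_deepfake=True)  # reviewer confirms the anomaly
print(len(loop.training_queue))  # labeled examples ready for the next model update
```

The design choice the sketch highlights is the split between automation for clear-cut cases and human judgment for ambiguous ones, with every human decision captured as training data.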

How can businesses and individuals better protect themselves from being influenced by deepfakes and AI-driven disinformation?

Protecting yourself from being influenced by deepfakes and AI-driven disinformation starts with education and cognizance of AI’s expansive capabilities, coupled with proven identities and authentic, verified online accounts. To determine if you can trust a source, you must look at the cause rather than the symptoms. We must confront the problem at its source, where and by whom these deepfakes and fraudulent resources are being generated.

Consumers and businesses must only trust information from verified sources, such as verified social media platform users and well-credited news outlets. In addition, using fact-checking websites and looking for anomalies in audio or video clips—unnatural movements, strange lighting, blurriness, or mismatched lip-syncing—are just some of the ways that businesses can protect themselves from being misled by deepfake technology.

Do you think there’s enough public awareness about the dangers of deepfakes? If not, what steps should be taken to improve understanding?

We’re still in the early phase of growing awareness about AI and educating people on its potential.

According to the Veriff Fraud Index 2024: Part 2, over a quarter (28%) of respondents have experienced some kind of AI- or deepfake-generated fraud over the past year, a striking result for an emerging technology and an indication of how quickly this threat is growing. More importantly, the real number could be much higher, as 20% say they don’t know whether they have been targeted. Given the sophisticated nature of AI-generated fraud attempts, it is highly likely that many respondents have been targeted without knowing it.

Individuals should be cautious when encountering suspicious emails or unexpected phone calls from unfamiliar sources. Requests for sensitive information or money should always be met with skepticism, and it’s crucial to trust your instincts and seek clarity if something feels wrong.

What role do you see regulatory bodies playing in the fight against AI-generated disinformation, and how can they collaborate with companies like Veriff?

Given the extent to which deepfake technology has been used to deceive the public and amplify disinformation efforts, and with the U.S. election still underway, it’s yet to be seen how great an impact this technology will have on that election as well as broader society. Still, regulatory bodies are taking action to mitigate the threats of deepfake technology.

A lot of responsibility for mitigating the impact of disinformation falls on the owners of the platforms we use most often. For instance, leading social media companies must take more responsibility by implementing robust measures to detect and prevent fraudulent attacks and to safeguard users from harmful misinformation.

How do you see Veriff’s technology evolving in the next few years to stay ahead of fraudsters, particularly in the context of elections?

In our increasingly digital world, the internet’s future hinges on online users’ ability to prove who they are, so that businesses and users alike can confidently interact with each other. At Veriff, trust is synonymous with verification. We aim to ensure that digital environments foster a sense of safety and security for the end user. This goal will require technology to evolve to confront today’s challenges, and we’re already seeing this with the wider acceptance of facial recognition and biometrics. Data shows that consumers view facial recognition and biometrics as the most secure way of logging into an online service.

Looking ahead, we envision this trend continuing and a future where rather than users constantly entering and re-entering their credentials as they perform different tasks online, they have “one reusable identity” that represents their persona across the web.

To bring us a step closer to this goal, we recently updated our Biometric Authentication solution to improve accuracy and user experience and to strengthen security for greater identity assurance. These latest advancements in biometric technology have enabled our solution to adapt to individual user behaviors, ensuring continuous user authentication rather than authentication during a single session only. This advancement, in particular, represents forward progress on our journey toward one reusable digital identity.

Veriff is recognized for its global reach in fraud prevention. What makes Veriff’s technology stand out in such a competitive space?

Veriff’s solution offers speed and convenience as it’s 30x more accurate and 6x faster than competing offerings. We have the largest identity document specimen database in the IDV/Know Your Customer (KYC) industry. We can verify people against 11,500 government-issued ID documents from more than 230 countries and territories, in 48 different languages. Additionally, this convenience and reduced friction enable organizations to convert more users, mitigate fraud, and comply with regulations. We also have a 91% automation rate, and 95% of genuine users are verified successfully on their first try.

Veriff was one of the first IDV companies to obtain the Cyber Essentials certification. Cyber Essentials is an effective, government-backed standard that protects against the most common cyberattacks. Obtaining this certification demonstrates that Veriff takes cybersecurity seriously and has taken steps to protect its data and systems; it is a testament to our unwavering commitment to cybersecurity and our dedication to protecting customers’ data. Most recently, we completed the ISO/IEC 30107-3 iBeta Level 2 Compliance evaluation for biometric passive liveness detection, an independent external validation confirming that Veriff’s solution meets the highest standard of biometric security.

Thank you for the great interview; readers who wish to learn more should visit Veriff.



