Itai Liptz on How AI Is Changing the Way We Trust Businesses
The way trust operates in business is changing. Instead of relying on assumptions, people now expect confirmation. Increasingly, before entering a partnership, signing a deal, or investing capital, they want proof that the story holds up. And in many cases, they’re using artificial intelligence to get it. For professionals like Itai Liptz, who works at the intersection of global risk and investigation, AI has become more than a technical resource. It now plays an early role in how credibility is evaluated.
In an interview with Ideamensch, Liptz pointed to the rise of AI-powered due diligence and investigative tools as one of the most meaningful developments in his field. These technologies, he noted, allow investigators, investors, and business leaders to “surface insights that were previously buried in noise.” Instead of relying on instinct or informal vetting, decision-makers now have the ability to run deep checks: automated, scalable, and increasingly sophisticated.
This matters because of how quickly reputations can crumble when details don’t line up. In a digital business environment where transactions often span jurisdictions and involve opaque structures, the burden of verification has grown. People are no longer just asking, “Do I trust this person?” but “Can I verify what they’ve told me without having to take their word for it?”
AI is making that possible in ways that weren’t feasible even five years ago. It’s not a magic wand. But for a growing number of professionals, it’s the first-pass filter that determines whether a deeper relationship is even worth pursuing.
How AI Became a First Step, Not a Fallback
Until recently, background checks were treated as a final step in a deal, performed after trust was already extended. That’s changing. AI is helping to shift these practices earlier in the process, allowing individuals and organizations to vet potential partners before resources are committed. The goal is simple: avoid wasted time and mitigate risk before it becomes costly.
A number of tools now assist in this pre-engagement screening. Platforms like Sayari, OpenCorporates, and Truth Technologies offer ways to map ownership structures, identify hidden links, and flag irregularities in regulatory filings. Many of these platforms draw from open-source databases. AI enables them to make connections across jurisdictions, corporate entities, and even language barriers at a scale that would be impossible manually.
Criminal history and creditworthiness are still checked, but AI tools now go further by identifying links to sanctioned entities, politically exposed persons, and businesses that share IP addresses or website metadata, signals that could point to coordinated or deceptive behavior. Some advanced platforms also monitor obscure or unstructured sources across the public internet, though the extent to which these include dark web content varies and is often limited to specialized providers.
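The basic shape of a screening step like this can be sketched in a few lines. The watchlist entries and the exact-match rule below are purely illustrative; real platforms rely on large curated datasets and far more sophisticated entity resolution.

```python
# Toy sanctions/PEP screening: normalize names and check a counterparty
# against a small, invented watchlist. Real tools use curated lists and
# fuzzy entity resolution; this only shows the basic idea.

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name)
    return " ".join(cleaned.lower().split())

# Hypothetical watchlist entries (not real sanctioned parties).
WATCHLIST = {normalize(n) for n in [
    "Example Trading Ltd.",
    "Acme Offshore Holdings",
]}

def screen(counterparty: str) -> bool:
    """Return True if the normalized name appears on the watchlist."""
    return normalize(counterparty) in WATCHLIST

print(screen("example TRADING ltd"))   # matches after normalization
print(screen("Globex Industries"))     # no match
```

Even this toy version hints at why normalization matters: without it, trivial differences in casing or punctuation would let a listed name slip through.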
Organizations are increasingly embracing this approach. According to SecureWorld, 63% of organizations report using or piloting AI tools specifically for vendor risk scoring, contract analysis, and continuous monitoring. This reflects a growing shift toward data-supported screening processes even before formal relationships begin.
What These Systems Are Designed to Find
One of the most tangible benefits of AI in this space is pattern detection. Many of the biggest corporate scandals in recent years (whether financial, legal, or ethical) had warning signs that were missed because they were buried in complexity. AI is increasingly adept at connecting those dots.
Professionals in Liptz’s field often rely on AI to reveal links between entities that would otherwise go unnoticed: overlapping directorships, inconsistent incorporation histories, or asset transfers that raise red flags. These signals may not always mean fraud, but they’re often indicators that further scrutiny is warranted.
A good example involves shell companies that exist solely to obscure ownership. AI tools that cross-reference addresses, registries, and even email domains can help unmask these connections. In one widely reported case, investigative journalists using AI-assisted platforms discovered that multiple companies tied to a single fraud scheme had been registered under different names in different countries but were operated by the same people using reused digital infrastructure.
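The cross-referencing described above can be illustrated with a short sketch. The registry records and the choice of linking signal (a shared registration email domain) are invented for illustration; production systems correlate many more signals across much messier data.

```python
# Group company registrations by a reused digital signal -- here, the
# email domain used at registration. Entities sharing a domain across
# jurisdictions surface as a cluster worth closer scrutiny.
from collections import defaultdict

# Hypothetical registry records: (company name, jurisdiction, contact email).
records = [
    ("Northgate Imports Ltd", "UK", "admin@ng-holdings.example"),
    ("NG Logistics GmbH",     "DE", "admin@ng-holdings.example"),
    ("Seaview Trading SA",    "PA", "info@seaview.example"),
    ("Northgate Freight LLC", "US", "ops@ng-holdings.example"),
]

clusters = defaultdict(list)
for name, jurisdiction, email in records:
    domain = email.split("@")[1]
    clusters[domain].append((name, jurisdiction))

# Flag any domain linked to companies in more than one jurisdiction.
for domain, companies in clusters.items():
    if len({j for _, j in companies}) > 1:
        print(domain, "->", companies)
```

In this made-up example, three companies registered under different names in three countries share one email domain, which is exactly the kind of reused infrastructure the reported investigation relied on.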
Some tools go beyond surface-level checks by layering on behavioral indicators such as changes in business activity, patterns in vendor payments, or unusual volumes of contract awards. When this data is aggregated, it doesn’t provide a verdict, but it gives investigators a more focused place to start. As Liptz puts it, the goal isn’t to automate judgment. It’s to improve the quality of the questions being asked.
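One way to picture that aggregation is a simple weighted score used to rank entities for review rather than to pass judgment. The indicator names and weights here are invented, not drawn from any real platform.

```python
# Aggregate behavioral indicators into a weighted priority score.
# The output ranks where investigators should look first -- it is
# not a verdict. Indicators and weights are illustrative only.

WEIGHTS = {
    "abrupt_activity_change": 3,
    "unusual_vendor_payments": 2,
    "contract_award_spike": 2,
    "registry_inconsistency": 4,
}

def review_priority(flags: set[str]) -> int:
    """Sum the weights of the indicators that fired for an entity."""
    return sum(WEIGHTS.get(flag, 0) for flag in flags)

score = review_priority({"registry_inconsistency", "contract_award_spike"})
print(score)  # 6 -> higher-scoring entities get human attention first
```

The design choice mirrors the point in the text: the score narrows the field and sharpens the questions, while the interpretation stays with the investigator.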
While none of this replaces traditional investigative work, it shortens the distance between suspicion and clarity. That’s become especially useful in sectors where partnerships form quickly or span jurisdictions that may lack transparency. AI isn’t perfect, but it tilts the odds in favor of better-informed decisions.
Why More People Are Using These Tools—And What They’re Hoping to See
The use of AI in due diligence reflects a broader shift toward accountability and transparency. More organizations—and individuals—are asking not just whether a potential partner is reputable, but whether they’re verifiable.
Investigative journalists use these platforms to confirm source credibility and trace corporate networks. Procurement teams deploy them to screen vendors for undisclosed affiliations or reputational concerns. Nonprofits, especially those distributing large grants or working internationally, are also turning to AI tools to vet recipients and ensure compliance with sanctions or funding guidelines.
Professionals working in corporate investigations say that this type of AI-enhanced vetting is now becoming routine. When discrepancies emerge, such as a mismatch between claimed subsidiaries and official filings, it often prompts additional inquiries. Even in the absence of wrongdoing, the presence of incomplete or misleading disclosures can be enough to delay or derail a partnership.
Meanwhile, most organizations aren’t building these tools in-house. A joint report from MIT Sloan and BCG found that 78% of companies rely on third-party AI systems—over half of them exclusively—which has introduced new kinds of risk when vetting fails or data quality is poor.
In addition to uncovering fraud, these tools are increasingly used to confirm when everything is in order—helping to build trust through verification. That has become part of how credibility is earned, particularly in global business environments where face-to-face relationships aren’t always possible.
What AI Still Can’t Do—and Where Human Judgment Remains Essential
Even as these tools grow more capable, their limitations remain clear. AI can surface inconsistencies, but it can’t explain intent. It can raise questions, but it can’t interpret context. This is why professionals like Liptz emphasize the continued need for human expertise.
False positives remain a concern, especially when data sources are outdated or incomplete. A flagged name might belong to someone else with a similar profile. An unusual financial transaction might be fully legal but look suspicious in isolation. And in some parts of the world, public records are either unreliable or manipulated, which can distort algorithmic outputs.
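The false-positive problem shows up even in a basic string-similarity check. The names and threshold below are hypothetical; real screening systems use richer matching, but the failure mode is the same.

```python
# Why name screening produces false positives: a generic similarity
# measure (difflib's SequenceMatcher) can score two different people's
# names above a plausible match threshold.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

flagged = "Jon Andersen"     # name on a hypothetical watchlist
candidate = "John Anderson"  # a different person being screened

score = similarity(flagged, candidate)
print(round(score, 2))
# A threshold of, say, 0.85 would flag this pair even though the names
# may belong to unrelated people -- hence the need for human review.
```

This is the mechanical reason a flagged name "might belong to someone else with a similar profile": similarity scores measure strings, not identities.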
Another challenge is nuance. A donation to a politically sensitive cause may be routine in one jurisdiction and controversial in another. AI isn’t well-equipped to handle those cultural or legal distinctions, particularly when training data reflects a narrow set of norms. Investigators still need to bring experience, context, and judgment to the table.
Still, the interest in expanding AI’s role is clear. A 2023 analysis by SkyQuest Global, cited by Panorays, found that 39% of companies were already using AI for risk management—and another 24% planned to adopt it within two years.
Liptz describes AI as a “stronger starting point,” not a replacement for real analysis. That framing is useful. The best investigations still involve people who know how to dig, question, and connect dots. AI can assist with that work, but it can’t replace the mindset that makes it effective.
Itai Liptz: A New Baseline for Transparency
What used to be exceptional is now expected. In many industries, undergoing a reputational check isn’t seen as intrusive. It’s seen as routine. Businesses that are confident in their operations are beginning to lean into that shift, offering more documentation up front and preparing proactively for questions that might arise.
Some companies now publish their beneficial ownership information, ESG certifications, and regulatory histories on their websites. Others make it clear that they’ve undergone independent vetting. These moves don’t eliminate scrutiny, but they help set the tone. They show a willingness to be examined.
The availability of AI tools means that scrutiny is no longer limited to large firms with dedicated investigative teams. Smaller organizations, independent contractors, and even concerned citizens can run basic checks using freely available databases and open-source platforms. That kind of access changes the dynamic. Trust can be earned or lost before a conversation even begins.
For Liptz, this is part of a broader evolution. It’s not that people trust less; it’s that they verify more. And when something is hidden, it now raises a different kind of question: why? AI hasn’t replaced trust, but it has redrawn the path by which it’s built.