Modern ID verification can involve far more than matching your face to a passport. Automated screening, sanctions checks and risk scoring increasingly run in the background — and users are rarely told how many decisions are being made about them or how long that data is kept.
When a website asks you to “verify your identity,” it sounds simple enough. Upload a photo of your ID. Take a quick selfie. Wait for approval.
But behind that frictionless experience can sit a system far more complex than most users realise — one that runs dozens, sometimes hundreds, of automated checks before you ever see a green tick.
That’s the broader issue raised by a recent anonymous investigation into Persona, the identity verification company used by major platforms including OpenAI. Strip away the dramatic tone, and one uncomfortable truth remains: modern identity verification is no longer just about confirming your name. It’s about scoring, screening and risk profiling at scale.
Identity checks are no longer “just” identity checks
In today’s compliance-driven tech landscape, ID verification systems can include:
- Facial comparison between your selfie and your ID photo
- Liveness detection (checking you’re not a deepfake or photo replay)
- Sanctions screening
- Politically exposed person (PEP) checks
- Adverse media scans
- Device fingerprinting
- Duplicate detection
- Fraud-risk scoring
Some systems also allow recurring screening, meaning you can be checked again later without uploading anything new.
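To make the scale of this concrete, here is a minimal sketch of how a pipeline like the one described above might aggregate individual check results into a single risk score. All names, weights, and the scoring formula are illustrative assumptions for this article, not Persona's actual API or any provider's real scoring logic.

```python
# Hypothetical sketch: aggregating verification checks into a risk score.
# Check names and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str       # e.g. "selfie_comparison", "sanctions_screen"
    passed: bool
    weight: float   # how strongly a failure of this check raises risk

def risk_score(results: list[CheckResult]) -> float:
    """Return a score in [0, 1]; 0.0 means every check passed."""
    total = sum(r.weight for r in results)
    failed = sum(r.weight for r in results if not r.passed)
    return failed / total if total else 0.0

checks = [
    CheckResult("selfie_comparison", True, 3.0),
    CheckResult("liveness", True, 3.0),
    CheckResult("sanctions_screen", True, 2.0),
    CheckResult("device_fingerprint_match", False, 1.0),
]
print(round(risk_score(checks), 3))  # one failed low-weight check -> 0.111
```

Even in this toy version, the point the article makes is visible: a single failed check can nudge a score the user never sees, for reasons they are never told.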
For banks and financial platforms, that level of scrutiny is expected. But when similar systems are used for AI tools or digital services, the lines become blurrier. Users may not understand that what feels like a one-time verification could be part of a much broader compliance engine.
The “black box” problem
The real tension isn’t necessarily that checks exist — it’s that users don’t know:
- How many checks ran
- What triggered a warning
- Whether their face was compared against political or sanctions databases
- How long their biometric data is stored
- Whether they can challenge or correct a false flag
If you’re denied access, the explanation is often vague or nonexistent. That lack of transparency fuels suspicion, even when systems are operating within legal compliance frameworks.

In other words, it is secrecy, not necessarily wrongdoing, that erodes trust.
When compliance tools feel like surveillance
There’s a thin line between anti-fraud protection and something that feels like surveillance.
For example, screening against sanctions lists or politically exposed person databases is standard practice in financial compliance. But if your selfie is also being checked for resemblance to public figures, or your device fingerprint logged alongside identity data, it can start to feel less like “verification” and more like profiling.
Most companies argue these measures are designed to prevent fraud, money laundering, and misuse. That may well be true. But from a user’s perspective, the experience is identical whether the system is built for safety or for suspicion. Either way, you are being evaluated by algorithms you cannot see.
A debate that isn’t going away
AI platforms are moving toward stronger identity checks for advanced access, abuse prevention, and regulatory compliance. Governments are tightening rules around digital identity and online harms. Meanwhile, fraud techniques and deepfake technology keep improving.
All of that pushes companies toward more aggressive screening.
The question isn’t whether identity checks will expand. They will. The real question is whether transparency expands alongside them.
If companies want public trust, they’ll need to clearly explain:
- What data is collected
- What automated decisions are made
- How long biometric data is retained
- Whether re-screening happens
- What appeals process exists
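One way to picture that clarity is a machine-readable disclosure covering exactly the points above. The sketch below is hypothetical: the field names and values are assumptions made for this article, and no verification provider publishes this exact schema.

```python
# Hypothetical machine-readable transparency disclosure, mirroring the
# five points listed above. Field names are illustrative assumptions.
import json

disclosure = {
    "data_collected": ["id_photo", "selfie", "device_fingerprint"],
    "automated_decisions": [
        "facial_comparison",
        "liveness",
        "sanctions_screen",
        "pep_check",
        "risk_score",
    ],
    "biometric_retention_days": 30,
    "recurring_screening": True,
    "appeals_contact": "privacy@example.com",
}

print(json.dumps(disclosure, indent=2))
```

A published document in roughly this shape would let users, and auditors, answer the five questions above without reverse-engineering server names and leaked files.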
Without that clarity, every technical breadcrumb, every exposed file, and every oddly named server will be interpreted in the worst possible light.
Because in 2026, the fear isn’t that companies verify who you are.
It’s that they’re building a risk profile of who you might be.
Sources: Persona, FedRAMP, FinCEN, public compliance documentation