GetReal Security Targets Deepfake-Era Identity Risks

GetReal Security is doubling down on deepfake detection and human verification, addressing identity risks emerging from generative AI's rapid expansion into voice, video, and image synthesis.


As generative AI tools make it trivial to clone voices, swap faces, and fabricate convincing video, enterprises face a new class of identity-based attacks that bypass traditional security controls. GetReal Security is sharpening its focus on this threat, expanding its platform to address deepfake-era identity risks and human verification challenges that have become central concerns for CISOs, fraud teams, and trust-and-safety leaders.

The Deepfake Identity Crisis

Voice cloning systems now require only seconds of reference audio to produce convincing impersonations. Face-swapping and lip-sync models can generate real-time video deepfakes convincing enough to pass as live participants on video conferencing platforms. Combined, these capabilities have fueled a wave of high-profile attacks — from the widely reported $25 million Arup heist, where attackers impersonated a CFO on a video call, to a growing volume of voice-cloning scams targeting executives, call centers, and KYC workflows.

Traditional identity verification — passwords, knowledge-based authentication, even some forms of liveness detection — was not designed for a world where adversaries can synthesize a target's likeness on demand. GetReal Security's strategic pivot reflects an industry-wide recognition that verifying humanness is now as important as verifying identity.

GetReal's Technical Approach

GetReal Security, founded by deepfake researcher Hany Farid and a team of synthetic media experts, has built its detection stack around multi-signal analysis rather than a single classifier. The company's platform combines several detection modalities:

  • Provenance signals: Examining metadata, compression artifacts, and content credentials (including C2PA-style manifests) to establish whether media originated from a trusted capture device.
  • Physical and physiological inconsistencies: Detecting lighting mismatches, unnatural eye reflections, blood-flow anomalies, and micro-expression irregularities that current generative models struggle to render perfectly.
  • Model-artifact detection: Identifying frequency-domain fingerprints and statistical signatures characteristic of diffusion models, GANs, and voice synthesis pipelines.
  • Behavioral and contextual analysis: Flagging anomalies in conversational patterns and meeting metadata that suggest a non-human participant.
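The model-artifact modality can be illustrated with a toy frequency-domain check. The sketch below is a deliberately crude heuristic, not GetReal's method: it measures how much of an image's spectral energy sits in high-frequency bands, a statistic that some generative pipelines distort in characteristic ways. The function name, cutoff value, and use of a single scalar ratio are all illustrative assumptions; production detectors learn features rather than thresholding one number.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    Illustrative only: real model-artifact detectors use learned
    frequency-domain features, not a single hand-picked ratio.
    """
    # 2-D power spectrum of a grayscale image, shifted so DC sits at the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the spectrum center (0 = DC, 1 = corner)
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    high = spectrum[r > cutoff].sum()
    return float(high / spectrum.sum())
```

A flat image puts all energy at DC and scores near zero, while broadband content scores high; a detector would compare such statistics against distributions learned from known-real and known-synthetic media.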

This ensemble approach mirrors a broader shift across the detection industry, where vendors increasingly acknowledge that no single model can keep pace with the rapid release cadence of new generative systems. By stacking complementary signals, detectors aim to remain robust even as individual generators close specific tells.
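The signal-stacking idea above can be sketched as a simple weighted fusion. Everything here is an assumption for illustration: the modality names mirror the bullets, and the scores, weights, and threshold are invented; a real platform would use calibrated probabilities and a learned fusion model rather than a fixed weighted average.

```python
from dataclasses import dataclass

@dataclass
class DetectorScore:
    name: str      # which modality produced this score
    score: float   # 0.0 = likely authentic, 1.0 = likely synthetic
    weight: float  # relative trust placed in this signal

def fuse_scores(scores: list[DetectorScore],
                threshold: float = 0.6) -> tuple[float, bool]:
    """Combine per-modality scores into one verdict via a weighted average."""
    total_weight = sum(s.weight for s in scores)
    fused = sum(s.score * s.weight for s in scores) / total_weight
    return fused, fused >= threshold

# Hypothetical reading from one analyzed video frame:
verdict = fuse_scores([
    DetectorScore("provenance", 0.2, 1.0),      # valid-looking credentials
    DetectorScore("physiological", 0.9, 2.0),   # blood-flow anomaly
    DetectorScore("model_artifact", 0.8, 2.0),  # diffusion-style fingerprint
    DetectorScore("behavioral", 0.7, 1.0),      # odd meeting metadata
])
```

The robustness argument falls out of the structure: a generator that learns to suppress one tell (say, frequency fingerprints) lowers only one term in the average, so the remaining signals can still push the fused score over the threshold.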

Human Verification as a New Security Primitive

GetReal's emphasis on "human verification" reflects an emerging category distinct from traditional identity proofing. Where identity verification answers "are you the right person?", human verification answers a more foundational question: "is there a real person on the other end of this interaction at all?"

This matters for several enterprise workflows:

  • Executive communications: Real-time deepfake detection during video calls to prevent CEO-fraud and wire-transfer scams.
  • Customer onboarding and KYC: Detecting synthetic identities and injection attacks that bypass liveness checks by feeding pre-generated video into the camera pipeline.
  • Contact center authentication: Identifying cloned voices attempting to reset credentials or authorize transactions.
  • Insider threat and HR processes: Verifying that remote interviewees are real candidates rather than deepfake operators — a documented North Korean IT-worker tactic.

The Competitive Landscape

GetReal operates alongside a growing field of deepfake-defense vendors including Reality Defender, Truepic, Pindrop, and Clarity, each emphasizing different blends of detection, provenance, and authentication. The market is being shaped by enterprise demand, regulatory pressure (EU AI Act labeling requirements, evolving U.S. state-level deepfake laws), and high-profile incidents that have moved synthetic media from a research curiosity to a board-level risk.

One persistent challenge: detection accuracy degrades as generative models improve. Recent research has shown that state-of-the-art video and audio generators are increasingly able to evade open-source detectors. Vendors like GetReal counter this with continuous model retraining, red-teaming against the latest generators, and layered defenses that don't rely on any single detector being perfect.

Why It Matters

GetReal Security's sharpened focus is a signal that the deepfake detection market is maturing from point solutions into integrated platforms addressing the full identity lifecycle. As synthetic media tooling becomes cheaper and more capable, enterprises will need detection embedded directly into communication, onboarding, and authentication workflows — not bolted on after the fact. The companies that succeed will be those that combine technical rigor with operational integration, treating human verification as a first-class security primitive in the generative AI era.
