Deepfakes Can Now Fake a Pulse. That’s a Problem.

New research shows AI-generated videos can mimic real human pulses—making deepfakes even harder to detect.


Deepfakes have gotten frighteningly good.

From manipulated videos of politicians to nonconsensual fake porn, these AI-generated creations are evolving faster than detection systems can keep up.

Now, scientists warn the cat-and-mouse game has taken another turn in favor of the deceivers.

A study published in Frontiers in Imaging by researchers at Humboldt University of Berlin reveals that the latest generation of deepfakes can now exhibit realistic human heartbeats—a detail long considered a reliable tell for spotting fakes.


Study author Peter Eisert writes:

Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect.

Deepfakes are becoming physiologically convincing

Previously, many deepfake detection tools leaned on something called remote photoplethysmography (rPPG)—a technique originally used in telehealth that detects subtle changes in facial skin tone caused by blood flow. These signals were thought to be beyond the reach of generative AI.
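As background, an rPPG pipeline in its simplest form averages the green channel over the face region frame by frame, removes slow lighting drift, and looks for a dominant frequency in the plausible heart-rate band. The sketch below illustrates that idea in Python; the function name, the NumPy-only processing, and the fixed 0.7–4.0 Hz band are illustrative assumptions, not the study’s actual detection pipeline.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Estimate a heart rate in beats per minute from facial video.

    frames: NumPy array of shape (T, H, W, 3), RGB, already cropped
    to the face region. A minimal sketch; production detectors add
    face tracking, skin segmentation, and more robust signal models.
    """
    # 1. Spatially average the green channel, which carries the
    #    strongest blood-volume signal, giving one sample per frame.
    signal = frames[:, :, :, 1].mean(axis=(1, 2)).astype(np.float64)

    # 2. Subtract a one-second moving average to remove slow drift
    #    from lighting changes and head motion.
    window = max(int(fps), 1)
    baseline = np.convolve(signal, np.ones(window) / window, mode="same")
    signal = signal - baseline

    # 3. Pick the dominant frequency in the plausible heart-rate band,
    #    roughly 0.7-4.0 Hz (42-240 beats per minute).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0
```

A detector built on this idea flags a clip as fake when no clear peak stands out in that band; the new study shows high-quality deepfakes now produce exactly such a peak.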

But in the new study, deepfakes fooled detectors built specifically to spot this very signal. The researchers created 32 AI-generated videos modeled on real footage of human participants. Each fake video appeared lifelike and, crucially, registered a pulse convincing enough for rPPG-based detectors to pass it as genuine.

The researchers wrote:

Our experiments demonstrated that deepfakes can exhibit realistic heart rates, contradicting previous findings.

According to Popular Science, it appears the AI models “inherited” the heart-rate signals from the original source footage.

The deepfakes didn’t generate fake pulses—rather, they replicated the very real ones embedded in the data they were trained on.

Fake videos, real consequences

The findings highlight a major vulnerability in how we detect manipulated media. While some platforms are building tools that rely less on physical traits—like analyzing pixel-level patterns—others still depend heavily on biometric cues.

The study adds urgency to efforts from tech giants like Google and Adobe to embed digital watermarks and authentication data in media files, so consumers can more easily tell real from fake.
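To make that idea concrete: authentication data in its simplest form is a cryptographic signature over the media bytes, so any edit to the file invalidates it. The toy sketch below uses a shared-key HMAC for brevity; real provenance schemes such as the Adobe-backed C2PA standard use public-key certificates and embed a signed manifest in the file itself, so every name here is illustrative.

```python
import hashlib
import hmac

# Hypothetical publisher key; real schemes sign with a private key whose
# certificate a verifier can check, rather than a shared secret.
SIGNING_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Return an authentication tag bound to these exact bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """True only if the media is byte-identical to what was signed."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

clip = b"\x00\x01 example video payload"   # stand-in for real file bytes
tag = sign_media(clip)
assert verify_media(clip, tag)             # untouched file verifies
assert not verify_media(clip + b"x", tag)  # any tampering fails
```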

But even these tools face challenges in a rapidly evolving arms race.

As Eisert puts it:

Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video.

That’s a chilling reminder: the next deepfake you see might not just look real—it might feel real, too.

