Major chat platform just rolled out mandatory facial verification for age-restricted channels. Scan your face. Prove you're over 18. Company policy.
Three hours later, bypass methods were already circulating. Deepfake injection. 3D printed masks. Video loop exploits. Pre-recorded face swaps. The entire facial recognition stack compromised before most users even saw the notification.
This is about liveness detection, anti-spoofing measures, computer vision exploitation, and the fundamental problem with biometric verification when the capture device is client-controlled.
Red team shows what breaks. Blue team shows what stops it. Arms race documented in real time.
Red Team: Breaking Facial Recognition Systems
The attack surface is client-side video capture: JavaScript API, WebRTC stream, local processing before transmission. User controls the camera, the lighting, the environment, and the video feed. Every client-side biometric verification is defeatable. The question is not if but how much effort.
Method 1: Video Loop Injection
You need OBS Studio, a pre-recorded video of yourself moving your head naturally, and a virtual camera driver.
Facial recognition liveness detection checks for head movement, blinking, expression changes, and lighting variation from different angles. Record thirty seconds covering all of those. Replay it forever.
# Install OBS Studio
# Install OBS Virtual Camera plugin
# Record yourself:
# - Turn head left 45°
# - Turn head right 45°
# - Tilt up 30°
# - Tilt down 30°
# - Blink naturally every 3-5 seconds
# - Smile, neutral, smile
# - Total duration: 30 seconds looped
# OBS Setup:
# Sources → Video Capture Device → Select your real webcam
# Record 30-second natural movement video
# Save as verification_loop.mp4
# Playback setup:
# Sources → Media Source → verification_loop.mp4
# Loop: Enabled
# Start Virtual Camera
# Platform now sees "live" video that's actually pre-recorded loop
# Passes basic liveness detection (movement, blinking present)
Most liveness systems check for the presence of movement indicators, not for randomness and not for challenge-response. A pre-recorded video containing all required movements passes as live. Defenders can add challenge-response prompts, which makes replays harder, but that requires more complex UX and still does not stop more sophisticated attacks.
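The challenge-response idea can be sketched server-side. This is a toy illustration under stated assumptions, not any platform's actual implementation: the action names, the `issue_challenge` helper, and the sequence length are all invented for the example.

```python
import secrets

# Toy challenge-response liveness sketch (hypothetical helpers; a real
# system also needs per-action video scoring, timeouts, and rate limits).
ACTIONS = ["turn_left", "turn_right", "tilt_up", "tilt_down", "blink_twice"]

def issue_challenge(length=3):
    """Pick an unpredictable action sequence for this capture session."""
    return [secrets.choice(ACTIONS) for _ in range(length)]

def verify_challenge(issued, observed):
    """A fixed pre-recorded loop cannot anticipate the issued order."""
    return issued == observed

challenge = issue_challenge()
live_user = list(challenge)  # a live user can follow the prompts in order
looped_replay = ["turn_left", "blink_twice", "turn_right"]  # fixed sequence
print(verify_challenge(challenge, live_user))      # True
print(verify_challenge(challenge, looped_replay))  # passes only by luck
```

Because the sequence is drawn fresh per session, a static loop must guess the order, which is the whole point of the countermeasure.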
Method 2: Deepfake Face Swap
This takes First Order Motion Model or equivalent, a single photo of a consenting adult face, and real-time GPU processing.
Take someone else's face (age-appropriate, with their consent) and swap it onto your video feed in real time. Your movements drive their face.
# Sketch: real-time face swap using First Order Motion Model
# (fomm_model is a stand-in for the first-order-model repo's demo
# utilities; the real make_animation operates on a whole driving video,
# so per-frame use like this is a simplification)
import cv2
import torch
import pyvirtualcam
from fomm_model import load_checkpoints, make_animation

# Load pre-trained FOMM model
generator, kp_detector = load_checkpoints(
    config_path='config/vox-256.yaml',
    checkpoint_path='models/vox-cpk.pth.tar'
)

# Load source image (person who meets age verification)
source_image = cv2.imread('adult_face.jpg')

# Capture webcam for driving video (your movements)
cap = cv2.VideoCapture(0)

# Create virtual camera output
with pyvirtualcam.Camera(width=1280, height=720, fps=30) as cam:
    while True:
        ret, driving_frame = cap.read()
        # Source face (adult_face.jpg) animated by your movements
        swapped = make_animation(
            source_image,
            driving_frame,
            generator,
            kp_detector
        )
        # Send swapped frame to virtual camera
        cam.send(swapped)
        cam.sleep_until_next_frame()
Facial recognition checks face structure, not identity against a government ID. If the face looks age-appropriate and passes liveness detection because your real movements are driving it, the system accepts it. Defeating this requires checking for deepfake artifacts: temporal consistency issues, lighting mismatches, edge blending errors. That adds significant processing cost at scale.
Method 3: 3D Printed Mask
Higher effort, lower tech. Photogrammetry rig or 3D scanning app, resin printer, silicone casting materials, and paint matched to skin tones.
Capture the face with Meshroom: fifty to a hundred photos from all angles, processed into a high-polygon mesh.
# Using Meshroom (free photogrammetry software)
# Take 50-100 photos of subject's face from all angles
# Process into 3D mesh
meshroom_photogrammetry \
--input photos/ \
--output face_model.obj
# Export high-poly mesh
# Resolution: 500k+ polygons for detail
Print it, sand smooth through grits from 320 to 1500, prime, paint with silicone-based skin-tone paint, add synthetic eyebrows, finish with clear coat for the skin-like sheen.
# Slice for resin printing
# Print face mask with eye holes
# Wall thickness: 2-3mm (flexible, comfortable)
# Post-processing:
# - Sand smooth (320 grit → 800 grit → 1500 grit)
# - Prime with automotive primer
# - Paint with silicone-based skin-tone paint
# - Add synthetic hair for eyebrows
# - Clear coat for skin-like sheen
2D facial recognition, which is what most webcam systems use, cannot detect depth. A mask with the right facial features, positioned correctly, matches the landmarks the system expects. A Vietnamese woman used a 3D mask to fool airport facial recognition and board a flight as another passenger. The mask defeated the system. Defeating the mask requires depth sensing: multiple cameras, structured light, or LiDAR. Most deployment environments do not have any of that.
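A minimal flatness check shows why depth data raises the bar, at least against flat spoofs like photos and screens. Sketch under assumptions: the sensor supplies a face-region depth map in millimetres, and the 20 mm relief threshold is invented; a well-shaped mask would require much richer analysis than this.

```python
# Toy depth-based anti-spoofing check (hypothetical depth-map input;
# rejects flat spoofs only -- a shaped 3D mask defeats this naive test).

def depth_relief(depth_map):
    """Spread between nearest and farthest points in the face region."""
    values = [d for row in depth_map for d in row]
    return max(values) - min(values)

def is_flat_spoof(depth_map, min_relief_mm=20.0):
    """A photo or screen held to the camera has almost no depth relief."""
    return depth_relief(depth_map) < min_relief_mm

real_face = [[400.0, 385.0, 400.0],   # nose tip closer than cheeks (mm)
             [395.0, 360.0, 395.0],
             [405.0, 390.0, 405.0]]
flat_photo = [[500.0, 500.5, 500.2],
              [500.1, 500.4, 500.3],
              [500.0, 500.2, 500.1]]
print(is_flat_spoof(real_face), is_flat_spoof(flat_photo))  # False True
```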
Method 4: Infrared Makeup Bypass
Situational, but surgically effective against IR systems.
Many facial recognition systems use infrared illumination for low-light operation. IR-blocking makeup is commercially available. It appears normal under visible light and creates dark voids under infrared, disrupting the landmark detection the system depends on.
Strategic placement to disrupt facial landmarks:
- Horizontal bands across cheekbones (breaks facial geometry)
- Vertical stripes across nose bridge (disrupts symmetry detection)
- Patches around eyes (confuses eye detection algorithms)
Under visible light: Looks like regular makeup or face paint
Under IR illumination: Appears as black voids, breaking face detection
Facial recognition needs to locate specific landmarks: eye corners, nose tip, mouth corners, jawline. The IR makeup creates holes exactly where those landmarks should be. The fix is switching to visible-light-only recognition, which defeats this attack but also defeats night-vision and low-light systems. It works precisely because IR is so common.
Method 5: Video Hijacking via Virtual Camera
Trivial. OBS, ManyCam, or XSplit. Any video file of an age-appropriate person.
# Install OBS Studio
# Create Scene with Media Source
# Load video: adult_verification.mp4
# Start Virtual Camera
# Platform's JavaScript camera API sees:
navigator.mediaDevices.enumerateDevices()
# Returns:
# - "OBS Virtual Camera" (your injected video)
# - "Integrated Webcam" (real camera)
# User selects OBS Virtual Camera
# Platform receives whatever video you feed it
JavaScript getUserMedia() cannot distinguish a physical camera from a virtual camera driver. The platform asks for camera access, the user grants it, and the system receives a video stream with no mechanism to verify authenticity. Preventing this requires kernel-level drivers, and even then, the operating system treats virtual cameras as legitimate video sources. There is no clean fix.
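The weakest available countermeasure is a device-label heuristic. Sketch under assumptions: the client reports `MediaDeviceInfo.label` strings to the server, and the marker list is illustrative. Labels are self-reported, so renaming the virtual camera evades this entirely, consistent with "no clean fix".

```python
# Toy device-label heuristic (labels come from the client and are
# trivially forgeable; this catches only default-named virtual cameras).

VIRTUAL_CAMERA_MARKERS = ("obs virtual camera", "manycam", "xsplit")

def flag_virtual_camera(device_label):
    """Flag labels matching well-known virtual camera driver names."""
    label = device_label.lower()
    return any(marker in label for marker in VIRTUAL_CAMERA_MARKERS)

print(flag_virtual_camera("OBS Virtual Camera"))  # True
print(flag_virtual_camera("Integrated Webcam"))   # False
```

This is a speed bump, not a defense: the same enumerateDevices() output the attacker controls is the only evidence the heuristic sees.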
Method 6: AI-Generated Face
Emerging and increasingly viable. StyleGAN or equivalent, real-time inference GPU, facial animation rig.
# Generate photorealistic face that doesn't exist
import torch
from stylegan2 import Generator  # stand-in for a StyleGAN2 PyTorch port
generator = Generator(1024, 512, 8).cuda()
generator.load_state_dict(torch.load('stylegan2-ffhq-config-f.pt'))
# Generate random face
z = torch.randn(1, 512).cuda()
generated_face = generator(z)[0]
# Animate with First Order Motion Model
# Drive generated face with your real movements
# System sees: photorealistic face, natural movements, passes liveness
# Face is entirely synthetic
# No real person associated with biometric capture
Facial recognition verifies that a face looks human and age-appropriate. It does not verify that the face corresponds to a real human who created the account. Generated faces are photorealistic, can be animated naturally, and pass all liveness checks. Defeating this requires GAN-detection algorithms checking for generation artifacts. Active research area. No production deployment yet.
Method 7: The Oldest Trick
A sibling or friend over 18. Willingness to verify once.
Friend performs facial verification. Account gets flagged as age-verified. Verification is never requested again, which is how most systems work.