How to Do Real-Time Face Swap using Deep Live Cam?


Deepfake and face swap technologies are becoming more common in everyday digital content. Deep Live Cam is an open-source tool that makes it possible to perform real-time face swaps and create Deepfake videos using just a single image. The tool is designed to be straightforward and accessible, offering natural-looking results that maintain facial expressions, lighting, and head movement. It supports a wide range of hardware and is useful for content creators, educators, and developers working with visual media. In this blog, I will explore how Deep Live Cam works, how to set it up, and what to keep in mind when using real-time face swap tools responsibly.

What is Deep Live Cam?

Deep Live Cam is an AI-based application that enables real-time face swaps on live video feeds and supports one-click Deepfake video generation. Using machine learning models, it maps one person’s face onto another while preserving natural expressions, lighting, and angles. Designed with simplicity in mind, the tool requires just a single source image to produce realistic results.

Key Features

  • Live Face Swaps: Swaps faces on live video feeds with minimal delay.
  • Easy Deepfakes: Generates deepfake videos effortlessly from a single source image.
  • Works on Many Systems: Runs on CPU, NVIDIA CUDA, and Apple Silicon hardware.
  • Better Picture Quality: Uses enhancement models like GFPGAN to make swapped faces look realistic in real time.
  • Safety Measures: Includes checks that block processing of inappropriate content, supporting legal and ethical use.

How Does Deep Live Cam Work Inside?

Deep Live Cam relies on several key AI models to power its real-time face swap pipeline:

  • inswapper: Developed by InsightFace and trained on millions of facial images, this model infers 3D facial structure from a 2D image and separates identity features from pose features, which allows smooth face replacement.
  • GFPGAN: After the swap, GFPGAN improves image quality by refining details and correcting artifacts, giving the deepfake video a realistic finish.

Deep Live Cam supports various hardware, including CPU, NVIDIA CUDA, and Apple Silicon. Its modular design makes updates easy: new models can be plugged in as they appear.
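To make this pipeline concrete, here is a minimal sketch of the same idea using the insightface library directly. This is illustrative only, not Deep Live Cam's actual code; the image filenames and model path are assumptions, and it presumes insightface, opencv-python, and onnxruntime are installed:

python - <<'PY'
# Minimal face-swap sketch using insightface's inswapper directly
# (illustrative only; not Deep Live Cam's actual code)
import cv2
import insightface
from insightface.app import FaceAnalysis

# Face detection/recognition bundle; weights download on first use
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

source = cv2.imread("source.jpg")  # face to copy (hypothetical filename)
target = cv2.imread("target.jpg")  # image to paste it into (hypothetical)

# Model path is an assumption; see the project's models folder
swapper = insightface.model_zoo.get_model("models/inswapper_128_fp16.onnx")

src_face = app.get(source)[0]      # identity features from the source face
for face in app.get(target):       # swap every detected face in the target
    target = swapper.get(target, face, src_face, paste_back=True)

cv2.imwrite("swapped.jpg", target)
PY

In the real application, GFPGAN would then run over the swapped region to sharpen details before the frame is displayed.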

Getting Started: Installation and Setup

This section guides you through installing Deep Live Cam. Follow these steps carefully for a successful setup. Proper installation prepares the software for real-time face swap and deepfake video generation.

Installing Python 3.10

Deep Live Cam recommends Python 3.10. Newer versions, such as 3.12 or 3.13, may fail with this error: ModuleNotFoundError: No module named 'distutils'. This happens because distutils was removed from the standard library starting with Python 3.12. Sticking to Python 3.10 avoids the problem.

Download Python 3.10 from the official Python releases page (python.org/downloads).
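If you are on Linux, you can check your current version first and, on Ubuntu/Debian, install 3.10 alongside it. The deadsnakes PPA used below is one common option and an assumption about your distro; use your package manager's equivalent if it differs:

# Check the Python version currently on your PATH
python3 --version

# Ubuntu/Debian example: install Python 3.10 via the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv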

Installing FFmpeg

Deep Live Cam relies on FFmpeg for video processing.

Download FFmpeg: This walkthrough uses Linux, so the steps below fetch a static Linux build:

# Make a directory in your home for FFmpeg
mkdir -p ~/apps/ffmpeg && cd ~/apps/ffmpeg

# Download a static build of FFmpeg for Linux
wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz

# Extract it
tar -xf ffmpeg-release-amd64-static.tar.xz

# Enter the extracted directory
cd ffmpeg-*-amd64-static

# Test it (the binary is not on your PATH yet, so call it directly)
./ffmpeg -version

This should print the installed FFmpeg version. Now add FFmpeg to your PATH. Note that a glob like ffmpeg-*-amd64-static does not expand inside double quotes, so resolve it first:

# Add ffmpeg to PATH (the glob is expanded before being stored)
export PATH="$(echo $HOME/apps/ffmpeg/ffmpeg-*-amd64-static):$PATH"
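This change only lasts for the current terminal session. To make it permanent, append the resolved path to your shell profile (assuming bash; adjust for zsh or other shells):

# Persist the PATH change across sessions (bash assumed)
echo "export PATH=\"$(echo $HOME/apps/ffmpeg/ffmpeg-*-amd64-static):\$PATH\"" >> ~/.bashrc
source ~/.bashrc
ffmpeg -version   # should now work from any directory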

Clone Deep Live Cam Repository

Next, get the Deep Live Cam project files.

Clone with Git: Open your terminal or command prompt and navigate to the directory where you want the project (for example, cd ~/projects). Then, run:

git clone https://github.com/hacksider/Deep-Live-Cam.git

The terminal will show the cloning progress. Then change into the project directory:

cd Deep-Live-Cam

Download AI Models

Deep Live Cam needs specific AI models to function.

  1. Download the two model files listed in the project README: GFPGANv1.4.pth and inswapper_128_fp16.onnx.
  2. Place both downloaded files into the “models” folder within the Deep-Live-Cam project directory (a quick check follows below).
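A quick listing from the project root confirms both files are where the app expects them (filenames as listed in the project README):

# From the Deep-Live-Cam root, confirm both model files are in place
ls -lh models/GFPGANv1.4.pth models/inswapper_128_fp16.onnx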

Install Dependencies using venv

Using a virtual environment (venv) is recommended. venv is a Python tool that creates isolated environments, so each project keeps its own package versions without conflicts and your main Python installation stays clean.

Create Virtual Environment: Open your terminal in the Deep-Live-Cam root directory. Run:

python -m venv deepcam

If you have multiple Python versions, specify Python 3.10 using its full path:

/path/to/your/python3.10 -m venv deepcam
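Either way, it is worth confirming the new environment really uses 3.10 before proceeding:

# The venv's interpreter should report Python 3.10.x
deepcam/bin/python --version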

Activate Virtual Environment: On macOS/Linux, run:

source deepcam/bin/activate

Your command line prompt should now show (deepcam) at the beginning.

Install Required Packages: With the virtual environment active, run:

pip install -r requirements.txt

This may take a few minutes, as it downloads all the required libraries for the app.
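Once the install finishes, a quick import test can catch missing packages early. The package names below are an assumption based on typical Deep-Live-Cam requirements:

# Sanity check: these imports should succeed inside the venv
python -c "import cv2, onnxruntime; print(cv2.__version__, onnxruntime.__version__)"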

Running the Application (Initial CPU Run)

After installing dependencies, you can run the program.

Execute the following command in your terminal (ensure the venv is active):

python run.py

Note: The first time you run this, the program will download additional model files (around 300MB).

Your Deep Live Cam should now be ready for CPU-based operation.


Testing the Deep Live Cam

Upload a source face image and a target image, then click on “Start”. The tool swaps the face from the source image onto the target.

Output:

We can see that the model performs well and produces a convincing result.

Testing the Live Feature

To test the live feature, select a source face and then click on “Live” from the available options.

Output:

The model’s outputs in the live feature are also commendable, although the frame rate is quite low because of the expensive computation running in the background.

We also noticed that the model does not lose accuracy when glasses are worn. It can still swap the face even when an object comes between the face and the camera.

Using GPU Acceleration (Optional)

For faster performance, you can use GPU acceleration if your hardware supports it.

Nvidia CUDA Acceleration

Install CUDA Toolkit: Ensure you have CUDA Toolkit 11.8 installed from NVIDIA’s website.

Install Dependencies:

pip uninstall onnxruntime onnxruntime-gpu

pip install onnxruntime-gpu==1.16.3

Run with CUDA:

python run.py --execution-provider cuda

If the program window opens without errors, CUDA acceleration is working.
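You can also confirm that onnxruntime actually sees your GPU before launching the app:

# 'CUDAExecutionProvider' should appear in this list if CUDA is set up
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"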

How to Use Deep Live Cam?

Executing python run.py launches the application window. It offers two modes (a command-line sketch follows the list below):

• Video/Image Face Swap Mode:
  • Choose a source face image (the face you want to use).
  • Choose the target image or video (where the face will be replaced).
  • Select an output directory.
  • Click “Start”.
  • Frames are processed and saved in a sub-directory of your chosen output location. The final video appears after processing.
• Webcam Mode:
  • Select a source face image.
  • Click “Live”.
  • Wait a few seconds (typically 10 to 30) for the preview window to appear.
  • Face Enhancer: This option improves image clarity. It may cause choppy video if hardware performance is insufficient.
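For scripted runs, the application also accepts arguments at launch. The flags below follow the roop-style CLI this project is derived from; they are an assumption, so confirm what your version supports with python run.py --help first:

# Hypothetical non-interactive invocation (verify flags with --help)
python run.py -s source_face.jpg -t target_video.mp4 -o output.mp4 \
    --execution-provider cuda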

Troubleshooting

Face area showing a black block? If you experience this issue, try these commands within your activated venv environment.

For Nvidia GPU users:

pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3

Then, try running the program again:

python run.py
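If the black block persists on GPU, running on the CPU provider can help isolate whether the issue is CUDA-specific. The cpu value mirrors the cuda example above and is an assumption:

# Fall back to CPU execution to rule out a CUDA problem (value assumed)
python run.py --execution-provider cpu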

Also Read: How to Detect and Handle Deepfakes in the Age of AI?

One-Click Deepfake

1. Pick Your Source Photo: Choose a clear photo of the face. A high-resolution image works best for the real-time face swap.
2. Select Your Target Video: Pick a video or use a webcam feed. This is where the face swap will happen.
3. Set Options: Adjust settings, including frame processing options and output paths, to match your computer hardware.
4. Begin the Swap: Click the “Start” button to begin the deepfake video generation process.
5. Watch and Tweak: Watch the results live on your screen and adjust settings as needed to get a good outcome.

My Test Results with Deep Live Cam

I tested Deep Live Cam using clear photos of celebrities Sam Altman and Elon Musk, applying the real-time face swap feature to my live webcam feed. The results were quite good:

• Looks Real: The swapped face showed natural expressions, and the lighting matched the target video well.
• Runs Well: The program ran smoothly on a mid-range NVIDIA GPU, with very little delay.
• Some Issues: Fast head movements and extreme angles caused minor visual errors, leaving room for improvement.

The Risks Involved

Deep Live Cam offers exciting uses, but it also brings significant risks, and its real-time face swap ability needs careful thought. Some of the key risks include:

• Identity Theft: The tool can impersonate individuals convincingly, raising serious concerns about identity theft and privacy violations.
• Financial Fraud: The technology could facilitate scams, for example by faking executive video calls to approve fraudulent transactions.
• Erosion of Trust: As face-swapping technology spreads, telling real from fake becomes harder, which can damage trust in digital communication.
• Legal Trouble: Using such technology without consent can lead to problems. Laws vary by jurisdiction, and users could face lawsuits or regulatory action over deepfake video generation.

Users must understand these dangers and use Deep Live Cam responsibly. Safeguards such as watermarking deepfake content and obtaining consent before using someone’s likeness can reduce potential misuse.

Also Read: An Introduction to Deepfakes with Only One Source Video

Conclusion

Deep Live Cam makes real-time face swaps and Deepfake videos easy to create, even with minimal technical skill. While it is a powerful tool for creators and educators, its ease of use also raises serious concerns: the potential for misuse, such as identity theft, misinformation, or privacy violations, is real. That is why it is important to use this technology responsibly. Always get consent, add safeguards like watermarks, and avoid deceptive use. Deepfake tools can enable creativity, but only when used with care.

Frequently Asked Questions

Q1. What is Deep Live Cam?

A. Deep Live Cam is an AI tool that swaps faces in live video and creates deepfake videos from a single image.

Q2. What do I need to run Deep Live Cam?

A. You need Python (3.10 is recommended), FFmpeg, the required libraries, and the pre-trained AI models. A capable computer (CPU, NVIDIA GPU, or Apple Silicon) is best.

Q3. Is Deep Live Cam hard to use?

A. It aims for user-friendliness for tasks like one-click deepfakes, though the initial setup may require some technical skill.

Q4. Are there risks with Deep Live Cam?

A. Yes, significant risks exist, including identity theft, financial fraud, and misinformation. Ethical use is essential.

Q5. Can Deep Live Cam improve image quality?

A. Yes. It uses models such as GFPGAN to enhance the swapped face for a more realistic appearance.

Harsh Mishra

Harsh Mishra is an AI/ML Engineer who spends more time talking to Large Language Models than actual humans. Passionate about GenAI, NLP, and making machines smarter (so they don’t replace him just yet). When not optimizing models, he’s probably optimizing his coffee intake. 🚀☕
