Zoom background removal for a resource-restricted MacBook Pro
By kirk86

To follow this guide you'll need some familiarity with the terminal and with running command-line programs. OK, let's get started.

  1. Install Miniconda or Anaconda on your computer.
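
    If you use Homebrew, one convenient way to get Miniconda is the miniconda cask (this is optional; the official installer from the conda website works just as well):

    brew install --cask miniconda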

  2. Once you have either Miniconda or Anaconda installed, open a terminal and run the following commands to create a virtual environment named fakecam with scipy installed in it:

    cd ~
    conda create -n fakecam scipy
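
    If you want to double-check that the environment was created, you can list your conda environments and confirm that scipy imports inside it (entirely optional):

    conda env list
    source activate fakecam
    python -c "import scipy; print(scipy.__version__)"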
    

  3. Now we need a virtual camera device that the processed video will be written to. We'll use AkVCamManager, which you can get from here. Install akvirtualcamera-mac-9.1.0.pkg, the latest version as of this writing (9.1.0). After installation you should see the AkVCamManager binary at the following location:

    /Applications/AkVirtualCamera/AkVirtualCamera.plugin/Contents/Resources/AkVCamManager
    

    Since that path is a bit unwieldy to work with, we'll create a symbolic link to make our lives a bit easier.

    ln -s /Applications/AkVirtualCamera/AkVirtualCamera.plugin/Contents/Resources/AkVCamManager /usr/local/bin/virtualcamera
    

    Now, if you type virtual in the terminal and press Tab twice, you'll see all the matching completions, including virtualcamera.
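
    If you want to confirm the link is in place before moving on, a quick sanity check:

    # confirm the symlink exists and points at the AkVCamManager binary
    ls -l /usr/local/bin/virtualcamera
    # confirm the shell finds it on your PATH
    which virtualcamera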


  4. Next we need the software that performs the background removal. There are plenty of options on GitHub, but I used the following:

    cd ~
    git clone https://github.com/allo-/virtual_webcam_background.git
    

  5. Once you've cloned the project, we need to download the parameters of the neural network model that actually performs the background removal.

    cd ~/virtual_webcam_background
    ./get-model.sh
    

  6. Now we need to replace some files in order for the software to work on macOS.

    cd ~/virtual_webcam_background
    rm requirements.txt
    rm virtual_webcam.py
    

    Copy the contents of the file into a file named requirements.txt inside the directory ~/virtual_webcam_background

    Copy the contents of the file into a file named virtual_webcam.py inside the directory ~/virtual_webcam_background

    Copy the contents of the file into a file named run.sh inside the directory ~/virtual_webcam_background


  7. Now install the remaining software requirements inside the fakecam conda environment we created in step 2.

    cd ~/virtual_webcam_background
    source activate fakecam
    pip install -r requirements.txt
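
    Once the install finishes, you can sanity-check that the main dependencies import cleanly. The exact package set comes from the requirements.txt you copied in step 6; cv2 (OpenCV) and mediapipe are assumptions on my part, based on the config we'll use in the next step:

    # quick import check for the packages the config below relies on (assumed names)
    python -c "import cv2, mediapipe; print('dependencies look OK')"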
    

  8. Next we need to set up the config file, which tells the software which device is the real camera to read input from, which virtual camera device to write the processed video to, and which static image (whose location you define) should replace your background.

    cd ~/virtual_webcam_background
    mv config.yaml.example config.yaml
    

    The config file config.yaml has the following format:

    segmentation_threshold: 0.75
    blur: 3
    erode: 10
    dilate: 10
    virtual_video_device: "/dev/video0"
    real_video_device: "/dev/video1"
    average_masks: 3
    mjpeg: False
    layers:
      - "empty": [["image", "background.jpg"]]
      - "foreground": []
    

    We are going to replace it with the following content:

    segmentation_threshold: 0.75
    blur: 3
    erode: 10
    dilate: 10
    virtual_video_device: "FakeCam0"
    real_video_device: 1
    width: 640
    height: 480
    fps: 2
    average_masks: 3
    mjpeg: False
    layers:
      - "empty": [["image", "~/virtual_webcam_background/background.jpg"]]
      - "foreground": []
    model: mediapipe
    multiplier: 0.5
    output_stride: 16
    

    virtual_video_device: "FakeCam0" is the name of the virtual webcam device, which will be created once we execute the run.sh file. The small 640x480 resolution, the low fps, and the lightweight mediapipe model are what keep CPU usage manageable on a resource-restricted machine; the mask only updates a couple of times per second, which is usually fine for a static background image.

    real_video_device: 1 is the index of the physical webcam built into the MacBook Pro. How do you know it's 1 and not 2 or something else entirely? You can list all the devices on your machine with the following command:

    ffmpeg -f avfoundation -list_devices true -i ""
    

    Under "AVFoundation video devices:" you'll find the index of each device on your computer. In my case 1 corresponds to the FaceTime HD Camera.

    You'll need ffmpeg installed on your computer first; just run the following command in your terminal.

    brew install ffmpeg
    

    Finally, you'll need an image of your choice from the web which will be used to replace your background during Zoom calls. Save your preferred image inside the directory ~/virtual_webcam_background and rename it to background.jpg.
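
    If you like, you can also resize the image so it matches the width and height set in config.yaml, using the built-in sips tool. The URL below is just a placeholder for whichever image you picked:

    cd ~/virtual_webcam_background
    # download the image (placeholder URL -- replace it with your own)
    curl -L -o background.jpg "https://example.com/your-image.jpg"
    # resize to 480 (height) by 640 (width) to match config.yaml
    sips -z 480 640 background.jpg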


  9. Finally, we are ready to run the background removal. Just move into the ~/virtual_webcam_background directory and execute the run.sh file.

    cd ~/virtual_webcam_background
    ./run.sh
    

    Now open Zoom while the program runs in the terminal and, in Zoom's video settings, select "Virtual Camera" as the camera.
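
    If Zoom doesn't list the virtual camera, you can check whether the device was actually created. I'm assuming here that your AkVCamManager version provides a devices subcommand for listing installed virtual cameras; consult its documentation if the command isn't recognized:

    virtualcamera devices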


  10. When you're done with your Zoom call and don't need the background removal anymore, you can stop it by pressing Ctrl-C in the terminal window where the program is running.

  11. To remove any virtual camera devices, just execute the following command in a terminal window (the run.sh file will do this anyway the next time you run the background removal for your next Zoom call):

    virtualcamera remove-devices