Utilizing the GPU in Intel-based MacBooks for Machine Learning

I own a base-model MacBook Pro (16-inch, 2019). It comes with an AMD Radeon Pro 5300M GPU. Since I bought this laptop, I've struggled to utilize that GPU for machine learning. I've tried:

Eventually, I just gave up for a while and switched to using Google Colab. I use the paid version of Colab, which is relatively cheap and provides a pretty good service.

There is actually a third method for using my MacBook's GPU, which I stumbled upon completely by accident. It's much easier to set up and more convenient to run than either of the other options: you can keep using the native TensorFlow backend without having to switch to Linux.

You simply need to:

  1. Install Anaconda
  2. Install the two Metal packages Apple provides (instructions below)

And that's all!

For more detailed steps, you can follow the instructions Apple provides or follow my more condensed version here:

  1. Create a new Conda env:
    conda create --name metal python=3.8
    conda activate metal
  2. Install the two metal packages:
    SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-macos
    SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-metal
  3. Install some supporting packages:
    python -m pip install ipykernel
    python -m pip install matplotlib
    python -m pip install tensorflow-datasets

Let's run a little test to ensure it's working:

First, let's import TensorFlow and ensure the GPU shows up as a visible device:
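A minimal check looks something like this (the exact device name will vary by machine):

```python
import tensorflow as tf

# List every device TensorFlow can see. On a working tensorflow-metal
# setup, the Radeon Pro 5300M shows up as a device of type "GPU",
# e.g. PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU').
print(tf.__version__)
print(tf.config.list_physical_devices())
```

If no GPU device appears in the list, the `tensorflow-metal` plugin did not install correctly.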

Let's download a small dataset and get it ready for training:

Let's define the model:
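The original architecture isn't shown, so here is a small Keras CNN sized for the 28x28x1 MNIST images above; treat it as a placeholder, not the post's exact model:

```python
import tensorflow as tf

# A small convolutional network for 28x28 grayscale images.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # one logit per digit class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.summary()
```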

Run training on CPU

Run training on GPU
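The two timing runs above can be sketched together with `tf.device`. This version uses a small synthetic batch and an inline model so it is self-contained; in practice you would reuse the `tfds` pipeline and model from the earlier steps. It also skips the GPU run automatically if no GPU is visible:

```python
import time

import tensorflow as tf

# Synthetic stand-in data shaped like MNIST, so this sketch runs anywhere.
x = tf.random.normal((1024, 28, 28, 1))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model

# Time one epoch on the CPU, then on the GPU if one is available.
devices = ["/CPU:0"]
if tf.config.list_physical_devices("GPU"):
    devices.append("/GPU:0")

timings = {}
for device in devices:
    with tf.device(device):
        model = build_model()
        start = time.perf_counter()
        model.fit(x, y, epochs=1, batch_size=128, verbose=0)
        timings[device] = time.perf_counter() - start
        print(f"{device}: {timings[device]:.2f}s")
```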

In my runs, training on the GPU was 8 to 12 times faster in this scenario.

Enjoy training your models! 🙂