3505 | 3505 |   "cell_type": "markdown",
3506 | 3506 |   "metadata": {},
3507 | 3507 |   "source": [
     | 3508 | + "### 2.1 Getting PyTorch to run on Apple Silicon\n",
3508 | 3509 |   "\n",
     | 3510 | + "In order to run PyTorch on Apple's M1/M2/M3 GPUs you can use the [`torch.backends.mps`](https://pytorch.org/docs/stable/notes/mps.html) module.\n",
3509 | 3511 |   "\n",
3510 |      | - "### 2.1 Getting PyTorch to run on the ARM GPUs\n",
     | 3512 | + "Be sure that your versions of macOS and PyTorch are up to date.\n",
3511 | 3513 |   "\n",
3512 |      | - "In order to run PyTorch on the Apple's M1/M2 GPUs you can use the [`torch.backends.mps`](https://pytorch.org/docs/stable/notes/mps.html) package.\n",
3513 |      | - "\n",
3514 |      | - "Be sure that the versions of the MacOS and Pytorch are updated\n",
3515 |      | - "\n",
3516 |      | - "You can test if PyTorch has access to a GPU using `torch.backends.mps.is_available()`\n"
     | 3514 | + "You can test if PyTorch has access to a GPU using `torch.backends.mps.is_available()`."
3517 | 3515 |   ]
3518 | 3516 | },
3519 | 3517 | {

3533 | 3531 |   }
3534 | 3532 | ],
3535 | 3533 | "source": [
3536 |      | - "# Check for ARM GPU\n",
     | 3534 | + "# Check for Apple Silicon GPU\n",
3537 | 3535 |   "import torch\n",
3538 |      | - "torch.backends.mps.is_available()"
     | 3536 | + "torch.backends.mps.is_available() # Note: this will return False if you're not running on a Mac"
3539 | 3537 |   ]
3540 | 3538 | },
3541 | 3539 | {

3564 | 3562 | "cell_type": "markdown",
3565 | 3563 | "metadata": {},
3566 | 3564 | "source": [
3567 |      | - "As before, if the above output `\"mps\"` it means we can set all of our PyTorch code to use the available Apple Arm GPU"
     | 3565 | + "As before, if the above outputs `\"mps\"`, it means we can set all of our PyTorch code to use the available Apple Silicon GPU."
3568 | 3566 | ]
3569 | 3567 | },
3570 | 3568 | {

3574 | 3572 | "outputs": [],
3575 | 3573 | "source": [
3576 | 3574 |   "if torch.cuda.is_available():\n",
3577 |      | - "    device = 'cuda'\n",
     | 3575 | + "    device = \"cuda\" # Use NVIDIA GPU (if available)\n",
3578 | 3576 |   "elif torch.backends.mps.is_available():\n",
3579 |      | - "    device = 'mps'\n",
     | 3577 | + "    device = \"mps\" # Use Apple Silicon GPU (if available)\n",
3580 | 3578 |   "else:\n",
3581 |      | - "    device = 'cpu'"
     | 3579 | + "    device = \"cpu\" # Default to CPU if no GPU is available"
3582 | 3580 |   ]
3583 | 3581 | },
3584 | 3582 | {
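The device-agnostic setup this diff arrives at can be exercised end to end. A minimal sketch (the random tensor and its shape are illustrative, not from the notebook):

```python
import torch

# Pick the best available backend, in the same fallback order as the notebook cell
if torch.cuda.is_available():
    device = "cuda"  # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = "mps"   # Apple Silicon GPU
else:
    device = "cpu"   # CPU fallback

# Any tensor (or model, via .to(device)) can then be moved to the selected device
x = torch.rand(3, 3).to(device)
print(x.device)
```

On a Mac with a recent macOS and PyTorch build this prints `mps`; elsewhere it falls through to `cuda` or `cpu`, so the same code runs unchanged across machines.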