v2 transforms failing to apply same transformation when multiple inputs passed in (Mac M1) #9184

@elyall

Description

🐛 Describe the bug

As described here in 2022 and exhibited in a recent feature post, v2 transforms should apply the same transformation to all inputs during a single call. However, on my M1 MacBook, when I pass in multiple inputs, each input is transformed independently, as if the inputs had been passed in separate calls.

import torch
from torchvision.transforms import v2

array = torch.rand(1, 2, 3)
flipper = v2.RandomHorizontalFlip(0.5)

for _ in range(100):
    assert torch.equal(*flipper((array, array.clone()))), (
        "arrays were transformed differently"
    )
print("The correct behavior occurred... am I going crazy?")

Versions

PyTorch version: 2.8.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.5 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.5)
CMake version: Could not collect
Libc version: N/A

Python version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Pro

Versions of relevant libraries:
[pip3] mypy==1.17.1
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.2.6
[pip3] numpydoc==1.9.0
[pip3] pytorch-lightning==2.5.2
[pip3] torch==2.8.0
[pip3] torchmetrics==1.8.0
[pip3] torchvision==0.23.0
[conda] Could not collect