Fix: Replace nn.Buffer with register_buffer #30
The core issue was that the original HRM source code uses a PyTorch API that is not available in every PyTorch release. The error message `AttributeError: module 'torch.nn' has no attribute 'Buffer'` tells us that the installed version of PyTorch does not provide `torch.nn.Buffer`, so the import-time attribute lookup fails.
The Cause: Incorrect PyTorch Usage
In PyTorch, a "buffer" is a tensor that is part of a model's state (it is saved in the `state_dict` and moved along with the model by `.to()`) but is not a parameter updated by the optimizer during training — for example, the running mean in a normalization layer.
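A minimal sketch of that distinction, assuming PyTorch is installed (`RunningMean` is a hypothetical example module, not code from HRM):

```python
import torch
import torch.nn as nn

class RunningMean(nn.Module):
    """Tracks a running mean as a buffer: saved in state_dict,
    moved by .to()/.cuda(), but never touched by the optimizer."""
    def __init__(self, size: int):
        super().__init__()
        self.register_buffer("mean", torch.zeros(size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # update the running statistic in-place, outside autograd
        with torch.no_grad():
            self.mean.mul_(0.9).add_(x.mean(dim=0), alpha=0.1)
        return x - self.mean

m = RunningMean(4)
print("mean" in m.state_dict())   # True: buffer is part of the model state
print(len(list(m.parameters())))  # 0: nothing for the optimizer to update
```

Because the buffer never appears in `parameters()`, an optimizer built from them will leave it alone, which is exactly the behavior wanted for running statistics.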
The original developer wrote code like this:
self.weights = nn.Buffer(...)
This only works on recent PyTorch releases: `nn.Buffer` appears to have been added as a directly callable class only in newer versions of PyTorch (2.4 and later). On earlier versions the attribute simply does not exist, which is exactly the `AttributeError` above, so relying on it makes the code fail on older installations.
The Change: Using the Correct Method
The portable way to create and register a buffer in a PyTorch model, supported across versions, is the `self.register_buffer()` method.
We fixed the code by changing lines like the one above to the following pattern:
self.register_buffer("weights", ...)
By making these changes, the code now uses an API that is available in the installed PyTorch version (and in older and newer releases alike), which allowed training to proceed without errors.
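The before/after can be sketched as a small runnable example (`HRMBlock` and the tensor shape are hypothetical placeholders, not taken from the actual HRM codebase):

```python
import torch
import torch.nn as nn

class HRMBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # Before (fails with AttributeError on PyTorch < 2.4):
        #   self.weights = nn.Buffer(torch.zeros(8))
        # After (works across PyTorch versions):
        self.register_buffer("weights", torch.zeros(8))

block = HRMBlock()
# The buffer is accessed exactly as before, as a plain attribute:
print(isinstance(block.weights, torch.Tensor))  # True
print("weights" in block.state_dict())          # True: saved with the model
```

Note that the attribute name is passed as a string to `register_buffer`, and the rest of the code can keep reading `self.weights` unchanged, so the fix is a one-line substitution per buffer.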