Replace AdamW with Muon as the optimizer #51

@nanxstats

Description

Looks like PyTorch core is getting Muon as a built-in optimizer option, per Soumith Chintala's X post.

Might be worth trying once it lands; Muon is reported to deliver roughly twice the training efficiency of AdamW.
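For reference, a minimal sketch of what the swap might look like. The class name `torch.optim.Muon`, its constructor arguments, and the hyperparameter values below are all assumptions until the PyTorch release ships; Muon is typically applied only to 2D weight matrices, with the remaining parameters left on AdamW.

```python
import torch
from torch import nn

# Hypothetical model, for illustration only.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Muon targets 2D weight matrices; biases, norm scales, and embeddings
# are usually kept on AdamW.
muon_params = [p for p in model.parameters() if p.ndim >= 2]
adamw_params = [p for p in model.parameters() if p.ndim < 2]

# ASSUMPTION: the upstream API lands as `torch.optim.Muon` with a
# conventional optimizer signature; lr/momentum values here are the
# commonly cited Muon defaults, not verified against the final release.
optimizers = [
    torch.optim.Muon(muon_params, lr=0.02, momentum=0.95),
    torch.optim.AdamW(adamw_params, lr=3e-4, weight_decay=0.01),
]

def training_step(batch: torch.Tensor) -> None:
    loss = model(batch).square().mean()  # placeholder loss
    loss.backward()
    for opt in optimizers:
        opt.step()
        opt.zero_grad()
```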

Metadata

Labels

enhancement (New feature or request)
