Describe the bug
While querying the vits model for a number of texts, there appears to be a memory leak: memory consumption increases almost linearly as the loop progresses.
To Reproduce
import os
import torch
from TTS.api import TTS

# Get device (CPU only, no GPU involved)
device = "cpu"

# Load the multi-speaker VITS model
tts = TTS("tts_models/en/vctk/vits").to(device)

texts_to_test = [
    "some 112 pieces of text, each consisting of multiple lines ...."
]
speaker = tts.speakers[0]

num_requests = 0
# Synthesize each text in turn; RAM usage keeps growing across iterations
for idx, txt in enumerate(texts_to_test):
    wav = tts.tts(text=txt, speaker=speaker, speed=0.8)
To reproduce the issue, replace texts_to_test with a reasonably large list of texts (more than 100), each consisting of 2-3 lines.
I ran this experiment on a c2-standard-4 machine (GCP).
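For reference, here is a minimal sketch of how the per-request growth can be observed; it assumes psutil is installed (not part of the original setup) and uses a placeholder text list, logging the resident set size after each call:

import psutil
from TTS.api import TTS

process = psutil.Process()

tts = TTS("tts_models/en/vctk/vits").to("cpu")
speaker = tts.speakers[0]

# Placeholder corpus; replace with the ~112 multi-line texts mentioned above
texts_to_test = ["A few sentences of text, spread over two or three lines."] * 112

for idx, txt in enumerate(texts_to_test):
    wav = tts.tts(text=txt, speaker=speaker, speed=0.8)
    rss_mb = process.memory_info().rss / (1024 * 1024)
    # RSS keeps climbing from one request to the next if the leak is present
    print(f"request {idx}: RSS = {rss_mb:.1f} MiB")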
Expected behavior
RAM consumption should remain stable and should not keep increasing while running the given code.
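As a hedged workaround sketch (not a confirmed fix, and only under the assumption that the growth comes from retained autograd state or uncollected Python objects), the inference call can be wrapped in torch.inference_mode() with an explicit garbage-collection pass between requests:

import gc
import torch
from TTS.api import TTS

tts = TTS("tts_models/en/vctk/vits").to("cpu")
speaker = tts.speakers[0]
texts_to_test = ["A few sentences of text, spread over two or three lines."] * 112

for idx, txt in enumerate(texts_to_test):
    with torch.inference_mode():  # make sure no autograd graph is kept around
        wav = tts.tts(text=txt, speaker=speaker, speed=0.8)
    del wav        # drop the reference to the generated audio
    gc.collect()   # force a collection pass between requests

If memory still grows with these in place, the leak is more likely inside the model's synthesis path than in user code.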
Logs
Environment
{
"CUDA": {
"GPU": [],
"available": false,
"version": "12.4"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.6.0+cu124",
"TTS": "0.25.3",
"numpy": "1.26.4"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.16",
"version": "#23~22.04.1-Ubuntu SMP Thu Jan 16 02:17:57 UTC 2025"
}
}
Additional context
No response