Apparent memory leak during normal MP3 playback #1827
Thanks for this great description. Could you also include the output of
Are they all using the same type of output device e.g. all using the Pi's onboard audio?
FYI Mopidy doesn't support
This is interesting and potentially important. We need to be able to reproduce this, so it would be good to know if this is reliably reproducible with the same set of tracks. Can you then reproduce by repeating a single track? Are you able to get a debug log and maybe even a GStreamer log? It's much easier to get these when running Mopidy as your user than as a service. |
Here's the output of
EDIT: I confirmed that the output is identical on another RasPi where the problem does not occur. I installed mopidy using apt. All of the RasPis in question are using the HiFiBerry AMP+ for audio output, driving some 8-ohm ceiling-mounted speakers. Volume levels are reasonably quiet, and all RasPis share the same fixed 12V/30A DC power supply, so I doubt there are any power issues. I haven't been able to dig into replicating it with or without particular tracks, but I'll see what I can do in the next few days. |
Two notable updates on this. First, despite changing nothing whatsoever (media library, Ubuntu packages, hardware, usage style), the problem seems to have disappeared on the previously failing RasPi. I have no idea why. I'm 99% sure the system isn't auto-updating itself, and nobody except for me would have done it on purpose. Second, it came to my attention that v2.2.2 is not the latest, and my apt-based install was using Raspbian repos instead of the official Mopidy repos. I've updated to v2.3.1, and it looks like it's still working fine (better, in fact, since the web interface was finicky before w/r/t track changes and playback state). I'll keep my eyes on it for a while longer, but this may be a non-issue. 🤷♂ |
Thank you for the update! |
Hi,

One possible cause is buffers getting dropped somewhere in the GStreamer pipeline.

If you see this happen again then it might be prudent to turn on memory buffer tracing using GStreamer's debug categories (e.g. GST_REFCOUNTING), or use the leaks tracer (if using GStreamer 1.10 or later).

You could also try to replicate it by constructing the GStreamer pipeline manually on your RPi command line to see if it also happens.

If that doesn't work then it could be a Python application memory leak. You could try using objgraph to build up a picture of what Python objects are being allocated.

Liam.
|
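A minimal sketch of the objgraph suggestion above (it assumes `objgraph` is installed via pip; where exactly you call it from inside Mopidy is left open, the hook shown here is hypothetical). For the GStreamer side, the leaks tracer mentioned above is enabled by setting `GST_TRACERS=leaks` and `GST_DEBUG=GST_TRACER:7` in the environment before the process starts.

```python
# Sketch: snapshot Python object growth between tracks with objgraph.
# Assumes `pip install objgraph`; call log_object_growth() from any
# convenient place that runs once per track change (hypothetical hook).
import gc

import objgraph


def log_object_growth(limit=10):
    # Force a collection so we only see objects that are genuinely alive.
    gc.collect()
    # Prints the object types whose instance counts grew since the last
    # call, e.g. "dict 1234 +56" -- a type that only ever grows is a
    # leak candidate.
    objgraph.show_growth(limit=limit)


if __name__ == "__main__":
    log_object_growth()  # first call establishes the baseline
    # ... play a few tracks ...
    log_object_growth()  # subsequent calls show what accumulated
```

Calling it once per track change makes it easy to spot a type whose count never goes back down.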
I spent some time trying to reproduce this. While at the beginning I thought I had (never to the point of memory exhaustion, just memory increasing beyond the normal amount), when I tried to employ the GStreamer tracing tools they didn't show anything. I played around with some more tracing/logging but then was definitely unable to reproduce it. After restoring everything back to default I found I was entirely unable to reproduce anything like this. In other words, a total failure. |
It is worth having a read of this:
https://rushter.com/blog/python-garbage-collector/
It is not unusual for Python applications to sit on memory. There are some potential pitfalls with the types of object references existing in applications that may cause delayed release of memory -- this is especially true of applications that use containers / object references extensively.
|
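As a small, self-contained illustration of that point (plain Python, nothing Mopidy-specific): objects tied up in reference cycles are reclaimed only when the cyclic garbage collector runs, so memory can appear to grow even though nothing is truly leaked.

```python
# Sketch: why container-heavy Python code can hold on to memory.
# Objects caught in reference cycles are only freed by the cyclic
# garbage collector, which runs periodically rather than immediately.
import gc


class Node:
    def __init__(self):
        self.other = None


def make_cycle():
    a, b = Node(), Node()
    a.other, b.other = b, a  # reference cycle: refcounts never hit zero


for _ in range(100_000):
    make_cycle()

# Any cycles created since the collector last ran are still resident;
# gc.collect() reclaims them and reports how many objects it found.
print("unreachable objects collected:", gc.collect())
```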
Hello, I've recently noticed the same thing, namely a steady increase in used memory as playback goes on. In particular, memory usage starts at about 30MB; I add about 110 songs to the tracklist from a playlist and start playing. With each song played, memory increases by about 2-3MB (I guess because mopidy is loading it into RAM). However, GC never seems to kick in, as I've easily seen the memory grow up to 270MB. Even if I stop playback and resume the day after, it keeps growing from the same level. At some point, it even starts using swap. As I'm using the same system to perform other activities that keep the device on 24/7, the easy workaround I've found is to restart the mopidy process when I quit music playback (to leave it clean for when it starts next time). However, I think Mopidy is quite a cool idea, so it would deserve a real fix for this. I'm using Mopidy v3.0.1. |
Please provide more detailed information about the file types you are seeing this with. Are these MP3, WAV, AAC? As I said, I was unable to reproduce. We also need full
Hello, thanks for your quick reply on this. I only have mp3s in my playlist. Here you are the output from the two commands you asked:
|
And are you using the |
To be honest, I don't know. Is there a way to find out what's going on under the hood? Any specific command I need to run or anything special to look in the logs to find out? I think it doesn't make much difference, but just to describe my setup better - I'm playing files from a locally mounted folder from my NAS. To start reproducing I usually use mpc, which is interfacing with Mopidy-MPD extension. After that, I fully control the Mopidy instance using its embedded simple HTTP interface. |
If you just want to provide a full debug log somewhere that'd be useful. Although what is the "embedded simple http interface" you mentioned? |
Sorry, I meant I'm using the Back to your initial question (file vs local), I quickly glanced through the logs and it seems I'm using the former. Here's an excerpt of my log (I'm running mopidy as a systemd service if that matters):
That repeated warning about "Resource not found" related to gstreamer looks suspicious; however, despite the warning, I can confirm all files are playing absolutely fine. |
Hello, I've just spent 15 minutes looking at the mopidy code for the first time, therefore I cannot call myself an expert :-) However, I see that mopidy is using the GStreamer Python bindings to drive media playback. By looking at the code in mopidy/audio/actor.py, I see where you allocate resources for the playbin, but I could not find where you "unreference" it to free resources... am I missing something? Again, this issue is not a blocker for my deployment as I decided to restart the mopidy process every time I stop using the player. However, I'd be glad to help you find the root cause of the memory leak in this beautiful product. |
We set up the playbin once as part of the Audio actor starting during Mopidy startup. We reuse the playbin between songs. The playbin is "torn down" when Mopidy shuts down (not that deallocating it matters at that point). Keep in mind that I tried to reproduce this issue before and I failed. I'm guessing it's related to the media types you are playing. |
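For readers unfamiliar with that pattern, here is a minimal standalone sketch (not Mopidy's actual actor code) of reusing a single playbin across tracks, as described above:

```python
# Sketch: one playbin created at startup and reused for every track,
# roughly mirroring the lifecycle described above (not Mopidy's code).
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Created once, at "startup".
playbin = Gst.ElementFactory.make("playbin", "player")


def play(uri):
    # Drop back to READY to stop the current track but keep the element,
    # point it at the next URI, then start playing again.
    playbin.set_state(Gst.State.READY)
    playbin.set_property("uri", uri)
    playbin.set_state(Gst.State.PLAYING)


def shutdown():
    # Only torn down when the application exits.
    playbin.set_state(Gst.State.NULL)
```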
Ok, please let me know if I can do something to help you identify the issue. As I said, mine is a very common use case, where regular MP3s are played from a locally mounted NAS folder. |
An experiment where you moved the files locally would be helpful. And an experiment where you use the Local backend rather than the File backend. And if you could include the full debug log that might also be helpful for me to try again to reproduce. |
Hello, I have experimented with copying the same MP3 files locally and I see exactly the same behaviour, namely memory increases by 2-3 MB every time a song gets played. I have collected the memory usage of mopidy after playing each song - you can find all the numbers with File and Local here below. I have also collected the logs as you requested. From the data I gathered, I'm under the impression the leak doesn't happen in the file/local backends, but rather in mopidy itself (or maybe in GStreamer). One more thing: all my MP3s contain a JPG artwork that displays correctly in other media players (e.g. iTunes). Not sure if the mopidy/GStreamer leak might be due to the presence of the artwork in the songs. My 2 cents
After that, I disabled the "file" backend and switched to the "local" backend - same behaviour in terms of memory:
|
Hello, I was just wondering if the info and logs I provided helped in the end? I managed to reproduce this with any MP3 of my library, so I doubt this is related to the format. Have you tried with your MP3s? I could ship a couple of MP3s not covered by copyright for you to try if needed. |
Hello. I am experiencing the same memory leak issues. I switched from a Raspberry Pi 3 1GB to a Raspberry Pi 4 1GB. All testing was performed with a clean install of Raspbian and mopidy with mostly default settings. Only file:media_dirs was changed to a local folder and core:restore_state was enabled. Because of memory limitations, the problem starts after a few hours of playing. I tested by quickly skipping tracks and pausing/resuming. On the RPi 3 with Stretch Raspbian, with version 2.2.3 installed, I had no problems at all. Memory consumption stayed around 50-60MB. On the RPi 4 and the RPi 3 with Buster Raspbian I tried the latest version 3.0.2 and the older 2.2.2 version. With both, there is an extreme memory leak issue. If I add 90 tracks to the tracklist, memory jumps to around 150MB. If mopidy is restarted it starts with around 40MB. On the RPi 3 (Stretch), mopidy 2.2.3 runs with Python 2.7.13 and GStreamer 1.10.4. |
Hello! I'm trying to use an old RPi 1 B+ as a mopidy music player and it seems I'm having a similar (if not the same) problem. I'm new to it, but can help with testing (I can reinstall and test things here if it helps). The problem: the RPi plays some songs just fine (I use an Android MPD app), but suddenly after some time the song hangs and the green LED (I/O activity) stays lit solid (without even blinking). I can't log in through SSH but the RPi answers pings. After reading the above messages, I tried htop and can also see that if I keep skipping to the next song and playing/pausing a few times, the memory keeps going up. In the first test it went to almost 80% (from 35%), but I couldn't hang the system. |
I am experiencing the same problem with Mopidy 3.0.2 on a Raspberry Pi Zero W with a Phat Beat DAC (which ultimately uses the hifiberry ALSA drivers). It's a fresh install of Raspbian 10, with Mopidy installed from the apt.mopidy.com repo. This has been happening consistently since I set this thing up a week ago. In my case I'm using the Mopidy-Subidy backend primarily. Watching mopidy's memory use in htop, it seems like garbage collection is happening inconsistently - on some track changes memory use will reduce by a bit, but often it just grows. If I leave it playing for a few hours, it consistently runs out of memory and begins to fail at playback. By that point there's not enough memory for SSH to spawn a shell for me. Today I'm going to try to let it get to that point while I'm already connected to SSH. Here's the output of
I'm not sure how to use gst-leaks at all - can someone provide an example? I tried the example in this slide deck but it didn't work for me. There's nothing useful so far in journalctl - where can I enable/view real debug logs? This is driving me nuts and I'll gladly provide whatever information you need to help track it down |
Mopidy's debug logging can be enabled as per https://docs.mopidy.com/en/latest/config/#confval-logging-verbosity. I don't think GST debug logging will be fruitful. I had another go a couple of weeks back trying to reproduce this on my desktop with absolutely no luck. I have a possible memory leak theory, but without being able to reproduce it's not much good. However, if anyone can reproduce this consistently on their system, I do have a simple experiment which you can try. Edit /usr/lib/python3/dist-packages/mopidy/audio/tags.py line 78 from
to
So the complete function will then read:
Then be sure to restart Mopidy. |
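Based on the fix described later in the thread (replacing `GstBuffer.extract_dup()` with `GstMemory`-based access when converting `Gst.Sample` tags), a sketch of the kind of change involved might look like this; it is an approximation, not the exact Mopidy diff:

```python
# Sketch only -- not the exact Mopidy change. It contrasts the leaking
# pattern with a GstMemory-based copy of a Gst.Sample's bytes.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst


def sample_bytes_leaky(sample):
    # extract_dup() allocates a copy that the caller must free; older
    # python3-gi versions never freed it, so every tag carrying embedded
    # artwork leaked its full size.
    buf = sample.get_buffer()
    return buf.extract_dup(0, buf.get_size())


def sample_bytes(sample):
    # Map the buffer's memory read-only, copy the bytes, then unmap so
    # GStreamer keeps ownership of the underlying allocation.
    buf = sample.get_buffer()
    mem = buf.get_all_memory()
    success, info = mem.map(Gst.MapFlags.READ)
    if not success:
        return None
    try:
        return bytes(info.data)
    finally:
        mem.unmap(info)
```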
I'll try it right now |
So far it's looking much improved - the resident set shrinks consistently on track change now. I'll keep an eye on it for a couple more hours and see if it holds true |
It seems this will solve my problem too. |
Yep, after playing a playlist for 2 hours that consistently triggers this memory attrition condition, I can confirm that this completely solved the problem for me. I see Mopidy's resident set fluctuate between 78MB and 86MB now, depending on the size of the track being played |
@Rhalah do those particular flac files have embedded album artwork? You can use |
Quickly looking at https://lazka.github.io/pgi-docs/index.html#Gst-1.0/classes/Object.html#Gst.Object.get_name and various bits of the GST python bindings, I suspect there are a bunch of places we need to add |
beetbox/audioread#84 looks like the same problem with |
Yes. That is the workaround we'd need. Again, I saw zero difference in behaviour on Ubuntu 20.04 between the original code and that workaround code, but I'm starting to think this is a bug in the Buster bindings that was then fixed later. |
Also meant to add https://mail.gnome.org/archives/python-hackers-list/2013-February/msg00002.html which is a bit old but does confirm my assumption that the bindings should really be handling this for us. |
I got a Pi 4 running Buster and generated a load of fake embedded cover art for some songs and can reproduce now. And I'm pretty sure it's a difference between the 3.30.x (Buster) and 3.36.x (Ubuntu 20.04) bindings. I had a look again at the changelogs, but this time for pygobject (rather than gst or gst-overrides), and there are a couple of entries there that sound like good candidates (maybe this?). Either way, we should go for the workaround, but it's worth trying to understand why, as there are probably more leaks in the Buster version of the bindings because of this. I still don't understand how
but EDIT: The gir files show the annotations do somehow work out fine and it gets |
Sorry for the delay, @kingosticks! Both the sets (the one that plays ok and the one that triggers the problem) have album art, but the ok ones have even more metadata. I'll put below the output of the mediainfo command for one track from each set (the Bob Marley set is the problematic one). mediainfo Bob\ Marley\ -\ Africa\ Unite.flac > bob-au.txt results in a txt file with "General" and "Audio" sections. mediainfo A-Ha\ -\ Hunting\ High\ And\ Low.flac > aha-hhl.txt results in a txt file with "General" and "Audio" sections.
|
I was hoping for output from gst-discoverer-1.0 as that shows the size of the embedded artwork. But since the audio is 89% of the first file, compared to 100% in the second file, maybe 11% of the first file is artwork data, which would tally with it being worse. But I'm really just guessing there. |
No problem! Below is the output of gst-discoverer-1.0:
Topology: Properties:
Topology: Properties: |
…dy#1827) Embedded cover art, and other tags exposed to us as type `Gst.Sample`, were causing memory leaks when converted to plain Python types using `GstBuffer.extract_dup()`. `extract_dup()` expects the caller to free the memory it allocates but older versions of the python3-gi don't seem to do this. The workaround is to access samples using `GstMemory` methods instead. This issue was present on Buster 10 systems (python-gi 3.30.4) but not on Ubuntu 20.04 (python-gi 3.36.0). Mopidy memory usage after scanning 2861 tracks: Without fix: 163.2MB With fix: 54.1MB
I have what appears to be a memory leak on a Mopidy installation on a RasPi, booting and running over NFS. Initially I thought the problem was that I had no swap set up, so I added some, but it continues to happen.
The extremely odd thing about this is that the NFS root image is identical to three other RasPi servers handling sound systems in other rooms of the house, but those other three systems have run for weeks without exhibiting any issues. The failing unit freezes up nearly once a day. The only difference among all four servers is the exact content of the music library, but all libraries are pretty small...less than 2k songs, far less in some cases. All songs are in MP3 format, in case it makes a difference.
Here's a screenshot of `top` that I had left running to see the system state when it freezes. Note the insane load and 0k free swap, and the fact that `kswapd` is pegging the CPU, but also that `mopidy` is the one using all the memory. Normally the `mopidy` process is down around 5-6% mem usage.

I'm already using a local SQLite db. I can run a local scan from the command line with no issues.

After the above screenshot, I power-cycled the RasPi and got back into a `top` session so I could pay more attention to it. The `%MEM` value reported has slowly crept upward from about 5% to 42%, where it is now. I expect it will continue to rise if I do nothing.

In contrast, one of the stable RasPi audio servers has been running for over two weeks solid and playing audio for at least 72 hours without stopping, and the `mopidy` process is at 6.7% usage. It even has a slightly larger library.

...and after a while longer, the memory use did indeed continue to creep upwards (at 50% when I acted), and I ran `sudo service mopidy force-reload` as a test to see if that would help. It did -- `mopidy` memory usage dropped down near zero for a moment, and then back to the expected ~5%. No full service restart needed, though I assume that also would have done the same.

The different RasPis involved all report Raspberry Pi 3 Model B Rev 1.2 from the output of `cat /sys/firmware/devicetree/base/model`. They are running over NFS mounted root folders that are exact copies of each other, except for the hostname. The contents of `/etc/mopidy/mopidy.conf` are identical. The memory usage takes many hours to grow significantly, but it will eventually eat up all available RAM and swap space.

I have confirmed as well that the memory use stops increasing when audio is stopped (not sure about paused though). The following set of graphs shows RAM and swap usage on the problem system over a 24-hour period, including two full PLAY-STOP cycles (hours long) with one final PLAY event at the end to confirm. I did another `force-reload` immediately after this to avoid a complete system freeze and drop RAM/swap usage back down to normal levels.

You can see how the RAM is eaten up first (from 50% to 80%) during the first cycle, then shortly after the second cycle starts, it eats the last bit of safe RAM (from 80% to 85%) and then switches to consuming swap (from 20% to 80%) until stopped. Finally, it ticks up again at a consistent rate about 15 minutes before the end of the graph, which is where I started playback again.
Note, as far as I can tell, pausing playback in the Mopidy/moped web interface has the same effect as stopping it in this regard, i.e. it stops eating memory.
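For anyone wanting to capture this kind of data without screenshots, here is a rough sketch (assuming a Linux host; matching the process by the string "mopidy" in its command line is an assumption) that logs Mopidy's resident set size once a minute:

```python
# Sketch: periodically log Mopidy's resident set size (VmRSS) so memory
# growth can be correlated with playback. Assumes Linux /proc and that
# the target process command line contains "mopidy".
import time
from pathlib import Path


def mopidy_rss_kib():
    for status in Path("/proc").glob("[0-9]*/status"):
        try:
            cmdline = (status.parent / "cmdline").read_bytes()
            if b"mopidy" not in cmdline:
                continue
            for line in status.read_text().splitlines():
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # value is reported in kB
        except (OSError, ValueError):
            continue  # process exited or entry was unreadable
    return None


while True:
    print(time.strftime("%H:%M:%S"), mopidy_rss_kib(), "kB")
    time.sleep(60)
```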
Host platform is Raspbian on a RasPi 3B v1.2:
It is possible that this issue is related to either #1750 (most likely) or #1648 (less likely).