Is your feature request related to a problem? Please describe.
Yes. When running Audio.service.bash against a large Lidarr library with many missing albums (e.g., 60,000+), the script uses the /wanted/missing endpoint, which returns only the most recent 1000 missing releases in descending order.
This causes several issues over time:
• The script loops over the same albums repeatedly, especially ones that previously failed
• Older missing albums are never reached, since they fall outside the 1000-item limit
• Albums that can’t be matched/downloaded are retried every cycle, wasting API calls and time
This makes it difficult to backfill long-missing releases or move past stuck entries.
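For context, this is roughly the call in question. A minimal sketch, assuming the usual *arr-style v1 paging parameters (page, pageSize, sortKey, sortDirection); LIDARR_URL, API_KEY, and the sortKey value are placeholders, not confirmed from the script:

```bash
# Hypothetical example call; adjust URL, key, and parameters to your setup.
curl -s "${LIDARR_URL}/api/v1/wanted/missing?page=1&pageSize=1000&sortKey=releaseDate&sortDirection=descending" \
  -H "X-Api-Key: ${API_KEY}"
# Only this first page is ever consumed, so albums beyond the newest 1000
# missing releases are never reached.
```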
Describe the solution you'd like
I’d like to see the script support a smarter, more dynamic approach to processing missing albums, including:
- Randomization of the /wanted/missing list
Support randomizing the album order before processing.
Example: pipe the JSON list through shuf (e.g., jq -c '.[]' | shuf) before looping (see the first sketch after this list).
A config flag like shuffleWantedList=true could control this.
- Alternating modes: recent + backfill
Alternate between processing recent releases and older catalog entries.
This could involve:
• Pulling page 1 of /wanted/missing for recents
• Pulling from /artist or additional pages (e.g., page=2–5)
• Merging and shuffling the results
This would ensure both current and long-missing albums are addressed over time (see the second sketch after this list).
- Retry backoff / cooldown logic
Add logic to track albums that fail to download (e.g., not found on TIDAL, Deezer, etc.).
After X failed attempts, pause retrying that album for a set duration (e.g., 7 days).
This could use a simple JSON file to track failures by timestamp (see the third sketch after this list).
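A minimal sketch of the randomization idea, assuming a paged v1 response with a records array; LIDARR_URL and API_KEY are placeholders, and shuffleWantedList is the hypothetical flag mentioned above:

```bash
#!/usr/bin/env bash
# Sketch: shuffle the wanted list before processing (assumes jq and shuf are installed).
shuffleWantedList=true

wanted_json=$(curl -s "${LIDARR_URL}/api/v1/wanted/missing?page=1&pageSize=1000" \
  -H "X-Api-Key: ${API_KEY}")

# Emit one compact JSON object per line; optionally randomize the order.
if [ "$shuffleWantedList" = "true" ]; then
  album_lines=$(echo "$wanted_json" | jq -c '.records[]' | shuf)
else
  album_lines=$(echo "$wanted_json" | jq -c '.records[]')
fi

while IFS= read -r album; do
  albumId=$(echo "$album" | jq -r '.id')
  echo "Processing album id: $albumId"
  # ...existing per-album search/download logic would go here...
done <<< "$album_lines"
```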
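The recent + backfill mode could look something like this; the page range, page size, and two-page sample are arbitrary choices for illustration:

```bash
#!/usr/bin/env bash
# Sketch: merge page 1 (recents) with two random deeper pages (backfill),
# then shuffle the combined set so both groups are interleaved.
pages_to_merge="1 $(shuf -i 2-5 -n 2 | tr '\n' ' ')"

combined=""
for page in $pages_to_merge; do
  page_json=$(curl -s "${LIDARR_URL}/api/v1/wanted/missing?page=${page}&pageSize=250" \
    -H "X-Api-Key: ${API_KEY}")
  combined+=$(echo "$page_json" | jq -c '.records[]')$'\n'
done

# De-duplicate by album id, then randomize the processing order.
echo "$combined" | jq -s -c 'unique_by(.id) | .[]' | shuf | while IFS= read -r album; do
  echo "Processing album id: $(echo "$album" | jq -r '.id')"
done
```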
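And a rough sketch of the failure tracking; the file path, attempt threshold, and cooldown length are all placeholder values:

```bash
#!/usr/bin/env bash
# Sketch: time-based cooldown for albums that repeatedly fail to download.
FAIL_DB="/config/failed_albums.json"   # hypothetical location
MAX_ATTEMPTS=3
COOLDOWN_SECONDS=$((7 * 24 * 3600))    # 7 days

[ -f "$FAIL_DB" ] || echo '{}' > "$FAIL_DB"

should_skip() {  # usage: should_skip <albumId>; returns 0 if the album is cooling down
  local id=$1 now entry attempts last
  now=$(date +%s)
  entry=$(jq -c --arg id "$id" '.[$id] // empty' "$FAIL_DB")
  [ -z "$entry" ] && return 1
  attempts=$(echo "$entry" | jq -r '.attempts')
  last=$(echo "$entry" | jq -r '.last')
  [ "$attempts" -ge "$MAX_ATTEMPTS" ] && [ $((now - last)) -lt "$COOLDOWN_SECONDS" ]
}

record_failure() {  # usage: record_failure <albumId>
  local id=$1 now tmp
  now=$(date +%s)
  tmp=$(mktemp)
  jq --arg id "$id" --argjson now "$now" \
     '.[$id] = {attempts: ((.[$id].attempts // 0) + 1), last: $now}' \
     "$FAIL_DB" > "$tmp" && mv "$tmp" "$FAIL_DB"
}

# In the main loop:
#   should_skip "$albumId" && continue
#   ...attempt the download; on failure: record_failure "$albumId"
```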
Describe alternatives you've considered
Some things I've tried:
• Manually shuffling the wanted list using shuf
• Filtering out known-failed albums via custom wrappers
• Paging /wanted/missing manually with custom scripts
These approaches work but require external tooling and don’t solve the problem natively within the automation.
Additional context
These changes would:
• Improve long-term efficiency for large-scale libraries
• Prevent unnecessary repeat processing of failed entries
• Increase coverage of missing releases that fall outside the most recent 1000
• Improve behavior in unattended or always-on setups
Thanks again for this incredible script — it’s become a cornerstone of my music automation stack. These enhancements would make it even more powerful and scalable.