[youtube-dl] Download files from youtube.com or other video platforms

Packaging error. I'd done it right on my manual update test for all three files, but then forgot to fully update the build script.
Beta package updated. If you could test and confirm...
Glasgow local news downloading now :)
 
Should --hls-prefer-ffmpeg be added to the default youtube-dl.conf file?
Any downside to using it always on Humax?
 
If you "prefer" the native HLS downloader, the CF Python SSL will be used and is likely to fail, whereas the rebuilt CF ffmpeg should succeed. Other downloaders don't know how to handle HLS (called m3u8 in the supports() methods in youtube_dl/downloader/external.py). However, ISTR that iPlayer is unusually lenient in this regard and if that's still the case then --hls-prefer-native could mean lower system load when fetching BBC shows.
 
I never used to have these errors downloading from iPlayer, but I've had them twice just now, requiring manual re-submission to the queue, which is dull.
Code:
23/02/2024 21:41:46 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
23/02/2024 22:00:33 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
Is there scope in yt-dl for retrying things that time out, rather than just bailing?
 
There is a default retry count of 10 for things that the code expects to be resolved by retrying. A timeout on connecting is, IIRC, not one of those.

Maybe we need to stop sending ancient UA strings, in case iPlayer is blocking certain old ones. Currently a random UA from a list made up when Chrome/FF versions were <80 is sent, but personally I think all clients should send just Mozilla/5.0, since there is no good use a server could make of the UA string (now that other ways exist to discover client characteristics).

Or maybe upgrading to OpenSSL 1.0.0w and rebuilding wget would have an effect.
 
There is a default retry count of 10 for things that the code expects to be resolved by retrying. A timeout on connecting is, IIRC, not one of those.
Perhaps it ought to be. Or, more strongly, probably it ought to be.
I think all clients should send just Mozilla/5.0
Is this in yt-dl somewhere?
Or maybe upgrading to OpenSSL 1.0.0w and rebuilding wget would have an effect.
Presumably you mean 1.1.1w as we are already on 1.1.1d?
I'll start another thread about that in due course.
 
You can increase or make infinite the --socket-timeout ... but I'd say the default 600s ought to be plenty. Actually the --retries ... (there's --fragment-retries ... too) is only used in fetching the media data, although an extractor could implement its own retry mechanism using the same parameter.
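For illustration (the numbers are just examples, not recommendations):
Code:
# wait up to 60s per socket operation; retry whole-file and fragment fetches 20 times
youtube-dl --socket-timeout 60 --retries 20 --fragment-retries 20 '<URL>'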

--user-agent 'Mozilla/5.0', but I'm sure lots of sites will complain if they're not fed any OS and Gecko data. Maybe not BBC.
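i.e. something like this (just a sketch; the quoting matters in the shell):
Code:
youtube-dl --user-agent 'Mozilla/5.0' '<URL>'
The same option on a line of its own in youtube-dl.conf would apply it to every download.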

Yes, 1.1.1. It was 1.0.0... for so long ...
 
You can increase or make infinite the --socket-timeout ... but I'd say the default 600s ought to be plenty.
Indeed it should, but this seems not to be the factor at play:
Code:
24/02/2024 18:28:33 - [download]  48.7% of ~2.22GiB at  1.61MiB/s ETA 12:02
24/02/2024 18:28:34 - [download]  48.7% of ~2.22GiB at  1.67MiB/s ETA 11:35
24/02/2024 18:28:34 - [download]  48.7% of ~2.22GiB at  1.67MiB/s ETA 11:35
24/02/2024 18:31:44 - Caught error: ERROR: unable to download video data: <urlopen error [Errno 145] Connection timed out>
 
As, according to G, this error code has never been seen in the context of Python opening a web connection, -v may help to show what's going on. It may be easier to run the command in a shell rather than through qtube.
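Something along these lines from the shell captures the verbose output to a file for inspection (the paths are just examples, and this is a sketch rather than the exact command qtube runs):
Code:
youtube-dl -v -o '/mnt/hd2/My Video/%(title)s.%(ext)s' '<iPlayer URL>' 2>&1 | tee /tmp/ytdl-debug.log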
 
I ran into the "Fixing malformed AAC bitstream" problem and my queue now has a delinquent RUNNING entry that I can't delete or hold via qtube. How do I tidy up my queue, please? I could not find the answer in the various related threads here, and a quick tour via the CLI failed to spot anything handy.
 
One thing that can happen is that the queued command gets stuck at the ffmpeg run to fix the AAC bitstream, which runs out of memory and crashes (or the OS runs out of memory and kills it), causing the queue to restart the command. Picture issues may also accompany this.

Adding a virtual memory paging file of sufficient size (256MB worked for me) using the swapper package allows the command to complete.
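(The swapper package takes care of this, but for reference the manual equivalent is roughly the following, assuming there is space on /mnt/hd2:)
Code:
dd if=/dev/zero of=/mnt/hd2/swapfile bs=1M count=256   # 256MB file of zeros
mkswap /mnt/hd2/swapfile
swapon /mnt/hd2/swapfile
free                                                   # confirm the swap is active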

But maybe those are not the symptoms you're seeing?
 
One thing that can happen is that the queued command gets stuck at the ffmpeg run to fix the AAC bitstream, which runs out of memory and crashes (or the OS runs out of memory and kills it), causing the queue to restart the command. Picture issues may also accompany this.

Adding a virtual memory paging file of sufficient size (256MB worked for me) using the swapper package allows the command to complete.

But maybe those are not the symptoms you're seeing?
I stupidly tried to download a 3-hour+ programme. I downloaded a working MP4, but didn't check whether it's complete. My log is:
Code:
25/04/2024 14:06:09 - [ffmpeg] Fixing malformed AAC bitstream in "/mnt/hd2/My Video/60.....qtube.jpg
 
Another possibility is that the ffmpeg run is hanging for lack of memory. The same solution would apply.

For more details you need to look at the relevant entries in qtube.log.
 
If you are happy to get into the shell command line, through Webshell, telnet or SSH, you can find the ffmpeg process and kill it (pkill ffmpeg). That will cause the job to fail.
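e.g.:
Code:
ps w | grep [f]fmpeg    # check it's the stuck process you expect
pkill ffmpeg            # kill it; the queued job then fails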

Otherwise, restarting the box will definitely stop job 21 and cause it to restart (there are more extreme solutions, but I assume you want to carry on using the box!). You may be able to Hold the job before restarting the box (that won't affect the running ffmpeg, I think), or you may be able to Hold or Delete the job in the queue manager before it restarts after rebooting.

The queue manager doesn't have an Abort job function, as discussed elsewhere.

When the queue manager sees a failed job, it's queued for restart (up to some retry limit, possibly 1, after which it can be Re-submitted). On restarting the problem job, youtube_dl will check from the original URL what files it should download, see that the intermediate files have already been downloaded, and then restart the problem fix-up process.

The qtube.log entries will show what's actually happening under the hood.
 
Thanks for the help. Got job 21 to FAIL, so it could be deleted. Swap file set to 256MB after reading a few threads, so will see how I get on.
 
Attempting to download https://www.bbc.co.uk/iplayer/episode/m001z735/tokyo-vice-series-2-1-dont-ever-fing-miss
Getting many thousands of messages:
Code:
[mp4 @ 0x4c73f0] Invalid DTS: 118449000 PTS: 118447200 in output stream 0:0, replacing by guess
[hls,applehttp @ 0x45a100] Invalid timestamps stream=1, pts=119347200, dts=119349000, size=15488
[mpegts @ 0x499c00] Invalid timestamps stream=1, pts=119349000, dts=119350800, size=15601
[mp4 @ 0x4c73f0] Invalid DTS: 118447200 PTS: 118445400 in output stream 0:0, replacing by guess
[hls,applehttp @ 0x45a100] Invalid timestamps stream=1, pts=119345400, dts=119347200, size=18258
[mpegts @ 0x499c00] Invalid timestamps stream=1, pts=119347200, dts=119349000, size=15488
[mp4 @ 0x4c73f0] Invalid DTS: 118445400 PTS: 118443600 in output stream 0:0, replacing by guess
[hls,applehttp @ 0x45a100] Invalid timestamps stream=1, pts=119343600, dts=119345400, size=18230
[mpegts @ 0x499c00] Invalid timestamps stream=1, pts=119345400, dts=119347200, size=18258
Since the numbers are changing and the file is growing (1GB so far), I assume the download is progressing OK.
 
It seems to be typical, but which ffmpeg was involved?

This SE answer proposes options to eliminate the DTS errors. yt-dl isn't using those options, nor, I think, is yt-dlp, the more advanced fork for modern platforms only.
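I haven't checked exactly which options that answer recommends, but for illustration the usual sort of thing when stream-copying HLS with broken timestamps looks like this (an assumption on my part, not what yt-dl actually passes to ffmpeg):
Code:
ffmpeg -fflags +genpts -i input.ts -c copy -avoid_negative_ts make_zero output.mp4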
 