Exciting news for who? Only the site owner is excited that a free resource now requires a subscription
“Yay! Now I have to pay another subscription! I’m so excited! Let’s celebrate with them!” - nobody
I mean, to be realistic, Whisper (the audio-to-text AI) paired with ChatGPT can subtitle anything in real time, translated into any language, at very high quality…
You just need a GPU running in real time alongside your video playback to analyse what’s being played, instead of a single text file with timecodes.
Progress!
It doesn’t need to be real time, since you can pre-generate an .srt with timecodes beforehand using something like bazarr. Whisper also runs faster than real time at most model sizes, up to 32x real time, so it can really be worth it to add auto-generated subtitles to the media in your collection that’s missing them, as a one-time job.
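For anyone who wants to try that without setting up bazarr first, here’s a rough sketch of the one-file version using the openai-whisper Python package (the file name and model size are just placeholders, and it assumes ffmpeg is on the PATH so Whisper can pull the audio out of the video):

```python
# Minimal sketch: pre-generate an .srt for a single file with openai-whisper.
# Assumes `pip install -U openai-whisper` plus ffmpeg on the PATH; names are placeholders.
import whisper

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3661.5 -> 01:01:01,500."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

model = whisper.load_model("small")      # bigger models are more accurate but slower
result = model.transcribe("movie.mkv")   # whisper hands the file to ffmpeg for the audio

with open("movie.srt", "w", encoding="utf-8") as srt:
    for i, seg in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n"
                  f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
                  f"{seg['text'].strip()}\n\n")
```

Whether you actually see the faster-than-realtime speeds mentioned above depends on the model size and on having a GPU; on CPU it’s slower, but still fine as an overnight job.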
It’s an interesting idea to patch the holes when absolutely no srt files are available.
But why not have an open repository where people could share the .srt files they already have?
We could call it libre-subs or something like that.
Could probably do it with something like a Google Coral. You can get one for $60 these days. A lot cheaper than a GPU and less power-hungry too.
I think they’re $25ish from an official supplier. $60 is scalper pricing. Don’t pay a scalper as it just encourages them to do it more.
$25 for the M.2 version, at least. It’s $60 for the USB version, which I assume most people would prefer.
Ah, I didn’t realise the USB one cost that much more. I’m not sure most people would prefer the USB version though. It’s convenient to move around and you can use it with mini PCs, but cooling isn’t as good compared to something that sits in a case with good airflow (so it’s more likely to thermally throttle while in use), and having dedicated PCIe lanes as you’d get with an M.2 is way more efficient than using a shared bus like USB. Google have always advertised the USB version for “prototyping” while the M.2 versions are for “production”.
For $40, you can get an M.2 version that has two Coral TPUs on a single board. https://coral.ai/products/m2-accelerator-dual-edgetpu. I’ve got this one with a PCIe adapter, but currently only use one of the TPUs.
That’s still another thing that needs to be bought, installed, and fed with power.
My low-power server would likely melt trying to run Whisper.
A USB Coral uses barely any power, and if you have a hard time installing USB devices…
Besides, a lot of people are already using them for Frigate. I am.
I was only aware of the M.2 variants.
Still, it’s a thing that has to be bought, which I haven’t had to do for my media solution in years.
Seems like a huge waste of electricity
For something like movies or shows you’d only need to run it once and store the result as an .srt
Yeah, and we could have a website so people that already did the process for a piece of media could share the result with others!
…oh wait
Full circle.
Yeah but then multiply that by every video file in every Plex library in the world that doesn’t have SRTs already.
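To be fair, the one-time pass only has to touch the files that are actually missing subs. A quick, hedged way to gauge how big that backlog is (library path and extensions below are placeholders, and it treats any sibling .srt, including language-tagged ones like movie.en.srt, as “already subtitled”):

```python
# Hedged sketch: count how much of a library would actually need a transcription pass,
# i.e. video files with no matching .srt next to them. Path/extensions are placeholders.
from pathlib import Path

LIBRARY = Path("/mnt/media")                      # placeholder library root
VIDEO_EXTS = {".mkv", ".mp4", ".avi", ".m4v"}

def has_srt(video: Path) -> bool:
    """True if any sibling .srt shares the video's name, e.g. movie.srt or movie.en.srt."""
    return any(s.suffix.lower() == ".srt" and s.stem.startswith(video.stem)
               for s in video.parent.iterdir())

videos = [p for p in LIBRARY.rglob("*") if p.suffix.lower() in VIDEO_EXTS]
missing = [p for p in videos if not has_srt(p)]

print(f"{len(missing)} of {len(videos)} files have no .srt and would need a one-time pass")
```

Feeding that `missing` list into the same transcription loop as the sketch above (or just letting bazarr handle it) keeps the job to whatever genuinely has no subtitles anywhere.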
Does this have a plugin or something for Plex?
You can use bazarr to batch-generate Whisper subtitles for your Plex/Jellyfin/Kodi library: https://wiki.bazarr.media/Additional-Configuration/Whisper-Provider/
This is super cool.
Kodi would be the question.
Asking the important questions here