Unless someone has registered the trademark for those specific purposes, you’re clear. A trademark is only valid within a specific field of use; trademarks exist to keep consumers from mistaking one brand for another.
There are a lot of entertaining articles on Techdirt about companies not understanding trademark law.
Does anyone know of a list of TLDs that don’t allow reselling? I’d prefer to buy/lease one of those and let domain sharks play their own games.
I use gitit and it’s already packaged in most Linux distros.
TL;DR: sorta federation. It is possible to self-host data.
Yeah, that container probably crashed because of atmospheric disturbance.
I use Devuan and it’s just Debian without systemd.
Okay, but that would have made a shitty joke, wouldn’t it?
Hmm… I don’t know, maybe it’s fine as a joke.
Gothub is looking for a new maintainer.
Fortunately it’s just DNS.
Look up the domains at one of: ns1.cloudns.net, ns2.cloudns.net, ns3.cloudns.net, ns4.cloudns.net
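For example, with dig, querying one of those servers directly (example.com stands in for the domain you want to check):

```bash
# Ask the listed nameserver directly instead of going through your resolver.
dig @ns1.cloudns.net example.com A +noall +answer
dig @ns1.cloudns.net example.com MX +noall +answer
```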
Aliasing and forwarding is not a good solution if you are concerned about law enforcement: your personal e-mail address is still linked to the tracker, just behind an extra hop, and on top of that you allow someone in between to read your e-mails. You had the answer yourself. Create a completely fresh free e-mail account somewhere, using at minimum a private tab to prevent tracking data from linking anything to the account… and if you can get a free e-mail account with IMAP/POP access, use it in an e-mail client to leak less data.
If you still want to respect user privacy, your analytics software could use the source port of the connection instead of the IP as the identifier. That would be perfectly fine for counting simultaneous users behind the same IP, but not invasive enough to monitor an individual’s behaviour. Don’t ask me which analytics software supports that. If it were me, I’d grab the data from the HTTP logs and use a tool like goaccess.
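A minimal sketch of that log route, assuming nginx and a stock goaccess install; the `withport` format name is something I made up, though `$remote_port` is a standard nginx variable and the goaccess flags are real:

```bash
# nginx.conf: a custom log format that records the client's source port
# (the stock "combined" format logs only the IP). "withport" is a made-up name.
#
#   log_format withport '$remote_addr:$remote_port - $remote_user [$time_local] '
#                       '"$request" $status $body_bytes_sent';
#   access_log /var/log/nginx/access.log withport;

# Generate an offline HTML report; for the custom format above you would pass
# a matching --log-format string instead of the COMBINED preset.
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html
```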
You could check whether a domain hosts a Lemmy instance by fetching /.well-known/nodeinfo, but it’s bad netiquette to hammer sites with requests, and it could get users blocked. If you were to do it, I’d make sure it caches the lookups in IndexedDB, localStorage, or the Cache API. I’m unsure how well any of those APIs work from UserScripts.
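If you want to test the idea by hand before writing the userscript, a quick sketch with curl and jq, assuming the instance serves standard nodeinfo (lemmy.ml is just an example domain):

```bash
#!/usr/bin/env bash
# Check whether a domain hosts a Lemmy instance via its nodeinfo documents.
domain="${1:-lemmy.ml}"

# The well-known document links to the actual nodeinfo schema document.
href=$(curl -sf "https://$domain/.well-known/nodeinfo" | jq -r '.links[0].href')

# That document names the server software.
software=$(curl -sf "$href" | jq -r '.software.name')

if [ "$software" = "lemmy" ]; then
  echo "$domain runs Lemmy"
else
  echo "$domain runs: ${software:-unknown}"
fi
```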
Marginalia Search perhaps.
Also, these are worth mentioning:
I build a lot of tools like that, and the first thing I do is open the developer tools in my browser and observe the network traffic. When you find the resource you’re after, you scroll back and see which requests produced that URL. From those requests you work out which parameters from the original static HTML document and resources are used to construct the URL; that might require reversing some JavaScript, but it’s rare. After that you’ll have a pretty good idea of how to obtain the video resource from the original URL. Beware of cookies set along the way, as they might be needed for the subsequent requests. For building my tools I use Perl, or sometimes just Bash or a GreaseMonkey userscript, to fetch and parse the URLs and construct the desired output.
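To make that concrete, a minimal Bash sketch of the replay step; example.com, the watch path, and the grep pattern are all placeholders for whatever the network tab showed you:

```bash
#!/usr/bin/env bash
# Replay the request chain observed in the browser's network tab.
# All URLs and the extraction pattern below are placeholders.
page_url="https://example.com/watch/12345"
jar=$(mktemp)

# Fetch the original page, storing any cookies the server sets.
html=$(curl -sL -c "$jar" "$page_url")

# Pull the media URL out of the static HTML (adjust the pattern per site).
video_url=$(printf '%s' "$html" | grep -oE 'https://[^"]+\.mp4' | head -n1)

# Request the media with the same cookie jar, since the cookies may gate access.
curl -L -b "$jar" -o video.mp4 "$video_url"
rm -f "$jar"
```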
I use gitit from the Debian repositories. It’s a simple server application without a database; it uses git and pandoc. I just run gitit -f somewiki.conf and access it in the browser. For formatting you can use whatever pandoc supports, but I’ve chosen reStructuredText. DokuWiki, mentioned by others in the thread, is also a good option.
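In case it helps, a sketch of what such a somewiki.conf could look like; the key names follow gitit’s default configuration file, but the values are just examples:

```bash
# Write a minimal gitit config and start the wiki.
# Key names follow gitit's bundled default config; values are examples.
cat > somewiki.conf <<'EOF'
port: 5001
wiki-title: Some Wiki
repository-type: Git
repository-path: wikidata
default-page-type: RST
EOF

gitit -f somewiki.conf   # then browse to http://localhost:5001
```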
From reading the docs I get the impression that the client discovers torrents from the DHT, and that the same data is what other clients can search. That means it wouldn’t reveal anything about which torrents you’ve downloaded or are sharing.
Why would you run linux.exe from Linux?
That’s the last stage of being a FOSS developer.