A simple self-hosted URL shortener with no unnecessary features. Simplicity and speed are the main foci of this project. The Docker image is ~6 MB (compressed), and it uses <5 MB of RAM under regular use.
I use this in my house, it’s great. I chose it over others because it lets you define the URL path explicitly (domain.com/whatever).
I have all my PDF manuals and docs uploaded to Paperless-ngx, and from within Paperless I make them externally linkable.
I take those long nonsense links, shorten them with Chhoto using meaningful paths (like /mitersaw), then convert them all to QR codes that I print out and stick to whatever object is relevant.
Say I’m working on my chainsaw and need the manual: I point my phone at the QR code and open the manual for my exact model from my own network.
That’s pretty damn clever
That’s great to know. By the way, you don’t actually need to specify the URL path for it to work; that’s just for convenience when copying the link from the UI. It’ll work as long as the server is reachable at that address.
It’s neat that this exists, but not so neat if someone hosts it for a year, a bunch of fediverse users rely on it and share links through it, and then the host takes it down for whatever reason, leaving dead links littered all over the place.
Even less neat if some malicious group can then buy the lapsed domain and forward all those dead links to ads and viruses.
Please host responsibly, is all I’m saying.
Looks awesome and very efficient. Does it also run with `read_only: true` (with a db volume provided, of course)? Many containers just need a /tmp, but not always.

Thanks, I had never tested this before. It seems to throw errors. Adding and deleting links don’t work, but that’s to be expected. Link resolution also fails, though, since it cannot update the hit count properly. If this is a legitimate use case for you, I might work on making it work.
I try to slap `read_only` on anything I’d face the internet with, to further restrict exploit possibilities, so it would be absolutely great if you could make it work! I just follow all the requirements on the security cheat sheet, with `read_only` being one of them: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html

With how simple it is, I guessed that running as a `user` and restricting with `cap_drop: all` wouldn’t be a problem. For `read_only`, many containers just need `tmpfs: /tmp` in addition to the volume for the db. I think many containers try to confine temporary file writing to one directory precisely to make applying `read_only` easier. So again, I’d absolutely use it with `read_only` when you get the time to tune it!

Upon further testing, this does actually work. You may set both `read_only: true` and `cap_drop: all`, and it will work as long as you use a named volume. I had it mount a database file from the host system in my test config, which is why I was getting the errors. I don’t know how to make that case work, though, i.e. when the db is bind-mounted from the host system. Setting the mount to `rw` doesn’t seem to fix it.

Odd, I’ll try to deploy this when I can and see!
I’ve never had a problem with a volume being on the host system, except when user permissions are messed up. But if you haven’t given it a `user` parameter, it’s running as root and shouldn’t have a problem. I’ll test sometime and get back to you!
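Pulling the settings from this thread together, a hardened compose service might look like the sketch below. Note the image name, port, and db mount path are assumptions for illustration; check the project’s own README for the real values.

```yaml
# Sketch of a hardened docker-compose service using the options
# discussed above. Image name, port, and volume target are assumed.
services:
  chhoto-url:
    image: sintan1729/chhoto-url:latest  # assumed image name/tag
    read_only: true        # root filesystem mounted read-only
    cap_drop:
      - ALL                # drop all Linux capabilities
    tmpfs:
      - /tmp               # writable scratch space many apps expect
    volumes:
      - chhoto-db:/db      # named volume for the database (path assumed)
    ports:
      - "4567:4567"        # assumed port mapping

volumes:
  chhoto-db:               # named volume, per the commenter's finding;
                           # a bind mount from the host reportedly fails
```

Per the thread, the key detail is using a *named* volume rather than bind-mounting a database file from the host.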
Please don’t use URL shorteners: they hide any information the URL gives you about where it’s taking you. Also, most things on the internet support links where the URL is hidden behind friendly text but is still inspectable, without clicking, by mousing over it.
You’re only thinking of public use cases. For personal use, which is what most people self-hosting it would be doing, it’s very convenient. I use it at work for typing long URLs into a new computer we don’t yet have remote management of. At home, it makes it really easy to type any link with a TV remote or a controller.
So why would this need docker at all?
Makes it easier to distribute and set up
Like the other guy said, it’s not necessary. But docker makes it much easier to deploy. There are instructions to set it up without docker as well.
I find dockerization tends to make things way easier to bring up and take down with simple yet consistent configuration schemes. I distribute all my self-hosted stuff across a small cluster of machines; if I want to move a service from one to another, it’s as easy as moving the config folder and the docker-compose file. No startup scripts, no remembering installation steps after a fresh install, no worrying about Python/package versions. Plus it helps me keep track of which services are set up, so I don’t have to worry about leaving anything unused but still installed and running. And updating is as easy as pulling the images and recreating the containers.
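The portability workflow described above can be sketched roughly like this; the service name, image, and paths here are made up for illustration:

```yaml
# docker-compose.yml kept alongside the service's config folder.
# Moving the service to another machine is just copying this
# directory over, then:
#   docker compose pull     # fetch updated images
#   docker compose up -d    # (re)create containers from the new images
services:
  someapp:                      # hypothetical service
    image: example/someapp:latest
    volumes:
      - ./config:/config        # all state lives next to the compose file
    restart: unless-stopped
```

Because all state sits in the directory with the compose file, there is nothing machine-specific to reinstall or reconfigure after the move.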