Do not put this on the Internet if you do not know what you are doing.
By default this container has no authentication, and the optional environment variables CUSTOM_USER and PASSWORD, which enable basic HTTP auth via the embedded NGINX server, should only be used to locally secure the container from unwanted access on a local network. If exposing this to the Internet, we recommend putting it behind a reverse proxy, such as SWAG, and ensuring a secure authentication solution is in place. From the web interface a terminal can be launched, and it is configured for passwordless sudo, so anyone with access to it can install and run whatever they want, along with probing your local network.
I hope everyone intrigued by this interesting and potentially very useful project takes heed of this warning.
That warning applies to anything you run locally. And going further, in this day and age, I would never put up any home service without it being behind Cloudflare Access or some form of wireguard tunnel.
I've done that in the past, even for securing the admin pages of some software (there was once an issue where the admin page auth could be bypassed, so this essentially adds another layer). Combined with TLS, it's fine for getting something up and running quickly.
Of course, for the things that matter a bit more, you can also run your own CA and do mTLS, even without any of the other fancy cloud services.
After coming across a brief tutorial on mTLS in this tool for locking down access to my family photo sharing [0], I have bounced around the internet following various guides, but haven't ended up with a pfx file that I can install in a browser. Can you recommend any resource to understand which keys sign what, and what a client certificate is verified against?
The guides I find often contain the openssl incantations with little explanation, so I feel like I'm stumbling through the dark. I realize how much I've taken stack traces for granted, when this auth stuff is very much "do or do not, there is no error".
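For anyone with the same question, here's a minimal sketch of the chain (filenames, subjects and key sizes are all placeholders): the CA's private *key* signs client certificates, the server verifies a presented client certificate against the CA's *certificate*, and the .pfx is just the client key + cert bundled for the browser.

```shell
# 1. Make a CA: a self-signed key + certificate. The key signs client
#    certs later; the certificate is what the server verifies against.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=my-home-ca"

# 2. Make a client key and a certificate signing request (CSR).
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=alice"

# 3. Sign the CSR with the CA key, producing the client certificate.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt

# 4. Bundle key + cert (+ CA cert) into a PKCS#12 .pfx the browser can
#    import; the browser will ask for the export password on import.
openssl pkcs12 -export -inkey client.key -in client.crt -certfile ca.crt \
  -passout pass:changeit -out client.pfx
```

On the server side (nginx, say) you'd point `ssl_client_certificate` at `ca.crt` and set `ssl_verify_client on;` — the server never sees the client's key, it only checks the signature chain against the CA cert.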
Firefox can be configured to use Kerberos for authentication (search for "Configuring Firefox to use Kerberos for SSO"); on Windows, Chrome is supposed to do so too by adding the domain as an intranet zone.
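For Firefox that boils down to a couple of prefs (the domain here is a placeholder), set via about:config or a `user.js`:

```
// which sites Firefox will attempt Negotiate/Kerberos auth against
user_pref("network.negotiate-auth.trusted-uris", ".internal.example.com");
// sites allowed to receive delegated credentials (only if you need it)
user_pref("network.negotiate-auth.delegation-uris", ".internal.example.com");
```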
I mean, I'm aware of SPNEGO etc. It's just that it was... ignored(?) by the startups/the community/google? Whatever little support there is is comparatively a worse experience than what we've got now for no really good reason.
Kerberos is old neckbeard tech, highly complex to set up, with layers upon layers of legacy garbage. Trying to get it working is... a nightmare. I prefer even the garbagefest that is Keycloak over dealing with Kerberos. At least that just requires somewhat-working DNS and doesn't barf when encountering VPNs, split-horizon DNS or split tunnels.
The only places I've seen a working Kerberos setup outside of homelabs is universities (who can just throw endless amounts of free student labor power onto solving any IT problem) and large governments and international megacorps.
Good luck when the TCP or SSL stack has an issue. These bugs are rare but they do exist and you're getting fucked royally if your entire perimeter defense was a basic auth prompt.
Windows and Linux have both had their fair share of network stack bugs, OpenSSL had Heartbleed and a few other bugs, and hell you might even run into bugs in Apache or whatever other webserver you are using.
It would have taken several days to heartbleed your private key in 2013 if you also added fail2ban. Your home lab probably isn't on the high priority target list.
> Your home lab probably isn't on the high priority target list.
Yeah but these days with botnets widely available to hire? Everything is fair game and whatever you run gets indexed on Shodan and whatever almost immediately. The game has never been easier for skiddies and other low-skill attackers, and mining cryptocoins or hosting VPN exit nodes makes even a homelab a juicy target.
My homelab, for example, sports four third-hand HP servers with a total of about 256GB RAM and 64 CPU cores on a 200/50 DSL link. That's more than enough horsepower to cause serious damage.
Yeah, I made a mistake with my config. I had set up SWAG, with Authelia (I think?). Got password login working with 2FA. But my dumbass didn't realize I had left ports open. Logged in one day to find a terminal open with a message from someone who had found my instance and got in. Called me stupid (I mean, they're not wrong) and all kinds of things, and deleted everything from my home drive to "teach me a lesson". Lesson painfully learnt.
But before that happened, Webtop was amazing! I had Obsidian set up so I could have access on any computer. It felt great having "my" computer anywhere I went. The only reason I don't have it set up now is because I made the mistake of closing my free tier Oracle Cloud account, thinking I could spin up a fresh new instance, and since then I haven't been able to get the free tier again.
> deleted everything from my home drive to "teach me a lesson". Lesson painfully learnt.
I had a mentor in my teenage years who was the same kind of person. To this day the only meaningful memory I have of him is that he was an asshole. You can teach a lesson and be empathetic towards people who make mistakes. You don't have to be an asshole.
The lessons we learn best are those which we are emotionally invested in and sometimes that emotion can be negative, but a lesson will be learned regardless.
LOL! That's why we still smack kids' hands with a stick if they answer a question in school wrong. Because it emotionally sticks and definitely does not cause any psychological issues.
> The only reason I don't have it set up is because I made the mistake of closing my free tier oracle cloud thinking I could spin up a fresh new instance and since then I haven't been able to get the free tier again.
People are automating the process of requesting new ARM instances on the free tier [1]. You'd find it near impossible to compete without playing the same game.
I had the same thing happen to me. I tried running a script for a month without luck (Sydney region). What did work was adding a credit card to upgrade to a paid account - no issues launching an instance, and it's still covered under the free tier.
There are operations that put cryptominers into any unauthenticated remote desktops they can find. Ask me how I know... Way friendlier than wiping your data though.
I use a soft-offline backup for most things: sources push to an intermediate, backups pull from the intermediate, and neither source nor backup can touch the other directly.
Automated testing for older snapshots is done by verifying checksums made at backup time. For the latest snapshot, both ends push fresh checksums to the intermediate for comparison: any file with a timestamp older than the last backup but a differing checksum indicates an error on one side or the other (or perhaps the intermediate) that needs investigating, as does any file whose timestamp differs by more than the inter-backup gap, or one that unexpectedly doesn't exist in the backup.
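Stripped of the timestamp-window logic, the comparison step is essentially a diff of two checksum manifests on the intermediate. A toy sketch (paths and manifest contents are made up for illustration):

```shell
#!/bin/sh
set -eu

# In reality each side would generate and push a sorted manifest, e.g.:
#   (cd /data && find . -type f -print0 | xargs -0 sha256sum | sort -k2) > source.sums
# Here we fake two manifests to show the comparison:
printf '%s\n' 'aaa  ./notes.txt' 'bbb  ./photo.jpg' > source.sums
printf '%s\n' 'aaa  ./notes.txt' 'ccc  ./photo.jpg' > backup.sums

# Any line present in only one manifest is a missing file or a checksum
# mismatch; either way it needs investigating.
if diff -u source.sums backup.sums > mismatches.txt; then
    echo "manifests agree"
else
    echo "mismatch found:"
    cat mismatches.txt
fi
```

The real version also has to tolerate files legitimately changed since the last backup, which is where the timestamp comparison against the inter-backup gap comes in.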
I have real offline backups for a few key bits of data: my main KeePass file, plus the encryption and auth details for the backup hosts and process, since those can't live in the main backup without creating a hole in the source/backup separation.
But you can already have Obsidian access from any device if you set up syncing using the official method (supporting the project in the process) or one of the community plugins. Doing it this normal way avoids opening up a massive security hole, too.
* any device you have admin rights to install software on, they are talking about being able to log in from any computer, not just their own
It surprises and annoys me that obsidian, logseq, etc don't have self hosted web front ends available. I think logseq will once they wrap up the db fork, and maybe someday we'll have nuclear fusion powerplants too.
I created a personalized image with Tailscale and KasmVNC for this exact reason, ... not on a public VPS. You can find the images on my GitHub as inspiration; don't copy them directly unless you understand what you are doing.
Also note that their example docker config will allow anyone from the Internet to connect, and will even add an incoming rule to your host firewall to allow it. This is because they don't bind the port to localhost with -p 127.0.0.1:hostport:containerport (or the analog in the docker-compose config).
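A compose sketch of the loopback-only form (the image name and port 3000 are the usual Webtop defaults, but check the project README for your variant):

```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop
    ports:
      # "127.0.0.1:3000:3000" binds only to the host's loopback interface.
      # A plain "3000:3000" listens on every interface, and Docker punches
      # a matching hole in the host's iptables rules for it.
      - "127.0.0.1:3000:3000"
```

With this binding you then reach it via an SSH tunnel, WireGuard, or a reverse proxy on the same host, rather than directly from the network.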
No they won’t. Octoprint (3d printing server) had a similar warning but they had to introduce actual user accounts to secure the system because people ignored it.