Ask HN: How do you deploy your side-projects?
50 points by gerimate on Dec 13, 2022 | 81 comments
I've been looking for alternatives since the Heroku announcement.


I can highly recommend fly.io. I have been using it for my Elixir projects [0][1][2]. The interface is easy to use, and you can get very far with the free tier. Another selling point for me was that I can easily migrate away (in case I want to move to Hetzner later).
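For reference, the basic flyctl workflow is roughly this (app name is a placeholder, and the Postgres step is optional):

    # one-time setup: generates fly.toml for the app
    fly launch --name my-elixir-app
    # every subsequent deploy is just
    fly deploy
    # if the project needs a database
    fly postgres create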

[0] https://slashdreamer.com/

[1] https://articletoimage.ai/

[2] https://ogtester.com/


I’ve had a lot of downtime cuz of Fly.

1/ My certificate didn’t auto-renew and there was no warning. Others had this issue [0].

2/ Fly's free Redis host is trash. Not usable for sidekiq workloads. Others had this issue [1].

[0] - https://community.fly.io/t/ssl-certificate-did-not-renew-aut...

[1] - https://community.fly.io/t/redis-timeouts/7232


Check https://dokku.com. It's easy to set up on your VPS, supports Heroku buildpacks, and needs almost zero management (assuming you update your server from time to time). And you get Heroku-style deployment.
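Roughly, the Heroku-style part looks like this (app, domain and host names are placeholders):

    # on the VPS, with dokku already installed
    dokku apps:create myapp
    dokku domains:set myapp myapp.example.com
    # on your machine: add the server as a git remote and push to deploy
    git remote add dokku dokku@my-vps.example.com:myapp
    git push dokku main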

I've been running some projects with it for the last 5-6 years and have gone through a couple of Dokku version upgrades - zero issues so far.


Keep in mind that, depending on your use case, it will build your project on the same machine your production app runs on. A few other caveats: it depends on Docker, so a Docker update might stop your services; disk space can fill up and needs to be checked on; and plugins such as databases also need to be updated once in a while. If you're very unlucky you may cause a crash/outage by deploying an update. What I sometimes do instead is build and push container images of my apps in a pipeline on commit and then deploy those to dokku using git-from.
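Assuming that refers to dokku's git:from-image, the flow is roughly (registry, image and app names are placeholders):

    # in the CI pipeline, on every commit
    docker build -t registry.example.com/myapp:$GIT_SHA .
    docker push registry.example.com/myapp:$GIT_SHA
    # on the dokku host: deploy the pre-built image instead of building there
    dokku git:from-image myapp registry.example.com/myapp:$GIT_SHA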


Digital Ocean's managed Kubernetes offering has been great for me.

There's lots of potential complexity to K8s, but if you're not managing the cluster and you have simple workloads, the ratio of "benefits you get" to "stuff you have to learn" is pretty high. You can deploy anything that runs in a container, so it's really good for experimenting and not being tied down to one language or platform.

I can add a new service by copying and pasting a yaml file or two and it's up and running within a half hour or so.
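For a simple stateless service, that "yaml file or two" is really just a Deployment plus a Service; a rough sketch (names, image and ports are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels: { app: myapp }
      template:
        metadata:
          labels: { app: myapp }
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:latest
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - port: 80
        targetPort: 8080
    EOF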


If you are going down the k8s path and want to learn a cool config language while making it easier to deploy arbitrary things in a consistent way, I suggest checking out Jsonnet and Tanka.

This combination allows me to make use of Helm charts, off-the-shelf YAML and custom Jsonnet in a cohesive way.
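If anyone wants to try it, the basic Tanka loop is roughly (the environment path is the default one tk init creates):

    # scaffold a project with jsonnet-bundler and the k8s libraries
    tk init
    # render the Jsonnet, diff it against the cluster, then apply
    tk show environments/default
    tk diff environments/default
    tk apply environments/default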

It's probably overkill for personal projects but I picked up the stack in a previous professional role and for me it's scaled down well enough.


I'm in the middle of migrating off Tanka's predecessor ksonnet to pure Helm at $work because the migration effort was deemed too challenging :cry: Did you use ksonnet before?


I didn't. Before Tanka we had built our own Jsonnet-based system that shelled out to kubectl apply; we also had our own Jsonnet libraries.

When I started using Tanka I picked up the version of the ksonnet k8s lib that is managed by Grafana now.


What's your approach for ensuring that you've sufficiently secured your Kubernetes cluster?

I'd go down the same road as you, but I'm worried that I'd leave my cluster vulnerable as I don't know much about hardening Kubernetes. For example, I know I shouldn't run my apps as a privileged user in Docker, but overall I'm not familiar with the attack surface of a managed Kubernetes cluster.


First step if you're hosting it at home is separating your home connection from your business connection, physically. If you don't have that, do at least two VLANs.

Second step, don't expose anything unless you explicitly know what it's for. Start with everything 100% locked down, and only open up the things you know should be open. If you're just hosting a "home" cloud, nothing has to be exposed externally, so expose nothing. Otherwise go service by service, port by port, to expose things.


The Kubernetes nodes and control plane are not something I manage directly. DigitalOcean performs regular upgrades to keep the cluster at a minimum supportable and secure version. They're reasonably sophisticated, and if they can't upgrade the cluster without downtime they'll notify you about what's wrong - which is great, because I don't know much in depth about K8s administration.

The configuration you use is pretty explicit about which ports are internal and which are published externally as a service, so it's unlikely you'll get it wrong by accident. Nevertheless, you can still verify that you haven't done anything daft like opened a DB port to the internet by e.g. trying to connect to it.

Finally, there's a whole class of difficult administration, hardening and access problems that can come with Kubernetes multi-tenanted operations that you just side-step as a sole administrator/user. You don't need to worry about who has access to which services or namespaces, or what privileges they have via RBAC - it's just you. I'd want to do some really serious research before letting other users launch containers on my cluster, or execute their own code in it; but that's not one of my use-cases, so it's not a problem.


I deploy to actual VPSes, the old way. No containers. No PaaS. Just get a Linux machine with a static IP and install. I use Ansible to automate the deployment.

If I need storage of an unbounded quantity - say, for image uploads - I might bring in S3 (or an equivalent service). But other than that, just a VPS.
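The Ansible part can stay tiny; a rough sketch (hosts, paths and the service name are placeholders):

    cat > deploy.yml <<'EOF'
    - hosts: app_servers
      become: true
      tasks:
        - name: Copy the release binary
          ansible.builtin.copy:
            src: build/myapp
            dest: /usr/local/bin/myapp
            mode: "0755"
        - name: Restart the service
          ansible.builtin.systemd:
            name: myapp
            state: restarted
    EOF
    ansible-playbook -i inventory.ini deploy.yml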


Great to hear some people still follow the KISS principle. I've done the same for years and this very simple setup can scale to tens of millions of (database-intensive) requests per week.


Also a big fan of directly running my projects on VPSes. I do a lot of development on my Linux machine, so I really enjoy not having any substantial differences between my local and deployed environments.


I went through Heroku, then DigitalOcean, then GCP, AWS, Hetzner Cloud, and finally landed on a dedicated Hetzner server at €45/month

My use case is app websites (https://lunar.fyi, https://lowtechguys.com), public databases (https://db.lunar.fyi), personal APIs (for uploading files, optimizing images/videos, MQTT etc.) and websites for my relatives (https://robert.panaitiu.com)

I've had this configuration for 2 years now: https://www.hetzner.com/dedicated-rootserver/ax41-nvme

    AMD Ryzen 5 3600 Hexa-Core "Matisse" (Zen2)
    64 GB DDR4 RAM
    2 x 512 GB NVMe SSD (Software-RAID 1)
    1 Gbit/s bandwidth
On the software side I'm heavily relying on self-hosted Portainer [0] (for managing Docker stacks) and Caddy [1] (for routing web services, static file servers, static websites, barebones single Python file APIs etc.)
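The Caddy side is just a few lines per site; a rough sketch (domains, ports and paths are placeholders):

    # Caddy provisions TLS automatically for each site block
    cat > /etc/caddy/Caddyfile <<'EOF'
    app.example.com {
        reverse_proxy localhost:4000
    }
    files.example.com {
        root * /srv/files
        file_server
    }
    EOF
    systemctl reload caddy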

I usually start a side project by simply running it in a tmux on the machine. If it becomes larger, I promote it to a docker-compose file inside Portainer.

When I find myself needing to update/re-deploy often enough, I create a repo for the compose.yaml file and create a Portainer stack with that repo. That way, a `git push` can re-deploy automatically with rollback on failure.

Sometimes I might need to run tasks like building a binary or a more advanced Docker image on `git push`, so I also self host Drone CI [2] for that.

Portainer also supports Kubernetes, but I just don't feel comfortable with its complexity. I feel more at home with running `docker ps/exec/logs` to troubleshoot possible problems, or test out some ideas quickly.

[0] https://www.portainer.io/

[1] https://caddyserver.com/

[2] https://www.drone.io/


What OS (flavour) are you running? How do you do OS upgrades/patches?


Running Ubuntu 20.04.5 LTS right now. I have the `unattended-upgrades` service running for automated security patches, and I just run `apt-get upgrade` from time to time.

I'll be upgrading to 22.04 LTS tomorrow, last time I did that it went smoothly.
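For anyone wanting to copy this, the relevant bits are roughly:

    # enable automatic security updates
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades
    # pull in everything else from time to time
    sudo apt-get update && sudo apt-get upgrade
    # move to the next LTS when ready
    sudo do-release-upgrade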


I self-host on a 3-node Kubernetes cluster in my basement. It runs on three off-lease HP DL360 1U servers that I bought for about $120 each. I work heavily with Kubernetes in my day job, so this also gives me a playground for things that I may need to know about for work, or also things I'll _never_ need in my day job, but are good learning opportunities anyway.

I have an overspecced solar install on my roof that makes the electricity used by the cluster essentially free to me, so the only recurring cost is business-class Internet service.


> ...I have an overspecced solar install on my roof that makes the electricity used by the cluster essentially free to me, so the only recurring cost is business-class Internet service...

That's what I call living the dream! Nice!


I deploy to AWS with a serverless architecture and infra-as-code written in the same language as my app, using CDK.

Serverless is great for side-projects because the scale-based pricing effectively makes things free unless they really take off. And if a project ever does reach decent scale, there's a good chance you won't need to re-architect or even tune anything to keep up with traffic. If you wanted to prioritize 100% predictable fixed costs over everything else, then maybe a VPS would be a better option.


I just have a digitalocean droplet and deploy with SCP. No fuss, no muss!
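Roughly, the whole "pipeline" is something like this (host, paths, build command and service name are placeholders):

    # build locally, copy to the droplet, restart whatever serves it
    npm run build
    scp -r dist/ user@my-droplet.example.com:/var/www/myapp/
    ssh user@my-droplet.example.com 'sudo systemctl restart myapp'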


I've been using Render and Fly.io

Render is the closest thing to Heroku's ease of deployment currently. The major disadvantage is the longer build times.


If you're interested in sharing more: What determines if you choose render or fly for a given project?

What makes fly not as easy as render?

(I am contemplating moving off heroku, and these seem to be the major contenders for analogous ease of operations).


For existing Heroku projects, I chose Render. For newer projects, I mostly use Fly.io. It also depends on a few other factors, like whether the project needs Postgres, in which case I choose Fly's managed services. Fly.io also makes it easier to deploy Docker-based projects. Render is more like plug-a-GitHub-repo-and-play.

Bear in mind that I'm only comparing these two in terms of their free tiers.


Thanks! Why not Fly for existing Heroku projects?

I thought that Fly didn't actually have managed Postgres; I guess I need to look into it more!


Depends... here are some recommendations:

[1] https://dokku.com/ - Heroku but self-hosted, deceptively complex, can be a pain

[2] https://vercel.com/ - Kind of a modern replacement, very easy, but can only really run TS/JS

[3] https://cloud.google.com/run - A nice way to run any server in a Docker container, simple but probably not suitable for a front-end


A mix of docker-compose for static solutions, with containrrr/watchtower for updates.

For deployments of actual projects I've built myself, I've built my own stack, which deploys to Kubernetes via Flux2. I use a stack on top of Dagger via Drone for CI, and again a custom releaser for moving stuff from CI into Flux.

I am using wireguard to communicate with the clusters, and mostly caddy on the frontend for tls provisioning.

It is not really a fit for other people, but you can find open source solutions for all my custom bits.


Caprover on a Hetzner instance (tried alternatives like Dokku and Coolify and even just a Caddy server with a bunch of hooks but that was difficult to maintain)


Just started using Oracle Cloud Container Engine. Deploying all my side projects for free on ARM boxes, feels good man.

Previously I was self-hosted on a colo box I have using a PaaS I used to work on called Flynn - but it's about time to retire that machine, it's a 7 year-old Xeon and the power it draws has to be insane compared to the 3 ARM instances it's now running on.

The world has moved on from Flynn sadly and now it's my turn too.


Hi there! I've been working on my own blockchain project. Since my knowledge of the topic is not so vast, I had to search for some blockchain platforms that would help me launch my project.

I recently used a blockchain page called "Rather Labs" to help me launch my own blockchain-based project, and I was very impressed with their service.

The page offers a range of tools and resources, including tutorials and guides on topics such as smart contract development, token creation, and crowdfunding, as well as access to a community of experts who can provide advice and support.

Additionally, Rather Labs (https://www.ratherlabs.com) offers a range of services, including technical support, project management, and legal assistance, to help ensure the success of your project. I found the team at Blockchain Deployer to be very knowledgeable and helpful, and they were instrumental in helping me deploy my project successfully. Overall, I highly recommend Rather Labs to anyone looking to create their own blockchain-based project.


If it can be a static site, then I build the site using GitHub Actions, and then use GitHub Pages to serve it. I absolutely love GitHub Actions and the speed at which it can build my site, and the syntax of the config is bliss.
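The workflow itself is tiny; a rough sketch using the official Pages actions (the build command and output path are project-specific placeholders):

    mkdir -p .github/workflows
    cat > .github/workflows/pages.yml <<'EOF'
    on:
      push:
        branches: [main]
    permissions:
      contents: read
      pages: write
      id-token: write
    jobs:
      deploy:
        runs-on: ubuntu-latest
        environment:
          name: github-pages
        steps:
          - uses: actions/checkout@v3
          - run: npm ci && npm run build   # project-specific build step
          - uses: actions/upload-pages-artifact@v1
            with:
              path: dist
          - uses: actions/deploy-pages@v1
    EOF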

I don't really like any of the static site generators out there, so I just build an ejs site and parse it using a project I built called statictron [1]. I'm using it for a site [2] I'm building at the moment, and it's a pretty smooth and cheap deployment. I did end up pushing the files to bunny.net (an EU-based CDN), as it's surprisingly much faster than GitHub Pages. But I'm just being picky, as Pages is practically more than fast enough.

For anything that requires server side hosting, render.com is great, or digitalocean if I want something I can control more. I don't really want to buy into the whole ecosystem of things like Heroku, GCP, AWS as I like to make sure I can leave when/if they do bad things.

I'm also increasingly moving stuff back to dedicated hosting. I do try and automate as much of the deployment stuff as possible, and use docker to wrap the services.

It's actually amazing how much cheaper a dedicated server is vs cloud companies, once you're using more than a few services. I'd normally say one of the benefits of cloud is that your DevOps team should be smaller, but in the end I've found that the teams supporting the cloud infrastructure platforms end up being just as big as the ones that would be managing the dedicated servers.

[1]. https://github.com/markwylde/statictron

[2]. https://github.com/markwylde/webcodeup


My poor man's PaaS: git, ssh, systemd, a dedicated instance, Caddy, Grafana and Prometheus.

Cost per "project": domain + shared cost of the dedicated instance. Right now the dedicated instance is running 8 projects in total, I pay ~70 EUR a month, so about 8.75 EUR + .com domain for each which works out to be ~20 EUR per year, 1.6 per month. Also using Migadu for email, shared between all little projects, ~30 EUR / month (so currently 3.75 EUR per project per month).

Each project added makes cost per project lower, as they all share the same infrastructure.

The environment is set up so every service has its own systemd user service. Deploys happen with the following steps (rough sketch after the list):

- Build and test project locally

- Copy artifact/binary to server with ssh, rename file to be versioned incrementally

- Restart systemd service

- New service deployed

- If database migrations have to be run, I execute them here

- (If something goes wrong, I copy the old artifact to the "production" directory and restart the service again - poor man's rollback)

- Send a marker to Grafana that a deploy has happened so it's visible in the dashboard
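Put together, a deploy is roughly this handful of commands (names, paths and the Grafana details are placeholders; the marker uses Grafana's HTTP annotations API):

    # build and test locally, then ship a versioned artifact
    scp ./myapp user@server.example.com:/srv/myapp/myapp-42
    ssh user@server.example.com 'ln -sfn /srv/myapp/myapp-42 /srv/myapp/current'
    ssh user@server.example.com 'systemctl --user restart myapp'
    # mark the deploy in Grafana so it shows up on the dashboards
    curl -s -X POST https://grafana.example.com/api/annotations \
      -H "Authorization: Bearer $GRAFANA_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"text": "deploy myapp-42", "tags": ["deploy", "myapp"]}'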

As it stands right now, it seems to be about 13 EUR per month per project, but as mentioned before, it gets cheaper with each added service. None of them are particularly performance-sensitive or use a lot of resources. At most, I've run something like 15 services on this very machine without any problems. Everything goes through Caddy, which acts as a proxy in front of everything.

One of the projects got more popular at one point and started impacting the performance of the others, so I simply replicated exactly the same environment to a new dedicated server, where it lived by itself from then on.


I deploy to a virtual machine. I use Ansible to set up the machine and install a container runtime, then I set up Traefik + containers using Ansible and docker compose files. If I ever do need to scale, it's easy to switch to Docker Swarm, which is quite capable. It's also easy to deploy the full stack somewhere else quickly. I wish it was easier to do firewall configuration when using Docker, since it basically occupies iptables even though I'd rather use nftables. I also keep secrets out of source code and add them during deployment using envsubst and ansible-vault. One more thing: I think the Ansible documentation is quite terrible. The only saving grace is `ansible-doc -s`, which can print snippets. I don't particularly like Ansible, but I guess it's currently the best there is and it has a big ecosystem.
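The secrets handling can be as simple as this (file names are placeholders):

    # secrets live encrypted in the repo, decrypted only at deploy time
    ansible-vault view secrets.env.vault > /tmp/secrets.env
    set -a; . /tmp/secrets.env; set +a
    # render the compose file with the secret values substituted in
    envsubst < docker-compose.template.yml > docker-compose.yml
    docker compose up -d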


Would you happen to have that ansible setup on github for others to learn from? Now that docker is actually fast enough to be usable on mac, I'm leaning in the direction of a fast VPS plus docker compose for simple app deploys like you (with the option to move to swarm if growth occurs). Would be great to learn how others are automating the VPS setup itself.


Sadly I do not have it publicly available as I built this for work and the code is on an internal platform. But there are tons of similar examples on github. If you want to do the same, I mainly used the geerlingguy/docker role, the docker module for ansible and an internal module for provisioning vms.


Have you thought about using a single node Docker Swarm? That gives you benefits such as healthchecks for containers so unhealthy containers are replaced, support for rolling upgrades and some other goodies like secrets and configs managed by swarm.
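Roughly (compose file and stack name are placeholders):

    # turn the single host into a one-node swarm
    docker swarm init
    # deploy a compose file as a stack; swarm handles health checks and rolling updates
    docker stack deploy -c docker-compose.yml myapp
    # secrets are managed by swarm instead of living in env files
    printf 'hunter2' | docker secret create db_password -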


Yup, I do that as well, my ansible scripts can provision either a single node swarm or regular docker on a host, both using traefik. Mostly even swarm is overkill for us though.


Render. But, they just jacked up prices - which makes it less appealing to me.

If I had to start over, I'd just do AWS App Runner. AWS is complex, but it's reliable - you won't have to pay huge minimums or subscriptions to host a side project. And, it will (hopefully) be dependable as a host long-term.


Old-school shared hosting. 8 bucks a month and I can launch as many apps as I want. If any of them gain traction, mine are all just node apps with a react bundle - easy enough to migrate to a scalable solution if they need it. But few of them ever need it.


Google CloudRun - I haven't done extensive comparisons but it's been nearly a year and a very smooth experience.

My audio plugin is backed by CloudRun https://signalsandsorcery.org


I've got a cheap VPS where I put them running in a Tmux session behind Nginx.


Bare metal server, each side-project lives in a docker container, Traefik is set to reverse-proxy https://<blah>.shish.io to the docker container named sn-<blah>
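The per-container routing is basically just Traefik labels; a rough sketch (network and image names are placeholders, and it assumes a cert resolver named letsencrypt is configured):

    # traefik's docker provider picks the routing rule up from the labels
    docker run -d --name sn-blah --network web \
      --label 'traefik.http.routers.blah.rule=Host(`blah.shish.io`)' \
      --label 'traefik.http.routers.blah.tls.certresolver=letsencrypt' \
      registry.example.com/sn-blah:latest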

Docker containers get built with github actions and pushed to docker hub

I’ve occasionally looked into cloud stuff, but it all seems orders-of-magnitude more expensive for day-to-day use, and there’s no cost ceiling if I do something dumb and use a ton of bandwidth


How do you manage the Docker containers? Do you have any open source scripts you could share?


Pretty much the same as our goto for projects at work: Hetzner + Docker (Swarm) with some Ansible to orchestrate things

We have built some automation around cluster management over at https://github.com/neuroforgede/swarmsible.

I used to do everything in ansible, but Docker Stacks are just so much nicer to use.

In any case automation is king. I don't have to remember stuff if I can just look at some IaC Code :).


I mostly use DigitalOcean and their managed Kubernetes service. My projects tend to be self-contained in a monorepo (where my k8s manifests also live) and I use Flux* to keep my cluster in sync. I'm quite happy with it; it took a bit of time to get it set up exactly how I wanted, but now it's no effort to do releases.

* https://fluxcd.io/


Front-end deploys from GitHub/GitLab/etc to Netlify with Cloudflare CDN.

For back-end, I started with some Vultr credits and these basic VMs have better performance/cost than others I've used (including Digital Ocean), as well as having a decent feature set. Sometimes, I'll put Cloudflare in front of the backend API as well with carefully decided cache headers/behaviour.


The game engine I develop for has a hook on GitHub, which looks for commits in the format VERSION{xx.xx}. If it finds such a commit, it zips the game, compares the zip against the previous install, then computes the changes and creates shards.

These are downloaded via RAPID and - together with the previous shards - create an image of the game in the engine's local install. After that it's business as usual.


Cloudflare Workers is a big time saver for my side projects. I don’t use JavaScript/TypeScript/WASM at work, but it’s been great.


How do you deal with persistent data? What DB service, etc.?


For key/value storage there's Cloudflare KV: https://developers.cloudflare.com/workers/runtime-apis/kv/

For document storage, Durable Objects is amazing: https://developers.cloudflare.com/workers/runtime-apis/durab... and https://blog.cloudflare.com/durable-objects-easy-fast-correc...

For relational data, there's now D1 (open beta): https://developers.cloudflare.com/d1/

For bulk storage, there's R2: https://developers.cloudflare.com/r2/
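If it helps, these can all be provisioned from the wrangler CLI; roughly (names are placeholders, and exact flags vary by wrangler version):

    # key/value namespace for Workers KV
    wrangler kv:namespace create MY_KV
    # SQLite-backed D1 database (open beta)
    wrangler d1 create my-database
    # S3-compatible R2 bucket
    wrangler r2 bucket create my-bucket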


Workers has a few persistent options.

KV is the obvious one, but Durable Objects themselves have interesting persistence properties. There is a SQLite DB service called D1 landing, and there is an object storage service called R2 which is actually really nice. There is also a new queuing service which is fully persistent, even if queues are a non-traditional persistence mechanism.

However, Workers is a strange runtime and has strange billing/pricing. I wish there was better visibility into the "Bundled" vs "Unbound" pricing models - namely, I wish it could just tell you how much it would cost if you switched from Bundled (this bitching could be specific to Enterprise plans though, so YMMV).

Overall I'm pretty happy with it in a professional context. Cloudflare's network is second to none. The rest of their stuff... I could do without, but Workers and the world's best edge make up for its other shortcomings.


My static website is exposed by Nginx in a managed Linux box at $whatever_cheap_vps. Deployment is done simply with rsync.
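Roughly (paths and host are placeholders):

    # push only what changed, delete anything removed locally
    rsync -avz --delete ./public/ user@vps.example.com:/var/www/mysite/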


I use https://cloudcaptain.sh - has a CLI to let me chuck containers up to AWS without worrying about any of the gubbins

It's probably a bit prod-focussed in that it establishes ELBs/ALBs and things that cost money as standard but five stars for seamlessness


Scalingo, because EU company and hosting in EU. They are a great alternative to Heroku with awesome customer service. If I don't need a PaaS I use Hetzner (e.g. for my newsletter with Ghost). Clever Cloud is also a good EU hosting service.

I've used Render before and was happy, but I now prefer keeping all my stuff in the EU =)


Dokku on a VPS gives me an experience identical to Heroku. I have several projects running on a Hetzner 2 CPU / 2 GB VPS.

Once a day I do a Dokku backup (zipping its main folders) to an attached volume. I've done a test recovery a few times (once to Digital Ocean) and that only takes minutes.


For me, this topic falls along the same lines as distro ping pong.

I’ve tried many, but have ended up with Cloud Run. It’s been great and costs next to nothing.

Since you run within a Google Cloud Project, you get all the Google tools to use if and when needed too.

GitHub actions usually handle my deployments etc.


Hatchbox https://hatchbox.io/

I deploy to a single Linode instance with multiple projects.

Super easy interface to deploy a GitHub repo.

Auto provisions the server with nginx, MySQL, redis, etc.


https://github.com/piku/piku - dozens of projects in various languages in the same tiny VPS.


Been using fly.io - Started to move professional projects over too


Just been using azure. Their free tiers are pretty good


Vercel for NextJS frontends and Azure Container Apps (hosted k8s that scales down to $0/month in hobby or light usage scenarios) for backends.


VPS with https://caprover.com and then docker (actually colima in macOS)


Trying to fit them into GitHub Pages, behind Cloudflare, with a JSON file as a database. All deployed with a bash script.


A 5€ Hetzner VPS with nginx as a reverse proxy.
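The nginx side is a few lines per project; a rough sketch (domain, port and paths are placeholders, using the Debian sites-enabled convention):

    cat > /etc/nginx/sites-available/myproject <<'EOF'
    server {
        listen 80;
        server_name myproject.example.com;
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
        }
    }
    EOF
    ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled/
    nginx -t && systemctl reload nginx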


However, currently I'm migrating to an Oracle free VPS (4 CPUs, 24 GB RAM).


I would like to try Oracle but those offers aren't sustainable so I don't trust any of the pricing they currently have


devs don't want to work with nginx


If "devs" don't want to play with my side project because of nginx, that sounds like a problem for "devs" not for me.


What do they want to work with then? nginx is solid, easy to configure, good docs and does -exactly- what it says on the tin.


I love it; I've just heard that a lot of them just want to code, nothing more :(


Who cares? Most devs don't want to work with anything other than their "beloved" $IDE and $LANGUAGE on $OS.

IMHO this was the reason why the DevOps movement was founded in the first place - until it became "yet another" buzzword for plain old system administration...


Really? First I've heard of this. All the devs I know love nginx and it has been my go to tool for almost a decade. Although I recently switched jobs and haven't worked with anything web or deployment related for over 6 months now, so my information might be out of date.


My blog was temporarily hosted on cloud storage. It's a static site which I developed using Scully.


Nearly everything runs on "classic" server/VM-based systems based on (Debian) Linux.

* setup of the system itself via ansible

deployment depends on the kind of project

* rsync for simple (static) webpages on vhosts

* docker-compose (docker) or helm/tanka (k8s) for container-based stuff

* and again ansible for everything else :)


For a static site, I deploy it on Vercel via Github. Been great for me


I just pay the $7/mo for Heroku


Fly.io now


Heroku


fly.io these days


I choose between https://railway.app or native AWS serverless. But "native serverless" for me means only using services that scale down to 0, and that constraint can pull focus away from your actual project.


Just use GitHub Actions. Currently I do AWS + EKS + GHA. You edit, you commit, it runs tests, and GHA pushes to EKS. Rollbacks are easy - just revert. Scaling is easy - use the EKS node scaler. https://github.com/james-ransom/eks-gha-auto-deploy-fortune/...




