In your last sentence, the converse also applies to would-be trendsetters denigrating mature technologies. I think the key problem is that people in this industry are quick to play the denigration card to justify their own beliefs, instead of fostering an environment of mutual respect. Remember, a lot of the Hacker News audience, for better or for worse, is impressionable green management that isn't exactly qualified to start powering their homes with miniaturized fusion reactors, and would be better off sticking with things like relational databases and COBOL if those get the job done.
+1 tannhaeuser. I live in a more heterogeneous environment than what Docker can provide, and it seems to me that the ground Docker covers is, like you said, application packaging. This is frustrating, since in a lot of cases "applications" are only provided as Docker images. What can I do with a Docker image on something that doesn't run Docker? How do I have any assurance that when teams pull Docker images in from the wild, those images are actively maintained and not full of 0-day vulnerabilities?

I've switched to pkgsrc and highly recommend it for solarish (and Solaris), *BSD, CentOS, Debian, MinGW+Windows, macOS, and whatever else one feels like building binaries and dependencies for. The key, to me, is separating the operating system's package manager from the application dependencies, so that when operations runs its mandatory apt/pkg/yum/dnf/whatever updates, nothing breaks the application dependencies; and on the flip side, when applications need things that aren't in apt/yum/etc., those custom needs can be met. This approach also doesn't preclude using the respective OS container mechanism (zones/containerd/vmware/hyperv/chroot/etc.). We package custom internal packages in our own internal pkgsrc repo, alongside the main repository, which provides north of 10,000 packages.
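To make that separation concrete, here's a rough sketch of how it works in practice. The prefix and package path are illustrative, and this assumes a pkgsrc source checkout; check the pkgsrc guide for exact bootstrap options:

```shell
# Sketch: bootstrap a self-contained, application-owned pkgsrc tree
# under /opt/app (prefix is illustrative), without root.
cd pkgsrc/bootstrap
./bootstrap --prefix=/opt/app --unprivileged

# Application dependencies then build and install into /opt/app only,
# using the bmake that bootstrap installed there:
cd ../../databases/postgresql    # example package path, adjust to taste
/opt/app/bin/bmake install

# The OS package manager (apt/yum/pkg/...) never touches /opt/app,
# so mandatory OS updates and application dependencies stay decoupled.
```

The same trick works per-application: one bootstrapped prefix per app if you want hard isolation between their dependency trees.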
And the mainframes that run our banks, transportation systems, healthcare, public safety... etc. Use the right tool for the job, and price it against what the market will bear. Pacemakers and insulin pumps driven by npm updates? *shudder*
Tech pop-culture... Navigating these technology growth explosions is like searching for solid reference architecture in a booming shantytown. Some parts of these settlements eventually get things like running water, working sewage, and urban planning.
One thing I heavily enjoy about monorepos (I'm talking Java/C#/C++ projects) is the ability to navigate the entire codebase from within an IDE. That alone has caused me to migrate projects (medium-sized, ~20 developers) from poly- to monorepos, dropping tons of build-system duplication in the process. I can think of good reasons to split projects along boundaries when it makes sense, but not blindly by default, and not without carefully considering the tradeoffs.
Hear, hear, yowlingcat. The article is way too prescriptive and, agreed, borders on irresponsible. The monorepo vs. polyrepo argument is far too broad a subject for generalized stereotypes like this. These opinions are sadly taken as fact by impressionable managers, new developers, etc., and have cascading effects on the rest of us in the industry. Use what makes sense for the project, environment, and team; don't just throw shade at teams who are successfully and productively using monorepos where they make sense. Sure, there is sometimes good reason to split things up along boundaries (breaking out libraries, RPC modules, splitting along dev-team boundaries, etc.), but not blindly by default. Will Torvalds split the kernel into a polyrepo after reading this article? Something tells me that would be a bit disruptive.
It's interesting that you talk about "teams using monorepos". I think that's different from what the article is arguing against, which is an entire company (100+ devs) using a monorepo.
A team with 5 services and a web front-end in a single repo is doable with regular git. It's a different beast I think.
Thanks, softawre. What triggered me is the sensationalist title and the general bashing of monorepos (which a large percentage of impressionable readers will walk away from this article believing, i.e., that monorepos are only for dummies and you're doing it wrong if you're not using a polyrepo). A less inflammatory title would have been something along the lines of "Having trouble scaling development of a single codebase among hundreds of developers? Consider a polyrepo". This argument comes up in developer shops almost as often as emacs vs. vi or tabs vs. spaces.
When you have 100+ developers on a project, managing inbound commits/merges/etc. becomes tedious if they're all committing and merging into one effective codebase.
IMHO, it depends on the project, the team makeup, the codebase's runtime footprint, etc., whether, or when, it makes sense to start breaking a codebase up into smaller fragments, or, on the other hand, to vacuum the fragments up into a monorepo.
I did enjoy reading the comment from Mozilla's Steve Fink (it's the top response on the OP's Medium article) and his counterarguments about monorepos vs. polyrepos in that ecosystem (also clearly north of 100 developers). It's easy to miss if you don't expand the Medium comment section, but it's very much worth reading.
At the risk of tarnishing my reputation amongst the Hacker News Docker/Kubernetes hype-cycle elite, have an upvote. The I.T. industry in general is funny. New technologies come and go like pop stars: Docker == Ke$ha, Kubernetes == Ice Cube, Triton is Fred Astaire. They all have their off moments. I personally like my platform stable, performant, secure, and boring. If I spent all of my time keeping up on the latest trends in how to spin up machines, I'd have little time to work on the actual product. Something good will come out of this influx of cash, marketing, and cloud sales, eventually. Fits and starts. /me goes back to coding and deploying on Triton, while patiently watching the Docker/Kubernetes show.
I'm still fighting the sneaking suspicion that putting Kubernetes et al. out to the general public with such a fast release cycle was just a genius play by the big cloud vendors to acquire customers, who will realize that running this stuff on premises isn't as cheap as they thought once they do all the math: security, training, operational expenses, personnel, the churn from docker -> moby -> rkt -> gvisor -> firecracker -> now VMs, and so on. This current tech wave is kind of disheartening. Everybody is focused on hosting... can we not get Ice Cube to play a show for the folks who are pushing the envelope with technology applied to the medical field, or to saving the environment, yo?
Triton on-prem is a snap. Boot the headnode from USB, boot the cluster nodes from USB+PXE, and let's get to kicking ass, fighting the good fight, focusing on real groundbreaking applications.
*edit: I'm still a little butt-hurt after having Kubernetes rammed down my throat in a large enterprise environment. Apologies to those who are fighting the good fight with Kubernetes; I know you're out there, and big high five :)
Love SmartOS! IMHO zones set the bar on containers back in the 2000s, and SmartOS is the logical evolution of that lineage. It's truly a Ferrari. However, it needs a paint job to accrete a community. If you want a cohesive system and want to become a PaaS provider, there's Triton/SDC. But at the medium and low end, I think SmartOS needs something like Proxmox to compete, and Project FiFo appears to be having a hard time filling that role. Any takers?
Let me clarify: Proxmox (or the vSphere web UI, for example) would be a fantastic management UI on top of the SmartOS engine, one that would bring some gravity to accrete a userbase. Some organizations are just more comfortable with UI tools. Sure, your hairy-knuckled SAs and programmers will favor maintaining version-controlled manifests for zones and keeping things highly automated with terraform and the like, but there's a huge base out there that will download SmartOS, stare at the console prompt, and lose interest unless they can treat it like a vSphere or Proxmox machine, the way they're used to and trained on. Project FiFo, frankly, needs some good-natured competition to breed a better product in this space.
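For the manifest-driven crowd, a zone on SmartOS is basically just a JSON file fed to vmadm, which is exactly what makes it so easy to version-control. A minimal sketch (the image UUID and addresses are placeholders; check vmadm(1m) for the authoritative field list):

```json
{
  "brand": "joyent",
  "alias": "web0",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "quota": 20,
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "10.0.0.50",
      "netmask": "255.255.255.0",
      "gateway": "10.0.0.1"
    }
  ]
}
```

Something like `vmadm create -f web0.json` brings it up; keep the JSON in git and you have the automated workflow. The UI crowd, though, wants to never see that file, and that's the gap a Proxmox-style front-end would fill.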
Does this problem play out in respect to experimental medicine I wonder?
j2ee+oracle == aspirin? isomorphic reactjs+node == S100A9 vaccine?