I agree with you except the last bit - I want a giant language with professionally developed official libraries. Not some junkies slapping half-baked libs together. There is so much calm and peace that comes with std libs. It's not mutually exclusive. You can have a well-crafted, large std lib and still use external libraries. Nothing is stopping you from BYOL.
I would also want some kind of typing support, à la Pydantic, built in. If for nothing else, I would like to know wtf this function is taking in as args.
Can we also run this in a browser!? One can dream.
I agree with you that it's great to have a comprehensive stdlib but I think the process should be careful and gradual. Python's stdlib has a lot of stuff that is not really best-in-class or consistent in design. I think with today's ubiquity of package managers, we could get away with a smaller stdlib initially, and take time to validate libraries with real world usage before blessing them with official status.
Most languages would probably benefit from having an “experimental for the standard lib” package that doesn’t guarantee perfection, and might get deprecated and then abandoned before making it to the stdlib. But all new features would go through that library first.
That experimental library could even be backed by other packages, so you could iterate on the interface while allowing other packages to continue to improve the core. But the assumption is for any clearly solved problem, these “batteries” provide one recommended way to solve the problem, and as of now it’s either the only or at least not the worst way to do it.
Claiming that a language user should be prepared for certain parts of the language to disappear is a perfectly reasonable idea. But in practice it does not work. Either a thing can be used or it can't: if it can't, it's a waste of time; if it can, it must be supported.
Once people use a library, there will be pressure to support it forever and the mindshare switch is a real cost. A language bleeds users every time it drops a feature. The Python namespace is still polluted by documentation and tutorials talking about how to do things in v2 too.
Rust seems to handle it ok. Features can spend a long time being nightly-only gated behind feature flags, while nevertheless seeing real world usage from people willing to brave the risk and churn.
But Python has a very different release schedule. Rust cuts a nightly build every day and a stable release every six weeks, while Python only releases once a year. I don't think it could adopt the same model.
>Claiming that a language user should be prepared for certain parts of the language to disappear is a perfectly reasonable idea. But in practice it does not work
Well, it can. That's what you get when there's no official package and you depend on third party packages that change all the time (e.g. common in the Node ecosystem, in GTK+ land, etc.), or are abandoned, etc.
Compared to that, a scheduled, anticipated process for candidate packages to include in the language (or remove eventually) is much easier.
Indeed, but from your own original comment - we have hindsight. So we can build an excellent std lib from the get go.
There are other bundles that are very popular for a reason. Conda. That's all you need to know. People want stability of dependencies, not the wild wild west of pulling libs from random GitHub repos.
IMO the bundling and curation is the least interesting part of Conda. What's really nice are the build infrastructure, quality checks and proper dependency bounds (looking at you, PyPI). Something like R's CRAN already has those without bundling.
And now we have Mamba, which is just like Conda. Except it has a better-documented (and more open) tooling ecosystem. And it lacks the debilitatingly slow/unreliable dependency resolution of Conda.
I'd instead aim for a language that supported interface/trait/typeclasses with a huge standard library of those that allowed writing uniform benchmarks and unit tests to every implementation that shipped simple implementations for some of those and made it easy to use different implementations and experiment with them.
This seems like solving just dependency injection instead of trying to solve every library ever, which seems a much more realistic problem to solve correctly, or at least in a way better way.
Every common X means thousands of complex things. There are over 9000 RFCs (insert Vegeta meme). Even more popular APIs, libs, algorithms, whatevers.
Even basic support would be near impossible. A constant race between what constitutes basic and releasing it. (Plus upkeep/maintenance, as things continue to evolve at a rather fast pace.) And if the language enjoys at least a minimal level of success, then a lot of non-stdlib packages are likely to become standard by convention.
It's no wonder that new languages nowadays start with minimal stdlibs.
(I just noticed that there's an SMTP server in Python3! It's ... great, I guess. But there's no DNS resolver library. So either the SMTP client (and now server) packages do not handle anything related to DNSSEC / DANE / TLSA / MTA-STS / DKIM / DMARC and others, or they bundle it. Sure, maybe it's enough to outsource these to the upstream SMTP server and the SMTPd package is just for "testing" ... but, this just shows how arbitrary "commonness" is.)
I think a lot of the benefits can be achieved by making more sophisticated types a first class citizen in the language. So for example things like date/time, intervals, and geometries are just another type. And you use normal operators to interact with those types and invoke algorithms.
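Python's datetime module already gives a small taste of this: dates and intervals are ordinary values composed with ordinary operators rather than through a bespoke API. A minimal sketch:

```python
from datetime import date, timedelta

# Dates and intervals as first-class types, manipulated with
# the same operators used for numbers.
start = date(2021, 3, 1)
interval = timedelta(days=14)

end = start + interval              # '+' shifts a date by an interval
assert end == date(2021, 3, 15)
assert end - start == interval      # '-' recovers the interval
assert start < end                  # comparisons just work
```

The argument above is to extend this treatment to more types (geometries, units, etc.) so that the operators, not a sprawl of library-specific methods, carry the semantics.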
Visual Basic .NET is your perfect beginner language, with a rich built-in framework and standard library. Ironically, it likely fell for the same reason discussed in this article: it got a lot more complex over time.
I would also like pip and packaging "fixed", which, from my standpoint as a Python software user, is "broken". The three I rely on are in a constant cycle of breaking each other.
...although that might all be the fault of aws-cli. Which by the way doesn't install on a vanilla Ubuntu 20.04 ami via apt which is just... really really bad.
The AWS CLI continues the inexplicably horrible AWS interface that makes me shake my head at people lauding their might, along with their console.
The console rewrite... anyway, pip seems to be a headache, certainly not as good as gems.
I love how you can tell which parts of aws-cli were written by different teams because of how the inconsistencies are consistent within a certain domain.
I'm thinking ReasonML or OCaml might be able to take on this role.
I was sad to hear about the change to ReScript though. ReasonML targeting the browser and native seemed nice to me, although I do agree the documentation situation and DX wasn't ideal.
Hindley–Milner type systems are a great way to introduce typing without making beginners afraid of it. The inference makes things just work, the way a beginner might expect from the computer.
I'm not sure I care about the "run this in a browser" part (unless someone developed a WASM backend for it). And FOSS would be a requirement for me. Given those criteria, I would gladly pay a recurring subscription or donation for such a language.
For what it's worth, there are a lot of good interesting languages out there that aren't Python and have pretty good library ecosystems. Kotlin, Elixir, and Elm come to mind. But they don't exactly fit the Python niche.
What you're describing sounds to me like a modernized Common Lisp with commercial backing. Rework CLOS a bit, clean up some of the other crufty old weirdness, and give me a big selection of high-quality well-supported libraries like Numpy, FastAPI, Trio, etc.
The advantage of doing it in some Common Lisp derivative is that things like React JSX are pretty much "free" and wouldn't much seem out of place, whereas in Javascript they need a whole separate build/compilation step and are often portrayed as black magic fuckery.
edit 1: Arguably Julia could be used for a task like this, but its library ecosystem and language runtime are heavily skewed towards numerical computing and data analysis.
edit 2: Raku also sounds like a possible candidate here, but it obviously lacks widespread adoption and last I heard it fell significantly short of Perl 5 performance. I'm not sure what kind of library ecosystem it has nowadays and I doubt we're going to see big commercially-backed library development efforts for it, soon or ever. Perl 5 has a huge ecosystem of course, but it's such a nasty warty language with nasty warty tooling.
If you like Raku, and the large Perl 5 ecosystem, you might consider going the path of Raku, as it allows 99.9% of the Perl 5 ecosystem to be used as if it is Raku, by installing the Inline::Perl5 module. Once that is done, using a Perl 5 module is as easy as adding a :from&lt;Perl5&gt; adverb to the use statement.
I use Clojure for that reason. You can use Python libraries with it as well as Java libraries and JavaScript libraries. Its reach is just phenomenal. And indeed, its React support is amazing.
Right. It clearly doesn't have a strong enough offering at the moment to attract anyone beyond a few early adopters.
> last I heard it fell significantly short of Perl 5 performance
Likewise. I'm pretty sure Perl still roundly trounces it for a lot of operations, especially in regex processing. I think it will need to be significantly faster than Perl before it will have a chance at gaining significantly more adoption.
> I'm not sure what kind of library ecosystem it has nowadays
The raku.land directory has fewer than a thousand packages, and many seem to be jokes, unloved, poorly tested and documented, version 0.x, or even 0.0.x. [0]
The standard library is a different ballgame. It's generally well designed and very large, cutting out the usual blizzard of folk reinventing the wheel for basics. So that's good.
> I doubt we're going to see big commercially-backed library development efforts for it, soon or ever.
I agree.
> Perl 5 has a huge ecosystem of course, but it's such a nasty warty language with nasty warty tooling.
This is where I think Raku's strength lies.
A lot of the effort that went into Raku was to build a platform that could just directly use tooling and libraries of other PLs while insulating developers from their downsides.
That way you get to use Raku tooling, syntax and semantics at the surface level with other PLs' tooling and modules and performance. Kinda like using C code in a non-C PL, but generalized to using code from any other PL in Raku.
The exemplar was supposed to be Perl, but it was also supposed to extend to many other PLs. Starting 7 years ago this became the Inline family.[1]
Inline::Perl5 is the most mature. Imo it's remarkable. But it was always supposed to just be the first one to get the polish, to demonstrate that the approach works and works great. About a year ago the author behind it said it was nearly time to do the same for Python modules. And now I see a GSoC proposal to do that.[2]
The Inlines hide the warts (though you do have to read the modules' documentation to know what their APIs are, etc.).
I think the impact of Inline Python will be huge because it takes away the complaint about reading Perl documentation and code. Instead you read Python API docs but get the benefits of using Raku.
(And, in the wings, there's Inline::Ruby, Inline::Lua, and so on.)
Somewhere I have posted code that uses Inline::Perl5, Inline::Python, Inline::Ruby, and I think maybe Inline::Go to convert Base64. (Perhaps other language interfaces as well, I don't quite remember.)
While there were some quirks, it worked rather well.
>Not some junkies slapping half-baked libs together.
This is offensive. Whether the people who work hard to make and share (for free) half baked libraries are addicted to drugs is really not part of the main thrust of your argument. You could leave it out, save words, and not have distracting comments from people like me while still making your point.
Are you seriously taking this literally? I didn’t even think this deep when writing it. Honestly! Lighten up.
No one in their right mind thinks of people building free libraries on GitHub as anything but a positive thing. Come on. I am offended that you’re offended by this.
Usually, I would apologize for saying something offensive, but I guess someone will always get offended on the internet. If I said “Wild Wild West” then maybe some Texan would be offended by it.
Intention matters. Smile more and be tolerant, no offense :-)
It’s 2021. The Earth is heating up, we have a cult member who believes in baby eating pedophiles in the USA government, and people are still dying from their inability to afford insulin. It’s not that name calling will be taken seriously by everyone, but thoughts like that are distractions we can no longer afford to think about. Put your energy towards ensuring there’s a future worth worrying about.
I honestly can't tell if it's sarcasm or not. If I was reading it on some comp.* newsgroup I'd say the former, but it's 2021 so I consider the possibility you might be serious - but I hope not.
I agree about the stdlib. There was a reason that xkcd had the "import gravity" joke. One of the reasons I ditched perl was having to evaluate so many different perl libraries, which would do anywhere from thirty to eighty percent of what I needed, and tended to come with some really exciting dependency chains.
I know the mantra is that the standard library is where things go to die, but I believe that to be cultural. I still have code running somewhere, probably, that uses the cgi library, which was perfectly well-suited for a Windows server where the page in question got an average of seven hits a day, never to scale much beyond that (you'll have to trust me on this; for amusement I reviewed ten years of logs for just that page to find nothing out of the ordinary, and there are reasons it would not need to "scale" in the future).
Each library and each string-formatting method and so on is a choice, and those choices impose a lot of speedbumps and cognitive friction.
Anecdotally, this approach has been replicated by Go. The stdlib in Go is surprisingly complete. I’ve seen several Go developers cite this as an advantage: being able to write many apps without adding dependencies external to the language runtime.
I definitely think they’re onto something. Dependencies are a hard problem to solve.
Modern build tools like gradle work great at resolving deps, until they don’t and at that point we’re in for a world of shock at just how complex dependency resolution can be in large projects.
Is there a specific advantage to running in the browser versus running everything at the OS level? I've always been a little confused why our abstractions went all the way up to the browser. It seems that you could push a lot of browser functionality down the stack, simplify a lot of architectures, and remove a lot of cruft we don't need.
Deployment is easier. Upgrade is easier. Access control, restrictions, permissions; you don’t have to add a user account to namespace/sandbox a process in browser tab.
The facilities provided by a browser are almost (but not quite) a complete superset of those provided by an OS. File I/O, peripheral I/O, etc. can all be done via the browser. Additional facilities are provided too (security/sandboxing).
Right, but you could implement all of that at the OS level too, which, to a large part of my understanding, is what things like Docker are built on: changes to the kernel to allow sandboxing at the kernel level, versus doing it in user space (which browsers could theoretically implement with syscalls now?). Wouldn't that reduce a lot of overhead? Or are browsers faster because they stay in user space and avoid context switching?
I don’t think so - OS’s have been around much longer than browsers. People have been trying (without anywhere near the success of browsers) to do cross platform deploys for almost as long as OS’s have existed.
Now a common model has arrived (dom + js) allowing multiple browsers to just run your app.
I’m not sure browsers are faster (there was a thing called the Pepper API in Chrome, and NPAPI or something in Firefox, that allowed native code to run - I’m not familiar with the space).
I’m not sure browsers are generally faster than native OS binaries, but they are an easy to deploy to environment that doesn’t need syscall emulation like freebsd can do for running linux binaries.
Say not browsers, but Linux becomes the dominant OS on all platforms, not just embedded and server. Does that move the most targetable platform back to the OS level instead of the browser level, if we go by the metric that human interaction and ease of use are weighted more heavily than all possible machine interactions?
You still have to click on the update button for a browser, and as far as apps on your machine go, they could just check for updates next time the application is launched like current web apps/web sites do. What you are saying is not impossible for the OS to take back. It seems more like there just isn't momentum to push browser features down the stack back into the OS and I'm not sure why other than browsers already do it, so why backport it.
What do you mean by it? The browser itself? In that case, everything done in the browser could theoretically be pushed down a level from user space into kernel space, potentially saving a lot of overhead, no? The only trade-off I imagine is the context switch you hit from doing everything in user space, but as far as I'm aware, even browsers have to eventually hit the kernel for at least raw input/output on the network.
Why not let the OS do more and more of the complicated technical stuff, simplifying the deployment of web apps because now we don't have to know a thousand different bloated libraries, just a slowly updating set of OS APIs.
Libraries in Python are hell. And Stack Exchange can be almost pedantically useless in these things. For example, let's take the OpenCV bindings for Python.
Three separate libraries provide it, all of which are imported as
import cv2
Yes, Stack Exchange, I know OpenCV can do this for me, but integrating that into my package dependencies is time down the drain which I am never getting back.
> There is so much calm and peace that comes with std libs.
This, and library documentation. It's probably the single practical issue I have with D and Nim. They seem to have fairly robust standard libs, but when I look for something specific it always turns out the feature is missing, outdated, or badly documented.