In fact, to support your point, it is perhaps questionable from first principles whether 3 dimensions rather than 4 is right. Leaving aside tetrachromats and the (partially) color blind, normal human light perception uses 1 kind of rod and 3 kinds of cone (i.e. 4 photoreceptors, plus some light-sensitive ganglia that don't seem to participate in vision, only in diurnal regulation).
So, sure, this "4th dimension" (for normal trichromats) might be as simple as "candelas" - truly orthogonal - but one does hear an awful lot about "ambient" or "candela contrastive" (a term I just made up) kinds of effects. (EDIT: e.g. color calibration of projectors in dark rooms vs. living rooms, but I'm sure there are many more examples.) I am just one person, but it feels like brightness in candelas matters for color perception. So, maybe luminous intensity is not actually exactly orthogonal. Maybe this is all covered in the 1931 CIE documents, though.
To support your point, it's also used by basically every shared library / DLL system. While usually used "for code", a "shared pure-data library" has many applications. There are also 3rd-party tools to make this convenient from many programming languages: HDF5, https://github.com/c-blake/nio with its FileArray for Nim, Apache Arrow, etc.
Unmentioned so far is that default limits on live memory maps are usually much higher than default limits on open files. So, if you are careful to close files after mapping them, you can usually get more "range" before having to raise OS/distro defaults. (E.g., for `program foo*`-style work where you want to keep many foo* files mapped at once, like binding them to many read-only NumPy array variables.)
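A small Python sketch of the close-after-map pattern (the temp file and its contents are just for illustration; the `prot` argument is POSIX-only):

```python
import mmap, os, tempfile

# Make a small file to map (illustration only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

fd = os.open(path, os.O_RDONLY)
mm = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
os.close(fd)          # descriptor released immediately...
data = bytes(mm)      # ...but the mapping stays readable
mm.close()
os.unlink(path)
```

After `os.close(fd)`, only the live mapping counts against limits like `vm.max_map_count`, not the open-file limit (`RLIMIT_NOFILE`).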
As @SkiFire correctly observes[^1], off-by-1 problems are more fundamental than 0-based vs. 1-based indices, but those two conventions still differ enough in practice that some way to discriminate between them is needed.
For many years (decades?) now, I've been using "index" for 0-based and "number" for 1-based, as in "column index" for a C/Python-style [ix] vs. "column number" for a shell/awk/etc.-style $1, $2. Not sure this is the best terminology, but it is nice to have something consistent. "Offset" has points in its favor for 0-based: it starts with "off", and its initial letter "o" even looks like the zero of a range. So, "offset" might be better than "index" for 0-based.
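In code, the convention might look like this (the `column` helper is hypothetical, just to show the 1-based "number" view):

```python
line = "alpha beta gamma"
cols = line.split()

# 0-based "column index", C/Python style:
assert cols[0] == "alpha"

def column(n):            # hypothetical helper: 1-based "column number",
    return cols[n - 1]    # awk's $1, $2, ... style

assert column(1) == "alpha"   # number 1 <-> index 0
```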
If it helps anyone explain the SkiFire point any better, I like to analogize it to an I-bar cursor vs. a block cursor for text entry. An I-bar is unambiguously "between characters" while a block cursor is not. So, there are questions that arise for block cursors that basically never arise for I-bar cursors. When just looking at an integer like 2 or 3, there is no cursor at all. So, we must instead rely on names/conventions/assumptions with their attendant issues.
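Python's string slicing happens to illustrate the analogy: slice bounds behave like I-bar cursors sitting between characters, while plain indexing names a cell like a block cursor:

```python
s = "abcdef"

# Slice bounds are I-bar cursors: they sit *between* characters,
# so an empty slice at position 2 is perfectly well-defined.
assert s[2:2] == ""
assert s[2:4] == "cd"          # cursors at 2 and 4 bracket 2 chars
assert len(s[2:4]) == 4 - 2    # lengths are just cursor differences

# Plain indexing is a block cursor: it names a cell, and off-by-one
# questions ("is the end included?") immediately arise.
assert s[2] == "c"
```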
To be clear, I liked the SkiFire explanation, but having multiple ways to describe/think about a problem is usually helpful.
Nit pick: in only a few human languages do the ordinal words start at 1.
In most modern languages, the ordinal numbers start at 2. In most old languages, and also in English, the ordinal numbers start at 3.
The reason for this is that ordinal numbers are a relatively recent creation - only a few thousand years old.
Before that time, there were special words only for certain positions of a sequence, i.e. for the first and for the last element and sometimes also for a few elements adjacent to those.
In English, "first", "second", and "last" are not ordinal numbers, but they are used for the same purpose; more accurately, the ordinal numbers came to be used for the same purpose as these words, since the ordinals were added later.
The ancient Indo-European languages had a special word for the other element of a pair, i.e. the one that is not the first. This word was used for what is now named "second". In late Latin, the original word meaning "the other of a pair" was replaced by a word meaning "the following", which eventually reached English, through French, in the form "second".
Meta nit pick: You are conflating linguist's jargon with mathematician's jargon.
In much the same way as physicists co-opted common words (e.g. "work" and "energy") to mean very specific things in technical contexts, both linguists and mathematicians gave "ordinal" a specific meaning in their respective domains. These meanings are similar but different, and your nit pick is mistakenly asserting that one of these has priority over the other.
"Ordinal" in linguistics is a word for a class of words. The words being classified may be old, but the use of "ordinal" to denote them is a comparatively modern coinage, roughly contemporary with the mathematicians' usage. Both come from non-technical language about putting things in an "orderly" row (cf. related uses such as "public order", "court order", etc.), which did not carry the load you are trying to place on them.
There is "zeroth" though as an ordinal numeral, which was already in use long before computers came around, for example in "the zeroth power of a number" (according to Merriam-Webster). So it's still not quite unambiguous. :)
I don't use D - I find Nim has even lower ceremony. That said, it's hard for me to understand how "getting into gcc" counts as failure. The list of languages with a gcc frontend is very short. People can be very parochial, though. They probably mean something pretty shallow, like "it failed to convert me, personally", or "there are no jobs I'd like to apply for", etc.
Maybe people should instead talk about how they use D or what they'd like to see added to D? In an attempt to be the change one wants to see, I'd say named arguments are a real win and there seem to be some stalled proposals for that in D last I checked.
This can very clearly be said about C++ as well, which may have started out as C With Classes but became very kitchen sinky. Most things that get used accrete a lot of features over time, though.
FWIW, I think "standing out" due to paradigm commitment is mostly downstream of "xyz-purity => fewer ways to do things => have to think/work more within the given constraints". This then raises various other important questions, of course. E.g., do said constraints actually buy users value exceeding their costs, and if so, for which user subpopulations? Most adoption is just hype-driven, though. Not claiming you said otherwise, but I also don't think the kind of standing out you're talking about correlates all that well with marketing. E.g., browsers marketed JavaScript (which few praised for its PLang properties in early versions).
Yeah. I've been doing this for almost 10 years now. It's not APE/Cosmopolitan (which also "kinda works" with Nim, but has many lowest-common-denominator platform-support issues, e.g. posix_fallocate). However, it does give you very cross-Linux portable binaries. Maybe even beyond Linux.
Some might appreciate a concrete instance of this advice inline here. For `foo.nim`, you can just add a `foo.nim.cfg`:
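One plausible sketch, assuming static linking against musl with a `musl-gcc` wrapper on your PATH (the exact flags are an assumption; adjust to your toolchain):

```
--gcc.exe:"musl-gcc"        # compile the generated C with musl's wrapper
--gcc.linkerexe:"musl-gcc"  # ...and link with it, too
--passL:"-static"           # ask for a fully static binary
```

Then a plain `nim c foo` should emit a static binary that runs across Linux distros regardless of their glibc version.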
Besides `ar`, as a sibling comment observed, you might also be thinking of pixz - https://github.com/vasi/pixz - but really any archive format (cpio, etc.) can, in principle, just put a stake in the ground and have its last file be any kind of binary index/directory file, like Zip does. Or it could hog a special name like .__META_INF__ instead.
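A toy sketch of the "trailing index" idea (an illustrative format of my own, not pixz or Zip): concatenate the file bodies, append a JSON directory, and end with an 8-byte length of that directory so a reader can find it from the tail of the file:

```python
import json, struct

def pack(entries):                       # entries: {name: bytes}
    body, index, off = bytearray(), {}, 0
    for name, data in entries.items():
        index[name] = [off, len(data)]   # record offset and length
        body += data
        off += len(data)
    idx = json.dumps(index).encode()
    return bytes(body) + idx + struct.pack("<Q", len(idx))

def unpack(blob):
    n, = struct.unpack("<Q", blob[-8:])  # directory length from the tail
    index = json.loads(blob[-8 - n:-8])
    return {k: blob[o:o + l] for k, (o, l) in index.items()}

arc = pack({"a.txt": b"hi", "b.bin": b"\x00\x01"})
```

Real formats add magic numbers, checksums, and compression, but the "seek to the end, read the directory" access pattern is the same.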
One aspect of the question is that "permissions" are mostly checked at open time, and user code should check for failures then. This was a driving inspiration for the tiny, 27-line C virtual machine in https://github.com/c-blake/batch that lets you, e.g., synthesize a single call that mmaps a whole file https://github.com/c-blake/batch/blob/64a35b4b35efa8c52afb64... which seems like it would also have helped the article author.
This description matches my own experience. E.g., I recall having to use my own macro-based syscall() wrappers when the inotify system was first introduced, because glibc had no support for years, and then it was years more before slow-moving Linux distros picked up the new glibc version.
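The workaround pattern looks something like this (sketched via Python's ctypes rather than C macros, and using gettid as a stand-in example; 186 is `__NR_gettid` on x86-64 Linux only - an architecture-specific assumption):

```python
import ctypes

# Grab libc's generic syscall() entry point.
libc = ctypes.CDLL(None, use_errno=True)

SYS_gettid = 186   # x86-64 Linux; other architectures use other numbers

# gettid(2) had no glibc wrapper until glibc 2.30, so for years the
# only option was invoking syscall() with the raw number, as here.
tid = libc.syscall(SYS_gettid)
```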
Unsaid was that much of this project separation comes from glibc being born as (and probably still being) a "portable libc with extra GNU-ish features", not a Linux-specific thing.
Honestly, some of this pain might have been avoided had the Bell Labs guys made two libraries - the syscall-interface part of `libc`, called say `libos`, and the more articulated language run-time (strings/buffered IO/etc.) as the actual `libc`. Then the kernel could "easily" ship with libos and libc's could vary. To even realize this might be helpful someday likely required foresight beyond reason in the mid-1970s. Then, afterwards, Makefiles and other build system stuff probably wanted to stay with "-lc" in various places, and then glibc/others wanted to support that, and so it goes. Integration can be hard to undo.