>The author has never used APL. It's a chicken and egg problem. (...) All testing has been done with tryapl.org.
There are also a few free software implementations of APL, most notably GNU APL. I have been sticking to that while learning APL. https://www.gnu.org/software/apl/
I've personally found GNU APL to be a bit flimsy, and some of the design choices they made to be a bit odd. It also deviates from the ISO standard, if that matters to you.
Dyalog is the only implementation that is robust, production-ready, and still actively maintained. I would suggest learning with something other than the GNU implementation.
Dyalog also deviates from the ISO standard, adding things from J like hooks and forks. IIRC, GNU APL's deviations are to bring it closer to IBM APL2, but it's been a while since I cared about any of this.
I am learning using the book "APL with a Mathematical Accent", and so far all the examples work fine (I am on chapter 3). I'll probably switch to learning J if I start running into problems. The GNU page says "implementation of ISO standard 13751", so if it's not standards-compliant, that should probably be filed as a bug.
I did request a copy of Dyalog but they rejected my application (probably because I didn't add my address?). I don't have a lot of enthusiasm for learning using a proprietary implementation of a language anyway.
That reads to me like they are aiming for standard compliance but the implementation is still incomplete, which is quite different from deliberate deviation from the standard.
I wouldn't say that Go's iota is modeled after APL's iota. They share the same name, but that's really where the similarities end. APL's iota is more like the one-argument form of Python's range function.
They are both used to generate a sequence of consecutive integers, although Go's iota starts at 0 whereas APL's starts at 1.
"Ken suggested iota for the counter and, since all three of us (Ken, Robert, Rob) had implemented APL interpreters, it seemed a perfect thing to us." - Rob Pike
> They are both used to generate a sequence of consecutive integers, although Go's iota starts at 0 whereas APL's starts at 1.
Iota in APL is a unary function that generates a vector of consecutive integers, starting at the index origin and running up to and including its argument. Iota in Go is a predeclared identifier that can only be used in constant declarations, where it auto-increments on every use. The resemblance is superficial at best.
> "Ken suggested iota for the counter and, since all three of us (Ken, Robert, Rob) had implemented APL interpreters, it seemed a perfect thing to us." - Rob Pike
Maybe we just have different views on what "modeled" means, or I am being too literal. To me, "modeling" implies that something is created with another thing in mind, adhering to the same principles. To relate that to the quote: I wouldn't call coming up with a concept and only afterwards borrowing a name from a vaguely similar concept in another context "modeling".
No, I think your criticism is fair. "Modeled" is not quite accurate, since the term was applied retroactively to functionality that was already implemented.
Still, I wouldn't call the resemblance superficial. "iota" in both languages refers to sequences of whole numbers. If "iota" in Go meant "the zero-value for an arbitrary object" or "the smallest non-zero float64," then the two would truly share only a name.
If there are more than superficial similarities between the two, surely there is a rough analogue for "⍳10" in Go using its iota. Conversely there must be an analogue for "const ( A = iota; B; C )" in APL using its iota. Those are the fundamental uses of these entirely (IMO) different concepts.
In the latter case, you can say that "A B C←⍳3" (at least in Dyalog) which is roughly equivalent to a Go constant assignment using iota if you squint a little and disregard the underlying procedure of creating and unpacking an array at run-time vs creating a compile-time constant. In the former case, I'm not aware of any analogue.
Yes it can. I ran into an issue with one of the APL examples once because the index had been changed and it wasn't documented. The Dyalog folks answered my email on the weekend (I'm not a customer).
That sounds like one of those super useful features that's easy to shoot yourself in the foot with. Perl has a few of those (cribbed from AWK, I believe):
$, : The output field separator
$ perl -E 'my @data = (11,22,33); say 11,22,33; say "@data"; say @data; $,="|"; say 11,22,33; say "@data"; say @data;'
112233
11 22 33
112233
11|22|33
11 22 33
11|22|33
$" : Interpolated list separator
$ perl -E 'my @data = (11,22,33); say 11,22,33; say "@data"; say @data; $"="|"; say 11,22,33; say "@data"; say @data;'
112233
11 22 33
112233
112233
11|22|33
112233
And a bunch more, including input line/record separators (so you can slurp a whole file into a buffer at once instead of line by line), etc.
A bit tangential, but what do people here use to input APL characters in macOS?
I've been using Dyalog's GUI, but I'd much rather be using free software from the usual terminal. It'd be nice if there were just a global input method like the ones they have for human languages.
I’d like to know more about the underlying motivation! APL is a fascinating language but I’m not sure how you’d implement it efficiently in a language like Go - you would need intrinsics to get at the vector instructions no?
Even without vector intrinsics, you can get a lot of mileage out of spending almost all your time in loop bodies that map primitive operations over large arrays. Interpretive overhead grows with the number of primitives executed, not with the size of the arrays they operate on, so it shrinks relative to the real work as the user's data gets larger.
IIRC, J didn't use intrinsics until maybe two years ago. It was mostly plain C, and everything was built with for-loops. Some special verbs and phrases were, however, implemented in x86 assembly.
I think part of the speed of APL is that parts of it are small enough to fit in L2 cache (at least one of the APL-descended languages, Q/K, bragged about this). Whether the Go implementation seen here will fit in the typical caches of today's CPUs, I don't know.