> It would just be a matter of iterating from A until you hit the end point and counting, right?
Yes, you could brute-force all the points. The good thing is, ECC is done on fields that are very large, so that actually enumerating them is not practical. Check out Curve25519[1] for some numbers.
Right, but the point is that doing the transformation in one direction needs to be fast in order for the scheme to be viable. So there must be some faster way of doing n iterations, which the post doesn't mention.
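There is: the standard trick is double-and-add, which computes n·A in O(log n) group operations instead of n-1 repeated additions. Here's a minimal sketch where plain integer addition stands in for elliptic-curve point addition — the control flow is identical for real curve points, only the group operation differs:

```rust
// Double-and-add: compute n·A using only the group's "add" operation,
// in O(log n) steps. Integers stand in for curve points here; for ECC,
// `+=` would be point addition and 0 the point at infinity.
fn double_and_add(a: u64, n: u64) -> u64 {
    let mut result = 0u64; // identity element (point at infinity for ECC)
    let mut addend = a;    // current power-of-two multiple of A
    let mut k = n;
    while k > 0 {
        if k & 1 == 1 {
            result += addend; // "point addition"
        }
        addend += addend;     // "point doubling"
        k >>= 1;
    }
    result
}

fn main() {
    // ~20 doublings instead of a million additions
    assert_eq!(double_and_add(7, 1_000_003), 7 * 1_000_003);
    println!("{}", double_and_add(7, 1_000_003));
}
```

This asymmetry is exactly what makes the scheme work: the legitimate party computes n·A in log n steps, while recovering n from A and n·A (the discrete log) still requires something close to brute force.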
The rules are somewhat imprecise, but basically any functional cookies do not require consent, as it is implied by the user using the service. This includes things like the user identifier that tracks who is logged in, for example.
The actual desktop SuperMemo app (SuperMemo 17, I believe) is not subscription based and is way, way more powerful (but also convoluted and somewhat outdated UX-wise). The web SuperMemo is dumbed down and mainly for language learning.
More than kind of like; you nailed it: exactly like. This has been an infamous issue with machine learning for decades, where unwary researchers/developers can do this quite accidentally if they're not careful.
The thing is that training data is very often hard to come by due to monetary or other cost, so it's extremely tempting to share some data between training and testing -- yet it's a cardinal sin for the reason you said.
Historically there have been a number of cases where the best machine learning packages (OCR, speech recognition, and more) owed their edge more to the quality of their data (including a clean separation of training from test) than to the underlying algorithms themselves.
Anyway it's important to note that developers can and do fall into this trap naively, not only when they're trying to cheat -- they can have good intentions and still get it wrong.
Therefore discussions of methodology, as above, are pretty much always in order, and not inherently some kind of challenge to the honesty and honor of the devs.
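For concreteness, the discipline being described boils down to: shuffle once, split once, and never let a sample cross the line. A minimal sketch (the function name and the tiny xorshift PRNG are mine, chosen to keep it dependency-free):

```rust
// Keep training and test data strictly disjoint: shuffle the indices
// once, then split. No sample ever appears on both sides, so test
// accuracy measures generalization rather than memorization.
fn train_test_split<T: Clone>(data: &[T], test_fraction: f64, seed: u64) -> (Vec<T>, Vec<T>) {
    // Fisher-Yates shuffle driven by a tiny xorshift64 PRNG (no crates needed)
    let mut idx: Vec<usize> = (0..data.len()).collect();
    let mut state = seed.max(1);
    for i in (1..idx.len()).rev() {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        idx.swap(i, (state as usize) % (i + 1));
    }
    let n_test = ((data.len() as f64) * test_fraction).round() as usize;
    let test: Vec<T> = idx[..n_test].iter().map(|&i| data[i].clone()).collect();
    let train: Vec<T> = idx[n_test..].iter().map(|&i| data[i].clone()).collect();
    (train, test)
}

fn main() {
    let data: Vec<u32> = (0..100).collect();
    let (train, test) = train_test_split(&data, 0.2, 42);
    assert_eq!(train.len() + test.len(), data.len());
    // every sample lands on exactly one side of the split
    assert!(train.iter().all(|x| !test.contains(x)));
    println!("train={} test={}", train.len(), test.len());
}
```

The accidental version of the sin usually looks subtler than this: tuning hyperparameters against the test set, or deduplicating after splitting, which leaks information just as surely as reusing samples outright.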
The issue is usually less which species gets endangered or faces extinction than the fact that this is done not for progress but for greed (whether by humanity at large or by corporations).
Destructively changing anything should be approached with extreme care, since you cannot un-extinct a species.
I'd illustrate it with what happens on Linux when you are approaching an OOM situation: you can either kill other processes, not knowing what they did or whether they were vital, or just stop doing what you were doing. And so far, societies have usually decided, at some point, that species are not as important as raw materials.
What do you mean by 'hard to please'? The joke about the compiler being some beast you need to sacrifice a goat to seems to put the blame on the wrong end of the computer imo.
Unless what you are doing doesn't fit the guarantees Rust gives you, the compiler should just be a safety net for when you miss a step.
I thought the same until I actually tried Rust. The compiler will complain in some places that are fine if you know what's happening. It was probably easier to implement the borrow checker that way. At the same time, a lot of the error messages are very cryptic if you haven't seen them before. It is in that sense hard to please. A lot of this might get better in coming iterations, though.
Now, this is true in many senses, as it's inherently how static analysis works, but I've also had many experiences where someone joins one of our IRC channels, shows some code, and says "hey, the borrow checker won't let me do this thing that's totally safe," and then I or someone else replies "well, what about this?" to which the answer is "...oh. Yeah." This is almost always from C++ programmers.
It's hard to escape the mindset of languages we're used to!
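A typical instance of that exchange (my own hypothetical example, not from the thread): holding two mutable references into one Vec looks "totally safe" to a C++ programmer, but the borrow checker rejects the naive form. The "well, what about this?" answer is often `split_at_mut`, which proves to the compiler that the two regions don't alias:

```rust
// Rejected by the borrow checker (two simultaneous &mut borrows of v):
//     let a = &mut v[0];
//     let b = &mut v[3];
// Accepted: split the slice into provably disjoint halves first.
// Assumes the slice has at least two elements.
fn bump_first_and_last(v: &mut [i32]) {
    let mid = v.len() / 2;
    let (left, right) = v.split_at_mut(mid);
    left[0] += 10;
    let last = right.len() - 1;
    right[last] += 10;
}

fn main() {
    let mut v = vec![1, 2, 3, 4];
    bump_first_and_last(&mut v);
    assert_eq!(v, vec![11, 2, 3, 14]);
    println!("{:?}", v);
}
```

The code was safe all along; the rewrite just expresses the disjointness in a form the checker can verify.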
> At the same time a lot of the error messages are very cryptic
You should file bugs! We care deeply about the legibility of error messages, and the whole --explain system is there to try and go above and beyond.
[1]: https://en.wikipedia.org/wiki/Curve25519