The standard divisibility rule for 3 and 9 in base 10 is to sum the digits until you only have one left: the number is divisible by 3 if that digit is 3, 6, or 9, and divisible by 9 if it is 9 (for 6, you additionally check that the number is even). Here, 5+7=12, 1+2=3, so 57 is divisible by 3.
Math is not my strong suit at all, so I probably won't grok this, but that kind of blows my mind, so I'm curious... how?! That works for any arbitrarily large number?
Math is crazy!... still don't want to study it though!
When checking whether a number is a multiple of some k, you can add/subtract multiples of k without changing the result, and those 99...9 numbers (i.e. 10^n - 1) are multiples of both 3 and 9. Subtracting them turns each 10^n into 1, which is exactly what summing the digits does.
It is basically because $10 \bmod 3 = 1$ (as $10 = 3 \cdot 3 + 1$). So if you are in the ring modulo 3, where every number is equal to the remainder of its division by 3, the sum of the digits of the number in its decimal representation equals the number itself (modulo 3), because in that ring 10 is actually 1, so the 10s in the decimal sum become 1s. I.e. if $n_k$ is the $k$th digit of $n$, you have

$n = n_0 + 10 n_1 + 10^2 n_2 + \dots \equiv n_0 + n_1 + n_2 + \dots \pmod 3$.

Hence, $n$ is divisible by 3 iff $n \bmod 3 = 0$ iff $(n_0 + n_1 + n_2 + \dots) \bmod 3 = 0$.
Of course, summing up the digits may not give you a 1-digit number, but it gives you a number that you know is divisible by 3 (if the original number is divisible by 3). So you can apply the same idea/process again, summing up the digits of that number, and get another number that is divisible by 3. Repeat until you end up with one digit (hence the recursion mentioned).
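If you want to see it mechanically, here's a quick sketch in Python (`digit_root` is just an illustrative helper name, not from any library):

```python
def digit_root(n: int) -> int:
    """Sum the digits of n, repeating until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# n and its repeated digit sum always agree modulo 3 and modulo 9,
# because 10 = 1 (mod 3) and 10 = 1 (mod 9).
for n in range(1, 10_000):
    assert n % 3 == digit_root(n) % 3
    assert n % 9 == digit_root(n) % 9

print(digit_root(57))  # 3, so 57 is divisible by 3
```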
However, Rust does automatically provide a niche if you make a simple enumeration which doesn't occupy all bit patterns. We don't need compiler-only features, or even unstable Rust; this Just Works™.
If you have a USHoliday enum with values like Juneteenth, the fact is there aren't that many US national holidays, so both USHoliday and Option<USHoliday> will be the same size, one byte, with no extra work: None just takes one of the unused bit patterns.
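A minimal sketch; the variants besides Juneteenth are made up here, and the Option niche layout is what current rustc does in practice rather than a strict language guarantee:

```rust
use std::mem::size_of;

// A fieldless enum with five variants leaves plenty of unused bit
// patterns in its single byte.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum USHoliday {
    NewYearsDay,
    Juneteenth,
    IndependenceDay,
    Thanksgiving,
    Christmas,
}

fn main() {
    // None is stored in one of the spare bit patterns (the "niche"),
    // so no extra discriminant byte is needed.
    assert_eq!(size_of::<USHoliday>(), 1);
    assert_eq!(size_of::<Option<USHoliday>>(), 1);
}
```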
I think "blurry" was used here referring to the fact that they don't really pay attention to the differences between fonts, not to an aspect of the rendering.
It was kind of a double-meaning. Non-bitmap fonts really are blurry, comparatively. AFAICT the non-bitmap fonts just hide this thanks to people using HiDPI displays now, which is basically like saying "well, it looks fine from far away".
Title is imprecise, it's Archiveteam.org, not Archive.org. The Internet Archive is providing free hosting, but the archival work was done by Archiveteam members.
What ArchiveTeam mainly does is provide hand-made scripts to aggressively archive specific websites that are about to die, with a prioritization for things the community deems most endangered and most important. They provide a bot you can run to grab these scripts automatically and run them on your own hardware, to join the volunteer effort.
This is in contrast to the Wayback Machine's built-in crawler, which is just a broad-spectrum internet crawler without any specific rules, prioritizations, or supplementary link lists.
For example, one ArchiveTeam project had the goal to save as many obscure Wikis as possible, using the MediaWiki export feature rather than just grabbing page contents directly. This came in handy for thousands of wikis that were affected by Miraheze's disk failure and happened to have backups created by this project. Thanks to the domain-specific technique, the backups were high-fidelity enough that many users could immediately restart their wiki on another provider as if nothing happened.
They also try to "graze the rate limit" when a website announces a shutdown date and there isn't enough time to capture everything. They actively monitor for error responses and adjust the archiving rate accordingly, to get as much as possible as fast as possible, hopefully without crashing the backend or inadvertently archiving a bunch of useless error messages.
I just made a root comment with my experience seeing their process at work, but yeah, it really cannot be overstated how efficient and effective their archiving process is.
> Like they kinda seem like an unnecessary middle-man between the archive and archivee
They are the middlemen who collect the data to be archived.
In this example the archivee (goo.gl/Alphabet) is simply shutting the service down and has no interest in archiving it. Archive.org is willing to host the data, but only if somebody brings it to them. Archiveteam writes and organises crawlers to collect the data and send it to Archive.org.
ArchiveTeam delegates tasks to volunteers (and themselves) running the Archive Warrior VM, which does the actual archiving. The resulting archives are then centralized by ArchiveTeam and uploaded to the Internet Archive.
Sidenote, but you can also run a Warrior in Docker, which is sometimes easier to set up (e.g. if you already have a server with other apps in containers).
Yep, I have my archiveteam warrior running in the built-in Docker GUI on my Synology NAS. Just a few clicks to set up and it just runs there silently in the background, helping out with whatever tasks it needs to.
Ran Archive Warrior a while back but had to shut it down as I started seeing the VM was compromised, trying to spam SSH and other login attempts on my local network.
It should have been properly set up, but it was a couple of years ago and I might have left too much open. (It was on a VM behind a consumer NAT/firewall/router solution.)
> What exactly is archiveteam's contribution? I don't fully understand.
If the Internet Archive is a library, ArchiveTeam is the people who run around collecting stuff and give it to the library for safekeeping. They tend to focus on stuff that is estimated or announced to be disappearing or removed soon.
They gathered up the links for processing, because Google doesn't just give out a list of the short links in use. So the links have to be gathered by brute force first.
This one is arguably even more of a hack; it's working at the source code level rather than the AST level.
The "coding" here is a bytes-to-text encoding. The Python lexer expects to see character data; you get to insert arbitrary code to convert the bytes to characters (or just use existing schemes the implement standards like UTF-8).
> it's working at the source code level rather than the AST level.
this (lexing) is the only use of the codec hack - if you want to manipulate the AST you do not need this and can just call `ast.parse` and then recompile the function.
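For example, a sketch of the AST route (the function names here are made up for illustration):

```python
import ast
import inspect

def double_constants(func):
    """Rewrite a function so every integer literal is doubled, no codec needed."""
    tree = ast.parse(inspect.getsource(func))

    class Doubler(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int):
                return ast.copy_location(ast.Constant(node.value * 2), node)
            return node

    tree = ast.fix_missing_locations(Doubler().visit(tree))
    ns = {}
    exec(compile(tree, "<ast>", "exec"), func.__globals__, ns)
    return ns[func.__name__]

def add_one(x):
    return x + 1

add_two = double_constants(add_one)
print(add_two(5))  # 7: the literal 1 was rewritten to 2
```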
I think there's a package to treat Jupyter notebooks as source code (so you can import them as modules).
While the OP package is obviously a joke, the one with notebooks is kind of useful. And, of course, obligatory quote about how languages that don't have meta-programming at the design level will reinvent it, but poorly.
I'd argue "import from notebooks" is still only helpful in the "space bar heating" sense.
I think Notebooks are great for quick, "explorative" sketches of code. They are absolutely terrible for organizing "production" code.
I know it often happens that something starts in a notebook and then sort of morphs into a generic script or full-on application. But I think this is usually the signal that you should refactor, pull the "grown" parts out of the notebooks, and organize them into proper Python modules.
If you have parts that are still experimental or explorative, consider importing your new modules into the notebook instead of the other way around.
By chance, do you happen to know if the Mindstorms NXT (the old one, before EV3) software was based on the same toolkit? I always wondered what UI framework it used, it had an unusual look.
It was not... it actually was, IIRC, a LabVIEW program with some tweaks here and there. The UI was basically a LabVIEW VI front panel with a LabVIEW 2D Picture Control. Most of the program logic, and the compiler targeting the NXT, were LabVIEW G code.
Very short answer: because tree-sitter does an approximately correct parse, while a hand-written parser does a correct parse (and if not, it's a bug).
About 90 million rods vs 6 million cones. Sometimes I'm surprised we can even see detail at all. Though it certainly helps that they're not uniformly distributed; most cones are in the macula, around the middle of the back of the eye. Still, it's not a lot.
And within the macula, the red and green cones are generally towards the centre and the blue cones generally towards the edge. This helps prevent the red-shift problem that photographs with high-contrast changes sometimes get.
There is a very real risk of political candidates committing electoral fraud, getting elected thanks to it, and putting pressure on the judiciary branch to lower their chances of getting arrested. We're seeing this exact process happen in real time in the US. Every modern country pretends that nobody is above the law and that bad people will get convicted and sentenced, but in real life the government has power over this stuff.
Making ineligibility sentences immediate is a way to make sure this sort of thing doesn't happen.
It's also a way for someone to make their own re-election (or the continued control of their party) easier by, as you said, "putting pressure on the judiciary branch" to find their opponent guilty.
Take the individual parties of today out of it. You don't want the party in power in the government to have the ability to decide who is allowed to run for office. If you actually want to live in a democracy and not just autocracy with your favored party in power, you want the people to decide who runs with as little government input as possible.
A judge saying someone is not allowed to run for office is objectively, by definition, anti-democratic.
> We're seeing this exact process happen in real time in the US.
Last I checked, Rosie O'Donnell is the only one stupid enough to imply that the latest presidential election was not completely above board.
I know that, I'm just saying that if you can "put pressure on the judiciary" (which is a ridiculous statement in most western countries but especially France) to find someone not guilty, you can certainly do it to find them guilty.
The information the jury hears in the US in criminal cases, especially high profile ones, is extremely tightly controlled. They're not in the room when lawyers are making evidentiary arguments to include or exclude evidence. I've served on a couple juries and the most high-stakes one carried potentially decades of jail time for the defendant. We were shuffled in and out of that room dozens of times each day for lawyers to make arguments about what we could or could not hear. Several of our questions during deliberation were answered with a section of the transcript and nothing more.