It is the outcome of appellate court cases and arguments that determines the law in common law jurisdictions, not the output of trial courts. Determining what the law is in a common law system would not be affected if trial court records were unavailable to the public. You only actually need appellate court records to be publicly available in order to determine the law.
The appellate court records would contain information from the trial court records, but most of the identifying information of the parties could be redacted.
So the elites were flying in on the Lolita Express to an island filled with underage sex workers, to a venue where luridity abounded, hosted by a convicted pedophile, to discuss philanthropy?
Or might it be that the entire political class is filled with moral degenerates devoid of ethics, to nobody's surprise? And so you effectively have the suspects investigating themselves, only to conclude 'nah, nobody did anything wrong except the dead guy, but he's dead so ¯\_(ツ)_/¯'
Even that report notes at least 4 or 5 victims stated they were abused by other men and women besides Epstein. So it's gone from #MeToo to #Only4or5. If you are ever curious what creates cynics like me...
You could probably make a pretty good short story out of that scenario, sort of in the same category as Asimov's "The Feeling of Power".
The Asimov story is on the Internet Archive here [1]. That looks like it is from a handout in a class or something like that and has an introductory paragraph added which I'd recommend skipping.
There is no space between the end of that added paragraph and the first paragraph of the story, so what looks like the first paragraph of the story is really the second. Just skip down to that, and then go up 4 lines to the line that starts "Jehan Shuman was used to dealing with the men in authority [...]". That's where the story starts.
Thanks, I enjoyed reading that! The story that lay at the back of my mind when making the comment was "A Canticle for Leibowitz" [1]. A similar theme and from a similar era.
The story I have half a mind to write is about a future we envision already being around us, just a whole lot messier. Something along the lines of this XKCD [2].
> The guitars were nearly impossible to play, with frets that could cut your hand and intonation that created sounds half and whole steps away from the intended tone
I'm curious about those intonation problems because a couple of years ago I wanted an inexpensive but decent acoustic guitar, and bought a Fender CC-140SCE acoustic guitar direct from Fender's online store.
It had frets that could cut your hand and a note that was off by a whole step. It also came with the truss rod way out of adjustment leading to lots of buzzing. I was able to get rid of most of the buzz by adjusting the truss rod, but I didn't want to deal with the rest of the problems and sent it back.
Only after it was on the FedEx truck on its way back to M̵o̵u̵n̵t̵ ̵D̵o̵o̵m̵Fender did I realize that I couldn't think of a way that the intonation error I saw was even theoretically possible. I wish I had realized that before sending it back so that I could have figured it out.
Maybe someone here has an idea?
Here's what I observed. This only happened on the first string. The others were fine.
To make this easier for people who don't know anything about music (because this is really a physics mystery, not a music mystery) I'm going to use numbers for the notes instead of names.
Let's call the note played when you pluck a guitar string without pressing any of the frets note 0. When you pluck while pressing the string down just behind the first fret, that is note 1. Frets are counted from the neck end of the guitar, and "behind" a fret means on the neck side. Second fret gives note 2, and so on.
Note n+1 is higher frequency than note n. The frequency ratio of adjacent notes is supposed to be 2^(1/12).
Playing up the fretboard, starting with plucking without pressing any frets should give this sequence of notes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20.
What actually happened was that note 13 was missing. It was replaced with 15. The sequence from 12 through 16 went 12, 15, 14, 15, 16.
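To make the mismatch concrete, here's a quick back-of-the-envelope sketch in Python. I'm assuming the first string is a standard high E at roughly 329.63 Hz; the exact pitch doesn't matter, since only the ratios are relevant.

```python
# Expected fret frequencies on the first string, assuming standard tuning
# (high E, ~329.63 Hz open). Each fret raises the pitch by a factor of 2^(1/12).
OPEN_STRING_HZ = 329.63  # assumed open-string pitch; any value gives the same ratios

def note_freq(fret: int) -> float:
    """Frequency the string should produce when fretted at `fret` (0 = open)."""
    return OPEN_STRING_HZ * 2 ** (fret / 12)

for fret in range(12, 17):
    print(f"fret {fret}: {note_freq(fret):7.2f} Hz")

# What I heard at fret 13 was the pitch of fret 15, i.e. sharp by a factor of
# 2^(2/12) ~= 1.122 (a whole step), while fret 14 sounded correct.
print(f"ratio heard at fret 13: {note_freq(15) / note_freq(13):.3f} (should be 1.000)")
```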
A bad fret can easily cause a wrong note, but there are limits. The way frets work is that when you push the string down behind the fret, it changes the length of the vibrating portion. With no frets pressed, the string vibrates between a support on the neck (called the "nut") and a support on the other end (called the "saddle").
If you press down behind a fret, that pushes the string down enough that the portion between your finger and the saddle is suspended between the saddle and the metal of the fret. That's shorter than nut to saddle, so has a higher frequency.
The saddle is higher than the nut and the frets, so the string slopes up as you get closer to the saddle. Consider playing note 12 on mine. The string is then suspended between fret 12 and the saddle. It is sloped up enough from there that its up and down excursions from vibrating don't hit anything.
Now we play note 13, but what actually plays is 15. There is a straightforward way for that to happen: fret 15 can be slightly too tall. It is not tall enough to interfere when playing 0-12, but playing fret 13 pulls the string down farther than playing fret 12 does, and that is enough to reach fret 15 and so the string ends up actually suspended between 15 and the saddle instead of 13 and the saddle.
Can that explain what I observed? Nope! Because if 13 pulls down the string enough to run into 15, 14 will also do so, because 14 pulls down the string more than 13. The sequence would then be 12, 15, 15, 15, 16, not 12, 15, 14, 15, 16. It should not be possible for fret n+1 to play a lower note than fret n.
Wouldn't that just stop it from considering castling, en passant, and promotion on the first move of the position you are analyzing? It's still going to consider those in the tree search, and the static evaluation neural net was trained on positions where they are allowed.
For castling you should be able to fix this by specifying that castling rights have been lost in the positions you give it.
For the others, I think you would need to do more than filter the list of allowed first moves. You'd also have to make it not consider en passant and promotion anywhere in the search, by modifying its move generation.
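A minimal sketch of what I mean, assuming the front end for feeding positions to the engine is something like python-chess (your engine's internal move generator will obviously look different):

```python
import chess

def restricted_moves(board: chess.Board):
    """Yield legal moves with castling, en passant, and promotion filtered out."""
    for move in board.legal_moves:
        if board.is_castling(move) or board.is_en_passant(move):
            continue
        if move.promotion is not None:
            continue
        yield move

# Castling is easier: hand it positions whose FEN says castling rights are
# already gone ("-" in the castling field), e.g. the normal starting position
# minus castling rights:
board = chess.Board("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w - - 0 1")
print(list(restricted_moves(board)))
```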
The static evaluation would still be off a bit, but that probably would not have much effect most of the time.
From what I've read it is feasible to train a new neural net on a decent home computer in maybe around a week, but that's probably overkill for your use of figuring out how strong your engine is at no castling/no promotion/no en passant chess.
That only worked though because Romania is using a voting method for President that is completely terrible for countries that have several viable political parties.
They use a two-round system to elect their President that works like this:
1. If a candidate gets more than 50% in the first round, they are the winner and there is no second round.
2. If there is no clear winner in the first round, the top two from the first round advance to the second round to determine the winner.
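In code the rule is roughly this (a toy sketch with made-up vote shares, not anything tied to Romania's actual tallying):

```python
def two_round_result(first_round: dict[str, float]):
    """first_round maps candidate -> share of the first-round vote, in percent."""
    ranked = sorted(first_round, key=first_round.get, reverse=True)
    if first_round[ranked[0]] > 50:
        return ("winner", ranked[0])
    return ("runoff", ranked[0], ranked[1])

print(two_round_result({"A": 55.0, "B": 30.0, "C": 15.0}))  # ('winner', 'A')
print(two_round_result({"A": 40.0, "B": 35.0, "C": 25.0}))  # ('runoff', 'A', 'B')
```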
In that election there were 14 candidates. 6 from right-wing parties, 4 from left-wing parties, and 4 independents. The most anyone got in the first round was 22.94%, and the second most was 19.18%. Third was 19.15%. Fourth was 13.86%, then 8.79%.
With that many candidates, and with quite a lot of overlap in the positions of the candidates closer to the center, you can easily end up with the more extreme candidates finishing higher, because they have less overlap on positions with the others, so the voters who find those issues most important don't get split.
You can easily end up with two candidates in the runoff that a large majority disagree with on all major issues.
They really need to be using something like ranked choice.
Ranked choice is very similar to what you just described, has the same downsides, and is much more difficult to understand. What you want is approval voting, which has all of the upsides ranked choice claims to have and none of the downsides, doesn't have multiple rounds, and is trivial to understand. On top of that, approval voting has an additional benefit: voting for third-party/moderate candidates doesn't feel like throwing your vote away, so you can just include them and they're much more likely to win.
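To illustrate how simple the count is, here's a toy sketch with hypothetical ballots: each voter marks every candidate they approve of, and whoever is approved by the most voters wins.

```python
# Toy approval-voting count; each ballot is the set of candidates the voter approves of.
ballots = [
    {"A", "B"},   # hypothetical voters, not real data
    {"B"},
    {"B", "C"},
    {"A", "C"},
]

candidates = set().union(*ballots)
approvals = {c: sum(c in ballot for ballot in ballots) for c in candidates}
winner = max(approvals, key=approvals.get)
print(approvals, "->", winner)  # B is approved on 3 of 4 ballots and wins
```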
>That only worked though because Romania is using a voting method for President that is completely terrible for countries that have several viable political parties. [...] They really need to be using something like ranked choice.
Firstly, there are many forms of elections, each with their own pros and cons, but I don't think the voting method is the core problem here.
Let's assume Norway had the exact same system and parties as Romania. Do you think Norwegians would have been swayed by an online ad campaign to vote a Russian puppet off TikTok into the last round?
Maybe the education level, the standard of living of the population, and being a high trust society are actually what filter out malicious candidates, not some magic election method.
Secondly, what if that faulty election system is actually a feature and not a bug, inserted at the formation of modern Romania after the 1989 revolution? The people from the (former) commies and the Securitate (intelligence services and secret police), still running the country but under different org names and flags, had to patch up a new constitution virtually overnight, so they made sure to create one where they themselves and their parties would have an easier time gaming the system in their favor and always end up on top in the new democratic system. But now that backdoor is being exploited by foreign actors.
> Let's assume Norway had the exact same system and parties as Romania. Do you think Norwegians would have been swayed by an online ad campaign to vote a Russian puppet off TikTok into the last round?
> Maybe the education level, the standard of living of the population, and being a high trust society are actually what filter out malicious candidates, not some magic election method.
My point isn't about filtering malicious candidates. My point is that a "top two advance to a runoff if no one wins the first round" system often does a poor job of picking a winner with majority support when there is a plethora of candidates.
Yes, there are many forms of elections each with their own pros and cons, and that is one of the main cons of that system (and of one round systems where the winner is whoever gets the most votes even if it is not a majority).
Consider an election with 11 candidates where there is one particular issue, X, that 80% of the voters go one way on and 20% the other way. The voters will only vote for a candidate that goes their way on X. 9 of the candidates go the same way as the 80% of voters, and the other 2 go the other way. All the candidates differ on many non-X issues, but voters don't feel strongly about those. They will pick a candidate that agrees with them on as many of those as they can, but would be OK with a winner that disagrees with them on the non-X issues as long as they agree on X. This results in the vote being pretty evenly split among the candidates that agree on X.
The 9 candidates that agree with the 80% who go one way on X then end up with about 8.9% of the vote each, and the 2 that go the other way end up with 10% each. Those two make it to the runoff, and one of them wins.
Result: a winner that would lose 80-20 in a head to head matchup against any of the 9 who were eliminated in the first round.
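Here's the same arithmetic as a quick sketch, with hypothetical candidate labels matching the scenario above:

```python
# 9 candidates split the 80% side roughly evenly; 2 candidates split the 20% side.
first_round = {f"majority_{i}": 80 / 9 for i in range(9)}        # ~8.9% each
first_round.update({f"minority_{i}": 20 / 2 for i in range(2)})  # 10% each

top_two = sorted(first_round, key=first_round.get, reverse=True)[:2]
print(top_two)  # both runoff slots go to candidates from the 20% side

# Yet in a head-to-head matchup, any "majority" candidate beats either of them 80-20.
```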
Note I didn't say that the 2 on the 20% side of issue X were malicious. They just held a position on that issue the 80% disagree with.
Such a system is also more vulnerable to manipulation like what happened with TikTok in Romania, because with a large field of candidates with roughly similar positions you might not need to persuade a large number of people to vote for an extreme candidate to get that candidate into the runoff.
> Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that
That's actually given as a reason to study NP-completeness in the classic 1979 book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by Garey & Johnson, which is one of the most cited references in computer science literature.
Chapter one starts with a fictional example. Say you have been trying to develop an algorithm at work that validates designs for new products. After much work you haven't found anything better than exhaustive search, which is too slow.
You don't want to tell your boss "I can't find an efficient algorithm. I guess I'm just too dumb".
What you'd like to do is prove that the problem is inherently intractable, so you could confidently tell your boss "I can't find an efficient algorithm, because no such algorithm is possible!".
Unfortunately, the authors note, proving intractability is also often very hard. Even the best theoreticians have been stymied trying to prove commonly encountered hard problems are intractable. That's where the book comes in:
> However, having read this book, you have discovered something almost as good. The theory of NP-completeness provides many straightforward techniques for proving that a given problem is “just as hard” as a large number of other problems that are widely recognized as being difficult and that have been confounding the experts for years.
Using the techniques from the book you prove the problem is NP-complete. Then you can go to your boss and announce "I can't find an efficient algorithm, but neither can all these famous people". The authors note that at the very least this informs your boss that it won't do any good to fire you and hire another algorithms expert. They go on:
> Of course, our own bosses would frown upon our writing this book if its sole purpose was to protect the jobs of algorithm designers. Indeed, discovering that a problem is NP-complete is usually just the beginning of work on that problem.
...
> However, the knowledge that it is NP-complete does provide valuable information about what lines of approach have the potential of being most productive. Certainly the search for an efficient, exact algorithm should be accorded low priority. It is now more appropriate to concentrate on other, less ambitious, approaches. For example, you might look for efficient algorithms that solve various special cases of the general problem. You might look for algorithms that, though not guaranteed to run quickly, seem likely to do so most of the time. Or you might even relax the problem somewhat, looking for a fast algorithm that merely finds designs that meet most of the component specifications. In short, the primary application of the theory of NP-completeness is to assist algorithm designers in directing their problem-solving efforts toward those approaches that have the greatest likelihood of leading to useful algorithms.
> But it's not a good ad when the only people who will get the reference are those plugged into "ai twitter". But association by implication doesn't work, the only thing most people will end up associating is the creepy guy with "Claude"
It might also work for people who watch "South Park". I've never used any LLM speech interface, and until recently I had only ever asked ChatGPT short one-off questions and so had never seen the sycophantic tendencies, but I immediately recognized that the creepy people in the ads were supposed to represent ChatGPT from its portrayal in the South Park episode "Sickofancy" from August of last year.