Hacker News | alipang's comments

I wouldn't really buy this for $20 in the first place (maybe on sale). On top of that, you can get it for free on mobile if you have a Netflix subscription, so I'd imagine many people did what I did and played it for a bit on their phone. Now I doubt I'll ever pay for it.


Even JavaScript has similar syntax, though `yield` / `await` only works for certain "monads" (promises seem close enough)
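As a rough illustration of the analogy (a sketch, not anything from a spec): `await` plays the role of monadic bind specialized to promises, and an `async` function's `return` plays the role of `pure`.

```javascript
// `await` behaves like monadic bind (>>=) restricted to promises:
// each `await` unwraps a Promise<T> to a T, and the async function
// re-wraps its return value in a promise, much like `pure`/`return`.
async function addViaAwait(p, q) {
  const a = await p; // "bind": Promise<number> -> number
  const b = await q;
  return a + b;      // "pure": number -> Promise<number>
}

// The same computation with the binds written out explicitly:
function addViaThen(p, q) {
  return p.then(a => q.then(b => a + b));
}
```

Both versions describe the same chained computation; `async`/`await` is essentially do-notation for this one monad.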


Well, why would it be taken as a given that programming should mimic the hardware?

Pure functions are easier to reason about (there may be exceptions of course), that's why they're interesting.

This paper is not related to side-effects - it's related to "totality", meaning all programs terminate. This could also be interesting for side-effecting programs, but it's of course much harder to verify in this case.

Due to the halting problem's existence, a total language cannot be Turing complete, but many useful programs can still be verified.

I didn't see anything in the paper that claims that pure functions should be "all the way down", and the paper is not about side effects anyway.


>Well, why would it be taken as a given that programming should mimic the hardware?

Because that’s the environment that programs are run on, doing anything else is fighting against the environment. I would argue that humans have done this to great effect in some areas, but not in programming.

>Pure functions are easier to reason about (there may be exceptions of course), that's why they're interesting.

Prove it.


>Prove it.

Proofs are good evidence that pure functions are easier to reason about. Many proof assistants (Coq, Lean, F*) use the Calculus of Inductive Constructions, a language that has only pure, total functions, as their theoretical foundation. The fact that state-of-the-art tools for reasoning about programs use pure functions is a pretty good hint that pure functions are a good tool for reasoning about behavior. At least, they're the best way we have so far.

This is because of referential transparency. If I see `f n` in a language with pure functions, I can simply look up the definition of `f` and paste it into the call site, with all occurrences of `f`'s parameter replaced with `n`. I can simplify the function as far as possible. Not so in an imperative language. There could be global variables whose state matters. There could be aliasing that changes the behavior of `f`. To actually understand what the imperative version of `f` does, I have to trace the execution of `f`. In the worst case, __every time__ I use `f` I must repeat this work.
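A minimal sketch of that substitution argument (the function names here are made up for illustration):

```javascript
// Pure: a call site can be replaced by the function body with the
// argument substituted in, without changing the program's meaning.
const f = (n) => n * n + 1;

const viaCall = f(3);              // using the call
const viaSubstitution = 3 * 3 + 1; // body of f with n replaced by 3

// Impure: the same substitution is unsound, because the result
// depends on hidden state.
let counter = 0;
const g = (n) => { counter += 1; return n + counter; };
// g(3) and g(3) yield different values, so `g(3)` has no single
// value you could substitute for it.
```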


And if I go to a flat earth conference, I will find that they produce lots of “proof” for flat earth.

I don’t really accept “this group of people whose heads are super far up the ‘pure functions’ ass chose purity for their solutions” as “evidence” that purity is better.

I’m not saying that purity is bad by any stretch. I just consider it a tool that is occasionally useful. For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.


>For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.

Modeling the method that modifies internal state as a function from old state to new state is the simplest way to accomplish this goal. I.e., preconditions and postconditions.
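For example (a hypothetical `deposit` method, modeled as old state in, new state out):

```javascript
// A "method that modifies internal state", rewritten as a pure
// function from the old state value to the new state value.
// The account shape and names are illustrative.
const deposit = (account, amount) => ({
  ...account,
  balance: account.balance + amount, // postcondition: balance grew by amount
});

const before = { owner: "a", balance: 10 }; // precondition: some old state
const after = deposit(before, 5);
// `before` is untouched; pre/postconditions simply relate the two values.
```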


> doing anything else is fighting against the environment

Video game programming, where performance really matters, is a great way to see the cost of forcing the hardware to deal with human abstractions (mainly OOP). Rules like "never have a Boolean in a struct" or "an array with linear access can be faster than a tree with log access" wake you up to the reality of the hardware. :p

In academia, and the especially academic, utopian realm of functional programming, you're trained to live in dreamland.

If you can afford it, though, hey, it's a nice place to be.


> In academia, and the especially academic, utopian realm of functional programming, you're trained to live in dreamland.

OO outside of contexts where every little bit of performance matters suffers in exactly the same way.

> If you can afford it, though, hey, it's a nice place to be.

No arguments there! A huge majority of applications can afford to be written this way, even ones where performance is a concern (WhatsApp, for example).


> A huge majority of applications can afford to be written this way, even ones where performance is a concern

This is sometimes true for any one given app but it's not a good overall outcome.

It is why we have multi-GHz CPUs with lots of cores and dozens of GB of RAM today, and yet most actions feel less responsive than they did in 1995 with a 120MHz CPU, 1 core and 1MB.


My comment was in response to the need to squeeze out every last bit of performance you possibly can. You're talking about ignoring performance altogether which is not what I'm talking about.


Then again, abstractions can be helpful too, including in game programming. Epic's heavily invested, for example. Or in databases, relational algebra often beats out an array with linear access. I agree that OOP-in-the-small lacks mechanical sympathy though. That's one reason for entity-component-model, though another, I'd argue, is that it provides abstractions that are easier to reason about.


In Epic's case it helps that Tim Sweeney started his game company doing games in VB, and always cared about a good developer experience. That is why Unreal Engine has always favoured good abstractions: it has a C++ GC, Blueprints and now Verse.

He was never a "let's keep doing C in games" style of developer.

Thankfully so; without the likes of Unreal, Unity and similar efforts, we would still be stuck in a "keep doing it the way it has always been done" kind of mentality.


Andrew Kelley has a pretty good talk on gaining performance in the zig compiler.

In many cases performing math again is faster than memoization.

General gist is to try to cram as much into the cache lines as possible, sometimes even at the “cost” of calculating values again.


Jak and Daxter did pretty well for a game written in Common Lisp.

And apparently Epic believes in this enough to have created Verse. See Tim Sweeney's point of view in "The Next Mainstream Programming Languages":

http://lambda-the-ultimate.org/node/1277


A slight tangent: there's also a related notion to termination, that allows you to describe event loops.

Basically it's about the loop making progress on every iteration, even though there can be an infinite number of iterations.

I think the distinction is related to the one between data and co-data.
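A minimal sketch of what "productive" means here, using a JavaScript generator as the event loop (the names are illustrative):

```javascript
// A "productive" loop: it never terminates as a whole, but every
// iteration is guaranteed to yield the next event in finite time.
// Generators make this explicit: `ticks` is co-data, consumed on demand.
function* ticks() {
  let i = 0;
  while (true) yield i++; // infinitely many iterations, each one finite
}

// A consumer decides how much of the infinite stream to observe:
function take(gen, n) {
  const out = [];
  for (const x of gen) {
    if (out.length === n) break;
    out.push(x);
  }
  return out;
}
```

Termination checkers reject `ticks` as a total function, but a productivity checker can accept it, since each step makes progress.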


The idea that anyone who's opposed to narrow government regulations is somehow brainwashed by the Kochs is just an insufferably smug attitude towards people you disagree with. It's like the "funded by George Soros therefore bad" you sometimes see on the right.

I'm sure they've spent money on promoting this, but there's many reasons you'd come to this conclusion other than having it "manufactured" for you by billionaire conspiracies.

You have to start by considering that your political opponents are capable of thinking for themselves if you want to ever do more than just preach to the choir. Declaring them idiots in the guise of "providing historical context" isn't helpful.


> Even on death’s doorstep, Trevor was not angry. In fact, he staunchly supported the stance promoted by his elected officials. “Ain’t no way I would ever support Obamacare or sign up for it,” he told me. “I would rather die.” When I asked him why he felt this way even as he faced severe illness, he explained: “We don’t need any more government in our lives. And in any case, no way I want my tax dollars paying for Mexicans or welfare queens.”

https://www.bostonreview.net/articles/jonathan-m-metzl-dying...


There is nothing more smug than the point-and-laugh hit piece, where a journalist finds some ignorant rube for the college-educated to sneer at.


The author is not a journalist, but a doctor who studied public health issues.


I don't think the author's credentials alter the dynamic at play here.


Would it be better if he just kept quiet and let the health issues he is seeing go unmentioned because otherwise it would make people look bad? Do you think his book is just an exercise in mockery because he enjoys it?


"You suffer because of your politics" is laundering a political statement under the guise of health.

And yes, I do think mockery has become de rigueur in American politics, both left and right.


If people are indeed suffering because of their politics, is there any way to ease their suffering without making a political statement of some kind?


I think viewing things that way is akin to thinking wet streets cause rain. It is placing (often very selectively) an undue amount of agency upon a voter, whose effect on the political system at scale is essentially nil.


Nobody likes to be sneered at but the story illustrates the corrosive impact of oligarchic domestic propaganda pretty well, which is pretty relevant to the topic at hand.


Practical politics is more visceral than intellectual. A lot of people have first-hand experience with government-provided healthcare through the military and VA, and for many of them, it's not a positive experience.

A lot of effort could be expended by the government to improve the quality of its own workforce and the incentive structure under which they operate, but that is boring and unsexy work, which always gets put aside in favor of some new ambitious piece of legislation that makes a politician feel good about his or her accomplishments.

Then the backers of said legislation turn around and wonder why the purported beneficiaries don't like it. But politicians and the upper-crust live in an alternate universe where their own needs are met through special systems and their own view of government employees comes from the sycophants and yes-men.


>Practical politics is more visceral than intellectual. A lot of people have first-hand experience with government-provided healthcare through the military and VA, and for many of them, it's not a positive experience.

I mean, a majority of Republicans want single payer. A majority of the people where I live (in a country with single payer)... also want single payer.

It's objectively a very popular policy. The majority of people on medicare and VA benefits would probably try to fight you if you tried to take them away.

Nonetheless, socialized medical care is objectively not an oligarchy friendly policy. Some of them make EPIC mind bending profits from private healthcare.

And, they have a lot of control and influence over the media, which results in rather a lot of anti-single payer propaganda.

The mix of these two forces can sometimes have interesting results. Like this: https://otb.cachefly.net/wp-content/uploads/2010/10/dont-ste...

Which is definitely more visceral than intellectual.


Identifying why this person thinks they're two different things and why they're opposed to what seems to you to just be an extension of an existing, popular thing would be more useful than looking at the apparent contradiction and inferring there's something deficient about the messenger. American politics is suffused with propaganda from all sides; people latch onto available messages based upon feelings.


It is worthwhile contemplating who is benefiting from “my opponents are brain dead retards”.


If you check out the linked paper you'll see the title is "One-shot Algebraic Effects as Coroutines" - the keyword being "one-shot".

In general, every monad can be expressed by "interpreting" the free monad. This relaxes the "one-shot" restriction and can be implemented using delimited control. One-shot means faster performance, though, and is still useful for many things; that is what can be implemented using coroutines.

Once you understand what this means, you'll have a very good idea of the expressive power of coroutines (with nothing more), so it's very interesting.
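To make the coroutine connection concrete, here's a hedged sketch (not the paper's implementation) of a one-shot handler using a JavaScript generator: the computation yields an effect request, and the handler resumes it exactly once with the answer.

```javascript
// Sketch of a one-shot effect handler with a generator as the coroutine.
// The computation `yield`s an effect request {op, arg}; the handler
// resumes it exactly once with the result (the "one-shot" restriction:
// a suspended generator frame cannot be resumed twice).
function handle(gen, handlers) {
  let step = gen.next();
  while (!step.done) {
    const { op, arg } = step.value;      // the requested effect
    step = gen.next(handlers[op](arg));  // resume once with the answer
  }
  return step.value;
}

function* program() {
  const name = yield { op: "ask", arg: "name" }; // perform an effect
  return `hello, ${name}`;
}
```

Here `handle(program(), { ask: () => "world" })` evaluates to `"hello, world"`. Multi-shot handlers (resuming the same continuation twice, as in nondeterminism) are exactly what this encoding cannot express.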


I'm not quite sure whether the free monad (or the 'freer' monad) can simulate the continuation monad. See https://stackoverflow.com/questions/25827271/how-can-the-con...


It's been a while since I was steeped enough in this to answer comprehensively, but per memory, up to performance concerns, they're equivalent in inductive settings.

Haskell ends up being a bad place to talk about this (or a great place, depending on your goals) because due to laziness you get to write a lot of structures which look inductive but end up being able to express coinductive structures.

From memory and intuition, if you're working in a strict language (or better yet, something like Agda where the distinction becomes very sharp) then you end up finding that continuations make for good coinductive "free" structures and the free monad (and its ilk) make for good inductive free structures.

The distinction between inductive and coinductive types is fairly subtle and hard to see in most languages where those distinctions are blurred, but broadly you can think of inductive structures as ones that are, in principle, finite and coinductive structures as being those which may be, in principle, infinite.

For example, a linked list is inductive. If you're looking at one cons cell of it you can't prove that, you may have to chase pointers for longer than your patience allows, but at least in principle there is an end. A stream is coinductive, because it instead suggests a generative process.
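A small sketch of that distinction (illustrative code, not from the comment):

```javascript
// Inductive: a cons list is, in principle, finite - a fold like `sum`
// terminates because it must eventually hit nil.
const cons = (head, tail) => ({ head, tail });
const nil = null;
const sum = (xs) => (xs === nil ? 0 : xs.head + sum(xs.tail));

// Coinductive: a stream is a generative process - the tail is a thunk,
// so you can always ask for one more element, but never for "all of them".
const from = (n) => ({ head: n, tail: () => from(n + 1) });
```

In a language like Agda the two would be distinct types with different checkers (termination vs. productivity); here the thunk is what stands in for laziness.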


In a sense, inductive types are "naturally strict" whereas co-inductive types are "naturally lazy". Though there are some types, such as products/tuples and arrays, that come in both strict and lazy varieties. Ultimately, this would allow one to equally account for both strict and lazy evaluation in a very natural way - quite unlike languages like ML or Haskell, where only one form is natural and idiomatic whereas the other has to be added as an afterthought.


After reading that, I think freer can in fact encode the continuation monad.


https://www.reddit.com/r/haskell/comments/7yll62/comment/duh... also suggests that.

A quick scan of https://okmij.org/ftp/Haskell/extensible/more.pdf doesn't yield much one way or another.



And, as the article says, this proposal is likely to be illegal under the ECHR and the CFREU.


You don't have to memorize that log(5) = 0.7 if you know log(2) = 0.3 since log(5) = log(10 / 2) = log(10) - log(2) = 1 - 0.3 = 0.7.

Also log(8) = 0.9, because log(8) = log(2^3) = 3·log(2) = 3 × 0.3 = 0.9.
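These identities are easy to check numerically (a quick sanity check with base-10 logs):

```javascript
// log(5) = log(10/2) = log(10) - log(2) = 1 - log(2)
// log(8) = log(2^3)  = 3 * log(2)
const log2 = Math.log10(2); // ≈ 0.301
const log5 = 1 - log2;      // ≈ 0.699
const log8 = 3 * log2;      // ≈ 0.903
```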


What, the anal beads is not a sensible theory? /s


This is a really intuitive way of thinking about it. The best way I could work this out was to start with the special case of pure scaling in a single direction.

For instance when scaling by "a" in the x direction

M = (a 0; 0 1)

It's pretty easy to see by graphing an example that, in order for the normal to work out correctly, you must scale it by the inverse scale factor (1/a), so the correct matrix for transforming normals is the inverse, M⁻¹ = ((1/a) 0; 0 1).

This of course generalises to scaling in multiple directions (just multiply the matrices) - i.e. to diagonal matrices. For

M = (a 0; 0 b)

the correct choice for transforming the normals is

M' = ((1/a) 0; 0 (1/b))

By applying singular value decomposition you get

M = UDV

So choosing M' = M⁻¹ᵀ you'll get M' = (UDV)⁻¹ᵀ = UD⁻¹V

Since orthogonal matrices U, V will actually correctly transform the normals (they're rotations or reflections) you can intuitively understand that M' will work for transforming the normals.

The question is: where does the transpose part actually come from? It's a bit of a "magic" choice. This article makes it very clear that it "arises from" the transpose in the dot product (t⋅n = tᵀn), since the dot product defines orthogonality!
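A quick numerical sanity check of the rule (tiny 2×2 example; the helper names are made up):

```javascript
// Tangents transform by M; normals must transform by the inverse
// transpose M⁻¹ᵀ to stay perpendicular to the transformed tangent.
const dot = ([x1, y1], [x2, y2]) => x1 * x2 + y1 * y2;
const apply = ([[a, b], [c, d]], [x, y]) => [a * x + b * y, c * x + d * y];

const M = [[2, 0], [0, 1]];         // scale x by a = 2
const Minvt = [[1 / 2, 0], [0, 1]]; // inverse transpose (trivial: M is diagonal)

const t = [1, 1];  // tangent
const n = [1, -1]; // normal, so t . n = 0

const tPrime = apply(M, t);     // transformed tangent: [2, 1]
const nNaive = apply(M, n);     // wrong: [2, -1], no longer perpendicular
const nRight = apply(Minvt, n); // right: [0.5, -1], still perpendicular
```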


I'd imagine you can use a keyboard shortcut as well to open this menu, but based on the text cursor, rather than the mouse one.

I do this a lot in VS Code to auto-fix issues and automatically add imports for the identifier under the caret. It's frustrating, however, that it sometimes uses the underlying platform's dropdown widget, which doesn't respect my key bindings (Emacs-like, using C-p, C-n for up, down etc).


Indeed, you can use the keyboard shortcut Shift-F10 to open the context menu based on the cursor location.

