Hacker News

Holy cow.

This could have replaced ~8 weeks of my CS languages/compilers class, and I would have understood the material better at the end of it.



I don't know.

I see it as a kind of entertaining academic game - a Glass Bead Game, if you will, and I intend the deep allusion - but I wouldn't put too much faith in it teaching you much about the mechanics of compilers. It's one way of decomposing semantics into simpler elements, but it's not the one chosen for almost all practical languages, which after all have to execute on silicon, not in the lambda calculus.

It doesn't teach anything about parsers, and nor does it exercise much thinking in trees. It does emphasize recursion, but in a way that makes it seem like an absurd roundabout way of doing things, rather than something that more usually simplifies the expression of a program. Above all, it doesn't really demystify how the text of your program changes the coloured lights on your screen. It's a splendidly constructed wonderland, a pyramid of abstraction with nothing but function application at its core, but I don't think it leaves you with a lot more than a sense of having seen something very clever.

(FWIW, nothing in the presentation was new to me, so any residual sense of wonder has faded. Take that into account in my perhaps cynical judgement.)


Parsing is, in my opinion, the crappiest part of compiler writing. I feel pretty safe saying that because we have made tools to automate, or nearly automate, the act of writing parsers for compilers ...

I would argue that there's a big difference between "demystify how the text of your program changes the coloured lights on your screen" and "demystify why the text of your program changes the coloured lights on your screen". If you are interested only in the first question, you might be an engineer. If you are also interested in the second question, you might be a computer scientist...


Parsing is the best understood part of writing compilers; that's why it has tools, not because it is the "crappiest". (If anything, the wealth of free tools available gives a clue as to how fun dealing with it is.) But using the tools well requires some understanding of how they work; and if you're doing an industrial-strength parsing job, you'll probably end up writing the parser by hand, because what a tool gives you - speed in converting specification into implementation - is not usually the constraining factor; rather, functionality and performance of the end result are.

As to your question "why": nothing about the lambda calculus will tell you anything about why your program changes the coloured lights. There is only "how", and "will" - by which I mean human agency. There is no answer to "why" here, and there cannot be, because the "why" resides in people's minds. It takes no more effort to believe in 'if' than in beta reduction.

Take that single example: implementing if as a primitive rather than a function with lazily evaluated arguments means greatly increasing practicality at the cost of the sparse beauty of minimalism. 'If' is very common; optimizing it, diagnosing misuses of it, etc. is a lot harder once you've lost it in a forest of function applications.
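To make the trade-off concrete, here is a minimal sketch (my own, not from the talk) of the function-application version being argued against: Church-encoded booleans in Ruby, where 'if' is nothing but application and the branches must be wrapped in thunks because Ruby evaluates arguments eagerly.

```ruby
# Church booleans: a boolean is a two-argument selector.
TRUE  = ->(x) { ->(y) { x } }
FALSE = ->(x) { ->(y) { y } }

# 'If' is just application: the boolean itself picks a branch.
IF = ->(b) { ->(t) { ->(f) { b[t][f] } } }

# Ruby evaluates arguments eagerly, so each branch must be a thunk;
# only the chosen one is ever called.
choose = IF[TRUE][-> { :then_branch }][-> { :else_branch }]
choose.call  # => :then_branch
```

A compiler sees three ordinary function applications here, with nothing marking them as a conditional - which is exactly why recognising and optimising this pattern is so much harder than handling a built-in 'if'.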


I agree with everything in your first paragraph and would add the following: parsing is overrated. It's interesting the way that crossword puzzles are. Nothing wrong with that, but it can be a distraction; it's just not that deep a space.

That's not to say that the people who worked out how to do it in the first place weren't brilliant. They were, and it was a hard problem. But it's a solved one.


There are still some fairly hard problems in parsing. For example, doing minimal work to convert a series of text deltas into abstract syntax tree deltas, using caching to avoid throwing away too much. This is highly relevant to IDEs for providing code completion and other analysis, but it's usually solved with a mix of brute force - restarting the whole parse from the top - and trickery, such as skipping uninteresting function bodies, or parsing a much simplified version of the language that ignores many productions.
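The caching side of that idea can be sketched in a few lines (a toy of my own construction, with hypothetical names): split the source into top-level chunks, key a cache on each chunk's text, and reparse only the chunks whose text actually changed between edits.

```ruby
require 'digest'

# Toy incremental parser: caches per-chunk parse results so that an
# edit to one chunk does not force reparsing the others.
class IncrementalParser
  def initialize
    @cache = {}  # chunk digest => parsed node
  end

  def parse(source)
    # Treat blank-line-separated blocks as top-level "productions".
    source.split(/\n\n+/).map do |chunk|
      key = Digest::SHA1.hexdigest(chunk)
      @cache[key] ||= slow_parse(chunk)
    end
  end

  private

  # Stand-in for a real (expensive) per-chunk parse.
  def slow_parse(chunk)
    { text: chunk }
  end
end
```

A real IDE parser has to be far cleverer - edits can merge or split chunks, and context can flow between them - which is why the hard version of this problem remains largely unsolved.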


I didn't know that. Thanks.


I'm not sure I'd necessarily agree with your assessment of parsing as a field in 2011. There are still a lot of unsolved problems going forward --- see Laurence Tratt's excellent article on the subject for a few details: http://tratt.net/laurie/tech_articles/articles/parsing_the_s...


I don't know. Barrkel's example seems better to me because there's an obvious practical need for it. The trouble with most of the work on parsing I see is that it's just not hard to hand-write a parser. I used to avoid doing so, and then I wrote one and was surprised: once you factor in error-handling and whatever other meaningful output your system may need from its parser (e.g. text extents for ASTs), the overall complexity of a hand-written one can easily be less than one made with tools at a supposedly higher level of abstraction. And that's not counting the time it takes to learn the tool (which is not trivial, as they don't always have good debugging support) or the complexity cost of having the tool in one's stack (also not trivial, since they typically have their own languages, complicate the build process, and so on). This experience led me to mentally discount the whole field, hence my perhaps overly dismissive comment.
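As a rough illustration of how small a hand-written parser can be (my own sketch, not anything from the thread's links), here is a complete recursive-descent parser for arithmetic expressions with precedence, in Ruby:

```ruby
# Hand-written recursive-descent parser for  expr := term ('+' term)*,
# term := factor ('*' factor)*, factor := NUMBER | '(' expr ')'.
class Parser
  def initialize(text)
    @tokens = text.scan(/\d+|[+*()]/)
  end

  def parse
    node = expr
    raise "unexpected #{@tokens.first}" unless @tokens.empty?
    node
  end

  private

  def expr
    node = term
    node = [:+, node, term] while accept('+')
    node
  end

  def term
    node = factor
    node = [:*, node, factor] while accept('*')
    node
  end

  def factor
    if accept('(')
      node = expr
      raise "expected )" unless accept(')')
      node
    else
      Integer(@tokens.shift)
    end
  end

  # Consume the next token if it matches, else return nil.
  def accept(tok)
    @tokens.shift if @tokens.first == tok
  end
end

Parser.new("1+2*(3+4)").parse  # => [:+, 1, [:*, 2, [:+, 3, 4]]]
```

Adding error messages with positions, or extents for AST nodes, is straightforward here precisely because every decision point is ordinary code you can put a breakpoint on.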


I fully agree with you; I too favor hand-written recursive-descent parsing. However, it still helps to know the theory, for error recovery and the like.


I would argue that there's a big difference between "demystify how the text of your program changes the coloured lights on your screen" and "demystify why the text of your program changes the coloured lights on your screen".

I agree, there is a big difference between implementation and semantics, but I would normally expect a compilers class to focus on implementation (and I wouldn't expect a typical languages or compilers class to spend half a semester teaching how lambda calculus works as a computation model).


Still, I think that the experience of deriving the mentioned language constructs from almost nothing is a great learning experience in itself. It's this kind of teaching - along with efforts like "The Elements of Computing Systems"[1] - that facilitates, in my opinion, a better learning experience than just learning this stuff from the blackboard.

[1]: http://diycomputerscience.com/courses/course/the-elements-of...


Well, I suspect it would have helped me in my class, since we're using a functional language to implement a compiler. Our final exam from earlier today included a section regarding the lambda calculus. In fact, as I mulled over which incorrect answer to provide, I wished I could have talked previously with tomstuart about the subject.


Yay, I have a new project for this weekend! For those who missed it, he has a github project at (https://github.com/tomstuart/nothing) where you can reimplement this yourself, with some tips to help you out.

I especially like that he uses neither a fancy academic language which people can dismiss outright for not being practical - like Haskell - nor a language that demands ridiculous amounts of boilerplate obscuring the message - like Java. Ruby is used for real stuff and doesn't have pretentious academic baggage.

Edit: spelling.


> a fancy academic language which people can dismiss outright for not being practical - like Haskell

Actually that could be a good way to filter your audience. Some of those who would be least likely to appreciate this tutorial (and would be likely to post comments complaining how pointless (no pun intended) it was) would avoid a Haskell article in the first place.


This is n=1, of course, but I'm tremendously interested in the subject matter and superficially interested in Haskell. Yet, I don't work with Haskell on a daily basis, so when I see something non-trivial in Haskell, I skim it or put it aside to read later when I have time for study.

Whereas if it's in Ruby, JavaScript, or CoffeeScript, I try to read it immediately. So don't forget the class of slackers who mean to get around to Haskell fluency one day, but not today. We have a great interest in the subject matter and like Haskell in principle even if we're lazy(!) about evaluating Haskell code examples.


True, if you're optimising for an academic discourse. But that also limits its accessibility for anyone potentially interested who doesn't know the academic language. I can't be the only one who stopped reading an OCaml paper merely for not knowing OCaml. It's clear he's optimising for knowledge transfer, as he spent the time to not only post code on github, but provide different branches for those with different backgrounds.

I think this tutorial shows what we really mean about Turing equivalence in languages, about how data structures can be represented as functions, and presents a wider point of view that opens up other possible program designs.
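The "data structures as functions" point deserves a concrete sketch (again my own, in the spirit of the tutorial): a pair can be a function that remembers two values and hands them to whichever selector you apply it to.

```ruby
# Church pairs: a pair is a function closed over two values that
# applies a selector function to both of them.
PAIR  = ->(a) { ->(b) { ->(f) { f[a][b] } } }
LEFT  = ->(p) { p[->(a) { ->(b) { a } }] }
RIGHT = ->(p) { p[->(a) { ->(b) { b } }] }

pair = PAIR[:hello][:world]
LEFT[pair]   # => :hello
RIGHT[pair]  # => :world
```

Once pairs exist, lists, numbers, and the rest of the pyramid follow by the same trick; that is the wider point of view the tutorial opens up.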


It sounds like you would really have enjoyed SICP[1]. I remember feeling exactly what you're describing when I finished the Logo interpreter project. Logo, which (despite its dearth of parentheses) is a type of Lisp, is a surprisingly simple language that is not far removed from the lambda calculus.

[1]: http://mitpress.mit.edu/sicp/

If you have time on your hands in the near future, you should definitely give SICP a go--it really is a brilliant book.



