Hacker News | juancn's comments

In principle most if not all inference hardware should be usable for training.

Efficiency is the question.


iPhones are basically effort-free: it takes a while, but 99% of your data is transferred without a hitch; some poorly written apps may need an extra step.

This also degrades light output off angle. If you're showing someone a picture, you lose up to 50% of light output.

Sure, but presumably you wouldn't have the feature activated when you're showing someone something on your phone?

It's a gimmick, it reduces overall brightness when off-angle.

I'd much rather have a tandem OLED screen.


Windows NT used to have its network configuration run in Prolog:

https://web.archive.org/web/20030218034509/http://www.resear...


They lost me at "use tablets"; that's plain bullshit.

Tablets are probably the worst waste of detergent. A cheap store-bought powder, used well, beats tablets. Look up the Technology Connections deep dive on dishwashers.


An AI model takes about 100 to 150 MWh to be trained.

A human at rest uses ~100 W, up to 400 W for an elite athlete under effort.

So 20 years at 200 W (I'm being generous here) ends up being ~35 MWh, still cheaper, and inference is still running at under 200 W!


The reductionism and comparison of a human life to a corporate product is disgusting, but it's valuable to see how they truly see the world they are creating.

Their idea of a person's value seems to be less than the Soviet communists' at this point: nothing but work units.


It was meant tongue-in-cheek: if we're doing wild comparisons, I might as well do one.

How much energy does it take to feed, clothe, house, entertain, and transport that human to 18? Probably $500K worth.

How much does it take to build the data centers that house the inferencing, plus the involved logistics, infrastructure setup, bribery, marketing, and organisational structure behind it? Easily in the hundreds of billions.

> Probably $500K worth.

What standard of living do you have!?


I never got why compilers don't have pluggable syntaxes.

I mean, once you decide the "flavor" (e.g.: typed, imperative, with a dash of functional and some oop for good measure), you could have more than one syntax and easily switch to whatever the reader wants.

We had an integration language in a product I worked on that had three flavors (you can check it here: https://docs.oracle.com/cd/E13154_01/bpm/docs65/pdf/OracleBP... , page 254)

The original syntax scared some people, so we had the compiler use the same AST with three different parsers: Original, Java and VB. The editor (which had syntax highlighting and auto completion) would let you see the code however you wanted.

You could even have a setting in the IDE that always showed the code as you wanted.

We even respected some weirdness in the spacing and indentation of comments and code when needed.

For some languages, like Rust, it may be a stretch, but for most vanilla languages you could easily re-skin them to look much more like something else that's comfy for whoever is reading the code.


> I never got why compilers don't have pluggable syntaxes.

Because then you've created dialects and produced the Curse of Common Lisp all over again.

Syntax matters a heck of a lot, and you want consistency and coherency across codebases of the same language; otherwise you don't have a language at all. So making it "pluggable" is in fact the worst possible choice.

It's also a naïve view to think that concrete syntax can be trivially swapped out with the abstract syntax remaining. For certain things it can usually work (like declaration syntax in the article) but for most things it cannot.


> I never got why compilers don't have pluggable syntaxes.

An interesting question, but the answer is "because it's a bad idea" that doesn't actually solve the problem.

That said, the right way to implement this is as a "transpiler" that compiles one syntax into another. And only the people who want to use it pay the costs.


> An interesting question, but the answer is "because it's a bad idea" that doesn't actually solve the problem.

This doesn't really explain anything, and it isn't clear that both of you have the same model of "the problem" in mind.


The usual way people get here is that they didn't realise programs are for reading by humans. That's why we have formatting conventions (the compiler doesn't care but humans do) and so it's also why a single syntax is important.

If there are six syntaxes for a hypothetical language L, then either every L practitioner must learn all six syntaxes (ew, no thanks) or most L programmers can't read each other's programs and so it's basically unmaintainable.


Code is communication. The compiler could handle it, but what is important is that other people can.

There are many infamous examples of people using the C preprocessor to write near-Pascal or similar in C. It largely died out because it hindered effective communication about the code.


The sensing part is really really hard.

Hands are mechanically fairly simple to mimic. Touch, though? The way skin senses micron-level shifts and subtle temperature changes is plain tough.


> Hands, mechanically are fairly simple to mimic

I can assure you they are not.

Human hands have absolutely crazy performance: 15+ degrees of freedom, sub-millimeter precision, no backlash. Strong enough to lift 100 lbs. Gentle enough to catch a thrown egg without breaking it. Rigid enough to hammer a nail without dropping the hammer. A compact forearm for reaching into tight spaces. Water- and dust-proof. Oh, and it'll last for decades without maintenance.

Even a $100k robot hand like a Shadow Hand can't compare.


Nothing could possibly go wrong.

