
> EU citizens have elected ineffective leaders for decades -- leaders that ignored the potential to set up homegrown cloud providers, software suites or tech companies.

It's not like there are people out there on the campaign trail every election saying "if I'm elected, I'll ensure we build homegrown cloud alternatives". Nobody campaigns on issues like that. The reality is you have to choose between people who want to kick the immigrants out and people who don't, people who want to enact green policies and people who don't, people who want a European army and people who don't. These big issues are what people vote on, even if we care that there should be a homegrown cloud industry. I really do care, but it's not something I can do anything about at the ballot box.


AGS is cool, but I wish they'd make a version for macOS. You need to use Wine to run it.

They did. [0]

> This port was initially done by Edward Rudd for providing Gemini Rue in a Humble Bundle when the AGS backend was Allegro 4. Currently, it uses SDL2.

[0] https://github.com/adventuregamestudio/ags/tree/master/OSX


Unfortunately, Edward's project is just a runtime for running AGS games on Mac. The AGS Editor (the topic of the original post) is still Windows-only, and will likely remain so due to its deep reliance on Windows GUI libraries.

Oh nice!

Is it too hard to port?

1. Could also be called Miller silliness

It's very impressive tech but subject to the same limitations as other generative AI: Inconsistency, inaccurate physics, limited time, lag, massively expensive computation.

You COULD create a sailing sim but after ten minutes you might be walking on water, or in the bath, and it would use more power than a small ferry.

There's no way this tech can run on a PS5 or anything close to it.


Five years is nothing to wait for tech like this. I'm sure we will see the first crop, however small, of "terminally plugged in" humans on the back of this in the relatively near future.

You raise good points, but I think the “it’s not good enough” stance won’t last for long.

I've never heard of CachyOS. I'm amazed at how many Linux versions there are and how good they seem and it makes me wish I could try them all.

Oh, you can try them all. That's pretty much an entire hobby itself.

GNOME comes with built-in virtual machine support via the Boxes app: just download an ISO and try as many as you want.

Huh, I had no idea SGI was not PC hardware. I just assumed they made PCs with their own OS.


I remember that SGI was superfast. I did some on-site work for a company that had an SGI workstation, and I had installed TeX on it for a typesetting system I’d developed with them. When I ran the TeX process, it was so fast that the screen did not scroll as it ran; instead it just refreshed with the whole multi-line output. At first I thought something had gone wrong because I was used to waiting a few seconds for the code to run on my PC, but it turned out, no, their machines really were that fast.


Back then there were quite a few competing architectures and UNIXes to go with them. SGI had MIPS with IRIX, IBM had POWER with AIX and later Linux, DEC had Alpha with Tru64 UNIX and VMS (not a UNIX), Sun had SPARC with Solaris, HP had PA-RISC with HP-UX. Only SPARC and POWER survived for long, and only POWER survives today as far as I know. Solaris of course lives on in various forms. The old UNIXes I guess mostly do not, being displaced almost entirely by Linux and the BSDs.


IBM apparently still releases updates for AIX on POWER.


They still build POWER infrastructure too, but as far as I know Linux pretty much dominates. You can even buy POWER workstations from third party vendors like Raptor Computing Systems. Very expensive though.


They made a couple of Intel boxes in the very late 90s / very early 00s, but the company was already on the way out by that point.


I'm with Proton on this, tbh. It's not a Lumo update, it's an attempt to tell people who don't use Lumo about its existence. Maybe it's not something you want to read, but an email saying "hey, have you heard of this thing called Lumo" is not something you'd send out to existing Lumo users.


Over in the Proton subreddit we've been wondering if there is currently some kind of anti-Proton campaign going on. Constantly people will loudly complain about completely benign things and get lots of people agreeing with them.


I thought the same thing last night when this was first posted. Lots of "if they can't get this right do they even care about users" as if a slipped-up miscategorization of a marketing email is the same as an oil company leaking waste into a river.

I operate on the assumption they hold firm on their technical commitments of encrypted email, email obfuscation, decent VPN and a solid password manager.

Call them out on mistakes, sure, but this blog post was written like a manifesto for something so minor.


Every time there is anything posted about Proton on HN, there is an immediate wave of super negative comments, none of which ever offer any arguments of substance. It's always just some vague allegations, and this has been the case for years. It's pretty obvious what is going on.


These vapid fanboy-esque comments make me significantly more likely to believe that Proton is astroturfing than the inverse that you are implying, that some unspecified actor is engaging in a conspiracy to impugn Proton's reputation. That said, if criticising Proton is indeed a paid vocation and you have some concrete details about where I can get paid for my comments daring to doubt the uncompromising holiness of Proton, I'm all ears.


Calling it an "anti-Proton campaign" or "benign" is just rhetorical hand-waving. Those words let you dismiss criticism without engaging with the substance. Proton did deliberately email people who opted out. That is a GDPR violation, full stop. They are a large, well-resourced company; "oops" is not an excuse. Criticism over that is not hysteria or bandwagoning, and blaming people for speaking up instead of the company for breaking the rules is weak.


[flagged]


You said all of that already and I replied. If you're not answering any of my arguments, I'm not going to bother continuing the debate.


> but an email saying "hey, have you heard of this thing called Lumo" is not something you'd send out to existing Lumo users

But it is an email you send out to people who have specifically gone out of their way to indicate to you that they do not want you emailing them about Lumo?


> You clearly underestimate the quality of people I have seen and worked with

"Humans aren't perfect"

This argument always comes up. The existence of stupid / careless / illiterate people in the workplace doesn't excuse spending trillions on computer systems which use more energy than entire countries and yet are still unreliable.


It does.

If you have 1% of them and they cost you 50-100k per year, then replacing them with computers makes plenty of sense.


I keep coming back to this. The most recent version of ChatGPT I tried was able to tell me how many letter 'r's were in a very long string of characters only by writing and executing a Python script to do it. Some people say this is impressive, but any 5-year-old could count the letters without knowing any Python.
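
For what it's worth, the script doesn't need to be anything clever. A minimal sketch of the kind of thing the model might produce (the string here is just a placeholder, not the one I actually gave it):

    # Hypothetical sketch: count how many times 'r' appears in a long string.
    text = "...the very long string of characters..."  # placeholder input
    print(text.count("r"))  # Python's built-in substring count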


How is counting not a technology?

The calculations are internal but they happen due to the orchestration of specific parts of the brain. That is to ask, why can't we consider our brains to be using their own internal tools?

I certainly don't think about multiplying two-digit numbers in my head in the same manner as when playing a Dm to a G7 chord that begs to resolve to a C!


The 5-year-old counts with an algorithm: they remember the current number (working memory, roughly analogous to context), scan the page and move their finger to the next letter. They were taught this.

It's not much different from ChatGPT being trained to write a Python script.

A notable difference is that it's much more efficient to teach something new to a 5-year-old than to fine-tune or retrain an LLM.
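
Spelled out, the child's procedure is just a loop with one remembered value (a sketch for illustration; none of this is what the model literally runs internally):

    # Sketch of the counting procedure described above: keep a running tally
    # (the "working memory"), scan one letter at a time (the moving finger),
    # and bump the tally on every match.
    def count_letter(text: str, target: str) -> int:
        tally = 0                # the remembered number
        for letter in text:      # scan left to right
            if letter == target:
                tally += 1
        return tally

    print(count_letter("strawberry", "r"))  # 3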


A theory behind LLM intelligence is that the layer structure forms some sort of world model with much higher fidelity than simple pattern matching over text. In specific cases, like where the language is a DSL which maps perfectly to a representation of an Othello game board, this appears to actually be the case. But basic operations like returning the number of times the letter r appears in 'strawberry' form a useful counterexample: the LLM has ingested many hundreds of books explaining how letters spell out words and how to count (which are pretty simple concepts, very easily stored in small amounts of computer memory), and yet its layers apparently couldn't model it from all that input (apparently an issue with being unable to parse a connection between the token 'strawberry' and its constituent letters... not exactly high-level reasoning).

It appears LLMs got RLHFed into generating suitable Python scripts after the issue was exposed, which is an efficient way of getting better answers, but feels rather like handing the child who's really struggling with their arithmetic a calculator...
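
If you want to see the token/letter mismatch for yourself, something like the following works (assuming the tiktoken package is installed; "cl100k_base" is one of OpenAI's published encodings, not necessarily the exact one any given model uses):

    # Show how a BPE tokenizer splits "strawberry" into subword tokens.
    # The model sees token IDs like these, not individual letters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print(tokens)                             # a short list of integer IDs
    print([enc.decode([t]) for t in tokens])  # the subword pieces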


In case anyone is like me and has never heard of Orion, apparently it's a browser

