There are M-Discs. These are CD/DVD/Blu-ray discs which use a drive with a higher-power laser and work by ablating an inorganic recording layer, rather than the photosensitive dye used in lower-powered discs. Regular drives will read both kinds.

For a small amount of data (crypto keys?), consider deep laser engraving on stainless steel. That's very durable. Or even engrave text into stainless steel with a small CNC mill.

You can engrave QR codes, bar codes, etc. But there's a lot to be said for engraving plain text.
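
If you do go the QR route, use the highest error-correction level so scratches and corrosion stay recoverable. A minimal sketch in Python using the third-party "qrcode" library (the key string is a made-up placeholder):

    # pip install qrcode[pil]
    import qrcode

    key = "EXAMPLE-KEY-0123456789ABCDEF"  # hypothetical placeholder, not a real key

    qr = qrcode.QRCode(
        error_correction=qrcode.constants.ERROR_CORRECT_H,  # highest level
        box_size=10,  # pixels per module; scale to the engraver's resolution
        border=4,     # quiet zone, in modules
    )
    qr.add_data(key)
    qr.make(fit=True)
    qr.make_image(fill_color="black", back_color="white").save("key_qr.png")

Level H tolerates roughly 30% module damage, which is the margin you want on a plate that may corrode or get scratched.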


Just get a set of alphanumeric punches and blank dog tags for crypto keys. Much cheaper than engraving.

Good answer. A set of punches is about $25, and a manual dog tag stamping machine (set wheel to desired character, pull lever) is about $250. That will get you a tough piece of metal.

> automation tools ... eliminates the boring part of the job, and then the job description shifts.

But the job had better take fewer people, or the automation is not justified.

There's also a tradeoff between automation flexibility and cost. If you need an LLM for each transaction, your costs will be much higher than if some simple CRUD server does it.
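
A back-of-envelope sketch makes the gap concrete (every number below is an assumption for illustration, not real vendor pricing):

    # Per-transaction cost: LLM call vs. amortized CRUD server.
    # All figures are illustrative assumptions.
    llm_tokens_per_txn = 2000         # prompt + completion, assumed
    llm_price_per_1k_tokens = 0.01    # USD, assumed
    llm_cost = llm_tokens_per_txn / 1000 * llm_price_per_1k_tokens

    server_monthly_cost = 50.0        # small CRUD box, assumed
    txns_per_month = 1_000_000        # assumed volume
    crud_cost = server_monthly_cost / txns_per_month

    print(f"LLM:  ${llm_cost:.5f}/txn")            # $0.02000
    print(f"CRUD: ${crud_cost:.7f}/txn")           # $0.0000500
    print(f"ratio: {llm_cost / crud_cost:,.0f}x")  # 400x

Under those assumed prices, the LLM path is hundreds of times more expensive per transaction, so it has to earn its keep on flexibility.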

Here's a nice example from a more physical business - sandwich making.

Start with the Nala Sandwich Bot.[1] This is a single robot arm emulating a human making sandwiches. Humans have to do all the prep, and all the cleaning. It's slow, maybe one sandwich per minute. If they have any commercial installations, they're not showing them. This is cool, but ineffective.

Next is a Raptor/JLS robotic sandwich assembly line.[2] This is a dozen robots and many conveyors assembling sandwiches. It's reasonably fast, at 100 sandwiches per minute. This system could be reconfigured to make a variety of sandwich-format food products, but it would take a fair amount of downtime and adjustment. Not new robots, just different tooling. Everything is stainless steel or food grade plastic, so it can be routinely hosed down with hot soapy water. This is modern automation. Quite practical and in wide use.

Finally, there's the Weber automated sandwich line.[3] Now this is classic single-purpose automation, like 1950s Detroit engine lines. There are barely any robots at all; it's all special purpose hardware. You get 600 or more sandwiches per minute. Not only is everything stainless or food-grade plastic, it has a built-in self cleaning system so it can clean itself. Staff is minimal. But changing to a product with a slightly different form factor requires major modifications and skills not normally present in the plant. Only useful if you have a market for several hundred identical sandwiches per minute.

These three examples show why automation hasn't taken over. To get the most economical production, you need extreme product standardization. Sometimes you can get this. There are food plants which turn out Oreos or Twinkies in vast quantities at low cost with consistent quality. But if you want product variations, productivity goes way, way down.
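
Rough arithmetic on changeover downtime shows the same thing. The rates are from the examples above; the changeover times are my assumptions:

    # Toy model: sandwiches per 8-hour shift when the product mix
    # forces changeovers. Changeover durations are assumed.
    SHIFT_MIN = 8 * 60

    lines = {
        # name: (sandwiches/min, assumed changeover minutes per switch)
        "Nala arm":   (1,   5),    # flexible, nearly no retooling
        "Raptor/JLS": (100, 60),   # retooling takes real downtime
        "Weber":      (600, 480),  # a form-factor change eats the whole shift
    }

    for n_switches in (0, 1, 3):
        print(f"{n_switches} product change(s) per shift:")
        for name, (rate, changeover) in lines.items():
            run_min = max(0, SHIFT_MIN - n_switches * changeover)
            print(f"  {name:<11} {rate * run_min:>8,} sandwiches")

One product change per shift barely dents the flexible lines but zeroes out the Weber line. That's the whole tradeoff in three rows.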

[1] https://nalarobotics.com/sandwich.html

[2] https://www.youtube.com/watch?v=_YdWBEJMFyE

[3] https://www.youtube.com/watch?v=tRUfdBEpFJg


> But the job had better take fewer people, or the automation is not justified.

Not necessarily. Automation may also just result in higher-quality output, because it eliminates mistakes (less the case with "AI" automation) and frees up time for the humans to actually do quality control. This might require the people to be more skilled on average, though.

Even if it only results in higher output volume, you often see demand grow as well, because the price goes down.


There's a classic book on this, "Chapters on Machinery and Labor" (1926). [1]

It presents three cases of what happened when a process was mechanized.

The "good case" was the Linotype. Typesetting became cheaper and the number of works printed went up, so printers did better.

The "medium case" was glassblowing of bottles. Bottle making was a skilled trade, with about five people working as a practiced team to make bottles. Once bottle-making was mechanized, there was no longer a need for such teams. But bottles became cheaper, so there were still a lot of bottlemakers. But they were lower paid, because tending a bottle-making machine is not a high skill job.

The "bad case" was the stone planer. The big application for planed stone was door and window lintels for brick buildings. This had been done by lots of big guys with hammers and chisels. Steam powered stone planers replaced them. Because lintels are a minor part of buildings, this didn't cause more buildings to be built, so employment in stone planing went way down.

Those are still the three basic cases. If the market size is limited by a non-price factor, higher productivity makes wages go down.

[1] https://www.jstor.org/stable/1885817?seq=1


I think this is probably the trajectory for software development, because while people claim there is potentially unlimited demand, that really only materializes at rock-bottom prices.

In many cases you can saturate the market. The stone planer example is an early case. Cheaper lintels don't mean more windows, because they are a minor part of the cost. Cheaper doorknobs do not generate demand for more doorknobs, because the market size is the number of doors. Cheap potatoes, soy, corn, and cheese have saturated their markets - people can only eat so much.

This might also be true of web analytics. At some point, more data will not improve profitability.


> But the job had better take fewer people, or the automation is not justified.

In many cases, this is a fallacy.

Much like programming, there is often essentially an infinite amount of (in this case) bookkeeping tasks that need to be done. The folks employed to do them work on the top X number of them. By removing a lot of the scut work, second order tasks can be done (like verification, clarification, etc.) or can be done more thoroughly.

Source: Me. I have worked waaaay too much on cleaning up the innards of less-than-perfect accounting processes.


Well said. It’s like they think that the only thing automation is good for is cutting costs. You can keep the same staff size but increase output instead, creating more value.

"They" don't think the only thing automation is good for is cutting costs. Management thinks the only thing worth doing, at all, using any means, is cutting costs.

Well that’s clearly false, and obviously “they” refers to people that include management lol

The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

From the perspective of modern management, there's really no reason to keep people if you can automate them away.


> The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

> From the perspective of modern management, there's really no reason to keep people if you can automate them away.

These are examples of how bad management thinks, or at best, how management at dying companies thinks.

Frankly, this take on “modern management” is absurd reductionist thinking.

Just a few points about how managers in successful companies think:

- Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

- That said, in any sufficiently large organization, there is usually dead weight that can be cut. AI will be a bright light that exposes the least valuable employees, imho.

- There is a difference between threshold levels of compliance (e.g., docs that have to be filed for legal reasons) and optimal functioning. In accounting, a good team will pay for themselves many times if they have the time to work on the right things (e.g., identifying fraud and waste, streamlining purchasing processes, negotiating payment terms, etc.). Businesses that optimize for making money rather than getting a random VP their next promotion via cost-cutting will embrace the enhanced capability.

Yes, AI will bring about significant changes to how we work.

Yes, there will be some turmoil as the labor market adjusts (which it will).

No, AI will not lead to a labor doomsday scenario.


> - Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

Your best employees at a given price, though.

Part of firm behavior is to let go of their most expensive workers when they decide to tighten belts.

Unless your employee is unable to negotiate, lacking the information and leverage to be paid the market rate for their ability, your best employees will be your more expensive, senior employees.

Everything is at a certain price. Firing your best employee when you can get the job done with cheaper, or you can make do with cheaper, is also a common and rational move.

While I agree that a labour doomsday scenario is unlikely, I think an under-employment scenario is highly likely. Offshoring ended up decimating many cities and local economies, as factory foremen found new roles as burger flippers.

Nor do people retrain into new domains and roles easily. The more senior you are, the harder it is to recover into a commensurately well-paying role.

AI promises to reduce the demand for the people in the prime age to earn money, in the few high paying roles that remain.

Not the apocalypse as people fear, but not that great either.


See self-checkouts at supermarkets, with staff reduced to stepping in when the checkouts go bad, or to filling the shelves.

Not only do the prices increase, now we get pushed into doing their jobs for free, while the chains lay off their employees.

Hence why I usually refuse to use them, even if I have to spend some extra time queuing.


I have mixed feelings on these.

For a full cart, I expect a cashier to be available.

If I have 3-5 items, I’d rather do it myself than wait.

That said, even 20-30 years ago, long before self-checkout, at places like Walmart, one could wait 15-20 minutes in line. They had employees but were too cheap to have enough. They really didn’t care.

I don’t even understand how that math works. I might have kept going there if they had a few extra lowly paid cashiers around.


No? You don’t only gain justification for automation by cutting costs. You can gain justification by increasing profits. You can keep the same number of people but use them more efficiently, and you create more total value. The fact you didn’t consider this worries me.

Also, the statement “show why automation hasn’t taken over” is truly, hysterically wrong. Yeah, sure, no automation has taken over since the Industrial Revolution.


You can increase profits by cutting costs. It is remarkably easier to do in the short term. And even if you choose not to downsize, you can drop or stagnate wages to gain from the fact that everyone else is downsizing.

None of what you just said is anything I hadn’t considered, and also none of it negates anything I said.

The Nala bot reminded me of the guys at Felipe's in Cambridge MA. When they're building burritos during dinner rush, you'd swear to god that multiple different ingredients were following a ballistic trajectory toward the tortilla at any given time. If there was a salsa radar it would show multiple inbounds like the Russkies were finally nuking us.

ETA: It didn't remind me of this because the robot is good at what it does. It reminded me of just how far away from human capabilities SOTA robotic systems are.


That’s one use case that is very hard to automate right now, yes.

Thank you. Having automation means process control, which means handling sources of variation against a defined standard/spec. Claims of all jobs being done by AI also assume that we will end up with factories running automated assembly lines of thought.

I have been losing my mind looking at the output of LLMs and having to nail variability down.


What, you've never seen industrial strength sandwich production?

[1] https://www.youtube.com/watch?v=_YdWBEJMFyE

[2] https://www.youtube.com/watch?v=tRUfdBEpFJg


SpaceX wants to be regulated under the Railway Labor Act? [1] They should be careful what they ask for. Some anti-union tactics, such as fussing with the bargaining unit definition, aren't available under the RLA. SpaceX is going to end up as a union shop.

Airlines are under the Railway Labor Act because Congress put them there in 1936, and they are almost totally unionized.

[1] https://nmb.gov/NMB_Application/wp-content/uploads/2020/04/R...


Pretty sure Elon believes that in 5-10 years, when unionization is a threat, most union-type workers will already be Optimus drones anyway.

Pretty sure he doesn’t believe anything of the sort but is coasting from promise to promise to keep the investments rolling in.

To be fair, he does enough hallucinogens that he may actually believe his own bullshit.

Don't worry. The SpaceX employees will never agree on how to pronounce "unionized".

A Union that can't strike is a union in name only.

Just like a railway! The last time they tried to strike for (checks notes) paid time off, Congress said "No" and prevented them from striking. Legally.

A work-to-rule union would probably do more damage to SpaceX than a strike…

The CO2 graph over the decades is painfully clear.[1] From 321 ppm in 1970 to 428 ppm in mid-2025, measured in Hawaii atop Mauna Loa, far from any major CO2 sources. Everything else is noisy and statistical, but the CO2 measurement increases very steadily.
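
The arithmetic on those two endpoints (using the figures quoted above):

    # Average growth implied by the Mauna Loa endpoints above.
    ppm_1970, ppm_2025 = 321.0, 428.0
    years = 2025.5 - 1970.0

    rise = ppm_2025 - ppm_1970              # 107 ppm
    rate = rise / years                     # ~1.9 ppm/year on average
    pct = (ppm_2025 / ppm_1970 - 1) * 100   # ~33% increase

    print(f"{rise:.0f} ppm over {years:.1f} years = "
          f"{rate:.2f} ppm/yr ({pct:.0f}% rise)")

And recent years run above that long-term average, not below it.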

[1] https://gml.noaa.gov/ccgg/trends/


Yes, and the scary thing is that soon the atmospheric carbon PPM will be high enough to start affecting how we think, act, and feel on a day to day basis.

Surprisingly, no. Humans adapt to higher CO2 concentrations over a period of days to weeks. Submarines run as high as 5000ppm, which is way above normal atmospheric concentration.[1] Many indoor environments are above 1000ppm.

This seems to be like high altitude adaptation. It's going back and forth between concentrations that causes problems at moderate concentrations. The adaptation doesn't happen.

[1] https://www.nationalacademies.org/read/11170/chapter/5#51


They don't have a lost-kid feature?

In China, kids are accustomed to face recognition early.[1] The kids are checking into school via fare gates with face recognition. Here's an ad for Hikvision surveillance systems showing the whole system.[2] Hikvision has a whole series of videos presenting their concept of a kindly, gentler Big Brother. This is probably the most amusing.[3]

Amazon's concept is in some ways more powerful. They don't need full coverage. Just sparse, but widespread coverage. Anything that moves around will pass through the view of cameras at some point. Suspicious behavior can be detected in the back end cloud processing, which improves over time.

Flock has the same concept. Flock coverage is sparse in terms of area, but widespread.

"1984" was so last cen.

[1] https://www.youtube.com/shorts/SMKG8aLTJ38

[2] https://www.youtube.com/watch?v=XnHFJz-u85A

[3] https://www.youtube.com/watch?v=otAuH6FDhgw


I think the initial plan was to introduce a lost-kid/elderly feature, but they thought people would be more willing to accept a pet version (pun intended) first.

The implementation is quite strange as well. I can imagine a version where the camera itself compares the recorded footage against a well-known database of lost children, just like the milk cartons.


[flagged]


Not who you are replying to, but I think mass surveillance is bad and evil, period. So, any person or company contributing toward mass surveillance is bad.

Most bad things have some good part you can point to. Mass surveillance and all of the other police and government aiding technologies usually point to improved conviction rates or something similar. But making police more efficient at convicting people isn't the only goal of society. That's only one part of what makes up a country and it's society. And, as the saying goes: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."


That's beside the point? Gaining security by losing freedom was always on the table. What's interesting is the cultural shift toward not caring about losing freedom.

I think it is the point: there is a balance between freedom and safety.

For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.

There are places with even less safety and more “freedom” than the US so people who take an absolutist view towards freedom also need to justify why the freedoms that the US does not grant are not valuable.


> I think it is the point: there is a balance between freedom and safety.

Sometimes. But freedom and security are not always opposed.

It’s possible to trade freedom for security but it’s also possible that freedom creates security. Both can be true at the same time. Surveillance, not security, is what opposes freedom. Surveillance simply trades one form of insecurity for another at the cost of freedom.

> For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

2A seems to make the case that the freedom to bear arms creates security. Given how history played out it’s hard to argue against. I’m not arguing we should be able to take guns on planes but 2A is an example of freedom creating security.


Everything I want to do in public I can still do.

What "freedom" is lost? I gain security and lose no freedoms (unless you are doing something illegal).

When property crime is up 53%, plenty of people are willing to lose "freedom", whatever you are referring to, in exchange for safety.


How about just general privacy? I mean do you really want someone / the government to be able to track everywhere you go?

- Going to your girlfriend's place while the wife is at work

- Visiting a naughty shop

- Going into various companies for interviews while employed

With mass surveillance there is the risk of mass data leak. Would you be comfortable with a camera following you around at all times when you're in public? I wouldn't be.


You were recorded smoking marijuana, an illegal drug at the federal level.

You were recorded walking into an abortion clinic, and face recognition identified you as a resident of a state where abortion is illegal.


Well, aren’t both of those things crimes? I’m not a fan of mass surveillance either, but maybe pick a different example.

The second is clearly not. State governments don't have jurisdiction over their residents when they are out of state.

Read about Texas.

It's a crime to leave the state to get an abortion. They can prosecute when you return home.

There have been vigilante patrols in West Texas, watching the necessary routes out of the state. The law gives any resident the grounds to turn in their neighbor for planning to get an abortion.


Is "crime" one and the same as "wrong"?

The solution is to change the laws, not to stop enforcing them. Otherwise this is basically just giving up on the concept of having laws.

The point is to maintain pressure so that even when the law becomes unjust, people aren't immediately harmed.

Selective enforcement has always been the law of the land.

The right to privacy: to not let the government keep a master record of everywhere you've ever been and everything you've ever said, just in case they someday decide to revoke free speech and due process, or decide it doesn't apply. Lately we have plenty of examples of how quickly that can happen.

The Stasi were "tough on crime" too, back when that was expensive. How quickly we forget. Well, you're welcome to find a panopticon to live in, but excuse others for not finding it a good tradeoff.

Do you really need a 0-60mph time of 3.0 seconds in a mommymobile?

How much of that performance comes for free, from optimizing for range/efficiency?

What's the obvious "that could be less" in the system that wouldn't negatively impact efficiency?


None of it.

If the motor is smaller, it pulls less current.

If it pulls less current, you can use batteries which aren't specced for high amps.

If you use fewer amps, you can use thinner cabling and split the batteries up into various compartments. That means heat is more distributed. Less active cooling, if any, is needed, for both batteries and motors.

All of the above can translate to less weight, which means better range.
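
A rough sketch of the current-and-copper chain (pack voltage and cable resistance are assumed numbers, not any real vehicle's specs):

    # Motor power -> current -> I^2*R heat in the cabling.
    PACK_VOLTS = 400.0  # assumed nominal pack voltage

    def peak_amps(motor_kw: float) -> float:
        return motor_kw * 1000 / PACK_VOLTS  # ignores inverter losses and sag

    def cable_loss_watts(amps: float, milliohms: float) -> float:
        return amps**2 * (milliohms / 1000)  # P = I^2 * R

    for kw in (150, 400):  # modest motor vs. performance motor
        amps = peak_amps(kw)
        print(f"{kw} kW: {amps:.0f} A peak, "
              f"{cable_loss_watts(amps, 5):.0f} W lost in a 5 mOhm cable run")

Because cable heating goes with the square of the current, cutting peak power from 400 kW to 150 kW cuts the copper losses by roughly 7x, not 2.7x. That's where the thinner cables and lighter cooling come from.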


Weight has nearly no effect on the range of an EV. The YouTube channel Aging Wheels has two good videos on this.

Here he talks about towing, and he demonstrates that loading the truck to max capacity makes nearly no difference: https://youtu.be/UmKf8smvGsA

He also covered an attempted Cannonball run where they stuffed two extra battery packs into a Rivian R1T: https://youtu.be/yfgkh4Fgw98

Real difference makers are smaller wheels and aerodynamics.


> are smaller wheels

Looks like rolling resistance decreases with diameter [1]. So, is it from the increased drag of the higher stance? Would lowering the car work just as well, or better?

[1] https://www.tirereview.com/science-behind-rolling-resistance...


I think it's mostly from aerodynamics. Lowering the car could help but even just smaller rims, with the same overall diameter (rim + tire), can have a 15% impact on range. This Engineering Explained[1] video does an okay job with some of the math but he clarifies it well with a comment:

> CLARIFICATION! Why do bigger wheels mean worse efficiency, when the overall tire diameter remains the same? This comes down to aerodynamics. A 20" wheel will cause more of a disruption in airflow than an 18" wheel. That's why Tesla (and others) uses aero covers on their wheels (Car & Driver testing showed it gives about a 3% efficiency bonus at speed). The smaller the wheel, the more of the side profile of the wheel & tire is perfectly flat (the tire is flat, the wheel open: more tire = more flat area, less open area). Ideally, you'd have just a plain, solid sheet for the wheel, but obviously that's not ideal for brake cooling. Wheel covers are today's common compromise as they have some airflow, but minimal.

[1] https://youtu.be/NYvKxsYFqO8


> solid sheet for the wheel, but obviously that's not ideal for brake cooling

So, temperature driven radial louvers!? :D


That's interesting. It demonstrates that regenerative braking really works. The energy you expend going uphill, you mostly get back going downhill. The energy you expend speeding up, you mostly get back slowing down. His tests were a round trip, so start and end altitude are the same. And he kept a fixed speed on a freeway, so there wasn't much acceleration energy expenditure or energy loss into friction brakes. You don't get drag or rolling resistance back, so that apparently dominates. Those don't vary too much with load.
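
Putting rough numbers on that, with assumed truck-ish coefficients:

    # Steady-speed road load: aero drag has no mass term,
    # rolling resistance is linear in mass. Coefficients assumed.
    RHO = 1.2    # kg/m^3, air density
    CDA = 0.9    # m^2, drag coefficient * frontal area (assumed)
    CRR = 0.009  # rolling resistance coefficient (assumed)
    G = 9.81
    V = 29.0     # m/s, about 65 mph

    def road_power_kw(mass_kg: float) -> float:
        drag = 0.5 * RHO * CDA * V**3    # W, independent of mass
        rolling = CRR * mass_kg * G * V  # W, linear in mass
        return (drag + rolling) / 1000

    base, loaded = road_power_kw(3000), road_power_kw(3800)
    print(f"3000 kg: {base:.1f} kW, 3800 kg: {loaded:.1f} kW "
          f"(+{(loaded / base - 1) * 100:.0f}%)")

Under those assumptions, adding 800 kg (27% more mass) raises steady-state power draw by only about 10%, because the dominant aero term never sees the weight.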

Nice result.


It has an effect on the range of the tires.

> from optimizing for range/efficiency?

I meant for normal highway driving, not drag racing.

Optimal highway driving is still about minimizing resistance and losses. Cold-weather driving is mostly what drives the need for a battery capable of the high performance, from what I understand.

Is a "small" motors more efficient than a large one? I suspect no, with the assumption that everything is sized so the "drag race" operating range would be well into the peak, rather than sustained, operating range.


In my observation, the primary purpose of this performance seems to be letting drivers take on-ramps at really slow speeds and then floor it to merge, leaving the slower vehicles behind them in a crap situation.

That's like asking if mommy needs a razor sharp knife in the kitchen or if we should keep it dull for legacy reasons.

Uhhh, what? No it isn't.

Having a sharp knife is safer than a dull one.

Having a massive, heavy steel box weighing thousands of kilos that can accelerate that quickly, which you can operate in public with little to no useful training, is not safer. I'm sorry, no car outside of a race track needs to accelerate that quickly. It's absurd.


The analogy fits perfectly. A slow car is also unsafe; that's why low-speed plates exist, along with minimum speed limits. There's no such thing as a vehicle with too much acceleration, and there never will be.

Yeah what qualifies as a "slow car"? An Amish dude on a horse and buggy?

I have driven plenty of "slow" cars with less than 100 horsepower without issue. If you can't figure out how to merge, that's a skill issue, and another prime reason not to be giving people cars with former-F1 levels of acceleration. You don't need 0-60 in 3 seconds to fucking merge, get real!


Having driven a very-slow-acceleration car, yes, yes you do, trust me. Nothing worse than merging into dense traffic with high-speed trucks next to you and not being able to come up to speed before the ramp ends.

So eleven years ago someone put a backdoor in the Telnet daemon.

Who?

Where's the commit?



That link goes to a page full of random garbage. No commits there to be seen.

Apparently the owners of that website don't like my choice of user agent, and have decided to punish me accordingly.


Same here. It says please wait while verifying.

I just checked, and it's confirmed: I am definitely using a web browser. It seems my browser and this site have a different definition of web standards, however.

So exhausting to be surrounded by people with a paranoid, irrational fear of robots, who don't give a shit who they harm in their zeal to lash out and strike the evil bots.


That's crazy. This is core business-critical software, but they just YOLO critical changes without any automated tests? This PR would be insta-rejected at the small SaaS shop I work at.

If you think you can do better, you're welcome to do better. I say this without a hint of sarcasm. This is how open source works. It's a do-ocracy, not a democracy. Whoever makes a telnet server gets to decide how the telnet server works and how much testing it gets before release.

Maybe the lesson here is to stop letting the GNU folks do things, if this is what they do. This is only one example of craziness coming out of the GNU camp.

Or, flip the responsibility to what it has always been understood to be, when using open source software from random volunteers (some being bad actors) on the internet for anything remotely critical: audit the source.

GNU doesn’t provide labor, only organizational tools like mailing lists and whatnot. The projects that GNU supports are still run by individual volunteers. If you want it done better then please volunteer so that you can be the one doing it better.

I am the one doing it better. GNU software is slowly being deprecated on my system, starting with glibc.

So you’re just changing which volunteers you depend on? That’s really productive of you. Thank you for your service.

You can enslave yourself to Microslop if you prefer.

Culture has changed a lot since the 20th century, and older projects can have antiquated norms around things like testing. I was just listening to a recent podcast talking about how worrisome it is that OpenSSL has a casual culture around testing[1], and I was reminded of how normal that used to be. I think in the case of telnetd you also have the problem that it's been deprecated for multiple decades, so I'd bet they struggle even more than average to find maintainer time.

1. https://securitycryptographywhatever.com/2026/02/01/python-c...


Even with automated tests, you'd need to think of this exploit, right? Perhaps fuzzing would have caught it. The mailing list says they proved it successful on

- OpenIndiana

- FreeBSD

- Debian GNU/Linux

So not complete YOLO.

See https://lists.gnu.org/archive/html/bug-inetutils/2015-03/msg...

FWIW, a well known LLM agent, when I asked for a review of the patch, did suggest it was dodgy but didn't pick up the severity of how dodgy it was.


> a well known LLM agent

Which one?


Not GP, but my local Ministral 3 14B and GPT-OSS 20B didn't catch anything unless I gave some hints.

He says 'well known' so I assume Claude or GPT, I just don't get why he's being coy.

I thought that by not naming it, I wouldn't shift the focus to the particular model, but it did the opposite. It was gpt-5.3-codex in medium mode.

Any business that has a telnet daemon able to be reached by an unauthenticated user is negligent. Just the fact that everything is in the clear is reason enough to never use it outside of protected networks.

Unless it doesn’t matter if it’s eavesdropped.

Traffic could be tampered with as well.

Sometimes that doesn't matter either. That is the valid use case of a plain-text protocol like telnet: doesn't matter.

Sure. But, contrary to what some people seem to think, "it's nothing secret" is not a sufficient justification to use an unencrypted plain-text protocol.

It literally is. I do not give a fuck if someone reads or fakes the wind speed from the sensor on my roof.

Most '90s-era software had zero tests. Nobody gave it a second thought.

This is quite untrue as a blanket statement. The problem is that there was massive cultural variation: if you installed a Perl module from CPAN you probably ran hundreds of tests. If you ran a C program, it ranged from nothing to “run this one input and don’t crash” to exhaustive suites. PHP tended towards nothing with a handful of surprises.

As a data point, my first tech job was QA for a COBOL compiler vendor. They supported roughly 600 permutations of architecture, operating system, and OS version with a byte-coded runtime and compiler written in C. I maintained a test runner and suite with many thousands of tests, ranging from unit tests to things like Expect UI tests. This was considered routine in the compiler vendor field, and in the scientific computing space I moved into. I worked with someone who independently reproduced the famous Pentium FDIV bug while figuring out why their tests failed, which surprised no one, because that was just expected engineering.

Then you had the other end of the industry where there was, say, a 50k-line Visual Basic desktop app whose developers didn't even use version control. At a later job, I briefly encountered a legacy system which had 30 years of that: they had the same routine copied in half a dozen places, each modified slightly, because when the author fixed a bug they weren't sure if it would break something else, so they just created a copy and updated only the module they were working on.


True, it is colored by my own personal experience. I remember CPAN, Perl, and installing modules with tests. I also remember my day job: a 500,000-line C and C++ code base with literally 5 automated tests that nobody ever ran!

Yeah, I think it’s really hard to understand how much more cultural variation there was without first the internet and open source, and then services like GitHub, GitLab, BitBucket, etc. converging people onto similar practices and expectations.

Early '90s maybe. By the late '90s people knew tests were a good idea, and many even applied that in practice.

There's a famous XKCD about this: https://xkcd.com/2347/

In this case the hero's name is apparently Simon Josefsson (maintainer).


I feel like we should just start saying 2347. Everyone knows what you mean.

https://xkcd.com/2347/

Ah, someone beat me to it!


It can't be critical business software if the business to which it is critical isn't paying anything for it.

/s


Telnet's cleartext and always has been. A backdoor seems like overkill.

You still have to know the password or snoop on someone typing the password. But with this vuln, you don't. You can just get root instantly.

> backdoor

Do you mean that it's intentional? Why do you think so?


It wasn't a backdoor, just a very serious security bug. Congrats on jumping straight to conspiracy and paranoia, though.

It's only a conspiracy and paranoia if it's wrong. 11 years ago was 2015.

It is wrong. The author is known, was acting in good faith, and simply fucked up really badly.

I don't know what 11 years ago has to do with anything, besides the awful lifespan of such a severe bug.


> GNU organization

> giant security flaw

Checks out.


Waymo seems to be unnecessarily secretive about this. Why not let reporters visit the control centers? Zoox had the New York Times visit one a few years ago. It came out that there are about 1.5 support people per car. Nobody has a steering wheel. They hint to the cars by dropping "breadcrumbs" on screen.

Waymo has had blog posts detailing this stuff.

The ratio of workers to cars matters more, imo, than whether the workers drive the cars. The fundamental sell of self-driving is that it saves labor. If it effectively doesn't, self-driving is essentially going to be a luxury rather than a replacement for the existing models.
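
Rough arithmetic on why the ratio is the whole game (both hourly rates are assumptions):

    # Labor cost per vehicle-hour: remote support staff vs. a driver.
    DRIVER_PER_HR = 25.0   # USD, assumed fully-loaded driver cost
    SUPPORT_PER_HR = 35.0  # USD, assumed fully-loaded support cost

    for staff_per_car in (1.5, 0.5, 0.1):
        cost = staff_per_car * SUPPORT_PER_HR
        verdict = "saves labor" if cost < DRIVER_PER_HR else "costs more than a driver"
        print(f"{staff_per_car} staff/car -> ${cost:.2f}/hr ({verdict})")

At Zoox's reported 1.5 support people per car, the robotaxi plausibly costs more in labor than a driven car; at these assumed wages, the economics only flip somewhere below about 0.7 staff per car.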

Nvidia says there will be no new gamer GPUs in 2026, and prices will keep increasing through 2030. They're too focused on enterprise AI machines.
