Hacker News | dijksterhuis's comments

nope, i say “i’m going to search for XYZ” or similar

> It’s very sad to see what’s happening over there

looks out window.

i don’t see a ministry of truth or posters of Big Brother. i think we’re still a ways away from 1984.


i’m absolutely, concretely and overwhelmingly fine with the concept of gun control here as a uk citizen.

i say this as someone who did target rifle shooting as a kid. so, i’ve been around weapons in a positive way.

the controls are a good thing.


> i’m absolutely, concretely and overwhelmingly fine with the concept of gun control here as a uk citizen.

That... speaks volumes about the citizens of said country.


> This leads to the point: in general do we care about this non-determinism?

> Most of the time, no we don't.

well that’s a sweeping generalisation. i think this is a better generalised answer to your question.

> It depends on the problem we’re trying to solve and the surrounding conditions and constraints.

software engineering is primarily about understanding the problem space.

are 99% of us building a pacemaker? no. but that doesn’t mean we can automatically make the leap to assuming a set of tools known for being non-deterministic are good enough for our use case.

it depends.

> Once you accept that the next stage is accepting that most of the time the non-deterministic output of an LLM is good enough!

the next stage is working with whatever tool(s) is/are best suited to solve the problem.

and that depends on the problem you are solving.


> are 99% of us building a pacemaker? no. but that doesn’t mean we can automatically make the leap to assuming a set of tools known for being non-deterministic are good enough for our use case.

This seems irrelevant?

Either way hopefully you test the pacemaker code comprehensively!

That's pretty much the best case for LLM-generated code: comprehensive tests of the desired behaviour.
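
Something like this is what I mean (a rough pytest sketch; `safe_pacing_interval` and its limits are invented purely for illustration, not taken from any real device spec):

    import pytest

    def safe_pacing_interval(bpm):
        # hypothetical function under test: turn a requested heart rate
        # into a pacing interval (ms), rejecting unsafe rates outright
        if not 30 <= bpm <= 180:
            raise ValueError("bpm out of supported range")
        return 60_000 // bpm

    @pytest.mark.parametrize("bpm,expected_ms", [(60, 1000), (120, 500), (30, 2000), (180, 333)])
    def test_known_rates(bpm, expected_ms):
        assert safe_pacing_interval(bpm) == expected_ms

    @pytest.mark.parametrize("bpm", [0, 29, 181, -10])
    def test_unsafe_rates_rejected(bpm):
        with pytest.raises(ValueError):
            safe_pacing_interval(bpm)
The tests pin down the behaviour you care about, regardless of whether a human or an LLM produced the implementation.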


> REST (Representational State Transfer) is a software architectural style

italics mine

https://en.wikipedia.org/wiki/REST

also REST is less about communicating, more about the high level user interface and the underlying implementations to arrive at that (although one could argue that’s a form of communicating).

the style does detail a series of constraints. but it’s not really a formal standard, which can get pretty low level.

standards often include keywords like MUST, SHOULD and MAY to indicate what is required or optional; or they can be listed as a table of entries, as in ASCII

https://en.wikipedia.org/wiki/ASCII

dictionary definition of a standard:

> standard (noun): An acknowledged measure of comparison for quantitative or qualitative value; a criterion

note that one synonym is “ideal” — fully implementing a standard is not necessary. the OAuth standard isn’t usually fully implemented by most OAuth providers, as an example.

> The Model Context Protocol (MCP) is an open standard and open-source framework

again, italics mine.

https://en.wikipedia.org/wiki/Model_Context_Protocol

MCP, the technology/framework, is like Django REST framework. it’s an implementation of what the authors think is a good way to get to RESTful web APIs.

MCP, the standard, is closer to REST, but it’s more like someone sat down with a pen and paper and wrote a standards document for REST.
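
to make the “standard” side concrete: MCP messages ride on JSON-RPC 2.0, so a client asking a server what tools it exposes looks roughly like this (sketched as python dicts; the method names are as i remember them from the published spec, and “search_docs” is a made-up tool name):

    # JSON-RPC 2.0 envelope asking an MCP server to list its tools
    list_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",
    }

    # and roughly the shape of invoking one of those tools
    call_request = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "search_docs", "arguments": {"query": "backup retention"}},
    }
the standards document pins down that kind of message shape; the framework/SDKs are one implementation of it.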

They aren’t the same, but they have some similarities in their goals, albeit focussed on separate domains, i.e. designing an interface for interoperability and navigation/usage… which is probably what you were really asking (but using the word protocol waaaaaaay too many times).


Thanks, and call me wrong, but I think "Protocol" in MCP is somewhat misused. Sure, it is a protocol in some sense, because it commits to something, but not in the technical sense. MCI (Model Context Interface) would probably be a better name?

I agree that interface would be a better name than protocol, but Model Context Integration/Integrator would be even better as that is its core intent: to integrate context into the model. Alternatively, Universal Model Context Interface (or Integrator) would be an even better name imo, as that actually explains what it intends to do/be used for, whereas MCP is rather ambiguous/nebulous/inaccurate on the face of it, as previously established further up-thread.

That said, I think as the above user points out, part of the friction with the name is that MCP is two parts, a framework and a standard. So with that in mind, I'd assert that it should be redefined as Model Context Interface Standard, and Model Context Interface Framework (or Integration or whatever other word the community best feels suits it in place of Protocol).

Ultimately though, I think that ship has sailed thanks to momentum and mindshare, unless such a "rebranding" would coincide with a 2.0 update to MCP (or whatever we're calling it) or some such functional change in that vein to coincide with it. Rebranding it for "clarity's sake" when the industry is already quite familiar with what it is likely wouldn't gain much traction.


Wow, this is great. Calling it UMCI would have saved me a lot of confusion in the first place. But yeah I think the ship has sailed and it shows that a lot of things there were cobbled together in a hurry maybe.

> you're the worst sinner if you start engaging in e.g. collective action for social change instead of individual self-centredness.

erm... did you mean the e.g. examples to be the other way around?


No. 12 steps is very self-centred and while they don't say it's forbidden to join a union or prisoner's association, start a book club, work at a mutual aid centre or whatever and tell everyone 'hey I'm partially here because I need new friends and help to keep sober', it's also not something embedded in these types of programs.

Commonly the 'mentoring' in 12 steps is also based on you contacting your mentor, rather than building a network of people that check up on you whether you realise it's a good idea or not. This varies between groups, however, it's not as consistent as some of the other individualist traits they tend to have.

Edit: Personally I suspect this partially explains why 12 steps has a pretty bad success rate.


They are being sarcastic but their point is taken: it's a punishment-based system focused on the individual and their moral failings, rather than a collective, solution-based system.


Plenty of people go out the door after X years then go back in worse states than when they first showed up. Once they pick up a drink/drug/whatever, they're off and running again. 10 - 12 are "maintenance" to stave that sort of thing off. There's no end to working 10 - 12. They're a daily practice of continued growth. There's no end point (recovered), recovery continues on (recovering).

It's also a practice to keep everyone on the same level. Everyone is an alcoholic -- otherwise it'd just be a bunch of old farts telling new guys what to do (then hardly anyone would come back).


i personally prefer this kind of version — if i want to do the maths to work out tweaks i can, but i’m not forced to do maths in my head to know/tweak the end value

    // a total of 
    // - backup interval = 24
    // - approx backup duration = 2
    // - “wiggle room” = 2
    ageAlertThresholdHours = 28
yes lazy devs are lazy and won’t want to or just won’t update the comments (be pedantic in review :shrug:). it’s all trading one thing off with another at the end of the day.

edit — also your version forces me to read horizontally rather than vertically, which takes longer ime.

sorry, i’ve basically done an unprompted code review. i feel like a bit of a dick now.


    const int backupIntervalHours = 24;
    const int approxBackupDurationHours = 2;
    const int wiggleRoomHours = 2;
    const int ageAlertThresholdHours = backupIntervalHours + approxBackupDurationHours + wiggleRoomHours;

    static_assert(28 == ageAlertThresholdHours);
It's a shame more languages don't have static asserts... faking it with mismatched dimensions of array literal/duplicate keys in map literals is way too ugly and distracting from the intent.


Mmm...

    ageAlertThresholdHours = 24 + // backup interval
                              2 + // approx backup duration
                              2;  // "wiggle room"
No static assert needed, no need to pre-compute the total the first time, and no need for identifiers like `approxBackupDurationHours`, the cognitive overhead of worrying about colliding with other stuff that's in scope, or the superfluous/verbose variable declaration preamble.


I'm a believer in restricting the scope of definitions as much as possible, and I like programming languages that allow creating local bindings when defining another.

For example:

    local
        val backupIntervalHours = 24
        val approxBackupDurationHours = 2
        val wiggleRoomHours = 2
    in
    val ageAlertThresholdHours = backupIntervalHours + approxBackupDurationHours + wiggleRoomHours
    end
Then it's easier to document in code what components a constant is composed of, without introducing unnecessary bindings in the scope of the relevant variable. Sure, constants are just data, but the first question that pops into my head when seeing something in unfamiliar code is "What is the purpose of this?", and the smaller the scope, the faster it can be discarded.


Mentally discarding a name still takes some amount of effort, even if local.

I often write things the way you have done it, for the simple reason that, when writing the code, maybe I feel that I might have more than one use for the constant, and I'm used to thinking algebraically.

Except that I might make them global, at the top of a module. Why? Because they encode assumptions that might be useful to know at a glance.

And I probably wouldn't go back and remove the constants once they were named.

But I also have no problem with unnamed but commented constants like the ones in the comment you responded to.


> sorry, i’ve basically done an unprompted code review. i feel like a bit of a dick now.

That's all fine.

Just note that this was one of the easiest examples I could find. For example, for reasons out of my control, the individual network configuration on a Linux host is positively nuts. The decision whether to deploy routes, static DNS servers and such depends on 3-5 facts about the datacenter and the provider it's on.

In such a case, it is more maintainable to separate the facts about the provider, or the thing we are looking at (e.g. "Does this provider allow us to configure routes in their DHCP server?"), from the computation/decision making ("Can the system rely on the routes from the DHCP servers?"), and all of that from the actual action based on the decision ("Deploy routes statically if the DHCP-provided routes are not correct").
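
Roughly the shape I mean, sketched in Python (the provider facts and names are invented, purely to show the separation):

    # facts: plain data about the provider / datacenter, no logic in here
    provider = {
        "name": "example-dc",
        "dhcp_supplies_routes": False,
        "static_dns": ["192.0.2.53"],
    }

    # decision: a pure function over the facts, easy to test in isolation
    def needs_static_routes(facts):
        return not facts["dhcp_supplies_routes"]

    # action: the only part that actually touches the host
    def deploy_static_routes():
        print("writing static route configuration")  # stand-in for the real work

    def configure_host(facts):
        if needs_static_routes(facts):
            deploy_static_routes()
Each layer can then change (a new provider, a new rule, a new deployment mechanism) without dragging the other two along with it.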


ime summaries/details of the “business what” are the most important part of “why the code exists/is written this way” — it’s the overarching problem that section of code aims to solve / needs to work around!
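
e.g. something like this (the rule and the numbers are completely made up, just to show the shape):

    # business rule (finance team decision): invoices under £5 are written off
    # rather than chased, because collection costs more than the debt.
    # do not "simplify" this away without checking with finance first.
    WRITE_OFF_THRESHOLD_GBP = 5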

they can also be a good defence against newer seniors on the team refactoring willy nilly — can’t argue with business rules, but can argue over uncle bob. ;)


“responsibility” isn’t a boolean, at least in this human’s experience.

there are different degrees of responsibility (and accountability) for everyone involved. some are smaller, some are larger. but everyone shares some responsibility, even if it is infinitesimally small.


Would you say AI researchers involved in LLMs today are as responsible for how AI is being deployed as the developers/engineers who initially worked on TCP and HTTP are for the state of the internet and web today?

I don't have any good answer myself, but eager to hear what others think.


it’s not for me to judge someone else’s degree of responsibility really, that’s up to each individual to do for themselves.


A quick search shows me (Disclosure: I think it is the duck.ai search thing rather than any article)

> TCP and HTTP protocols were primarily developed with funding and support from government agencies, particularly the U.S. Department of Defense and organizations like ARPA, rather than by non-profit entities. These protocols were created to facilitate communication across different computer networks

So um... yea?


That's to say they are or aren't responsible for what their technology is being used for?

So say the people who specified, implemented and deployed TCP and HTTP, should they be held responsible for aiding transmission of child pornography across international borders, for example?


No, sorry; if you meant whether they should be liable, I presume not.

I was just pointing out that information because I had thought that HTTP was created by non-profits or similar, but it was HTML that was created at CERN.

That being said, coming to the point, I think that no, this shouldn't be the case for the people who specified TCP/HTTP.

But also, I feel like AI researchers and the people who specified TCP are in different categories, because AI researchers work directly for AI companies whose products are then used; so, in a way, their direct employer is facilitating that, partially thanks to their help.

On the other hand, the people who specified open standards have no relation like that to the AI company model, perhaps.

I am not sure; there is definitely nuance, but I would consider AI researchers to be more responsible than the people who created the specification of TCP/HTTP, as an example.

