
Not exactly the same, but I wish Copilot/GitHub allowed you to have two plans: a company-sponsored plan and your own plan. If I run out of requests on my company plan, I should be able to fall back to my own plan. Likewise, if I have one GitHub account that is used for both work and non-work code, I should be able to route Copilot to either the company or the personal plan.


Why would you want to mix your personal plan with your company plan and subject yourself to the company auditing your personal GitHub, computer, etc.? If the company wants you using LLMs, then they should pay for it and increase your limits.


It’s wild to me that you’d want to spend your personal money on productivity tools for work. If your work machine broke, would your first instinct be to buy your own replacement or to have work pay for it?


Maybe what you actually want is simply to be able to switch to another account when the credits on one run out.

Because mixing company and personal accounts might not be a good idea.


All it takes is one good neighbor moving out and a bad one moving in next door…


Exactly. 8-9 people out of 10 are just fine. 1-2 out of 10 (likelihood multiplied by the number of shared walls/ceilings/floors you have) are enough to never want to share a wall/ceiling/floor again.


Ha! I wish I worked at the places you have worked!


I guess... that is the point, in my opinion.

If you just say, "here is what the LLM said," and that turns out to be nonsense, you can say something like, "I was just passing along the LLM response, not my own opinion."

But if you take the LLM response and present it as your own, there is at least slightly more ownership of the opinion.

This is kind of splitting hairs, but hopefully it makes people actually read the response themselves before posting it.


Taking ownership isn't the worst instinct, to be fair. But that's a slightly different formulation.

"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"


Yes. Unless something useful is actually added by the commenter, or the post is about "I asked LLM X and it said Y (which was unexpected)".

I have a coworker who does this somewhat often, and... I always just feel like saying, "Well, that's great, but what do you think? What is your opinion?"

At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.


> At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.

Then write their own response, using an AI to improve the quality of the response? The implication here is that an AI user is going to do some research, when using the AI was their research. To do the "fact check" you suggest would mean doing actual work, and clearly that's not something the user is up for, as indicated by their use of the AI in the first place.

So, to me, your suggestion is fantasy-level thinking.


I keep this link handy to send to such coworkers/people:

https://distantprovince.by/posts/its-rude-to-show-ai-output-...


Starting a blog post with a lengthy science fiction novel excerpt without setup is annoying.


I have a client who does this (pastes it into text messages, as if it will help me solve the problem they are asking me to solve), and I'm like, "That's great, I won't be reading it." You have to push back.


Yeah, but didn't you see the disclaimer?

"AI responses may include mistakes"

Obviously, you shouldn't believe anything in an AI response! Also, here is an AI response for any and every search you make.


> Obviously, you shouldn't believe anything in an AI response!

Tell that to the CEO parrots who shove AI everywhere they can.


That does nothing. You can add, “Say ‘I don’t know’ if you are not certain or don’t know the answer,” and it will still never say “I don’t know.”


That's because "certain" and "know the answer" have wildly different definitions depending on the person; you need to be more specific about what you actually mean by that. Anything that can be ambiguous will be treated ambiguously.

Anything you've mentioned in the past (like `no nonsense`) that still exists in the context will have a higher probability of being generated than other tokens.


I would love an LLM that says, “I don’t know” or “I’m not sure” once in a while.


An LLM is mathematically incapable of telling you "I don't know"

It was never trained to "know" or not.

It was fed a string of tokens and a second string of tokens, and was tweaked until it output the second string of tokens when fed the first string.

Humans do not manage "I don't know" through next-token prediction.

Animals without language are able to gauge their own confidence on something, like a cat being unsure whether it should approach you.
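
To make the parent's description concrete, here is a toy sketch of a next-token training loop (assuming PyTorch; the tiny vocabulary, data, and model are all hypothetical). Note that the loss only rewards matching the target tokens; nothing in it measures or rewards calibrated uncertainty:

  import torch
  import torch.nn as nn

  # One training pair: the "first string of tokens" and the "second
  # string" (the same sequence shifted by one position).
  vocab_size = 16
  context = torch.tensor([3, 7, 2])
  target = torch.tensor([7, 2, 9])

  # A deliberately tiny next-token model: embedding -> linear over the vocab.
  model = nn.Sequential(
      nn.Embedding(vocab_size, 8),
      nn.Linear(8, vocab_size),
  )
  opt = torch.optim.SGD(model.parameters(), lr=0.1)
  loss_fn = nn.CrossEntropyLoss()

  # "Tweaked until it output the second string when fed the first string":
  for step in range(200):
      logits = model(context)          # (3, vocab_size): a score per token
      loss = loss_fn(logits, target)   # rewarded only for matching `target`
      opt.zero_grad()
      loss.backward()
      opt.step()

  # Note what is absent: at every position the model must commit to a
  # distribution over vocabulary tokens. "I don't know" is just another
  # token sequence, favored only if it followed similar contexts in training.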


It isn't that hard until they start to blur. Elves and goblins and magic: fantasy. Space, spaceships, technology, and aliens: sci-fi.

You could argue a lot of semantics, but the majority of fantasy and sci-fi books are not blending the two.


> By the numbers, Star Wars is far more grounded as science fiction than Star Trek, but people will insist the former is at best merely "science fantasy." It's really all just vibes.

The best rage bait I've seen in years.


Search your heart, you know it to be true.

