Not exactly the same, but I wish Copilot/GitHub allowed you to have two plans: a company-sponsored plan and your own. If I run out of requests on my company plan, I should be able to fall back to my own plan.
Likewise, if I have one GitHub account that is used for both work and non-work code, I should be able to route Copilot to either a company or a personal plan.
Why would you want to mix your personal plan with your company plan and subject yourself to the company auditing your personal GitHub, computer, etc.? If the company wants you using LLMs, then they should pay for it and increase your limits.
It’s wild to me that you’d want to spend your personal money to use productivity tools for work. If your work machine broke would your first instinct be to buy your own replacement or to have work pay for it?
Exactly. 8-9 people out of 10 are just fine. But 1-2 out of 10 (likelihood multiplied by the number of shared walls/ceilings/floors you have) are enough to make you never want to share a wall/ceiling/floor again.
If you just say, "here is what the llm said," and that turns out to be nonsense, you can say something like, "I was just passing along the llm response, not my own opinion."
But if you take the llm response and present it as your own, there is at least slightly more ownership of the opinion.
This is kind of splitting hairs, but hopefully it makes people actually read the response themselves before posting it.
Taking ownership isn't the worst instinct, to be fair. But that's a slightly different formulation.
"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"
> At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.
Then write their own response using an AI to improve its quality? The implication here is that an AI user is going to do some research, when using the AI *was* their research. To do the "fact check" you suggest would mean doing actual work, and clearly that's not something the user is up for, as indicated by their use of the AI in the first place.
So, to me, your suggestion is fantasy-level thinking.
I have a client who does this — pastes it into text messages! as if it will help me solve the problem they are asking me to solve — and I'm like "that's great I won't be reading it". You have to push back.
That's because "certain" and "know the answer" have wildly different definitions depending on the person; you need to be more specific about what you actually mean by that. Anything that can be ambiguous will be treated ambiguously.
Anything that you've mentioned in the past (like `no nonsense`) that still exists in context will have a higher probability of being generated than other tokens.
> By the numbers, Star Wars is far more grounded as science fiction than Star Trek, but people will insist the former is at best merely "science fantasy." It's really all just vibes.