For me, the main benefit of knowing and primarily using Mermaid is that it integrates seamlessly with Markdown in Azure DevOps and GitHub. No need for a text-to-image build step or similar.
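For example, GitHub renders a fenced code block tagged `mermaid` as an inline diagram (Azure DevOps wikis support the same idea, though if I remember right they use their own `::: mermaid` fence). A minimal sketch:

```mermaid
flowchart LR
    A[Write Markdown] --> B{mermaid fence?}
    B -- yes --> C[Rendered inline as a diagram]
    B -- no --> D[Shown as a plain code block]
```

The diagram source lives in the same file as the prose, so it gets versioned, diffed, and reviewed like any other text.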
No, it was a thing back in the '90s (and likely earlier).
The goal was to write a program that was "impossible to read". Some of the winners are seriously creative. And it's very much just smart kids and their machines...
I remember one back in the day that wasn't obfuscated at all. It was clearly a simple utility. Except that it didn't do what you thought it did; it did something completely different. (Alas, I can't remember the details...)
There are a number of blog posts online and StackOverflow questions explaining IOCCC entries, and they generally seem to be built/obfuscated by hand. It's an art and it's far from trivial, which is one of the reasons why the contest exists :)
That's definitely a thing. Additionally, humans are surprisingly friendly in all the wrong ways when it comes to physical security (tailgating, "forgotten ID/credentials", etc.).
Can’t help but think of the 2002 Ted Chiang novelette “Liking What You See” and its tech “Calliagnosia,” a medical procedure that eliminates a person’s ability to perceive beauty. Excellent read (as are almost all his stories, imho).
Don't know about that, but we're incredibly sensitive to even minor changes to faces:
I saw a clip not too long ago of a face digitally transitioning between male and female; the changes themselves were incredibly subtle, and yet the result was obvious and undeniable.
There's also the uncanny valley: faces that are almost human yet very slightly off somehow come across as incredibly creepy.
Experiments have shown that we perceive our own face as more attractive than it really is. When presented with a series of morphed pictures of their own face, ranging from less attractive to more attractive, people tend not to pick the unmodified picture as the real one, but one morphed slightly more toward attractive (where “attractive” mostly means “symmetric”, IIRC).
"We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.
Users in this alpha will receive an email with instructions and a message in their mobile app. We'll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall. As previously mentioned, video and screen sharing capabilities will launch at a later date.
Since we first demoed advanced Voice Mode, we’ve been working to reinforce the safety and quality of voice conversations as we prepare to bring this frontier technology to millions of people.
We tested GPT-4o's voice capabilities with 100+ external red teamers across 45 languages. To protect people's privacy, we've trained the model to only speak in the four preset voices, and we built systems to block outputs that differ from those voices. We've also implemented guardrails to block requests for violent or copyrighted content.
Learnings from this alpha will help us make the Advanced Voice experience safer and more enjoyable for everyone. We plan to share a detailed report on GPT-4o’s capabilities, limitations, and safety evaluations in early August."