I'm big on user-first. If that dialog had sirens blaring, a GIF, and ten arrows pointing at "THIS MAY EXECUTE CODE" and people still didn't get the idea, I'd say it needs fixing. It can't be said that they didn't try, or that they hid it, though.
So at the end of the day it's still unclear whether it executes code or not? Just say "this WILL execute code" and specify exactly which code it tries to execute by default.
I don't know about you people, but I always read this as "it may execute code if you run a build step".
Not "I will execute autorun.inf like an idiot."
And NO. I do not want my IDE to execute code when I open files for editing. I want it to execute code only as part of an explicit step that I initiate.
I don't mean this in a bad way, but have you visited other countries? I'm Spanish but lived for about half a year in Krakow, and the difference is so stark I can't imagine being skeptical. In winter the air smells burned. Fog is a real phenomenon, sure, but what hangs over Krakow for most of the winter is not fog. It's just smoke.
I didn't know about any of this when I first travelled there; in fact, when my boss at the time recommended I get a mask, I thought he was being paranoid. He absolutely was not.
But it wouldn't be for those who "speak exclusively English"; rather, for those who speak English. Not only that, but it's also common to have the system language set to English even if one's own language is different.
There are about 1.5B English speakers on the planet.
Let's indeed limit the use case to the system language, say, of a mobile phone.
You pull up a map and start navigation. All the street names are in the local language, and no, transliterating the local names into the English alphabet does not make them understandable when spoken by TTS. Not to mention localised foreign names, which are then completely mangled by transliterating them into English.
You pull up a browser and open a news article in your local language to read during your commute. You now have to reach for a translation model first before passing the text to the English-only TTS software.
You're driving and one of your friends messages you on Signal. Your phone UI is in English, so you get a notification (interrupting your Spotify) saying 'Signal message', followed by 5 minutes of gibberish.
But let's say you have a TTS model that supports your local language natively. Well, since those '1.5B English speakers' apparently exist on the planet, many texts in other languages include English or Latin names and words. Now you have the opposite issue: your TTS software needs to switch to English to pronounce these correctly...
And mind you, these are just very simple use cases for TTS. If you delve into the use cases of people with limited sight, who experience the entire Internet and all mobile and desktop applications (often with poor localisation) via TTS, you see how monolingual TTS is mostly useless and would be swapped for a robotic old-school TTS in a flash...
> Not only that but it's also common to have system language set to English
Ask a German whether their system language is English. Ask a French person. I can go on.
If you don't speak the local language, you can't decode spoken local-language names anyway. Your speech subsystems can't lock onto and sync with an audio track in a language you don't speak, let alone transliterate or pronounce it.
Multilingual doesn't mean language-agnostic. We humans are always monolingual, just multi-language hot-swappable if trained. It's more like doing `make; make install` for docker, after which you can attach to and detach from alternate environments in the terminal to do things or take notes in and out.
People sometimes picture multilingualism as owning a single joined-together super-language in the brain. That usually doesn't happen. Attempting this, especially at a young age, could leave a person in a "semi-lingual" or "double-limited" state where they are not so fluent or articulate in any particular language.
And so, criticizing someone for not devoting significant resources to an omnilingual TTS doesn't make much sense.
> If you don't speak the local language anyway, you can't decode pronounced spoken local language names anyway
This is plainly not true.
> Multilingual doesn't mean language agnostic. We humans are always monolingual, just multi-language hot-swappable if trained
This and the analogy make no sense to me. Mind you I am trilingual.
I also did not imply that the model itself needs to be multilingual. I implied that the software that uses the model to generate speech must be multilingual and support language change detection and switching mid-sentence.
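To make the "language change detection and switching mid-sentence" requirement concrete, here is a minimal sketch of the segmentation step such software would need. It is my own illustration, not any real TTS library's API: it uses Unicode script as a crude stand-in for proper language identification and splits mixed text into runs, each of which could then be handed to a voice for that script. The function names `script_of` and `segment_by_script` are hypothetical.

```python
import unicodedata

def script_of(ch: str) -> str:
    """Crude script bucket for one character, via its Unicode name.
    A real system would use actual language identification instead."""
    if not ch.isalpha():
        return "NEUTRAL"  # spaces, digits, punctuation attach to the current run
    name = unicodedata.name(ch, "")
    for script in ("LATIN", "CYRILLIC", "GREEK", "ARABIC", "HANGUL", "CJK"):
        if script in name:
            return script
    return "OTHER"

def segment_by_script(text: str):
    """Split text into (script, run) chunks, so each run can be sent
    to a TTS voice appropriate for that script/language."""
    runs = []
    current_script = None
    current = []
    for ch in text:
        s = script_of(ch)
        if s != "NEUTRAL" and s != current_script:
            if current:
                runs.append((current_script, "".join(current)))
            current_script, current = s, [ch]
        else:
            current.append(ch)  # neutral chars and same-script chars extend the run
    if current:
        runs.append((current_script, "".join(current)))
    return runs

# A Cyrillic name embedded in a Latin-script sentence yields three runs:
# [('LATIN', 'Play '), ('CYRILLIC', 'Москва '), ('LATIN', 'by Dua Lipa')]
print(segment_by_script("Play Москва by Dua Lipa"))
```

Script detection alone would not separate, say, English from Spanish (both Latin script), which is exactly why real mid-sentence switching needs a proper language-ID model rather than this heuristic.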
Funnily enough, it would seem the movement stopped being about harassing women the moment the media stopped writing about it; advocates kept on going, criticizing the ideological push into videogames to this day. At the same time, by now both Brianna Wu and Anita Sarkeesian have been shown to be grifters who knew jack shit except how to play a crowd.
Tailwind's docs are also the source of, well, the docs. People browse them way less, and as a result Tailwind gets way less funding.
The problem is that Wikipedia should be set for life at this point, and they insist on rejecting that notion. There may be a future in which Wikipedia closes, and if that comes to pass it will be due to wanton disregard for people's goodwill.
Being critical of favorable actions towards a rival country shouldn't be divisive, and if it is, well, I don't think the problem is in the criticism.
Also, the link doesn't mention open source? From a Google search, he doesn't seem to care much for it.