Of course HTML is a programming language. It's one of the languages I use every day to program with. I'm not sure what the definition of a programming language would be beyond that.
Do you mean "Turing-complete" language? Or maybe "procedural programming language"? I agree HTML isn't either of those, but those aren't the be-all and end-all of programming now, are they?
I, and most of us, mean a language in which one can express a computer program, i.e. a set of instructions for a computer to execute. You don't execute an HTML file, you display it, render it. You can't implement fizz buzz in HTML; at best, you mark up its output. With HTML you don't instruct, you describe. You instruct what to do with JavaScript, or Python, or whatever programming language you use client- or server-side.
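To make that distinction concrete, here's fizz buzz as a set of instructions, in TypeScript (purely an illustrative sketch); HTML can only describe the finished output:

    // Fizz buzz as instructions: iteration, conditionals, computation.
    for (let i = 1; i <= 15; i++) {
      if (i % 15 === 0) console.log("FizzBuzz");
      else if (i % 3 === 0) console.log("Fizz");
      else if (i % 5 === 0) console.log("Buzz");
      else console.log(String(i));
    }
    // The closest HTML gets is marking up output some program already computed:
    //   <ol><li>1</li><li>2</li><li>Fizz</li> ... </ol>
    // That markup computes nothing; it only describes.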
A programming language doesn't need to be procedural; it can be functional, or use another computationally equivalent paradigm. I'm not quite sure it needs to be Turing-complete, but possibly.
A programming language lets you tell some processor, which provides a set of computation primitives, what to do with the memory cells at your disposal; in general it also lets you deal with input and output.
If you consider any language you program with to be a programming language, then CSS, JSON, YAML, XML, markdown (that's what your readme is likely written in) and even English (the language of your specs, bug reports, notes and drafts, comments, and possibly of the songs you listen to while programming) or UML would need to be programming languages too. That's not quite useful: "program with" is too broad a criterion and would make the "programming" qualifier largely useless.
If we're doing "features": password fields with no option to view the plaintext value? I use long passwords, and if I'm in a safe place, I would much rather see what I'm typing and correct any typos I make along the way than have to retype the same password multiple times with long delays between each attempt.
Also, once a day my Touch ID stops working and I need to log in with my password again. That's fine, the passwordless access expires after 24 hours or so, fair enough. But I turn my laptop on fresh every morning and have to put my password in then as well. If I do that, I at least expect to be able to use my fingerprint until the end of the day.
The wombo combo is a field that doesn't allow pasting (???) plus an app that forgets where you are. Since you can't paste, you type the first few characters, dismiss the app to look at what the field should be, come back and boom: it's cleared out or, worse, you're back on the home page. For some reason banks LOVE this.
I had a similar issue trying to create an Apple TV account. I already had an Apple account that I was using on my work laptop (first mistake - I should have created a work account there instead), and for 2FA, I needed to wait for a code to pop up on that laptop. It never came. There was an email alternative, but that also didn't work properly (maybe only on certain devices, IIRC?). Apparently in the settings you can request a 2FA code, though, so I did that... but that only had five digits, whereas I needed to give six for the code to work. Eventually I figured out that Apple had just forgotten to zero-pad the 2FA code out to six digits, so I needed to add a leading zero to make things work.
The worst part of this is that now my Apple TV account is linked to a laptop that I don't always have on me. And even if I did have it on me, I don't want to get a laptop out and turn it on just to do 2FA. I already have a TOTP app on my phone, just let me put everything in there and leave me be.
My experience with MacOS is generally that it's about as buggy as my home Linux setup. That's partly a testament to how solid Linux can be these days, but at the same time, it feels pretty damning considering only one of these operating systems is free (in any sense of the word). And that's not including stuff like the configurability of the whole thing.
Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.
Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone's trying to write the perfect LinkedIn post but is slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).
I bought a second-hand Fairphone, and I'm very happy with it, except that my wife, a colleague of mine, and some friends of ours now also have Fairphones, so when one buzzes we all instinctively check our pockets because they all sound the same...
I also bought headphones from the same company, and while they're probably not the best for audio quality, it was great being able to repair them when the headband broke. Generally, I'm a very happy Fairphone customer.
> when one buzzes we all instinctively check our pockets because they all sound the same
Isn't that the same for every brand? I have a friend who worked in cybersecurity in a certain phone company and was getting very stressed whenever my phone, which happened to be from the same brand, was ringing :D
I guess one can change the default sound; isn't that the case with Fairphones?
I have a Samsung Moto, and it has a very default ringtone. Not really a tone, since it says "Hello, Moto", which is embarrassing, but I haven't made the effort to switch tones. At any rate, while I will be confused if someone in close proximity gets a call on their Moto, in my experience they don't have to be very far from me before I instinctively realize that the sound is far enough away that it can't be my phone, although it irritates me nonetheless.
And I've been seated eating with people who had the same phone and realized no, it must be their phone (although I felt a strong urge to check), because my ears are able to determine the direction of a sound.
I'm also old and keep getting told I'm going deaf, so my question is: are people really not able to tell it's not their phone, or are they just not thinking it through before checking?
Samsung Moto? Those are two different companies with very different phones. I'm surprised that such a mutant exists; it reads to me like "car (with square wheels)".
Moto is the only big brand I ever consider for a phone, while Samsung has never been so much as a consideration. Moto has had (though this is changing) a bit of freedom: enough to tweak it into resembling a pure Android experience. Samsung is incorrigibly infested, and if they ever start giving phones to prisoners, they'll be Samsungs.
Just in case you wondered, and even if you didn't:
I admire ignorance of smartphones and consider it a virtue. I obtained my first in 2018 after years of resistance. But driving a semi and not being the best with maps and logistics, I finally capitulated.
And back then, although CyanogenMod was gone, they weren't too bad. 2019 changed a lot, with autonomous, respawning, immutable "services", and things have regressed severely since. Hence my visiting this post for Fairphone.
So take pride in your purity. It only gets worse the more you know.
It's less the sound, and more the buzz when it's on vibrate. I've never found a way of changing that, unfortunately. It's probably true for other brands too, but I've never really had a phone that the people around me also have, whereas now I'm in a (very small) bubble that seems to be happily converging on Fairphones...
I think this is connected to the overlap and offset that are used later to account for complex or symmetrical letter shapes. If the author had just split the grid, those effects would have been harder to achieve.
The thing is that you can still have high-level abstractions without them needing to be as slow as React. React does a slow thing by default (rerendering every child component whenever state changes, so every component in the UI if top-level state is changing), and then requires careful optimisation to correct for that decision.
But you can also just... update the right DOM element directly, whenever a state changes that would cause it to be updated. You don't need to create mountains of VDOM only to throw it away, nor do you need to rerender entire components.
This is how SolidJS, Svelte, and more recently Vue work. They use signals and effects to track which state is used in which parts of the application, and update only the necessary parts of the DOM. The result is significantly more performant, especially for deeply nested component trees, because you're simply doing far less work in total. But the kicker is that these frameworks aren't any less high-level or easy to use. SolidJS looks basically the same as React, just with some of the intermediate computations wrapped in functions. Vue is one of the most popular frameworks around. And yet all three perform at a level comparable to an application built with optimal vanilla JavaScript.
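For a feel of the mechanism, here's a hand-rolled toy signal in TypeScript. It's only a sketch of the general technique, not SolidJS's, Svelte's, or Vue's actual implementation:

    // The effect currently being registered, if any.
    let currentEffect: (() => void) | null = null;

    // A signal holds a value plus the set of effects that read it.
    function createSignal<T>(value: T): [() => T, (next: T) => void] {
      const subscribers = new Set<() => void>();
      const read = () => {
        if (currentEffect) subscribers.add(currentEffect); // dependency tracking
        return value;
      };
      const write = (next: T) => {
        value = next;
        subscribers.forEach((fn) => fn()); // rerun only the effects that read this signal
      };
      return [read, write];
    }

    function createEffect(fn: () => void) {
      currentEffect = fn;
      fn(); // the first run registers the effect with every signal it reads
      currentEffect = null;
    }

    // Usage: only this one text node is touched when the count changes.
    const [count, setCount] = createSignal(0);
    const label = document.createElement("span");
    document.body.append(label);
    createEffect(() => { label.textContent = `Count: ${count()}`; });
    setCount(1); // writes the DOM directly; no VDOM built, no component rerendered

The point is that the framework knows exactly which DOM nodes depend on which state, so there's nothing to diff.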
Note that it's not clear that any of the JustHTML ports were actually ports per se, as in the end they all ended up with very different implementations. Instead, it might just be that an LLM generated roughly the same library several different times.
I had a course on natural language processing with Prolog, and the first third of the exam was just evaluating Prolog expressions and figuring out what syntax errors had been made. This of course took so long that everyone spent at least two thirds of the time on that one portion...
It was a weird course, though. Because we spent so long learning Prolog, the second half of the course was really rushed - lots of learning about English grammar and syntax trees, and how you could model them in different ways, and then the last lecture was just "oh, by the way, here are all the ways this doesn't work and is a complete dead end - your exam is on the 14th".
IIRC there was a part two to the course, but I think it clashed with something I was more interested in so I never took it. It was cool to learn Prolog, but I wish it had been a whole course on just Prolog and actual present-day use-cases, as opposed to this weird half-and-half course about an approach to NLP that even back then wasn't being pursued with much enthusiasm.
According to [1], the most important factor for the power consumption of code is how long the code takes to run. Code that spreads over multiple cores is generally more power efficient than code that runs sequentially, because the power consumption of multiple cores grows less than linearly (that is, it requires less than twice as much power to run two cores as it does one core).
Therefore, if parallelising code reduces its runtime, it is almost always more energy efficient to do so. Obviously if this matters in a particular context (e.g. embedded devices), it's worth measuring it there, but I suspect it's true more often than not.
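As a back-of-the-envelope illustration in TypeScript (all figures below are made up for the example, not taken from [1]):

    // Hypothetical figures: a chip drawing 10 W with one core busy and
    // 16 W with two (less than 2x, per the sublinear power claim).
    const singleCoreWatts = 10;
    const dualCoreWatts = 16;
    const sequentialSeconds = 10;  // runtime on one core
    const parallelSeconds = 5.5;   // runtime on two cores (imperfect speedup)

    // Energy = power * time.
    const sequentialJoules = singleCoreWatts * sequentialSeconds; // 100 J
    const parallelJoules = dualCoreWatts * parallelSeconds;       //  88 J

    console.log({ sequentialJoules, parallelJoules });
    // The parallel run draws more power but finishes sooner, so it uses less energy.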
>Therefore if parallelising code reduces the runtime of that code, it is almost always more energy efficient to do so
Only if it leads to better utilisation. But in the scenario that the parent comment suggests, it does not lead to better utilisation as all cores are constantly busy processing requests.
Throughput as well as CPU time across cores remains largely the same regardless of whether or not you parallelise individual programs/requests.
That's true, which is why I added the caveat that this is only true if parallelising reduces the overall runtime - if you can get in more requests per second through parallelisation. And the flip side of that is that if you're able to perfectly utilise all cores then you're already running everything in parallel.
That said, I suspect it's a rare case where you really do have perfect core utilisation.