To me "AI" means it learns by itself. Including the decision what to learn, and how. Having done quite a bit of statistics courses over the last few years (albeit with a focus on medicine/biology) and some of free the basic machine learning courses, AFAICS that is not the case, what and how something is learned is all decided and done by the human(s) in front of the computer, no? So, I don't see much "intelligence" - in the machine. Lots of it in those humans, of course.
But with the application of those disciplines you can build machines that "learn." The "by themselves" bit is AGI, which is a specific subset of AI and sits more on the research side of things, not in a BS curriculum.
I think the real issue is that the media has confused people as to what "AI" practically means from a compsci perspective.
Your definition doesn't match the accepted one. Technically, a hard-coded, rule-based decision algorithm is a form of very basic AI. You seem to be confusing ML and AI: ML is the subset of AI that focuses on training a complex model from data (see the sketch below).
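To make that concrete, here's a minimal sketch in Python. The triage rules are hypothetical, made up purely for illustration, but the shape is what classic "good old-fashioned AI" looks like: a hand-written decision procedure where nothing is learned from data.

```python
# A minimal sketch of rule-based "AI": a hard-coded decision procedure.
# Nothing is learned from data; every threshold below was chosen by the
# human in front of the computer. (Hypothetical triage rules, for
# illustration only.)

def triage(temp_c: float, heart_rate: int) -> str:
    """Classify a patient's urgency with fixed, hand-written rules."""
    if temp_c >= 39.0 or heart_rate >= 120:
        return "urgent"
    if temp_c >= 38.0:
        return "monitor"
    return "routine"

# An ML system would instead *fit* thresholds like these from labelled
# examples, rather than having a human hard-code them.
print(triage(39.5, 80))   # -> "urgent"
print(triage(37.2, 70))   # -> "routine"
```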
No, I'm not confused; as I said, I took the courses. I know what ML is. I'm just saying that I don't see this as "AI" at all, and why. I don't really care that someone decided to define what is possible now as "AI" for whatever reason, because I don't agree and don't find that reasonable. It's not like those "definitions" are laws either; ask around, even among professionals, and you'll get as many different definitions as you want. Therefore I prefer a "common sense expectation/interpretation" approach, and "common sense" here to me means coming from above (where we want to go), not from below (what we have achieved so far).