I've been a STEM-field student and professor, and I've published some quite pure math research and also some AI research. My Ph.D. dissertation was motivated by practice, e.g., by my time as Director of Operations Research at FedEx, and was an early case of what is now a major theme of the Department of Operations Research and Financial Engineering (ORFE) at Princeton. Since my Ph.D., I've made practical applications of math where the key was some relatively pure math research. Moreover, the crucial core of my startup, which is intended to be fully practical, is some applied math with some advanced pure math prerequisites.
So, my experience is that what I suggested is not a danger to pure research but a source of motivation and stimulation for a lot of it.
Having some students do some work with people from outside academics is crucial for their professional development. Of course the work would have graduate student and faculty supervision and be held to high quality. The students should welcome the business-world contacts.
A lot of the very best pure medical research happens in labs not far from wards with dying patients. In important senses, the fact that real patients are there, with their lives literally depending on the results of the research, helps both the patients and the research.
It is very much a fact of life, in applications and also in pure research, that good motivation is anywhere from helpful to essential. In particular, for younger researchers, finding good problems to work on is one of their most severe struggles. Well, in medicine, a young physician in a research-teaching hospital can see, does see, patients dying in the wards every day, and that exposure can help the researcher find good problems to work on.
For your
> Whenever it makes sense to ground research in practice, CMU professors generally do so by working with industry and govt collaborators.
"Whenever", quite commonly and generally and no exceptions? Amazing. I'm thrilled. Good for CMU. Since one of my Ph.D. dissertation advisors was long President at CMU, maybe he was in part responsible for this amazing, thrilling situation?
Color me skeptical: My long experience in and around research tells me that pure research needs much more contact with, and stimulation and motivation from, practice. Yes, some pure researchers have found really good pure research problems and directions, and they should continue on. But much more common is what I explained: using practical problems as motivation, stimulation, and justification for research that might become, and in my experience often does become, nicely general and pure. Or, if the research is all just routine, then pass the problem off to a ugrad for a class exercise, term paper, or senior honors paper. Otherwise, push forward for better results and encounter some real research problems.
Here is a big example: During WWII, G. Dantzig was working on military logistics, e.g., what to ship where, how, and when to aid the war effort. After the war, at RAND for, IIRC, the USAF, he continued and as a first cut invented linear programming. Around then, a special case of it, the transportation problem, the "translocation of masses," figured in a Nobel prize in economics for L. Kantorovich. So, linear programming was already making enough progress in pure research to yield a Nobel prize in economics. And there were some more Nobel prizes from linear programming and optimization.
To solve linear programming problems, Dantzig invented his simplex algorithm, basically a nice tweak on Gaussian elimination for systems of linear equations. Cute. Not very pure, but at one time rated as one of the most important pieces of engineering work of the 20th century. In practice it is nearly always shockingly fast, and it took some nice work decades later, in the relatively pure math of computational geometry (K. Borgwardt), to show why it is so fast.
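To make that concrete, here's a minimal sketch, with made-up numbers, of a tiny linear program solved in Python; SciPy's "highs-ds" method is a modern dual-simplex descendant of Dantzig's algorithm:

```python
# LP sketch (made-up numbers): maximize 3x + 2y subject to
#   x +  y <= 4
#   x + 3y <= 6
#   x, y >= 0.
# linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

c = [-3, -2]                       # negated objective coefficients
A_ub = [[1, 1], [1, 3]]            # left-hand sides of the <= rows
b_ub = [4, 6]                      # right-hand sides

# "highs-ds" selects the HiGHS dual-simplex solver, a modern
# descendant of Dantzig's simplex method.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs-ds")
print(res.x, -res.fun)             # optimum at (4, 0) with value 12
```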
Continuing on, soon enough it was observed that in practice the variables often had to be restricted to whole-number values. That is, the real, practical problems were often integer linear programming (ILP). IIRC, as a first cut, Dantzig expected that a tweak of his simplex algorithm would be able to handle that.
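And here's a sketch, again with made-up data, of why the integer restriction is not just a rounding afterthought: the LP relaxation's optimum can be fractional, and the true integer optimum can sit strictly below it. I'm using SciPy's milp purely as an illustration:

```python
# ILP sketch (made-up data): maximize x + y subject to 2x + 2y <= 3
# with x, y >= 0. The LP relaxation attains 1.5 at a fractional point,
# while the best integer solution attains only 1.0 -- rounding the LP
# answer is not enough, which is why a simple tweak of simplex
# doesn't settle ILP.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-1.0, -1.0])                       # milp minimizes, so negate
con = LinearConstraint([[2.0, 2.0]], ub=[3.0])   # 2x + 2y <= 3
nonneg = Bounds(0, np.inf)                       # x, y >= 0

lp = milp(c, constraints=con, bounds=nonneg, integrality=[0, 0])
ip = milp(c, constraints=con, bounds=nonneg, integrality=[1, 1])
print(-lp.fun, -ip.fun)                          # 1.5 vs 1.0
```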
Work on ILP continued, for decades. There were lots of important practical problems for motivation, e.g., network design at Bell Labs. That problem has remained important; e.g., it was the subject of an A. Goldman lecture at Johns Hopkins, given by MIT Dean of Science T. Magnanti. And there was progress on solutions.
ILP was taken seriously, e.g., by Princeton grad R. Gomory.
By then computer science had discovered the problem of sorting, saw that simple bubble sort ran in O(n^2) while heap sort ran in O(n log n) in both the worst case and on average, and that heap sort met the Gleason bound and was thus, in the big-O sense, the fastest possible algorithm for sorting just by comparing pairs of keys. So, this was progress in computational time complexity. Since heap sort also sorts in place, it was progress in computational space complexity as well.
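For the record, here's a minimal heapsort sketch using Python's standard-library heapq; the textbook in-place version uses a max-heap and O(1) extra space, while this sketch trades that for brevity:

```python
# Heapsort sketch: heapify is O(n) and each of the n pops is O(log n),
# so the whole sort is O(n log n), matching the comparison-sort lower
# bound. (The textbook in-place version also uses only O(1) extra
# space; this stdlib version copies the input for brevity.)
import heapq

def heapsort(items):
    heap = list(items)      # copy so the caller's list is untouched
    heapq.heapify(heap)     # O(n) bottom-up heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]
```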
So, with the practical successes and the struggles of ILP, the practical success of the simplex algorithm, and the specter of O(e^n) running time for ILP, there was the serious research question of what the fastest worst-case algorithm for ILP would be. This question was asked and explored at Bell Labs and resulted in the now famous
Michael R. Garey and David S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, 1979, ISBN 0-7167-1045-5.
So, from there, ILP is, yes, NP-complete.
So, now the Clay Mathematics Institute offers a prize of $1 million for the first solution of the P versus NP problem in computational complexity, generally considered one of the most important problems in both pure and applied math and computer science.
Lesson: Practical problems, taken seriously, can result in some of the most important problems in pure research, and some of the progress in pure research can help solve some practical problems. Motivation from pressing practical problems can help drive both pure and applied research.
In particular, the OP was about CMU, CS, and AI. From what I've seen and heard about AI, a lot of what is of interest now, and likely a big part of the CMU AI ugrad program, is "modern regression analysis." Maybe CS and AI need the "modern" there because otherwise they are open to accusations of reinventing, and pushing out a lot of hype about, some multivariate statistics that was quite mature as math 50+ years ago.
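E.g., the workhorse underneath is still ordinary least squares, math that was mature long ago; here's a minimal sketch with synthetic data:

```python
# Ordinary least squares, the decades-old core of "regression": choose
# beta to minimize ||X beta - y||^2. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 features
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
y = X @ beta_true + 0.1 * rng.normal(size=n)                # noisy response

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # classical OLS fit
print(beta_hat)                                   # close to beta_true
```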
If CMU CS and AI are willing to take regression that seriously, then also going for some of what I mentioned, e.g., convex programming and stochastic optimal control, should be regarded as much more worthy. Making stochastic optimal control more practical is one heck of a challenge, but some progress is possible, e.g., as now at the ORFE Department at Princeton.
And we should note that much of the AI interest in regression is based on the work of L. Breiman on Classification and Regression Trees (CART). Breiman was, IIRC, "an academic probabilist"; his text Probability (one of my favorites, e.g., for measurable selection) was based heavily on measure theory; and his work on CART started from trying to get fits and predictive models for complicated data from clinical medicine. So, here again, some pressing practical problems in practical medicine led Breiman to CART, which is now one of the main pillars of AI. Given that background, the CMU CS AI program should welcome the level of contact with real problems I described, without your concern about the death of pure research.
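For a concrete taste of CART, here's a minimal sketch on synthetic data standing in for the kind of messy clinical measurements that motivated Breiman; scikit-learn's tree module implements an optimized variant of CART:

```python
# CART sketch: fit a small regression tree to synthetic data with a
# step-shaped signal plus noise; the tree recovers the split at x0 = 5.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 2))       # two made-up predictors
y = np.where(X[:, 0] > 5, 2.0, -1.0) + 0.1 * rng.normal(size=300)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(tree.predict([[7.0, 1.0], [2.0, 1.0]]))   # roughly [2.0, -1.0]
```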
> Lesson: Practical problems, taken seriously, can result in some of the most important problems in pure research, and some of the progress in pure research can help solve some practical problems. Motivation from pressing practical problems can help drive both pure and applied research.
I'm pretty sure I explicitly agreed that this is often the case in my original post, so we must be talking past one another :)
What I'm arguing for is basically just academic freedom: the freedom of faculty and students to make choices about their own research agendas. As your extensive history demonstrates, THIS APPROACH WORKS! All of those people chose to engage with industry because it made sense for their research agendas!
More importantly, we can come up with an equally lengthy wall of text detailing accomplishments that would not have been possible without the freedom to work on things that industry isn't all hot and bothered about. E.g., neural nets until about 5 years ago!
And an even lengthier wall of text describing silly research agendas that only existed because of industry hype (AOP, anyone?).
Industry collaboration can be a tremendous impetus. However, it can also be a distraction from more important problems or even an impetus to focus on silly problems. Professors and students should be incentivized and encouraged to do good research; industrial collaboration can sometimes be a useful tool, but it is a means, not an end.
Finally, IMO, the central premise of your argument (that there's not enough collaboration) is not factually accurate in the current climate. Read the proceedings of any major AI conference. Filter for papers written at top universities. Count the number of papers with vs. without an industrial collaborator named in the acknowledgments or even in the author list. Failure to collaborate isn't a failing of modern mainstream AI research.
I never tried to constrain "freedom" in research. Freedom in research is crucial: With a good researcher, often only they have a good sense of the promise of their research direction. And they are the one making a bet: If their research soon turns out well, then, modulo academic politics, they make progress in their academic career, e.g., maybe get to upgrade their 20-year-old used Mazda to a 10-year-old used Toyota and celebrate with a toast of tap water!!!
If current academic AI research is too close to non-academic problems, okay, I can believe that, but I see little downside since I have no respect for 90+% of current AI work anyway.
Net, contact with non-academic problems is crucial for STEM fields, though with bad work it can be abused. Of course it can be abused; that's a special case of the general situation that nearly anything can be abused.
I spent a lot of time in STEM-field academics. My considered, solid, well-informed opinion is that there is far too little contact with important non-academic problems. E.g., when I went from Director of Operations Research at FedEx to graduate school in applied math, I brought with me a nice collection of important practical problems. In casual conversations, as I described some of those problems, even very pure research profs took detailed notes furiously. When I was an applied math prof in a B-school and MBA program, there were nearly no people from business in the halls with pressing problems looking for solutions, and that situation was really bad for the business people, the students, the faculty, faculty research, and the B-school.
The suspicion has to be strong that if a research-teaching hospital were run like a B-school, then the physicians and researchers would be off studying the possibilities of silicon-based life on the planet Faraway, no one would even know how to dress a skinned knee, there would be no progress on any of the major, pressing medical problems, e.g., heart disease and cancer, and no one would want to go to a hospital no matter how badly they hurt.
> I never tried to constrain "freedom" in research. Freedom in research is crucial
Well then, I think we're violently agreeing. However, a couple of observations.
> e.g., maybe get to upgrade their 20-year-old used Mazda to a 10-year-old used Toyota and celebrate with a toast of tap water!!!
Here is CMU's dean on what happens to faculty with successful AI/ML research agendas: "How to retain people who are worth tens of millions of dollars to other organizations is causing my few remaining hairs to fall out".
I didn't realize how expensive used Toyotas have gotten...
>...applied math
I'll reiterate that CS, and especially AI, have a completely different culture.
Also, this sentence seems to somehow undermine your entire thesis:
> If current academic AI research is too close to non-academic problems, okay, I can believe that, but I see little downside since I have no respect for 90+% of current AI work anyway.