What happens if you run it on itself? E.g., you write some basic program with a little structure, generate code from it, and then analyze that code, say by making a graph of it (if that makes sense). To put it another way: what are the possibilities for building better tools for Mathematica (or Wolfram Language, or whatever it's called now) using Mathematica itself?
It would be slightly easier than coding Mathematica from scratch (you have better computers and an example to follow) but much harder to fund, since you'd be re-treading old ground and wouldn't be able to be competitive for a long time. The types of problems Mathematica simplifies don't include coding Mathematica. To be sure, it pioneered a few nifty techniques you'd want to take advantage of, especially around symbolic manipulation, but you wouldn't be able to implement them any faster by using Mathematica itself (vs. just drawing inspiration from it).
Mathematica's huge selling points (to me) are that it's a fantastic CAS and that it shatters the barriers between analytic and numeric tools. Many engineering and statistical codes are 2/3 math and 1/3 computer science (or the other way around). Mathematica is in a completely different league for those problems because it handles the 2/3 about 50x better than your typical programming language, and the interface between the 2/3 and the 1/3 about 100x better.
Mathematicians, engineers, and scientists regularly use a number of abstractions that your typical programmer doesn't care about: integration, differential equations, linear algebra (including infinite-dimensional spaces and tensors), coordinate systems, statistical distributions, etc. These abstractions are almost never considered during language design, so you spend hours writing glue code to accomplish the simplest of tasks. Unless you have a Mathematica license.
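To make the glue-code point concrete, here's a minimal sketch (entirely my own illustration, not from the thread) of what "integration" costs you in a general-purpose language with no CAS: you hand-roll a quadrature routine. Mathematica's Integrate/NIntegrate collapse all of this into one expression.

```python
# Hypothetical illustration (all names are mine): the glue code a
# general-purpose language makes you write for one of the abstractions
# listed above. A hand-rolled composite Simpson's rule, standing in for
# what Mathematica does with a single NIntegrate call.
import math

def simpson(f, lo, hi, n=1000):
    """Composite Simpson's rule on [lo, hi]; n must be even."""
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Integrate exp(-x^2) on [0, 1]; the exact value is sqrt(pi)/2 * erf(1).
approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(abs(approx - exact) < 1e-9)  # True: the quadrature converged
```

And that's the easy case: a smooth 1-D integrand on a box. The moment the domain is a tetrahedral mesh or the integrand is symbolic, the glue multiplies.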
Example: you're writing a finite element code. The "meat and potatoes" of the problem involves putting a differential equation into weak form and evaluating a bunch of integrals to compute matrix elements. Mathematica can represent and display those elements using proper mathematical notation, it can find analytic AND numeric solutions, it can hold off on substituting variables until the last second (or elegantly substitute different sets of values for different problems), it can visualize all of this in a dozen different ways without worrying about indices and sampling rates, and it can generate code if you're feeling trusting that day.

Now you want to evaluate convergence for a bunch of different meshing strategies. Not a problem: Mathematica can interpolate your meshed solutions to whatever order you want, in whatever dimension, with a single command, and it can take differences between those interpolations and integrate one against another. You never have to notice that the mesh points don't line up. You never have to spend a second thinking about multi-dimensional quadrature on tetrahedral meshes.

Now you want to compare statistics about the mesh graph to an analytic model you found using Bayes' theorem in a 3D parameter space, and solve for the maximum-likelihood values? Again, you don't have to worry about meshing, or gradient descent, or how to put least squares into matrix form. Just write the high-level math and let Mathematica worry about the rest.

Now you want to use a simplified PDE model with an analytic solution and compare that to the numerical solutions? Mathematica can solve the PDE and take care of all the interpolation and quadrature needed to integrate the squared difference against the numerical solutions. Then it can plot all these errors and update the plot automatically as you go back and change things.

The value-addition vs. a general-purpose language is off the charts.
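To show what "you never have to notice that the mesh points don't line up" is sparing you, here's a hypothetical plain-Python version of that chore (grids, data, and names are all made up for illustration): two solutions sampled on mismatched 1-D grids, where integrating their squared difference forces you to write the interpolation glue yourself. In Mathematica, Interpolation plus NIntegrate make this one line.

```python
# Hypothetical sketch of the mesh-mismatch chore (all names mine).
# Solution A and solution B live on grids that don't line up, so
# integrating their squared difference requires interpolation glue.
import math

def interp(xs, ys, x):
    """Piecewise-linear interpolation on a sorted grid xs."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] * (1 - t) + ys[i + 1] * t
    raise ValueError("x outside grid")

# Solution A on a uniform grid, solution B on a non-uniform one,
# deliberately offset by 0.5 so the "error" integral is known.
xa = [i / 10 for i in range(11)]
ya = [math.sin(x) for x in xa]
xb = [0.0, 0.07, 0.21, 0.4, 0.55, 0.81, 1.0]
yb = [math.sin(x) + 0.5 for x in xb]

# Trapezoidal integral of the squared difference on a common fine grid.
n = 2000
err2 = 0.0
for i in range(n):
    x0, x1 = i / n, (i + 1) / n
    d0 = interp(xa, ya, x0) - interp(xb, yb, x0)
    d1 = interp(xa, ya, x1) - interp(xb, yb, x1)
    err2 += 0.5 * (d0 * d0 + d1 * d1) / n
print(abs(err2 - 0.25) < 0.02)  # True: integral of 0.5^2 over [0, 1]
```

And this is the trivial 1-D linear case; the comment above is about arbitrary-order interpolation on tetrahedral meshes in arbitrary dimension, where the glue becomes a project in itself.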
None of that translates to the process of writing another Mathematica, since the algorithms Mathematica lets you abstract away are precisely the things you would need to worry about to implement another Mathematica.
Mathematica = A + B = (1000s of algorithms that mathematicians, scientists, and engineers care about, all coded with a common interface) + (syntax that elegantly glues them together)
B is comparatively easy and interesting if you like languages -- Mathematica's language is LISP on steroids. A is the schlep, and it's a doozy.
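The "LISP on steroids" remark can be sketched concretely: in Mathematica everything is an expression tree (x + 2y is really Plus[x, Times[2, y]]) and evaluation is pattern-based rewriting. Here's a toy Python analogue (the representation and rule are mine, purely illustrative): expressions as nested tuples, with one rewrite rule that simplifies x + 0 to x.

```python
# Toy sketch of the expression-rewriting model (all names mine).
# Expressions are nested tuples ("Head", arg, ...); atoms are
# symbols (strings) or numbers. One rule: drop additive zeros.
def rewrite(expr):
    """Bottom-up application of a single algebraic rewrite rule."""
    if not isinstance(expr, tuple):
        return expr                        # atom: symbol or number
    head, *args = expr
    args = [rewrite(a) for a in args]      # rewrite children first
    if head == "Plus":
        args = [a for a in args if a != 0]  # x + 0 -> x
        if len(args) == 1:
            return args[0]
    return (head, *args)

# (x + 0) + (0 + y)  ->  Plus[x, y]
e = ("Plus", ("Plus", "x", 0), ("Plus", 0, "y"))
print(rewrite(e))  # ('Plus', 'x', 'y')
```

Part B of the equation above is, roughly, making this model pleasant at scale: thousands of heads, user-definable rules, and notation on top.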
Thanks for the comprehensive answer! I phrased the question poorly, in that I didn't intend to suggest running Mathematica against its own codebase, but rather against shorter end-user programs, with a view to developing IDE-type addons.
I agree with your general comments, as I hate writing glue code and my interest is primarily for DSP and simulation applications, for which I already admire it.
Whoops, sounds like you already know exactly what I spent 3/4 of my post belaboring :)
I'm still not sure I understand your proposal. How do short IDE-type addons solve the problem of integrating siloed codebases with each other? The way I see it, the only way forward is
1. Convince a language person that first-class support for algebraic metaprogramming is important
2. Convince someone from each of the fields {numerical methods, linear programming, finite element methods, graph theory, etc} to export an interface to the language in #1, possibly re-structuring their entire codebase to do so
3. Documentation, convenience schleps
I say "convince" because the problem of making a Mathematica replacement is far enough down my todo list that I have no choice but to be realistic about the fact that I probably won't be that person.
I'm very into flow-based programming, because I use a fair bit of it for audio DSP and that's how I think. So these days, when I look at code-as-text, I often find myself thinking about how I'd like to see the functions, branching, etc. abstracted out into a flow-chart diagram automatically. I was just struck during the demo (and from some previous explorations) by how good M. is at developing graphs from structural information, and I think to myself, 'why not from code?'
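The "graphs from code" idea needn't stay hypothetical; as a rough sketch (the toy module source below is made up), Python's standard ast module parses source into a tree, and a short walk extracts a call graph whose edges go from each function to the names it calls:

```python
# Hypothetical sketch of extracting a graph from code (the sample
# source is invented). A walk over the AST collects (caller, callee)
# edges, which any graph tool could then lay out as a flow chart.
import ast

SOURCE = """
def load(path): return open(path).read()
def parse(text): return text.split()
def main(): return parse(load("data.txt"))
"""

tree = ast.parse(SOURCE)
edges = set()
for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            # Only direct name calls; method calls would need more work.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.add((fn.name, node.func.id))

print(sorted(edges))  # [('load', 'open'), ('main', 'load'), ('main', 'parse')]
```

Feeding an edge list like that into Mathematica's graph machinery is exactly the kind of structural input it renders well.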
On a tangential note, this latest demo looks very very close to the sort of thing Bret Victor has been talking about over the last year or two (http://worrydream.com). If you haven't checked it out I think it might be interesting to you.
We build networks of lots of different kinds against our own codebases and databases. And personally I've done a bunch of experiments around visualizing code in the past.
Unfortunately, I don't think I, or anyone else at the company, has convincingly cracked the problem yet. Generally, the visualizations look pretty, but they aren't "tactile" or practical enough to be useful.
But you've convinced me that it would be worth digging up some of my old experiments and writing a blog post around them, so that at least any progress I've made can be carried further by other people.
Yes! The prospect of doing that deeply and seriously is probably the single thing I'm most excited about tackling next.
There's a whole tier of previously impractical metaprogramming that would come with being able to analyze and manipulate programs (and commit histories) as easily as data.
Syntactic macros are just the teeniest tip of the iceberg!
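As one small taste of manipulating programs as data (an example of mine, not something from this thread): a dozen lines with Python's ast.NodeTransformer can rewrite a program before it runs, here folding constant arithmetic so that 2 * 3 + x becomes 6 + x.

```python
# Hypothetical sketch of program-as-data manipulation (example mine):
# an AST transformer that folds constant arithmetic before execution.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstFold(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant) and
                isinstance(node.right, ast.Constant) and
                type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.fix_missing_locations(ConstFold().visit(ast.parse("y = 2 * 3 + x")))
print(ast.unparse(tree))  # y = 6 + x
```

Doing this kind of thing over whole codebases and commit histories, with the ergonomics of ordinary data queries, is the tier of metaprogramming being pointed at above.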