Quantifying everything: Wolfram Alpha and algorithms

Wolfram Alpha is pretty great: you type in a problem and it finds a solution. It does this by transforming the natural-language problem into computational elements and entries in its curated data set, and then running the computations. Ta-da! The solution appears, provided that the problem is (1) reducible to computation and (2) built from elements that are in the database. Improving on (2) is easy enough, the argument goes: simply add more things to the database. If you want to calculate the likelihood that a word will occur in a Yeats poem, simply add more Yeats poems to the database and eventually you’ll get a meaningful result.
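That kind of likelihood estimate is easy to sketch. Here is a minimal Python version of the idea, relative word frequency over a corpus; the three-line corpus is invented for illustration, and this is of course not how Wolfram Alpha's curated data set actually works:

```python
import re
from collections import Counter

# Hypothetical toy corpus standing in for "all the Yeats poems in the database".
corpus = """
Turning and turning in the widening gyre
The falcon cannot hear the falconer
Things fall apart the centre cannot hold
"""

# Lowercase and split into words.
words = re.findall(r"[a-z']+", corpus.lower())
counts = Counter(words)
total = len(words)

def likelihood(word):
    """Maximum-likelihood estimate: the word's relative frequency in the corpus."""
    return counts[word] / total

print(likelihood("the"))  # "the" appears 4 times in 20 words: 0.2
```

Adding more poems to `corpus` refines the estimate, which is exactly the "just add more data" argument; what the number can never tell you is why the word matters.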

It’s principle (1) that’s potentially more problematic. It raises the question of the extent to which all knowledge can be quantified. In other words, a frequency count doesn’t explain why the repetition of a word in a Yeats poem might be important.

Ahh, you say. But that’s not science! True, science is about quantifiability. But it is also about inquiry, about determining how to ask questions that are verifiable. And it is about applying those questions generatively in order to develop new knowledge. Wolfram Alpha’s founder, Stephen Wolfram, has written about a new kind of science based on simple rules that can be embodied in computer programs. I’m ready to be convinced, but I’m concerned that the Age of the Algorithm could mean the end of the Age of Inquiry.
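To give a flavour of what "simple rules embodied in computer programs" means, here is a minimal sketch of one of Wolfram's elementary cellular automata, Rule 30, a one-line rule that produces surprisingly complex patterns. This is an illustration of the idea from A New Kind of Science, not a description of how Wolfram Alpha itself works:

```python
RULE = 30  # the rule number, read as an 8-bit lookup table

def step(cells):
    """Apply the elementary rule to every cell, wrapping at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right  # 3-cell neighbourhood as 0..7
        out.append((RULE >> pattern) & 1)              # look up the new state
    return out

# Start from a single live cell and print a few generations.
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The entire "theory" is the number 30; everything else is bookkeeping. Whether rules like this amount to a new kind of science is, of course, the question.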

My most memorable university exam included a question that asked me to differentiate special relativity from general relativity, and to explain how Einstein developed one from the other. I attempted to get Wolfram Alpha to compute this, but the closest result I got was this. So far, inquiry is safe.