< a quantity that can be divided into another a whole number of times />

June in review

June 15, 2020

I still have a lot of bookmarks lurking around in Safari, since I had overlooked some of them for a while. Anyway, here they are. Note that quick links have been posted in the Micro section instead.

The twenty-first century has seen the rise of an extraordinary collection of prediction algorithms: random forests, gradient boosting, support vector machines, neural nets (including deep learning), and others. I will refer to these collectively as the “pure prediction algorithms” to differentiate them from the traditional prediction methods illustrated in the previous section. Some spectacular successes—machine translation, iPhone’s Siri, facial recognition, championship chess, and Go programs—have elicited a tsunami of public interest. If media attention is the appropriate metric, then the pure prediction algorithms are our era’s statistical stars.
The adjective “pure” is justified by the algorithms’ focus on prediction, to the neglect of estimation and attribution. Their basic strategy is simple: to go directly for high predictive accuracy and not worry about surface plus noise models. This has some striking advantages and some drawbacks, too.
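A toy sketch of that contrast (my own illustration, not from the quoted passage): a least-squares fit estimates an interpretable slope for the "surface plus noise" model, while a 1-nearest-neighbour predictor skips the model entirely and aims only at predictive accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "surface plus noise" truth: y = 2x + noise.
x_train = rng.uniform(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, 0.1, 200)
x_test = rng.uniform(0, 1, 50)
y_test = 2 * x_test + rng.normal(0, 0.1, 50)

# Traditional route: estimate the surface explicitly via least squares
# (through the origin); the slope is an interpretable quantity.
slope = np.sum(x_train * y_train) / np.sum(x_train ** 2)
ols_pred = slope * x_test

# "Pure prediction" route: 1-nearest-neighbour. No parameters to
# interpret -- it memorises the training set and predicts by lookup.
nn_idx = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
nn_pred = y_train[nn_idx]

print("OLS slope estimate:", slope)  # estimation/attribution output
print("OLS test MSE:", np.mean((ols_pred - y_test) ** 2))
print("1-NN test MSE:", np.mean((nn_pred - y_test) ** 2))
```

Both routes predict about equally well on this easy problem; the difference is that only the first yields an estimate you can attach meaning to.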

♪ PJ Harvey • The Peel Sessions


See Also

» May in review (2) » May in review » April in review » ArXiving on March 2020 » March in review