Presentation Title: How Computers Learn
Description: Every day, we now produce more computable data than was created in the entire history of humanity before the year 2000. If "Data is the oil of the 21st century," as The Economist writes, how do you gauge the value of a commodity that's growing exponentially? Data's utility is measured by what we can learn from it, and how we can learn with it. Today, computers are doing much of that learning for us.
The age of “Big Data” has quickly given way to the age of “Too Much Data,” where selecting the right data and analyzing it the right way has become a harder problem than gathering it in the first place. Machine learning has gained a reputation as the crucial tool for leveraging this data glut positively. This reputation is deserved: machine learning can automate the creation of best-fit models that offer predictive and analytic powers beyond the capacities of human creation. From mastering the game of Go to optimizing supply chains to voice and image recognition, machine learning has offered a set of generic tools that have had amazing success across a stunningly vast landscape of applications. Google and Facebook have led the way with their massive data apparatus, but high-powered machine learning tools are now available to all.
Yet machine learning is mysterious. Its optimized models are frequently opaque, and it promises optimization rather than perfection. Analysts must provide it with concrete and carefully defined metrics and test data to ensure that machine learning matches the right patterns. I will present a lightning tour of machine learning's achievements as well as its limitations, explaining where it succeeds and fails, but more importantly why it does so. Some patterns are far easier to pin down than others. Identifying which is which is the key to using machine learning successfully.
Ultimately, I’ll show where humans remain indispensable by comparing and contrasting the data science of machine learning with the growth and development of a different kind of intelligence, that of my own children.
DAVID AUERBACH is the author of Bitwise: A Life in Code (Pantheon). He is a writer and software engineer who has worked for Google and Microsoft. His writing has appeared in The Times Literary Supplement, MIT Technology Review, The Nation, The Daily Beast, n+1, and Bookforum, among many other publications. He has lectured around the world on technology, literature, philosophy, and stupidity. He lives in New York City.
After studying computer science as well as literature and philosophy at Yale University, Auerbach joined Microsoft as a software engineer to work on its instant messenger service, where he did both client and server work, becoming a technical manager on the latter. While there, he introduced emoticons to chat programs, in addition to pursuing standards and interoperability for instant messaging. Pursuing his interest in large-scale services and data analysis, he subsequently moved to Google to work on its core search infrastructure, specifically the search engine crawler, where he worked on optimization and heuristics.
After fifteen years as a software engineer, he turned to writing and became a technical columnist for Slate while continuing to write on the humanities and social sciences for other publications. This work culminated in his book Bitwise, written under a New America fellowship, an attempt to reconcile how machines see humans and how humans see machines. That question is at the heart of machine learning today, as machine learning has achieved some of the most stunning results artificial intelligence has yet yielded. Yet the oblique and non-semantic nature of machine learning has produced much confusion over its capacities and limitations. Auerbach believes that machine learning truly is a milestone in computer science, but that we can best understand and utilize its marvels by understanding both what it can and cannot do, and more importantly, why.