SAVE THE DATE - March 11-12, 2019
In the Heart of Silicon Valley
From Bill Meisel, Publisher & Editor, LUI News (on the Language User Interface)
Perhaps we should be talking about Computer Intelligence (CI), not AI
Artificial Intelligence—AI—is obviously “hot.” Execs of leading companies have talked about their companies being “AI-First,” or otherwise emphasized the importance of the technology to their company’s long-term growth. This newsletter has emphasized the importance of the Language User Interface in connecting humans to technology to maintain the usability of that technology. Certainly, speech recognition and natural language processing are part of what most people consider AI.
But not all the attention to AI is celebratory. Brilliant men such as Elon Musk have warned that AI poses a risk to humanity once computers get smarter than humans, even to the extent of saying we need to colonize Mars to escape their domination. In a somewhat strange approach to the problem, he even helped found an AI startup, OpenAI, which may somehow address the threat to humans by giving more people the weapon.
The term Artificial Intelligence is vague. It perhaps implies technologies that do things that were previously only done by humans, such as speech understanding. Or perhaps AI applies to software that is analogous to the behavior of the human brain, e.g., using “Deep Neural Network” technology.
If we accept the first interpretation, then Machine Learning applied to big data in corporations to find efficient ways of doing things, such as Google using it to reduce energy use in its server farms, shouldn’t be part of AI: no human could do what Machine Learning does in these cases! Yet most experts certainly include such uses of machine learning in the concept of AI.
If AI is computing the way a human thinks, then perhaps calling Deep Neural Networks (DNNs) “Layered Classification and Regression Systems” would describe them more accurately and avoid the comparison with humans. Earlier techniques using similar concepts, but with models having fewer parameters, weren’t considered AI.
For example, an early statistical modeling system was Leo Breiman’s Classification and Regression Trees (CART, published as a book in 1984 and still available as a software package). CART was a layered model that used fewer parameters, and required less data and less computation, than DNNs. It didn’t invite comparison to human thought processes because the name Classification and Regression Trees didn’t suggest that comparison. CART could, however, be interpreted as breaking a problem down into a series of simpler sub-problems and combining the results of those sub-problems, much as DNNs do.
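To make the analogy concrete, here is a minimal, purely illustrative sketch of the core CART idea (not Breiman’s actual software, and restricted to a single one-dimensional feature for brevity): recursively split the data at the threshold that most reduces Gini impurity, solving a series of simpler sub-problems and combining the results.

```python
# Illustrative sketch of a CART-style classification tree (hypothetical code,
# one feature only): recursively split at the threshold minimizing Gini impurity.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Threshold on xs that minimizes the weighted impurity of the two children."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # split must put data on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

def build_tree(xs, ys, depth=0, max_depth=3):
    """Recursively split until pure or depth limit; leaves hold the majority class."""
    t = best_split(xs, ys)
    if gini(ys) == 0.0 or depth == max_depth or t is None:
        return max(set(ys), key=ys.count)  # leaf node
    left = [(x, y) for x, y in zip(xs, ys) if x <= t]
    right = [(x, y) for x, y in zip(xs, ys) if x > t]
    return (t,
            build_tree([x for x, _ in left], [y for _, y in left], depth + 1, max_depth),
            build_tree([x for x, _ in right], [y for _, y in right], depth + 1, max_depth))

def predict(tree, x):
    """Walk the tree to a leaf label."""
    while isinstance(tree, tuple):
        t, left, right = tree
        tree = left if x <= t else right
    return tree

# Toy data: class 0 below 5, class 1 at 5 and above.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
tree = build_tree(xs, ys)
print(predict(tree, 2), predict(tree, 7))  # → 0 1
```

Each recursive call handles a simpler sub-problem (a purer subset of the data), and the final prediction combines those sub-results, which is the decomposition the paragraph above describes.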
What’s in a name? A lot.
And I must admit to my guilt in the naming game. My 1972 book was entitled, “Computer-Oriented Approaches to Pattern Recognition,” suggesting the (intended) analogy to human pattern recognition, although the methods described were simply statistical/empirical machine learning. The book could have been titled, in today’s terms, “Machine Learning for Classification.”
The reality is that computers are already “smarter” than humans in many dimensions, e.g., the amount of data they can store and access quickly. What formerly required a human clerk surfing the Internet for days to find information can today be found almost instantly with a web search. Computers have been faster than humans at arithmetic for decades. (Remember the hand calculator?)
Computers can outpace humans at many tasks, and have long been used to help humans complete tasks more quickly and accurately because of that fact. In this note, I will simply refer to what computers do as Computer Intelligence (CI), and not try to decide what qualifies as AI.
CI has continually advanced over time, driven partly by innovations in computer science techniques and algorithms and partly by the exponentially growing power of computers. CI can indeed shrink some job categories faster than individuals or educational systems can adapt, even when the automation is not generally considered AI. Automation began with the steam engine replacing jobs that previously required human muscle, and computer power simply continues that long-term trend of machines doing more of what humans used to do.
Automation at an accelerated rate can overwhelm a human’s and an economy’s ability to adapt quickly enough to avoid problems. In my 2013 book, The Software Society: Cultural and Economic Impact, I argued that this continued acceleration (driven in part by exponential growth in CI) is indeed a paradigm shift that needs to be addressed.
But CI also creates potential solutions. It creates the possibility of Intelligence Augmentation (IA), where, for example, our digital assistant can be available continuously through a portable wireless device and provide us quick access to information that helps us complete tasks, including those that are part of our jobs. This might create the option of more effective education on the job, rather than requiring a change of our education system. For example, some call center agents can type natural language inquiries into an automated system that helps them find the information needed to respond to customers.
Advancing CI can create jobs as well as replace some. Amazon announced a few months back that it was expanding its Virtual Customer Service program, now in its fifth year, which gives employees the flexibility to provide customer service support to Amazon customers while working from home. The company said that Virtual Customer Service jobs are part of its plan to hire 30,000 part-time employees in the US over the next year. The work-at-home option can increase incomes for families where one member can’t work full-time, and where the efficiencies of working out of one’s home make such part-time work feasible while, for example, taking care of young children. Virtual Customer Service jobs are made possible by Internet connectivity, a key part of today’s CI (networking).
The category of jobs done from home, perhaps with a flexible schedule, may become a very important source of new income. Moreover, these jobs may require less training because of intelligent computer assistants. In fact, backing up digital assistants with agents who don’t announce themselves, but communicate through the assistant, is already a growing job category. In this case, humans help computers be smarter!
Using terms like “Artificial Intelligence” and “Cognitive Computing” (IBM’s term for AI) makes the discussion of issues created by exponential advances in CI—whether or not considered AI—more difficult. Terms such as AI imply that some aspects of CI have some sort of particularly threatening quality. The reality is that changes in our economy created by technology developments are a part of a very long trend. Those accelerating developments may require our attention and creativity to deal with, but CI can be part of the solution, not simply part of the job replacement problem. “Computer Intelligence” is a less threatening term than “Artificial Intelligence” and more accurately describes the challenge we must address.