In the 1960s, the first electronic calculators appeared, fundamentally changing our understanding of computation and mathematics. The idea that such a compact device could find the product of a long series of numbers in a matter of seconds was met with both shock and applause.
If you had asked someone walking down the street at that time for an example of “artificial” intelligence, they most likely would have said “a calculator.” But while many today would argue that a simple handheld calculator in no way emulates the true nature of artificial intelligence, we have to ask ourselves: what benchmark are we using?
To those living in the 1960s and ’70s, the calculator was the quintessential example of the artificial mind. It was a faster, more efficient, and error-free way to do arithmetic. No average human could compete with such a device; thus, in a very real sense, the calculator was artificial intelligence. It was our first conceptual model of AI.
Our definition of AI has changed over the years, shifting with each breakthrough in computing. Machine learning and neural networks have sharpened that definition; what remains absent is consciousness. As machines become increasingly capable, perhaps the bar for what constitutes AI will be raised again in the near future, reserving the title for machines that can convincingly replicate human emotion.
With the advent of newer and more focused technologies such as natural language processing and social intelligence, perhaps our view of AI will eventually fall along a lateral continuum rather than a linear trajectory. In the continuum model, AI applications are developed on an as-needed basis rather than in a never-ending race to the top.