AI through the 'ages'
Anyone who knows me knows I believe that anything people have created is a type of artificial intelligence (AI) or robot. Even a simple thing like a ruler, while fixed in nature, has intelligence to it. That's certainly not the definition a person on the street would give for a robot or AI, but it's mine. In my "short" post here I'm not going to cover every historical piece of AI or robotics, but I'll point out the key ones. I'm also grouping AI and robots together, even though robots are generally considered to be mobile (again, not my definition).
The first step toward the creation of intelligence, or "AI," was the programmable loom (the Jacquard loom) used to weave fabrics in the early 1800s. These machines used a type of stored data, a "procedure," to make the material. Super cool, especially for the time. It was a novel and somewhat abstract concept: not a mechanically fixed system, but one where people could change the data and get different patterns. Like the later music boxes and cylinders, the output (the material) could be changed with just a new "program."
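To make that program-as-data idea concrete, here's a tiny Python sketch (my own illustration, nothing to do with actual loom mechanics): one fixed "machine" whose output changes entirely based on the pattern card you feed it.

```python
# The loom's big idea: the machine stays fixed, while the "program"
# (a card of pattern data) determines the output.

def weave(card: list[str]) -> str:
    """The 'machine': renders whatever pattern card it is given."""
    return "\n".join(row.replace("1", "#").replace("0", ".") for row in card)

# Two different "programs" for the same machine.
diamond = ["00100", "01010", "10001", "01010", "00100"]
stripes = ["11111", "00000", "11111", "00000", "11111"]

print(weave(diamond))
print()
print(weave(stripes))
```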
The second step was Charles Babbage and Ada Lovelace with their mechanical computer (the Analytical Engine) and abstract programs in the mid-1800s. These were truer abstractions of AI than the physical-output-producing loom: they were designed to manipulate abstract mathematical concepts, much like modern computers.
The third step was the beginning of modern technology and electronic computing engines. In the 1940s, Alan Turing and John von Neumann defined and created the ancestors of the computing engines we use today.
The fourth step was the abstract concepts of AI and its vision from the 1960s and 1970s. There were MANY people involved; Marvin Minsky is one of the better-known "fathers of AI." This was "kind of" the beginning of real thinking about real AI concepts. Most of those concepts, however, could not be implemented at the time because the existing hardware was super crude.
The fifth step, even though a false one, was Japan's fifth-generation computer systems project of the 1980s. I remember thinking at the time it was cool and might be possible. Looking back, limited computing power, along with many other factors, made real working AI impossible then. But, again, it's the thinking about AI that mattered and helped lead to future steps. In 1988-89 I built an expert system with my coworkers at Northrop Corp. It ACTUALLY worked and was used. It was limited to a certain number of rules, around 3,000, but it did represent real knowledge. It was written in Ada! A big DoD language at the time.
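That system is long gone, but the core mechanism of a rule-based expert system is easy to sketch. Here's a minimal forward-chaining engine in Python (not our Ada system; the rules and facts are invented for illustration):

```python
# A minimal forward-chaining rule engine, the core mechanism of an
# expert system. Each rule maps a set of conditions to a conclusion.
# These rules are hypothetical; a real system would have thousands.

rules = [
    ({"engine_overheats", "coolant_low"}, "coolant_leak"),
    ({"coolant_leak"}, "inspect_radiator"),
    ({"engine_overheats", "coolant_ok"}, "inspect_thermostat"),
]

def infer(facts: set[str]) -> set[str]:
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"engine_overheats", "coolant_low"}))
# -> includes 'coolant_leak' and, by chaining, 'inspect_radiator'
```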
The sixth step, in the 1990s, included Deep Blue, which gained attention by beating a chess grandmaster, and the first really usable transcription programs, like Dragon. Deep Blue wasn't really AI; it was more of a memory-intensive search algorithm. BUT the perception was that it was intelligent, and that's what matters. To me, the transcription programs perfected by the end of the 1990s are what will lead to real intelligence in the near term. Transcription, along with machine learning and neural networks, is the key moving forward.
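For a sense of what "memory-intensive search algorithm" means here, this is the skeleton of the brute-force game search behind chess engines, shown on a toy game in Python. Deep Blue added custom hardware, opening/endgame databases, and far deeper search, but the minimax idea is the same:

```python
# Minimax on a toy game: take 1 or 2 stones; whoever takes the last
# stone wins. Explore every line of play and score each position from
# the maximizing player's point of view (+1 win, -1 loss).

def minimax(stones: int, maximizing: bool) -> int:
    """Best achievable score for the maximizing player."""
    if stones == 0:
        # The previous player took the last stone; the side to move lost.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

print(minimax(3, True))  # -> -1: 3 stones is a lost position to move from
print(minimax(4, True))  # -> +1: take 1 stone, leaving the opponent with 3
```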
The seventh step, and where we are now, includes systems that learn without predefined knowledge (working neural nets) and personal assistants like Siri, Alexa, Google Home, and Cortana. These personal assistants are not really AI, as they serve up pre-scanned answers, but to the people using them they appear to be intelligent. Perception is key. In the background, there is a lot of natural language processing going on to collect and organize this information. That is AI. It is also the biggest area of growth moving forward. Very exciting.
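As a caricature of that "pre-scanned answers" pattern, here's a tiny Python sketch: normalize the utterance, match it against known intents by keyword overlap, and hand back a canned response. The intents and answers are made up; real assistants use far richer NLP pipelines:

```python
import re

# Each intent: a set of trigger keywords and a pre-scanned answer.
INTENTS = {
    "weather": ({"weather", "rain", "forecast"}, "Today looks sunny."),
    "time":    ({"time", "clock"},               "It is 3:00 PM."),
    "music":   ({"play", "music", "song"},       "Playing your playlist."),
}

def answer(utterance: str) -> str:
    """Pick the intent with the most keyword hits; fall back politely."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    best, overlap = None, 0
    for keywords, response in INTENTS.values():
        hits = len(words & keywords)
        if hits > overlap:
            best, overlap = response, hits
    return best or "Sorry, I don't understand."

print(answer("What's the weather forecast?"))  # -> Today looks sunny.
print(answer("Play some music please"))        # -> Playing your playlist.
```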
All of the previous steps were made possible by two main factors. The first is computing/memory power. Without computing and memory, nothing is possible, especially when it comes to AI. With cloud computing services today, anyone, not just the large companies, can spin up tens of thousands of machines with enormous amounts of memory. Fast memory makes almost any AI algorithm possible. If these hardware services had been available 50 years ago, in the 1960s, the computer scientists of that era would have been able to achieve a lot of what we do today. This doesn't take away from the abstract models created during those years; it only points out that we are benefiting from today's technology.
The second main factor moving forward is transcription. As I've said before, transcription is the last mile for computing. It bridges the human-computer interaction barrier and allows a computer to fully interact with a person, so that it appears, and can be, intelligent. Given the memory and computing power that exists today across large distributed cloud systems, we are on the precipice of an information/AI revolution. The Amazon Echo and other devices are small windows into that world.
Computer scientists, knowledge engineers, data scientists, and others can now create a semantic brain and have it interact with people on any subject. All the thinking and theorizing of the past 50 years or more can now be realized.
It’s all waiting for us. Let's go!!