The Development of ‘Artificial Intelligence’
What is ‘Artificial Intelligence’?
The Cambridge Dictionary defines ‘Artificial Intelligence’ as:
“the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognise pictures, solve problems, and learn.”
What is the Root of ‘Artificial Intelligence’?
To answer this question, I would like to go back to what I consider the root of ‘artificial intelligence’: the 18th and 19th centuries, the era of the Industrial Revolution. This was the period when machines were being built that began to replace people, putting more and more out of work; some craftspeople, for example, were displaced entirely. Richard Arkwright (1732–1792) invented a machine for carding cotton, which substituted machinery and metal for human hands and fingers, producing stronger spun thread more quickly and easily. It revolutionised the world of work, but it also made thousands of skilled workers obsolete. In relation to the definition of ‘Artificial Intelligence’ above, these machines would not be considered ‘artificially intelligent’, because they do not ‘think’ like humans; however, they did act like humans on a physical level, replacing them in physical tasks.
The Next Stage of ‘Artificial Intelligence’: the ‘Turing Machine’
A Turing Machine:
“…is a hypothetical machine thought of by the mathematician Alan Turing in 1936. Despite its simplicity, the machine can simulate any computer algorithm, no matter how complicated it is.”
In my opinion, this was the next step towards ‘Artificial Intelligence’, because the machine Turing described could, in principle, carry out any step-by-step logical procedure, tirelessly and far faster than a human working by hand. However, with regard to the definition above, it is not quite an ‘artificially intelligent’ machine, because it cannot “understand language, recognise pictures”, or “learn”.
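The idea of a machine that simulates any algorithm can be made concrete with a short sketch. Below is a minimal, illustrative Turing machine simulator in Python; the state names, the `run_turing_machine` helper, and the bit-inverting transition table are my own hypothetical choices for demonstration, not anything from Turing's 1936 paper.

```python
# Minimal sketch of a Turing machine: a tape, a read/write head, a state,
# and a transition table mapping (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine; return the final tape as a string."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    # Read the tape back off, dropping blank cells at either end.
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# An illustrative machine: flip every bit of a binary string, halting at the first blank.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", INVERT))  # prints "0100"
```

Despite having only a tape, a head, and a lookup table, this scheme is enough (given a suitable transition table) to express any computer algorithm, which is exactly the simplicity-plus-universality the quotation describes.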
The Development of ‘Artificial Intelligence’ – 1956+
John McCarthy was the first person to use the term ‘Artificial Intelligence’, in 1956, when he held a conference on the topic. In the same year, the first ‘Artificial Intelligence’ program, called the ‘Logic Theorist’, was demonstrated by Allen Newell, J. C. Shaw and Herbert Simon at the Carnegie Institute of Technology (now Carnegie Mellon University). Since these events, computers and robots have developed significantly.
In the 1990s, in particular, there were:
“Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.”
Then, in 2000:
“Interactive robot pets (a.k.a. “smart toys”) became commercially available.”
Recent Breakthroughs in ‘Artificial Intelligence’ – 2011 to 2015
Stephen Gold (an expert in ‘Artificial Intelligence’) thinks these are some of the breakthroughs in ‘Artificial Intelligence’:
- IBM Watson wins Jeopardy!, demonstrating the integration of natural language processing, machine learning (ML), and big data.
- Siri/Google Now redefine human-data interaction.
- Deep learning demonstrates how machines can learn on their own, advance, and adapt.
- Image recognition and interpretation now rivals what humans can do, allowing for image interpretation and anomaly detection.
- AI apps proliferate, and universities scramble to adopt AI curricula.