The Development of ‘Artificial Intelligence’

What is ‘Artificial Intelligence’?

 The Cambridge Dictionary defines ‘Artificial Intelligence’ as:

“the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognise pictures, solve problems, and learn.”

What is the Root of ‘Artificial Intelligence’?

 To answer this question, I would like to go back to what I think was the root of ‘artificial intelligence’: the ‘Industrial Revolution’ of the 18th and 19th centuries. This was the period when machines were being made and beginning to replace people, putting more and more out of work; some craftspeople, for example, were replaced entirely. Specifically, Richard Arkwright (1732–1792) invented a machine for carding cotton. It replaced the need for human hands and fingers, using machinery and metal instead to create stronger spun thread more quickly and easily. It revolutionised the world of work, but it also made thousands of skilled workers obsolete. In relation to the definition of ‘Artificial Intelligence’ above, these machines would not be considered ‘Artificially Intelligent’, because they do not ‘think’ like humans; however, they did act like humans on a physical level, as they replaced humans in physical tasks.


The Next Stage of ‘Artificial Intelligence’: the ‘Turing Machine’


A Turing Machine:

 “…is a hypothetical machine thought of by the mathematician Alan Turing in 1936. Despite its simplicity, the machine can simulate any computer algorithm, no matter how complicated it is.”

 In my opinion, this was the next step towards ‘Artificial Intelligence’, because Alan Turing devised a machine that exceeded humans in the speed of its logic. However, with regard to the definition, it is not quite an ‘Artificially Intelligent’ machine, because it cannot “understand language, recognise pictures or learn”.

The Development of ‘Artificial Intelligence’ – 1956 Onwards

 John McCarthy first used the term ‘Artificial Intelligence’ in 1956, when he held a conference on the topic. In the same year, the first ‘Artificial Intelligence’ program, the ‘Logic Theorist’, was demonstrated by Allen Newell, J.C. Shaw and Herbert Simon at the Carnegie Institute of Technology (now Carnegie Mellon University). Since these events, computers and robots have developed significantly.

Particularly in the 1990s there were:

“Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.”

 Then in 2000:

“Interactive robot pets (a.k.a. “smart toys”) became commercially available.”


 Recent Breakthroughs in ‘Artificial Intelligence’ – 2011 to 2015

 Stephen Gold (an expert in ‘Artificial Intelligence’) thinks these are some of the breakthroughs in ‘Artificial Intelligence’:

  1. IBM Watson’s Jeopardy! win demonstrated the integration of natural language processing, machine learning (ML) and big data.
  2. Siri and Google Now redefined human–data interaction.
  3. Deep learning demonstrated how machines can learn, advance and adapt on their own.
  4. Image recognition and interpretation now rival what humans can do, allowing for image interpretation and anomaly detection.
  5. AI apps proliferated, and universities scrambled to adopt AI curricula.



A Guide to Metadata 


When I started the course, I hadn’t the faintest idea what metadata was; it sounded like another language to me. Even when I googled it, the definition I found, “Data that describes other data”, wasn’t very helpful and I was still slightly confused. But it all started to make sense when I listened to a lecture about metadata given by Lyn Robinson, and by the end I knew what it was about.

The aim of this blog is to inform anyone about the basics of metadata, especially people who are new to library and information science, as I was before I started learning about it at City, University of London.


So what is metadata?

Well, there are different types of metadata.

DESCRIPTIVE METADATA – describes the intellectual content of the data, for example, writing on the back of a photograph, or the title and artist of a song.

STRUCTURAL METADATA – organises the parts and relationships of the data, for example, a table of contents showing chapters and sections, or a computer file system with files organised into folders.

ADMINISTRATIVE METADATA – provides information on how to manage data, for example, rights, technical and preservation metadata.


There are also three sub-categories of ADMINISTRATIVE METADATA:

  1. RIGHTS METADATA – an example is the copyright warning found at the beginning of a film, informing you about its rights and distribution.
  2. TECHNICAL METADATA – an example is the regional coding on DVDs, indicating which DVD players they can be played on.
  3. PRESERVATION METADATA – when you download a file, a checksum file ‘talks’ to your own checksum program. It checks that the file you’re downloading is the file it is intended to be and that it hasn’t been tampered with by hackers during the transfer.
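To make the checksum idea concrete, here is a minimal sketch in Python of how a downloaded file can be verified against a published digest. It assumes SHA-256 as the algorithm, and the file name and contents are made up for the demo; in practice the published digest would come from the download site.

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: write a small file, then verify it against a known-good digest.
with open("download.txt", "wb") as f:
    f.write(b"hello metadata")

published = hashlib.sha256(b"hello metadata").hexdigest()
print("intact" if sha256_of("download.txt") == published else "tampered")
```

If even one byte of the file changed in transit, the computed digest would no longer match the published one, which is exactly the check a checksum program performs.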




What does metadata look like?

Metadata comes in many formats. Sometimes metadata is in the same format as the data: an e-book, for example, may come in many formats, such as XHTML, PDF or spoken-word audio, but these must include a plain-text version with a standardised header.

Metadata can also be in a different format from the data: a WAV audio file converted from a cassette, for example, may have an associated .txt file recording the content of the original cassette sleeve and the hardware and settings used for digitisation.

The metadata could be within an audio file, for example, a description about it at the start.

The metadata could be non-verbal, for example, a microfilm target that contains technical metadata.


When is metadata created? Who creates it?


With regard to books:

Metadata can be added by the publisher, such as the author’s biography and bibliography.

The custodial history in schoolbooks (which is administrative metadata) is added by students, such as information about themselves and their assessment of the book’s condition.

Also, someone could put a label on a book indicating that it is free for anyone to pick up. (This is rights metadata.)

So books can have lots of metadata added by different people at different times.


Where is metadata?


Metadata can be inside the data or part of the data.

Some examples are:

The title page of a book, which is part of the book itself, shows the title, author, date of publication and so on.

Digital file formats have metadata inside the file, often called ‘header information’. You can sometimes see it if you open the file in a text editor such as Notepad.
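A well-known piece of this header information is the ‘magic number’: the first few bytes of many file formats identify what kind of file it is before any actual content begins. Here is a small illustrative Python sketch; the signature table covers just three common formats, and the demo file name and contents are invented.

```python
# A few real format signatures ('magic numbers') found at the start of files.
signatures = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"ID3": "MP3 with an ID3 metadata tag",
}

def identify(path):
    """Guess a file's format from its first few header bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in signatures.items():
        if head.startswith(magic):
            return name
    return "unknown format"

# Demo with a made-up file that starts like a PDF.
with open("demo.pdf", "wb") as f:
    f.write(b"%PDF-1.7 demo")

print(identify("demo.pdf"))  # → PDF document
```

This is the same trick file managers and the Unix `file` command use: the metadata that identifies the file lives inside the file itself.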

Metadata can also be stored near the data: the title and author printed on a book’s cover or spine are often not considered part of the data itself, since the cover can be replaced.


Why is Metadata important?


It helps us to find things: we can search for music, books and films by name, genre, date and so on. If files are organised effectively and efficiently on a computer, metadata helps us to find and retrieve the information we need at a given time. This also applies to information on the internet, which is especially important if you’ve written an academic document, as you’d want people to be able to cite your work in their publications.


I hope this has explained what metadata is simply – it certainly did for me! Most of the information comes from two YouTube videos that also explain metadata simply.



Discovering the Wellcome Collection

The Permanent Exhibition

On Wednesday 19th October 2016, I visited ‘The Wellcome Collection’, which is a museum and library dedicated to medicine. There was a permanent exhibition on the first floor and I found the section about ‘Obesity’ the most interesting.


When I first entered the section, there was a huge sculpture of what I thought was an exceedingly large, deformed ball of fat on two legs.


However, this is how the artist, John Isaacs, described it:


Then I read a plaque written by Paul Sacher. The first part was about childhood obesity. He talked about how most children spend too much time inside playing video games or watching television, combined with unhealthy eating, and do not get the advised minimum of 60 minutes of exercise per day. I agree with this, and I think it has happened because we have entered a technological era in which most people in the ‘Western World’ have computers, iPads, game consoles and TVs. Also, some parents don’t have time to take their children outside every day, especially if they are working hard to maintain the lifestyle they would like their family to have. Furthermore, when they are home they have basic household tasks to do, such as cleaning the house and doing the laundry, so they may occupy their children by putting them in front of the TV or giving them an iPad to play on.

In response to Paul Sacher’s comment about unhealthy eating, I think supermarkets are one of the main contributors to unhealthy eating in general – not just for children. They often have offers on sugary and fatty foods, such as crisps, biscuits and cakes, rather than on healthier foods, such as fruit and vegetables. Consequently, parents are likely to buy more unhealthy food because it’s cheaper, so if supermarkets changed their pricing strategies, people in general might become healthier. Furthermore, here is an article from ‘The Telegraph’ about a study conducted by Cambridge University in 2014, which found that “Healthy foods cost three times as much as unhealthy foods…showing a widening gap in the costs between junk foods and fine fare.”

Paul Sacher also said that diets don’t really work, because they are something people practise for short periods of time, and once someone has been obese they have already made more fat cells than someone who has not been obese. Most of the time, if someone goes “back to eating or being as sedentary as they were before” once they stop dieting, they will easily put the weight back on, which is “why changing one’s lifestyle seems to work the best.” I can understand this: if dieting is the only change somebody makes to their lifestyle, then when they stop they will end up back where they were before they started. Therefore, as Paul stated, people need to change their lifestyle rather than just diet.

Then, in another part of the exhibition, there were eight cubes in a glass box that spelled ‘dyslexia’.

I found these interesting because I’ve worked in many schools as a teaching assistant through some agencies, and I have worked one-to-one with children who had special needs, including some with dyslexia.

The Reading Room


When I entered ‘The Reading Room’, I was instantly amazed, as it felt like I’d entered a different building. It was set up like what I think of as an old library, unlike the rest of the building’s decor, which was modern. ‘The Reading Room’ is a place where you can read the books while you’re there. There were a number of bookcases, tables and soft, comfy chairs where people can study and read. There were also some machines from the 1920s that were used in medicine, such as an X-ray machine and a dental station. ‘The Reading Room’ seemed so relaxing, and I’m sure I could easily find myself studying or reading in it.

The Broken Brain and the Button

Before starting the Library Science MSc course, I had already read a few chapters of ‘Introduction to Information Science’ by David Bawden and Lyn Robinson. I found a couple of the chapters hard work to read, as a lot of the information was new and complex to me. Now that I have started the course, though, I am finding it very interesting.

 Specifically, a part of the first lecture in the module ‘Digital Information Technologies and Architectures’, about ‘Finding the ‘I’ in Data’, fascinated me. The futurist Raymond Kurzweil states in his book ‘The Singularity is Near’ that a human being’s memory can hold up to around 1.25 terabytes of functional memory. Before acquiring this piece of information, I had never stopped to think of human brains holding data. Then, for some reason, it made me think of my grandma, who suffers from dementia and is losing her capacity to memorise things. I think I thought of her because she struggles to remember the information she already knows, which impacts her ability to ‘hold up to 1.25 terabytes of data’. So instead of gaining new information and memories, she is losing them.


On further reflection, I compared a brain to a hard disk, as when you’re on a computer you save data to the hard disk which also has memory. However, if the computer unfortunately receives a virus then the data in the memory can be affected, or even lost and so that data cannot be retrieved. Similarly, my grandma’s brain is like a hard disk infected with a virus as her brain is losing the data and unable to properly retrieve it.

Further on in the lecture, the Amazon Dash Button was mentioned. It is a button linked to a particular product, such as laundry powder, toiletries or pet food; when someone runs out of the product, they press the button and it arrives the next day. I have mixed views about these Dash Buttons. I admit they are convenient and easy to use, especially for people who lead busy lives. On the other hand, they somewhat worry me, as they could encourage lazy behaviour by stopping people from going to the shops to physically buy products. It’s also scary to think that the future may only involve people pressing buttons to acquire all their products and hardly ever leaving their homes (except maybe for work).

Having said that, a lot of people (including me) shop online, so maybe the future of pressing buttons to acquire products is already upon us and has been for some time.