The Rise of AI

SAY NO TO THE ROBOTS!

AI Robotics

What is "roboethics"?

So you're probably wondering what on earth roboethics is. Well, it's pretty self-explanatory. Take the word "ethics" first: ethics concerns right and wrong and the morality of human behaviour, while "robo" refers to robotics. Roboethics therefore covers the ethical issues raised by artificial intelligence. It relates to how humans behave towards the robots we have created, and what is truly right and/or wrong in that relationship. It also asks the essential questions: how should we treat robots? How should we design them? Should we be allowed to destroy them? Should we treat them as the humans they were built to resemble?



We can create AI, but should we create AI?

There is no doubt that, in our day and age, we are very much capable of creating complete AI, as technology has grown more advanced than we ever imagined. For example, in today's wars we have created "drones": robots built as weapons, whose explosive strikes have killed civilians in conflicts around the world. If this is the case, how can we possibly argue that such machines would be a great benefit to our society? Would they be necessary, and could we even trust them?


First of all, artificially intelligent robots are built in our image, with human-like minds. This means there is a very high chance they would be given civil and human rights, since it would arguably be morally correct to do so. If they are given these rights, our society may well collapse. They would then be able to own property, defend themselves when they feel poorly treated, vote in elections, choose which work they prefer to do, take our jobs, and slowly take over everything humans once did alone, because their minds would be far more advanced than ours. They could possibly even take up all our space. Where will we humans go once we are no longer needed?

If these robots were created to help in fields such as medicine, the question of trust arises again. If they were to prescribe the wrong medicine or make a mistake during an examination, who do we blame: the creator or the robot? Would it be fair to blame the robot? The truth is, it only has the information and knowledge its creator provided.

We are also at risk because we don't know whether these robots will always obey their creators. And depending on who those creators are, how will we ever be certain that the robots can't turn against their owners, or against humanity as a whole? How will we be certain that people won't take the opportunity to create evil robots that kill, murder and misbehave?

In some religions, AI is also considered wrong, because it is seen as humans enhancing God's creations, and only God decides when a human life begins or comes to an end.

Although a few of these robots could be a great benefit to our society, there are too many risks involved, as stated above. Should we really risk our world for an experiment?




Artificial Intelligence (Film)

I have used this film as a reference to show why robots would be a threat to our society, and why we shouldn't risk making them a bigger part of our daily lives than they already are.

1. In A.I., when David woke up from under the sea, humans were extinct. Over time the robots had taken over the earth as they grew stronger, smarter and began to think with their own minds. They had practically formed their own race out of the robots we created. This illustrates one of humanity's main fears about the rise of AI.

2. Humans in the film didn't seem to have thought the creation of artificial intelligence through. Robots were continuously being built with various responsibilities and characteristics; in the film we see many kinds, from children to prostitutes. Not only this, but the humans turned against the very robots they had created, more intrigued by the idea of destroying their creations than by taking care of them.

3. David is another example: even though he was built exactly like a young boy with "human feelings", he was still not able to truly understand everything. Although he clearly showed love for his mother, it wasn't real; it was programmed into him. At another point he turned to his brother for help while being bullied, and ended up nearly drowning them both. How can we be certain what his intentions were? Did he not know what he was doing? Or did he simply not care, since he wasn't programmed to love his brother? We can't trust them. Ironically enough, even though David was built as a child for the family, only the brother was saved; David was left underwater. This also shows how unfair humanity was: they created a child with all kinds of emotions, and then treated that child like trash.

A.I. Artificial Intelligence - Pool Scene

Chinese Room

This famous thought experiment, presented by John Searle, challenges the idea of artificial intelligence in general. Searle's thought experiment supposes that he is placed in a closed room with a book or computer program that enables him to process Chinese characters slipped under the door and to produce Chinese characters in response. This convinces the Chinese speaker outside that he or she is actually talking to another Chinese-speaking human. However, Searle in fact does not speak any Chinese at all, so he cannot possibly understand what he is responding to. He then argued that a machine, a computer, or artificial intelligence itself would not understand the conversation either. Searle concludes from the experiment that "strong AI" is false, because such machines "don't have a mind and are therefore unable to think."
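To make the idea concrete, here is a minimal sketch in Python (a toy illustration only; the phrasebook entries are invented for this example, not taken from Searle). The program answers Chinese messages purely by matching symbols against a rulebook, much as Searle does inside the room, without understanding a word of what it says.

# Toy illustration of the Chinese Room: reply by pure symbol matching.
# The phrasebook below is invented for this example.
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Return a reply by looking it up in the rulebook; no understanding involved."""
    return PHRASEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Looks fluent from outside the room.

The point of the sketch is that the replies can look convincing from outside even though nothing in the process grasps their meaning, which is exactly Searle's objection to "strong AI".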

This is an excellent example of why we can't fully trust the minds of robots: we cannot be sure whether what they are thinking or believing is morally right or wrong. In that case, how can we be sure that anything they say is reliable?


Branches of Philosophy

Aesthetics:
The robots are unnatural through and through; they can even be seen as dehumanising. If they are built to look and act like humans, how are we to be sure who is who? Are we supposed to accept them into the community as one of us, or are they a separate entity? Not only this, but because the robots are made human-like, they can be seen as trying to be us. This complication can cause many disruptions and much confusion. For example, in the film A.I., the real son treats David like a toy, bullying him into doing things and taking advantage of him, when David really knew nothing other than the fact that he loved his mother.

Epistemology:

This branch of philosophy relates to how robots could take it upon themselves to make up their own minds and decisions, which puts us at risk of them taking over, because soon enough they would no longer need us humans. With their highly advanced minds, they could even create one another themselves. How do we know that what we create will do exactly as we wish? How do we know how smart and advanced these robots could become? We will never be sure. This risk is therefore unnecessary, and far too dangerous for us to control once it has spread like wildfire.