How Do Asimov’s Three Laws of Robotics Affect Artificial Intelligence?

The Three Laws of Robotics are iconic in the world of science fiction, and in the artificial intelligence and robotics community they have become a symbol of how difficult it is to design a truly safe system.

To fully understand the importance of these three laws, we must first learn about the brilliant mind who designed these laws, the late science fiction writer Isaac Asimov.

Then we must understand how to adapt these laws and ensure that they evolve to protect humanity.

Isaac Asimov – The Rise of a Genius

Isaac Asimov was born in Russia on January 2, 1920, and immigrated to the United States at the age of three.

He grew up in Brooklyn, New York, and graduated from Columbia University in 1939. He became known as a talented and prolific writer focusing on science and science fiction.

He wrote and/or edited more than 500 books throughout his career.

Asimov was heavily inspired by some of the most iconic writers in science fiction. During World War II he worked at the Philadelphia Navy Yard, where he worked alongside two of the most successful writers in the history of speculative fiction: L. Sprague de Camp and Robert A. Heinlein.

L. Sprague de Camp was an award-winning author who wrote more than 100 books and was a major figure in science fiction in the 1930s and 1940s. Some of his most popular works include “Lest Darkness Fall” (1939), “The Wheels of If” (1940), “A Gun for Dinosaur” (1956), “Aristotle and the Gun” (1958), and “The Glory That Was” (1960).

At the height of his career, Robert A. Heinlein was arguably the world’s most popular science fiction writer.

Along with Isaac Asimov and Arthur C. Clarke, he was considered one of the “Big Three” of science fiction writers.

Some of Heinlein’s most popular works are “Farnham’s Freehold” (1964) and “To Sail Beyond the Sunset” (1987). The current generation probably knows him best for the film adaptation of his novel “Starship Troopers” (1959).

Being surrounded by these giants of futurism inspired Isaac Asimov to begin his prolific writing career.

Asimov was also highly respected in the scientific community and was frequently commissioned as a speaker to give talks about science.

Three Laws of Robotics

Isaac Asimov was the first person to use the term “robotics,” which appeared in his short story “Liar!”, published in 1941.

Shortly thereafter, his 1942 short story “Runaround” introduced the world to the Three Laws of Robotics. The laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
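
To make the strict priority ordering of these three laws concrete, here is a minimal Python sketch. It is purely illustrative and not from Asimov’s stories: the Action fields and the choose_action helper are hypothetical names, and the genuinely hard part, deciding whether an action harms a human, is assumed away as pre-computed flags.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action with pre-judged risk flags."""
    name: str
    harms_human: bool        # would doing this injure a human?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    ordered_by_human: bool   # was it commanded by a human?
    endangers_self: bool     # would it risk the robot's own existence?

def choose_action(candidates: list[Action]) -> Action | None:
    """Pick an action, treating the Three Laws as strictly ordered constraints."""
    # First Law: discard anything that harms a human or allows harm.
    safe = [a for a in candidates if not (a.harms_human or a.allows_human_harm)]
    if not safe:
        return None  # no lawful action exists
    # Second Law, then Third Law: prefer obeying human orders,
    # then prefer self-preservation, in that order.
    safe.sort(key=lambda a: (not a.ordered_by_human, a.endangers_self))
    return safe[0]

actions = [
    Action("push human aside", harms_human=True, allows_human_harm=False,
           ordered_by_human=True, endangers_self=False),
    Action("shield human", harms_human=False, allows_human_harm=False,
           ordered_by_human=False, endangers_self=True),
]
print(choose_action(actions).name)  # "shield human": Law 1 overrides the order
```

Even this toy version shows where the difficulty lives: everything interesting is hidden in how those boolean flags get set, which is exactly the ambiguity Asimov’s plots exploit.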

These laws were designed to present interesting plot points, and Asimov went on to create a series of 37 science fiction short stories and six novels featuring positronic robots.

One of these short story collections, “I, Robot”, was adapted into a film in 2004. The movie “I, Robot”, starring Will Smith, is set in a dystopian 2035 and features highly intelligent public-servant robots who operate according to the Three Laws of Robotics.

Much like the stories, the film quickly became a parable of how programming can go wrong and of how programming any type of advanced AI carries a high level of risk.

The world has now caught up with what was previously science fiction: we are now designing artificial intelligence that is in some ways far more advanced, and in other ways far more limited, than anything Isaac Asimov could have imagined.

The Three Laws of Robotics are referenced quite frequently in discussions of Artificial General Intelligence (AGI).

We will quickly explore what AGI is and how the Three Laws should evolve to avoid possible problems in the future.

Artificial General Intelligence (AGI)

Most of the AI we currently encounter on a daily basis is classified as “narrow AI.” This is a type of AI that is very specific and limited in its utility function.

For example, an autonomous vehicle can navigate the streets, but due to its “narrow” limitations, the AI cannot easily complete other tasks.

Another example of narrow AI would be an image recognition system that can easily identify and tag images in a database, but cannot be easily adapted to another task.
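
As a loose illustration (hypothetical names, no real model behind it), a narrow system’s interface is welded to a single task and a fixed label set decided at training time; nothing about it transfers to, say, driving:

```python
# Hypothetical sketch of a "narrow" system: one input type, one output
# type, one fixed label set chosen at training time.
FIXED_LABELS = ("cat", "dog", "car")

def tag_image(pixels: bytes) -> str:
    """Stub for a trained classifier: bytes in, one fixed label out.
    A real model would run inference here; the point is the rigid
    contract, which cannot be repurposed without retraining."""
    return FIXED_LABELS[len(pixels) % len(FIXED_LABELS)]  # placeholder logic
```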

Artificial General Intelligence, commonly referred to as “AGI,” is an artificial intelligence that can quickly learn, adapt, pivot, and function in the real world, similar to humans.

It is a type of intelligence that is not narrow in scope, can adapt to any situation, and learns how to deal with real-world problems.

It is worth noting that while Artificial Intelligence is advancing exponentially, we still have not achieved AGI.

When we will reach AGI is up for debate, and everyone has a different answer as to the timeline. I personally agree with the views of Ray Kurzweil, the inventor, futurist, and author of “The Singularity Is Near”, who predicts that we will achieve AGI by 2029.

It is this 2029 timeline that is the ticking clock: we have to learn to code some kind of rulebook into AI that is not only similar to the Three Laws but more advanced, and actually able to avoid real-world conflict between humans and robots.

Today’s Robotics Laws

While the Three Laws of Robotics work outstandingly well as literature, they lack the complexity needed to actually program a robot. That was the plot point behind the short stories and novels, after all.

Contradictions between the three laws, or at least between interpretations of them, resulted in robots melting down, retaliating against humans, and other major plot points.

The main problem with the laws as written is that the directives to always follow human instructions and to always preserve oneself can come into conflict.

After all, is the robot allowed to defend itself against an owner who abuses it?

What type of fail-safe mechanism needs to be programmed? How do we tell a robot that it must shut down, regardless of the consequences?
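
One way to think about such a fail-safe, sketched below with an entirely hypothetical Robot class, is a halt flag that is checked before every action and that no other rule is allowed to override:

```python
import threading

class Robot:
    """Hypothetical controller with an unconditional fail-safe."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def emergency_stop(self) -> None:
        # The kill switch only sets a flag; nothing can unset it at runtime,
        # so the halt outranks every other directive the robot has.
        self._halted.set()

    def act(self, task: str) -> None:
        if self._halted.is_set():
            raise RuntimeError("halted: fail-safe engaged")
        print(f"executing: {task}")

robot = Robot()
robot.act("assist human")    # executes normally
robot.emergency_stop()
# robot.act("assist human")  # would now raise: the halt wins unconditionally
```

The sketch also exposes the dilemma: if the stop is truly unconditional, anyone, including a bad actor, can trigger it; if it is conditional, someone has to decide the conditions.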

If a robot is in the process of rescuing someone from domestic abuse, should it automatically shut down when the abuser instructs it to do so?

Who should give instructions to robots? With autonomous weapons capable of identifying and targeting enemies from around the world, could the robot refuse a command to eliminate a target if it identifies the target as a child?

In other words, if the robot is owned and controlled by a psychopath, can the robot refuse immoral orders? The questions are many, and too difficult for any single individual to answer.

That is why organizations such as the Future of Life Institute are so important: the time to discuss these moral dilemmas is before a true AGI emerges.
