The news of Google AlphaGo's match against Lee Sedol surprised people worldwide. AlphaGo is a computer program developed by Google's DeepMind lab to play the board game Go, and Lee Sedol is a South Korean professional Go player ranked among the top three in the world. Before the match, many professionals predicted that AlphaGo could not beat him. The final score of 4:1 was quite dramatic.
Go is one of the oldest and most complex board games still played today. It has more than 2*10^170 legal ways to place a “stone” (the white or black pieces in the figure above). For comparison, the universe is about 15 billion years old, or roughly 5*10^17 seconds. That means no matter how fast a computer is, it cannot exhaustively predict the outcome of the game and beat a professional human player within a realistic time limit. Yet AlphaGo did. AlphaGo uses several strategy-decision functions, computational functions that choose the next move of the game, and at its core is deep learning. Deep learning means AlphaGo can study the game much as a human does: it gains experience from previous games. In that sense, AlphaGo appears able to think and make decisions like a human.
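The scale argument above can be checked with back-of-the-envelope arithmetic. The numbers here are the rough ones from the text plus an assumed, generously fast hypothetical machine; the point is only that brute-force search is hopeless.

```python
# Illustrative arithmetic only -- rough figures, not exact constants.
legal_positions = 10**170          # order of magnitude of legal Go positions
age_of_universe_s = 5 * 10**17     # ~15 billion years expressed in seconds
positions_per_s = 10**18           # assumed speed of a hypothetical computer

# Seconds a brute-force search would need at that speed:
seconds_needed = legal_positions // positions_per_s

# Even this absurdly fast machine would need far longer than the
# age of the universe to enumerate every position.
print(seconds_needed > age_of_universe_s)
```

This is why AlphaGo cannot simply search everything and must instead evaluate positions selectively.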


So, what does this news mean?

Artificial Intelligence (AI) is getting “smarter,” and it is starting to evolve. Voices warning of an AI threat are growing louder, and the people who believe the threat is real think that scenarios like the movies I, Robot or The Terminator will eventually come true.

But I don’t believe this threat theory.

An article in The New York Times, “Where Computers Defeat Humans, and Where They Can’t,” addresses the self-learning power of AlphaGo. AlphaGo’s victories illustrate the power of a new approach: instead of programming smart strategies into a computer, it builds systems that can learn winning strategies almost entirely on their own, by seeing examples of successes and failures. That ability may be why people believe in the threat of Artificial Intelligence and worry that AI will spin out of control, develop a self-conscious mind, and turn against human beings, its founders and masters. Amid such rapid development and speculation, even Stephen Hawking expressed concern about Artificial Intelligence in a BBC interview on Dec. 2, 2014.
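“Learning from examples of successes and failures” can be made concrete with a toy sketch. This is NOT AlphaGo’s actual algorithm (which combines deep neural networks with tree search); it is a minimal, hypothetical example where a program nudges its estimate of each move toward the outcomes it observes.

```python
import random

# Toy sketch of learning from wins and losses -- hypothetical moves and
# win rates, not anything from AlphaGo itself.
random.seed(0)
values = {"A": 0.0, "B": 0.0}          # learned worth of two made-up moves
true_win_rate = {"A": 0.8, "B": 0.3}   # hidden from the learner

for _ in range(2000):
    move = random.choice(list(values))            # try both moves
    won = random.random() < true_win_rate[move]   # observe success/failure
    # Nudge the estimate toward the observed outcome (small learning rate):
    values[move] += 0.01 * ((1.0 if won else 0.0) - values[move])

best = max(values, key=values.get)
print(best)  # the program comes to "prefer" the move that won more often
```

No strategy was programmed in; the preference emerged purely from observed outcomes, which is the core idea the article describes.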


However, I do not believe this threat theory. In an article entitled “To Beat Go Champion, Google’s Program Needed a Human Army,” Melanie Mitchell, a computer scientist at Portland State University and the Santa Fe Institute, pointed out the biggest difference between a human being and a computer program: pattern recognition. Computer programs are limited in their goals and methods. Since programs compute with binary, 0 and 1 only, and follow fixed rules, they will only do what humans want them to. For example, they cannot use the same rules and the same program to play different board games.

Yes, the real reason Artificial Intelligence cannot defeat humans is the different way each recognizes and solves a problem. Today’s AI cannot solve creative problems, but we can. We can solve problem A by studying and applying experience from problem B even if A and B are completely different, because humans can always make some connection between two things, no matter how tiny. AI cannot: Artificial Intelligence cannot solve the problem of Go from the results of chess. Moreover, the way the human brain works is far different from the way computers work. Despite mimicking how the human brain works, AI is still an algorithm designed for machines. Human brains store and pass information through countless neurons via complicated chemical changes, which are crucial to signaling, whereas AI is much simpler than that.

Artificial Intelligence does not create a new brain. It seems as if machines are “thinking” and then making decisions, but all they do is follow established rules and perform calculations. Artificial Intelligence is still computing, not “thinking”: high-speed calculation under a set of rules. Machines with high-level Artificial Intelligence can only proceed under pre-set rules. No matter how “smart” AI is, it has no ability to create rules of its own. But humans can create the rules, break the rules, and change the rules.
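The “following pre-set rules” point can be illustrated with a minimal sketch. The moves and scores below are invented for illustration; the point is that a “game-playing” program is just fixed, human-authored rules plus arithmetic, producing the same decision from the same input every time.

```python
# Minimal sketch: a deterministic "player" under pre-set rules.
def choose_move(moves, score):
    # Same rules + same input -> same decision, every single time.
    return max(moves, key=score)

moves = ["corner", "center", "edge"]
rules = {"corner": 3, "center": 5, "edge": 1}  # pre-set, human-written scores

print(choose_move(moves, rules.get))  # -> center
```

The program cannot question the scoring table or invent a new one; changing the rules requires a human to rewrite them.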

Regardless, the victory of Google AlphaGo represents a new level of Artificial Intelligence. But people should pay more attention to its advantages than worry about it becoming a threat to humans in the future. Through AlphaGo’s victory, researchers can now confirm that computers, armed with highly developed algorithms and massive computation, can make optimal decisions. In fact, I think AlphaGo’s victory rests on decent algorithms and high-quality hardware, and people can take advantage of those benefits. The essence of Artificial Intelligence is to study how to make computers perform tasks that used to be done only by people. We control AI. Artificial Intelligence is not a threat! Far from acting on instinct as humans do, AI is not that “smart” yet. So don’t worry that your coffee maker will try to kill you someday.


3 thoughts on “Artificial Intelligence vs. Human”

  1. This is a very entertaining social and cultural critique and comes at the perfect time. The cultural moment of which you speak – the defeat of Lee Sedol by Google AlphaGo – is a perfect way to bring up this debate about the power of Artificial Intelligence and the fear it stirs in some of us. I do think, however, that you need to say a bit more about the movies you mention, I, Robot and The Terminator, as those are perfect examples of what the general public thinks about AI. Likewise, you ought to use a picture from one of them to capture reader attention.

    Your critique works so well for two reasons, I think. First, you present to the reader the “They Say,” or what other cultural critics and academics are saying about this debate. I found myself reading the articles and becoming quite interested. Secondly, you make clear claims throughout the article. You start by explaining that what Google AlphaGo does mimics the “deep learning” and experiential learning that is supposed to be the sole property of humans, yet you conclude that this is still “computing, not thinking.” You make very clear why: “Since programs compute with binary, 0 and 1 only, and follow fixed rules, they will only do what humans want them to.” Furthermore, you expand on this notion: “Machines with high-level Artificial Intelligence can only proceed under pre-set rules. No matter how ‘smart’ AI is, it has no ability to create rules of its own. But humans can create the rules, break the rules, and change the rules.” These are both clear, well-worded statements explaining why AI is no threat.

    At the beginning you are losing some readers because you do not adequately explain the game of Go and use a complex numbering system (2*10^170) that seems more like a typographical error than anything else. You will end up confusing your readers! Likewise, although the article link at the beginning is a good one, I thought your introduction lacked enough information about the game of Go until I read the link. Good choice on the link! Unfortunately, not all readers will follow the link and thus will probably remain a bit confused about Go.

    Overall this is a good social critique, as it uses a recent event to enter into a fascinating discussion about the role of AI in our lives and attempts to sway readers by explaining that the fear is imaginary; we have much to gain from the pursuit of AI. I do wonder, however, if you could have commented a bit more on the role of humans in all of this. The real fear, I think, is not that AI itself will harm humans and human life, but that a human mastermind will program it to do certain things that harm us. Does your column suggest that even the designer cannot make the AI do those things, due to the limitations of AI? That is, by suggesting that WE are the ones in control (“only do what humans want to”), does that necessarily ease our tension about the potential dangers of AI? Great critique!


  2. Although I did not fully understand how the AlphaGo program actually works, I got the gist of the technological breakthrough that it entails. I love the sci-fi aspect and lore that comes with “the machines” becoming self-aware and taking over the world. I do not get tired of that concept. I know it has been overplayed throughout the years, yet I welcome the idea time and time again. The fact that the AlphaGo program is raising the fear of AI mutiny in real life intrigues me very much. I know that in your post you dismiss the claim that this can ever happen; however, the mere fact that we are reaching the point in society where we have to address the fear of an AI takeover is freaking cool! The little sci-fi nerd in me is jumping for joy that we are in an age where AI programs can be compared to the stuff we only see in movies, comics, and books.

