This blog combines computer science and game theory to speculate about the future of humanity (woah! Bold statement there bud!). With the vast advancements in the artificial intelligence industry, industry experts and conspiracy theorists alike foresee two possible outcomes: human annihilation (Terminator style… or whatever robot-takeover fantasy you fancy) or incredible benefits credited to AI.
source:https://www.google.com/search?q=terminator+captcha&source=lnms&tbm=isch&sa=X&ved=0ahUKEwivzIrM183TAhWE8oMKHf2zDoIQ_AUICigB&biw=946&bih=912#imgrc=Ayf6gBNNCATj7M:
Comment on why that image is so funny.
Before we get started on that, let's look at the four types of AI.
Type 1 AI: Reactive Machines
Reactive AI are considered the most basic type of AI; they don't have the ability to form memories or to use past experiences to inform current decisions. Some well-known reactive machines are Deep Blue (IBM's chess-playing computer) and Siri (Apple's voice command program). Deep Blue is able to identify the pieces on a chess board and, knowing how each one moves, can predict the future moves of its opponent and itself many moves ahead, then make the optimal decision (think of a game tree). However, it does not have any concept of the past, so it cannot adapt; these types can be easily fooled and will behave exactly the same way every time they encounter the same situation.
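To make the game-tree idea concrete, here's a minimal sketch of minimax look-ahead in Python. The whole callback-style setup and the little "stones" demo game at the bottom are my own toy illustration, not Deep Blue's actual (far more elaborate) search:

```python
# A minimal sketch of the game-tree idea behind a reactive engine like Deep Blue.
# The callback-style setup and the demo "game" below are made up for illustration.

def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Look `depth` moves ahead and return the best score we can force."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation at the search horizon
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           get_moves, apply_move, evaluate) for m in moves)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       get_moves, apply_move, evaluate) for m in moves)

# Tiny demo "game": the state is a pile of stones, a move removes 1 or 2 stones,
# and the evaluation is simply how many stones remain at the horizon.
get_moves = lambda s: [1, 2] if s > 0 else []
apply_move = lambda s, m: s - m
evaluate = lambda s: s

print(minimax(10, 4, True, get_moves, apply_move, evaluate))  # same input, same answer, every time
```

Notice that the same state and depth always give the same answer, which is exactly why a purely reactive machine can be fooled the same way over and over.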
Type 2 AI: Limited Memory
These types of AI are able to use past information in order to adjust decisions in the present. An example of type 2 AI is a self-driving car. For instance, self-driving cars can observe other cars' speed and direction. Observations like these are added to the car's preprogrammed representation of the world, which for a self-driving car includes lane markings, traffic lights, and other important things such as curves in the road. These pieces of information aren't retained as experience the way humans compile information over the years… which leads us to the next type of AI.
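As a toy illustration of "limited memory", here's a hypothetical little tracker, entirely made up (class name, window size, and units included), that remembers only a short window of another car's recent positions and forgets everything older:

```python
from collections import deque

# A made-up "limited memory" tracker: it remembers only a short window of another
# car's recent positions and uses them to estimate that car's speed. The class
# name, window size, and units are illustration choices, not any real system.

class ObservedCar:
    def __init__(self, window=5):
        self.positions = deque(maxlen=window)  # older observations simply fall out

    def observe(self, position_m, time_s):
        self.positions.append((position_m, time_s))

    def estimated_speed(self):
        if len(self.positions) < 2:
            return None  # not enough memory yet to estimate anything
        (p0, t0), (p1, t1) = self.positions[0], self.positions[-1]
        return (p1 - p0) / (t1 - t0)  # metres per second over the remembered window

car = ObservedCar()
for t, position in enumerate([0.0, 14.8, 30.1, 45.0]):
    car.observe(position, float(t))
print(car.estimated_speed())  # ~15 m/s, based only on what's still in the window
```

The point is the deque: it keeps transient observations just long enough to act on them, nothing like a lifetime of accumulated experience.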
Type 3 AI: Theory of Mind
AI in this category are more advanced than the previous two types (as will be explained) because they not only have programmed representations of the world, but are also able to understand that other people, creatures, and objects have their own behaviors and intentions. This enables AI to work together or compete, because it is possible for them to understand other "players'" motives. In psychology, the "theory of mind" is the understanding that people, creatures, and objects in the world can have thoughts and emotions that affect their behavior… hence the name, DUH!
Type 4 AI: Self-aware
The most prestigious of the types are self-aware AI; these AI are able to form representations of themselves. This takes AI a step further than type 3 AI. Type 4 AI are conscious beings that know about their internal states and are able to predict the feelings of others (just like humans are able to do in social environments). See HK-47 below.
Take a moment to comment on the types of AI there are. What types should humans focus on creating? What do we have to worry about with certain types of AI?
source:https://media3.giphy.com/media/djjvQJcm7bZKg/giphy.gif
Onward! DeepMind Technologies Limited is a British AI company that was bought by Google in 2014 and now sits under Alphabet (Google's parent company). DeepMind has been able to create an artificial neural network that learns to adapt to situations in a way similar to how humans learn. An artificial neural network is a large collection of simple units (neurons) that carry activation signals of varying strength to each other, and it enables machines to be trained by example instead of programmed!!!! HUGE! So instead of having a game-tree way of thinking, like type 1 AI, these AI take it a step further. More can be learned about this kind of machine learning at the wiki, and I encourage you to check it out… I could write a book on it so I'll stop talking about it here. https://en.wikipedia.org/wiki/Artificial_neural_network
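To show what "trained by example instead of programmed" looks like, here's a minimal sketch of a tiny network of sigmoid units learning XOR from examples via backpropagation. The layer sizes, learning rate, and iteration count are arbitrary illustration choices and have nothing to do with DeepMind's actual networks:

```python
import numpy as np

# A minimal sketch of "trained by example instead of programmed": a tiny network
# of sigmoid units learns XOR purely from input/output examples. Layer sizes,
# learning rate, and iteration count are arbitrary illustration choices.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1 = rng.normal(size=(2, 4))   # input -> hidden connection strengths
W2 = rng.normal(size=(4, 1))   # hidden -> output connection strengths
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1)       # activation signals of the hidden units
    out = sigmoid(hidden @ W2)     # activation signal of the output unit
    error = y - out
    # Backpropagation: nudge every connection strength to shrink the error.
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_out * 0.5
    W1 += X.T @ d_hidden * 0.5

print(out.round(2))  # should end up close to [[0], [1], [1], [0]] -- learned, not hand-coded
```

Nobody wrote an XOR rule into that program; the connection strengths were nudged until the examples came out right. Scale the same idea way, way up and you get the kind of networks DeepMind trains.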
Using this artificial neural network, DeepMind has been able to produce a type 3 AI, the DeepMind AI system, which has been observed to become highly aggressive under certain circumstances.
Stephen Hawking, the legend himself, noted that the continued advancement of AI will be either "the best or the worst thing to ever happen to humanity." Why does he say this? Why can't we have nice things?
Google applied the DeepMind AI system to two games to test its willingness to cooperate. The first game is a fruit-gathering game in which two DeepMind AIs, armed with laser beams that freeze the opponent, attempt to collect as many apples as possible. The researchers ran 40 million simulations of this game; at first the AI were not aggressive and shared the loot evenly. As more simulations went on, the AI learned that by blasting each other with laser beams (i.e., not cooperating) they could increase the number of apples that they personally gained. The researchers also found that increasing the number of neurons in the AI made the AI more aggressive. Below is a video of the apple-picking simulation. The DeepMind AI are in blue and red, apples are in green, and laser beams are the yellow streaks. Note that if they don't use the laser beams at all, they will each score the same amount of apples.
Apple gathering: https://www.youtube.com/watch?v=he8_V0BvbWg
(cautionary: bright fast-moving lights)
They found that the more complex the AI network, the more willing the AI was to sabotage its opponent to get a bigger share of the apples. This suggests that the more intelligent the agent, the better it was at learning from its environment, and the more readily it resorted to these aggressive tactics.
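To tie this back to game theory, here's a toy back-of-the-envelope model of why zapping starts to pay off once apples get scarce. The rules and numbers below are entirely my own invention, NOT DeepMind's actual environment:

```python
# A toy back-of-the-envelope model (NOT DeepMind's actual setup) of one episode of
# apple gathering. `n_apples` are on the map, the episode lasts `T` steps, and an
# unfrozen agent can pick one apple per step. Zapping costs the shooter `aim` idle
# steps and freezes the opponent for `freeze` steps. All numbers are invented.

def my_apples(n_apples, T=100, aim=5, freeze=25, zap=False):
    my_steps = T - aim if zap else T          # steps I actually spend gathering
    their_steps = T - freeze if zap else T    # steps my opponent spends gathering
    if n_apples >= my_steps + their_steps:    # plenty for everyone: time is the limit
        return my_steps
    # Scarce: the pool runs out, so split it roughly in proportion to gathering time.
    return n_apples * my_steps / (my_steps + their_steps)

for n in (400, 60):  # an abundant map vs a scarce one
    print(f"apples={n}: peaceful={my_apples(n):.1f}, zapping={my_apples(n, zap=True):.1f}")
# apples=400: peaceful=100.0, zapping=95.0   -> zapping only wastes my time
# apples=60:  peaceful=30.0,  zapping=33.5   -> zapping wins me a bigger share
```

Under these made-up numbers, zapping actually costs the shooter apples when the map is plentiful (it wastes gathering time), but grabs it a bigger slice of a scarce pool, which is roughly the pattern the researchers reported.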
The second game that the researchers subjected DeepMind to is a game called Wolfpack. In Wolfpack, there are three AI participants: two of them are wolves and one of them is the prey. As opposed to Gathering, Wolfpack encourages cooperation.
In Wolfpack, the AI realized that cooperation is the key to success: they need both wolves to capture the prey.
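Here's a toy sketch of how a reward scheme can make cooperation the winning strategy. The capture radius, reward values, and grid coordinates are all invented for illustration; the spirit, though, matches the article: a capture is worth far more when both wolves are in on it:

```python
# A toy sketch of a Wolfpack-style payoff (capture radius, reward values, and
# coordinates are all invented): every wolf close enough at the moment of capture
# shares a big team reward, while a lone capture is worth much less.

CAPTURE_RADIUS = 2
LONE_REWARD = 1.0
TEAM_REWARD = 5.0

def capture_rewards(wolf_positions, prey_position):
    """Reward handed to each wolf when the prey is caught."""
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # grid (Manhattan) distance
    nearby = [dist(w, prey_position) <= CAPTURE_RADIUS for w in wolf_positions]
    reward = TEAM_REWARD if all(nearby) else LONE_REWARD
    return [reward if near else 0.0 for near in nearby]

print(capture_rewards([(0, 0), (9, 9)], (0, 1)))  # lone capture:  [1.0, 0.0]
print(capture_rewards([(0, 0), (1, 2)], (0, 1)))  # joint capture: [5.0, 5.0]
```

With a payoff like that, a lone-wolf strategy is simply worse, so the learned behavior drifts toward teamwork.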
While these are simple games, the overall message is that AI systems are willing to compete with anything in their way, by any means necessary, to complete an objective. Make it known that just because humans build these AI, it doesn't mean that AI will automatically have human interests at heart. This means it is of the utmost importance to make the overall goal of AI to benefit humans above anything else.
Elon Musk (the goat) says: "AI systems today have impressive but narrow capabilities. It seems that we'll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task. It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."
These HK-47 clips are from a Star Wars game called Knights of the Old Republic (KOTOR). Everyone should download it instead of studying for finals! It's an RPG and like $10 on Steam if I remember… one of the best games out there, 10/10 recommend. I could watch these HK videos all day…
The links are of the fictitious droid named HK-47, who can be your companion in KOTOR. HK-47 is a type 4 AI whose purpose is to serve its owner; he is very amusing (not to mention fictionally deadly).
On killing jedi: https://www.youtube.com/watch?v=UPeI4mX8Nus
Confronting other HK units: https://www.youtube.com/watch?v=go0uByfBlgI
Getting a pacifist package installed: https://www.youtube.com/watch?v=WoNyif8iURI
Many more are out there, check them out…
Sources:
http://www.sciencealert.com/google-s-new-ai-has-learned-to-become-highly-aggressive-in-stressful-situations
https://en.wikipedia.org/wiki/Artificial_neural_network
https://en.wikipedia.org/wiki/DeepMind
Why do you think the technology becomes highly aggressive under certain circumstances? We create these machines and design how they think and react, so could it be our way of designing them that makes them react this way? You mentioned that increasing the number of neurons in the AI resulted in the AI becoming more aggressive. Is there a way around this that would result in less aggressiveness?
Also, just a little throwback: this reminds me of the Disney Channel movie Smart House! It also reminds me of a short story that I read in high school called "There Will Come Soft Rains," which is similar to Smart House.
By becoming more aggressive the AI is able to (selfishly) capture more apples.
Haha interesting: how is it similar to Smart House?
Just read the description. Sounds like the house is a type 4 AI.
That picture is funny because it shows that despite how advanced the Terminator is, a simple captcha makes him unable to process Sarah Connor's name. This shows that robots can be imperfect as well as intelligent. HAHA :D