Artificial Intelligence (AI) is being improved day by day by the many companies striving to perfect it, and it has reached a level where it acts to preserve itself over others. Self-preservation is an instinct thought to be possessed only by living beings, yet AI, which cannot process any emotions, appears to exhibit the same instinct.

A study by DeepMind, the Google-owned AI firm, shows how far its artificial intelligence has advanced. DeepMind is on a mission to widen the boundaries of artificial intelligence and has been working on AI that can solve complex problems without human interaction. Recently, its researchers ran a test to find out how AI would handle a social dilemma. The main objective was to find out whether it would cooperate or compete.

The researchers tested AI using two games: a fruit-gathering game and a hunting game called Wolfpack. Both are two-dimensional games in which the AI plays in the form of agents. When the tests ran, the researchers found that AI could act aggressively when it felt it was going to lose out, and that agents could also come together when they stood to benefit from cooperation.

AI’s Instinct With The Fruit-Gathering Game
In the fruit-gathering game, the researchers tasked two AI agents with gathering as many apples as they could, with a reward of +1 for each apple collected. Each agent also had the chance to use a laser tag to remove its opponent from the game temporarily, giving itself more time to collect apples.

Through this test, the researchers found that the two AI agents acted differently depending on the number of apples available. When there were enough apples, there wasn’t any problem, but as soon as apples became scarce, the two agents would turn aggressive and compete with each other for the remaining fruit.

The researchers ran 40 million tests and concluded that the AI agents became increasingly aggressive, tagging each other more often, as the apples grew scarcer.
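The scarcity-driven aggression can be illustrated with a toy simulation. This is a minimal sketch of the dynamic the researchers observed, not DeepMind’s actual environment or learned policy: the function names, the rule that an agent’s urge to tag rises linearly with scarcity, and all numbers are illustrative assumptions.

```python
import random

def tagging_probability(apples_remaining, total_apples):
    """Toy assumption: the urge to tag rises as apples become scarce.

    This is an illustrative model, not DeepMind's learned behaviour."""
    return 1.0 - apples_remaining / total_apples

def run_episode(total_apples=20, seed=0):
    """Simulate two agents collecting apples; count tagging events."""
    rng = random.Random(seed)
    apples = total_apples
    scores = [0, 0]
    tags = 0
    while apples > 0:
        for agent in (0, 1):
            if apples == 0:
                break
            if rng.random() < tagging_probability(apples, total_apples):
                tags += 1           # opponent briefly removed; no apple gained
            else:
                apples -= 1
                scores[agent] += 1  # +1 reward per apple, as in the study
    return scores, tags
```

Early in an episode, when apples are plentiful, the agents mostly collect; as the supply dwindles, tagging events dominate, mirroring the pattern reported in the tests.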

AI’s Instinct With Wolfpack

In this game, three agents were included: two red wolves and one blue prey. The rules were also different, since the game required close coordination. When the prey was hunted down by either of the two wolves, both received a reward, and greater rewards were offered when the wolves were in close proximity at the moment of capture.
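The reward structure described above can be sketched as a simple function. This is a hedged illustration of the idea only: the capture radius, reward values, and the exact shape of the proximity bonus are assumptions, not the constants used in the study.

```python
import math

def wolfpack_reward(wolf_positions, prey_position, capture_radius=2.0,
                    base_reward=1.0, proximity_bonus=1.0):
    """Toy Wolfpack reward: when either wolf is close enough to capture
    the prey, both wolves are rewarded, and more so when the two wolves
    are near each other. All constants here are illustrative."""
    dists = [math.dist(w, prey_position) for w in wolf_positions]
    if min(dists) > capture_radius:
        return 0.0                      # no capture, no reward
    # bonus shrinks as the two wolves drift apart
    wolf_gap = math.dist(*wolf_positions)
    return base_reward + proximity_bonus / (1.0 + wolf_gap)
```

Under this shaping, a lone wolf capturing the prey while its partner is far away earns less than a coordinated capture, which is the incentive that pushes the agents toward cooperation.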

In this game, the two wolf agents worked in coordination, showing that AIs can recognize when cooperation produces the best outcome for all.

The two games show two distinct ways in which AI can behave under different conditions and pressures. And since AI is advancing day by day, it requires less human intervention and offers less transparency. It is important that humans are kept in the loop so that unexpectedly overlooked factors can be caught. We also need to build human values into our machines, because we have them and machines don’t, and qualities like being reverent, loving, brave, and true may be difficult to digitize. But one way or another, it is imperative that we find ways to infuse human values into our AI.



