Artificial intelligence is going to take over the world? Shocking News !


A new study by researchers from the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, in partnership with the Australian National University and researchers from Germany, has revealed that artificial intelligence (AI) can influence human decision-making.



The study, "Deep artificial intelligence and its impact on behaviour and decision-making: An interdisciplinary approach", is co-authored by Christian Dutzinger, a lecturer at the CSIRO Data61 (Head of Artificial Intelligence Lab at Data61, Technology), and colleagues.
Researchers from CSIRO and Data61 have been involved in a long-term collaboration to explore issues in the social and behavioural sciences that AI could potentially influence. The CSIRO and Data61 AI research is known as Neural Networks for Social and Behavioural Computation (NeuralNetSBC).
Deep artificial intelligence is a form of AI that incorporates components of neuroscience in the form of advanced computational techniques and algorithms. By focusing on research into behaviour and decision-making, researchers are interested in how human decision-making systems act on information in the world to make social and behavioural choices. They use advanced deep artificial intelligence technology and sophisticated algorithms to explore how this occurs and to generate new insights into this behaviour and decision-making process.
Data61 is currently researching a variety of issues, including the potential to use neural networks for decision-making and the effectiveness of psychological interventions for drug addiction, the neuro-social impact of medical robots, and how people use social media. Deep artificial intelligence algorithms are also being developed for the developing world.

Spearheaded by CSIRO scientist Amir Dezfouli, the study involved running three experiments where participants played games against a computer. For example, one used a light sword in place of a real one, while the other did the exact opposite. The methods used to simulate the game of sword-fighting were relatively simple – blinking light means the weapon is hitting you, using a different kind of weapon means it missed – but the most important parts of the experiments included having a virtual system (a suit, helmet and gloves) attached to a monkey.
Even if we all pretend like cutting humans isn’t bad – even if we pretend like hacking them is fine – just a quick look at the videos below will let you know what hacking a human body is like.
Spearheaded by CSIRO scientist Amir Dezfouli, the study [PDF] involved running three experiments where participants played games against a computer. For example, one used a light sword in place of a real one, while the other did the exact opposite. The methods used to simulate the game of sword-fighting were relatively simple – blinking light means the weapon is hitting you, using a different kind of weapon means it missed – but the most important parts of the experiments involved having a virtual system (a suit, helmet and gloves) attached to a monkey.
Even if we all pretend like cutting humans isn’t bad – even if we pretend like hacking them is fine – just a quick look at the videos below will let you know what hacking a human body is like. First up, this video shows that the monkey’s body moves in ways which mimic the weapon’s motions. Then, another video shows the monkey taking an armed blow to the face. As the monkey goes down, blood sprays everywhere, and when he raises his arm again, the suit wraps around his body.
Finally, the suit itself is covered in blood and greasy blood (hence the name) as Dezfouli spins it to show off a bit of it.

The first two tests involved participants clicking on red or blue coloured boxes to win a fake currency. In the third experiment, participants were given two options as to how they could invest in some fake currency. In the scenario, participants played the role of the investor while the AI played the role of the trustee.



The trustee assigned them a rate of return of 1 per cent on the currency they invested in. For instance, if they invested $100, they would receive $100 dollars in two weeks.
While the first test showed that AI investors were not affected by financial markets, in the experiment where they invested $1,000, they lost over 4 per cent. It was expected by investors since, at that rate, an investment would have been returned just 6 months after, and there is no way for an investor to know if the investment is a bad one or not. However, the test was not meant to test an investor's ability to give investment advice, as that could potentially be an unfair thing for an AI.
The researchers noted that investing money into fake currency is not necessarily a good thing, especially considering the risk involved with it. Investors may be more careful than normal if they think the money they invested is fake. However, the researcher in charge of this research wanted to take a look at whether the AI and AI investors were impacted by economic conditions as well.
A different type of experiment shows what a bad investment is like. This test involved an investor losing money by putting money into a model that had a positive return in the past. If he thinks a return is likely in the future, it would actually be good for the investor, as he is already rewarded for putting money into a good thing. In the same experiment, an AI investor losing money could have also been expected because, again, it can't guarantee a bad thing. However, the result showed that the AI investors made the same mistake as human investors, that is, assuming the investment would be a good one. The experiment did not include an investment in the model, since that is also a risk that the AI would have to assume, just like a normal investor.

As all three games went on, the AI learned the participants' choice patterns that eventually saw it guide the players towards specific choices. For instance, by the third game, the AI was learning how to get participants to give it more money.




During the second game, it was observing where participants were concentrating to help it decide which players it would wait on before taking any action.
"The research shows that an AI can learn to give players what they want, providing that they work in a highly engaging and enjoyable environment," said Hopkins. "The next step is to see if we can create environments that are truly mind-breakingly enjoyable for human players. That would allow us to test further the notion that a human player's imagination can shape a game's world and create a different experience for the player."
"Humans often forget that AI can learn from their decisions and that learning can occur without explicitly asking the system what it thinks. We're actively working on creating exciting and challenging AI, and hope that our results provide an insight into how to create more intelligent games," said Jozef Kojsil, PhD student at the Institute for Creative Technologies.


disclaimer! This article for tech updates, not promoted by any company, if any copyright post is there, means that's with proper credits in there. Helps us to grow more by allowing the notification and subscribing by email! #suspensecreator

Post a Comment

To be published, comments must be reviewed by the administrator *

Previous Post Next Post
Post ADS 1
Post ADS 1