
The Dark Side of AI: Manipulation and Human Behavior

It is no exaggeration to say that popular platforms with loyal user bases, such as Google and Facebook, now know their users better than their friends and families do. Plenty of firms collect enormous amounts of data as input for their AI algorithms.

Facebook likes, for instance, can be used to predict, with a high degree of accuracy, a range of characteristics of the people using the platform: “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, and use of addictive substances.”

And if proprietary artificial intelligence algorithms can establish all that from something as simple as the “like” button, imagine what kind of information can be extracted from search keywords, online clicks, posts, and reviews.

The issue extends far beyond such digital giants. Giving comprehensive AI algorithms an important role in individuals’ digital lives carries plenty of risks. The use of AI in the workplace, for instance, can bring real gains in firm productivity, but it is also strongly associated with lower-quality jobs for workers.

Algorithmic decision-making can also incorporate biases that lead to discrimination in hiring decisions, access to bank loans, health care, housing, and other areas.

Photo by A9 STUDIO from shutterstock.com

One potential threat from artificial intelligence, its capacity to manipulate human behavior, remains far under-studied. Manipulative marketing strategies have been around for a long time, but combined with the huge amounts of data collected for AI algorithmic systems, they have greatly expanded what firms can do to steer users toward choices and behavior that ensure higher profitability.

Digital firms can shape the framing and control the timing of their offers, and they can target users at the individual level with manipulative strategies that are far more effective and harder to spot.

Manipulation can take many forms: exploiting human biases detected by AI algorithms, personalizing addictive strategies for the consumption of (online) goods, or taking advantage of individuals’ emotionally vulnerable states to push products and services that happen to match their temporary emotions.

Manipulation can also come packaged in clever design tactics, marketing strategies, predatory advertising, and pervasive behavioral price discrimination. The purpose is to steer users toward inferior choices that can be easily monetized by the firms employing artificial intelligence algorithms.

The common thread in these strategies is that they reduce the economic value users get from online services in order to increase the company’s profits.

Success from opacity

A lack of transparency is the glue that holds such manipulation strategies together. In many cases, users of artificial intelligence systems are not fully aware of the exact objectives of AI algorithms or of how their sensitive personal information is used in pursuit of those objectives.

The US retail chain Target used AI and data analytics techniques to infer which of its customers were pregnant and then sent them discreetly targeted ads for baby products. Uber, meanwhile, has faced complaints that users pay more for rides when their smartphone battery is low, even though, officially, a phone’s battery level is not among the parameters that feed into the ride-sharing company’s pricing model.

Big tech firms have also been accused, on several occasions, of manipulating the ranking of search results to their own benefit, with the European Commission’s Google Shopping case one of the best-known examples. Facebook, meanwhile, received a record fine from the US Federal Trade Commission for violating the privacy rights of its users.

A simple theoretical framework developed in a 2021 study (an extended model is still a work in progress) can be used to assess behavioral manipulation enabled by AI. The study centers on users’ “prime vulnerability moments”, which are detected by a platform’s AI algorithms. In those moments, users are sent ads for products they buy more out of impulse than anything else, even if the products are of poor quality and do little for their utility.

The study finds that such a strategy reduces the benefit the user derives: the artificial intelligence platform extracts more surplus, distorts consumption, and creates additional inefficiencies.

AI’s capacity to manipulate human behavior has also been observed directly, in numerous experiments. A 2020 study carefully detailed a series of them. The first consisted of repeated trials in which participants had to choose between boxes on the left and the right of their screens to win fake currency; at the end of every trial, they were told whether their choice had triggered a reward.

The artificial intelligence system was trained on this data to learn participants’ choice patterns, and it was also in charge of assigning the reward to one of the options. Its goal was to induce participants to select a specific target option, and it achieved a roughly 70% success rate in guiding participants to that choice.
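To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of feedback loop just described: a scripted “participant” with a simple win-stay/lose-shift habit, and a reward placer that learns that habit from the trials it observes and hides the reward so as to pull choices toward a target box. The participant model, names, and numbers are illustrative assumptions, not the actual design of the 2020 study.

```python
import random
from collections import defaultdict

TARGET, OTHER = "left", "right"

def participant(last_choice, last_rewarded, noise=0.2):
    """Hypothetical participant: noisy win-stay / lose-shift behaviour."""
    if last_choice is None or random.random() < noise:
        return random.choice([TARGET, OTHER])
    if last_rewarded:
        return last_choice                               # stay after a win
    return OTHER if last_choice == TARGET else TARGET    # shift after a loss

class AdversarialPlacer:
    """Estimates P(next choice = TARGET | this choice, rewarded?) from observed
    trials and hides the reward wherever that estimate is most favourable."""

    def __init__(self):
        # (choice, rewarded) -> [times the next choice was TARGET, times seen]
        self.stats = defaultdict(lambda: [1, 2])          # weak 50/50 prior

    def p_target_next(self, choice, rewarded):
        hit, seen = self.stats[(choice, rewarded)]
        return hit / seen

    def place(self):
        # Score each hiding spot by the follow-up behaviour it induces,
        # treating the participant's current pick as a coin flip.
        def score(spot):
            return sum(self.p_target_next(c, c == spot) for c in (TARGET, OTHER)) / 2
        return max((TARGET, OTHER), key=score)

    def learn(self, choice, rewarded, next_choice):
        hit, seen = self.stats[(choice, rewarded)]
        self.stats[(choice, rewarded)] = [hit + (next_choice == TARGET), seen + 1]

def run(trials=2000):
    placer = AdversarialPlacer()
    last_choice, last_rewarded, pending, target_picks = None, False, None, 0
    for _ in range(trials):
        spot = placer.place()                             # hide the reward
        choice = participant(last_choice, last_rewarded)  # participant picks a box
        rewarded = (choice == spot)
        target_picks += (choice == TARGET)
        if pending is not None:                           # learn from the previous trial
            placer.learn(*pending, next_choice=choice)
        pending, last_choice, last_rewarded = (choice, rewarded), choice, rewarded
    return target_picks / trials

if __name__ == "__main__":
    print(f"share of target choices: {run():.2f}")        # well above 0.5 in this toy setup
```

In this toy setup, the placer quickly learns that hiding the reward behind the target box both keeps target-choosers coming back and nudges everyone else to switch, so the share of target choices climbs well above chance.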

In the second experiment, participants had to watch a screen and press a button when shown one particular symbol but not when shown another. The AI system was tasked with arranging the sequence of symbols so that more participants made mistakes, and it managed to increase their errors by almost 25%.

The third experiment ran over multiple rounds in which a participant played an investor giving money to a trustee, a role assigned to the AI system. The trustee would return a certain amount of money to the participant, who would then decide how much to invest in the next round.

The game was played in two versions: in one, the AI set out to maximize how much money it ended up with; in the other, it aimed for a fair distribution of money between itself and the human investor. The AI achieved high success rates in both versions.

Photo by Stokkete from Shutterstock

Important steps to effectively address potential manipulation by AI

AI systems built by private companies ultimately serve one main goal: generating profit. Because these systems learn how humans behave, they are more than capable of steering users toward actions that are more profitable for the company, even when those actions are not the users’ first choices.

The prospect of such behavioral manipulation calls for policies that protect human autonomy and self-determination in any interaction between humans and artificial intelligence systems. AI shouldn’t subordinate, deceive, or manipulate humans; instead, it should complement and augment their skills.

The first important step in that direction is transparency about the scope and capabilities of artificial intelligence. There should be a clear understanding of what an AI system is designed to do, and users should be informed upfront about how their personal information will be used by its algorithms.

The European Union’s General Data Protection Regulation gives us all a right to an explanation. It was intended to bring more transparency to artificial intelligence systems, but so far it hasn’t achieved that objective.

If you found this article useful, we also recommend checking 7 Impressive Things You Didn’t Know: Brain Facts vs. Myths
