The problem lies in the principle of performance and the mechanisms that embody it: social networks encourage their participants to be as open as possible and to share any personal data, often deeply intimate and never intended for outsiders. The rules of social platforms are opaque and rarely obvious to users, which lets the platforms exploit the collected data for their own benefit to the fullest.
A platform with a vast number of users communicating with each other is a magnet for advertisers. Users reveal their innermost secrets, and social networks analyze and categorize this data and allow third parties to obtain it for marketing purposes.
Recall the twentieth century and the start of the twenty-first: no social networks yet, no Internet reaching every corner of the planet, no digital media. Still, the threat, unnoticed by careless people, was already there. How could today's politicians, journalists, and media stars have imagined that pictures of awkward parties and similar events would one day resurface as compromising material, seriously discussed by all the leading media on the planet? Did anyone think it could affect their career, or even their freedom?
The point is that the protagonists of these scandals did not realize that photos and videos can be stored for a very long time. They simply did not suspect that public mood and public policy could change so much. No one could have imagined that the intolerance of future moral systems would reach such a degree that the watchful eye of public morality would comb through the archives.
This is not only about countries where political regimes monitor every statement their citizens make. In some places, grim officials come for you to investigate an insult to the country's leader; in others, you risk losing your job, your social standing, and your ability to pay the bills when offended groups whip up hysteria over your disagreement with their dominant point of view.
Anyone acting within the network must take this threat into account. The positivity and ease of a person's presence on social networks has its dark side: powerful mechanisms hidden in those depths record everything about their users. With a couple of clicks, arrays of personal information can be extracted at the request of interested parties.
Credit scoring systems powered by AI can estimate a potential customer's creditworthiness and reliability using nothing but data taken from his or her social network account. Artificial neural networks are so effective at analyzing behavioral factors that there are even examples of scoring based on smartphone use: time spent in various types of apps, activity throughout the day, SMS content, battery consumption, and many other signals.
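To make this concrete, here is a minimal toy sketch of how behavioral signals like those above could feed a score. Every feature name, weight, and threshold here is a made-up illustration, not the method of any real scoring product; actual systems use trained models over far richer data.

```python
# Toy behavioral-scoring sketch: a hand-weighted logistic model over
# hypothetical smartphone-usage features. All names and weights are
# illustrative assumptions, not any real credit-scoring system.
import math

# Hypothetical features extracted from device usage.
FEATURES = {
    "finance_app_minutes_per_day": 12.0,    # time in banking/finance apps
    "night_activity_ratio": 0.15,           # share of usage between 0:00-6:00
    "avg_battery_drain_pct_per_hour": 8.0,  # proxy for heavy, erratic use
    "gambling_app_installed": 0.0,          # 1.0 if present, else 0.0
}

# Illustrative weights: positive raises the score, negative lowers it.
WEIGHTS = {
    "finance_app_minutes_per_day": 0.05,
    "night_activity_ratio": -2.0,
    "avg_battery_drain_pct_per_hour": -0.05,
    "gambling_app_installed": -1.5,
}
BIAS = 0.5

def behavioral_score(features: dict) -> float:
    """Map behavioral features to a 0..1 'reliability' score via a sigmoid."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

score = behavioral_score(FEATURES)
print(f"reliability score: {score:.2f}")
```

The unsettling point the sketch illustrates is how little is needed: a handful of passively collected usage signals is enough to produce a number that can be treated as a judgment about a person.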
Corporations and government agencies often monitor the messages their employees publish in personal profiles. It is common for an organization's policy to impose requirements on employees' posts. There are examples of certain platforms being banned outright for law enforcement employees, and cases where an organization's administration discovers “inappropriate content” in an employee's posts and applies sanctions.
Articles by HR and security experts offer tips for job seekers, and they often recommend maintaining social network accounts carefully.
Some experts even propose creating “decoy profiles” filled with relevant content meant to characterize the person in the right way.
Our online image is that of a person who wants to be seen exactly that way among millions of others also trying to be a better version of themselves. Communication platforms benefit from these transformations, because the business of social networks is advertising and user data. Our perfected selves are richer and more carefree. These artificial people consume more, are sensitive to status and fashionable products, and display more prosperity than they really have. And advertisers on social networks love these people and fill their feeds with relevant content.
When the desire for socially approved accounts becomes widespread, AI systems will learn to find the truth in the stream of “relevant” messages tailored to viewers' expectations. Then comes the turn of “distortion” techniques designed to confuse smart machines and keep them from cracking the lie. In response, artificial neural networks will begin to look for those patterns... and so on. Perhaps this ridiculous carnival of lies and hypocrisy will someday end. In the meantime, let's check our Facebook profiles, our old posts and likes, because the future is dangerous.
Image courtesy of Paprika Ads