Friday, June 23, 2023

FIXING THE INTERNET 3: Misinformation

The 2020 election and its aftermath have intensified the conflicts over biased information. People are concerned that distorted or false information has a negative impact on the integrity of society itself. There have been congressional hearings, innumerable articles in print and online media, and recently a documentary, THE SOCIAL DILEMMA, reviewing the situation. (The film, a mixture of interview, narrative, and dramatization, illustrates how easy it is to confuse the three!) It presents a dramatized family whose teen struggles with "internet addiction," and imaginary characters representing the AI inside search engines. There are focused interviews with former employees of Google and other internet companies, including Jaron Lanier, who has written a book, WHO OWNS THE FUTURE?, about this.

The interviewees, as insiders, make several important points, which clarify the problem and define it in a different way. The business model of the social media companies is selling you, the user, and your information to advertisers. Personal information is the commodity. Facebook/Instagram collects it directly, Google collects it through search activity, and Twitter collects it through tweets and retweets. The goal of all of these companies is increasing your screen time to get more of you to sell. The more time you spend selecting on screen, the more information about you they acquire. Algorithms are designed to identify what is important to you, the user, and sell it.
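
To make this concrete, here is a minimal sketch, in Python, of how an engagement log might be distilled into a sellable interest profile. The topic names, the use of screen time as the signal, and the scoring rule are all invented for illustration; no platform's actual code is implied.

```python
from collections import Counter

# Hypothetical engagement log: (topic, seconds on screen) pairs.
# Real platforms log far richer signals; this is illustrative only.
engagement_log = [
    ("gardening", 45), ("gardening", 120), ("politics", 5),
    ("cooking", 60), ("gardening", 200), ("cooking", 30),
]

def build_interest_profile(log):
    """Aggregate screen time per topic into a ranked interest profile."""
    profile = Counter()
    for topic, seconds in log:
        profile[topic] += seconds
    return profile

profile = build_interest_profile(engagement_log)

# The ranked profile, not the content, is the commodity: an advertiser
# pays to reach the user behind it.
print(profile.most_common())
# -> [('gardening', 365), ('cooking', 90), ('politics', 5)]
```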

The goal of advertisers is to sell their product to likely buyers, targeting selected users and groups as precisely as possible. The internet companies try to define the groups of maximum value to advertisers using AI algorithms. The core model is making money by selling user information and targeting ads and information to defined users. This is a financial model, and it cannot be reversed except by changing the financial incentives or penalties. Lanier, along with others, has proposed that users share in the revenue from the use of information about themselves. This is one solution to the incentive model, but it would be hard to ensure that the companies did it equitably, and it does not address several other issues. California has instituted an "opt out" control for users, which must be user-initiated.

When translated into the political arena, another kind of marketing emerges. Russian, Chinese, and other information sources do not "hack" Facebook. They pay to be advertisers while hiding their identity, or they create "bot" sites that spread information as fake "personal sources." Facebook's and other sites' algorithms identify user preferences for these sources and feed users, and groups of users, the data they select. Instead of targeted advertising for products or businesses, they market and target political influence. This is not fundamentally different from other political advertising on television at election time, but social media algorithms are more efficient at targeting viewers. (This is a mixed benefit, since they do not always target "swing voters.")

The concept of "editorial control," borrowed from newspaper and TV media, makes no sense here, because the distribution of this data by algorithm is already precisely controlled. Users select what they wish by clicking, and get what they want, and more of the same. THAT is the problem. If users do not select certain media, the AI quickly diminishes its presentation; when users continue to select it, it is expanded. This is how the system is designed, and the user plays a basic role in the outcome. This is widely misunderstood in Congress. Introducing government control authorizes whatever group is in power to totally control the distribution of information in a very powerful medium. This is the Liberal answer, reflecting the persistent belief that editorial control in newspapers and network TV "protected" viewers from "wrong information." Conservatives object to control, fearing that their sources would be the first to be suppressed. Both sides favor restricting the expression of information, because the selective choice of information reinforces the other side's bias!
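
The select-and-amplify loop described above can be sketched in a few lines. This is a toy model under assumed boost and decay constants, not any company's actual ranking system: what the user clicks is expanded, what the user ignores is diminished.

```python
# Toy select-and-amplify loop. The boost and decay constants are
# assumptions for illustration, not any platform's real parameters.
weights = {"source_A": 1.0, "source_B": 1.0, "source_C": 1.0}
BOOST, DECAY = 1.5, 0.7  # expand what is clicked, diminish what is not

def update(weights, clicked):
    """One round: the selected source grows, every other source shrinks."""
    for source in weights:
        weights[source] *= BOOST if source == clicked else DECAY

# A user who keeps clicking source_A soon sees little else.
for _ in range(5):
    update(weights, clicked="source_A")

print({s: round(w, 3) for s, w in weights.items()})
# -> {'source_A': 7.594, 'source_B': 0.168, 'source_C': 0.168}
```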

The problem for society is a secondary effect of the process. Targeting information and advertising to users means that subgroups self-select to get more information supporting their views, differentiating themselves from other users and groups. The algorithm inherently separates information delivery by the selections of individuals, producing a division in "consensus reality." Previous media, including newspapers and network TV, aggregated the public into a small number of large subgroups by editorial policy. Social media (and talk radio and blogs) are more fragmented, and less obviously under anyone's control.
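
A small simulation, using the same toy loop and assumptions as the sketch above, shows this secondary effect: users whose clicks differ end up with feeds that share nothing, a division in "consensus reality" in miniature.

```python
def top_feed(clicks, rounds=20, boost=1.5, decay=0.7, k=2):
    """Run the toy loop above and return the k sources that dominate the feed."""
    weights = {s: 1.0 for s in ("A", "B", "C", "D")}
    for i in range(rounds):
        clicked = clicks[i % len(clicks)]
        for s in weights:
            weights[s] *= boost if s == clicked else decay
    return set(sorted(weights, key=weights.get, reverse=True)[:k])

# Two users with different click habits...
user_1 = top_feed(clicks=["A", "B"])
user_2 = top_feed(clicks=["C", "D"])

# ...end up being shown entirely different material.
print(user_1 & user_2)
# -> set(): no shared sources at all
```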

The French are attempting to address this by educating the user, especially children and teens, to understand the selection process and to protect themselves by critically evaluating messages (https://www.nytimes.com/2018/12/13/technology/france-internet-literacy-school.html). The US has resisted this, because exploiting children and teens economically has been pervasive for two generations. Any effort to educate consumers to resist the promotional distortion of advertising undermines the economic premise of social media. And some studies suggest that rational thinking does not reduce the divide (https://www.nytimes.com/2019/01/19/opinion/sunday/fake-news.html). Two recent books explore other factors that play an important role (The Misinformation Age: How False Beliefs Spread, by Cailin O'Connor and James Owen Weatherall; Down to Earth: Politics in the New Climatic Regime, by Bruno Latour). A study of public opinion about the Trump tax cuts casts doubt on the claim that the public is unable to understand the issues (https://www.newyorker.com/news/our-columnists/fighting-fake-news-is-not-the-solution).

The user does have power to control the process. My own strategy has been to use FACEBOOK but to limit my feed by deleting undesirable ads and information. This results in a highly focused, satisfactory experience for me, with limited marketing value, which makes me a user of little value to FACEBOOK. During the last election, I got almost no political ads or promotions except re-posts by friends. The user can interact with the algorithm to shape the experience; that is how the algorithm is designed. But the user must do this mindfully and intentionally to block undesired information. This process does not address the biasing of information; it just lets the user select his or her preferred bias.
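
This pruning strategy can be expressed in the same toy vocabulary. Assuming, purely for illustration, that a deliberate "hide" acts as a strong negative signal, the sketch below shows how the remaining profile becomes narrow and of little marketing value.

```python
# Toy model of the pruning strategy: a deliberate "hide" is a strong
# negative signal. Topic names and the zeroing rule are assumptions.
feed_weights = {"friends": 3.0, "political_ads": 2.5, "promotions": 1.8}

def hide(weights, topic):
    """Model an explicit 'delete this ad' click as zeroing the topic."""
    weights[topic] = 0.0

for unwanted in ("political_ads", "promotions"):
    hide(feed_weights, unwanted)

# What remains is a focused feed with little left to sell.
print(feed_weights)
# -> {'friends': 3.0, 'political_ads': 0.0, 'promotions': 0.0}
```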

This answers one issue of the internet: who controls what information I can receive? But it does not answer what veracity that information carries: is it true or false? That will be addressed in another musing of OBIRON.



