
Out of control by design: Big Tech deputizes Artificial Intelligence to secretly do its bidding

Special to WorldTribune, September 27, 2021

BIG TECH Watch

Commentary by Richard N. Madden

Despite commanding the greatest concentration of wealth ever known to humanity, Big Tech struggles to moderate content fairly and effectively. There is bipartisan agreement in the United States that it is failing.

The implications are global and so enormous as to defy quantification. How has this happened?

All Big Tech platforms rely to some degree on Artificial Intelligence (AI), as opposed to human intelligence, to manage editorial content. These companies “deputize” AIs to promote or demote that content at the AIs’ discretion.

As AI research improves and available computing power increases, content-moderation AIs are given ever-increasing responsibilities, in some cases operating with no human overseers whatsoever.

All Big Tech firms confront this phenomenon, and the conflict grows more detrimental as their platforms integrate ever deeper into society.

How Does AI Work and Why Does It Become An Issue?

Although AI is often portrayed as futuristic, science-fiction technology, practical artificial intelligence has existed since the 1950s. The research field has undergone many waves of interest from academia and industry, each followed by pessimism and frustration at the lack of progress.

Generally speaking, AI is a tool or technique used to solve a given problem in much the same way humans do: an AI perceives its environment and acts to maximize its chances of achieving a goal. Current research within the field is dominated by “Statistical AI”, an area which combines statistics and learning techniques to solve very specific problems. “Machine Learning” — the study of algorithms which improve themselves automatically through experience — is a central idea within Statistical AI research.
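To make “improving through experience” concrete, here is a minimal Python sketch of a one-parameter model trained by gradient descent; the toy data and learning rate are invented for illustration and bear no relation to any platform’s actual systems.

```python
# A minimal sketch of "learning from experience": a single-weight model
# that improves with every example it sees. Data and settings are invented.

def train(examples, lr=0.1, epochs=50):
    w = 0.0  # initial guess for the weight
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y      # how wrong the current prediction is
            w -= lr * error * x    # nudge the weight to reduce the error
    return w

# Toy data generated from y = 2x; the model recovers the factor 2
# purely from experience with the examples.
data = [(1, 2), (2, 4), (3, 6)]
print(train(data))  # prints a value very close to 2.0
```

Each pass over the data leaves the model slightly less wrong than before, which is the whole of “learning” in the statistical sense.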

Statistical AI is most successful at solving a problem when a clear goal and a large set of training data are provided. For example, an AI could be very successful at determining the best routes for delivery drivers to take every morning. The goal is well defined (complete all scheduled deliveries as quickly as possible), and the training dataset consists of previous delivery records and traffic information. On a given day, the AI would use previous experience to route drivers in the best way possible, with today’s successes and failures becoming part of tomorrow’s calculations.
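A toy version of that routing loop, written in Python, might look like the following; the route names, times and exploration rate are all invented for the sake of the example.

```python
import random

# Hypothetical history of total delivery times (in minutes) per route.
history = {"route_a": [52, 48, 55], "route_b": [61, 59], "route_c": [47, 50]}

def pick_route(history, explore=0.1):
    """Usually pick the route with the best average time; occasionally
    try another one so fresh information keeps flowing in."""
    if random.random() < explore:
        return random.choice(list(history))
    return min(history, key=lambda r: sum(history[r]) / len(history[r]))

route = pick_route(history)
todays_minutes = 49                    # measured once deliveries finish
history[route].append(todays_minutes)  # today's result feeds tomorrow's choice
```

Today’s successes and failures literally become part of tomorrow’s calculation: the appended measurement shifts the averages from which the next choice is made.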

For a clearly defined, unambiguous goal, Statistical AI excels.

However, for more nuanced and context-sensitive goals, such as “differentiate hate speech from free speech”, as Facebook’s FreeFlow AI attempts to do, Statistical AI’s weaknesses bubble to the surface.

Statistical AI is heavily dependent on the quality of the training data and the initial assumptions made about that data. Since both of these parameters are set by human programmers, “micro-biases” can manifest themselves within the AI.
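A toy example shows how quickly labeling choices become model behavior. Suppose, hypothetically, that the people who labeled the training data flagged every post containing a particular word; a simple word-count classifier built on that data inherits the skew wholesale. Everything below is invented for illustration.

```python
from collections import Counter

# Deliberately skewed toy labels: every example containing the word
# "protest" was marked for removal by the (hypothetical) human labelers.
training = [
    ("peaceful protest downtown", "remove"),
    ("protest organizers arrested", "remove"),
    ("charity bake sale today", "keep"),
    ("local team wins game", "keep"),
]

# Count how often each word appears under each label.
counts = {"remove": Counter(), "keep": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    # Score a post by which label's vocabulary it overlaps more.
    scores = {lbl: sum(c[w] for w in text.split()) for lbl, c in counts.items()}
    return max(scores, key=scores.get)

# An innocuous post inherits the labelers' bias:
print(classify("students protest tuition hike"))  # prints "remove"
```

The model never saw a rule saying “remove posts about protests”; it simply generalized the pattern its human-supplied labels contained.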

Therefore it is no surprise when Big Tech’s AIs, which are predominantly designed, developed and deployed by California-based, left-leaning, 20-to-40-year-old white males, repeatedly come under fire for liberal bias and for misidentifying women and people of color.

Another issue with these AIs is the business model they are built to support. Social media platforms require user engagement and interaction: the more time a user spends consuming content and contributing to the platform, the more intertwined the platform becomes with the user’s life. To serve this goal, “recommendation” AIs prioritize engaging content over quality content: fake news, clickbait, inflammatory media and the like.
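The incentive is easy to caricature in code. Below is a bare-bones sketch of a feed ranker whose scoring function rewards predicted engagement and nothing else; the posts, fields and weights are invented for illustration.

```python
# Hypothetical posts with predicted engagement metrics attached.
posts = [
    {"title": "Sober policy explainer", "pred_clicks": 0.02, "pred_minutes": 1.5},
    {"title": "OUTRAGEOUS claim!!!",    "pred_clicks": 0.11, "pred_minutes": 4.0},
]

def engagement_score(post):
    # Note what is missing: no term for accuracy, civility or quality.
    return 3.0 * post["pred_clicks"] + 0.5 * post["pred_minutes"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # the inflammatory post ranks first
```

Nothing in that objective punishes fake news or clickbait; if inflammatory content keeps users scrolling, the ranker will serve up more of it.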

These priorities exacerbate the problem of online echo chambers, as in the case of YouTube’s recommendation AI creating pathways to extremism and radicalization.

AI can be a great tool when used correctly. However, when improperly utilized, AI can cause more harm than good.

It gives large companies disproportionate influence over society by allowing them to control and curate media to drive agendas, censor groups and pick narratives at will.

The black-box nature of these AIs and algorithms itself sows mistrust. We have no idea why or how certain media is displayed to us, or whether these selections are designed to influence us, and we are left wondering what sort of content is actively hidden from us.

One obvious solution is to force Big Tech to publish exactly how these AIs and algorithms work, so that experts and the general public alike can better understand how these platforms operate. Moreover, transparency in social media AI would greatly help uncover and remove the inherent biases.

However, openness would reduce the competitive advantage that Big Tech holds, and these firms continually resist such efforts, preferring to maintain their entrenched positions and keep us in the dark.

Richard N. Madden is a researcher in the fields of cryptography and digital hardware design. He holds a Master’s degree in Computer Engineering, with a particular interest in security, data privacy and digital rights.
