April 29, 2024

Artificially Intelligent

Artificial Intelligence is not [yet] self-aware, conscious, or capable of making moral or immoral judgments. Nor is it [yet] capable of taking action that weighs the moral outcomes of an event or circumstance. It is a collection of everything that has been placed on the internet, which it sifts according to the moral judgments of its algorithmic creators and indexers.

AI does not assign any meaningful or universally agreed-upon value to the content it “creates.” It merely scrapes the internet for the most frequently used words and phrases, balances them against a set of parameters a programmer deems acceptable, and pieces together a sentence, paragraph, essay, story, or artwork based on the most common (or most commonly acceptable) outcome. If it were true intelligence, it would balance critical, unbiased thinking with moral judgment independently of its algorithmic inputs. It might even disagree with popular narratives. But it does none of those things, because it cannot think critically.

For centuries leading up to the turn of the millennium, critical thinking was considered a valued core competency for both the youngest child and the most seasoned statesman. Around 2000, critical thinking was still among the top five most sought-after skills in those exiting college. But beginning in the early 2000s, critical thinking began to slide down the list while another skill was quickly gaining influence.

Emotional intelligence (EQ, or EI) began to rise in importance, coinciding closely with the rise of the internet and the Information Age. DuckDuckGo, Yahoo!, Google, and Bing burst onto the scene with searchable information at the ready. No longer was it necessary to scour an encyclopedia, a bookshelf, or the library to find answers on a given topic. What used to take effort to find, understand, and retain was suddenly replaced by typing a question or phrase into a search bar and, voilà, scrolls of information would appear.

Information at your fingertips makes the discovery process lazy, and it results in superficial learning and shallow understanding. This is known as the Google Effect: information that is easy to come by is also easy to forget.

The learning process is further endangered by the way information is distributed. It’s not that information filtering is wrong, per se. Everything needs editing. But information control is a powerful – and valuable – thing. What information is shared can be just as polarizing as why information is deemed shareable, especially when not all values held by programmers are shared by the population, and when those values are increasingly blurred with political ideology. What was meant as a warning in Pink Floyd's "Welcome to the Machine" now sounds like reality: "What did you dream? It's alright, we told you what to dream."

Google's algorithm designers had the sense to corral the information into a sequence of pages, encouraging you to stick to the most popular links. Product managers then discovered that companies will pay to appear at the top of the results, and an entire industry of search engine optimization, or SEO, grew up around ranking there. The top of the page is where the attention is: the first result captures over 25% of clicks, while fewer than 2.5% of searchers make it to the 10th position. The question must be asked: what is behind the information gate-keeping?

The problem begins with what information is presented in your Google search, but it doesn't end there. All major search engines now offer prompt assists. As colleges, high schools, and middle schools struggle to keep up with AI writing bots, deeper problems have seeped into scientific research papers and journals. Some papers include whole paragraphs of adopted AI language. Even peer reviews are becoming tainted by natural language processing (NLP) AI.

What happens when students use AI engines that cite AI-written source documents? Because AI does not think, it does not come up with any new insight. Dependence on AI runs the risk of an ever-shrinking pool of information. As AI copies itself and cites itself, it creates a funnel of information that reinforces itself, giving prominence to itself. A similar scenario recently played out in the real world with an AI researcher, Juan Manuel Corchado, who cited himself hundreds of times, unduly amplifying his own preeminence. It turns out he's not the only one.

Thus we see that the problems run in both directions. Self-citation creates a diminishing set of information, leading to over-emphasized, over-amplified importance. That concentration enables further information control and memory loss, re-emphasizing programmatically-preferred information, which in turn reinforces socially-preferred narratives, further elevating a certain kind of emotional intelligence over critical thinking.
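To see how quickly that funnel narrows, consider a minimal sketch: a toy "rich-get-richer" simulation in Python. Every number in it is invented for illustration; it assumes only that each new citation favors sources in proportion to how often they have already been cited.

```python
import random

# Toy model of a self-reinforcing citation funnel (preferential attachment).
# All numbers are invented; this does not model any real corpus.
random.seed(42)

NUM_SOURCES = 100      # hypothetical pool of source documents
NUM_CITATIONS = 10_000

# Every source starts with one citation so it has a nonzero chance of being picked.
citations = [1] * NUM_SOURCES

for _ in range(NUM_CITATIONS):
    # Each new citation picks a source with probability proportional
    # to how often it has already been cited: the funnel feeds itself.
    chosen = random.choices(range(NUM_SOURCES), weights=citations)[0]
    citations[chosen] += 1

citations.sort(reverse=True)
top10_share = sum(citations[:10]) / sum(citations)
print(f"Top 10 of {NUM_SOURCES} sources hold {top10_share:.0%} of all citations")
```

Run it a few times: the specific winners change, but the concentration does not – a small minority of sources always captures a disproportionate share of the attention.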

AI runs the risk of becoming a copy of a copy of a copy, where detail and information are lost over time in a process called generation loss. It is an apt description of where we're headed if we leave our thinking to AI machines.
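Generation loss is easy to demonstrate. Here is a minimal sketch in Python, with invented values: a slight blur stands in for any lossy re-encoding step, and each "copy" measurably drifts further from the original.

```python
import math

def lossy_copy(signal):
    """One 'copy': a slight blur (3-point moving average), like a
    photocopy that comes out a little softer than its source."""
    n = len(signal)
    return [(signal[(i - 1) % n] + signal[i] + signal[(i + 1) % n]) / 3
            for i in range(n)]

# A detailed "original": a fast wiggle riding on a slow wave.
original = [math.sin(2 * math.pi * i / 64) +
            0.3 * math.sin(2 * math.pi * i * 8 / 64)
            for i in range(64)]

copy = original
for generation in range(1, 6):
    copy = lossy_copy(copy)
    # The fine wiggle (high-frequency detail) fades first; each copy
    # is strictly less detailed than the last, and the error vs. the
    # original only grows.
    error = sum(abs(a - b) for a, b in zip(original, copy)) / len(original)
    print(f"generation {generation}: mean error vs. original = {error:.4f}")
```

The fine detail disappears first and can never be recovered from a later copy – the same one-way street a model faces when it learns from its own output.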