Nathan Forti

How AI "Harvests" Human Intelligence


Many of us are familiar with the movie The Matrix, in which machines grow and harvest humans for battery power. That remains a dystopian fantasy, but modern AI models are extracting a different resource from human beings: our intelligence. Models like ChatGPT do not possess intelligence of their own; rather, these complex computer programs mimic human intelligence by analyzing patterns in the vast amounts of human-generated data they consume.

Here is how it works: AI models are trained on massive data sets of human-written text, including books, articles, ancient documents, and other sources. When prompted, the model doesn't "understand" the way humans do. Instead, it processes the words in the prompt statistically, predicting which words (or word fragments, called tokens) are most likely to come next based on the patterns it learned from its training data. The responses it generates are essentially reflections of the human intelligence behind the data, condensed and reordered to create the illusion of originality and independent thought.
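To make that concrete, here is a minimal sketch in Python. This is not how ChatGPT actually works internally (real models use neural networks trained on billions of documents); the tiny corpus and the simple word-pair counting are assumptions made for illustration. But it shows the core idea: "prediction" is a lookup into patterns humans already wrote, not understanding.

```python
# A toy next-word predictor: count which word follows which in
# human-written text, then "predict" the most frequent continuation.
from collections import Counter, defaultdict

# Stand-in for the massive human-written data sets described above.
corpus = (
    "we the people of the united states in order to form "
    "a more perfect union establish justice"
).split()

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict(word: str) -> str:
    """Return the statistically likeliest next word. No understanding
    is involved, only a lookup into patterns humans already wrote."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "people" (the first-seen of the tied continuations)
```

Every word this toy model can ever emit was written by a human first; scale the corpus and the pattern model up by many orders of magnitude and you have the "harvesting" described here.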

To understand how AI "harvests" human intelligence, consider the analogy of a wheat field. A farmer wants to harvest valuable wheat, but invasive plants grow among the crop. A combine moves through the field, separating the wheat from the unwanted plants by the details that distinguish crop from weed. Similarly, AI models sift through vast amounts of human-generated data (both wheat and weeds), identifying useful patterns and associations (wheat) while discarding irrelevant or redundant information (weeds). Through this process, a model generates responses that seem intelligent but are, in fact, statistical predictions produced in response to human prompting, as the sketch below illustrates. The AI does not act on its own.
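Continuing the analogy in code, here is a hedged sketch of the "combine." The documents, the frequency threshold, and the pair-counting scheme are all assumptions chosen for illustration, not how any production data pipeline works; the point is only that recurring associations survive (wheat) while one-off noise is discarded (weeds).

```python
# A toy "combine": keep word associations that recur across documents
# (wheat) and drop associations that appear only once (weeds).
from collections import Counter

documents = [
    "the senate shall have the power",
    "the house shall have the power",
    "buy cheap pills now",  # spam: a weed growing among the wheat
]

# Count adjacent word pairs across every document.
pair_counts = Counter()
for doc in documents:
    words = doc.split()
    pair_counts.update(zip(words, words[1:]))

THRESHOLD = 2  # pairs seen fewer than this many times are treated as weeds
wheat = {pair for pair, count in pair_counts.items() if count >= THRESHOLD}
print(wheat)  # only associations shared by the legitimate documents survive
```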

A helpful way to grasp this process of "intelligence harvesting" is through another analogy: a puppet show. AI, much like a puppet, appears to act with a will of its own, writing text, speaking aloud, creating images and video, and so on. However, just as a puppet relies on the mind and hand of the puppeteer, AI depends entirely on the minds and hands of humans.

Here is an example. The Founding Fathers write the Constitution in 1787. Over two hundred years later, that document, along with countless other texts, becomes part of the training data for a modern AI model developed by OpenAI. John the politician then prompts the AI to draft a new amendment to the Constitution, and the AI generates a response that mimics the style and content of the original document, based on patterns it gathered from the human-written text.
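For illustration, John's prompt might look something like the sketch below, written against OpenAI's Python client. The model name and the prompt wording are assumptions for the example; what matters is that the human supplies the intent, and the model returns recombined patterns from human-written text.

```python
# A hedged sketch of prompting a model trained on human-written text.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, chosen for illustration
    messages=[{
        "role": "user",
        "content": "Draft a new amendment in the style of the 1787 "
                   "U.S. Constitution.",
    }],
)

# The output echoes the style of the human-written documents the model
# was trained on; the "will" behind it is John's prompt, not the model's.
print(response.choices[0].message.content)
```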

The AI moves, but it does not move on its own. It is an incredibly convincing puppet show in which the puppet master is the collective of every human being whose writing fed the model and every human being who contributed to its construction and training. In the example of the new constitutional amendment, the puppet master is the aggregate of the United States Founding Fathers, the team at OpenAI, and everyone who influenced them, such as the Enlightenment thinkers.

But why do the analogies of the wheat field and the puppet show matter? Why do we need to keep them in mind as we develop AI? They are crucial because they remind us that AI does not function autonomously. These systems are statistical predictors that merely reflect patterns drawn from human knowledge; they are extensions of the human mind, not minds of their own. Viewing AI this way helps us avoid the dangerous misconception that these systems possess unbiased intelligence. If we treat AI as an entity with its own will, we risk overlooking the flaws and biases in the data and training it relies on, allowing AI to perpetuate or even amplify those biases.

The analogies also emphasize our responsibility. Since the actions of AI are shaped by the human intelligence behind it, we must ensure that these systems are trained ethically. That way, AI can reflect the best of human thinking rather than reinforcing destructive patterns. Recognizing AI as a tool that draws from human thought, not a thinking entity inherently superior to us, keeps us grounded in the reality that AI systems are only as good as the humans who design, feed, train, and guide them.

With the analogies in mind, how can we make sure that AI does not harvest biased thought processes? Who should be held responsible for the actions of the puppet when the puppet master is a vague conglomeration of AI developers and writers long dead? As we continue to develop these systems and integrate them into various industries and our personal lives, we need to consider the condensed human agency behind them and ensure their use remains aligned with our values and ethical standards.



4 Comments


AUSTIN WUTHRICH
Nov 05

Started reading about harvesting and was expecting a grim perspective on AI; I was pleasantly surprised by the tone of the article. Interesting wheat analogy. Made me think. You have a talent for thinking about these things in new ways, Nathan!


MATTHEW DAY
Nov 05

Love the analogies!! Being able to communicate to technical and non-technical audiences about how AI works conceptually is incredibly useful for developing a dialogue with the public when it comes to how we will handle the ethical implications of AI. These implications are already rippling throughout society and will only continue to grow. The better we can all understand AI, the more likely we are to contour its impacts to be beneficial for society.


Dana Forti
Nov 04

This makes me realize how unpredictable AI could become and how completely unaligned it is with human interests. Yikes!


Raymond Durr
Oct 31

Wow, what an interesting read. Makes me think about how biases in data might lead to harmful outcomes if we grow too reliant on AI.
