The historians of tomorrow are using computer science to analyze how people lived centuries ago:
It's an evening in 1531, in the city of Venice. In a printer's workshop, an apprentice labors over the layout of a page that's destined for an astronomy textbook—dense lines of type and a woodblock illustration of a cherubic head observing shapes moving through the cosmos, representing a lunar eclipse.
[...] Five hundred years later, the production of information is a different beast entirely: terabytes of images, video, and text in torrents of digital data that circulate almost instantly and have to be analyzed nearly as quickly, allowing—and requiring—the training of machine-learning models to sort through the flow. This shift in the production of information has implications for the future of everything from art creation to drug development.
But those advances are also making it possible to look differently at data from the past. Historians have started using machine learning—deep neural networks in particular—to examine historical documents, including astronomical tables like those produced in Venice and other early modern cities, smudged by centuries spent in mildewed archives or distorted by the slip of a printer's hand.
Historians say the application of modern computer science to the distant past helps draw connections across a broader swath of the historical record than would otherwise be possible, correcting distortions that come from analyzing history one document at a time. But it introduces distortions of its own, including the risk that machine learning will slip bias or outright falsifications into the historical record. All this adds up to a question for historians and others who, it's often argued, understand the present by examining history: with machines set to play a greater role in the future, how much of the past should we cede to them?
[...] It's true that with the sources that are currently available, human interpretation is needed to provide context, says Kaplan, though he thinks this could change once a sufficient number of historical documents are made machine readable.
But he imagines an application of machine learning that's more transformational—and potentially more problematic. Generative AI could be used to make predictions that flesh out blank spots in the historical record—for instance, about the number of apprentices in a Venetian artisan's workshop—based not on individual records, which could be inaccurate or incomplete, but on aggregated data. This may bring more non-elite perspectives into the picture but runs counter to standard historical practice, in which conclusions are based on available evidence.
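The aggregate-based gap-filling described above can be sketched in heavily simplified form. The data, trade names, and the choice of a simple per-trade mean are all invented here for illustration; real historical imputation would use far richer models and sources.

```python
# Minimal sketch: estimating a missing value (apprentices in one workshop)
# from aggregated data across comparable records, rather than from any
# single record. All figures and categories below are hypothetical.
from statistics import mean

# Hypothetical surviving records: (workshop trade, apprentices recorded).
# None marks a blank spot in the historical record.
records = [
    ("printer", 3),
    ("printer", 4),
    ("printer", 2),
    ("glassmaker", 6),
    ("printer", None),  # the gap to fill
]

def impute_missing(records, trade):
    """Fill missing counts with the mean of known counts for the same trade."""
    known = [n for t, n in records if t == trade and n is not None]
    estimate = mean(known)
    return [
        (t, estimate if (t == trade and n is None) else n)
        for t, n in records
    ]

filled = impute_missing(records, "printer")
# The unknown printer's workshop is assigned the average of its peers.
```

The point of the sketch is the methodological shift the article describes: the filled-in value reflects the aggregate, not any individual document, which is exactly why it sits uneasily with the convention of grounding conclusions in available evidence.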
Still, a more immediate concern is posed by neural networks that create false records.
[...] In other words, there's a risk that artificial intelligence, from historical chatbots to models that make predictions based on historical records, will get things very wrong. Some of these mistakes are benign anachronisms: a query to Aristotle on the chatbot Character.ai about his views on women (whom he saw as inferior) returned an answer that they should "have no social media." But others could be more consequential—especially when they're mixed into a collection of documents too large for a historian to check individually, or when they're circulated by someone with an interest in a particular interpretation of history.
Even if there's no deliberate deception, some scholars have concerns that historians may use tools they're not trained to understand. "I think there's great risk in it, because we as humanists or historians are effectively outsourcing analysis to another field, or perhaps a machine," says Abraham Gibson, a history professor at the University of Texas at San Antonio. Gibson says that until very recently, fellow historians he spoke to didn't see the relevance of artificial intelligence to their work, but they're increasingly waking up to the possibility that they could eventually cede some of the interpretation of history to a black box.
[...] While skepticism toward such new technology persists, the field is gradually embracing it, and Valleriani thinks that in time, the number of historians who reject computational methods will dwindle. Scholars' concerns about the ethics of AI are less a reason not to use machine learning, he says, than an opportunity for the humanities to contribute to its development.
(Score: 1, Interesting) by Anonymous Coward on Friday April 14, @02:47PM
Since the current swath of "AI" tech is excellent at perpetuating the past, I guess this is an exquisite application of that technology.
Given how prone these things are to glitches (which is a more accurate name for what these are, rather than "hallucinations"), I'm just very eager to see conspiracy and other crackpot theories confirmed by the historians of the future, such as "they had flying cars in the past, look at this historyAI generated picture, it proves it" or "Egyptians really were in contact with these aliens, we have the AI generated hieroglyphs to prove it and Erich von Däniken is a genius".
What could go wrong?
(Score: 3, Informative) by SunTzuWarmaster on Friday April 14, @07:16PM
If you'd like to read more: we ran a special session at our conference on the subject. It was mentioned briefly in the Association for the Advancement of Artificial Intelligence (AAAI) magazine.
https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2753/2651 [aaai.org]