
SoylentNews is people

posted by Fnord666 on Monday November 18 2019, @11:49AM   Printer-friendly
from the Wait-long-enough-and-sci-fi-always-becomes-sci-fact dept.

In 1951 Isaac Asimov inflicted psychohistory on the world with the Foundation Trilogy. Now, thanks to data sets going back more than 2,500 years, scientists claim to have discovered rules underlying the rise and fall of civilizations, after examining more than 400 historical societies that crashed and burned - or, in some cases, avoided crashing. More here:

https://www.theguardian.com/technology/2019/nov/12/history-as-a-giant-data-set-how-analysing-the-past-could-help-save-the-future

Turchin's approach to history, which uses software to find patterns in massive amounts of historical data, has only become possible recently, thanks to the growth in cheap computing power and the development of large historical datasets. This "big data" approach is now becoming increasingly popular in historical disciplines. Tim Kohler, an archaeologist at Washington State University, believes we are living through "the glory days" of his field, because scholars can pool their research findings with unprecedented ease and extract real knowledge from them. In the future, Turchin believes, historical theories will be tested against large databases, and the ones that do not fit – many of them long-cherished – will be discarded. Our understanding of the past will converge on something approaching an objective truth.

Discuss. Or throw rocks.


Original Submission

 
  • (Score: 3, Interesting) by Phoenix666 on Monday November 18 2019, @09:11PM

    by Phoenix666 (552) on Monday November 18 2019, @09:11PM (#921680) Journal

What is true, though, is that trying to subject those historical documents to machine analysis is fraught, to say the least. We don't have to go back very far at all to see how loosey-goosey our forebears were with spelling. Sometimes they'd spell "old" as "old," and other times as "olde." That right there is going to throw off your algorithm. Then there are words spelled the same whose meanings changed over time, or contemporaries who used the same words to mean different things. Information couldn't travel widely with high fidelity from one part of a region or society to another, so phrases, slogans, and "standardized" language in pre-modernity resembled a game of telephone, writ large. All of that defeats machine analysis.
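
    The spelling problem at least has a well-known partial workaround: collapse variant spellings to a canonical form by edit distance before counting anything. A minimal sketch (the lexicon, threshold, and function names here are illustrative, not anything from the article):

    ```python
    # Sketch: normalizing variant spellings ("olde" -> "old") via Levenshtein
    # edit distance before token counting. Lexicon and threshold are made up.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def normalize(token: str, lexicon: list[str], max_dist: int = 1) -> str:
        """Map a token to the closest known spelling, if within max_dist edits."""
        best = min(lexicon, key=lambda w: levenshtein(token, w))
        return best if levenshtein(token, best) <= max_dist else token

    lexicon = ["old", "shop", "year"]
    print(normalize("olde", lexicon))    # -> "old": one-edit variant collapses
    print(normalize("shoppe", lexicon))  # -> "shoppe": two edits, left alone
    ```

    Which also illustrates the commenter's point: edit distance catches surface variants like "olde," but does nothing for words spelled identically whose meanings drifted.
    
    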

    To put a finer point on it, a buddy of mine is a philologist of Ancient Near Eastern Studies. The poor son of a bitch has to pore through piles of cuneiform fragments written in Avestan, Ancient High Persian, Sanskrit, and about 7 other dead languages in the course of his work on morphology. I once suggested OCR plus pattern recognition to help him in his work and he laughed for five minutes. Just handwriting differences of one scribe to another are enough to throw off such a thing, because there was no standard font for anything.

    You would literally have to read every historical document first, codify it all with a standard key, devise special cases to capture what pops out of that standard key, and then maybe you could have a computer look at it. But I bet you even then it will get it wrong.

    Hell, just look at economics. They get it flat wrong all the time when trying to predict the future from past data, and all they have to deal with is actual numbers.

    --
    Washington DC delenda est.