Introduction
The study of ancient history relies heavily on epigraphy, the study of inscribed texts. Many inscriptions, however, are damaged, have been moved from their original location, or are of uncertain date, which hinders historical understanding. Traditional epigraphic methods for text restoration, geographical attribution and chronological attribution are time-consuming and complex, relying on a historian's expertise and on limited digital resources. String matching in digital corpora can be inefficient and imprecise, while geographical and chronological attribution often require broad generalizations because dating criteria are limited or inscriptions have been displaced. This research addresses these limitations with deep neural networks, which can identify complex patterns in vast datasets, to develop a novel approach to these three tasks.
Literature Review
Prior research has applied traditional machine learning to ancient texts, focusing on optical character recognition, writer identification, and text analysis. However, the use of deep learning in this field is relatively recent, with applications mainly in optical character recognition, text analysis, machine translation, authorship attribution, and deciphering ancient languages. The authors' previous work, Pythia, a sequence-to-sequence recurrent neural network for ancient text restoration, serves as a foundation for this research. Ithaca distinguishes itself by being the first model to holistically address the three core tasks of the epigrapher's workflow (text restoration, geographical and chronological attribution) using deep learning on an unprecedented scale and with interpretable outputs.
Methodology
Ithaca, named after the Greek island to which Odysseus struggled to return, was trained on ancient Greek inscriptions from the Packard Humanities Institute (PHI). The PHI dataset was preprocessed into the I.PHI corpus, the largest multitask dataset of machine-actionable epigraphic text available. Preprocessing involved rendering the text machine-actionable, normalizing epigraphic notation, reducing noise, handling irregular date formats, and filtering out irrelevant entries.

Ithaca's architecture is a transformer-based neural network that processes the input text as both character and word representations; damaged or unknown words are marked with '[unk]'. The transformer's attention mechanism weighs how strongly different parts of the input influence the model's decisions. Stacked transformer blocks are followed by three task heads, one each for text restoration, geographical attribution and chronological attribution, and each head is a shallow feedforward network trained for its specific task.

To support collaboration with historians, Ithaca produces interpretable outputs: ranked lists of restoration hypotheses, visual representations of geographical attribution (maps and bar charts), and probability distributions over dates. Saliency maps highlight the input features that contribute most to each prediction. For comparison, the study also established a computational baseline (Pythia) and a human baseline ('onomastics'). Character error rate (CER) was the primary metric for text restoration; attribution accuracy was measured against I.PHI labels and against updated chronological ground truths.
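The sketch below illustrates the overall shape described above: shared transformer blocks over combined character and word embeddings, followed by three shallow task heads. It is an illustrative PyTorch sketch, not the authors' implementation; the class name, layer sizes, vocabulary sizes and class counts are assumptions made for the example.

```python
# Minimal sketch of a shared transformer body with three task-specific heads.
# All sizes and names are illustrative placeholders, not Ithaca's configuration.
import torch
import torch.nn as nn

class EpigraphyMultiTaskModel(nn.Module):
    def __init__(self, char_vocab=64, word_vocab=10000, d_model=256,
                 n_heads=8, n_layers=4, n_regions=84, n_date_bins=160):
        super().__init__()
        # Input text is embedded at both character and word level; damaged or
        # unknown words map to a dedicated '[unk]' index in the word vocabulary.
        self.char_emb = nn.Embedding(char_vocab, d_model)
        self.word_emb = nn.Embedding(word_vocab, d_model)  # index 0 = '[unk]'
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.body = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Three shallow feedforward heads, one per epigraphic task.
        self.restoration_head = nn.Linear(d_model, char_vocab)  # per-character logits
        self.region_head = nn.Linear(d_model, n_regions)        # geographical attribution
        self.date_head = nn.Linear(d_model, n_date_bins)        # chronological attribution

    def forward(self, char_ids, word_ids):
        # Sum the two embeddings so each position carries character and word context.
        x = self.char_emb(char_ids) + self.word_emb(word_ids)
        h = self.body(x)                    # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)              # sequence summary for the attribution heads
        return {
            "restoration": self.restoration_head(h),  # logits per character position
            "region": self.region_head(pooled),       # distribution over regions
            "date": self.date_head(pooled),           # distribution over date bins
        }

model = EpigraphyMultiTaskModel()
chars = torch.randint(0, 64, (1, 128))
words = torch.randint(0, 10000, (1, 128))
out = model(chars, words)
print({k: tuple(v.shape) for k, v in out.items()})
```

In a multitask setup like this, the three heads would typically be trained jointly by minimizing a weighted sum of the per-task losses, so the shared body learns representations useful for restoration and both attribution tasks.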
Key Findings
Ithaca significantly improves the accuracy of epigraphic tasks. Historians working alone restored damaged text with 25% accuracy; Ithaca alone reached 62%, and historians using Ithaca reached 72%. Geographical attribution accuracy reached 71%, and chronological attribution placed inscriptions within roughly 30 years of their ground-truth dates. A key demonstration involved a group of Athenian decrees whose dates have long been debated. While the original dataset labels diverged substantially from current scholarly consensus, Ithaca's predictions for these decrees did not exceed 433 BC, with an average of 421 BC, aligning closely with the most recent dating breakthroughs and contradicting the conventional 'high' dates (446/5 BC or earlier). On average, Ithaca's predictions differed from the updated ground truths by only 5 years, compared with 27 years for the original labels, suggesting the model can contribute to current historical debates.
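As a hedged illustration of this comparison (not the study's actual data), the sketch below computes the average absolute distance, in years, between predicted dates and updated ground-truth dates; all decree dates are hypothetical placeholders, and BC years are encoded as negative integers.

```python
# Illustration of the evaluation arithmetic above: mean absolute distance (years)
# between predicted dates and updated ground-truth dates.
# The dates below are made-up placeholders, not the study's data.
# BC dates are encoded as negative integers (421 BC -> -421).
ground_truth = [-420, -418, -424]      # hypothetical revised datings
ithaca_preds = [-421, -414, -427]      # hypothetical Ithaca mean predictions
original_labels = [-450, -447, -446]   # hypothetical original corpus labels

def mean_abs_error(preds, truth):
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

print(mean_abs_error(ithaca_preds, ground_truth))     # small gap, a few years
print(mean_abs_error(original_labels, ground_truth))  # much larger gap
```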
Discussion
Ithaca's success demonstrates the synergistic potential of combining human expertise with deep learning models. The significantly improved accuracy in text restoration, geographical and chronological attribution validates the approach. The case study on Athenian decrees highlights the model's capacity to contribute to ongoing historical debates and potentially reshape existing understandings. The interpretability features of Ithaca, such as saliency maps, further enhance the model's usefulness by allowing historians to examine the model's reasoning and refine their analyses.
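As a rough illustration of the saliency idea, the sketch below computes input-gradient saliency for the geographical head of the hypothetical EpigraphyMultiTaskModel from the Methodology sketch; Ithaca's actual saliency method may differ, and everything here is an illustrative assumption.

```python
# Input-gradient saliency sketch, assuming the EpigraphyMultiTaskModel class
# defined in the Methodology sketch above is in scope. It estimates how strongly
# each input position influences one prediction (here, the top region).
import torch

model = EpigraphyMultiTaskModel()
chars = torch.randint(0, 64, (1, 128))
words = torch.randint(0, 10000, (1, 128))

# Embed the inputs manually so gradients can be taken w.r.t. the embeddings.
x = model.char_emb(chars) + model.word_emb(words)
x.retain_grad()
h = model.body(x)
region_logits = model.region_head(h.mean(dim=1))
top_region_score = region_logits[0, region_logits.argmax()]
top_region_score.backward()

# One saliency value per input position: the gradient magnitude at that position.
saliency = x.grad.norm(dim=-1).squeeze(0)  # shape: (seq_len,)
print(saliency.topk(5).indices)            # positions that most influenced the prediction
```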
Conclusion
Ithaca represents a novel approach to epigraphic restoration and attribution, significantly advancing the field's capabilities. Its open-source interface enables collaboration and continued development. Future work could expand to other ancient languages and scripts, incorporate additional metadata such as images, and explore integrating humans more directly into the training loop.
Limitations
The model's performance is dependent on the quality and size of the training dataset. While the I.PHI corpus is extensive, biases present in the original PHI data might influence the model's outputs. Further research is needed to evaluate its performance on inscriptions outside the specific time period and geographical region covered in the training data. The model’s accuracy, while impressive, is not perfect, and its predictions should be considered alongside the expert judgment of historians.