ttta 0.9.0
ttta: Tools for temporal text analysis
ttta (spoken: "triple t a") is a collection of algorithms to handle diachronic texts in an efficient and unbiased manner.
As code for temporal text analysis papers is mostly scattered across many different repositories and varies heavily in both code quality and usage interface, we set out to provide a solution. ttta is designed to provide a collection of methods with a consistent interface and good code quality.
The package is maintained by Kai-Robin Lange.
Contributing
If you have implemented temporal text analysis methods in Python, we would be happy to include them in this package. Your contribution will, of course, be acknowledged on this repository and all further publications. If you are interested in sharing your code, feel free to contact me at [email protected].
Features
Pipeline: An object that lets the user train and evaluate all methods in the package through a consistent interface. The pipeline preprocesses the data, splits it into time chunks, trains the model on each time chunk, and evaluates the results. This feature was implemented by Kai-Robin Lange.
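The stages the pipeline runs (preprocess, chunk, train, evaluate) can be sketched generically as below. Note that this is not ttta's actual API; the class and parameter names here are hypothetical illustrations of the control flow only.

```python
from dataclasses import dataclass, field


@dataclass
class TemporalPipeline:
    """Illustrative preprocess -> chunk -> train -> evaluate loop.

    preprocess:    maps one raw document to a token list
    chunker:       maps (documents, dates) to a list of time chunks
    model_factory: builds a fresh model object for each chunk
    """
    preprocess: callable
    chunker: callable
    model_factory: callable
    results: list = field(default_factory=list)

    def run(self, documents, dates):
        # Preprocess every document once, up front.
        tokenized = [self.preprocess(doc) for doc in documents]
        # Train and evaluate one model per time chunk.
        for chunk in self.chunker(tokenized, dates):
            model = self.model_factory()
            model.fit(chunk)
            self.results.append(model.evaluate(chunk))
        return self.results
```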
Preprocessing: Tokenization, lemmatization, stopword removal, and more. This feature was implemented by Kai-Robin Lange.
LDAPrototype: A method for more consistent LDA results, obtained by training multiple LDAs and selecting the most representative one - the prototype. See the respective paper by Rieger et al. here. This feature was implemented by Kai-Robin Lange.
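The core idea - pick the run that is, on average, most similar to all other runs - can be sketched with NumPy as follows. This is a simplified illustration: the paper matches topics with a dedicated similarity measure, while here we use plain cosine similarity with greedy one-to-one topic matching.

```python
import numpy as np


def topic_similarity(a, b):
    """Mean cosine similarity after greedily matching topics of run a to run b.

    a, b: (n_topics, n_words) topic-word probability matrices.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a @ b.T                       # pairwise topic cosine similarities
    matched, free = [], list(range(b.shape[0]))
    for i in range(a.shape[0]):          # greedy one-to-one matching
        j = max(free, key=lambda j: sims[i, j])
        matched.append(sims[i, j])
        free.remove(j)
    return float(np.mean(matched))


def select_prototype(runs):
    """Return the index of the run most similar on average to all others."""
    scores = [np.mean([topic_similarity(r, s) for s in runs if s is not r])
              for r in runs]
    return int(np.argmax(scores))
```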
RollingLDA: A method to train an LDA model on a time series of texts. The model is updated with each new time chunk. See the respective paper by Rieger et al. here. This feature was implemented by Niklas Benner and Kai-Robin Lange.
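The chunking step that precedes the rolling updates can be sketched like this (a standalone illustration, not ttta's API): documents are grouped into consecutive time chunks, the first chunk is fitted from scratch, and each later chunk updates the model using a memory window of recent chunks as a warm start rather than retraining fully.

```python
from collections import defaultdict


def chunk_by_period(docs, periods):
    """Group documents into consecutive time chunks keyed by period
    (e.g. the year of each document), ordered chronologically."""
    chunks = defaultdict(list)
    for doc, period in zip(docs, periods):
        chunks[period].append(doc)
    return [chunks[p] for p in sorted(chunks)]
```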
TopicalChanges: A method to detect changes in the word-topic distribution over time by combining RollingLDA and LDAPrototype with a time-varying bootstrap control chart. See the respective paper by Rieger et al. here. This feature was implemented by Kai-Robin Lange.
Poisson Reduced Rank Model: A method to train the Poisson Reduced Rank Model - a document scaling technique for temporal text data, based on a time series of term frequencies. See the respective paper by Jentsch et al. here. This feature was implemented by Lars Grönberg.
BERT-based sense disambiguation: A method to track the frequency of a word sense over time using BERT's contextualized embeddings. This method was inspired by # todo: add reference. This feature was implemented by Aymane Hachcham.
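Once contextualized embeddings of a target word have been collected per time period, tracking sense frequencies reduces to assigning each occurrence to its nearest sense representative. A minimal sketch with NumPy, assuming sense centroids are already known (e.g. from clustering) - again an illustration, not ttta's actual interface:

```python
import numpy as np


def sense_frequencies(embeddings_by_period, sense_centroids):
    """For each time period, assign every contextualized embedding of the
    target word to its nearest sense centroid (by cosine similarity) and
    return the relative frequency of each sense per period."""
    centroids = sense_centroids / np.linalg.norm(sense_centroids, axis=1,
                                                 keepdims=True)
    freqs = []
    for emb in embeddings_by_period:
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        labels = np.argmax(emb @ centroids.T, axis=1)   # nearest sense
        counts = np.bincount(labels, minlength=len(centroids))
        freqs.append(counts / counts.sum())
    return np.array(freqs)
```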
Word2Vec-based semantic change detection: A method that aligns Word2Vec vector spaces trained on different time chunks to detect changes in word meaning by comparing the embeddings. This method was inspired by this paper by Hamilton et al. This feature was implemented by Imene Kolli.
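The alignment used by Hamilton et al. is an orthogonal Procrustes fit: find the rotation that maps one time chunk's embedding matrix onto another's, so that a word's vectors from different chunks become directly comparable. A self-contained sketch (function names are ours, not ttta's):

```python
import numpy as np


def align_embeddings(base, other):
    """Rotate `other` onto `base` via orthogonal Procrustes: the rotation R
    minimizing ||other @ R - base||_F is U @ Vt, where U, S, Vt is the SVD
    of other.T @ base."""
    u, _, vt = np.linalg.svd(other.T @ base)
    return other @ (u @ vt)


def semantic_change(base, other, index):
    """Cosine distance of one word's vector across the two aligned spaces;
    larger values suggest a shift in meaning between the time chunks."""
    aligned = align_embeddings(base, other)
    a, b = base[index], aligned[index]
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

If `other` is just a rotated copy of `base`, alignment recovers it exactly and the measured change is zero, which makes the procedure easy to sanity-check.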
Upcoming features
Hierarchical Sense Modeling
Graphon-Network-based word sense modeling
Spatiotemporal topic modeling
Hopefully many more
Installation
You can install the package by cloning the GitHub repository, via pip, or via conda.
Cloning the repository
git clone https://github.com/K-RLange/ttta.git
cd ttta
pip install .
Using pip
pip install git+https://github.com/K-RLange/ttta.git
or
pip install ttta
Using conda
conda install ttta
Getting started
You can find a tutorial on how to use each feature of the package in the examples folder.
Citing ttta
If you use ttta in your research, please cite the package as follows:
@software{ttta,
author = {Lange, Kai-Robin and Grönberg, Lars and Benner, Niklas and Kolli, Imene and Hachcham, Aymane and Rieger, Jonas and Jentsch, Carsten},
title = {ttta: Tools for temporal text analysis},
url = {https://github.com/K-RLange/ttta},
version = {0.9.0},
}