ttqakit 1.0.0

🌐 Website | πŸŽ₯ Video | πŸ“¦ PyPI | πŸ€— Huggingface Datasets


TableQAKit: A Toolkit for Table Question Answering
πŸ”₯ Updates

[2023-8-7]: We released our code, datasets, and PyPI package. Check it out!

✨ Features
TableQAKit is a unified platform for TableQA (especially in the LLM era). Its main features include:

Extensible design: you can use the interfaces defined by the toolkit, extend its methods and models, and implement new models on your own data.
Equipped with LLMs: TableQAKit supports LLM-based methods, including both LLM-prompting and LLM-finetuning approaches.
Comprehensive datasets: we designed a unified data interface to process datasets and store them as Huggingface datasets (see the loading sketch after this list).
Powerful methods: with the toolkit you can reproduce most of the SOTA methods for TableQA tasks.
Efficient LLM benchmark: TableQAEval, a benchmark that evaluates how well LLMs model long tables (context) and their comprehension capabilities (numerical reasoning, multi-hop reasoning).
Comprehensive survey: we are about to release a systematic TableQA survey; this project is the groundwork for it.
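
As a sketch of the unified data interface, the snippet below loads one of the processed datasets through the standard Huggingface datasets API. The dataset id shown is a hypothetical placeholder, not a confirmed name; the actual ids are listed on the project's Huggingface Datasets page.

from datasets import load_dataset

# "TableQAKit/MultiHiertt" is a hypothetical dataset id used only for
# illustration; substitute a real id from the project's Huggingface page.
dataset = load_dataset("TableQAKit/MultiHiertt")
print(dataset["train"][0])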

βš™οΈ Install
From PyPI (the package appears under two names):
pip install tableqakit
or:
pip install ttqakit
From source:
git clone git@github.com:lfy79001/TableQAKit.git
pip install -r requirements.txt

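A quick way to check the installation is to resolve the import used in the QuickStart below (this assumes the import package is named TableQAKit, as in the examples in this README):

python -c "from TableQAKit.retriever import MultiHierttTrainer"
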

πŸ“ Folder
The TableQAKit repository is structured as follows:
β”œβ”€β”€ icl/ # LLM-prompting toolkit
β”œβ”€β”€ llama/ # LLM-finetuning toolkit
β”œβ”€β”€ mmqa_utils/ # EncyclopediaQA toolkit
β”‚ β”œβ”€β”€ classifier_module/ # The package for classifier
β”‚ β”œβ”€β”€ retriever_module/ # The package for encyclopedia retrieval
β”œβ”€β”€ structuredqa/ # Reader models (TaLMs)
β”‚ β”œβ”€β”€ builder/
β”‚ β”œβ”€β”€ utils/
β”œβ”€β”€ retriever/ # TableQA's general retriever (SpreadSheet examples)
β”œβ”€β”€ multihop/ # Readers for EncyclopediaQA
β”‚ β”œβ”€β”€ Retrieval/
β”‚ └── Read/
β”œβ”€β”€ numerical/ # Readers for some TableQA datasets
β”œβ”€β”€ TableQAEval/ # The proposed new LLM-Long-Table Benchmark
β”‚ β”œβ”€β”€ Baselines/ # Add your LLMs
β”‚ β”‚ β”œβ”€β”€ turbo16k-table.py
β”‚ β”‚ β”œβ”€β”€ llama2-chat-table.py
β”‚ β”‚ └── ...
β”‚ β”œβ”€β”€ Evaluation/ # metrics
β”‚ └── TableQAEval.json
β”œβ”€β”€ outputs/ # the results of some models
β”œβ”€β”€ loaders/
β”‚ β”œβ”€β”€ WikiSQL.py
β”‚ └── ...
β”œβ”€β”€ structs/
β”‚ β”œβ”€β”€ data.py
β”œβ”€β”€ static/
β”œβ”€β”€ LICENSE
└── README.md

πŸ—ƒοΈ Dataset
According to our taxonomy, we classify the TableQA task into three categories, as shown in the figure below:

[Figure: taxonomy of the three categories of TableQA tasks]
πŸ”§ Get started
Retrieval Modules
QuickStart
Using the MultiHiertt dataset as a demonstration:
from TableQAKit.retriever import MultiHierttTrainer


trainer = MultiHierttTrainer()

# train stage:
trainer.train()

# infer stage:
trainer.infer()

Train
python main.py \
--train_mode row \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 1 \
--dataloader_pin_memory False \
--output_dir ./ckpt \
--train_path ./data/train.json \
--val_path ./data/val.json \
--save_steps 1000 \
--logging_steps 20 \
--learning_rate 0.00001 \
--top_n_for_eval 10 \
--encoder_path ./PLM/bert-base-uncased/

Inference
python infer.py \
--output_dir ./ckpt \
--encoder_path ./ckpt/encoder/deberta-large \
--dataloader_pin_memory False \
--ckpt_for_test ./ckpt/retriever/deberta/epoch1_step30000.pt \
--test_path ./data/MultiHiertt/test.json \
--test_out_path ./prediction.json \
--top_n_for_test 10
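
As the flag names suggest, the retriever scores each candidate row and keeps the top_n_for_test highest-scoring rows per question (10 here), writing its predictions to the path given by --test_out_path.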

Create a Trainer for a New Dataset
import json
from typing import Dict, List

from TableQAKit.retriever import RetrieverTrainer as RT


class NewTrainer(RT):
    def read_data(self, data_path: str) -> List[Dict]:
        """
        :param data_path: the path of the data file
        :return: list of raw examples
            [
                data_1,
                data_2,
                ...
            ]
        """
        # Load the raw dataset from a JSON file.
        with open(data_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
        return data

    def data_proc(self, instance) -> Dict:
        """
        :return:
            {
                "id": str,
                "question": str,
                "rows": list[str],
                "labels": list[int]
            }
        """
        # Candidate rows start as the instance's text paragraphs,
        # labeled 1 if they appear in the annotated text evidence.
        rows = instance["paragraphs"]
        labels = [0] * len(instance["paragraphs"])
        if len(instance["qa"]["text_evidence"]):
            for text_evidence in instance["qa"]["text_evidence"]:
                labels[text_evidence] = 1
        # Append the table rows, labeled 1 if they are table evidence.
        for k, v in instance["table_description"].items():
            rows.append(v)
            labels.append(1 if k in instance["qa"]["table_evidence"] else 0)
        return {
            "id": instance["uid"],
            "question": instance["qa"]["question"],
            "rows": rows,
            "labels": labels
        }
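
With both hooks implemented, the new trainer plugs into the same train/infer loop as the built-in trainers. A minimal usage sketch, assuming NewTrainer is configured the same way as MultiHierttTrainer in the QuickStart above:

# Minimal usage sketch; assumes the constructor takes the same
# configuration as the built-in trainers shown in the QuickStart.
trainer = NewTrainer()

# train stage:
trainer.train()

# infer stage:
trainer.infer()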

LLM-Prompting Methods



Check here for more details.
LLM-Finetuning Methods



Check here for more details.
Reading Modules
TaLM Reasoner
Check here for more details.
Multimodal Reasoner
Check here for more details.
TableQAEval



TableQAEval is a benchmark for evaluating the performance of LLMs on TableQA. It tests an LLM's ability to model long tables (context) and its comprehension capabilities (numerical reasoning, multi-hop reasoning).
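
A quick way to inspect the benchmark data is to read the released file directly. This sketch only assumes that TableQAEval/TableQAEval.json is a standard JSON file, as the extension suggests; the per-example schema is not assumed here:

import json

# Load the benchmark file shipped in the repository.
with open("TableQAEval/TableQAEval.json", encoding="utf-8") as f:
    examples = json.load(f)

print(len(examples))  # number of top-level entries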
Leaderboard

Model           | Parameters | Numerical Reasoning | Multi-hop Reasoning | Structured Reasoning | Total
----------------|------------|---------------------|---------------------|----------------------|------
Turbo-16k-0613  | -          | 20.3                | 52.8                | 54.3                 | 43.5
LLaMA2-7b-chat  | 7B         | 2.0                 | 14.2                | 13.4                 | 12.6
ChatGLM2-6b-8k  | 6B         | 1.4                 | 10.1                | 11.5                 | 10.2
LLaMA2-7b-4k    | 7B         | 0.8                 | 9.2                 | 5.4                  | 6.6
longchat-7b-16k | 7B         | 0.3                 | 7.1                 | 5.1                  | 5.2
LLaMA-7b-2k     | 7B         | 0.5                 | 7.3                 | 4.1                  | 4.5
MPT-7b-65k      | 7B         | 0.3                 | 3.2                 | 2.0                  | 2.3
LongLLaMA-3b    | 3B         | 0.0                 | 4.3                 | 1.7                  | 2.0

More details are shown in TableQAEval.
βœ… TODO
We will continue to optimize the toolkit.
Acknowledgements
Primary contributors: Fangyu Lei, Tongxu Luo, Pengqi Yang, Weihao Liu, Hanwen Liu, Jiahe Lei, Yifan Wei, Shizhu He, and Kang Liu.
Many thanks to Yilun Zhao (Yale University) and Yongwei Zhou (HIT) for their assistance.
