FaKnow (Fake Know) is a unified fake news detection library based on PyTorch, designed for reproducing and developing fake news detection algorithms. It includes 22 models (see Integrated Models), covering 2 categories:
- content-based
- social context-based
- Unified Framework: provides a unified interface covering the whole algorithm development process, including data processing, model development, training, and evaluation
- Generic Data Structure: uses JSON as the input file format to match the format of crawled data, allowing users to customize the processing of different fields
- Diverse Models: contains a number of representative fake news detection algorithms published in conferences or journals in recent years, including a variety of content-based and social context-based models
- Convenient Usability: a PyTorch-based style makes it easy to use, with rich auxiliary functions such as loss visualization, logging, and parameter saving
- Great Scalability: users only need to focus on the exposed API and inherit built-in classes to reuse most of the functionality, writing just a little code to meet new requirements
FaKnow is available for Python 3.8 and higher.
Make sure PyTorch (including torch and torchvision) and PyG (including torch_geometric and its optional dependencies) are already installed.
- from pip
pip install faknow
- from source
git clone https://github.com/NPURG/FaKnow.git && cd FaKnow
pip install -e . --verbose
We provide several methods to run integrated models quickly by passing only a few arguments. For hyperparameters like the learning rate, values from the open-source code of the corresponding paper are used as defaults. You can also pass your own hyperparameters to these methods.
You can use the run and run_from_yaml methods to run integrated models. The former receives the parameters as dict keyword arguments, while the latter reads them from a yaml configuration file.
- run from kargs
from faknow.run import run
model = 'mdfend' # lowercase short name of models
kargs = {'train_path': 'train.json', 'test_path': 'test.json'} # dict arguments
run(model, **kargs)
The json file for mdfend should look like this:
[
    {
        "text": "this is a sentence.",
        "domain": 9,
        "label": 1
    },
    {
        "text": "this is a sentence.",
        "domain": 1,
        "label": 0
    }
]
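Before training, it can help to sanity-check that a data file actually matches this layout. A small stdlib-only sketch (not part of FaKnow; the required field names are taken from the example above):

```python
import json

# Sample data in the mdfend layout shown above
sample = """
[
    {"text": "this is a sentence.", "domain": 9, "label": 1},
    {"text": "this is a sentence.", "domain": 1, "label": 0}
]
"""

entries = json.loads(sample)
required = {"text", "domain", "label"}
for i, entry in enumerate(entries):
    missing = required - entry.keys()
    # every entry must carry all three fields expected by mdfend
    assert not missing, f"entry {i} is missing fields: {missing}"
```

To check a real file, replace `json.loads(sample)` with `json.load(open('train.json'))`.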
- run from yaml
# demo.py
from faknow.run import run_from_yaml
model = 'mdfend' # lowercase short name of models
config_path = 'mdfend.yaml' # config file path
run_from_yaml(model, config_path)
Your yaml config file should look like this:
# mdfend.yaml
train_path: train.json # the path of training set file
test_path: test.json # the path of testing set file
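Conceptually, run_from_yaml just turns this config file into the same keyword arguments that run receives. A rough stdlib-only sketch of that mapping (illustrative only; FaKnow itself presumably uses a full YAML parser):

```python
# The two-line config from above, inlined for a self-contained example
yaml_text = """\
train_path: train.json  # the path of training set file
test_path: test.json    # the path of testing set file
"""

config = {}
for line in yaml_text.splitlines():
    # drop trailing comments and surrounding whitespace
    line = line.split('#', 1)[0].strip()
    if line:
        key, value = line.split(':', 1)
        config[key.strip()] = value.strip()

# config now equals the kargs dict from the previous example:
# {'train_path': 'train.json', 'test_path': 'test.json'}
```

This naive line parser only handles flat `key: value` pairs; nested YAML would need a real parser.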
You can also run specific models using the run_$model$ and run_$model$_from_yaml methods by passing parameters, where $model$ should be the lowercase name of the integrated model you want to use. The usage is the same as for run and run_from_yaml.
Following is an example to run mdfend.
from faknow.run.content_based.run_mdfend import run_mdfend, run_mdfend_from_yaml
# run from kargs
kargs = {'train_path': 'train.json', 'test_path': 'test.json'} # dict training arguments
run_mdfend(**kargs)
# or run from yaml
config_path = 'mdfend.yaml' # config file path
run_mdfend_from_yaml(config_path)
Following is an example to run mdfend from scratch.
from faknow.data.dataset.text import TextDataset
from faknow.data.process.text_process import TokenizerFromPreTrained
from faknow.evaluate.evaluator import Evaluator
from faknow.model.content_based.mdfend import MDFEND
from faknow.train.trainer import BaseTrainer
import torch
from torch.utils.data import DataLoader
# tokenizer for MDFEND
max_len, bert = 170, 'bert-base-uncased'
tokenizer = TokenizerFromPreTrained(max_len, bert)
# dataset
batch_size = 64
train_path, test_path, validate_path = 'train.json', 'test.json', 'val.json'
train_set = TextDataset(train_path, ['text'], tokenizer)
train_loader = DataLoader(train_set, batch_size, shuffle=True)
validate_set = TextDataset(validate_path, ['text'], tokenizer)
val_loader = DataLoader(validate_set, batch_size, shuffle=False)
test_set = TextDataset(test_path, ['text'], tokenizer)
test_loader = DataLoader(test_set, batch_size, shuffle=False)
# prepare model
domain_num = 9
model = MDFEND(bert, domain_num)
# optimizer and lr scheduler
lr, weight_decay, step_size, gamma = 0.00005, 5e-5, 100, 0.98
optimizer = torch.optim.Adam(params=model.parameters(),
                             lr=lr,
                             weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma)
# metrics to evaluate the model performance
evaluator = Evaluator()
# train and validate
num_epochs, device = 50, 'cpu'
trainer = BaseTrainer(model, evaluator, optimizer, scheduler, device=device)
trainer.fit(train_loader, num_epochs, validate_loader=val_loader)
# show test result
print(trainer.evaluate(test_loader))
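A note on the scheduler settings above: StepLR multiplies the learning rate by gamma once every step_size steps, so the decayed rate has a simple closed form. Using the values from the example (and assuming the trainer steps the scheduler once per epoch, which is an assumption about BaseTrainer, not something stated above):

```python
lr0, gamma, step_size = 5e-5, 0.98, 100

def lr_at_step(step: int) -> float:
    # StepLR closed form: lr = lr0 * gamma ** (step // step_size)
    return lr0 * gamma ** (step // step_size)

# With step_size=100 and only 50 epochs, a once-per-epoch scheduler
# never actually decays the learning rate during this run:
assert lr_at_step(50) == lr0
```

If you want the rate to decay within 50 epochs, lower step_size (e.g. to 10) or step the scheduler per batch rather than per epoch.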
If you use this library, please cite our work.
@article{faknow,
  author  = {Yiyuan Zhu and Yongjun Li and Jialiang Wang and Ming Gao and Jiali Wei},
  title   = {FaKnow: A Unified Library for Fake News Detection},
  journal = {Data Intelligence},
  pages   = {-},
  url     = {http://www.sciengine.com/publisher/Beijing Zhongke Journal Publising Co. Ltd./journal/Data Intelligence///10.3724/2096-7004.di.2024.0026},
  doi     = {10.3724/2096-7004.di.2024.0026}
}
FaKnow uses an MIT-style license, as found in the LICENSE file.