Crowdsourcing Data for Product Search Explanation Evaluation

This repository provides the data collected in the crowdsourcing experiment for product search explanation evaluation described in Model-agnostic vs. Model-intrinsic Interpretability for Explainable Product Search.

The experiment conducted pairwise comparisons between the search explanations provided by the vanilla DREM and DREM-HGN.

Data Structure

  • explanation_sample.csv: the product search explanations provided by DREM and DREM-HGN.

  • AMT_result.csv: the annotation results from AMT workers (0: bad, 1: good). A loading sketch is shown after this list.
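
The snippet below is a minimal sketch of how the two files might be loaded and summarized with pandas. The column names (in particular a binary `label` column holding the 0/1 annotations) are assumptions for illustration only; check the actual CSV headers before relying on them.

```python
# A minimal sketch, assuming pandas is installed and the CSVs are in the
# working directory. Column names below are assumptions, not documented here.
import pandas as pd

explanations = pd.read_csv("explanation_sample.csv")
results = pd.read_csv("AMT_result.csv")

# Inspect the actual fields before analysis.
print(explanations.columns.tolist())
print(results.columns.tolist())

# Fraction of judgments labeled good (1) vs. bad (0), assuming a
# hypothetical "label" column holds the 0/1 annotations.
print(results["label"].value_counts(normalize=True))
```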

Experimental Setup

The worker qualification settings used on Amazon Mechanical Turk (AMT) are listed below (a sketch of these requirements expressed through the MTurk API follows the list):

  • HIT Approval Rate (%) for all Requesters' HITs greater than 80
  • Number of HITs Approved greater than 1000
  • Location is US
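
As a rough illustration, these qualifications could be expressed with the boto3 MTurk client as shown below. The qualification type IDs are MTurk's built-in system qualifications; everything else (region, how the requirements are attached to a HIT) is an assumption and not taken from the experiment's actual code.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials are
# configured. The qualification type IDs are MTurk's built-in system
# qualifications; other details are illustrative placeholders only.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # HIT Approval Rate (%) for all Requesters' HITs greater than 80
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThan",
        "IntegerValues": [80],
    },
    {   # Number of HITs Approved greater than 1000
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThan",
        "IntegerValues": [1000],
    },
    {   # Location is US
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
]
# These would be passed as QualificationRequirements when creating a HIT.
```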

Data Preparation

Our crowdsourcing dataset is sampled from the retrieval experiment dataset for the Electronics category, which is built from the Amazon Review Datasets.

The source code for creating the explanations and the crowdsourcing UI can be found here. For more details, please refer to the paper.

Citation

If you use these data in your research, please cite the paper with the following BibTeX entry.

@misc{ai2021modelagnostic,
      title={Model-agnostic vs. Model-intrinsic Interpretability for Explainable Product Search}, 
      author={Qingyao Ai and Lakshmi Narayanan Ramasamy},
      year={2021},
      eprint={2108.05317},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
