This task is divided into cross-domain authorship attribution and style change detection. You can choose to solve one or both of them.
Authorship attribution is an important problem in information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given known sample documents from a small, finite set of candidate authors, which of them wrote a questioned document of unknown authorship? This task can be quite challenging when the documents of known and unknown authorship come from different domains (e.g., thematic area, genre).
In this edition of PAN, for the first time, we focus on cross-domain attribution applied to fanfiction. Fanfiction refers to fictional literature produced nowadays by admirers ('fans') of a certain author (e.g., J.K. Rowling), novel ('Pride and Prejudice'), TV series ('Sherlock Holmes'), etc. The fans borrow heavily from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, the so-called fanfics. For this reason, fanfiction is also known as transformative literature, and it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. Fanfics are typically published online, on informal community platforms dedicated to making such literature accessible to a wider audience (e.g., fanfiction.net). The original work of art or genre is typically referred to as a fandom.
The cross-domain attribution task in this edition of PAN can be more accurately described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (the target fandom), while the documents of known authorship by the candidate authors are fanfics of several fandoms (other than the target fandom).
<div class="panel panel-default">
<div class="panel-heading">Task</div>
<div class="panel-body">Given a set of documents (known fanfics) by a small number (up to 20) of candidate authors, identify the authors of another set of documents (unknown fanfics). All unknown fanfics belong to the same target fandom. The known fanfics belong to several fandoms (excluding the target fandom), not necessarily the same for all candidate authors. An equal number of fanfics per candidate author is provided. In contrast, the unknown fanfics are not equally distributed over the authors. Text-length of fanfics varies from 500 to 1,000 tokens. Language of documents may be <strong>English, French, Italian, Polish, or Spanish</strong></div>
</div>
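<p>For orientation only, the sketch below shows one common way to approach such a closed-set, cross-fandom attribution problem in Python: character n-gram TF-IDF features with a linear SVM. This is not the official PAN baseline; it assumes scikit-learn is available and that the known fanfics and their candidate-author labels have already been loaded as plain strings.</p>
<pre class="prettyprint lang-py" style="overflow-x:auto">
# Minimal sketch (not the official baseline): character n-gram TF-IDF + linear SVM.
# Assumes known_texts (list of str), known_authors (list of candidate names) and
# unknown_texts (list of str) have already been read from the problem folders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def attribute(known_texts, known_authors, unknown_texts):
    # Character 3-grams are relatively robust to fandom-specific vocabulary shifts.
    vectorizer = TfidfVectorizer(analyzer='char', ngram_range=(3, 3), min_df=2)
    X_known = vectorizer.fit_transform(known_texts)
    X_unknown = vectorizer.transform(unknown_texts)
    classifier = LinearSVC()
    classifier.fit(X_known, known_authors)
    return classifier.predict(X_unknown)   # one candidate name per unknown fanfic
</pre>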
<div class="panel panel-default">
<div class="panel-heading">Development Phase</div>
<div class="panel-body"><p>To develop your software, we provide you with a corpus of similar characteristics with the evaluation corpus. It comprises a set of cross-domain authorship attribution problems in each of the following 5 languages: English, French, Italian, Polish, and Spanish. Note that we specifically avoid to use the term 'training corpus' because <strong>the sets of candidate authors of the developing and the evaluation corpora are not overlapping</strong>. Therefore, your approach should not be designed to particularly handle the candidate authors of the development corpus. </p>
<p>Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file <code>problem-info.json</code>, found in the main folder of each problem, lists the name of the folder of unknown documents and the names of the candidate-author folders:</p>
<pre class="prettyprint lang-py" style="overflow-x:auto">
{ "unknown-folder": "unknown", "candidate-authors": [ { "author-name": "candidate00001" }, { "author-name": "candidate00002" }, ... ] }
</pre>
<p>The true author of each unknown document can be seen in the file <code>ground-truth.json</code>, also found in the main folder of each problem. In addition, to handle a collection of such problems, the file <code>collection-info.json</code> includes all relevant information. In more detail, for each problem it lists its main folder, the language (either <code>"en"</code>, <code>"fr"</code>, <code>"it"</code>, <code>"pl"</code>, or <code>"sp"</code>), and the encoding (always <code>UTF-8</code>) of its documents.</p>
<pre class="prettyprint lang-py" style="overflow-x:auto">
[ { "problem-name": "problem00001", "language": "en", "encoding": "UTF-8" }, { "problem-name": "problem00002", "language": "fr", "encoding": "UTF-8" }, ... ]
</pre>
<p><a class="btn btn-default" target="_blank" href="">Download corpus</a> </p>
<p> This is a password-protected file. To obtain the password, first <a href="https://docs.google.com/forms/d/e/1FAIpQLSfR_xoBuGU3q7o3EYPoItN28UPuZENjs3wlWYEX_EdRGUyRfA/viewform">register</a> for the author identification task at PAN-2018, and then <a href="mailto:[email protected]?Subject=PAN-18" target="_top">notify</a> the PAN organizers.
After the competition, the evaluation corpus, including the ground truth data, will become available. This way, you will have everything you need to evaluate your approach on your own, while remaining comparable to those who took part in the competition.</p>
</div></div>
<div class="panel panel-default">
<div class="panel-heading">Output</div>
<div class="panel-body">
Your system should produce one output file in JSON for each authorship attribution problem. The name of the output file should be <code>answers-PROBLEMNAME.txt</code> (e.g., <code>answers-problem00001.txt</code>, <code>answers-problem00002.txt</code>), including the list of unknown documents and their predicted author:
<pre class="prettyprint lang-py" style="overflow-x:auto">
[ { "unknown-document": "unknown00001.txt", "predicted-author": "candidate00003.txt" }, { "unknown-document": "unknown00002.txt", "predicted-author": "candidate00005.txt" }, ... ]
</pre>
The submissions will be evaluated on each attribution problem separately based on their macro-averaged classification accuracy (macro-A). Participants will be ranked according to their average macro-A across all attribution problems of the evaluation corpus.
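<p>For a rough self-check only (the organizers' evaluation script is the reference), macro-averaged accuracy can be read as the unweighted mean of per-author recall, which compensates for the uneven distribution of unknown fanfics over the candidates:</p>
<pre class="prettyprint lang-py" style="overflow-x:auto">
# Sketch: macro-A interpreted as the unweighted mean of per-author recall.
from collections import defaultdict

def macro_accuracy(true_authors, predicted_authors):
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred in zip(true_authors, predicted_authors):
        total[truth] += 1
        correct[truth] += int(truth == pred)
    # Average the per-author accuracies, giving each candidate equal weight.
    return sum(correct[a] / total[a] for a in total) / len(total)
</pre>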
We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the evaluation corpus and (ii) an absolute path to an empty output directory:
<pre class="prettyprint" style="overflow-x:auto">
> mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY
</pre>
Within <code>EVALUATION-DIRECTORY</code>, a <code>collection-info.json</code> file and a number of folders, one for each attribution problem, will be found (similar to the development corpus described above). For each attribution problem, the output file should be written to <code>OUTPUT-DIRECTORY</code>.
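<p>A possible command-line skeleton matching this call, using Python's standard <code>argparse</code> module, is sketched below; the internal names are illustrative and the attribution logic itself is left as a placeholder.</p>
<pre class="prettyprint lang-py" style="overflow-x:auto">
# Sketch of the expected interface: mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY
import argparse

def main():
    parser = argparse.ArgumentParser(description='Cross-domain authorship attribution')
    parser.add_argument('-i', dest='evaluation_dir', required=True,
                        help='absolute path to the evaluation corpus')
    parser.add_argument('-o', dest='output_dir', required=True,
                        help='absolute path to an empty output directory')
    args = parser.parse_args()
    # ... read collection-info.json from args.evaluation_dir, attribute each problem,
    # and write one answers-PROBLEMNAME.txt per problem into args.output_dir ...

if __name__ == '__main__':
    main()
</pre>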
You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide below:
PAN Virtual Machine User Guide »
Once your software is deployed on your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.
Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.
<div class="panel panel-default">
<div class="panel-heading">Related Work</div>
<div class="panel-body">
<p>We refer you to:</p>
<ul> <li>
<a href="http://pan.webis.de/clef12/pan12-web/author-identification.html">Author identification task at PAN@CLEF'12</a> (closed-set authorship attribution) </li>
<li>
<a href="http://pan.webis.de/clef11/pan11-web/author-identification.html">Author identification task at PAN@CLEF'11</a> (closed-set authorship attribution)</li>
<li>
Patrick Juola. <a href="http://portal.acm.org/citation.cfm?id=1373451">Authorship Attribution</a>. In Foundations and Trends in Information Retrieval, Volume 1, Issue 3, March 2008.
</li><li>
Moshe Koppel, Jonathan Schler, and Shlomo Argamon. <a href="http://onlinelibrary.wiley.com/doi/10.1002/asi.20961/full">Computational Methods in Authorship Attribution</a>. Journal of the American Society for Information Science and Technology, Volume 60, Issue 1, pages 9-26, January 2009.
</li><li>
Efstathios Stamatatos. <a href="http://onlinelibrary.wiley.com/doi/10.1002/asi.21001/full">A Survey of Modern Authorship Attribution Methods</a>.
Journal of the American Society for Information Science and Technology, Volume 60, Issue 3, pages 538-556, March 2009.
</li></ul>
</div>
</div>
<div id="task-committee-clustering" class="row">
<div class="col-xs-12">
<h2 class="page-header">Task Chair</h2>
</div>
</div>
<div class="row">
<div class="col-xs-6">
<div class="thumbnail" style="text-align:center;">
<a href="http://www.mike-kestemont.org/" target="_blank"><img src="../pan18-figures/mike.jpg" class="img-rounded" alt="Mike Kestemont"></a>
<p style="white-space:nowrap"><a href="http://www.mike-kestemont.org/" target="_blank">Mike Kestemont</a></p>
<p style="font-size:10pt">University of Antwerp</p>
</div>
</div>
</div>
<div class="row">
<div class="col-xs-12">
<h2>Task Committee</h2>
</div>
</div>
<div class="row">
<div class="col-xs-6">
<div class="thumbnail" style="text-align:center;">
<a href="http://www.icsd.aegean.gr/lecturers/stamatatos/" target="_blank"><img src="../pan18-figures/stathis.jpg" class="img-rounded" alt="Efstathios Stamatatos"></a>
<p><a href="http://www.icsd.aegean.gr/lecturers/stamatatos/" target="_blank">Efstathios Stamatatos</a></p>
<p style="font-size:10pt">University of the Aegean</p>
</div>
</div>
<div class="col-xs-6">
<div class="thumbnail" style="text-align:center;">
<a href="http://www.clips.ua.ac.be/~walter/" target="_blank"><img src="../pan18-figures/walter.jpg" class="img-rounded" alt="Walter Daelemans"></a>
<p style="white-space:nowrap"><a href="http://www.clips.ua.ac.be/~walter/" target="_blank">Walter Daelemans</a></p>
<p style="font-size:10pt">University of Antwerp</p>
</div>
</div>
<div class="col-xs-6">
<div class="thumbnail" style="text-align:center;">
<a href="http://www.uni-weimar.de/medien/webis/people" target="_blank"><img src="../pan17-figures/martin.jpg" class="img-rounded" alt="Martin Potthast"></a>
<p style="white-space:nowrap"><a href="http://www.uni-weimar.de/medien/webis/people" target="_blank">Martin Potthast</a></p>
<p style="font-size:10pt">Bauhaus-Universität Weimar</p>
</div>
</div>
<div class="col-xs-6">
<div class="thumbnail" style="text-align:center;">
<a href="http://www.webis.de" target="_blank"><img src="../pan17-figures/benno.jpg" class="img-rounded" alt="Benno Stein"></a>
<p style="white-space:nowrap"><a href="http://www.webis.de" target="_blank">Benno Stein</a></p>
<p style="font-size:10pt">Bauhaus-Universität Weimar</p>
</div>
</div>
</div>
<script src="../../js/jquery.js"></script> <script src="../../js/bootstrap.min.js"></script> <script src="../../js/prettify.js"></script> <script> !function ($) { $(function(){ window.prettyPrint && prettyPrint() }) }(window.jQuery) </script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-70770005-1', 'auto'); ga('send', 'pageview'); </script>