
Understanding and Generating Dialogue between Characters in Stories: Proposed Tasks

by Teleplay Technology, May 9th, 2024

Too Long; Didn't Read

Exploring machine understanding of story dialogue via new tasks and dataset, improving coherence and speaker recognition in storytelling AI.

Authors:

(1) Jianzhu Yao, The CoAI group, Department of Computer Science and Technology, Tsinghua University, and Beijing National Research Center for Information Science and Technology, Beijing, China;

(2) Ziqi Liu, The CoAI group, Department of Computer Science and Technology, Tsinghua University, and Beijing National Research Center for Information Science and Technology, Beijing, China;

(3) Jian Guan, The CoAI group, Department of Computer Science and Technology, Tsinghua University, and Beijing National Research Center for Information Science and Technology, Beijing, China;

(4) Minlie Huang, The CoAI group, Department of Computer Science and Technology, Tsinghua University, and Beijing National Research Center for Information Science and Technology, Beijing, China.

Abstract and Intro

Related Works

DIALSTORY Dataset

Proposed Tasks

Methodology

Experiments

Discussion

Future Work

Conclusion

Limitations and References

4 Proposed Tasks

We aim to measure a model’s ability to understand and generate dialogue in a story. To this end, we design a dialogue generation task, Masked Dialogue Generation, and a dialogue understanding task, Dialogue Speaker Recognition. We describe the task definitions, objectives, dataset construction, and statistics below.

4.1 Masked Dialogue Generation


Dataset Construction We apply the following constraints to construct the DialGen dataset from DIALSTORY (a sketch of the masking procedure follows the list):


• We randomly mask 30% of the dialogue turns in each story.


• We do not mask the first 50 tokens to provide sufficient background information for the story.


• We do not mask the last 30 tokens to provide ending information for the story.


• We ensure that each input story (i.e., with masked dialogue turns) mentions at least five characters.


Table 2 shows the detailed statistics.
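To make these constraints concrete, here is a minimal Python sketch of the masking step. It assumes a story has already been split into narration spans and dialogue turns; the segment layout, function name, and the use of a [MASK] placeholder are illustrative assumptions, since the paper does not show its preprocessing code in this section, and lengths are measured in characters as a stand-in for the paper's token counts.

```python
import random

def build_dialgen_example(segments, mask_ratio=0.3,
                          protected_prefix=50, protected_suffix=30,
                          min_characters=5, characters=None,
                          mask_token="[MASK]"):
    """Mask roughly 30% of dialogue turns, keeping the story's prefix and suffix.

    `segments` is a list of dicts like {"text": str, "is_dialogue": bool},
    one dict per narration span or dialogue turn; `characters` is the set of
    character names mentioned in the story. All of this is a hypothetical
    layout, not the authors' released code.
    """
    if characters is not None and len(set(characters)) < min_characters:
        return None  # discard stories mentioning fewer than five characters

    total_len = sum(len(seg["text"]) for seg in segments)

    # Collect dialogue turns lying entirely outside the protected prefix/suffix.
    candidates, offset = [], 0
    for i, seg in enumerate(segments):
        start, end = offset, offset + len(seg["text"])
        if (seg["is_dialogue"]
                and start >= protected_prefix
                and end <= total_len - protected_suffix):
            candidates.append(i)
        offset = end

    n_turns = sum(seg["is_dialogue"] for seg in segments)
    if n_turns == 0 or not candidates:
        return None
    k = min(len(candidates), max(1, round(mask_ratio * n_turns)))
    masked_ids = set(random.sample(candidates, k))

    # Replace the sampled turns with a placeholder; the removed text becomes
    # the generation target.
    masked_story, targets = [], []
    for i, seg in enumerate(segments):
        if i in masked_ids:
            masked_story.append(mask_token)
            targets.append(seg["text"])
        else:
            masked_story.append(seg["text"])
    return "".join(masked_story), targets
```

Under these assumptions, turns overlapping the protected 50-token prefix or 30-token suffix are never candidates, so roughly 30% of the remaining dialogue turns become generation targets.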

4.2 Dialogue Speaker Recognition


Dataset Construction We randomly sampled 20k stories from DIALSTORY and automatically annotated the speaker of each dialogue turn for training, and resorted to manual annotation for validation and testing. For manual annotation, we first asked one annotator to label the characters in a story and the speaker of each dialogue turn. We then asked two more annotators to check the correctness of the annotations, e.g., whether all mentioned characters were annotated and whether each dialogue speaker was correct. The first annotator re-annotated the examples that the other two annotators disagreed on, and we repeated this process until all annotators agreed on every example. We also sampled 100 stories from the training set for manual annotation to assess the accuracy of the automatic annotation, which we discuss in Section 6.2. Table 2 shows the detailed statistics.
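For the speaker recognition side, the following sketch shows what a single task instance could look like: the input is the story plus the set of candidate characters, and the model must pick a speaker for each dialogue turn. The field names, story text, and character names are invented for illustration and are not the released dataset schema.

```python
# A hypothetical DialSpk instance, shown only to make the input/output concrete.
dialspk_example = {
    # Full story text, with each dialogue turn delimited by quotation marks.
    "story": '... "Let\'s set off." ... "All right, I\'m coming." ...',
    # Candidate speakers: the characters annotated as mentioned in the story.
    "characters": ["Character A", "Character B"],
    # One gold speaker label per dialogue turn, chosen from `characters`.
    "dialogue_turns": [
        {"turn_id": 0, "text": "Let's set off.", "speaker": "Character A"},
        {"turn_id": 1, "text": "All right, I'm coming.", "speaker": "Character B"},
    ],
}
```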



This paper is available under the CC 4.0 DEED license.

