Computer tells convincing story
Technology Research News
Although text generators have been around
for decades, there’s a good reason why a machine has never won a Pulitzer.
The text tends to be choppy and simplistic.
A pair of researchers at North Carolina State University has developed
software that produces more sophisticated prose by combining the rules
of language generation with artificial
intelligence research on story generation. The result is output that
comes closer to the free-flowing speech of HAL than the subject-verb-object
utterances of E.T.
Given a “logic-based representation of characters, props, actions, and
descriptions, [it can] convert them into prose that looks just like the
text you would get if you bought a book off the shelf at a bookstore,”
said Charles Callaway, now a research scientist at the center for scientific
and technological research at the Cultural Institute of Trentino in Italy.
The Storybook software does this by using a narrative plan, which is a
logical representation of the characters and actions in a story, Callaway said.
The researchers borrowed the concept of a narrative plan from the Russian
Formalist school of literary criticism, which explains a story in terms
of fabula and syuzhet. The fabula is the sum of events. It is the synopsis
of a film or a book you would tell a friend. The syuzhet is the plot or
the order in which the fabula unfolds. The narrative plan charts the who,
what, when, where, why, and how of a story along with the order of events.
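The fabula/syuzhet split can be made concrete with a small data structure. The sketch below is purely illustrative, assuming a simple event record; the names and fields are hypothetical and are not Storybook's actual logic-based representation.

```python
# Hypothetical sketch of a narrative plan: the fabula is the full set of
# events (who, what, when, where, why, how); the syuzhet is the order in
# which those events are actually told.
from dataclasses import dataclass, field


@dataclass
class Event:
    actor: str        # who
    action: str       # what
    time: str         # when
    place: str        # where
    reason: str = ""  # why
    manner: str = ""  # how


@dataclass
class NarrativePlan:
    fabula: list[Event] = field(default_factory=list)  # all events
    syuzhet: list[int] = field(default_factory=list)   # telling order (indices)

    def told_order(self) -> list[Event]:
        """Return events in the order the story presents them."""
        return [self.fabula[i] for i in self.syuzhet]


plan = NarrativePlan(
    fabula=[
        Event("Grandmother", "falls ill", "morning", "cottage"),
        Event("Red Riding Hood", "sets out", "noon", "village",
              reason="to bring food", manner="carrying a basket"),
    ],
    syuzhet=[1, 0],  # the story opens mid-journey, then flashes back
)
print([e.action for e in plan.told_order()])  # ['sets out', 'falls ill']
```

Reordering the syuzhet while leaving the fabula untouched tells the same underlying story a different way, which is exactly the distinction the Russian Formalists drew.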
Storybook’s prose-generation architecture first constructs short sentences
and paragraphs. It then looks for prior references, choosing synonyms
for previously used words, and matches the actors and events in the story
with appropriate semantics and syntax. Last, it reorders the text to eliminate
short, choppy sentences and formats it.
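The revision stages described above can be sketched in miniature: generate one flat clause per event, swap repeated actor mentions for pronouns, then merge adjacent clauses that share an actor. This is a toy simplification for illustration, not Storybook's actual algorithm; the event tuples and pronoun table are invented.

```python
# Toy pipeline: clause generation -> pronominalization -> aggregation.
PRONOUNS = {"Red Riding Hood": "she", "the wolf": "he"}


def clauses(events):
    """One flat subject-verb clause per (actor, action) event."""
    return [(actor, f"{actor} {action}") for actor, action in events]


def pronominalize(items):
    """Replace a repeated actor with a pronoun on later mentions."""
    out, last_actor = [], None
    for actor, text in items:
        if actor == last_actor and actor in PRONOUNS:
            text = text.replace(actor, PRONOUNS[actor], 1)
        out.append((actor, text))
        last_actor = actor
    return out


def aggregate(items):
    """Join consecutive same-actor clauses to avoid choppy sentences."""
    sentences, buffer, last_actor = [], [], None
    for actor, text in items:
        if actor == last_actor and buffer:
            # drop the repeated subject and conjoin the predicates
            buffer.append(text.split(" ", 1)[1])
        else:
            if buffer:
                sentences.append(" and ".join(buffer) + ".")
            buffer, last_actor = [text], actor
    if buffer:
        sentences.append(" and ".join(buffer) + ".")
    return " ".join(s[0].upper() + s[1:] for s in sentences)


events = [
    ("Red Riding Hood", "entered the forest"),
    ("Red Riding Hood", "heard a rustle"),
    ("the wolf", "stepped onto the path"),
]
print(aggregate(pronominalize(clauses(events))))
# Red Riding Hood entered the forest and heard a rustle. The wolf stepped onto the path.
```

Even this crude pass turns three subject-verb-object fragments into two readable sentences, which suggests why the full system's output reads so much less choppily than earlier generators'.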
The result is text composed of average-length sentences with a wide vocabulary
that includes pronouns, adverbs, and dialogue. It is longer, more varied,
and higher-quality prose than previous systems have produced, Callaway said. “Only
one other computer system has produced text longer than two paragraphs,
and it was in a scientific domain,” he said.
The system could generate conversation in role-playing games, said Callaway.
It could also be used to create animated teaching agents in tutoring systems.
“If … a young child could create the characters and plot of a story by
dragging around icons on the screen with a mouse, then the system could
generate a story corresponding to their selections,” he said. The child
could then alter the elements to read a subtly different story each time.
“Researchers in tutoring systems believe that this type of self-motivation
will result in children recreating the story [many] times,” Callaway said.
The work is a nice piece of research that “shows how natural language
generation (NLG) technology can be used to enhance the quality of story
generators,” said Ehud Reiter, a lecturer in computing science at the
University of Aberdeen in Scotland. “This is the sort of thing that people
in the past have vaguely thought about, but I believe this is the first
serious attempt” to integrate natural language generation and story generation,
“Story generation and natural-language generation … have in fact been
quite separate strands of research, with people in [the former] focusing
on high-level content issues … and ignoring how well it reads at the sentence
level; and people in [the latter] focusing on how well it reads at a low
level, but in non-fiction applications such as generating customer-service
letters and weather reports,” said Reiter.
“I was also pleased that the authors … made an attempt to evaluate their
stories experimentally, which, as far as I know, has not been done in
previous research on story generation,” said Reiter.
One barrier to practical use, however, is that the system requires a considerable
amount of knowledge to be encoded in knowledge bases and a finite-state
narrative model, Reiter said. In other words, “setting up the system to
produce a story [requires] a lot of time specifying in computer-friendly
form information about the structure of the story and the world the story
takes place in,” he said.
In addition, the work may be “more interesting scientifically than practically,
since there is no shortage of human authors who are willing to write stories,”
he added. One possible application is incorporating messages like ‘smoking
is bad for you’ in the stories, he said.
The system could be in practical use within three years, Callaway said.
“There still is a lot of work to be done on the front end … but I'm confident
that I won't be the only person working on it. There are a few topics
in [artificial intelligence research] that draw everybody's attention,
like robots and speech recognition, and I think story generation is one
of those areas,” he said.
The next step is to speed up behind-the-scenes processes. “It took me
four months to write the input logic for the two-page story shown in the
presented paper, although it takes the system 45 seconds to turn that
into text,” said Callaway. He is also looking into prose generation in
Spanish or Italian, he said.
Callaway’s research colleague was James C. Lester at North Carolina State
University. The research was funded by the university. They presented
the paper at the Seventeenth International Joint Conference on Artificial
Intelligence held from August 6 to August 10 in Seattle, Washington.
TRN Categories: Natural Language Processing; Artificial Intelligence
Story Type: News
Related Elements: Technical paper, "Narrative Prose Generation,"
presented at the Seventeenth International Joint Conference
on Artificial Intelligence, in Seattle, Washington, August 7, 2001.