Author
Arnav Harish Jhala
Bio: Arnav Harish Jhala is an academic researcher from North Carolina State University. The author has contributed to research on narrative inquiry and narrative criticism, has an h-index of 1, and has co-authored 1 publication receiving 9 citations.
Papers
01 Jan 2009
TL;DR: An end-to-end camera planning system, Darshak, that constructs a visual narrative discourse of a given story in a 3D virtual environment, using a hierarchical partial order causal link planning algorithm to generate narrative plans that contain both story and camera actions.
Abstract: Narrative is one of the fundamental ways in which humans organize information. 3D virtual environments provide a compelling new medium for creating and sharing narratives. In pre-rendered virtual environments like animated movies, directors communicate complex narratives by carefully constructing them shot-by-shot. To do this, a film's director exploits the viewer's familiarity with narrative patterns and cinematic idioms to effectively convey a structured story. In real-time environments like games and training simulations, however, a system has much less control over the stories that need to be told, since often novel stories are constructed on demand and tailored to a specific session or user's needs. In many contexts, stories are built not solely by the system, but collaboratively with many users whose choices for action contribute to the construction of unanticipated narrative structure.
In the past, intelligent cinematography systems have been developed to automatically record the actions of users within a virtual world and then to construct coherent visualizations that communicate these action sequences. While these systems generate coherent visualizations, they do not attempt to address the careful construction of narrative discourse based on established and identifiable patterns of narrative communication. Current automated camera systems take into account local coherence of shots and transitions but do not address the rhetorical coherence of the communication across multiple shots.
I describe an end-to-end camera planning system - Darshak - that constructs visual narrative discourse of a given story in a 3D virtual environment. Darshak uses a hierarchical partial order causal link planning algorithm to generate narrative plans that contain both story and camera actions. Dramatic situation patterns commonly used by writers of fictional narratives and endorsed by narrative theorists are formalized as communicative plan operators that provide a basis for structuring the cinematic content of the story's visualization. The dramatic patterns are realized through abstract communicative operators that represent operations on a viewer's beliefs about the story and its telling. Camera shots and transitions are defined in this plan-based framework as execution primitives.
Representation of narrative discourse as a hierarchical plan structure enables us to utilize (1) the hierarchical nature of narrative patterns and film idioms through the hierarchy in decompositional plan operators, and (2) explicit representation of causal motivation for selection of shots through causal links. I present an empirical evaluation of the algorithm, based on cognitive metrics, for three properties of cinematic discourse: Saliency, Coherence and Temporal Consistency.
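The plan-based representation described above can be illustrated with a minimal sketch (hypothetical names and structures, not Darshak's actual code): a plan holds both story and camera steps, and causal links make explicit why each shot was selected, since a link may only be added when the producing step actually achieves a condition the consuming step requires.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Step:
    """A plan step: either a story action or a camera directive."""
    name: str
    kind: str            # "story" or "camera"
    preconditions: tuple = ()
    effects: tuple = ()

@dataclass(frozen=True)
class CausalLink:
    """Records that `producer` achieves `condition` for `consumer`,
    making the causal motivation for each step explicit."""
    producer: Step
    condition: str
    consumer: Step

@dataclass
class Plan:
    steps: list = field(default_factory=list)
    links: list = field(default_factory=list)
    ordering: list = field(default_factory=list)  # (before, after) pairs

    def add_link(self, producer, condition, consumer):
        # A causal link is only valid if the producer supplies the
        # condition and the consumer actually needs it.
        assert condition in producer.effects
        assert condition in consumer.preconditions
        self.links.append(CausalLink(producer, condition, consumer))
        self.ordering.append((producer, consumer))

# Example: a story action motivates the camera shot that conveys it.
fight = Step("hero-fights-villain", "story", effects=("conflict-shown",))
shot = Step("close-up-hero", "camera", preconditions=("conflict-shown",))
plan = Plan()
plan.steps += [fight, shot]
plan.add_link(fight, "conflict-shown", shot)
print(len(plan.links))  # 1
```

In a full hierarchical planner, decompositional operators would expand abstract communicative steps into sub-plans of this same form; the sketch shows only the causal-link bookkeeping.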
9 citations
Cited by
01 Jan 2007
TL;DR: A translation apparatus is provided which comprises an inputting section for inputting a source document in a natural language and a layout analyzing section for analyzing layout information.
Abstract: A translation apparatus is provided which comprises: an inputting section for inputting a source document in a natural language; a layout analyzing section for analyzing layout information including cascade information, itemization information, numbered itemization information, labeled itemization information and separator line information in the source document inputted by the inputting section and specifying a translation range on the basis of the layout information; a translation processing section for translating a source document text in the specified translation range into a second language; and an outputting section for outputting a translated text provided by the translation processing section.
740 citations
TL;DR: A novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - is described that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals.
Abstract: Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors - logical and aesthetic - that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem - to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
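The intentionality constraint that distinguishes IPOCL from a conventional POCL planner can be sketched as a simple post-hoc check (illustrative only; the published algorithm interleaves this reasoning with plan refinement): every character action must be covered by a frame of commitment tying it to a goal that character has adopted.

```python
def unmotivated_actions(actions, frames):
    """Return actions not explained by any adopted character goal.

    actions: list of (character, action) pairs in plot order
    frames:  dict mapping character -> set of actions that serve one
             of that character's adopted goals (a stand-in for IPOCL's
             frames of commitment)
    A causally sound plan is only believable, in IPOCL's sense, when
    this list is empty.
    """
    return [(who, act) for who, act in actions
            if act not in frames.get(who, set())]

actions = [("knight", "steal-sword"), ("knight", "slay-dragon")]
frames = {"knight": {"slay-dragon"}}   # stealing the sword is unexplained
print(unmotivated_actions(actions, frames))
# [('knight', 'steal-sword')]
```

An intent-based planner would respond to such an unmotivated action by either finding a goal that explains it or revising the plan, rather than merely flagging it.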
507 citations
02 Jul 2010
TL;DR: This paper presents a fully automated real-time cinematography system that constructs a movie from a sequence of low-level narrative elements (events, key subject actions and key subject motions) and offers an expressive framework that delivers notable variations in directorial style.
Abstract: Developers of interactive 3D applications, such as computer games, are expending increasing levels of effort on the challenge of creating more narrative experiences in virtual worlds. As a result, there is a pressing requirement to automate an essential component of a narrative -- the cinematography -- and develop camera control techniques that can be utilized within the context of interactive environments in which actions are not known in advance. Such camera control algorithms should be capable of enforcing both low-level geometric constraints, such as the visibility of key subjects, and more elaborate properties related to cinematic conventions such as characteristic viewpoints and continuity editing. In this paper, we present a fully automated real-time cinematography system that constructs a movie from a sequence of low-level narrative elements (events, key subject actions and key subject motions). Our system computes appropriate viewpoints on these narrative elements, plans paths between viewpoints and performs cuts following cinematic conventions. Additionally, it offers an expressive framework which delivers notable variations in directorial style.
Our process relies on a viewpoint space partitioning technique in 2D that identifies characteristic viewpoints of relevant actions for which we compute the partial and full visibility. These partitions, to which we refer as Director Volumes, provide a full characterization over the space of viewpoints. We build upon this spatial characterization to select the most appropriate director volumes, reason over the volumes to perform appropriate camera cuts and rely on traditional path-planning techniques to perform transitions.
Our system represents a novel and expressive approach to cinematic camera control, which stands in contrast to existing techniques that are mostly procedural, concentrate only on isolated aspects (visibility, transitions, editing, framing) or do not account for variations in directorial style.
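The viewpoint-space partitioning behind Director Volumes can be illustrated with a toy, point-sampled sketch (hypothetical code; the actual system computes geometric 2D partitions and distinguishes partial from full visibility): candidate camera positions are grouped by the set of key subjects they can see, with a crude segment-vs-occluder visibility test.

```python
import math

def visible(camera, subject, occluders, radius=0.5):
    """Crude 2D visibility test: the segment camera->subject must not
    pass within `radius` of any occluder (all points are (x, y))."""
    (cx, cy), (sx, sy) = camera, subject
    dx, dy = sx - cx, sy - cy
    length2 = dx * dx + dy * dy
    for ox, oy in occluders:
        # Closest point on the segment to the occluder.
        t = max(0.0, min(1.0, ((ox - cx) * dx + (oy - cy) * dy) / length2))
        px, py = cx + t * dx, cy + t * dy
        if math.hypot(ox - px, oy - py) < radius:
            return False
    return True

def partition(cameras, subjects, occluders):
    """Group candidate viewpoints by the set of subjects they see --
    a point-sampled analogue of characteristic-view partitions."""
    cells = {}
    for cam in cameras:
        key = frozenset(name for name, pos in subjects.items()
                        if visible(cam, pos, occluders))
        cells.setdefault(key, []).append(cam)
    return cells

subjects = {"hero": (0.0, 0.0), "villain": (4.0, 0.0)}
occluders = [(1.0, 1.5)]            # a pillar blocking some views of the hero
cells = partition([(2.0, 3.0), (2.0, -3.0)], subjects, occluders)
```

Here the camera above the pillar falls in a cell seeing only the villain, while the camera below sees both subjects; an editing system would then choose among cells rather than raw positions.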
62 citations
28 Nov 2011
TL;DR: The Director's Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera, enables efficient exploration of a wide range of cinematographic possibilities, and rapid production of computer-generated animated movies.
Abstract: We present the Director's Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computing suggestions, the system considers established cinema conventions of continuity and composition, along with the filmmaker's previously selected suggestions and manually crafted camera compositions, via a machine learning component that adapts shot-editing preferences from user-created camera edits. The result is a novel workflow based on interactive collaboration of human creativity with automated intelligence that enables efficient exploration of a wide range of cinematographic possibilities and rapid production of computer-generated animated movies.
50 citations
01 Jan 2011
TL;DR: It is argued that the traditional goal of AI in games, to win the game, is but one of several interesting goals to pursue; the alternative goal is to make the human player's play experience "better," i.e., AI systems in games should reason about how to deliver the best possible experience within the context of the game.
Abstract: Much research on artificial intelligence in games has been devoted to creating opponents that play competently against human players. We argue that the traditional goal of AI in games-to win the game-is but one of several interesting goals to pursue. We promote the alternative goal of making the human player’s play experience “better,” i.e., AI systems in games should reason about how to deliver the best possible experience within the context of the game. The key insight we offer is that approaching AI reasoning for games as “storytelling reasoning” makes this goal much more attainable. We present a framework for creating interactive narratives for entertainment purposes based on a type of agent called an experience manager. An experience manager is an intelligent computer agent that manipulates a virtual world to dynamically adapt the narrative content the player experiences, based on his or her actions and inferences about his or her preferred style of play. Following a theoretical perspective on game AI as a form of storytelling, we discuss the implications of such a perspective in the context of several AI technological approaches.
29 citations