Towards Stylistic Consonance in Human Movement Synthesis
Elizabeth Bradley*, David Capps, Jeffrey Luftig, Joshua M. Stuart
Identifiers and Pagination: Year: 2010
First Page: 1
Last Page: 19
Publisher Id: TOAIJ-4-1
Article History: Electronic publication date: 08/2/2010
Collection year: 2010
open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: (https://creativecommons.org/licenses/by/4.0/legalcode). This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
A common task in dance, martial arts, animation, and many other movement genres is for a performer or character to move in an innovative and yet stylistically consonant fashion. In this paper, we describe two mechanisms for automating this process and evaluate the results with a Turing Test. Our algorithms use the mathematics of chaos to achieve innovation and simple machine-learning techniques to enforce stylistic consonance. Because our goal is stylistic consonance rather than predictive accuracy, we evaluated the results with a Turing Test instead of standard cross-validation-based approaches. This test indicated that the novel dance segments generated by these methods are nearing the quality of human-choreographed routines. The test-takers found the human-choreographed pieces more aesthetically pleasing than the computer-choreographed ones, but the computer-generated pieces were judged to be equally plausible and not significantly less graceful.
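The abstract's pairing of chaotic innovation with a learned stylistic filter can be sketched in outline. The snippet below is a minimal illustration, not the paper's actual method: it assumes a corpus of movement segments and a hypothetical `style_score` function standing in for a trained classifier, and it uses the logistic map as a generic source of chaotic variation.

```python
def logistic_map(x, r=3.99):
    """One step of the logistic map; for r near 4 the orbit is chaotic on (0, 1)."""
    return r * x * (1 - x)

def generate_sequence(corpus, style_score, n_moves=8, threshold=0.5, x0=0.37):
    """Chaotically propose movement segments from `corpus`, keeping only those
    that the (hypothetical) learned style filter rates as stylistically consonant."""
    sequence, x = [], x0
    while len(sequence) < n_moves:
        x = logistic_map(x)
        candidate = corpus[int(x * len(corpus))]  # chaos supplies the novelty
        if style_score(candidate) >= threshold:   # classifier enforces consonance
            sequence.append(candidate)
    return sequence
```

In a real system the corpus entries would be motion-capture or notation segments and `style_score` would come from the machine-learning stage; here both are placeholders for illustration.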