Towards Stylistic Consonance in Human Movement Synthesis

Elizabeth Bradley*,1, David Capps2, Jeffrey Luftig3, Joshua M. Stuart4
1 Department of Computer Science, University of Colorado, Boulder, CO 80309-0430 USA
2 Dance Program/Music Department, Hunter College, 695 Park Avenue, New York, NY 10021, USA
3 Department of Engineering Management, University of Colorado, Boulder, CO 80309-0433 USA
4 Biomolecular Engineering, 129 Baskin Engineering, University of California at Santa Cruz, Santa Cruz, CA 95064 USA

© 2017 Bradley et al.

open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at https://creativecommons.org/licenses/by/4.0/. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

* Address correspondence to this author at the Department of Computer Science, University of Colorado, Boulder, CO 80309-0430, USA; E-mail: Elizabeth.Bradley@Colorado.EDU


A common task in dance, martial arts, animation, and many other movement genres is to produce movement that is innovative and yet stylistically consonant. In this paper, we describe two mechanisms for automating this process: our algorithms use the mathematics of chaos to achieve innovation and simple machine-learning techniques to enforce stylistic consonance. Because the goal is stylistic consonance, we evaluated the results with a Turing Test rather than with standard cross-validation-based approaches. The test indicated that the novel dance segments generated by these methods are nearing the quality of human-choreographed routines: the test-takers found the human-choreographed pieces more aesthetically pleasing than the computer-choreographed ones, but judged the computer-generated pieces to be equally plausible and not significantly less graceful.
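To make the division of labor concrete, the following is a minimal sketch, not the authors' implementation: a chaotic logistic map supplies the novelty, while a transition table learned from example sequences (standing in for the "simple machine-learning techniques") restricts each step to stylistically allowed successors. The pose names and corpus are invented for illustration.

```python
def logistic(x, r=4.0):
    """One iterate of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

def learn_transitions(corpus):
    """Learn pose -> list of observed successor poses from example sequences."""
    table = {}
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            table.setdefault(a, []).append(b)
    return table

def choreograph(table, start, length, x0=0.4):
    """Generate a phrase: chaotic iterates index into the allowed successors,
    so the sequence is novel but every transition was seen in the corpus."""
    phrase, pose, x = [start], start, x0
    for _ in range(length - 1):
        x = logistic(x)
        options = table.get(pose)
        if not options:              # dead end in the corpus: restart at the seed
            pose = start
            options = table[pose]
        pose = options[int(x * len(options)) % len(options)]
        phrase.append(pose)
    return phrase

# Hypothetical training corpus of short movement sequences.
corpus = [["plie", "tendu", "jete", "plie", "arabesque", "plie"],
          ["tendu", "plie", "jete", "arabesque", "tendu"]]
table = learn_transitions(corpus)
phrase = choreograph(table, "plie", 8)
```

Because the map is deterministic, the same seed `x0` reproduces the same phrase, while nearby seeds diverge quickly, which is one plausible way "innovation" and repeatability can coexist in a chaos-driven generator.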