AI-powered EDGE Dance Animator Applies Generative AI to Choreography
AI analyzes the music’s rhythmic and emotional content and creates realistic dances that are also physically plausible — a real dancer could perform them.
The EDGE generative AI model can help choreographers design new moves and sequences.
Stanford University researchers have developed a generative AI model that can choreograph human dance animation to match any piece of music. It’s called Editable Dance GEneration (EDGE).
“EDGE shows that AI-enabled characters can bring a level of musicality and artistry to dance animation that was not possible before,” says Karen Liu, a professor of computer science who led a team that included two student collaborators, Jonathan Tseng and Rodrigo Castellon, in her lab.
The researchers believe that the tool will help choreographers design sequences and communicate their ideas to live dancers by visualizing 3D dance sequences. Key to the program’s advanced capabilities is editability. Liu imagines that EDGE could be used to create computer-animated dance sequences by allowing animators to intuitively edit any parts of dance motion.
For example, the animator can design specific leg movements of the character, and EDGE will “auto-complete” the entire body from that positioning in a way that is realistic, seamless, and physically plausible as well — a human could complete the moves. Above all, the moves are consistent with the animator’s choice of music.
Like other generative models for images and text — ChatGPT and DALL-E, for instance — EDGE represents a new tool for choreographic idea generation and movement planning. The editability means that dance artists and choreographers can iteratively refine their sequences move by move, position by position, adding specific poses at precise moments. EDGE then incorporates the additional details into the sequence automatically. In the near future, EDGE will allow users to input their own music and even demonstrate the moves themselves in front of a camera.
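The "auto-complete" editing described above matches a common pattern in generative models: the animator's fixed joints act as hard constraints that are re-imposed at every step of the generation loop, so the model only fills in the unconstrained parts of the motion. The sketch below is a minimal, hypothetical illustration of that constraint-masking idea; the `autocomplete_motion` function and its simple smoothing "denoiser" are stand-ins for illustration only, not the actual EDGE model or its diffusion network.

```python
import numpy as np

def autocomplete_motion(constraints, mask, steps=50, seed=0):
    """Fill in unconstrained motion values while preserving fixed ones.

    constraints: (frames, joints) array of target values for fixed joints
    mask:        (frames, joints) array, 1 where the animator fixed a joint
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=constraints.shape)  # start from random noise
    for _ in range(steps):
        # Stand-in "denoiser": pull each frame toward its temporal neighbors
        # (a real model would predict plausible motion from music features).
        x = 0.5 * x + 0.25 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0))
        # Re-impose the animator's constraints after every step, so the
        # fixed joints are never altered by the generation loop.
        x = mask * constraints + (1 - mask) * x
    return x

frames, joints = 8, 4
constraints = np.zeros((frames, joints))
mask = np.zeros((frames, joints))
mask[:, 0] = 1.0         # the animator fixes joint 0 (e.g., a leg) ...
constraints[:, 0] = 1.0  # ... to a chosen pose value in every frame
motion = autocomplete_motion(constraints, mask)
print(np.allclose(motion[:, 0], 1.0))  # prints True: fixed joint preserved
```

Because the constraints are re-applied after every iteration, the edited joints come out exactly as specified while the remaining joints are generated around them, which is what makes iterative, pose-by-pose refinement possible.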
“We think it’s a really fun and engaging way for everyone, not just dancers, to express themselves through movement and tap into their own creativity,” Liu says.
“With its ability to generate captivating dances in response to any music, we think EDGE represents a major milestone in the intersection of technology and movement,” adds Tseng. “It will unlock new possibilities for creative expression and physical engagement,” says Castellon.
The team has published a paper and will formally introduce EDGE at the Computer Vision and Pattern Recognition conference in Vancouver, British Columbia, in June. There is also a website called “EDGE Playground” where anyone who is interested can pick the tune and watch as EDGE creates a new dance sequence from scratch.
“Everyone is invited to play with it. It’s fun!” Liu says.