Chengxu Zhou, an associate professor at UCL Computer Science, has been awarded an NVIDIA Academic Grant to support research on audio-driven movement for humanoid robots. The grant will back Zhou's work on real-time, audio-responsive whole-body motion. It includes two NVIDIA RTX PRO 6000 GPUs and two Jetson AGX Orin devices, enabling faster iteration and shortening the gap between simulation and real-world testing.
Zhou and his team are working on the Beat-to-Body project, in which humanoid robots respond to sound cues with expressive, adaptable whole-body movements, reacting to varied audio stimuli and adjusting their behavior in real time. By using GPU compute for simulation-based training and Jetson hardware for low-latency on-robot inference, the project aims to advance human-robot interaction driven by sound.
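To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of an audio-to-motion loop of this general kind: a simple energy-based onset detector feeds a placeholder motion policy. The function names, thresholds, and the hand-written command mapping are illustrative assumptions, not the Beat-to-Body code, which the article describes as using learned, simulation-trained behaviors.

```python
import numpy as np

SAMPLE_RATE = 16_000   # audio sample rate (Hz); an assumed value
FRAME_LEN = 512        # samples per analysis frame (~32 ms at 16 kHz)

def frame_energy(frame: np.ndarray) -> float:
    """Short-time energy of one audio frame."""
    return float(np.mean(frame ** 2))

def detect_onset(energy: float, prev_energy: float,
                 ratio: float = 2.0, floor: float = 1e-3) -> bool:
    """Flag an onset when energy exceeds a floor and jumps sharply
    relative to the previous frame. Thresholds are illustrative."""
    return energy > floor and energy > ratio * prev_energy

def motion_command(onset: bool, energy: float) -> np.ndarray:
    """Map the audio feature to a toy 3-DoF body command.

    A learned policy would replace this hand-written mapping; here the
    command amplitude simply scales with the detected sound energy.
    """
    amplitude = min(1.0, 10.0 * energy)
    return amplitude * np.array([1.0, 0.5, 0.25]) if onset else np.zeros(3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev_energy = 0.0
    # Simulate one second of quiet noise with a loud 440 Hz burst partway in.
    for i in range(SAMPLE_RATE // FRAME_LEN):
        frame = 0.01 * rng.standard_normal(FRAME_LEN)
        if i == 15:
            t = np.arange(FRAME_LEN) / SAMPLE_RATE
            frame += 0.5 * np.sin(2 * np.pi * 440 * t)
        energy = frame_energy(frame)
        cmd = motion_command(detect_onset(energy, prev_energy), energy)
        if cmd.any():
            print(f"frame {i}: onset -> command {np.round(cmd, 3)}")
        prev_energy = energy
```

In a deployed system, the per-frame feature extraction and policy inference would run on the robot's onboard computer (the Jetson devices mentioned above are suited to exactly this low-latency role), while the policy itself would be trained beforehand in GPU-accelerated simulation.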
The research at UCL's Humanoid Robotics Lab marks a significant advance: it enables robots to generate expressive movements from audio cues without relying on predefined motion templates. In the short term, this opens the door to immersive installations and interactive performances; in the long term, it paves the way for multi-robot coordination driven by shared audio signals.
As the project progresses, the focus will shift to expanding simulation-based training and to testing on real humanoid robots, with the goal of developing early closed-loop, sound-responsive behaviors. The work builds on the lab's expertise in machine learning, control systems, and human-robot interaction, pointing to a promising future for expressive interactions driven by audio cues.
