Engineering research through the lens of electromyography (EMG) has significantly advanced the robustness and reliability of systems that interface with the human body. In fields such as human-machine interaction, rehabilitation robotics, and extended reality (XR), high-quality EMG signals have become an established input for gesture recognition, precise control, posture analysis, and muscle fatigue assessment.
The Delsys API plays a critical role in this space by enabling seamless EMG streaming into Python, C#, and Unity environments. This functionality empowers researchers and developers to build real-time, low-latency, and customizable interfaces for a wide range of engineering applications—spanning academia, OEM development, and industry innovation.
This webinar will feature use cases of the Delsys API in virtual reality, rehabilitation robotics, musculoskeletal modelling, and myoelectric control to demonstrate how EMG-driven projects and open-source toolkits enable unique integration strategies and real-world implementations. These shared resources aim to accelerate future research by removing common developmental barriers and fostering a more collaborative scientific community.
Joanna Koshy, Application Engineer, Delsys
This webinar focuses on making EMG more accessible by lowering the technical barriers that often limit its use. Aimed at researchers in both industry and academia, it highlights practical tools and approaches to advance movement-based engineering and research applications.
Dr. Erik Scheme, Evan Campbell, & Ethan Eddy, Institute of Biomedical Engineering, University of New Brunswick, Canada
This talk will provide an overview of myoelectric control and how it has evolved in recent years. Drawing on its historical roots in prosthetics, we will describe recent work on improving the robustness of continuous myoelectric control and its generalizability to unseen environments. We will then explore its potential as an input modality for human-computer interaction and the design implications of transitioning to emerging consumer applications. Finally, we will discuss the need for broader collaboration, transparency, and reproducibility in myoelectric control research and introduce LibEMG, an open-source framework for accelerating the development and translation of myoelectric control.
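To make concrete the kind of processing such a framework streamlines, below is a minimal, self-contained sketch in plain NumPy, with synthetic data standing in for a recorded EMG stream. It illustrates the classic windowing and Hudgins-style time-domain feature extraction steps that underlie most myoelectric control pipelines; the function names and parameter choices are illustrative only and are not LibEMG's actual interface.

```python
import numpy as np

# Synthetic stand-in for a multi-channel surface EMG recording:
# 5 seconds at 2000 Hz across 8 channels (values are illustrative only).
fs = 2000
emg = np.random.randn(5 * fs, 8)

def window_signal(signal, window_len, increment):
    """Slice a (samples, channels) array into overlapping analysis windows."""
    starts = range(0, signal.shape[0] - window_len + 1, increment)
    return np.stack([signal[s:s + window_len] for s in starts])

def time_domain_features(windows):
    """Hudgins-style features per window and channel: mean absolute value
    (MAV), zero-crossing count (ZC), and waveform length (WL)."""
    mav = np.mean(np.abs(windows), axis=1)
    zc = np.sum(np.diff(np.sign(windows), axis=1) != 0, axis=1)
    wl = np.sum(np.abs(np.diff(windows, axis=1)), axis=1)
    return np.concatenate([mav, zc, wl], axis=1)

# 200 ms windows with 50 ms increments, a common choice for real-time control.
windows = window_signal(emg, window_len=400, increment=100)
features = time_domain_features(windows)
print(features.shape)  # (n_windows, 3 * n_channels), ready for a classifier
```

The resulting feature matrix is what a gesture classifier would consume at each control update; frameworks like LibEMG package these steps, along with streaming and real-time prediction, behind a unified API.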
Dr. Sean Banerjee, Dr. Natasha Banerjee, & Dr. Ashutosh Shivakumar, Terascale All-Sensing Research Studio (TARS), Wright State University, USA
Virtual Reality (VR) is a catalytic force, propelling the development and deployment of immersive, scalable, and customizable virtual simulations to understand and assess team performance and behavior in human-human and human-machine teaming scenarios. By incorporating relevant sensors into VR-based simulations, precise, user-generated multimodal data can be extracted, processed, and represented, producing insightful performance metrics that quantify and enhance collaborative performance.
This talk showcases the Terascale All-Sensing Research Studio (TARS)'s current research in "VR Augmented Human Performance Enhancement" at Wright State University, Dayton, Ohio. It discusses TARS projects in which state-of-the-art virtual reality is used to inform and enhance collaborative performance in human-human and human-machine teaming scenarios.
Mouhamed Zorkot, Rehabilitation and Assistive Robotics Group, EPFL, Switzerland
In this talk, Mouhamed will present neuromodulation strategies for lower-limb rehabilitation using EMG-driven approaches. The session will highlight how muscle activity can be acquired and processed to develop control strategies that enhance transcutaneous spinal cord stimulation and functional electrical stimulation for gait rehabilitation in individuals with neurological disorders. He will also present the technical implementation and discuss the potential of this approach to support personalized therapies in both clinical and research settings.
Dr. Vittorio Caggiano, MyoLab.AI, USA
Understanding and replicating human motor control is a grand challenge spanning neuroscience, robotics, and artificial intelligence. Human movement emerges from the complex interplay of muscles, tendons, neural signals, and dynamic interactions with the environment. While experimentally capturing the full biological and physical complexity of this system is nearly impossible, computational modelling offers a scalable path to a holistic understanding. Yet most existing simulation tools are tailored to specific domains and do not scale, limiting their ability to capture the whole complexity of human-environment behavioral interactions.
To bridge this gap, we developed MyoSuite, an open-source multidisciplinary ecosystem for high-performance, physiologically realistic simulation of human musculoskeletal and sensory systems, enabling agents to interact with complex, physically grounded environments. MyoSuite combines biomechanical fidelity with the scalability and speed demanded by modern AI research, making it a powerful platform for advancing embodied intelligence. Originally released with a set of physiologically plausible upper- and lower-limb models and control tasks for manipulation and locomotion, MyoSuite has since evolved to support more complex full-body models and control tasks. It has also grown into a broader ecosystem encompassing multiple initiatives: MyoAssist, which explores human-AI co-adaptation and the design of exoskeletal support systems; MyoSuite-MJX, a hardware-accelerated backend built on MuJoCo MJX that improves scalability and enables faster training; and MyoUser, aimed at studying ergonomics, fatigue, and human-centered design in simulated work and everyday environments.
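For readers who want to experiment with the platform directly, the sketch below follows the quickstart pattern from MyoSuite's public examples: it loads a registered musculoskeletal task and drives it with random muscle activations. The environment id and the Gym import reflect the project's published examples and may vary across MyoSuite versions, so treat this as a minimal sketch rather than version-exact usage.

```python
# Minimal MyoSuite quickstart, following the pattern in the project's examples.
# The environment id below is one of MyoSuite's registered example tasks;
# exact ids and the Gym/Gymnasium import may differ across MyoSuite versions.
import gym
import myosuite  # noqa: F401 -- importing registers the Myo* environments

env = gym.make('myoElbowPose1D6MRandom-v0')  # single-joint elbow posing task
env.reset()
for _ in range(100):
    env.step(env.action_space.sample())  # random muscle activations
env.close()
```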
In this talk, I will also present the 4th edition of the NeurIPS MyoChallenge: an annual community-driven initiative designed to benchmark progress in motor control and reinforcement learning through standardized, reproducible tasks. These challenges promote collaboration across disciplines and provide a shared testbed for evaluating control, adaptation, and learning in physiologically realistic human embodiments.
Through the open release of the platform, benchmarks, and tools, MyoSuite aims to democratize access to high-fidelity embodied simulations and accelerate discoveries in motor learning, ergonomics, assistive robotics, and digital physiological twins. Together, these efforts set the stage for a new generation of AI systems that move, adapt, and collaborate with the dexterity and nuance of the human body.
The webinar will conclude with a live Q&A panel featuring all presenters, offering attendees the opportunity to ask questions and dive deeper into EMG tools, methods, and applications.
Erik Scheme, PhD, PEng, is a Professor of Electrical and Computer Engineering and the Associate Director of the Institute of Biomedical Engineering at the University of New Brunswick. He is an expert in biological signal processing and control, applying machine learning and AI to study and improve human movement, health, and happiness. Dr. Scheme is interested in fostering international collaborations in mobility, rehabilitation, and human-machine interaction, and in creating rich and rewarding training experiences for the next generation of leaders.
Evan Campbell is a PhD candidate in Electrical and Computer Engineering at the University of New Brunswick, specializing in physiological signal machine learning for human-machine interaction. His work focuses on adaptive myoelectric control, incremental learning, and the real-world deployability of EMG-based interfaces. He is a key co-developer of LibEMG, an open-source Python library that promotes reproducible and accessible research in myoelectric control by supporting both offline analysis and real-time interaction.
Ethan Eddy is a PhD student in Electrical and Computer Engineering at the University of New Brunswick. His research explores the use of myoelectric control for ubiquitous human-computer interaction, with a focus on robust recognition of discrete gestures such as flicks and swipes. He is a key co-developer of LibEMG and has demonstrated the potential of large cross-user models to enable calibration-free myoelectric control.
Ashutosh Shivakumar is an Assistant Professor in the Department of Computer Science and Engineering at Wright State University, Dayton, Ohio. He has over 5 years of interdisciplinary research experience in Human-Centered Computing (HCC) and Human-Machine Teaming (HMT). His work focuses on the design and development of mobile-cloud software applications that leverage natural language processing and machine learning to facilitate and enhance human-human and human-machine collaboration in spoken-language communication tasks. Examples of his work include distributed interruption-management systems that minimize the disruptiveness of interruptions in collaborative communication tasks; dialogue assessment and training systems that enhance healthcare provider-client collaboration for client behavior modification; and learner-centered training tools that promote language acquisition among dual-language learners.
Currently, at the Terascale All-Sensing Research Studio (TARS) at Wright State University, Dayton, Ohio, Ashutosh Shivakumar, in collaboration with colleagues Dr. Natasha Banerjee and Dr. Sean Banerjee, focuses on leveraging extended reality (XR) technologies, biomedical sensors, and principles of social VR and human-machine teaming to design virtual simulations that optimize human performance assessment in areas such as human-robot teaming (HRT), healthcare delivery, and pedagogy.
Natasha Kholgade Banerjee is the LexisNexis Endowed Co-Chair for Advanced Data Science and Engineering and Associate Professor in the Department of Computer Science & Engineering at Wright State University in Dayton, Ohio, USA. She is co-founder and co-director of the Terascale All-Sensing Research Studio (TARS). She performs research at the intersection of computer graphics, computer vision, and machine learning. Her research uses large-scale multimodal, multi-viewpoint data to contribute artificial intelligence (AI) algorithms imbued with a comprehensive awareness of how humans interact with objects in everyday environments. Her work addresses data-driven object repair and robotic assembly, human-robot handover informed by multi-person interactions, and AI-driven detection of the need for assistance from multimodal data on human-object interactions. Her work has been published at prestigious venues such as ICRA, CVPR, NeurIPS, ECCV, IEEE RO-MAN, SIGGRAPH Asia, and IEEE VR.
Sean Banerjee is the LexisNexis Endowed Co-Chair for Advanced Data Science and Engineering and Associate Professor in the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio. He is co-founder and co-director of the Terascale All-Sensing Research Studio (TARS). He performs research at the intersection of human-computer interaction and artificial intelligence (AI). His research interests center on using human behavior data, collected with multiview multimodal sensing systems in real and virtual environments, to contribute AI algorithms that are aware of the nuances of everyday human-human and human-object interactions for VR security, immersive learning environments, assistive robotics, and healthcare. His work has been published at prestigious venues such as IEEE VR, IEEE RO-MAN, ICRA, CVPR, NeurIPS, ECCV, and SIGGRAPH Asia.
Mouhamed Zorkot is a neuroengineer and PhD candidate at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. His research focuses on neurotechnologies for gait rehabilitation, with a particular interest in neuromodulation strategies (such as spinal cord stimulation and functional electrical stimulation), wearable technologies, and lower-limb exoskeletons.
He has experience in both development and clinical settings, combining control strategies and data-driven approaches with hardware integration to enhance motor recovery in individuals with neurological impairments. Through translational projects and collaboration with healthcare professionals, Mouhamed aims to develop practical and personalized rehabilitation strategies that bridge the gap between research and clinical application.
Vittorio Caggiano, PhD, holds adjunct appointments at Harvard Medical School and Spaulding Rehabilitation Hospital in Boston, USA, as well as at King's College London. He is the co-founder and CTO of MyoLab.AI and a former technical lead at Meta AI (Facebook FAIR) and IBM Research.
Dr. Caggiano holds a PhD from the University of Tübingen (Germany) and completed postdoctoral training at MIT (USA) and the Karolinska Institute (Sweden). His work sits at the intersection of neuroscience and AI, with key contributions spanning mirror neuron dynamics, midbrain and spinal locomotor circuits, and the neural control of locomotion and dexterity.
He has authored over 40 scientific publications, including seminal research in motor neuroscience and AI. As one of the creators and maintainers of MyoSuite and the NeurIPS MyoChallenge, Dr. Caggiano is actively building and supporting a global research community focused on the next generation of human-embodied artificial intelligence.