Program
| Timestamp | Topic |
| --- | --- |
| 0:00 | Welcome and Introduction by Joanna Koshy, Application Specialist at Delsys |
| 2:26 | Erik Scheme, Evan Campbell, Ethan Eddy — From Prosthetics to Human-Computer Interactions: Advances and Trends in Myoelectric Control |
| 24:28 | Sean Banerjee, Natasha Banerjee, Ashutosh Shivakumar — VR Augmented Human Performance Enhancement |
| 51:39 | Mouhamed Zorkot — Online EMG-Driven Neuromodulation Strategies for Lower Limb Rehabilitation |
| 1:13:50 | Vittorio Caggiano — MyoSuite: An Open-Source Ecosystem for High-Performance, Scalable Human Embodied AI |
| 1:35:24 | Q&A and Roundtable Discussion |
Speakers

Erik Scheme, Evan Campbell & Ethan Eddy
"From Prosthetics to Human-Computer Interactions: Advances and Trends in Myoelectric Control" Speakers
Erik Scheme, PhD, PEng, is a Professor of Electrical and Computer Engineering and the Associate Director of the Institute of Biomedical Engineering at the University of New Brunswick. He is an expert in biological signal processing and control, applying machine learning and AI to study and improve human movement, health, and happiness. Dr. Scheme is interested in fostering international collaborations in mobility, rehabilitation, and human-machine interaction, and in creating rich and rewarding training experiences for the next generation of leaders.
Evan Campbell is a PhD candidate in Electrical and Computer Engineering at the University of New Brunswick, specializing in machine learning on physiological signals for human-machine interaction. His work focuses on adaptive myoelectric control, incremental learning, and the real-world deployability of EMG-based interfaces. He is a key co-developer of LibEMG, an open-source Python library that promotes reproducible and accessible research in myoelectric control by supporting both offline analysis and real-time interaction.
Ethan Eddy is a PhD student in Electrical and Computer Engineering at the University of New Brunswick. His research explores the use of myoelectric control for ubiquitous human-computer interaction, with a focus on robust recognition of discrete gestures such as flicks and swipes. He is a key co-developer of LibEMG and has demonstrated the potential of large cross-user models to enable calibration-free myoelectric control.

Ashutosh Shivakumar, Natasha & Sean Banerjee
"VR Augmented Human Performance Enhancement" Speakers
Ashutosh Shivakumar is an Assistant Professor in the Department of Computer Science and Engineering at Wright State University, Dayton, Ohio. He has over 5 years of interdisciplinary research experience in Human-Centered Computing (HCC) and Human-Machine Teaming (HMT). His work focuses on the design and development of mobile-cloud computing software applications that leverage natural language processing and machine learning to facilitate and enhance human-human and human-machine collaboration in spoken-language communication tasks. Examples of his work include distributed interruption management systems that minimize the disruptiveness of interruptions in collaborative communication tasks, dialogue assessment and training systems that enhance healthcare provider-client collaboration for client behavior modification, and learner-centered training tools that promote language acquisition among dual language learners.
Currently, at the Terascale All-Sensing Research Studio (TARS) at Wright State University, Dayton, Ohio, Ashutosh Shivakumar, in collaboration with colleagues Dr. Natasha Banerjee and Dr. Sean Banerjee, focuses on leveraging extended reality (XR) technologies, biomedical sensors, and principles of social VR and human-machine teaming to design virtual simulations that optimize human performance assessment in areas such as human-robot teaming (HRT), healthcare delivery, and pedagogy.
Natasha Kholgade Banerjee is the LexisNexis Endowed Co-Chair for Advanced Data Science and Engineering and Associate Professor in the Department of Computer Science & Engineering at Wright State University in Dayton, Ohio, USA. She is co-founder and co-director of the Terascale All-Sensing Research Studio (TARS). She performs research at the intersection of computer graphics, computer vision, and machine learning. Her research uses large-scale multimodal multi-viewpoint data to contribute artificial intelligence (AI) algorithms imbued with comprehensive awareness of how humans interact with objects in everyday environments. Her work addresses data-driven object repair and robotic assembly, human-robot handover informed by multi-person interactions, and AI-driven detection of need for assistance from multimodal data on human-object interactions. Her work has been published at prestigious venues such as ICRA, CVPR, NeurIPS, ECCV, IEEE RO-MAN, SIGGRAPH Asia, and IEEE VR.
Sean Banerjee is the LexisNexis Endowed Co-Chair for Advanced Data Science and Engineering and Associate Professor in the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio. He is co-founder and co-director of the Terascale All-Sensing Research Studio (TARS). He performs research at the intersection of human-computer interaction and artificial intelligence (AI). His research focuses on using human behavior data, collected with multiview multimodal sensing systems in real and virtual environments, to contribute AI algorithms that show awareness of the nuances of everyday human-human and human-object interactions for VR security, immersive learning environments, assistive robotics, and healthcare. His work has been published at prestigious venues such as IEEE VR, IEEE RO-MAN, ICRA, CVPR, NeurIPS, ECCV, and SIGGRAPH Asia.

Mouhamed Zorkot
"Online EMG-Driven Neuromodulation Strategies for Lower Limb Rehabilitation" Speaker
Mouhamed Zorkot is a neuroengineer and PhD candidate at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. His research focuses on neurotechnologies for gait rehabilitation, with a particular interest in neuromodulation strategies such as spinal cord stimulation and functional electrical stimulation, as well as wearable technologies and lower limb exoskeletons.
He has experience in both development and clinical settings, combining control strategies and data-driven approaches with hardware integration to enhance motor recovery in individuals with neurological impairments. Through translational projects and collaboration with healthcare professionals, Mouhamed aims to develop practical and personalized rehabilitation strategies that bridge the gap between research and clinical application.

Vittorio Caggiano
"MyoSuite: An Open-Source Ecosystem for High-Performance, Scalable Human Embodied AI" Speaker
Vittorio Caggiano, PhD, holds adjunct appointments at Harvard Medical School and Spaulding Rehabilitation Hospital in Boston, USA, as well as at King's College London. He is the co-founder and CTO of MyoLab.AI and a former technical lead at Meta AI (Facebook FAIR) and IBM Research.
Dr. Caggiano holds a PhD from the University of Tübingen (Germany) and completed postdoctoral training at MIT (USA) and the Karolinska Institute (Sweden). His work sits at the intersection of neuroscience and AI, with key contributions spanning mirror neuron dynamics, midbrain and spinal locomotor circuits, and the neural control of locomotion and dexterity.
He has authored over 40 scientific publications, including seminal research in motor neuroscience and AI. As one of the creators and maintainers of MyoSuite and the NeurIPS MyoChallenge, Dr. Caggiano is actively building and supporting a global research community focused on the next generation of human embodied artificial intelligence.