New Hope for People Living with Paralysis
Eclipse researchers, along with UMB collaborator Dr. Sandra McCombe Waller, were recently featured in a video developed by Microsoft Research. The video showcases Banerjee, Robucci, and UMB professor Sandra McCombe Waller as they discuss applying Microsoft’s Lab of Things to the team’s wearable sensing system project.
Low-cost Distracted Driving Detection
My group has developed a novel proximity-sensor-based system that analyzes driver movements and infers whether the driver is distracted or driving dangerously. The key novelty of our approach is that our sensors are built into headgear, dashboards, and door mats inside vehicles and can non-intrusively detect dangerous and distracted driving using our novel signal processing algorithms. Our goal is to build these sensors into fleet vehicles.
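As an illustrative sketch (not the project's actual algorithm), distraction can be flagged when smoothed proximity readings stay far from the driver's normal head position for a sustained run of samples. The baseline distance, deviation threshold, and window size below are assumed values for illustration only.

```python
# Hypothetical sketch: flag distracted driving from a proximity-sensor
# stream. Thresholds and window size are illustrative assumptions, not
# the project's published signal processing algorithm.

def moving_average(samples, window):
    """Smooth raw proximity readings with a simple moving average."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_distraction(proximity_cm, baseline_cm=20.0, deviation_cm=10.0,
                       min_consecutive=5, window=3):
    """Return sample indices where the driver's head stays far from its
    usual position (e.g., turned toward a phone) for several readings."""
    smoothed = moving_average(proximity_cm, window)
    flagged, run = [], 0
    for i, value in enumerate(smoothed):
        if abs(value - baseline_cm) > deviation_cm:
            run += 1
            if run >= min_consecutive:
                flagged.append(i)  # sustained deviation: likely distraction
        else:
            run = 0  # head back near baseline; reset the run
    return flagged
```

The smoothing step suppresses single-sample glitches, while the consecutive-sample requirement distinguishes a sustained head turn from a brief mirror check.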
Our goal is to make it easier for home residents to make smart choices about managing energy. Renewable technologies such as solar and wind are becoming more widely adopted; however, current best practices for energy use and conservation do not necessarily apply in green homes. This project seeks to better understand energy generation and consumption in green homes, and to explore automated techniques for helping residents achieve better utilization of resources. This includes building demand-response systems, as well as energy analytics and visualization systems for home energy usage. Project website
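One simple form such a demand-response system could take is shifting a deferrable load into the hours with the largest forecast solar surplus. The sketch below is a hypothetical illustration of that idea; the hourly figures and the greedy scheduling rule are assumptions, not the project's method.

```python
# Hypothetical sketch of a demand-response rule for a green home:
# schedule a deferrable load (e.g., a water heater) into the hours with
# the greatest forecast generation surplus. Figures are illustrative.

def schedule_deferrable_load(solar_kwh, base_load_kwh, hours_needed):
    """Pick the hours whose solar generation most exceeds base load."""
    surplus = [(s - b, hour) for hour, (s, b)
               in enumerate(zip(solar_kwh, base_load_kwh))]
    surplus.sort(reverse=True)                 # biggest surplus first
    chosen = sorted(h for _, h in surplus[:hours_needed])
    return chosen                              # hours to run the load
```

For example, with midday solar peaking above a flat base load, the scheduler naturally places the load in the midday hours, keeping consumption aligned with local generation.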
An estimated 1.5 million individuals in the United States are hospitalized each year because of strokes, brain injuries, and spinal cord injuries. Severe impairments such as paralysis, paresis, weakness, and limited range of motion are common sequelae of these injuries, requiring extensive rehabilitation. This project is developing invisible sensing systems (using textile-based capacitive sensor arrays and micro-Doppler radars) embedded into bed sheets, pillows, wheelchair pads, and clothing, for environmental control and physical therapy for such paralysis patients. The system detects gestures regardless of evolving environmental and patient conditions and provides explicit real-time feedback to the user. Through the use of low-cost and ultra-low-power capacitive sensing and micro-radars built into headgear, the system reduces hospital visits and therapy costs.
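Detecting gestures "regardless of evolving environmental and patient conditions" typically requires separating slow drift from sharp, gesture-like changes. The sketch below illustrates one common way to do that, an exponentially adapting baseline; the adaptation rate and threshold are illustrative assumptions, not the project's design.

```python
# Hypothetical sketch: gesture onsets from one textile capacitive-sensor
# channel, with a slowly adapting baseline so environmental drift
# (humidity, bedding shifts) does not trigger false gestures.
# Adaptation rate and threshold are illustrative assumptions.

def detect_gestures(raw_counts, alpha=0.02, threshold=8.0):
    """Return indices where the signal departs sharply from an
    exponentially adapting baseline (candidate gesture onsets)."""
    if not raw_counts:
        return []
    baseline = raw_counts[0]
    onsets, active = [], False
    for i, count in enumerate(raw_counts):
        deviation = count - baseline
        if deviation > threshold:
            if not active:
                onsets.append(i)   # sharp departure: gesture onset
                active = True
        else:
            active = False
            # adapt only when no gesture is present, so the baseline
            # tracks slow environmental drift but not the gesture itself
            baseline += alpha * deviation
    return onsets
```

Because the baseline is frozen during a gesture, a sustained touch is reported once at its onset, while a slow drift of the same magnitude is absorbed without any detection.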
In a number of projects, we are working on developing hierarchical processing systems. These combine processors of different capabilities and energy consumption, as well as platforms with varying capabilities and energy consumption, into a single integrated system with a wide operating range and low overall energy consumption. We have applied the concept to systems for gait analysis, sensor microserver design, and solar-powered nodes for emergency control.
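The core idea of such tiered designs can be sketched in a few lines: a low-power tier screens every sample cheaply and wakes an expensive high-power tier only for interesting events. The per-sample energy costs and the trigger rule below are illustrative assumptions, not measurements from our systems.

```python
# Hypothetical sketch of hierarchical processing: an always-on cheap
# tier screens samples; outliers escalate to an expensive tier.
# Energy costs and trigger threshold are illustrative assumptions.

LOW_TIER_COST_MJ = 0.01    # assumed energy per sample on the small core
HIGH_TIER_COST_MJ = 5.0    # assumed energy per sample on the big core

def process_stream(samples, trigger=1.0):
    """Screen every sample cheaply; escalate outliers to the big core."""
    energy_mj = 0.0
    escalated = []
    for s in samples:
        energy_mj += LOW_TIER_COST_MJ          # always-on cheap screening
        if abs(s) > trigger:                   # low-power tier's trigger
            energy_mj += HIGH_TIER_COST_MJ     # wake the high-power tier
            escalated.append(s)                # detailed analysis here
    return escalated, energy_mj
```

With one interesting event in a hundred samples, this spends about 6 mJ under the assumed costs, versus 500 mJ if every sample ran on the big core, which is the wide operating range the integrated system is after.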
We study power-supply side-channel leakage on FPGAs and ASICs through hardware experimentation and simulation. Our goal is to discover new side-channel vulnerabilities and to develop techniques and EDA tools for countermeasures in embedded systems. We are especially focused on security for low-power embedded systems.
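A standard example of this class of attack is correlation power analysis (CPA): a Hamming-weight leakage model is correlated against power traces to recover a secret key byte. The simulation below is an illustrative textbook sketch, not our experimental setup; real attacks usually target a nonlinear operation such as the AES S-box, and the noise level here is an assumption.

```python
# Hypothetical CPA sketch: recover a key byte by correlating a
# Hamming-weight leakage model against simulated noisy power traces.
# The XOR target and noise level are illustrative assumptions.
import random

def hamming_weight(x):
    return bin(x).count("1")

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def simulate_traces(key_byte, n_traces, noise=0.5, seed=1):
    """One leaky sample per trace: HW(plaintext XOR key) plus noise."""
    rng = random.Random(seed)
    plaintexts = [rng.randrange(256) for _ in range(n_traces)]
    traces = [hamming_weight(p ^ key_byte) + rng.gauss(0, noise)
              for p in plaintexts]
    return plaintexts, traces

def cpa_recover(plaintexts, traces):
    """Return the key guess whose predicted leakage correlates best."""
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        model = [hamming_weight(p ^ guess) for p in plaintexts]
        c = abs(pearson(model, traces))
        if c > best_corr:
            best_guess, best_corr = guess, c
    return best_guess
```

Countermeasures such as masking or hiding aim to break exactly this correlation between the key-dependent model and the measured supply current.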
Sidewalk navigation for the visually impaired, especially those who use wheelchairs, can be a daunting task. While laws advocate proper standards for accessibility-compatible sidewalks, many sidewalks develop cracks and obstacles over time, and many have curbs and steps. Emerging wearable devices such as Google Glass provide an opportunity for continuous vision-based systems that can navigate individuals around accessibility issues on sidewalks. Unfortunately, real-time vision-based navigation systems are scarce. The problem stems from a basic limitation of vision algorithms: without a priori contextual information on a scene, it is impossible for a vision algorithm to search for objects of interest. To address this critical problem, this project proposes a cyber-physical system that augments machine vision algorithms with a priori contextual information collected using human crowdsourcing. The key idea is to use humans, in conjunction with custom systems, to build a rich library of information on scenes with accessibility issues. This library can then be used to design context-aware machine vision algorithms that can efficiently detect accessibility problems in real time. Project website
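A minimal sketch of this idea, under assumed data structures that are not the project's actual library format, is to look up crowdsourced reports near the current location and run the detectors they suggest before any others, so the expensive vision work is spent where context predicts a hit.

```python
# Hypothetical sketch: use a crowdsourced library of reported sidewalk
# issues to prioritize which vision detectors run at a location.
# Library contents, detector names, and the radius are illustrative.

# assumed a-priori context: (lat, lon) -> issue types reported there
SCENE_LIBRARY = {
    (39.255, -76.711): ["crack", "curb"],
    (39.260, -76.705): ["step"],
}

ALL_DETECTORS = ["crack", "curb", "step", "obstacle"]

def nearby_context(lat, lon, radius_deg=0.001):
    """Collect issue types crowdsourced near this location."""
    issues = []
    for (la, lo), kinds in SCENE_LIBRARY.items():
        if abs(la - lat) <= radius_deg and abs(lo - lon) <= radius_deg:
            issues.extend(kinds)
    return issues

def plan_detectors(lat, lon):
    """Run context-suggested detectors first; the rest follow only as
    the real-time budget allows -- the context-aware vision idea."""
    hinted = nearby_context(lat, lon)
    rest = [d for d in ALL_DETECTORS if d not in hinted]
    return hinted + rest
```

At a location with prior crack and curb reports, those two detectors are scheduled first; at an unreported location the plan falls back to the full detector set.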