Prof. Matuszek receives prestigious NSF CAREER award for robotics research

Matuszek’s new CAREER award focuses on enabling robots to learn how speech refers to objects and environments when interacting with diverse end users.

Prof. Cynthia Matuszek receives prestigious
NSF CAREER award for robotics research

CSEE professor Cynthia Matuszek received an NSF CAREER award to support her research on improving the ability of robots to interact with people in everyday environments. The five-year award will provide nearly $550,000 in funds to support research by Dr. Matuszek and her students in the Interactive Robotics and Language lab.

The CAREER award is part of the NSF Faculty Early Career Development Program and is considered one of NSF’s most prestigious grants.  It supports faculty members who are beginning their independent careers and have “the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.”  One of the program’s central goals is to help early-career faculty “build a firm foundation for a lifetime of leadership in integrating education and research.”

Dr. Matuszek joined UMBC in 2014 after receiving her Ph.D. at the University of Washington in Seattle, where she was co-advised by Dieter Fox and Luke Zettlemoyer.  Before beginning her graduate studies, she was a senior research lead at Cycorp.

Dr. Matuszek’s proposal, Robots, Speech, and Learning in Inclusive Human Spaces, addresses the problem of how robots can use spoken language and perception to learn how to support people.  A description of her project is below.

“The goal of this project is to allow robots to learn to understand spoken instructions and information about the world directly from speech with end users. Modern robots are small and capable, but not adaptable enough to perform the variety of tasks people may require. Meanwhile, too many machine learning systems work poorly for people from under-represented groups. The research will use physical, real-world context to enable learning directly from speech, including constructing a data set that is large, realistic, and inclusive of speakers from diverse backgrounds.

As robots become more capable and ubiquitous, they are increasingly moving into traditionally human-centric environments such as health care, education, and eldercare. As robots engage in tasks as diverse as helping with household work, deploying medication, and tutoring students, it becomes increasingly critical for them to interact naturally with the people around them. Key to this progress is the development of robots that acquire an understanding of goals and objects from natural communications with a diverse set of end-users. One way to address this is using language to build systems that learn from people they are interacting with. Algorithms and systems developed in this project will allow robots to learn about the world around them from linguistic interactions. This research will focus on understanding spoken language about the physical world from diverse groups of people, resulting in systems that are more able to robustly handle a wide variety of real-world interactions. Ultimately, the project will increase the usability and fairness of robots deployed in human spaces.

This CAREER project will study how robots can learn about noisy, unpredictable human environments from spoken language combined with perception, using context derived from sensors to constrain the learning problem. Grounded language refers to language that occurs in and refers to the physical world in which robots operate. Human interactions are fundamentally contextual: when learning about the world, we focus on learning by considering not only direct communication but also the context of that interaction. This work will focus on learning semantics directly from perceptual inputs combined with speech from diverse sources. The goal is to develop learning infrastructure, algorithms, and approaches to enable robots to learn to understand task instructions and object descriptions from spoken communication with end users. The project will develop new methods of efficiently learning from multi-modal data inputs, with the ultimate goal of enabling robots to efficiently and naturally learn about their world and the tasks they should perform.”
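To make the idea of grounded language learning a little more concrete, the sketch below shows one common way such learning can be set up in code: speech-derived features and visual object features are passed through small encoders and trained so that matching utterance/object pairs land close together in a shared embedding space. This is only an illustration, not the project’s actual system; the use of PyTorch, the network sizes, feature dimensions, and the random stand-in data are all hypothetical.

```python
# Illustrative sketch only: align speech-derived features with visual object
# features in a shared embedding space. All shapes, names, and the random data
# below are hypothetical stand-ins for real speech transcripts and robot percepts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundedEncoder(nn.Module):
    """Projects a modality-specific feature vector into a shared embedding space."""
    def __init__(self, in_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

speech_enc = GroundedEncoder(in_dim=40)   # e.g., pooled acoustic/ASR features (hypothetical)
vision_enc = GroundedEncoder(in_dim=512)  # e.g., pooled object image features (hypothetical)
optimizer = torch.optim.Adam(
    list(speech_enc.parameters()) + list(vision_enc.parameters()), lr=1e-3)

# Stand-in batch: the i-th utterance describes the i-th perceived object.
speech_feats = torch.randn(8, 40)
visual_feats = torch.randn(8, 512)

for step in range(100):
    s = speech_enc(speech_feats)
    v = vision_enc(visual_feats)
    logits = s @ v.t()                       # pairwise similarities
    targets = torch.arange(len(s))           # matching pairs lie on the diagonal
    loss = F.cross_entropy(logits, targets)  # contrastive alignment objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The appeal of this kind of joint setup is that the perceptual context constrains what an utterance can plausibly mean, which is exactly the role physical context plays in the project description above.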

talk: Risk-Aware Coordination between Aerial & Ground Robots, 12-1 Wed. 3/2

ArtIAMAS Seminar Series
Co-organized by UMBC, UMCP, and Army Research Lab

Risk-Aware Coordination between Aerial and Ground Robots

Pratap Tokekar
Computer Science, University of Maryland, College Park

12-1 PM ET, Wed. 2 March 2022 via Webex

As autonomous systems are fielded in unknown, dynamic, potentially contested conditions, they will need to operate with partial, uncertain information. Successful long-term deployments will need agents to reason about their energy logistics and require careful coordination between robots with vastly different energetics (e.g., air and ground platforms), which is especially challenging in the face of uncertainty. To complicate matters further, communication between the agents may not always be available. In this talk, I will present our ongoing ArtIAMAS work on risk-aware route planning and coordination algorithms that can reason about uncertainty in a provable fashion to enable long-term autonomous deployments.
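As a rough illustration of the general idea (and not of Dr. Tokekar’s algorithms), a route planner can be made risk-aware by penalizing segments whose traversal cost is uncertain, for example by planning on adjusted costs of the form mean + λ·std. The toy graph, numbers, and function names below are invented for the example.

```python
# Minimal sketch: risk-aware shortest path where each edge has a mean cost and a
# cost uncertainty, and the planner minimizes mean + risk_weight * std.
import heapq

def risk_aware_dijkstra(graph, start, goal, risk_weight=1.0):
    """graph[u] -> list of (v, mean_cost, std_cost). Returns (total_cost, path)."""
    frontier = [(0.0, start, [start])]
    best = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, mean, std in graph.get(node, []):
            heapq.heappush(frontier, (cost + mean + risk_weight * std, nxt, path + [nxt]))
    return float("inf"), []

# Toy example: the direct route is shorter on average but far more uncertain.
graph = {
    "A": [("B", 4.0, 0.2), ("goal", 5.0, 3.0)],
    "B": [("goal", 2.0, 0.1)],
}
print(risk_aware_dijkstra(graph, "A", "goal", risk_weight=1.0))  # detours via B
print(risk_aware_dijkstra(graph, "A", "goal", risk_weight=0.0))  # ignores risk
```

Raising the risk weight trades average speed for predictability, which is the basic tension a risk-aware planner has to manage.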

Dr. Pratap Tokekar is an Assistant Professor in the Department of Computer Science and UMIACS at the University of Maryland. Between 2015 and 2019, he was an Assistant Professor in the Department of Electrical and Computer Engineering at Virginia Tech. Previously, he was a Postdoctoral Researcher at the GRASP lab of the University of Pennsylvania. He obtained his Ph.D. in Computer Science from the University of Minnesota in 2014 and a Bachelor of Technology degree in Electronics and Telecommunication from the College of Engineering Pune, India in 2008. He is a recipient of the NSF CAREER award (2020) and CISE Research Initiation Initiative award (2016). He serves as an Associate Editor for the IEEE Transactions on Robotics, IEEE Transactions on Automation Science and Engineering, and the ICRA and IROS Conference Editorial Boards.

CSEE alum Balaji Viswanathan’s robotics company featured in Forbes

Balaji Viswanathan, CEO of Invento Robotics, with Mitra, its flagship robot. Image: Hemant Mishra for Forbes India

Balaji Viswanathan’s (MS ’07) startup company, Invento Robotics, is featured in Forbes India magazine


Balaji Viswanathan started his career at Microsoft, and moved from there to develop startups in such diverse areas as robotics, education, and finance. He has embraced the true calling of an entrepreneur, using long-term goals to develop companies that actively seek to make a global impact. This is exemplified by his Bengaluru-based company, Invento Robotics, which is currently using its humanoid robots to provide a myriad of services, from taking temperatures to collecting patient information to bringing medications and food to patients in isolation wards, in an effort to fight COVID-19.

His business was featured in Forbes India magazine as part of a series on companies that have pivoted to use technology to address the COVID-19 pandemic. The article discusses how Invento has applied its first mobile robot model, Mitra, to perform tasks like collecting patient details, checking temperatures, and setting up video calls with doctors. Two new models, C-Astra and Robodoc, have now been deployed to disinfect rooms and virtually interact with patients inside COVID-19 wards.

Balaji has recently returned to UMBC as a part-time Ph.D. student in the Computer Science program and will work on research topics that advance the state of the art in intelligent robotics.

New NSF grant to improve human-robot interaction

Professor Ferraro in UMBC’s Pi2 visualization laboratory talking to a virtual robot.

CSEE faculty receive NSF award to help robots learn tasks by interacting naturally with people


UMBC Assistant Professors Cynthia Matuszek (PI) and Francis Ferraro (Co-PI), along with John Winder (Co-PI), a senior staff scientist at JHU-APL, received a three-year NSF award as part of the National Robotics Initiative on Ubiquitous Collaborative Robots. The award, for Semi-Supervised Deep Learning for Domain Adaptation in Robotic Language Acquisition, will advance the ability of robots to learn from interactions with people using spoken language and gestures in a variety of situations.

This project will enable robots to learn to perform tasks with human teammates from language and other modalities, and then transfer what they have learned to other robots with different capabilities in order to perform different tasks. This will ultimately allow human-robot teaming in domains where people use varied language and instructions to complete complex tasks. As robots become more capable and ubiquitous, they are increasingly moving into complex, human-centric environments such as workplaces and homes.

Being able to deploy useful robots in settings where human specialists are stretched thin, such as assistive technology, elder care, and education, has the potential to have far-reaching impacts on human quality of life. Achieving this will require the development of robots that learn, from natural interaction, about an end user’s goals and environment.

This work is intended to make robots more accessible and usable for non-specialists. In order to verify success and involve the broader community, tasks will be drawn from and tested in community Makerspaces, which are strongly linked with both education and community involvement. It will address how collaborative learning and successful performance during human-robot interactions can be accomplished by learning from and acting on grounded language. To accomplish this, the project will revolve around learning structured representations of abstract knowledge with goal-directed task completion, grounded in a physical context.

There are three high-level research thrusts: leverage grounded language learning from many sources, capture and represent the expectations implied by language, and use deep hierarchical reinforcement learning to transfer learned knowledge to related tasks and skills. In the first, new perceptual models to learn an alignment among a robot’s multiple, heterogeneous sensor and data streams will be developed. In the second, synchronous grounded language models will be developed to better capture both general linguistic and implicit contextual expectations that are needed for completing shared tasks. In the third, a deep reinforcement learning framework will be developed that can leverage the advances achieved by the first two thrusts, allowing the development of techniques for learning conceptual knowledge. Taken together, these advances will allow an agent to achieve domain adaptation, improve its behaviors in new environments, and transfer conceptual knowledge among robotic agents.
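For readers unfamiliar with the hierarchical reinforcement learning mentioned in the third thrust, the sketch below illustrates the “options” abstraction it typically builds on: temporally extended skills with their own policy and termination condition that, once learned, can be recombined to carry out new tasks. The classes, skill names, and toy environment are hypothetical and are not taken from the funded project.

```python
# Illustrative sketch of the "options" abstraction used in hierarchical RL:
# skills learned for one task can be reused as building blocks for another.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Option:
    name: str
    policy: Callable[[dict], str]    # maps an observation to a low-level action
    is_done: Callable[[dict], bool]  # termination condition for the skill

def run_option(option: Option, obs: dict, step_env: Callable[[str], dict], max_steps=50) -> dict:
    """Execute one skill to completion (or a step budget) and return the final observation."""
    for _ in range(max_steps):
        if option.is_done(obs):
            break
        obs = step_env(option.policy(obs))
    return obs

def execute_task(plan: List[str], skills: Dict[str, Option], obs: dict, step_env) -> dict:
    """A new task can be expressed as a sequence of previously learned skills."""
    for skill_name in plan:
        obs = run_option(skills[skill_name], obs, step_env)
    return obs

# Toy usage: a 1-D "hallway" world where the robot moves toward a target position.
state = {"pos": 0, "target": 3}
def step_env(action):
    state["pos"] += 1 if action == "forward" else -1
    return state

go_to_target = Option(
    name="go_to_target",
    policy=lambda obs: "forward" if obs["pos"] < obs["target"] else "back",
    is_done=lambda obs: obs["pos"] == obs["target"],
)
print(execute_task(["go_to_target"], {"go_to_target": go_to_target}, state, step_env))
```

Transferring knowledge between robots or tasks then amounts to reusing and recombining such skills rather than learning each new behavior from scratch.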

The research award will support both faculty and students working in the Interactive Robotics and Language lab on this task. It includes an education and outreach plan designed to increase participation by and retention of women and underrepresented minorities (URM) in robotics and computing, engaging with UMBC’s large URM population and world-class programs in this area.

New NSF award will help robots learn to understand humans in complex environments

Prof. Ferraro in UMBC’s Pi2 visualization laboratory talking to a virtual robot, modeled using a combination of Unity, ROS, and Gazebo. Image from a recent paper on this research.

New NSF project will help robots learn to understand humans in complex environments

UMBC Assistant Professor Cynthia Matuszek is the PI on a new NSF research award, EAGER: Learning Language in Simulation for Real Robot Interaction, with co-PIs Don Engel and Frank Ferraro. Research funded by this award will be focused on developing better human-robot interactions, using machine learning to enable robots to learn the meaning of human commands and questions informed by their physical context.

While robots are rapidly becoming more capable and ubiquitous, their utility is still severely limited by the inability of regular users to customize their behaviors. This EArly Grant for Exploratory Research (EAGER) will explore how examples of language, gaze, and other communications can be collected from a virtual interaction with a robot in order to learn how robots can interact better with end users. Current robots’ difficulty of use and inflexibility are major factors preventing them from being more broadly available to populations that might benefit, such as aging-in-place seniors. One promising solution is to let users control and teach robots with natural language, an intuitive and comfortable mechanism. This has led to active research in the area of grounded language acquisition: learning language that refers to and is informed by the physical world. Given the complexity of robotic systems, there is growing interest in approaches that take advantage of the latest in virtual reality technology, which can lower the barrier of entry to this research.

This EAGER project develops infrastructure that will lay the necessary groundwork for applying simulation-to-reality approaches to natural language interactions with robots. This project aims to bootstrap robots’ learning to understand language, using a combination of data collected in a high-fidelity virtual reality environment with simulated robots and real-world testing on physical robots. A person will interact with simulated robots in virtual reality, and his or her actions and language will be recorded. By integrating with existing robotics technology, this project will model the connection between the language people use and the robot’s perceptions and actions. Natural language descriptions of what is happening in simulation will be obtained and used to train a joint model of language and simulated percepts as a way to learn grounded language. The effectiveness of the framework and algorithms will be measured on automatic prediction/generation tasks and transferability of learned models to a real, physical robot. This work will serve as a proof of concept for the value of combining robotics simulation with human interaction, as well as providing interested researchers with resources to bootstrap their own work.
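To give a sense of what such simulation-collected training data could look like, here is a hypothetical sketch of a paired language-and-percept record logged from a VR session. The field names and schema are invented for illustration; they do not describe the project’s actual data format.

```python
# Hypothetical example of logging (language, percept) pairs from a VR interaction,
# which can later train a grounded language model and be replayed on a real robot.
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class InteractionRecord:
    utterance: str            # transcribed user speech, e.g. "hand me the red mug"
    object_labels: List[str]  # objects visible to the simulated robot
    gaze_target: str          # object the user was looking at, from VR eye tracking
    robot_action: str         # what the simulated robot did next

log = [
    InteractionRecord("pick up the red mug", ["red mug", "notebook"], "red mug", "grasp(red mug)"),
    InteractionRecord("put it on the shelf", ["red mug", "shelf"], "shelf", "place(shelf)"),
]

# Serialize the session so the same pairs can be used for training and, later,
# for sim-to-real evaluation against a physical robot.
with open("vr_session.jsonl", "w") as f:
    for rec in log:
        f.write(json.dumps(asdict(rec)) + "\n")
```

Collecting this kind of paired data in simulation is far cheaper and safer than collecting it on physical robots, which is what makes the simulation-to-reality approach attractive as a bootstrapping step.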

Dr. Matuszek’s Interactive Robotics and Language lab is developing robots that everyday people can talk to, telling them to do tasks or about the world around them. Their approach to learning to understand language in the physical space that people and robots occupy is called grounded language acquisition and is a key to building robots that can perform tasks in noisy, real-world environments, instead of being pre-emptively programmed to handle a fixed set of predetermined tasks.

Meet Your Professor: Dr. Cynthia Matuszek, 12-1 Mon 4/15, ITE231

Meet Your Professor: Dr. Cynthia Matuszek

On April 15th, come join the Computer Science Education Club for the third installment of its Spring 2019 Meet Your Professor series featuring Dr. Cynthia Matuszek. The Meet Your Professor events provide students with the opportunity to learn more about their professors, including how they achieved their position, what they believe makes an effective teacher, what research they conduct, and more!

Dr. Matuszek’s areas of research include robotics, natural language processing, human-robot interaction, and artificial intelligence. At UMBC she heads the Interactive Robotics and Language lab. She has taught courses in robotics, artificial intelligence, advanced AI, human-robot interaction, and ethics in computing.

If you want to learn from Dr. Matuszek’s experience in academia, come to ITE 231 on April 15th from 12pm-12:50pm.

talk: Learning to Ground Instructions to Plans, 2:30 Thr 3/21, ITE346

Learning to Ground Natural Language Instructions to Plans

Nakul Gopalan, Brown University

2:30-3:30pm Thursday, 21 March 2019, ITE 346, UMBC

In order to easily and efficiently collaborate with humans, robots must learn to complete tasks specified using natural language. Natural language provides an intuitive interface for a layperson to interact with a robot without the person needing to program a robot, which might require expertise. Natural language instructions can easily specify goal conditions or provide guidance and constraints required to complete a task. Given a natural language command, a robot needs to ground the instruction to a plan that can be executed in the environment. This grounding can be challenging to perform, especially when we expect robots to generalize to novel natural language descriptions and novel task specifications while providing as little prior information as possible. In this talk, I will present a model for grounding instructions to plans. Furthermore, I will present two strategies under this model for language grounding and compare their effectiveness. We will explore the use of approaches using deep learning, semantic parsing, predicate logic and linear temporal logic for task grounding and execution during the talk.
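As a toy illustration of what grounding an instruction to a plan can mean (and not a description of Gopalan’s model), the snippet below maps an instruction like “go to the kitchen, then the office” to an ordered list of goal predicates and checks whether a candidate plan satisfies them in sequence, a much-simplified stand-in for checking a temporal-logic specification. All names and the checking function are invented for the example.

```python
# Toy sketch: an instruction grounded to an ordered list of goals, plus a check
# that a candidate plan (sequence of visited rooms) reaches the goals in order.
def satisfies_sequenced_goals(trace, goals):
    """True if each goal appears in the trace in the given order: a simple
    stand-in for checking a 'sequenced eventually' temporal-logic formula."""
    i = 0
    for state in trace:
        if i < len(goals) and state == goals[i]:
            i += 1
    return i == len(goals)

goals = ["kitchen", "office"]                      # grounded from the instruction
plan_a = ["hallway", "kitchen", "hallway", "office"]
plan_b = ["office", "hallway", "kitchen"]          # visits both, but in the wrong order
print(satisfies_sequenced_goals(plan_a, goals))    # True
print(satisfies_sequenced_goals(plan_b, goals))    # False
```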

Nakul Gopalan is a graduate student in the H2R lab at Brown University. His interests are in the problems of language grounding for robotics, and abstractions within reinforcement learning and planning. He has an M.Sc. in Computer Science from Brown University (2015) and an M.Sc. in Information and Communication Engineering from T.U. Darmstadt, Germany (2013). He completed a Bachelor of Engineering at R.V. College of Engineering in Bangalore, India (2008). His team recently won the Brown-Hyundai Visionary Challenge for their proposal to use mixed reality and social feedback for human-robot collaboration.

Host: Prof. Cynthia Matuszek (cmat at umbc.edu)

MD-AI Meetup holds 1st event at UMBC 6-8pm Wed 10/3, 7th floor library

MD-AI Meetup holds 1st event at UMBC
6-8pm Wed 10/3, 7th floor library


A new Maryland-based meetup interest group has been established for Artificial Intelligence (MD-AI Meetup) and will have its first meeting at UMBC this coming Wednesday (Oct 3) from 6:00-8:00pm on the 7th floor of the library.  The first meeting will feature a talk by UMCP Professor Phil Resnik on the state of NLP and an AI research agenda.  Refreshments will be provided.  The meetup is organized by Seth Grimes and supported by TEDCO, local AI startup RedShred, and the Maryland Tech Council.

If you are interested in attending this and possibly future meetings (which will probably be monthly), go to the Meetup site and join (it’s free) and RSVP to attend this meeting (if there’s still room).  If you join the meetup and RSVP, you can see who’s registered to attend.

These meetups are good opportunities to meet and network with people in the area who share your interests. They are especially valuable for students who will be looking for internships or jobs in the coming year.

Prof. Cynthia Matuszek named one of AI’s 10 to Watch

Cynthia Matuszek named one of AI’s 10 to Watch 

UMBC CSEE Professor Cynthia Matuszek was named one of AI’s 10 to Watch by IEEE Intelligent Systems. The designation is given every two years to a group of “10 young stars who have demonstrated outstanding AI achievements”.  IEEE Intelligent Systems accepts nominations from around the world, which are then evaluated by the publication’s editorial and advisory boards based on reputation, impact, expert endorsement, and diversity.  Dr. Matuszek was recognized for her research that “combined robotics, natural language processing, and machine learning to build systems that nonspecialists can instruct, control, and interact with intuitively and naturally”.

Professor Matuszek joined UMBC in 2014 after receiving her Ph.D. in Computer Science from the University of Washington.  At UMBC, she established and leads the Interactive Robotics and Language Lab that integrates research on robotics and natural language processing with the goal of “bringing the fields together: developing robots that everyday people can talk to, telling them to do tasks or about the world around them”.

Here is how she describes her research in the IEEE Intelligent Systems article.

Robot Learning from Language and Context

As robots become more powerful, capable, and autonomous, they are moving from controlled industrial settings to human-centric spaces such as medical environments, workplaces, and homes. As physical agents, they will soon be able to help with entirely new categories of tasks that require intelligence. Before that can happen, though, robots must be able to interact gracefully with people and the noisy, unpredictable world they occupy.

This undertaking requires insight from multiple areas of AI. Useful robots will need to be flexible in dynamic environments with evolving tasks, meaning they must learn and must also be able to communicate effectively with people. Building advanced intelligent agents that interact robustly with nonspecialists in various domains requires insights from robotics, machine learning, and natural language processing.

My research focuses on developing statistical learning approaches that let robots gain knowledge about the world from multimodal interactions with users, while simultaneously learning to understand the language surrounding novel objects and tasks. Rather than considering these problems separately, we can efficiently handle them concurrently by employing joint learning models that treat language, perception, and task understanding as strongly associated training inputs. This lets each of these channels provide mutually reinforcing inductive bias, constraining an otherwise unmanageable search space and allowing robots to learn from a reasonable number of ongoing interactions.

Combining natural language processing and robotic understanding of environments improves the efficiency and efficacy of both approaches. Intuitively, learning language is easier in the physical context of the world it describes. And robots are more useful and helpful if people can talk naturally to them and teach them about the world. We’ve used this insight to demonstrate that robots can learn unanticipated language that describes completely novel objects. They can also learn to follow instructions for performing tasks and interpret unscripted human gestures, all from interactions with nonspecialist users.

Bringing together these disparate research areas enables the creation of learning methods that let robots use language to learn, adapt, and follow instructions. Understanding humans’ needs and communications is a long-standing AI problem, which fits within the larger context of understanding how to interact gracefully in primarily human environments. Incorporating these capabilities will let us develop flexible, inexpensive robots that can integrate into real-world settings such as the workplace and home.

You can access a PDF version of the full IEEE AI’s 10 to Watch article here.

🤖 talk: Where’s my Robot Butler? 1-2pm Friday 4/13, ITE 231

UMBC ACM Student Chapter Talk

Where’s my Robot Butler?
Robotics, NLP and Robots in Human Environments

Professor Cynthia Matuszek, UMBC

1:00-2:00pm Friday, 13 April 2018, ITE 231, UMBC

As robots become more powerful, capable, and autonomous, they are moving from controlled industrial settings to human-centric spaces such as medical environments, workplaces, and homes. As physical agents, they will soon be able to help with entirely new categories of tasks that require intelligence. Before that can happen, though, robots must be able to interact gracefully with people and the noisy, unpredictable world they occupy, an undertaking that requires insight from multiple areas of AI. Useful robots will need to be flexible in dynamic environments with evolving tasks, meaning they must learn from and communicate effectively with people. In this talk, I will describe current research in our lab on combining natural language learning and robotics to build robots people can use in the home.


Dr. Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County. Her research is at the intersection of robotics, natural language processing, and machine learning, and their application to human-robot interaction. She works on building robotic systems that non-specialists can instruct, control, and interact with intuitively and naturally. She has published on AI, robotics, machine learning, and human-robot interaction. Matuszek received her Ph.D. in computer science and engineering from the University of Washington.
