NASA has committed $178 million to extend support for the Center for Research and Exploration in Space Science & Technology II (CRESST II) through 2027. Founded in 2006 and renewed in 2016, CRESST II is a partnership between NASA’s Goddard Space Flight Center and four universities. UMBC and the University of Maryland, College Park (UMD) are the two primary funding recipients, with UMD leading the consortium. CRESST II also supports researchers at Catholic University of America, Howard University, and the Southeastern Universities Research Association.
Under the CRESST II renewal, UMBC will receive more than $63 million over five years to support these projects. Since the last renewal in 2016, the UMBC arm of the partnership, the Center for Space Sciences and Technology (CSST), has focused on offering additional training for budding space scientists. Graduate students with NASA fellowships are co-advised by UMBC faculty and NASA scientists, undergraduates have internship opportunities on-site at Goddard, and post-baccalaureate programs offer recent grads a chance to gain more experience before applying to jobs or graduate school. Career workshops are available to all.
“We’re trying to do more to support their growth, and also prepare them to move on to other things afterwards,” says Don Engel, director of CSST and assistant professor of computer science and electrical engineering. “We’re building more infrastructure around career support for our scientists, especially those at earlier levels.”
Engel has also been leading an effort to engage more departments at UMBC in the partnership. Physics is the most involved so far, but researchers in computer science and electrical engineering, mechanical engineering, information systems, and even geography and environmental systems have connected with CSST, meaning the Center spans all three UMBC colleges.
Read the full article on UMBC News.
“Adversarial thinking” (AT), sometimes called the “security mindset” or described as the ability to “think like an attacker,” is widely accepted in the computer security community as an essential ability for successful cybersecurity practice. Supported by intuition and anecdotes, many in the community stress the importance of AT, and multiple projects have produced interventions explicitly intended to strengthen individual AT skills to improve security in general. However, there is no agreed-upon definition of “adversarial thinking” or its components, and accordingly, no test for it. Because of this absence, it is impossible to meaningfully quantify AT in subjects, AT’s importance for cybersecurity practitioners, or the effectiveness of interventions designed to improve AT. Working towards the goal of a characterization of AT in cybersecurity and a non-technical test for AT that anyone can take, I will discuss existing conceptions of AT from the security community, as well as ideas about AT in other fields with adversarial aspects including war, politics, law, critical thinking, and games. I will also describe some of the unique difficulties of creating a non-technical test for AT, compare and contrast this effort to our work on the CATS and Security Misconceptions projects, and describe some potential solutions. I will explore potential uses for such an instrument, including measuring a student’s change in AT over time, measuring the effectiveness of interventions meant to improve AT, comparing AT in different populations (e.g., security professionals vs. software engineers), and identifying individuals from all walks of life with strong AT skills—people who might help meet our world’s pressing need for skilled and insightful security professionals and researchers. Along the way, I will give some sample non-technical adversarial thinking challenges and describe how they might be graded and validated.
Peter A. H. Peterson is an assistant professor of computer science at the University of Minnesota Duluth, where he teaches and directs the Laboratory for Advanced Research in Systems (LARS), a group dedicated to research in operating systems and security, with a special focus on research and development to make security education more effective and accessible. He is an active member of the Cybersecurity Assessment Tools (CATS) project working to create and validate two concept inventories for cybersecurity, is working on an NSF-funded grant to identify and remediate commonsense misconceptions about cybersecurity, and is also the author of several hands-on security exercises for Deterlab that have been used at many institutions around the world. He earned his Ph.D. from the University of California, Los Angeles for work on “adaptive compression”—systems that make compression decisions dynamically to improve efficiency. He can be reached at .
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays. All meetings are open to the public. Upcoming CDL Meetings: May 7, Farid Javani (UMBC), Anonymization by oblivious transfer
The UMBC Center for Cybersecurity (UCYBR) & The Department of Computer Science & Electrical Engineering (CSEE) Present:
“Cyber Lessons, Learned and Unlearned”
Professor Eugene Spafford
Professor of Computer Science & Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security)
Tuesday 20 April 2021 1-2PM ET
Dr. Eugene Spafford is a professor with an appointment in Computer Science at Purdue University, where he has served on the faculty since 1987. He is also a professor of Philosophy (courtesy), a professor of Communication (courtesy), a professor of Electrical and Computer Engineering (courtesy), and a professor of Political Science (courtesy). He serves on a number of advisory and editorial boards. Spafford’s current research interests are primarily in the areas of information security, computer crime investigation, and information ethics. He is generally recognized as one of the senior leaders in the field of computing.
Among other things, Spaf (as he is known to his friends, colleagues, and students) is Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security), and was the founder and director of the (superseded) COAST Laboratory. He is Editor-in-Chief of the Elsevier journal Computers & Security, the oldest journal in the field of information security, and the official outlet of IFIP TC-11.
Spaf has been a student and researcher in computing for over 40 years, 35 of which have been in security-related areas. During that time, computing has evolved from mainframes to the Internet of Things. Of course, along with these changes in computing have been changes in technology, access, and both how we use and misuse computing resources. Who knows what the future holds?
In this UCYBR talk, Spaf will reflect upon this evolution and trends and discuss what he sees as significant “lessons learned” from history. Will we learn from our past? Or are we destined to repeat history (again!) and never break free from the many cybersecurity challenges that continue to impact our world? Join UCYBR and CSEE for an engaging and informative presentation from one of the most respected luminaries of the cybersecurity field!
More information about Spaf’s distinguished career in cybersecurity, his publications, talks, and more can be found at https://spaf.cerias.purdue.edu/.
Host: Dr. Richard Forno ()
We present our progress and plans in developing MeetingMayhem, a new web-based educational exercise that helps students learn adversarial thinking in communication networks. The goal of the exercise is to arrange a meeting time and place by sending and receiving messages through an insecure network that is under the control of a malicious adversary. Players can assume the role of participants or an adversary. The adversary can disrupt the efforts of the participants by intercepting, modifying, blocking, replaying, and injecting messages. Through this engaging authentic challenge, students learn the dangers of the network, and in particular, the Dolev-Yao network intruder model. They also learn the value and subtleties of using cryptography (including encryption, digital signatures, and hashing), and protocols to mitigate these dangers. Our team is developing the exercise in spring 2021 and will evaluate its educational effectiveness.
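The Dolev-Yao threat model described above, in which the adversary fully controls the network, is exactly why participants need cryptographic integrity protection. As a minimal illustration (not MeetingMayhem's actual implementation), the sketch below uses a hypothetical pre-shared key and an HMAC tag: an adversary who intercepts and rewrites a message cannot forge a matching tag without the key, so tampering is detected on receipt.

```python
import hmac
import hashlib

# Hypothetical pre-shared key between the two meeting participants.
KEY = b"shared-secret"

def send(message: str) -> tuple[str, str]:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(KEY, message.encode(), hashlib.sha256).hexdigest()
    return message, tag

def verify(message: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(KEY, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = send("Meet at the library at 3pm")
assert verify(msg, tag)                   # intact message verifies

tampered = "Meet at the cafe at 5pm"      # adversary rewrites the message...
assert not verify(tampered, tag)          # ...but cannot forge a valid tag
```

Note that an HMAC alone does not stop replay or blocking of messages, which is part of what makes the exercise's protocol-design challenge subtle.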
Akriti Anand () is an MS student in computer science working with Alan Sherman. She is the lead software engineer and focuses on the web frontend. Richard Baldwin () is a BS student in computer science, a member of the CyberDawgs, and lab manager for the Cyber Defense Lab. Sudha Kosuri () is an MS student in computer science. She is working on the frontend (using React and Flask) and its integration with the backend. Julie Nau () is a BS student in computer science. She is working on the backend and on visualizations. Ryan Wunk-Fink () is a PhD student in computer science working with Alan Sherman. He is developing the backend.
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays. All meetings are open to the public.
Upcoming CDL Meetings: April 23, Peter Peterson (Univ. of Minnesota Duluth), Adversarial thinking; May 7, Farid Javani (UMBC), Anonymization by oblivious transfer
Recognizing entities that follow or closely resemble a regular expression (regex) pattern is an important task in information extraction. Due to the vast diversity of web documents and the ways in which they are generated, even seemingly straightforward tasks such as identifying mentions of dates in a document become very challenging. It is reasonable to claim that it is impossible to create a regex capable of identifying such entities in web documents with perfect precision and recall. Rather than abandoning regexes as a go-to approach for entity detection, we present methods that combine the expressive power of regexes, the ability of deep learning to learn from large amounts of data, and a human-in-the-loop approach in a new integrated framework for entity identification from web data. The framework starts by creating or collecting existing regexes for a particular type of entity. Those regexes are then run over a large document corpus to collect weak labels for the entity mentions, and a neural network is trained to predict those regex-generated weak labels. Finally, a human expert is asked to label a set of documents, and the neural network is fine-tuned on those documents.
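The weak-labeling step of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' system: the date regex is deliberately simple and imperfect, and the function names are hypothetical. In the full framework, the spans it emits would serve as noisy training labels for a neural tagger, which a human expert later corrects on a small sample for fine-tuning.

```python
import re

# Illustrative (and deliberately incomplete) regex for dates such as
# "4/23/2021" or "March 5, 2021". A real system would collect many
# such patterns, each with imperfect precision and recall.
DATE_REGEX = re.compile(
    r"\b(?:\d{1,2}/\d{1,2}/\d{2,4}"
    r"|(?:January|February|March|April|May|June|July|August"
    r"|September|October|November|December)\s+\d{1,2},\s+\d{4})\b"
)

def weak_labels(document: str):
    """Return (start, end, text) spans the regex tags as entity mentions.

    These regex-generated spans are the "weak labels" used to train the
    neural network before any human labeling takes place.
    """
    return [(m.start(), m.end(), m.group()) for m in DATE_REGEX.finditer(document)]

corpus = [
    "The meeting was moved to 4/23/2021 after the storm.",
    "She was born on March 5, 2021 in Baltimore.",
]
for doc in corpus:
    print(weak_labels(doc))
```

The key design point is that the regex need not be perfect: the neural network can generalize beyond the pattern's exact matches, and the later human-labeled fine-tuning pass corrects systematic errors the regex introduces.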
While human effort is critical to building an entity recognition model, surprisingly little is known about how best to invest that effort given a limited time budget. Should a human’s effort be spent on writing a regex that recognizes an entity, or on manually labeling entity mentions in a document corpus? When a user is allowed to choose between regex construction and manual labeling, we discover that (1) if the time budget is low, spending all the time on regex construction is often advantageous, (2) if the time budget is high, spending all the time on manual labeling appears superior, and (3) between those two extremes, writing regexes followed by manual labeling is typically the best approach. I will also give an overview of ongoing and future projects.
Eduard Dragut is an Associate Professor in the Computer and Information Sciences Department at Temple University. He received his Ph.D. degree in Computer Science from the University of Illinois at Chicago. He previously was a Postdoctoral Research Associate at Purdue University, Discovery Park, Cyber Center. His main area of research is Web data management, e.g., retrieval, extraction, representation, cleaning, analysis, and integration. He is actively pursuing projects in data cleaning, social media mining (e.g., user behavior and fake news), the future of work, and cyber-infrastructure for scientific research. He is co-author of a book on Deep Web data integration, Deep Web Query Interface Understanding and Integration.
Increasingly, individuals are turning to social media and online forums such as Twitter and Reddit to communicate about a range of issues including their health and well-being, public health concerns, and large public events such as the presidential debates. These user-generated social media data are prone to noise and misinformation. Developing and applying Artificial Intelligence (AI) algorithms can enable researchers to glean pertinent information from social media and online forums for a range of uses. For example, patients’ social media data may include information about their lifestyle that might not typically be reported to clinicians; however, this information may allow clinicians to provide individualized recommendations for managing their patients’ health. Separately, insights obtained from social media data can aid government agencies and other relevant institutions in better understanding the concerns of the populace as it relates to public health issues such as COVID-19 and its long-term effects on the well-being of the public. Finally, insights obtained from social media posts can capture how individuals react to an event and can be combined with other data sources, such as videos, to create multimedia summaries. In all these examples, there is much to be gained by applying AI algorithms to user-generated social media data.
In this talk, I will discuss my work in creating and applying AI algorithms that harness data from various sources (online forums, electronic medical records, and health care facility ratings) to gain insights about health and well-being and public health. I will also discuss the development of an algorithm for resolving pronoun mentions in event-related social media comments and a pipeline of algorithms for creating a multimedia summary of popular events. I will conclude by discussing my current and future work around creating and applying AI algorithms to: (a) gain insights about county-level COVID-19 vaccine concerns, (b) detect, reduce, and mitigate misinformation in text and online forums, and (c) understand the expression and evolution of bias (expressed in text) over time.
Anietie Andy is a senior data scientist at Penn Medicine Center for Digital Health. His research focuses on developing and applying natural language processing and machine learning algorithms to health care, public health, and well-being. Also, he is interested in developing natural language processing and machine learning algorithms that use multimodal sources (text, video, images) to summarize and gain insights about events and online communities.
Low-cost digital fabrication technology, and in particular 3D printing, is ushering in a new wave of personal computing. The technology promises that users will be able to design, customize, and create any object to fit their needs. While the objects we interact with daily are generally made of many types of materials—they may be hard, soft, conductive, etc.—current digital fabrication machines have largely been limited to producing rigid and passive objects. In this talk, I will present my research on developing digital fabrication processes that incorporate new materials such as textiles and hydrogels. These processes include novel 3D printer designs, software tools, and human-in-the-loop fabrication techniques. With these processes, new materials can be controlled, customized, and integrated with computational capabilities—at design time and after fabrication—to create personalized and interactive objects. I will conclude this talk with my vision for enabling anyone to create with digital fabrication technology and its impact beyond the individual.
Michael Rivera is a Ph.D. Candidate at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University where he is advised by Scott Hudson. He works at the intersection of human-computer interaction, digital fabrication, and materials science. He has published papers on novel digital fabrication processes and interactive systems at top-tier HCI venues, including ACM CHI, UIST, DIS, and IMWUT. His work has been recognized with a Google – CMD-IT Dissertation Fellowship, an Adobe Research Fellowship Honorable Mention, and a Xerox Technical Minority Scholarship. Before Carnegie Mellon, he completed a M.S.E in Computer Graphics and Game Technology and a B.S.E in Digital Media Design at the University of Pennsylvania. He has also worked at the Immersive Experiences Lab of HP Labs, and as a software engineer at Facebook and LinkedIn.