Department of Computer Science and Electrical Engineering
talk: Building Resilience against Cyberattacks, 12pm ET, Dec 15
ArtIAMAS Seminar Series, Co-organized by UMBC, UMCP, and the Army Research Lab
Building Resilience against Cyberattacks
Aryya Gangopadhyay, UMBC
12-1 pm ET, Wednesday, 15 December 2021, online via WebEx
In this talk, we will address the issue of building resilient systems in the face of cyberattacks. We will present a defense mechanism for cyberattacks using a three-tier architecture that can be used to secure army assets and tactical information. The top tier represents the front end, where autonomous sensing and inference through AI models take place on platforms such as UAVs and UGVs. We will illustrate how models can be defended against data poisoning attacks. In the middle tier, we focus on building cyber defense against attacks in federated learning environments, where models are trained on a large corpus of decentralized data without transferring raw data over a communication channel. The bottom tier represents back-end servers that train deep learning models with large amounts of data that can subsequently be pushed to the edge for inferencing. We will demonstrate how adaptive models can be developed for detecting and preventing various types of attacks at this level.
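The federated training pattern the abstract alludes to can be sketched in a few lines. The following is a hypothetical toy illustration of federated averaging (FedAvg) on a one-parameter regression problem, not code from the talk: each client updates the model on its own private data, and only model parameters, never raw data, reach the server.

```python
# Hypothetical FedAvg sketch: clients hold private (x, y) pairs and fit
# y ~ w * x; the server only ever sees model weights, not the data.

def local_step(w, data, lr=0.1):
    # One gradient step of least squares on a single client's own data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w_global, clients):
    # Each client refines the global weight locally; the server averages
    # the returned weights (clients weighted equally for simplicity).
    local = [local_step(w_global, data) for data in clients]
    return sum(local) / len(local)

# Two clients whose private data are both consistent with w = 2.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # prints 2.0
```

Real deployments add secure aggregation and robustness checks at the server, which is where the defenses against poisoned client updates described in the talk would sit.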
Dr. Aryya Gangopadhyay is a Professor in the Information Systems department at the University of Maryland, Baltimore County. Dr. Gangopadhyay has a courtesy appointment as a Professor in Computer Science and Electrical Engineering at UMBC. He is also the Director of the Center for Real-time Sensing and Autonomy (CARDS) at UMBC. His research interests include adversarial machine learning at the edge, cybersecurity, and smart cities. He has graduated 16 Ph.D. students and is currently mentoring several others at UMBC. He has published over 125 peer-reviewed research articles and has received extramural support from ARL, NSF, NIST, the Department of Education, and IBM.
talk: Shadow IT in Higher Ed: Survey & Case Study for Cybersecurity, 12-1 pm ET Fri 12/3
The UMBC Cyber Defense Lab presents
Shadow IT in Higher Education: Survey and Case Study for Cybersecurity
Selma Gomez Orr, Cyrus Jian Bonyadi, Enis Golaszewski, and Alan T. Sherman, UMBC Cyber Defense Lab
Joint work with Peter A. H. Peterson (University of Minnesota Duluth), Richard Forno, Sydney Johns, and Jimmy Rodriguez
12-1 pm, Friday, 3 December 2021, online via WebEx
We explore shadow information technology (IT) at institutions of higher education through a two-tiered approach involving a detailed case study and comprehensive survey of IT professionals. In its many forms, shadow IT is the software or hardware present in a computer system or network that lies outside the typical review process of the responsible IT unit. We carry out a case study of an internally built legacy grants management system at the University of Maryland, Baltimore County that exemplifies the vulnerabilities, including cross-site scripting and SQL injection, typical of such unauthorized and ad-hoc software. We also conduct a survey of IT professionals at universities, colleges, and community colleges that reveals new and actionable information regarding the prevalence, usage patterns, types, benefits, and risks of shadow IT at their respective institutions.
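To make the vulnerability class concrete, here is a purely illustrative sketch (not code from the grants system studied) of how a query built by string concatenation falls to SQL injection while a parameterized query does not:

```python
import sqlite3

# Toy database standing in for any record-keeping backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grants (id INTEGER, pi TEXT)")
conn.execute("INSERT INTO grants VALUES (1, 'alice'), (2, 'bob')")

def lookup_unsafe(pi):
    # Vulnerable pattern: user input is spliced into the SQL string,
    # so input can be parsed as SQL.
    return conn.execute(
        "SELECT id FROM grants WHERE pi = '%s'" % pi).fetchall()

def lookup_safe(pi):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM grants WHERE pi = ?", (pi,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # every row leaks: injection succeeded
print(lookup_safe(payload))    # no rows: payload treated as literal data
```

The same discipline, treating user input strictly as data rather than code, is what output encoding provides against the cross-site scripting flaws also found in the case study.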
Further, we propose a security-based profile of shadow IT, involving a subset of elements from existing shadow IT taxonomies, that categorizes shadow IT from a security perspective. Based on this profile, survey respondents identified the predominant form of shadow IT at their institutions, revealing close similarities to findings from our case study.
Through this work, we are the first to identify possible susceptibility factors associated with the occurrence of shadow IT-related security incidents within academic institutions. Correlations of significance include the presence of certain graduate schools, the level of decentralization of the IT department, the types of shadow IT present, the percentage of security violations related to shadow IT, and the institution’s overall attitude toward shadow IT. The combined elements of our case study, profile, and survey provide the first comprehensive view of shadow IT security at academic institutions, highlighting the tension between its risks and benefits, and suggesting strategies for managing it successfully.
Dr. Selma Gomez Orr received her Ph.D. from Harvard University in the field of decision sciences. She also holds Master's degrees in applied mathematics, engineering sciences, and business administration from Harvard. She has worked in the private sector in the fields of cybersecurity and data analytics. Most recently, as a CyberCorps Scholarship for Service (SFS) scholar, Dr. Orr completed a Master of Professional Studies in both cybersecurity and data science at UMBC.
Cyrus Jian Bonyadi is a computer science Ph.D. student and former SFS scholar studying consensus theory at UMBC under the direction of Alan T. Sherman, Sisi Duan, and Haibin Zhang.
Enis Golaszewski is a Ph.D. student at UMBC under Alan T. Sherman, where he studies, researches, and teaches cryptographic protocol analysis. A former SFS scholar, Golaszewski helps lead annual research studies that analyze and break software at UMBC.
Dr. Alan T. Sherman is a professor of computer science, director of CDL, and associate director of UMBC’s Cybersecurity Center. His main research interest is high-integrity voting systems. Sherman earned the Ph.D. degree in computer science at MIT in 1987, studying under Ronald L. Rivest.
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays, 12-1 pm. All meetings are open to the public. Upcoming CDL Meetings: Feb 4, Filipo Sharevski
talk: Thinking Like an Attacker: Towards a Definition and Non-Technical Assessment of Adversarial Thinking, 12-1pm ET 4/30
The UMBC Cyber Defense Lab presents
Thinking Like an Attacker: Towards a Definition and Non-Technical Assessment of Adversarial Thinking
Prof. Peter A. H. Peterson, Department of Computer Science, University of Minnesota Duluth
“Adversarial thinking” (AT), sometimes called the “security mindset” or described as the ability to “think like an attacker,” is widely accepted in the computer security community as an essential ability for successful cybersecurity practice. Supported by intuition and anecdotes, many in the community stress the importance of AT, and multiple projects have produced interventions explicitly intended to strengthen individual AT skills to improve security in general. However, there is no agreed-upon definition of “adversarial thinking” or its components, and accordingly, no test for it. Because of this absence, it is impossible to meaningfully quantify AT in subjects, AT’s importance for cybersecurity practitioners, or the effectiveness of interventions designed to improve AT. Working towards the goal of a characterization of AT in cybersecurity and a non-technical test for AT that anyone can take, I will discuss existing conceptions of AT from the security community, as well as ideas about AT in other fields with adversarial aspects including war, politics, law, critical thinking, and games. I will also describe some of the unique difficulties of creating a non-technical test for AT, compare and contrast this effort to our work on the CATS and Security Misconceptions projects, and describe some potential solutions. I will explore potential uses for such an instrument, including measuring a student’s change in AT over time, measuring the effectiveness of interventions meant to improve AT, comparing AT in different populations (e.g., security professionals vs. software engineers), and identifying individuals from all walks of life with strong AT skills—people who might help meet our world’s pressing need for skilled and insightful security professionals and researchers. Along the way, I will give some sample non-technical adversarial thinking challenges and describe how they might be graded and validated.
Peter A. H. Peterson is an assistant professor of computer science at the University of Minnesota Duluth, where he teaches and directs the Laboratory for Advanced Research in Systems (LARS), a group dedicated to research in operating systems and security, with a special focus on research and development to make security education more effective and accessible. He is an active member of the Cybersecurity Assessment Tools (CATS) project working to create and validate two concept inventories for cybersecurity, is working on an NSF-funded grant to identify and remediate commonsense misconceptions about cybersecurity, and is also the author of several hands-on security exercises for Deterlab that have been used at many institutions around the world. He earned his Ph.D. from the University of California, Los Angeles, for work on “adaptive compression”—systems that make compression decisions dynamically to improve efficiency.
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays. All meetings are open to the public. Upcoming CDL Meetings: May 7, Farid Javani (UMBC), Anonymization by oblivious transfer
Talk: Cyber Lessons, Learned and Unlearned, 1-2 pm ET 4/20/21
The UMBC Center for Cybersecurity (UCYBR) & The Department of Computer Science & Electrical Engineering (CSEE) Present:
“Cyber Lessons, Learned and Unlearned”
Professor Eugene Spafford, Professor of Computer Science and Executive Director Emeritus of CERIAS (Center for Education and Research in Information Assurance and Security), Purdue University
Dr. Eugene Spafford is a professor of Computer Science at Purdue University, where he has served on the faculty since 1987. He also holds courtesy appointments as a professor of Philosophy, Communication, Electrical and Computer Engineering, and Political Science. He serves on a number of advisory and editorial boards. Spafford’s current research interests are primarily in the areas of information security, computer crime investigation, and information ethics. He is generally recognized as one of the senior leaders in the field of computing.
Among other things, Spaf (as he is known to his friends, colleagues, and students) is Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security), and was the founder and director of the (superseded) COAST Laboratory. He is Editor-in-Chief of the Elsevier journal Computers & Security, the oldest journal in the field of information security, and the official outlet of IFIP TC-11.
Spaf has been a student and researcher in computing for over 40 years, 35 of which have been in security-related areas. During that time, computing has evolved from mainframes to the Internet of Things. Of course, along with these changes in computing have been changes in technology, access, and both how we use and misuse computing resources. Who knows what the future holds?
In this UCYBR talk, Spaf will reflect upon this evolution and trends and discuss what he sees as significant “lessons learned” from history. Will we learn from our past? Or are we destined to repeat history (again!) and never break free from the many cybersecurity challenges that continue to impact our world? Join UCYBR and CSEE for an engaging and informative presentation from one of the most respected luminaries of the cybersecurity field!
talk: MeetingMayhem: Teaching Adversarial Thinking through a Web-Based Game, 12-1 ET 4/9
The UMBC Cyber Defense Lab presents
MeetingMayhem: Teaching Adversarial Thinking through a Web-Based Game
Akriti Anand, Richard Baldwin, Sudha Kosuri, Julie Nau, and Ryan Wunk-Fink, UMBC Cyber Defense Lab
Joint work with Alan Sherman, Marc Olano, Linda Oliva, Edward Zieglar, and Enis Golaszewski
12:00 noon–1 pm ET, Friday, 9 April 2021 online via WebEx
We present our progress and plans in developing MeetingMayhem, a new web-based educational exercise that helps students learn adversarial thinking in communication networks. The goal of the exercise is to arrange a meeting time and place by sending and receiving messages through an insecure network that is under the control of a malicious adversary. Players can assume the role of participants or an adversary. The adversary can disrupt the efforts of the participants by intercepting, modifying, blocking, replaying, and injecting messages. Through this engaging authentic challenge, students learn the dangers of the network, and in particular, the Dolev-Yao network intruder model. They also learn the value and subtleties of using cryptography (including encryption, digital signatures, and hashing), and protocols to mitigate these dangers. Our team is developing the exercise in spring 2021 and will evaluate its educational effectiveness.
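The Dolev-Yao intruder model mentioned above can be sketched compactly. The following hypothetical Python fragment (an illustration, not the MeetingMayhem implementation) gives the adversary full control of the channel and shows how a message authentication code lets participants detect tampering:

```python
import hashlib
import hmac

KEY = b"shared-secret"  # assume the participants pre-share this key

def send(msg, adversary, authenticated=False):
    # In the Dolev-Yao model every message passes through the adversary,
    # who may read, modify, block, replay, or inject messages at will.
    tag = hmac.new(KEY, msg, hashlib.sha256).digest() if authenticated else None
    msg = adversary(msg)  # the adversary rewrites the message in transit
    if authenticated:
        # Recipient recomputes the MAC over what actually arrived.
        # (For simplicity the tag is delivered out of band here.)
        ok = hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest())
        return msg, ok
    return msg, True  # unauthenticated: no way to detect tampering

tamper = lambda m: m.replace(b"noon", b"midnight")

print(send(b"meet at noon", tamper))                      # altered, undetected
print(send(b"meet at noon", tamper, authenticated=True))  # altered, but detected
```

In the full model the adversary also sees the tag in transit but cannot forge a fresh one without the key; encryption and digital signatures extend the same idea to confidentiality and non-repudiation.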
Akriti Anand is an MS student in computer science working with Alan Sherman. She is the lead software engineer and focuses on the web frontend. Richard Baldwin is a BS student in computer science, a member of the CyberDawgs, and lab manager for the Cyber Defense Lab. Sudha Kosuri is an MS student in computer science. She is working on the frontend (using React and Flask) and its integration with the backend. Julie Nau is a BS student in computer science. She is working on the backend and on visualizations. Ryan Wunk-Fink is a Ph.D. student in computer science working with Alan Sherman. He is developing the backend.
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays. All meetings are open to the public.
Upcoming CDL Meetings: April 23, Peter Peterson (Univ. of Minnesota Duluth), Adversarial thinking; May 7, Farid Javani (UMBC), Anonymization by oblivious transfer
talk: Mining social media data for health, public health & popular events, 1-2pm ET 4/2
Mining social media data for health, public health, and popular events
Increasingly, individuals are turning to social media and online forums such as Twitter and Reddit to communicate about a range of issues including their health and well-being, public health concerns, and large public events such as the presidential debates. These user-generated social media data are prone to noise and misinformation. Developing and applying Artificial Intelligence (AI) algorithms can enable researchers to glean pertinent information from social media and online forums for a range of uses. For example, patients’ social media data may include information about their lifestyle that might not typically be reported to clinicians; however, this information may allow clinicians to provide individualized recommendations for managing their patients’ health. Separately, insights obtained from social media data can aid government agencies and other relevant institutions in better understanding the concerns of the populace as it relates to public health issues such as COVID-19 and its long-term effects on the well-being of the public. Finally, insights obtained from social media posts can capture how individuals react to an event and can be combined with other data sources, such as videos, to create multimedia summaries. In all these examples, there is much to be gained by applying AI algorithms to user-generated social media data.
In this talk, I will discuss my work in creating and applying AI algorithms that harness data from various sources (online forums, electronic medical records, and health care facility ratings) to gain insights about health and well-being and public health. I will also discuss the development of an algorithm for resolving pronoun mentions in event-related social media comments and a pipeline of algorithms for creating a multimedia summary of popular events. I will conclude by discussing my current and future work around creating and applying AI algorithms to: (a) gain insights about county-level COVID-19 vaccine concerns, (b) detect, reduce, and mitigate misinformation in text and online forums, and (c) understand the expression and evolution of bias (expressed in text) over time.
Anietie Andy is a senior data scientist at Penn Medicine Center for Digital Health. His research focuses on developing and applying natural language processing and machine learning algorithms to health care, public health, and well-being. Also, he is interested in developing natural language processing and machine learning algorithms that use multimodal sources (text, video, images) to summarize and gain insights about events and online communities.
talk: Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes, 1-2pm 3/31
Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes
Low-cost digital fabrication technology, and in particular 3D printing, is ushering in a new wave of personal computing. The technology promises that users will be able to design, customize, and create any object to fit their needs. While the objects that we interact with daily are generally made of many types of materials—they may be hard, soft, conductive, etc.—current digital fabrication machines have largely been limited to producing rigid and passive objects. In this talk, I will present my research on developing digital fabrication processes that incorporate new materials such as textiles and hydrogels. These processes include novel 3D printer designs, software tools, and human-in-the-loop fabrication techniques. With these processes, new materials can be controlled, customized, and given computational capabilities—at design time and after fabrication—for creating personalized and interactive objects. I will conclude this talk with my vision for enabling anyone to create with digital fabrication technology and its impact beyond the individual.
Michael Rivera is a Ph.D. candidate at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, where he is advised by Scott Hudson. He works at the intersection of human-computer interaction, digital fabrication, and materials science. He has published papers on novel digital fabrication processes and interactive systems at top-tier HCI venues, including ACM CHI, UIST, DIS, and IMWUT. His work has been recognized with a Google – CMD-IT Dissertation Fellowship, an Adobe Research Fellowship Honorable Mention, and a Xerox Technical Minority Scholarship. Before Carnegie Mellon, he completed an M.S.E. in Computer Graphics and Game Technology and a B.S.E. in Digital Media Design at the University of Pennsylvania. He has also worked at the Immersive Experiences Lab of HP Labs, and as a software engineer at Facebook and LinkedIn.
talk: Forward & Inverse Causal Inference in a Tensor Framework, 1-2 pm ET, 3/29
Forward and Inverse Causal Inference in a Tensor Framework
M. Alex O. Vasilescu, Institute of Pure and Applied Mathematics, UCLA
Developing causal explanations for correct results or for failures from mathematical equations and data is important for developing trustworthy artificial intelligence and retaining public trust. Causal explanations are germane to the “right to an explanation” statute, i.e., to data-driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions consistent with Donald Rubin’s quantitative definition of causality, where “A causes B” means “the effect of A is B”, a measurable and experimentally repeatable quantity. Computer graphics may be viewed as addressing questions analogous to forward causal inference, which addresses the “what if” question and estimates a change in effects given a delta change in a causal factor. Computer vision may be viewed as addressing questions analogous to inverse causal inference, which addresses the “why” question, defined here as the estimation of causes given a forward causal model and a set of observations that constrain the solution set. Tensor algebra is a suitable and transparent framework for modeling the mechanism that generates observed data. Tensor-based data analysis, also known in the literature as structural equation modeling with multimode latent variables, has been employed in representing the causal factor structure of data formation in econometrics, psychometrics, and chemometrics since the 1960s. More recently, tensor factor analysis has been successfully employed to represent cause-and-effect in computer vision and computer graphics, and for prediction and dimensionality reduction in machine learning tasks.
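As context for readers unfamiliar with the notation (a standard formulation from this literature, assumed here rather than quoted from the talk), the multilinear factorization underlying work such as TensorFaces expresses a data tensor as a core tensor multiplied along each mode by a factor matrix, one matrix per causal factor:

```latex
% M-mode SVD / Tucker decomposition: the data tensor \mathcal{D} is the
% core tensor \mathcal{Z} transformed along mode k by factor matrix U_k
% (e.g., one mode each for person, viewpoint, illumination, expression).
\mathcal{D} = \mathcal{Z} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_M \mathbf{U}_M
```

Forward inference then corresponds to synthesizing data by varying one factor matrix; inverse inference corresponds to recovering the factor coefficients that explain an observation.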
M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. She is currently a senior fellow at UCLA’s Institute of Pure and Applied Mathematics (IPAM) and held research scientist positions at the MIT Media Lab from 2005-07 and at New York University’s Courant Institute of Mathematical Sciences from 2001-05. Vasilescu introduced the tensor paradigm for computer vision, computer graphics, and machine learning. She addressed causal inferencing questions by framing computer graphics and computer vision as multilinear problems. Causal inferencing in a tensor framework facilitates the analysis, recognition, synthesis, and interpretability of data. She spearheaded the development of the tensor framework with premier papers such as Human Motion Signatures (2001), TensorFaces (2002), Multilinear Independent Component Analysis (2005), TensorTextures (2004), and Multilinear Projection for Recognition (2007, 2011). Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense’s Combating Terrorism Support Program, the Intelligence Advanced Research Projects Activity (IARPA), and NSF. Her work was featured on the cover of Computer World and in articles in the New York Times, Washington Times, etc. MIT’s Technology Review Magazine named her to their TR100 list of honorees, and the National Academy of Sciences co-awarded her a Keck Futures Initiative grant.
talk: Transparent Dishonesty: Front-Running Attacks on Blockchain, 12-1 pm ET 3/26
The UMBC Cyber Defense Lab presents
Transparent Dishonesty: Front-Running Attacks on Blockchain
Professor Jeremy Clark, Concordia Institute for Information Systems Engineering, Concordia University, Montreal, Canada
12–1 pm ET Friday, March 26, 2021 online via WebEx
We consider front-running to be a course of action where an entity benefits from prior access to privileged market information about upcoming transactions and trades. Front-running has been an issue in financial instrument markets since the 1970s. With the advent of blockchain technology, front-running has resurfaced in new forms we explore here, instigated by blockchain’s decentralized and transparent nature. I will discuss our “systemization of knowledge” paper, which draws from a scattered body of knowledge and instances of front-running across the top 25 most active decentralized applications (DApps) deployed on the Ethereum blockchain. Additionally, we carry out a detailed analysis of the Status.im initial coin offering (ICO) and show evidence of abnormal miner behavior indicative of front-running token purchases. Finally, we map the proposed solutions to front-running into useful categories.
Jeremy Clark is an associate professor at the Concordia Institute for Information Systems Engineering. At Concordia, he holds the NSERC/Raymond Chabot Grant Thornton/Catallaxy Industrial Research Chair in Blockchain Technologies. He earned his Ph.D. from the University of Waterloo, where his gold medal dissertation was on designing and deploying secure voting systems, including Scantegrity—the first cryptographically verifiable system used in a public sector election. He wrote one of the earliest academic papers on Bitcoin, completed several research projects in the area, and contributed to the first textbook. Beyond research, he has worked with several municipalities on voting technology and testified to both the Canadian Senate and House finance committees on Bitcoin.
Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays. All meetings are open to the public. Upcoming CDL Meetings: April 9, (UMBC), MeetingMayhem: A network adversarial thinking game; April 23, Peter Peterson (University of Minnesota Duluth), Adversarial thinking; May 7, Farid Javani (UMBC), Anonymization by oblivious transfer.
talk: Machine Learning: New Methodology for Physical & Social Sciences, 1pm ET 3/24