talk: Semantically Rich Knowledge Graphs to Automate Cloud Data Security & Compliance, 12-1 ET Feb 18


The UMBC Cyber Defense Lab presents

Semantically Rich Knowledge Graphs to Automate Cloud Data Security and Compliance

Prof. Karuna Joshi
Information Systems, UMBC

12-1 pm ET, Friday, 18 February 2022, via WebEx


To address data protection concerns, authorities and standards bodies worldwide have released a plethora of regulations, guidelines, and software controls to be applied to cloud services data. As a result, service providers that maintain their end-users' private attributes have seen a surge in compliance requirements. This is especially important in critical domains like healthcare and finance. Because most of these cloud data regulations are not available in a machine-processable format, adhering to them requires significant manual effort. Many of the laws have overlapping rules, but because they do not reference one another, providers must duplicate effort to comply with each regulation. Furthermore, providers often encrypt cloud data to meet regulatory requirements, but encrypted records cannot be queried without the large overhead of decryption. As the volume of cloud-based services reaches big data levels, it is essential that encrypted cloud data be searchable.

We have developed a semantically rich ontology, or knowledge graph, that captures the knowledge embedded in various cloud data compliance regulations using techniques from AI, NLP, and text extraction. It includes data threats and the security controls needed to mitigate the risks. We have also developed a novel approach that facilitates searchable encryption using attribute-based encryption (ABE) and multi-keyword search techniques. In this talk, I will present the results of this work, especially as applied to the GDPR, PCI-DSS, and HIPAA regulations.
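To make the searchable-encryption idea concrete, here is a minimal sketch of multi-keyword search over encrypted records using deterministic HMAC tokens. This simplified construction is an assumption for illustration; the speakers' approach uses attribute-based encryption (ABE), which additionally enforces attribute-based access control. The third-party cryptography package is assumed for record encryption.

```python
# Sketch of multi-keyword searchable encryption via PRF tokens.
# Assumption: HMAC-SHA256 as the PRF; Fernet (pip install cryptography)
# for record encryption. The speakers' ABE-based scheme is richer.
import hmac
import hashlib
from cryptography.fernet import Fernet

enc_key = Fernet.generate_key()       # encrypts record contents
prf_key = b"keyword-token-secret"     # in practice, a random secret key
fernet = Fernet(enc_key)

def token(keyword: str) -> bytes:
    """Deterministic search token for one keyword (PRF output)."""
    return hmac.new(prf_key, keyword.lower().encode(), hashlib.sha256).digest()

# Client side: encrypt each record and attach tokens for its keywords.
records = [("patient 17 lab result", {"hipaa", "lab"}),
           ("card transaction 42",   {"pci-dss", "payment"})]
index = [(fernet.encrypt(text.encode()), {token(k) for k in kws})
         for text, kws in records]

# Server side: multi-keyword search matches tokens without decrypting.
query = {token("hipaa"), token("lab")}
hits = [ct for ct, toks in index if query <= toks]

# Client side: only matching ciphertexts are downloaded and decrypted.
print([fernet.decrypt(ct).decode() for ct in hits])
```

The server never holds a decryption key; it matches opaque tokens, which is what makes big-data-scale search over encrypted compliance records feasible.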

Dr. Karuna Pande Joshi is an associate professor of information systems at UMBC and the UMBC director of the Center for Accelerated Real-Time Analytics (CARTA). She also directs the Knowledge Analytics Cognitive and Cloud (KnACC) Lab. Her research focuses on data science, cloud computing, data security and privacy, and healthcare IT systems. She has published over 70 papers, and her research is supported by ONR, NSF, DoD, IBM, GE Research, and Cisco. She teaches courses in big data, database systems design, decision support systems, and software engineering. She received her MS and Ph.D. in computer science from UMBC, where she was twice awarded the IBM Ph.D. Fellowship, and her bachelor's degree in computer engineering from the University of Mumbai, India. Dr. Joshi also has extensive industry experience, primarily as an IT program/project manager at the International Monetary Fund.


Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1 pm. All meetings are open to the public. Upcoming CDL meetings: Mar 4, Mar 18, Apr 1 Kirellos Elsaad (UMBC), Apr 15, Apr 29 Ian Blumenfeld (UMBC), May 13 Enka Blanchard (Digitrust Loria, France)

talk: Users’ Preferences for Enhanced Misinformation Warnings on Twitter


The UMBC Cyber Defense Lab presents

Context, a Red Flag, or Both? Users’ Preferences for Enhanced Misinformation Warnings on Twitter

Prof. Filipo Sharevski
Adversarial Cybersecurity Automation Lab
DePaul University

12–1pm ET Friday, 4 Feb. 2022, WebEx


Warning users about hazardous information on social media is far from a simple usability task. So-called soft moderation must balance debunking falsehoods against avoiding moderation bias, all without disrupting the social media consumption flow. Platforms thus employ visually indistinguishable warning tags with generic text beneath suspected misinformation content. This approach has produced an unfavorable outcome in which the warnings "backfired" and users believed the misinformation more, not less. To address this predicament, we developed enhancements to misinformation warnings in which users are advised of the context of the information hazard and exposed to standard warning iconography.

Balancing for comprehensibility, the enhanced warning tags provide context with regard to (1) fabricated facts and (2) improbable interpretations of facts. Instead of the generic "Get the facts about the COVID-19 vaccine" warning, users in the first case are warned about "Strange, Potentially Adverse Misinformation (SPAM): If this were an email, this would have ended up in your spam folder" and in the second case about "For Facts Sake (FFS): In this tweet, facts are missing, out of context, manipulated, or missing a source." The SPAM warning tag contextualizes misinformation through an analogy to the already familiar phenomenon of spam email, while the FFS warning tag, as an acronym, blends with Twitter's characteristically compact language, driven by the tweet length restriction. The text-only warning tags were then paired with a usable security intervention hitherto ignored for misinformation: red flags as watermarks over the suspected misinformation tweets. The tag-and-flag variant also allowed us to test user receptivity to warnings that incorporate contrast (red), gestalt iconography for general warnings (flag), and actionable advice for inspection (watermark).

We ran an A/B evaluation against Twitter's original warnings in a usability study with 337 participants. The majority of participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation. The enhanced warnings were most favored by politically left-leaning participants and, to a lesser degree, moderate participants, but they also appealed to roughly a third of right-leaning participants. Education level was the only demographic factor shaping participants' preferences for the proposed enhancements. Through this work, we are the first to perform an A/B evaluation of enhanced social media warnings that provide context and introduce visual design frictions in interacting with hazardous information. Our analysis of sentiment toward soft moderation in general, and enhanced warning tags in particular, from political and demographic perspectives provides the basis for our recommendations about future refinements, frictions, and adaptations of soft moderation toward secure and safe behavior on social media.
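For readers unfamiliar with this kind of A/B preference analysis, the sketch below runs a chi-square test of whether warning preference differs between two participant groups; the counts are invented for illustration and do not reproduce the study's data (SciPy assumed).

```python
# Sketch of an A/B preference comparison with invented counts.
# Assumption: SciPy's chi-square test of independence.
from scipy.stats import chi2_contingency

#          prefers enhanced, prefers original
table = [[90, 30],   # e.g., left-leaning participants (hypothetical)
         [40, 45]]   # e.g., right-leaning participants (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # small p: groups differ
```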

About the Speaker. Dr. Filipo Sharevski () is an assistant professor of cybersecurity and director of the Adversarial Cybersecurity Automation Lab (https://acal.cdm.depaul.edu). His main research interests are adversarial cybersecurity automation, mis/disinformation, usable security, and social engineering. Sharevski earned his Ph.D. in interdisciplinary information security at Purdue University's CERIAS in 2015.

Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1 pm. All meetings are open to the public.

talk: Building Resilience against Cyberattacks, 12pm ET, Dec 15


ArtIAMAS Seminar Series, Co-organized by UMBC, UMCP, and the Army Research Lab

Building Resilience against Cyberattacks

Aryya Gangopadhyay, UMBC


12-1 pm ET, Wednesday, 15 December 2021
Online via WebEx


In this talk, we will address the issue of building resilient systems in the face of cyberattacks. We will present a defense mechanism for cyberattacks using a three-tier architecture that can be used to secure army assets and tactical information. The top tier represents the front end, where UAVs, UGVs, and similar platforms perform autonomous sensing and inference through AI models. We will illustrate how these models can be defended against data poisoning attacks. In the middle tier, we focus on building cyber defenses against attacks in federated learning environments, where models are trained on a large corpus of decentralized data without transferring raw data over a communication channel. The bottom tier represents back-end servers that train deep learning models with large amounts of data; these models can subsequently be pushed to the edge for inferencing. We will demonstrate how adaptive models can be developed for detecting and preventing various types of attacks at this level.
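As a hedged illustration of middle-tier defenses, the sketch below shows one standard robust-aggregation idea, a coordinate-wise median over client updates; it is an assumed example of the genre, not necessarily the specific mechanism the talk presents.

```python
# Sketch of robust aggregation against poisoned federated updates.
# Assumption: coordinate-wise median, one standard defense of this kind.
import numpy as np

def robust_aggregate(client_updates):
    """Coordinate-wise median over stacked client weight updates."""
    return np.median(np.stack(client_updates), axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=8) for _ in range(9)]  # 9 honest clients
poisoned = [np.full(8, -50.0)]                             # 1 attacker

mean_agg = np.mean(np.stack(honest + poisoned), axis=0)  # pulled far from 1.0
median_agg = robust_aggregate(honest + poisoned)         # stays near 1.0
print(mean_agg.round(2))
print(median_agg.round(2))
```

Because the median ignores extreme coordinates, a small number of poisoned clients cannot drag the global model the way they can under plain averaging.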

Dr. Aryya Gangopadhyay is a professor in the Information Systems department at the University of Maryland, Baltimore County, with a courtesy appointment as a professor in Computer Science and Electrical Engineering at UMBC. He is also the director of the Center for Real-time Distributed Sensing and Autonomy (CARDS) at UMBC. His research interests include adversarial machine learning at the edge, cybersecurity, and smart cities. He has graduated 16 Ph.D. students and is currently mentoring several others at UMBC. He has published over 125 peer-reviewed research articles and has received extramural support from ARL, NSF, NIST, the Department of Education, and IBM.

talk: Shadow IT in Higher Ed: Survey & Case Study for Cybersecurity, 12-1 Fri Dec 3

Shadow IT is the use of information technology systems, devices, software, applications, and services without explicit IT department approval.

The UMBC Cyber Defense Lab presents

Shadow IT in Higher Education: Survey and Case Study for Cybersecurity

Selma Gomez Orr, Cyrus Jian Bonyadi, Enis Golaszewski, and Alan T. Sherman
UMBC Cyber Defense Lab

Joint work with Peter A. H. Peterson (University of Minnesota Duluth), Richard Forno, Sydney Johns, and Jimmy Rodriguez

12-1:00 pm, Friday, 3 December 2021, online via WebEx


We explore shadow information technology (IT) at institutions of higher education through a two-tiered approach involving a detailed case study and comprehensive survey of IT professionals. In its many forms, shadow IT is the software or hardware present in a computer system or network that lies outside the typical review process of the responsible IT unit. We carry out a case study of an internally built legacy grants management system at the University of Maryland, Baltimore County that exemplifies the vulnerabilities, including cross-site scripting and SQL injection, typical of such unauthorized and ad-hoc software. We also conduct a survey of IT professionals at universities, colleges, and community colleges that reveals new and actionable information regarding the prevalence, usage patterns, types, benefits, and risks of shadow IT at their respective institutions.
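To illustrate one vulnerability class from the case study, here is a minimal, hypothetical example (not code from the grants management system itself) of how string-built SQL enables injection and how a parameterized query prevents it.

```python
# Hypothetical illustration of SQL injection and its standard fix.
# Assumption: an in-memory SQLite table standing in for the real system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grants (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO grants VALUES (1, 'alice'), (2, 'bob')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets input rewrite the query logic.
rows = conn.execute(
    "SELECT * FROM grants WHERE owner = '" + user_input + "'").fetchall()
print(len(rows))   # 2 -- the injected OR clause matches every row

# Fixed: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM grants WHERE owner = ?", (user_input,)).fetchall()
print(len(rows))   # 0 -- no row has that literal owner value
```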

Further, we propose a security-based profile of shadow IT, involving a subset of elements from existing shadow IT taxonomies, that categorizes shadow IT from a security perspective. Based on this profile, survey respondents identified the predominant form of shadow IT at their institutions, revealing close similarities to findings from our case study.

Through this work, we are the first to identify possible susceptibility factors associated with the occurrence of shadow IT-related security incidents within academic institutions. Correlations of significance include the presence of certain graduate schools, the level of decentralization of the IT department, the types of shadow IT present, the percentage of security violations related to shadow IT, and the institution’s overall attitude toward shadow IT. The combined elements of our case study, profile, and survey provide the first comprehensive view of shadow IT security at academic institutions, highlighting the tension between its risks and benefits, and suggesting strategies for managing it successfully.


Dr. Selma Gomez Orr ( ) received her Ph.D. from Harvard University in the field of decision sciences. She also holds master's degrees in applied mathematics, engineering sciences, and business administration from Harvard. She has worked in the private sector in the fields of cybersecurity and data analytics. Most recently, as a CyberCorps Scholarship for Service (SFS) scholar, Dr. Orr completed a Master of Professional Studies in both cybersecurity and data science at UMBC.

Cyrus Jian Bonyadi ( ) is a computer science Ph.D. student and former SFS scholar studying consensus theory at UMBC under the direction of Alan T. Sherman, Sisi Duan, and Haibin Zhang.

Enis Golaszewski ( ) is a Ph.D. student at UMBC under Alan T. Sherman where he studies, researches, and teaches cryptographic protocol analysis. A former SFS scholar, Golaszewski helps lead annual research studies that analyze and break software at UMBC.

Dr. Alan T. Sherman () is a professor of computer science, director of CDL, and associate director of UMBC’s Cybersecurity Center. His main research interest is high-integrity voting systems. Sherman earned the Ph.D. degree in computer science at MIT in 1987 studying under Ronald L. Rivest.


Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1 pm. All meetings are open to the public. Upcoming CDL Meetings: Feb 4, Filipo Sharevski

talk: Thinking Like an Attacker: Towards a Definition and Non-Technical Assessment of Adversarial Thinking, 12-1pm ET 4/30


The UMBC Cyber Defense Lab presents


Thinking Like an Attacker:
Towards a Definition and Non-Technical Assessment of Adversarial Thinking


Prof. Peter A. H. Peterson
Department of Computer Science
University of Minnesota Duluth


12:00–1:00 pm ET, Friday, 30 April 2021
via WebEx


“Adversarial thinking” (AT), sometimes called the “security mindset” or described as the ability to “think like an attacker,” is widely accepted in the computer security community as an essential ability for successful cybersecurity practice. Supported by intuition and anecdotes, many in the community stress the importance of AT, and multiple projects have produced interventions explicitly intended to strengthen individual AT skills to improve security in general. However, there is no agreed-upon definition of “adversarial thinking” or its components, and accordingly, no test for it. Because of this absence, it is impossible to meaningfully quantify AT in subjects, AT’s importance for cybersecurity practitioners, or the effectiveness of interventions designed to improve AT.

Working towards the goal of a characterization of AT in cybersecurity and a non-technical test for AT that anyone can take, I will discuss existing conceptions of AT from the security community, as well as ideas about AT in other fields with adversarial aspects, including war, politics, law, critical thinking, and games. I will also describe some of the unique difficulties of creating a non-technical test for AT, compare and contrast this effort to our work on the CATS and Security Misconceptions projects, and describe some potential solutions.

I will explore potential uses for such an instrument, including measuring a student’s change in AT over time, measuring the effectiveness of interventions meant to improve AT, comparing AT in different populations (e.g., security professionals vs. software engineers), and identifying individuals from all walks of life with strong AT skills—people who might help meet our world’s pressing need for skilled and insightful security professionals and researchers. Along the way, I will give some sample non-technical adversarial thinking challenges and describe how they might be graded and validated.


Peter A. H. Peterson is an assistant professor of computer science at the University of Minnesota Duluth, where he teaches and directs the Laboratory for Advanced Research in Systems (LARS), a group dedicated to research in operating systems and security, with a special focus on research and development to make security education more effective and accessible. He is an active member of the Cybersecurity Assessment Tools (CATS) project, which is creating and validating two concept inventories for cybersecurity; is working on an NSF-funded grant to identify and remediate commonsense misconceptions about cybersecurity; and is the author of several hands-on security exercises for Deterlab that have been used at many institutions around the world. He earned his Ph.D. from the University of California, Los Angeles for work on “adaptive compression”—systems that make compression decisions dynamically to improve efficiency. He can be reached at .


Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays, 12-1 pm. All meetings are open to the public. Upcoming CDL Meetings: May 7, Farid Javani (UMBC), Anonymization by oblivious transfer

Talk: Cyber Lessons, Learned and Unlearned, 1-2 pm ET 4/20/21


The UMBC Center for Cybersecurity (UCYBR) & The Department of Computer Science & Electrical Engineering (CSEE) Present:

“Cyber Lessons, Learned and Unlearned”

Professor Eugene Spafford
Professor of Computer Science & Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security)
Purdue University

Tuesday 20 April 2021 1-2PM ET

WHERE
https://umbc.webex.com/umbc/j.php?MTID=m576a3dada9e0c63c07beb51fedbff3d1

Dr. Eugene Spafford is a professor with an appointment in Computer Science at Purdue University, where he has served on the faculty since 1987. He also holds courtesy appointments as a professor of Philosophy, Communication, Electrical and Computer Engineering, and Political Science. He serves on a number of advisory and editorial boards. Spafford’s current research interests are primarily in the areas of information security, computer crime investigation, and information ethics. He is generally recognized as one of the senior leaders in the field of computing.

Among other things, Spaf (as he is known to his friends, colleagues, and students) is Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security) and was the founder and director of the (superseded) COAST Laboratory. He is Editor-in-Chief of the Elsevier journal Computers & Security, the oldest journal in the field of information security and the official outlet of IFIP TC-11.

Spaf has been a student and researcher in computing for over 40 years, 35 of which have been in security-related areas. During that time, computing has evolved from mainframes to the Internet of Things. Of course, along with these changes in computing have been changes in technology, access, and both how we use and misuse computing resources. Who knows what the future holds?

In this UCYBR talk, Spaf will reflect upon this evolution and trends and discuss what he sees as significant “lessons learned” from history. Will we learn from our past? Or are we destined to repeat history (again!) and never break free from the many cybersecurity challenges that continue to impact our world? Join UCYBR and CSEE for an engaging and informative presentation from one of the most respected luminaries of the cybersecurity field!

More information about Spaf’s distinguished career in cybersecurity, his publications, talks, and more can be found at https://spaf.cerias.purdue.edu/.

Host: Dr. Richard Forno ()

talk: MeetingMayhem: Teaching Adversarial Thinking through a Web-Based Game, 12-1 ET 4/9

The UMBC Cyber Defense Lab presents

MeetingMayhem: Teaching Adversarial Thinking through a Web-Based Game


Akriti Anand, Richard Baldwin, Sudha Kosuri, Julie Nau, and Ryan Wnuk-Fink
UMBC Cyber Defense Lab

joint work with Alan Sherman, Marc Olano, Linda Oliva, Edward Zieglar, and Enis Golaszewski

12:00 noon–1 pm ET, Friday, 9 April 2021
online via WebEx


We present our progress and plans in developing MeetingMayhem, a new web-based educational exercise that helps students learn adversarial thinking in communication networks. The goal of the exercise is to arrange a meeting time and place by sending and receiving messages through an insecure network that is under the control of a malicious adversary. Players can assume the role of participants or of the adversary. The adversary can disrupt the efforts of the participants by intercepting, modifying, blocking, replaying, and injecting messages. Through this engaging, authentic challenge, students learn the dangers of the network and, in particular, the Dolev-Yao network intruder model. They also learn the value and subtleties of using cryptography (including encryption, digital signatures, and hashing) and protocols to mitigate these dangers. Our team is developing the exercise in spring 2021 and will evaluate its educational effectiveness.
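As a taste of what players discover, the sketch below shows, under the assumption of a pre-shared key and an HMAC integrity check, why message authentication frustrates a Dolev-Yao adversary who rewrites messages in transit; it is a standalone illustration, not code from MeetingMayhem.

```python
# Standalone illustration: an HMAC tag defeats silent message tampering.
# Assumption: Alice and Bob share a secret key the adversary lacks.
import hmac
import hashlib

shared_key = b"alice-and-bob-secret"

def send(message: bytes):
    """Sender attaches an authentication tag to the message."""
    return message, hmac.new(shared_key, message, hashlib.sha256).digest()

def receive(message: bytes, tag: bytes):
    """Receiver recomputes the tag and rejects any mismatch."""
    expected = hmac.new(shared_key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None

msg, tag = send(b"Meet at the library at 3pm")

# The Dolev-Yao adversary intercepts and rewrites the message in transit
# but cannot forge a matching tag without the shared key.
tampered = b"Meet at the parking lot at 3pm"
assert receive(tampered, tag) is None       # modification detected
assert receive(msg, tag) == msg             # authentic message accepted
```

Integrity alone does not stop blocking or replaying, which is exactly the kind of subtlety the exercise is designed to surface.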


Akriti Anand () is an MS student in computer science working with Alan Sherman. She is the lead software engineer and focuses on the web frontend. Richard Baldwin () is a BS student in computer science, a member of the UMBC CyberDawgs, and lab manager for the Cyber Defense Lab. Sudha Kosuri () is an MS student in computer science. She is working on the frontend (using React and Flask) and its integration with the backend. Julie Nau () is a BS student in computer science working on the backend and on visualizations. Ryan Wnuk-Fink () is a Ph.D. student in computer science working with Alan Sherman. He is developing the backend.


Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays. All meetings are open to the public.

 Upcoming CDL Meetings: April 23, Peter Peterson (Univ. of Minnesota Duluth), Adversarial thinking; May 7, Farid Javani (UMBC), Anonymization by oblivious transfer

talk: Mining social media data for health, public health & popular events, 1-2pm ET 4/2


Mining social media data for health, public health, and popular events

Anietie Andy, University of Pennsylvania

1:00-2:00 pm ET, Friday, 2 April 2021

online via WebEx


Increasingly, individuals are turning to social media and online forums such as Twitter and Reddit to communicate about a range of issues including their health and well-being, public health concerns, and large public events such as the presidential debates. These user-generated social media data are prone to noise and misinformation. Developing and applying Artificial Intelligence (AI) algorithms can enable researchers to glean pertinent information from social media and online forums for a range of uses.  For example, patients’ social media data may include information about their lifestyle that might not typically be reported to clinicians; however, this information may allow clinicians to provide individualized recommendations for managing their patients’ health. Separately, insights obtained from social media data can aid government agencies and other relevant institutions in better understanding the concerns of the populace as it relates to public health issues such as COVID-19 and its long-term effects on the well-being of the public. Finally, insights obtained from social media posts can capture how individuals react to an event and can be combined with other data sources, such as videos, to create multimedia summaries. In all these examples, there is much to be gained by applying AI algorithms to user-generated social media data.
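As one hedged illustration of the kind of algorithm involved, the sketch below uses TF-IDF features and non-negative matrix factorization, standard techniques assumed here for illustration rather than taken from the talk, to surface themes in a handful of noisy posts (scikit-learn assumed).

```python
# Sketch: surfacing themes in noisy posts with TF-IDF + NMF.
# Assumption: generic scikit-learn techniques, not the talk's algorithms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

posts = [
    "second dose today, side effects but feeling ok",
    "worried about vaccine side effects, any advice?",
    "debate tonight!! who else is watching the debate",
    "the debate moderators totally lost control again",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(posts)
nmf = NMF(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    top = topic.argsort()[-3:][::-1]          # three strongest terms
    print(f"topic {k}:", [terms[i] for i in top])
```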

In this talk, I will discuss my work in creating and applying AI algorithms that harness data from various sources (online forums, electronic medical records, and health care facility ratings) to gain insights about health and well-being and public health. I will also discuss the development of an algorithm for resolving pronoun mentions in event-related social media comments and a pipeline of algorithms for creating a multimedia summary of popular events. I will conclude by discussing my current and future work around creating and applying AI algorithms to: (a) gain insights about county-level COVID-19 vaccine concerns, (b) detect, reduce, and mitigate misinformation in text and online forums, and (c) understand the expression and evolution of bias (expressed in text) over time. 


Anietie Andy is a senior data scientist at the Penn Medicine Center for Digital Health. His research focuses on developing and applying natural language processing and machine learning algorithms to health care, public health, and well-being. He is also interested in developing natural language processing and machine learning algorithms that use multimodal sources (text, video, images) to summarize and gain insights about events and online communities.

talk: Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes, 1-2pm 3/31


Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes

Michael Rivera, Carnegie Mellon University 

1:00-2:00 pm Wednesday, 31 March 2021

via WebEx


Low-cost digital fabrication technology, and in particular 3D printing, is ushering in a new wave of personal computing. The technology promises that users will be able to design, customize, and create any object to fit their needs. While the objects that we interact with daily are generally made of many types of materials—they may be hard, soft, conductive, etc.—current digital fabrication machines have largely been limited to producing rigid and passive objects. In this talk, I will present my research on developing digital fabrication processes that incorporate new materials such as textiles and hydrogels. These processes include novel 3D printer designs, software tools, and human-in-the-loop fabrication techniques. With these processes, new materials can be controlled, customized, and imbued with computational capabilities—at design time and after fabrication—for creating personalized and interactive objects. I will conclude this talk with my vision for enabling anyone to create with digital fabrication technology and its impact beyond the individual.


Michael Rivera is a Ph.D. candidate at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, where he is advised by Scott Hudson. He works at the intersection of human-computer interaction, digital fabrication, and materials science. He has published papers on novel digital fabrication processes and interactive systems at top-tier HCI venues, including ACM CHI, UIST, DIS, and IMWUT. His work has been recognized with a Google – CMD-IT Dissertation Fellowship, an Adobe Research Fellowship Honorable Mention, and a Xerox Technical Minority Scholarship. Before Carnegie Mellon, he completed an M.S.E. in Computer Graphics and Game Technology and a B.S.E. in Digital Media Design at the University of Pennsylvania. He has also worked at the Immersive Experiences Lab of HP Labs, and as a software engineer at Facebook and LinkedIn.

talk: Forward & Inverse Causal Inference in a Tensor Framework, 1-2 pm ET, 3/29


Forward and Inverse Causal Inference in a Tensor Framework


M. Alex O. Vasilescu
Institute of Pure and Applied Mathematics, UCLA

1-2:00 pm Monday, March 29, 2021
via WebEx

Developing causal explanations for correct results, or for failures, from mathematical equations and data is important for developing trustworthy artificial intelligence and retaining public trust. Causal explanations are germane to the “right to an explanation” statute, i.e., to data-driven decisions such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions consistent with Donald Rubin’s quantitative definition of causality, where “A causes B” means “the effect of A is B,” a measurable and experimentally repeatable quantity. Computer graphics may be viewed as addressing questions analogous to forward causal inference, which addresses the “what if” question and estimates the change in effects given a change in a causal factor. Computer vision may be viewed as addressing questions analogous to inverse causal inference, which addresses the “why” question, defined here as the estimation of causes given a forward causal model and a set of observations that constrain the solution set. Tensor algebra is a suitable and transparent framework for modeling the mechanism that generates observed data. Tensor-based data analysis, also known in the literature as structural equation modeling with multimode latent variables, has been employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics since the 1960s. More recently, tensor factor analysis has been successfully employed to represent cause and effect in computer vision and computer graphics, and for prediction and dimensionality reduction in machine learning tasks.
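For readers who want the flavor of the tensor framework in code, here is a minimal higher-order SVD (HOSVD) sketch in NumPy; it is a generic textbook construction, assumed for illustration, not Prof. Vasilescu's implementation. Each mode matrix can be read as representing one causal factor of data formation.

```python
# Sketch: higher-order SVD (HOSVD), a generic multilinear factorization.
# Each mode matrix U_m can be read as one causal factor of data formation.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Mode-m product of tensor T with matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

def hosvd(T):
    """Factor T into a core tensor and orthonormal mode matrices."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
          for m in range(T.ndim)]
    core = T
    for m, U in enumerate(Us):
        core = mode_multiply(core, U.T, m)
    return core, Us

# Synthetic data tensor, e.g., people x views x illuminations.
T = np.random.default_rng(0).random((10, 5, 4))
core, Us = hosvd(T)

# Multiplying the core back by each mode matrix recovers the data.
R = core
for m, U in enumerate(Us):
    R = mode_multiply(R, U, m)
assert np.allclose(R, T)
```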


M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. She is currently a senior fellow at UCLA’s Institute for Pure and Applied Mathematics (IPAM) and has held research scientist positions at the MIT Media Lab (2005-07) and at New York University’s Courant Institute of Mathematical Sciences (2001-05). Vasilescu introduced the tensor paradigm for computer vision, computer graphics, and machine learning. She addressed causal inferencing questions by framing computer graphics and computer vision as multilinear problems. Causal inferencing in a tensor framework facilitates the analysis, recognition, synthesis, and interpretability of data. The development of the tensor framework has been spearheaded with premier papers such as Human Motion Signatures (2001), TensorFaces (2002), TensorTextures (2004), Multilinear Independent Component Analysis (2005), and Multilinear Projection for Recognition (2007, 2011). Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense’s Combating Terrorism Support Program, the Intelligence Advanced Research Projects Activity (IARPA), and NSF. Her work was featured on the cover of Computer World and in articles in the New York Times, the Washington Times, and elsewhere. MIT’s Technology Review named her to its TR100 list of honorees, and the National Academy of Sciences co-awarded her a Keck Futures Initiative grant.
