talk: Thinking Like an Attacker: Towards a Definition and Non-Technical Assessment of Adversarial Thinking, 12-1pm ET 4/30


The UMBC Cyber Defense Lab presents


Thinking Like an Attacker:
Towards a Definition and Non-Technical Assessment of Adversarial Thinking


Prof. Peter A. H. Peterson
Department of Computer Science
University of Minnesota Duluth


12:00–1:00 pm ET,  Friday, 30 April 2021
via WebEx


“Adversarial thinking” (AT), sometimes called the “security mindset” or described as the ability to “think like an attacker,” is widely accepted in the computer security community as an essential ability for successful cybersecurity practice. Supported by intuition and anecdotes, many in the community stress the importance of AT, and multiple projects have produced interventions explicitly intended to strengthen individual AT skills to improve security in general. However, there is no agreed-upon definition of “adversarial thinking” or its components, and accordingly, no test for it. Because of this absence, it is impossible to meaningfully quantify AT in subjects, AT’s importance for cybersecurity practitioners, or the effectiveness of interventions designed to improve AT. Working towards the goal of a characterization of AT in cybersecurity and a non-technical test for AT that anyone can take, I will discuss existing conceptions of AT from the security community, as well as ideas about AT in other fields with adversarial aspects including war, politics, law, critical thinking, and games. I will also describe some of the unique difficulties of creating a non-technical test for AT, compare and contrast this effort to our work on the CATS and Security Misconceptions projects, and describe some potential solutions. I will explore potential uses for such an instrument, including measuring a student’s change in AT over time, measuring the effectiveness of interventions meant to improve AT, comparing AT in different populations (e.g., security professionals vs. software engineers), and identifying individuals from all walks of life with strong AT skills—people who might help meet our world’s pressing need for skilled and insightful security professionals and researchers. Along the way, I will give some sample non-technical adversarial thinking challenges and describe how they might be graded and validated.


 Peter A. H. Peterson is an assistant professor of computer science at the University of Minnesota Duluth, where he teaches and directs the Laboratory for Advanced Research in Systems (LARS), a group dedicated to research in operating systems and security, with a special focus on research and development to make security education more effective and accessible. He is an active member of the Cybersecurity Assessment Tools (CATS) project working to create and validate two concept inventories for cybersecurity, is working on an NSF-funded grant to identify and remediate commonsense misconceptions about cybersecurity, and is also the author of several hands-on security exercises for Deterlab that have been used at many institutions around the world. He earned his Ph.D. from the University of California, Los Angeles for work on “adaptive compression”—systems that make compression decisions dynamically to improve efficiency. He can be reached at .


Host: Alan T. Sherman,  Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays. All meetings are open to the public. Upcoming CDL Meetings: May 7, Farid Javani (UMBC), Anonymization by oblivious transfer

Talk: Cyber Lessons, Learned and Unlearned, 1-2 pm ET 4/20/21


The UMBC Center for Cybersecurity (UCYBR) & The Department of Computer Science & Electrical Engineering (CSEE) Present:

“Cyber Lessons, Learned and Unlearned”

Professor Eugene Spafford
Professor of Computer Science & Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security)
Purdue University

Tuesday 20 April 2021 1-2PM ET

WHERE
https://umbc.webex.com/umbc/j.php?MTID=m576a3dada9e0c63c07beb51fedbff3d1

Dr. Eugene Spafford is a professor with an appointment in Computer Science at Purdue University, where he has served on the faculty since 1987. He is also a professor of Philosophy (courtesy), a professor of Communication (courtesy), a professor of Electrical and Computer Engineering (courtesy), and a professor of Political Science (courtesy). He serves on a number of advisory and editorial boards. Spafford’s current research interests are primarily in the areas of information security, computer crime investigation, and information ethics. He is generally recognized as one of the senior leaders in the field of computing.

Among other things, Spaf (as he is known to his friends, colleagues, and students) is Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security), and was the founder and director of the (superseded) COAST Laboratory. He is Editor-in-Chief of the Elsevier journal Computers & Security, the oldest journal in the field of information security, and the official outlet of IFIP TC-11.

Spaf has been a student and researcher in computing for over 40 years, 35 of which have been in security-related areas. During that time, computing has evolved from mainframes to the Internet of Things. Of course, along with these changes in computing have been changes in technology, access, and both how we use and misuse computing resources. Who knows what the future holds?

In this UCYBR talk, Spaf will reflect upon this evolution and trends and discuss what he sees as significant “lessons learned” from history. Will we learn from our past? Or are we destined to repeat history (again!) and never break free from the many cybersecurity challenges that continue to impact our world? Join UCYBR and CSEE for an engaging and informative presentation from one of the most respected luminaries of the cybersecurity field!

More information about Spaf’s distinguished career in cybersecurity, his publications, talks, and more can be found at https://spaf.cerias.purdue.edu/.

Host: Dr. Richard Forno ()

talk: MeetingMayhem: Teaching Adversarial Thinking through a Web-Based Game, 12-1 ET 4/9

The UMBC Cyber Defense Lab presents

MeetingMayhem:  Teaching Adversarial Thinking through a Web-Based Game


Akriti Anand, Richard Baldwin, Sudha Kosuri, Julie Nau, and Ryan Wunk-Fink
UMBC Cyber Defense Lab

joint work with Alan Sherman, Marc Olano, Linda Oliva, Edward Zieglar, and Enis Golaszewski

12:00 noon–1 pm ET, Friday, 9 April 2021
online via WebEx


We present our progress and plans in developing MeetingMayhem, a new web-based educational exercise that helps students learn adversarial thinking in communication networks. The goal of the exercise is to arrange a meeting time and place by sending and receiving messages through an insecure network that is under the control of a malicious adversary.  Players can assume the role of participants or of the adversary.  The adversary can disrupt the efforts of the participants by intercepting, modifying, blocking, replaying, and injecting messages.  Through this engaging, authentic challenge, students learn the dangers of the network, and in particular the Dolev-Yao network intruder model. They also learn the value and subtleties of using cryptography (including encryption, digital signatures, and hashing) and protocols to mitigate these dangers.  Our team is developing the exercise in spring 2021 and will evaluate its educational effectiveness.
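The Dolev-Yao intruder model the exercise teaches can be illustrated with a minimal sketch. This is purely an illustration, not the MeetingMayhem implementation; the message formats and the shared-key defense shown are hypothetical.

```python
import hashlib
import hmac

def adversary(message):
    """A Dolev-Yao adversary controls the channel: it may pass,
    block, or modify any message in transit. Here it rewrites the
    proposed meeting place."""
    if "meet at" in message:
        return message.replace("the library", "the cafeteria")
    return message  # pass through unchanged

def send(message, channel=adversary):
    """Every message travels through the adversary-controlled channel."""
    return channel(message)

def sign(key, message):
    """A message authentication code lets recipients detect tampering."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

key = b"shared-secret"          # hypothetical pre-shared key
msg = "Alice: meet at the library at noon"
tag = sign(key, msg)

tampered = send(msg)            # arrives altered, with no visible trace
# With a MAC, the recipient can detect the modification:
assert sign(key, tampered) != tag
```

Without the MAC, the recipient has no way to tell the message was altered, which is exactly the danger the exercise is designed to make vivid.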


Akriti Anand () is an MS student in computer science working with Alan Sherman.  She is the lead software engineer and focuses on the web frontend. Richard Baldwin () is a BS student in computer science, a member of the CyberDawgs, and lab manager for the Cyber Defense Lab. Sudha Kosuri () is an MS student in computer science.  She is working on the frontend (using React and Flask) and its integration with the backend. Julie Nau () is a BS student in computer science.  She is working on the backend and on visualizations. Ryan Wunk-Fink () is a PhD student in computer science working with Alan Sherman. He is developing the backend.


Host: Alan T. Sherman,  Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays.  All meetings are open to the public.

 Upcoming CDL Meetings: April 23, Peter Peterson (Univ. of Minnesota Duluth), Adversarial thinking; May 7, Farid Javani (UMBC), Anonymization by oblivious transfer

talk: Mining social media data for health, public health & popular events, 1-2pm ET 4/2


Mining social media data for health, public health, and popular events

Anietie Andy, University of Pennsylvania

1:00-2:00 pm ET, Friday, 2 April 2021

online via WebEx


Increasingly, individuals are turning to social media and online forums such as Twitter and Reddit to communicate about a range of issues including their health and well-being, public health concerns, and large public events such as the presidential debates. These user-generated social media data are prone to noise and misinformation. Developing and applying Artificial Intelligence (AI) algorithms can enable researchers to glean pertinent information from social media and online forums for a range of uses.  For example, patients’ social media data may include information about their lifestyle that might not typically be reported to clinicians; however, this information may allow clinicians to provide individualized recommendations for managing their patients’ health. Separately, insights obtained from social media data can aid government agencies and other relevant institutions in better understanding the concerns of the populace as it relates to public health issues such as COVID-19 and its long-term effects on the well-being of the public. Finally, insights obtained from social media posts can capture how individuals react to an event and can be combined with other data sources, such as videos, to create multimedia summaries. In all these examples, there is much to be gained by applying AI algorithms to user-generated social media data.

In this talk, I will discuss my work in creating and applying AI algorithms that harness data from various sources (online forums, electronic medical records, and health care facility ratings) to gain insights about health and well-being and public health. I will also discuss the development of an algorithm for resolving pronoun mentions in event-related social media comments and a pipeline of algorithms for creating a multimedia summary of popular events. I will conclude by discussing my current and future work around creating and applying AI algorithms to: (a) gain insights about county-level COVID-19 vaccine concerns, (b) detect, reduce, and mitigate misinformation in text and online forums, and (c) understand the expression and evolution of bias (expressed in text) over time. 


Anietie Andy is a senior data scientist at Penn Medicine Center for Digital Health. His research focuses on developing and applying natural language processing and machine learning algorithms to health care, public health, and well-being. He is also interested in developing natural language processing and machine learning algorithms that use multimodal sources (text, video, images) to summarize and gain insights about events and online communities.

talk: Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes, 1-2pm 3/31


Enabling Computation, Control, and Customization of Materials with Digital Fabrication Processes

Michael Rivera, Carnegie Mellon University 

1:00-2:00 pm, Wednesday, 31 March 2021

via WebEx


Low-cost digital fabrication technology, and in particular 3D printing, is ushering in a new wave of personal computing. The technology promises that users will be able to design, customize and create any object to fit their needs. While the objects that we interact with daily are generally made of many types of materials—they may be hard, soft, conductive, etc.—current digital fabrication machines have largely been limited to producing rigid and passive objects. In this talk, I will present my research on developing digital fabrication processes that incorporate new materials such as textiles and hydrogels. These processes include novel 3D printer designs, software tools, and human-in-the-loop fabrication techniques. With these processes, new materials can be controlled, customized, and imbued with computational capabilities—at design time and after fabrication—for creating personalized and interactive objects. I will conclude this talk with my vision for enabling anyone to create with digital fabrication technology and its impact beyond the individual.


Michael Rivera is a Ph.D. Candidate at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University where he is advised by Scott Hudson. He works at the intersection of human-computer interaction, digital fabrication, and materials science. He has published papers on novel digital fabrication processes and interactive systems at top-tier HCI venues, including ACM CHI, UIST, DIS, and IMWUT. His work has been recognized with a Google – CMD-IT Dissertation Fellowship, an Adobe Research Fellowship Honorable Mention, and a Xerox Technical Minority Scholarship. Before Carnegie Mellon, he completed a M.S.E in Computer Graphics and Game Technology and a B.S.E in Digital Media Design at the University of Pennsylvania. He has also worked at the Immersive Experiences Lab of HP Labs, and as a software engineer at Facebook and LinkedIn.

talk: Forward & Inverse Causal Inference in a Tensor Framework, 1-2 pm ET, 3/29


Forward and Inverse Causal Inference in a Tensor Framework


M. Alex O. Vasilescu
Institute for Pure and Applied Mathematics, UCLA

1-2:00 pm Monday, March 29, 2021
via WebEx

Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing trustworthy artificial intelligence and retaining public trust.  Causal explanations are germane to the “right to an explanation” statute, i.e., to data-driven decisions, such as those that rely on images.  Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions consistent with Donald Rubin’s quantitative definition of causality, where “A causes B” means “the effect of A is B”, a measurable and experimentally repeatable quantity. Computer graphics may be viewed as addressing questions analogous to forward causal inference, which addresses the “what if” question and estimates a change in effects given a delta change in a causal factor. Computer vision may be viewed as addressing questions analogous to inverse causal inference, which addresses the “why” question, defined here as the estimation of causes given a forward causal model and a set of observations that constrain the solution set.  Tensor algebra is a suitable and transparent framework for modeling the mechanism that generates observed data.  Tensor-based data analysis, also known in the literature as structural equation modeling with multimode latent variables, has been employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics since the 1960s.  More recently, tensor factor analysis has been successfully employed to represent cause-and-effect in computer vision and computer graphics, and for prediction and dimensionality reduction in machine learning tasks.


M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto. She is currently a senior fellow at UCLA’s Institute for Pure and Applied Mathematics (IPAM), and has held research scientist positions at the MIT Media Lab (2005–07) and at New York University’s Courant Institute of Mathematical Sciences (2001–05).  Vasilescu introduced the tensor paradigm for computer vision, computer graphics, and machine learning. She addressed causal inferencing questions by framing computer graphics and computer vision as multilinear problems. Causal inferencing in a tensor framework facilitates the analysis, recognition, synthesis, and interpretability of data. The development of the tensor framework has been spearheaded with premier papers, such as Human Motion Signatures (2001), TensorFaces (2002), Multilinear Independent Component Analysis (2005), TensorTextures (2004), and Multilinear Projection for Recognition (2007, 2011). Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense’s Combating Terrorism Support Program, the Intelligence Advanced Research Projects Activity (IARPA), and NSF. Her work was featured on the cover of Computer World and in articles in the New York Times, Washington Times, etc. MIT’s Technology Review Magazine named her to their TR100 list of honorees, and the National Academy of Sciences co-awarded her a Keck Futures Initiative grant.

talk: Transparent Dishonesty: Front-Running Attacks on Blockchain, 12-1 pm ET 3/26


The UMBC Cyber Defense Lab presents

Transparent Dishonesty: Front-Running Attacks on Blockchain


Professor Jeremy Clark
Concordia Institute for Information Systems Engineering
Concordia University, Montreal, Canada


12–1 pm ET Friday, March 26, 2021
online via WebEx


We consider front-running to be a course of action where an entity benefits from prior access to privileged market information about upcoming transactions and trades. Front-running has been an issue in financial instrument markets since the 1970s. With the advent of blockchain technology, front-running has resurfaced in new forms we explore here, instigated by blockchain’s decentralized and transparent nature. I will discuss our “systemization of knowledge” paper, which draws from a scattered body of knowledge and instances of front-running across the top 25 most active decentralized applications (DApps) deployed on the Ethereum blockchain. Additionally, we carry out a detailed analysis of the Status.im initial coin offering (ICO) and show evidence of abnormal miner behavior indicative of front-running token purchases. Finally, we map the proposed solutions to front-running into useful categories.
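One common blockchain form of front-running can be sketched in a few lines. This toy simulation is illustrative only (it is not from the paper, and the fee numbers are made up): because pending transactions are publicly visible in the mempool and miners typically order them by fee, an adversary can copy a victim's profitable transaction and outbid its fee so that the copy executes first.

```python
# Toy mempool: pending transactions are publicly visible before mining.
mempool = [
    {"sender": "victim", "action": "claim_reward", "fee": 10},
]

def front_run(mempool):
    """The adversary observes a pending transaction, copies its
    profitable action, and attaches a higher fee to outbid it."""
    victim_tx = mempool[0]
    mempool.append({
        "sender": "adversary",
        "action": victim_tx["action"],
        "fee": victim_tx["fee"] + 1,
    })

def mine_block(mempool):
    """Miners commonly order transactions by fee, highest first,
    so the higher-fee copy is executed before the original."""
    return sorted(mempool, key=lambda tx: tx["fee"], reverse=True)

front_run(mempool)
block = mine_block(mempool)
# block[0] is the adversary's copy; the victim's transaction runs second.
```

Real attacks are more varied (the paper's taxonomy covers displacement, insertion, and suppression), but fee-based transaction ordering is the common mechanism this sketch isolates.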


Jeremy Clark is an associate professor at the Concordia Institute for Information Systems Engineering. At Concordia, he holds the NSERC/Raymond Chabot Grant Thornton/Catallaxy Industrial Research Chair in Blockchain Technologies. He earned his Ph.D. from the University of Waterloo, where his gold medal dissertation was on designing and deploying secure voting systems including Scantegrity—the first cryptographically verifiable system used in a public sector election. He wrote one of the earliest academic papers on Bitcoin, completed several research projects in the area, and contributed to the first textbook. Beyond research, he has worked with several municipalities on voting technology and testified to both the Canadian Senate and House finance committees on Bitcoin. email:


Host: Alan T. Sherman, Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays. All meetings are open to the public. Upcoming CDL Meetings: April 9, (UMBC), MeetingMayhem: A network adversarial thinking game; April 23, Peter Peterson (University of Minnesota Duluth), Adversarial thinking;
May 7, Farid Javani (UMBC), Anonymization by oblivious transfer.

talk: Machine Learning: New Methodology for Physical & Social Sciences, 1pm ET 3/24

24-hour LIDAR backscatter profiles and PBLH points generated by an image-based machine learning system

The Infusion of Machine Learning as a New Methodology for the Physical and Social Sciences

Dr. Jennifer Sleeman
CSEE, UMBC

1:00-2:00 pm ET, Wednesday, March 24
Online via WebEx


Machine learning has made improvements in many areas of computing. Recently, attention has been given to infusing social science methodology with machine learning. In addition, the physical sciences have begun to embrace machine learning to augment their physical parameterizations and to discover new features in their computations. I will describe my work that relates to these new emerging areas of research. I will first describe our machine learning research efforts related to understanding the changing role of climate and its effects on society. I will describe how this methodology was also applied to understanding cyber-related exploits. As part of this work, I developed an expertise in generative modeling, which led to a patent in generative and translation-based methods applied to imagery. These ideas were fundamental to a contribution in machine learning using quantum annealing. Quantum computing holds promise for deep learning to reach model convergence faster than classical computers. I will describe work related to developing a new hybrid method that overcame qubit limitations for image generation.

In addition, I will describe my current work related to machine learning for the physical sciences. As part of a multi-disciplinary team from UMBC and other universities, my current work explores ways to augment and replace existing physical parameterizations with neural-network-based models. I have led a research effort to calculate the planetary boundary layer height (PBLH) from ceilometer-based backscatter profiles and satellite-borne lidar instruments. This work addresses the largest uncertainty in climate change, namely the role of aerosols (dust, carbon, sulfates, sea salt, etc.). We employ a novel method in which a deep segmentation neural network uses near-time continuous profiles, forming an image, to determine boundary layer heights. This method overcomes limitations of wavelet approaches, which are unable to identify the PBLH under certain conditions. I will also give a preview of two efforts using Long Short-Term Memory (LSTM) neural networks to learn PBLH changes over time. These research efforts result from collaborations with two students in the UMBC CSEE department and are being published and presented at the AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences.


Dr. Jennifer Sleeman is a Research Assistant Professor in Computer Science at the University of Maryland, Baltimore County (UMBC). Her research interests include generative models, natural language processing, semantic representation, image generation, and deep learning. Dr. Sleeman received the prestigious recognition of being a 2019 EECS Rising Star. She was also recognized in 2017 as one of the best Data Scientists in the Washington, DC region by DCFemTech. She defended her Ph.D. thesis, Dynamic Data Assimilation for Topic Modeling (DDATM), in 2017 under Tim Finin and Milton Halem. Her thesis-related work was awarded a Microsoft “AI for Earth” resource grant in 2017 and 2018 and also won the best paper award in the Semantic Web for Social Good Workshop at the International Semantic Web Conference in 2018. She was an invited guest panelist at the AI for Social Good AAAI Fall Symposium in 2019 and an invited keynote speaker at the Sixth IEEE International Conference on Data Science and Engineering (ICDSE 2020), where she presented her ideas related to AI for Social Good and Science. She is an active research scientist in generative deep learning methods, for which she holds a patent. She has over 12 years of machine learning experience and over 22 years of software engineering experience, in both academic and government/industry settings. She is currently funded by NASA and NOAA (PI). She also teaches Introduction to Artificial Intelligence at UMBC and currently mentors two Master’s students.

talk: (Don’t) Mind the Gap: Bridging the Worlds of People and IoT Devices, 1-2 ET 3/23

TIPPERS is an IoT data management middleware system developed at UCI that manages IoT smart spaces by collecting sensor data, inferring semantically meaningful information, and providing developers with data for intelligent applications.


(Don’t) Mind the Gap: Bridging the Worlds of People and IoT Devices


Dr. Roberto Yus
University of California, Irvine

1:00-2:00 pm ET, Tuesday, 23 March 2021
online via WebEx


The Internet of Things (IoT) has the potential to improve our lives through different services given the diversity of smart devices and their capabilities. For example, the IoT can empower services to make the re-opening of business during the current pandemic safer by monitoring adherence to regulations. But the large amounts of highly heterogeneous data captured by IoT devices typically require further processing to become useful information. The challenge is thus for IoT systems to determine which sensor data has to be captured/stored/processed/shared to, for instance, determine the occupancy of a specific office building or the spaces in which a potential exposure took place. This becomes even more challenging when IoT systems have to take into account the privacy preferences of individuals, such as the need to prevent sharing data about their daily patterns or habits.

In this talk, I will discuss my efforts to help IoT systems bridge the gap between the world of IoT devices and the world in which people act. First, I will introduce a model to represent knowledge about sensors/actuators, people, spaces, events, and their relationships. Based on the model, I will explain an algorithmic solution to translate user requests and privacy preferences defined in a high-level, more semantically meaningful way into operations on IoT devices and their captured data. Second, I will talk about the enforcement of privacy preferences in the context of the IoT. Finally, I will overview my experience building and deploying an IoT data management system, TIPPERS, which has been deployed at UC Irvine and on two US Navy vessels and is soon to be deployed on other campuses. I will conclude the talk by discussing exciting future work towards supporting the next generation of ubiquitous IoT data management systems and technologies that autonomously, transparently, and at scale balance the trade-off between providing users with high utility and respecting people’s privacy requirements.


Roberto Yus is a Postdoctoral Researcher in the Computer Science department at the University of California, Irvine working with Prof. Sharad Mehrotra. Before that, he spent a year as a visiting researcher at the University of Maryland, Baltimore County working with Prof. Anupam Joshi and Prof. Tim Finin. He obtained his Ph.D. in Computer Science from the University of Zaragoza, Spain, funded through a 4-year fellowship from the Spanish Ministry of Science and Innovation. His research interests are in the fields of data management, knowledge representation, privacy, and the Internet of Things (IoT). His research focuses on the design of semantic data management solutions to empower IoT systems to understand user information requirements and user privacy preferences and adapt their operations taking those into account. Roberto’s research has been published in top-tier conferences and journals such as VLDB and the Journal of Web Semantics. He is part of the editorial board of the “Sensors” and “Frontiers in Big Data” journals and has served as part of the organizing and program committee of several conferences and workshops in addition to serving as an external reviewer for multiple conferences and journals.

talk: Towards Contextual Security of AI-enabled platforms, 1-2 pm ET 3/22


Towards Contextual Security of AI-enabled platforms

Dr. Nidhi Rastogi
Rensselaer Polytechnic Institute

1-2:00pm ET, Monday, 22 March 2021

via WebEx

The explosive growth of Internet-connected and AI-enabled devices and data produced by them has introduced significant threats. For example, malware intrusions (SolarWinds) have become perilous and extremely hard to discover, while data breaches continue to compromise user privacy (Zoom credentials exposed) and endanger personally identifiable information. My research takes a holistic approach towards systems and platforms to address security-related concerns using contextual and explainable models. 

In this talk, I will present ongoing work that analyzes and improves the cybersecurity posture of Internet-connected systems and devices using automated, trustworthy, and contextual AI models. Specifically, my research in malware threat intelligence gathers diverse information from varied datasets – system and network logs, source code, and text. In [1], an open-source ontology (MALOnt) contextualizes threat intelligence by aggregating malware-related information into classes and relations. TINKER [2, 3] – the first open-source malware knowledge graph – instantiates MALOnt classes and enables information extraction, reasoning, analysis, detection, classification, and cyber threat attribution. At present, the research is addressing the trustworthiness of information sources and extractors.

1. Rastogi, N., Dutta, S., Zaki, M. J., Gittens, A., & Aggarwal, C. (2020). MALOnt: An ontology for malware threat intelligence. In KDD’20 International Workshop on Deployable Machine Learning for Security Defense. Springer, Cham.

2. Rastogi, N., Dutta, S., Christian, R., Gridley, J., Zaki, M. J., Gittens, A., & Aggarwal, C. (2021). Knowledge graph generation and completion for contextual malware threat intelligence. In USENIX Security’21, accepted.

3. Yee, D., Dutta, S., Rastogi, N., Gu, C., & Ma, Q. (2021). TINKER: Knowledge graph for threat intelligence. In ACL-IJCNLP’21, under review.
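The knowledge-graph idea behind MALOnt and TINKER can be pictured as subject-relation-object triples over malware entities. The sketch below is only an illustration of that representation; the entity names, relation names, and the "APT-X" actor are hypothetical and are not the actual MALOnt ontology classes.

```python
# A tiny malware knowledge graph as subject-relation-object triples.
# All entity and relation names here are hypothetical illustrations.
triples = {
    ("SolarWinds-backdoor", "instanceOf", "Malware"),
    ("SolarWinds-backdoor", "usesTechnique", "supply-chain-compromise"),
    ("SolarWinds-backdoor", "targets", "Orion-platform"),
    ("APT-X", "deploys", "SolarWinds-backdoor"),
}

def query(triples, relation):
    """Return all (subject, object) pairs connected by a relation."""
    return {(s, o) for (s, r, o) in triples if r == relation}

# Attribution-style question: which actor deploys which malware?
attribution = query(triples, "deploys")
```

Once threat reports are extracted into this form, tasks like attribution and classification reduce to queries and reasoning over the graph, which is what makes the trustworthiness of the extractors feeding it so important.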


Dr. Nidhi Rastogi is a Research Scientist at Rensselaer Polytechnic Institute. Her research is at the intersection of cybersecurity, artificial intelligence, large-scale networks, graph analytics, and data privacy. She has papers accepted at top venues such as USENIX, TrustCom, ISWC, the Wireless Telecommunication Symposium, and the Journal of Information Policy. For the past two years, Dr. Rastogi has been the lead PI for three cybersecurity and privacy research projects and a contributor to one healthcare AI project. For her contributions to cybersecurity and encouraging women in STEM, Dr. Rastogi was recognized in 2020 as an International Woman in Cybersecurity by the Cyber Risk Research Institute. She was a speaker at the SANS cybersecurity summit and the Grace Hopper Conference. Dr. Rastogi is co-chair of the DYNAMICS workshop (2020–) and has served as a committee member for DYNAMICS’19 and IEEE S&P’16 (student PC), an invited reviewer for IEEE Transactions on Information Forensics and Security (2018, 2019), a FADEx laureate for the 1st French-American Program on Cyber-Physical Systems’16, a Board Member (N2Women, 2018–20), and a Feature Editor for ACM XRDS Magazine (2015–17). Before her Ph.D. from RPI, Dr. Rastogi worked in industry on heterogeneous wireless networks (cellular, 802.1x, 802.11) and network security through engineering and research positions at Verizon, GE Global Research, and GE Power. She has interned at IBM Zurich, BBN Raytheon, GE GRC, and Yahoo, experiences that give her a broad perspective on applied industrial research and engineering.
