talk: Using CPSA to Analyze Force-Latency Protocols, 12-1 4/19

UMBC Cyber Defense Lab

Using CPSA to Analyze Force-Latency Protocols

Dr. Edward Zieglar, National Security Agency

12-1pm Friday, 19 April 2019, ITE 227

Several cryptographic protocols have been proposed to address the Man-in-the-Middle attack without the prior exchange of keys. This talk will describe a formal analysis of one such protocol proposed by Zooko Wilcox-O’Hearn, the forced-latency defense against the chess grandmaster attack. Using the Cryptographic Protocol Shapes Analyzer (CPSA), we validate the security properties of the protocol through the novel use of CPSA’s state features to represent time. We also describe a small message space attack that highlights how assumptions made in protocol design can affect the security of a protocol in use, even for a protocol with proven security properties.

Edward Zieglar is a security researcher in the Research Directorate of the National Security Agency, where he concentrates on formal analysis and verification of cryptographic protocols and network security. He is also an adjunct professor at UMBC where he teaches courses in networking and network security. He received his master’s and doctoral degrees in computer science from UMBC.

Host: Alan T. Sherman

talk: IPv6 and its Security Issues, 5:30 Mon. 4/22

IPv6 and its Security Issues

Neal Ziring, National Security Agency

5:30-6:45 Monday 22 April 2019, Math/Psych 101

CMSC 626 Guest Lecture — all are welcome to attend

In this talk, we will introduce the basics of IPv6 and some of the security issues associated with it. We first discuss the motivation, history, and adoption of IPv6, and its current status in the global Internet. We then detail the structure of an IPv6 address, the types of addresses used, and the conceptual model for address assignment in IPv6, followed by the modes of deployment of IPv6 and how dual-stack operation works. Next, we discuss the basic model for IPv6 control protocols, ICMPv6, and how they support low-level network operations. We identify IPv6’s place in the network stack and explain how that does, and does not, affect security. Several basic threats to IPv6 devices and networks will be identified, as well as how common network security posture and hygiene can be affected by dual-stack operation. Lastly, we identify some key concepts in the secure use of IPv6, discuss the concept of NAT and its use in IPv4, and explain why IPv6 does not use it.
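Python's standard ipaddress module is a convenient way to explore the address structure and address types the talk covers. The addresses below use the 2001:db8::/32 documentation prefix, so they are illustrative, not real hosts:

```python
import ipaddress

# Parse a sample IPv6 address (2001:db8::/32 is reserved for documentation).
addr = ipaddress.IPv6Address("2001:db8:85a3::8a2e:370:7334")
print(addr.exploded)    # full 8-group form: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
print(addr.compressed)  # canonical shortened form with :: compression

# Address-type checks mirror the address categories discussed in the talk:
print(ipaddress.IPv6Address("fe80::1").is_link_local)  # link-local unicast
print(ipaddress.IPv6Address("ff02::1").is_multicast)   # all-nodes multicast

# A /64 is the conventional split between network prefix and interface ID.
net = ipaddress.IPv6Network("2001:db8:85a3::/64")
print(addr in net)
```

The same module handles dual-stack code paths cleanly, since `ip_address()` returns either an `IPv4Address` or an `IPv6Address` depending on the input.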

Mr. Neal Ziring is the Technical Director for the National Security Agency’s Capabilities Directorate, serving as a technical advisor to the Capabilities Director, Deputy Director, and other senior leadership. Mr. Ziring is responsible for setting the technical direction across many parts of the capabilities mission space, including in cyber-security. Mr. Ziring tracks technical activities, promotes technical health of the staff, and acts as liaison to various industry, intelligence, academic, and government partners. Prior to the formation of the Capabilities Directorate, Mr. Ziring served five years as Technical Director of the Information Assurance Directorate. His personal expertise areas include security automation, IPv6, cloud computing, cross-domain information exchange, data access control, and cyber defense. Prior to coming to NSA in 1988, Neal worked at AT&T Bell Labs. He has BS degrees in Computer Science and Electrical Engineering, and an MS degree in Computer Science, all from Washington University in St. Louis.

talk: Why are memory-corruption bugs still a thing?, 10:30am Mon 4/8, ITE325

Why are memory-corruption bugs still a thing?

The challenges of securing software at an assembly level

Doug Britton
CTO, RunSafe Security Inc.

10:30-11:30 Monday, 8 April 2019, ITE346

Methods to chip away at the danger of memory-corruption bugs have been available for some time. Why, then, has the going price of memory-corruption-based exploits not spiked? If these methods were having a broad-based effect in mitigating exploit vectors, there would be a reduction in the supply of exploits, causing an increase in prices. There would also be a reduction in the pool of people qualified to develop zero-days, allowing them to push prices up. The data suggest that prices have remained generally stable and that attackers are able to move with impunity. What, then, are the challenges to large-scale adoption of memory-corruption mitigation methods?


Doug Britton serves as Chief Technology Officer and Director of RunSafe Security, Inc. Mr. Britton co-founded Kaprica Security, Inc., in 2011 and served as its Chief Executive Officer. Prior to his leadership role in Kaprica, Mr. Britton was a cyber-security focused research and development manager at Lockheed Martin. He has an MBA and MS from the University of Maryland and a BS in Computer Science from the University of Illinois.

talk: Learning to Ground Instructions to Plans, 2:30 Thr 3/21, ITE346

Learning to Ground Natural Language Instructions to Plans

Nakul Gopalan, Brown University

2:30-3:30pm Thursday, 21 March 2019, ITE 346, UMBC

In order to easily and efficiently collaborate with humans, robots must learn to complete tasks specified using natural language. Natural language provides an intuitive interface for a layperson to interact with a robot without the person needing to program a robot, which might require expertise. Natural language instructions can easily specify goal conditions or provide guidance and constraints required to complete a task. Given a natural language command, a robot needs to ground the instruction to a plan that can be executed in the environment. This grounding can be challenging to perform, especially when we expect robots to generalize to novel natural language descriptions and novel task specifications while providing as little prior information as possible. In this talk, I will present a model for grounding instructions to plans. Furthermore, I will present two strategies under this model for language grounding and compare their effectiveness. We will explore the use of approaches using deep learning, semantic parsing, predicate logic and linear temporal logic for task grounding and execution during the talk.

Nakul Gopalan is a graduate student in the H2R lab at Brown University. His interests are in the problems of language grounding for robotics, and abstractions within reinforcement learning and planning. He has an MSc. in Computer Science from Brown University (2015) and an MSc. in Information and Communication Engineering from T.U. Darmstadt (2013) in Germany. He completed a Bachelor of Engineering from R.V. College of Engineering in Bangalore, India (2008). His team recently won the Brown-Hyundai Visionary Challenge for their proposal to use Mixed Reality and Social Feedback for Human-Robot collaboration.

Host: Prof. Cynthia Matuszek (cmat at umbc.edu)

talk: Algorithms for Weakly Supervised Denoising of EEG Data, 6:30pm Feb 28

The February meeting of the Data Works MD Meetup features a talk by UMBC Professor Tim Oates on Two Algorithms for Weakly Supervised Denoising of EEG Data, 6:30-9pm Thursday, February 28, 2019 at UMBC’s South Campus. Join the meetup and register to attend this free talk and network with members of the Maryland data science community. The talk abstract and Dr. Oates’s biosketch are given below.

Electroencephalogram (EEG) data is used for a variety of purposes, including brain-computer interfaces, disease diagnosis, and determining cognitive states. Yet EEG signals are susceptible to noise from many sources, such as muscle and eye movements, and motion of electrodes and cables. Traditional approaches to this problem involve supervised training to identify signal components corresponding to noise so that they can be removed. However, these approaches are artifact specific. In this talk, I will discuss two algorithms for solving this problem that use a weak supervisory signal to indicate that some noise is occurring, but not what the source of the noise is or how it is manifested in the EEG signal. In the first algorithm, the EEG data is decomposed into independent components using Independent Components Analysis, and these components form bags that are labeled and classified by a multi-instance learning algorithm that can identify the noise components for removal to reconstruct a clean EEG signal. The second algorithm is a novel Generative Adversarial Network (GAN) formulation. I’ll present empirical results on EEG data gathered by the Army Research Lab, and discuss pros and cons of both algorithms.
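The first algorithm's pipeline (decompose the recording into components, use a weak "noise occurred here" label to flag a component, remove it, reconstruct) can be sketched as a toy in NumPy. The sketch cheats in one place: it unmixes with a known mixing matrix where a real pipeline would estimate the components with ICA, and the signals, mixing matrix, and noise window are all invented for illustration:

```python
import numpy as np

t = np.linspace(0, 1, 500)

# Toy "EEG" sources: a clean sinusoid and an eye-blink-like artifact burst.
clean = np.sin(2 * np.pi * 10 * t)
artifact = np.where((t > 0.4) & (t < 0.5), 5.0, 0.0)
sources = np.vstack([clean, artifact])            # shape (2, 500)

# Two electrodes each record a different linear mix of the sources.
A = np.array([[1.0, 0.8],
              [0.6, 1.0]])                        # mixing matrix
X = A @ sources                                   # observed channels

# Stand-in for ICA: unmix with the (here, known) inverse mixing matrix.
components = np.linalg.inv(A) @ X

# Weak supervision: we only know that noise occurred in t in (0.4, 0.5).
# Flag the component with the most energy inside that window as the artifact.
window = (t > 0.4) & (t < 0.5)
noise_idx = np.argmax((components[:, window] ** 2).sum(axis=1))

# Remove the flagged component and project back to channel space.
components[noise_idx] = 0.0
X_clean = A @ components

# Channel 0 should now match its artifact-free version, A[0,0] * clean.
err = np.max(np.abs(X_clean[0] - A[0, 0] * clean))
print(err < 1e-6)
```

The multi-instance step in the actual algorithm is richer than this energy heuristic: bags of components get only a bag-level label, and the learner infers which instances carry the noise.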

Dr. Tim Oates is an Oros Family Professor in the Computer Science Department at the University of Maryland, Baltimore County. His Ph.D. from the University of Massachusetts Amherst was in the areas of artificial intelligence and machine learning with a focus on situated language learning. After working as a postdoctoral researcher in the MIT Artificial Intelligence Lab, he joined UMBC where he has taught extensively in core areas of Computer Science, including data structures, discrete math, compiler design, artificial intelligence, machine learning, and robotics. Dr. Oates has published more than 150 peer-reviewed papers in areas such as time series analysis, natural language processing, relational learning, and social media analysis. He has developed systems to determine operating room state from video streams, predict the need for blood transfusions and emergency surgery for traumatic brain injury patients based on vital signs data, detect seizures from scalp EEG, and find story chains (causal connections) joining news articles, among many others. Recently Dr. Oates served as the Chief Scientist of a Virginia-based startup where he developed architectures and algorithms for managing contact data, including entity linking, fuzzy record matching, and connected components on billion node graphs stored in a columnar database. He has extensive knowledge of machine learning algorithms, implementations, and usage.

talk: Does Wireless and Mobile Networking Research Still Matter? 12pm Wed 2/27, ITE325

Does Wireless and Mobile Networking Research Still Matter?

Dr. Dmitri Perkins, National Science Foundation

12:00pm Wed. Feb. 27, 2019, ITE325, UMBC

The miniaturization of radio and communication technologies has led to the widespread proliferation of wireless-enabled devices and to an explosion of mobile applications and services. Without question, wireless networking has become an enabling and critical component in practically every business sector. Wireless technologies and terms such as WiFi, Bluetooth, and broadband cellular are now embedded in our world and have become a part of society’s regular vocabulary. Given this ever-increasing success, one might be tempted to ask whether any core research challenges remain in the wireless and mobile networking domain.

In this talk, I will present the case that the answer is most certainly “yes” and that the promise of a truly ubiquitous Wireless Internet of Everything, capable of seamlessly interconnecting billions of devices, humans, intelligent systems, information sources, and enabling transformative applications (e.g., remote healthcare monitoring) still faces a plethora of inter-related challenges. These include, for example, exponential growth in mobile traffic, dynamic spectrum allocation, real-time management of network resources, design of intelligent radio technology, energy-efficient protocol designs, and network security/trust/privacy. I will highlight our most recent work on the topic of opportunistic wireless spectrum access, which focused on practical and implementable radio spectrum management frameworks and related spectrum sensing and sharing protocols, using today’s front-end communication technology. I will also discuss my vision for developing a sustainable and nationally recognized wireless networking research program, which includes emerging areas such as networked cyber-physical systems and mobile IoT.

Dr. Dmitri Perkins is currently a Program Director at the U.S. National Science Foundation (NSF), where he leads the Industry-University Cooperative Research Centers (IUCRC) Program for the Directorate of Computer & Information Science & Engineering (CISE). In this role, Dr. Perkins provides oversight of 25 multi-university industry-focused research centers, spanning all areas of CISE research and comprising over 75 U.S. academic institutions, 5 international sites, and 225+ industry partners. Prior to joining the NSF in 2015, Dr. Perkins was the Hardy Edmiston Endowed Professor of Computer Science and Engineering at the University of Louisiana at Lafayette, where he was the founding Director of the Wireless Systems and Performance Engineering Research (WiSPER) Laboratory. His core research interests include wireless and mobile communications, networking, and computing, with an emphasis on cognitive and adaptive protocols, formal design of experiments, performance engineering, dynamic resource and spectrum management, and security challenges. His research work spans multiple networking paradigms, including sensor/actuator networks, wireless broadband networks, multi-hop wireless networks, cognitive radio networks, and large-scale heterogeneous wireless systems. Dr. Perkins has published over 45 peer-reviewed journal articles and conference papers and is also the co-author of the book, Cognitive Radio Networks: From Theory to Practice. He received the NSF CAREER award in 2005 and was the recipient of the Outstanding Professor Award within the College of Sciences at the University of Louisiana in 2012. In 2013 and 2014, he was an ONR visiting research fellow at the Naval Research Laboratory (NRL), conducting research on dynamic spectrum awareness in heterogeneous wireless networks. Dr. Perkins has held leadership roles at the university and national levels. 
He was elected to serve as the Chair of the University Graduate Council and served a two-year term as Associate Dean of the Ray P. Authement College of Sciences at the University of Louisiana in 2012-2013. He has served on the technical program committee of numerous IEEE and ACM international conferences and served on advisory committees for Computer and Networks Systems (CNS) Division of the NSF. He is currently an associate editor of IEEE Transactions on Mobile Computing. Dr. Perkins received the Ph.D. degree in computer science and engineering from Michigan State University in 2002 and the B.S. degree in computer science from Tuskegee University in 1995.

talk and demo: Exploiting IoT Vulnerabilities, 11:45-1:00pm Mon 2/18


Exploiting IoT Vulnerabilities

Dr. Yatish Joshi, Senior Engineer, Cisco Systems

11:45am-1:00pm Monday, 18 February 2019, ITE 325-B

The past decade has seen explosive growth in the use and deployment of IoT (Internet of Things) devices. According to Gartner, there will be about 20.8 billion IoT devices in use by 2020. These devices are seeing widespread adoption as they are cheap, easy to use, and require little to no maintenance. In most cases, setup simply requires using a web or phone app to configure Wi-Fi credentials. Digital home assistants, security cameras, smart locks, home appliances, smart switches, toys, vacuum cleaners, thermostats, and leakage sensors are examples of IoT devices that are widely used and deployed in home and enterprise environments.

The threat landscape is constantly evolving, and threat actors are always on the prowl for new vulnerabilities they can exploit. With traditional attack methods yielding fewer exploits due to the increased focus on security testing, frequent patches, and increased user awareness, threat actors have turned their attention to IoT devices and are exploiting inherent vulnerabilities in these devices. These vulnerabilities, together with the always-on nature and autonomous operation of IoT devices, allow attackers to spy on users, spoof data, or leverage the devices as botnet infrastructure to launch devastating attacks on third parties. Mirai, a well-known IoT malware, utilized hundreds of thousands of enslaved IoT devices to launch DDoS attacks on Dyn, affecting access to Netflix, Twitter, GitHub, and many other websites. With the release of the Mirai source code, numerous variants of the malware are infecting IoT devices across the world and using them to carry out attacks.

These attacks are made possible because the devices are manufactured without security in mind. In this talk, I will demonstrate how one can hack a widely available off-the-shelf IP camera and router by exploiting the vulnerabilities present in these devices to get on the network, steal personal data, spy on a user, and disrupt operation. We will also look at what can be done to mitigate the dangers posed by IoT devices.

So attend hack & defend!

Dr. Yatish Joshi is a software engineer in the Firepower division at Cisco Systems where he works on developing new features for Cisco’s security offerings. Yatish has a PhD in Computer Engineering from UMBC. Prior to Cisco, Yatish worked as a lecturer at UMBC, and was a senior software engineer developing TV software at Samsung Electronics. When not working, he enjoys reading spy thrillers and fantasy novels.

talk: OMI, Invisible Technology that will Revolutionize Supercomputing and AI; 3pm Thr Feb 14, ITE325


Distinguished Lecture Series

OMI: The Invisible Technology that will Revolutionize Supercomputing and AI

Prof. Harm Peter Hofstee
Delft University of Technology
Distinguished Research Staff, IBM Austin Research Laboratory

3:00pm Thursday, 14 February 2019, ITE325, UMBC

In this talk, we present some major trends in compute, memory/storage, and networking, and for each we discuss how the OpenCAPI Memory Interface (OMI) and related interface technologies are set to transform how we build, program, and think about our computer systems. For the first of these trends, OMI allows us to compensate for the reduced growth in processor performance (per dollar) and performance per watt: accelerators sharing memory and other resources over NVLink or OpenCAPI with conventional IBM POWER cores are driving performance in the world’s largest supercomputers and in IBM’s systems targeting AI and other workloads. The second trend addresses the reduced improvement in memory cost and capacity: OMI allows us to use technologies other than DRAM as memory, and because many of these technologies are nonvolatile, the line between memory and storage is becoming blurred. Third, OpenCAPI-based networking leverages the rapid improvements in cost per Gb/s and allows us to contemplate systems that extend memory beyond the node using commodity infrastructure.

Harm Peter Hofstee is a Dutch physicist and computer scientist who currently is a distinguished research staff member at the IBM Austin Research Laboratory, USA, and a part-time Professor in Big Data Systems at Delft University of Technology, Netherlands. Hofstee is best known for his contributions to heterogeneous computing as the chief architect of the Synergistic Processor Elements in the Cell Broadband Engine processor used in the Sony PlayStation 3 and in the first supercomputer to reach sustained petaflop operation. His early research work on coherently attached reconfigurable acceleration on POWER7 paved the way for the new coherent attach processor interface on POWER8. Hofstee is an IBM Master Inventor with more than 100 issued patents and a member of the IBM Academy of Technology. Hofstee was born in Groningen and obtained his master’s degree in theoretical physics from the University of Groningen in 1988. He continued his studies at the California Institute of Technology, where he wrote a master’s thesis, Constructing Some Distributed Programs, in 1991 and obtained a Ph.D. with a thesis titled Synchronizing Processes in 1995. He joined Caltech as a lecturer for two years and then moved to IBM’s Austin, Texas, Research Laboratory, where he has held staff member, senior technical staff member, and distinguished engineer positions.

MD-AI Meetup: An AI Enabled Vision of the Future, 6-8pm 2/12, UMBC


MD AI Meetup: An AI Enabled Vision of the Future

The February MD AI meetup will be held at UMBC and features Kathleen Walch from Cognilytica, speaking on An AI Enabled Vision of the Future. The meetup starts at 6:00pm on Tuesday, February 12 in UC 312, UMBC with half an hour of networking time, and the program starts at 6:30 pm.

Artificial Intelligence (AI) represents an interesting paradox. On the one hand, systems that can behave with the intelligence of humans have been a lofty goal envisioned by many for millennia. On the other hand, much of what we envision applying AI to are the fundamental, day-to-day needs of enterprises, individuals, and organizations. In this way we have the conflicting demands of where we want AI research to go and its eventual desired end state combined with the practical needs of today that keep the engine of AI research funded and progressing. In this talk we combine these two conflicting desires for the future of AI into a cohesive, comprehensive, four-part vision of where these emerging technologies are taking us and the desires of individuals and enterprises.

Kathleen Walch is a serial entrepreneur, savvy marketer, AI and Machine Learning expert, and tech industry connector. She is a principal analyst, managing partner, and founder of Cognilytica, an AI research and advisory firm, and co-host of the popular AI Today podcast.

talk: Using Deep Learning in Identifying Network Intrusions, 10:30am Mon 2/11, UMBC



Using Deep Learning in Identifying Network Intrusions

Dr. Rajeev Agrawal
Information Technology Laboratory
US Army Engineer Research and Development Center

10:30-11:30 Monday, February 11, 2019, ITE325

Deep learning algorithms have been very successful in computer vision, natural language processing, and speech recognition. However, applying them in the cybersecurity domain is a big challenge due to the non-availability of ‘real’ cybersecurity data. Many researchers have used synthetic data such as the NSL-KDD or newer UNSW-NB15 network intrusion datasets; however, it is difficult to determine how the proposed methods perform on a dataset captured from an enterprise network. The DoD’s High Performance Computing Modernization Program (HPCMP) operates the Defense Research and Engineering Network (DREN), which has multiple security software and hardware tools installed across the network, and a variety of cybersecurity logs are captured using these tools. We use a TensorFlow-based framework to analyze DREN’s Bro alert data generated under the Cybersecurity Environment for Detection, Analysis and Reporting (CEDAR) project. These alerts are marked as bad or normal by cybersecurity analysts and used as ground truth. This labeled data is used to measure the performance of our approach in identifying network intrusions. We are able to achieve high accuracy by tuning the hyperparameters used in the deep learning approach. In this presentation, we will discuss the results of our approach, which harnesses the power of HPC systems to train our proposed model.
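The supervised setup described above, analyst-labeled bad/normal alerts used as ground truth for a trained classifier, can be illustrated with a minimal stand-in. This sketch uses synthetic alert features and a plain logistic-regression classifier in NumPy rather than the talk's TensorFlow model and DREN data, which are not publicly available:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for labeled alert features: "normal" and "bad" alerts
# drawn from two separated Gaussian clusters in a 4-dimensional feature space.
n = 400
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 4)),    # normal alerts
               rng.normal(2.5, 1.0, size=(n, 4))])   # bad alerts
y = np.concatenate([np.zeros(n), np.ones(n)])         # 0 = normal, 1 = bad

# Minimal logistic-regression classifier trained with gradient descent;
# a deep network would replace this linear model with stacked layers.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1                                               # learning rate
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))             # predicted P(bad)
    w -= lr * (X.T @ (p - y)) / len(y)                 # gradient step on weights
    b -= lr * np.mean(p - y)                           # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(accuracy > 0.95)
```

The learning rate and iteration count here play the role of the hyperparameters the abstract mentions tuning; in practice, evaluation would also use a held-out test split rather than the training data.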

Dr. Rajeev Agrawal joined the Cyber Engineering and Analysis Branch (CEAB) of the Information Technology Laboratory in 2016. He is the Data Science lead of the High Performance Computing (HPC) Architecture for Cyber Situational Awareness (HACSAW) project, whose goal is to analyze the cybersecurity data captured across the Defense Research and Engineering Network (DREN). He is also a member of the HPC-based deep learning project team and is exploring the applicability of deep learning in the cybersecurity domain. Dr. Agrawal received his Ph.D. in Computer Science with a minor in Engineering from Wayne State University in 2009. Prior to joining ITL, he was an Associate Professor in the Department of Computer Systems Technology at North Carolina A&T State University. Dr. Agrawal’s research interests include deep learning, cybersecurity, SCADA/ICS, machine learning, and pattern recognition. He has published more than 80 technical papers and book chapters in refereed conferences and journals in these areas. He was selected as a Data Science Fellow by the National Consortium for Data Science (NCDS) in 2014. His research has been funded by NSF, the US Army, John Deere, ACM, Red Hat, the National Consortium for Data Science, and Michigan State University.
