talk: Cognitive Computing & Visualization at IBM Research/RPI, 10am Thur 5/19, UMBC


Cognitive Computing and Visualization at IBM Research/RPI CISL

Dr. Hui Su, IBM Research

10:00-11:00am, Thursday, 19 May 2016, ITE 325b

Dr. Hui Su will talk about the Cognitive and Immersive Systems Lab, a research initiative to develop a new frontier of immersive cognitive systems that explore and advance natural, collaborative problem-solving among groups of humans and machines. The lab is a collaboration between IBM Research and Rensselaer Polytechnic Institute. Dr. Su will discuss why research in human-computer interaction is being extended to build a symbiotic relationship between human beings and smart machines, and what research will be important for building immersive cognitive systems that transform the way professionals work in the future.

Dr. Hui Su is the Director of the Cognitive and Immersive Systems Lab, a collaboration between IBM Research and Rensselaer Polytechnic Institute. He has been a technical leader and an executive at IBM Research. Most recently, he was the Director of the IBM Cambridge Research Lab in Cambridge, MA, responsible for a broad scope of global missions in IBM Research, including Cognitive User Experience, the Center for Innovation in Visual Analytics and the Center for Social Business. As a technical leader and researcher for 19 years at IBM Research, Dr. Su has become an expert in multiple areas, ranging from Human-Computer Interaction and Cloud Computing to Visual Analytics and Neural Network Algorithms for Image Recognition. As an executive, he has led research labs and research teams in the US and China. He is passionate about game-changing ideas and fundamental research, about speeding up the impact-generation process for technical innovations, and about discovering and developing new linkages between innovative research and business needs.

Host: Jian Chen ()

Rick Forno and Anupam Joshi discuss ‘cyberbombs’ in The Conversation


America is ‘dropping cyberbombs’ – but how do they work?

Richard Forno and Anupam Joshi

Recently, United States Deputy Defense Secretary Robert Work publicly confirmed that the Pentagon’s Cyber Command was “dropping cyberbombs,” taking its ongoing battle against the Islamic State group into the online world. Other American officials, including President Barack Obama, have discussed offensive cyber activities, too.

The American public has only glimpsed the country’s alleged cyberattack abilities. In 2012 The New York Times revealed the first digital weapon, the Stuxnet attack against Iran’s nuclear program. In 2013, former NSA contractor Edward Snowden released a classified presidential directive outlining America’s approach to conducting Internet-based warfare.

The terms “cyberbomb” and “cyberweapon” create a simplistic, if not sensational, frame of reference for the public. Real military and intelligence cyber activities are less dramatic but much more complex. The most basic types are off-the-shelf commercial products used by companies and security consultants to test system and network security. The most advanced are specialized proprietary systems made for exclusive – and often classified – use by the defense, intelligence and law enforcement communities.

So what exactly are these “cyberbombs” America is “dropping” in the Middle East? The country’s actual cyber capabilities are classified; we, as researchers, are limited by what has been made public. Monitoring books, reports, news events and congressional testimony is not enough to separate fact from fiction. However, we can analyze the underlying technologies and look at the global strategic considerations of those seeking to wage cyber warfare. That work allows us to offer ideas about cyber weapons and how they might be used.

Read more @ The Conversation and also on the Scientific American Web site.

talk: Predicting Demographics and Affects in Social Networks, 11am Fri 5/13, UMBC

UMBC Information Systems Department

Predicting Demographics and Affects in Social Networks

Dr. Svitlana Volkova
Pacific Northwest National Laboratory

11am Friday, 13 May 2016, ITE 459

Social media predictive analytics bring unique opportunities to study people and their behaviors in real time, at an unprecedented scale: who they are, what they like and what they think and feel. Such large-scale real-time social media predictive analytics provide a novel set of conditions for the construction of predictive models. This talk focuses on various approaches to handling this dynamic data for predicting latent user demographics, from constrained-resource batch classification, to incremental bootstrapping, to iterative learning via interactive rationale (feature) crowdsourcing. In addition, we present the relationships between a variety of perceived user properties (e.g., income, education) and opinions, emotions and interests in a social network.

Svitlana Volkova received her PhD in Computer Science from Johns Hopkins University, where she was affiliated with the Center for Language and Speech Processing and the Human Language Technology Center of Excellence. Her PhD research focused on building predictive models for sociolinguistic content analysis in social media. She built online models for streaming social media analytics, fine-grained emotion detection and multilingual sentiment analysis, and effective annotation techniques via crowdsourcing incorporated into an active learning framework. She interned at Microsoft Research in 2011, 2012 and 2014 with the Natural Language Processing and the Machine Learning and Perception teams. She was awarded the Google Anita Borg Memorial Scholarship in 2010 and the Fulbright Scholarship in 2008.

talk: Topic Modeling for Analyzing Document Collection, 11am Mon 5/16


CHMPR Lecture Series

Topic Modeling for Analyzing Document Collection

Mitsunori Ogihara
Department of Computer Science, University of Miami

11:00am Monday, 16 May 2016, ITE 325b, UMBC

Topic modeling (in particular, Latent Dirichlet Allocation) is a technique for analyzing a large collection of documents. In topic modeling we view each document as a frequency vector over a vocabulary and each topic as a probability distribution over the vocabulary. Given a desired number K of document classes, a topic modeling algorithm attempts to estimate concurrently K such distributions and, for each document, how much each of the K classes contributes to it. Mathematically, this is the problem of approximating the matrix generated by stacking the frequency vectors as the product of two non-negative matrices, where both the column dimension of the first matrix and the row dimension of the second matrix are equal to K. Topic modeling has recently been gaining popularity for analyzing large collections of documents.
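The matrix view described above can be illustrated with a toy non-negative matrix factorization: approximate the stacked document-frequency matrix V (documents x vocabulary) as W (documents x K) times H (K x vocabulary), with all entries non-negative. The sketch below uses the classic Lee-Seung multiplicative updates on a tiny hand-made corpus; it is an illustration of the factorization idea only, not the talk's actual LDA pipeline, and all names and data here are invented for the example.

```python
# Toy non-negative matrix factorization (NMF): V ~ W H with W, H >= 0.
# Illustrates the matrix-approximation view of topic modeling described
# above; real topic-modeling toolkits use more sophisticated inference.
import random

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, K, iters=300, eps=1e-9, seed=0):
    """Factor V (n x m) into W (n x K) and H (K x m), all non-negative,
    using Lee-Seung multiplicative updates for squared error."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(K)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(K)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(m)] for i in range(K)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(K)] for i in range(n)]
    return W, H

# Four "documents" over a 4-word vocabulary with two obvious topics:
# docs 0-1 use words 0-1, docs 2-3 use words 2-3.
V = [[4, 2, 0, 0],
     [8, 4, 0, 0],
     [0, 0, 3, 6],
     [0, 0, 1, 2]]
W, H = nmf(V, K=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(4) for j in range(4))
print(round(err, 4))  # squared reconstruction error, small after convergence
```

Each row of H plays the role of a topic (a non-negative weighting over the vocabulary), and each row of W gives how much each of the K topics contributes to that document.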

In this talk I will present some examples of applying topic modeling: (1) a sentiment analysis of a small collection of short patient surveys, (2) exploratory content analysis of a large collection of letters, (3) document classification based upon topics and other linguistic features, and (4) exploratory analysis of a large collection of literary works. I will describe not only the exact topic modeling steps but also the preprocessing steps for preparing the documents for topic modeling.

Mitsunori Ogihara is a Professor of Computer Science at the University of Miami, Coral Gables, Florida. There he directs the Data Mining Group in the Center for Computational Science, a university-wide organization for providing resources and consultation for large-scale computation. He has published three books and approximately 190 papers in conferences and journals. He is on the editorial board for Theory of Computing Systems and International Journal of Foundations of Computer Science. Ogihara received a Ph.D. in Information Sciences from Tokyo Institute of Technology in 1993 and was a tenure-track/tenured faculty member in the Department of Computer Science at the University of Rochester from 1994 to 2007.

talk: Human mental models and robots: Grasping and tele-presence, 11am 5/9


Human mental models and robots:
Grasping and tele-presence

Dr. Cindy Grimm, Oregon State University

11:00-12:00 Monday 9 May 2016, ITE 325b

In this talk I will cover two separate research efforts in robotics, both of which use human mental models to improve robotic functionality. Robots struggle to pick up and manipulate physical objects, yet humans do this with ease – but can’t tell you how they do it. In this research we focus on how to capture human data in such a way as to gain insight into how people structure the grasping task. Specifically, we look at the role of perceptual cues in evaluating grasps and mental classification models of grasps (i.e., all these grasps are the “same”). In the second half of the talk I will switch to discussing how human mental models of privacy, trust, and presence come into play in remote tele-presence applications (“Skype-on-a-movable-stick”).

Dr. Cindy Grimm is currently an associate professor at Oregon State University (since 2013) in the School of Mechanical, Industrial, and Manufacturing Engineering (application area robotics). Prior to that she was tenured faculty at Washington University in St. Louis in Computer Science (12 years). Her research areas range from 3D sketching to biological modeling to human-robot interaction. She approaches these problems with a combination of mathematical models and empirically-verified human-centered design (HCD). Mathematical models provide a sound, quantitative, rigorous, elegant basis for representing shape and function, and are a core part of the “language” of computation. Including a human in the loop is a key component of the application areas she works in; HCD provides the mechanism for addressing the fundamental problem of how to make mathematical computation “useful” for humans. She has worked with collaborators in fields ranging from psychology, mechanical and biological engineering, statistics, to art.

talk: Statistical Testing of Hash Bit Sequences, 11:15am Fri May 6, UMBC

The UMBC Cyber Defense Lab presents

Statistical Testing of Hash Bit Sequences

Enis Golaszewski
CSEE, UMBC

11:15am-12:30pm Friday, 6 May 2016, ITE 237

We tested bit sequences generated from the MD5 hash function using multinomial distribution and close-point spatial statistical tests for randomness. We found that bit sequences generated from truncated-round MD5 hashes fail these tests for high- and low-density input choices.

In 2000, the National Institute of Standards and Technology concluded a competition to select the Advanced Encryption Standard. One of the requirements for candidates was randomness of output bits. The techniques used to evaluate symmetric block cipher randomness have not been extensively applied to hash functions.

In this study, we adapt a subset of the techniques used to analyze the randomness of AES candidate algorithms to study the randomness of the well-known MD5 hash function. Our approach uses high-density, low-density, and chained-input methods to generate MD5 hashes. We concatenated these hash outputs and subjected them to multinomial distribution and close-point spatial tests, iterating this approach over reduced-round versions of MD5. Our presentation includes specifications for the input methods, details on the statistical tests, and analysis of the statistical results.
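As a rough illustration of the workflow described above, the sketch below generates low-density inputs (a single 1 bit in an otherwise-zero block), hashes each with full-round MD5, concatenates the output bits, and applies a simple monobit frequency check. The monobit statistic is a stand-in of my own choosing; the talk's actual multinomial distribution and close-point spatial tests, and its reduced-round MD5 variants, are more involved.

```python
# Sketch: build a concatenated bit sequence from MD5 hashes of
# low-density inputs and apply a simple monobit frequency statistic.
import hashlib
import math

def low_density_inputs(nbits=128):
    """Yield 128-bit inputs, each with exactly one bit set."""
    for i in range(nbits):
        block = bytearray(nbits // 8)
        block[i // 8] = 0x80 >> (i % 8)
        yield bytes(block)

def concatenated_hash_bits(inputs):
    """Hash each input with (full-round) MD5 and concatenate output bits."""
    bits = []
    for data in inputs:
        digest = hashlib.md5(data).digest()  # 16 bytes = 128 bits
        for byte in digest:
            bits.extend((byte >> k) & 1 for k in range(7, -1, -1))
    return bits

def monobit_statistic(bits):
    """|#ones - #zeros| / sqrt(n); stays small for random-looking bits."""
    s = sum(1 if b else -1 for b in bits)
    return abs(s) / math.sqrt(len(bits))

bits = concatenated_hash_bits(low_density_inputs())
print(len(bits), round(monobit_statistic(bits), 3))
```

Reduced-round variants cannot be produced with the standard library's `hashlib`; testing those requires an MD5 implementation whose round count is adjustable, as in the study.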

Through statistical testing of concatenated MD5 hashes, we derive results that demonstrate a link between the performance of the concatenated hash bit sequences in our statistical testing and the number of hash rounds applied to the high-density and low-density input methods. Randomness is a desirable property for cryptographic hash functions. We present a new approach that facilitates the analysis and interpretation of hash functions for statistical randomness.

About the Speaker. Enis Golaszewski is a prospective PhD student in CS at UMBC, working with Dr. Alan T. Sherman. His research interests include the security of software-defined networks. He graduated from UMBC in CS in December 2015 and was a student in the fall 2015 INSuRE class. Email: <>

Host: Alan T. Sherman,

UMBC students demonstrate smartphone applications, 12:30-2:30 Tue 5/10


Student groups drawn from two UMBC classes will demonstrate twelve mobile applications they developed as projects from 12:30 to 2:30 on Tuesday, 10 May 2016 in the UC Ballroom. Pizza will be provided.

The projects are a result of an innovative collaboration between a computer science class led by Professor Nilanjan Banerjee (CMSC 678 Mobile Computing) and a visual arts class led by Professor Viviana Cordova (ART 434 Advanced Interface Design).

The two faculty were awarded a grant from the fall 2015 round of the Hrabowski Fund for Innovation competition to develop and evaluate the collaboration between the two courses. The classes held regular joint sessions and each project group comprised students from both Engineering and Visual Arts.

In ART 434, Prof. Cordova concentrated on the visual experience of the interface in mobile and desktop applications, while in CMSC 628, Prof. Banerjee provided the tools necessary to design and implement mobile applications. Specific mobile development topics, such as user interface design and implementation, accessing and displaying sensor and location data, and mobile visual design, were co-taught by both instructors. Teams comprising Engineering and Visual Arts students designed and built mobile applications for local clients in the Baltimore and Washington, DC area.

A poster describing the event has brief descriptions of the twelve class projects.

NSF CyberCorps: Scholarship For Service, May 15 deadline

UMBC undergraduate and graduate students interested in cybersecurity can apply for a Federal CyberCorps: Scholarship For Service scholarship by 15 May 2016. This application deadline will be the last one under the current NSF grant, which ends August 2017.

The Federal CyberCorps: Scholarship For Service program is designed to increase and strengthen the cadre of federal information assurance professionals that protect the government’s critical information infrastructure. This program provides scholarships that may fully fund the typical costs incurred by full-time students while attending a participating institution, including tuition and education-related fees. Participants also receive stipends of $22,500 for undergraduate students and $34,000 for graduate students.

Applicants must be full-time UMBC students: within two years of graduation with a BS or MS degree; within three years of graduation with both the BS and MS degrees; participating in a combined BS/MS degree program; or research-based doctoral students within three years of graduation in an academic program focused on cybersecurity or information assurance. Recipients must also be US citizens or permanent residents; meet criteria for Federal employment; and be able to obtain a security clearance, if required.

For more information and instructions on how to apply see the UMBC CISA site (use old application form, and be sure to include the cover sheet).

tutorial: Design, Analysis and Security of Automotive Networks, 2pm 4/29

Design, Analysis and Security of Automotive Networks

Sekar Kulandaivel
University of Maryland, Baltimore County

2:00-3:30pm Friday, 29 April 2016, ITE 325b

As more electronic and wireless technologies permeate modern vehicles, understanding the design of an embedded automotive network becomes necessary to protect drivers from external agents with malicious intent to disrupt onboard electronics. By analyzing the different types of automotive networks and current security issues that the industry faces, we will learn how intruders are able to access an automotive network, read data that streams from the connected nodes and inject potentially malicious messages. This presentation will cover the electrical design of automotive networks, the communication protocols between electronic control units, methods for analyzing network messages and a detailed overview of previous automotive attacks and current security issues.
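The kind of message analysis described above typically starts with decoding raw frames from the vehicle's CAN bus. As a rough illustration (not material from the talk), the sketch below unpacks a classic CAN 2.0 data frame in the 16-byte layout Linux SocketCAN uses (`struct can_frame`: 32-bit ID, length byte, padding, 8 data bytes); the example ID and payload are invented.

```python
# Sketch: decode a classic CAN data frame in SocketCAN's struct can_frame
# layout: canid_t can_id (4 bytes), u8 dlc, 3 pad bytes, u8 data[8].
import struct

def parse_can_frame(raw: bytes):
    """Unpack a 16-byte SocketCAN frame into (arbitration_id, payload)."""
    can_id, dlc = struct.unpack_from("<IB", raw)
    payload = raw[8:8 + dlc]          # data starts after the 3 pad bytes
    return can_id & 0x1FFFFFFF, payload  # mask off flag bits, keep 29-bit ID

# Example frame: hypothetical arbitration ID 0x244 with 4 data bytes.
frame = struct.pack("<IB3x8s", 0x244, 4, bytes([0x12, 0x34, 0x56, 0x78]))
can_id, payload = parse_can_frame(frame)
print(hex(can_id), payload.hex())  # -> 0x244 12345678
```

An intrusion detection system of the sort mentioned in the speaker's bio would consume a stream of such decoded (ID, payload) pairs and look for anomalous IDs, timing, or payload values.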

Sekar Kulandaivel is a Meyerhoff Scholar and Computer Engineering undergraduate student at UMBC. He currently works on designing an intrusion detection system for automotive networks with Dr. Nilanjan Banerjee of the UMBC Eclipse Cluster. Sekar has had previous internships at MIT Lincoln Laboratory, Northrop Grumman Corporation and Johns Hopkins University. He will attend Carnegie Mellon University in Fall 2016 to pursue a PhD in Electrical and Computer Engineering with a focus in electric vehicle security.

3D Capturing the Future at UMBC

3D Capture Studio Cameras

Nestled in the back of room 109 of the Information Technology and Engineering building, Dan Bailey, head of the Imaging Research Center (IRC), is being captured. He sits in the center of the room in an open metal rig, surrounded by cameras. IRC staff walk around him, fine-tuning each camera's settings as they make their final preparations.

The lights turn off, and within a second a bright flash illuminates the room. The lights turn back on, and a staff member exclaims “got it” as another successful 3D capture has been performed at the IRC. In the next few hours a powerful computer will start to build a 3D scan of Dan Bailey.


Outside the room sits Dr. Marc Olano, a professor of Computer Science & Electrical Engineering, who helps run the studio. Olano and Bailey worked with Direct Dimensions Inc., a company based in Owings Mills, Maryland, and funded the space through a $180,000 grant from the National Science Foundation.

Olano told Stephen Babcock, a reporter at Technical.ly Baltimore that “Being able to capture a 3D model is just a priceless ability,” and “That flash is the first step.”


Dr. Marc Olano outside the capture room. Here, he can view the 3D models and manipulate them.

The system is smart enough to ignore the cameras surrounding the person or object and instead finds unique feature points to focus on. In addition, projectors help capture subjects that lack sufficient detail by projecting additional detail onto the person or object so the system can find it.

Between captures, a couple of tools are used to help calibrate the space. One is a pole used to position the cameras: tape marks on the pole provide reference points for the center and edge of each camera's frame.

For capturing, there is a calibration dummy made of cardboard construction tubes covered with many different clothing patterns. This makes it easier to build a 3D model of the dummy: the reconstruction process can solve for the shape of the object and the camera positions simultaneously, but if the part of the object or person being scanned is too featureless, it can have trouble solving for both at once.

After the images are captured, the computer builds a 3D model out of the 90 images. Zooming in, you can actually see each individual polygon that makes up the model.


3D scans of Marc Olano and Dan Bailey done in the capture studio.


Olano believes that this 3D capture studio can go beyond computer science. “This [studio] serves as not only the intersection of art and computer science, but other disciplines as well.” When scanning in a person, it is possible to make that person into an animation model for use in video games.

Museums could ask to scan in historical objects so that one could rotate and inspect the object at any angle they want. People who have had amputations could get scanned and have something custom built for them. The possibilities for this space are immense and will continue to grow over time.

The 3D capture studio is still limited to only a few projects at a time but Olano hopes to open it up to more departments soon. You can view some of the scans the studio has done online here.