PhD defense: Oehler on Private Packet Filtering, 11/21

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

Private Packet Filtering Searching for Sensitive Indicators
without Revealing the Indicators in Collaborative Environments

Michael John Oehler

10:30-12:30 Thursday, 21 November 2013, ITE 325

Private Packet Filtering (PPF) is a new capability that preserves the confidentiality of sensitive attack indicators and retrieves network packets that match those indicators without revealing which packets matched. The capability is achieved through the definition of a high-level language, the definition of a conjunction operator that expands the breadth of the language, a simulation of the document detection and recovery rates of the output buffer, and a description of applicable system facets. Fundamentally, PPF uses a private search mechanism that in turn relies on the (partial) homomorphic property of the Paillier cryptosystem. PPF is intended for use in a collaborative environment involving a cyber defender and a partner: the defender has access to a set of sensitive indicators and is willing to share some of those indicators with the partner; the partner has access to network data and is willing to share that access. Neither is willing to provide full access. Using the language, the defender creates an encrypted form of the sensitive indicators and passes the encrypted indicators to the partner. The partner then uses the encrypted indicators to filter packets and returns an encrypted packet capture file. The partner does not decrypt the indicators and cannot identify which packets matched. The defender decrypts, reassembles the matching packets, gains situational awareness, and notifies the partner of any malicious activity. In this sense, the defender reveals only the observed indicator and retains control of all other indicators. PPF allows both parties to gain situational awareness of malicious activity and to retain control without exposing every indicator or all network data.
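The private search mechanism rests on the additive homomorphic property of the Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so encrypted values can be combined without decrypting them. A minimal sketch of that property follows, using toy parameters for readability; it illustrates textbook Paillier only, not the dissertation's actual protocol or key sizes (real deployments use primes of roughly 1024 bits or more).

```python
import math
import random

p, q = 17, 19              # toy primes; far too small for real use
n = p * q
n2 = n * n
g = n + 1                  # the standard choice g = n + 1 simplifies decryption
lam = math.lcm(p - 1, q - 1)       # Python 3.9+
mu = pow(lam, -1, n)               # modular inverse; Python 3.8+

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c = encrypt(20) * encrypt(22) % n2
assert decrypt(c) == 42
```

Because the randomizer r makes every ciphertext look different, the party holding only ciphertexts learns nothing about the plaintexts it combines, which is the property a private-search filter exploits.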

Committee: Dhananjay Phatak (chair), Michael Collins, Josiah Dykstra, Russell Fink, John Pinkston and Alan Sherman

PhD defense: Xianshu Zhu, Finding Story Chains and Creating Story Maps in Newswire Articles

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Finding Story Chains and Creating Story Maps in Newswire Articles

Xianshu Zhu

10:00am-12:00pm Monday, 25 November 2013, ITE 325B

Huge numbers of news articles about events are published on the Internet every day. This flood of information can easily swamp people, often producing more pain than gain. While excellent search engines such as Google, Yahoo and Bing help us retrieve information from simple keyword queries, information overload still makes it hard to understand how a news story evolves. Conventional search engines display unstructured search results, ranked by relevance using keyword-based methods and other more sophisticated ranking algorithms. However, when it comes to searching for a story (a sequence of events), none of these algorithms can organize the search results by the story's evolution. Limitations of unstructured search results include: (1) Lack of the big picture on complex stories. News articles tend to describe a story from different perspectives, so for complex stories users can spend significant time looking through unstructured search results without ever seeing the big picture. For instance, Hurricane Katrina struck New Orleans in August 2005. By typing “Hurricane Katrina” into Google, people can find abundant information about the event and its impact on the economy, health, government policies, and so on. However, they may struggle to sort that information into a story chain that tells how, for example, Hurricane Katrina affected government policies. (2) Difficulty finding hidden relationships between two events. The connections between news events are sometimes extremely complicated and implicit, and it is hard for users to discover them without thorough investigation of the search results.

In this dissertation, we seek to extend the capability of existing search engines to output coherent story chains and story maps (maps that present various perspectives on news events), rather than loosely connected pieces of information. By this means, people can obtain a better understanding of a news story, capture its big picture quickly, and discover hidden relationships between news events. First, algorithms for finding story chains have two advantages: (1) they can determine how two events are correlated by finding a chain of events that coherently connects them, helping people discover hidden relationships between the two events; (2) they allow users to pose complex queries such as “how is event A related to event B,” which do not work well on conventional keyword-based search engines. Second, creating story maps by finding different perspectives on a news story and grouping news articles by those perspectives helps users capture the big picture of the story and suggests directions they can pursue further. From a functionality point of view, the story map is similar to the table of contents of a book, which gives users a high-level overview of the story and guides them during the news-reading process.

The specific contributions of this dissertation are: (1) Develop various algorithms to find story chains, including: (a) a random-walk-based story chain algorithm; (b) a co-clustering-based story chain algorithm, which further improves the story chains by grouping semantically close words together and propagating the relevance of word nodes to document nodes; (c) finding story chains by extracting multi-dimensional event profiles from unstructured news articles, which aims to better capture relationships among news events and significantly improves the quality of the story chains. (2) Develop an algorithm to create story maps that uses Wikipedia as the knowledge base. News articles are represented in the form of bag-of-aspects instead of bag-of-words. The bag-of-aspects representation allows users to search news articles through different aspects of a news event rather than through simple keyword matching.
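As a rough illustration of the random-walk idea behind the first family of algorithms: documents that lie on a coherent chain between two query events should be strongly reachable from both endpoints of the chain. The graph, edge weights, and scoring rule below are invented for this sketch (a random walk with restart over a tiny document-similarity graph), not taken from the dissertation.

```python
# Hypothetical 5-document similarity graph: docs 0 and 4 are the two
# query events; edge weights stand in for pairwise article similarity.
edges = {
    (0, 1): 0.8, (1, 2): 0.7, (2, 4): 0.6,
    (0, 3): 0.3, (3, 4): 0.2,
}
N = 5

def neighbors(i):
    for (a, b), w in edges.items():
        if a == i:
            yield b, w
        if b == i:
            yield a, w

def walk_scores(seed, alpha=0.15, iters=50):
    """Random walk with restart from `seed`: approximate visit probabilities."""
    r = [0.0] * N
    r[seed] = 1.0
    for _ in range(iters):
        nxt = [0.0] * N
        for i in range(N):
            total = sum(w for _, w in neighbors(i))
            for j, w in neighbors(i):
                nxt[j] += (1 - alpha) * r[i] * w / total
        nxt[seed] += alpha          # restart mass returns to the seed
        r = nxt
    return r

# A document on a coherent chain scores high from BOTH endpoints.
from_start, from_end = walk_scores(0), walk_scores(4)
chain_score = [a * b for a, b in zip(from_start, from_end)]
best_intermediate = max((i for i in range(N) if i not in (0, 4)),
                        key=chain_score.__getitem__)
```

Here the strongly connected path 0-1-2-4 outranks the weak 0-3-4 detour, which is the intuition behind chaining: prefer intermediate articles that connect both endpoints coherently rather than tenuously.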

Committee: Drs. Tim Oates (chair), Tim Finin, Charles Nicholas, Sergei Nirenburg and Doug Oard

Council of Computing Majors to meet 12-12:45 Wed, Nov 20 in BIO LH1

The UMBC Council of Computing Majors (CCM) will meet from Noon to 12:45pm on Wednesday, November 20 in BIO 101 (Lecture Hall 1 in the building behind the Biology Building). The CCM is a student organization representing undergraduate computer science and computer engineering majors and anyone else with an interest in computing. Everyone is welcome.

At this meeting, students from Professor Nilanjan Banerjee’s Mobile, Pervasive and Sensor Systems Lab (MPSSL) will talk about their research. Their lab currently focuses on application areas that include renewable energy, healthcare applications and mobile phone systems as well as theoretical work on network topology compression and analytical modeling of hybrid mobile networks.

2014 Global Game Jam at UMBC, Jan 24-26

For the 6th year in a row, UMBC is the Baltimore host site for the Global Game Jam, which will be held from 5:00pm Friday January 24 to 5:00pm Sunday January 26. Many Global Game Jam sites charge admission. Thanks to the generous support of Next Century, the UMBC site will again be free, meals included. So we can plan appropriately, we do require advance registration.

In a game jam, participants come together to make a video game. Each participant joins a small team at the jam and, over a couple of days, creates a new, unique and creative video game according to the rules of the jam. Game jams are a great way to meet other developers, beef up your resume, or just learn what it takes to make a game. Teams need designers who can come up with a creative game idea within the jam constraints, as well as artists, programmers and testers, so there is something to do for participants at all levels of experience.

The Global Game Jam takes place during the same 48 hours all over the world! Last year there were more than 300 host sites across the world. All participants will be constrained by the same theme and set of rules. After the theme is announced, participants will have the chance to brainstorm game ideas and pitch them to other participants to form development teams. After a couple of mad days of game development, all the games are demoed and submitted to the Global Game Jam site.

Even if you don’t participate, you can track the action on twitter by following @globalgamejam and monitoring #ggj14 and #umbcggj, and try out the game submissions after the event is over.

Rick Forno speaking at Cyber Education Symposium

CSEE’s Dr. Richard Forno, Cybersecurity GPD and Assistant Director of UMBC’s Center for Cybersecurity, is speaking at the NCSI Cybersecurity Education Symposium in Arlington, Virginia on Tuesday, November 19th. The panel session, “Creating the Cyber Curricula for Success,” will explore the interdisciplinary nature of cybersecurity and cybersecurity education at the undergraduate and graduate levels and discuss the right balance of disciplines, knowledge, skills, and attributes required to prepare future cybersecurity practitioners.

Also speaking later that day are Ellen Hemmerly (Executive Director and President, UMBC Research Park Corporation) and Kent Malwitz (President, UMBC Training Centers) on a panel session entitled “The Business Model for a Cybersecurity Program.”

Defense: Emokpae on Design and Analysis of Underwater Acoustic Networks, 11am Tue 11/12

Dissertation Defense

Design and Analysis of Underwater Acoustic
Networks with Reflected Links

Lloyd Emokpae

11:00am Tuesday, 12 November 2013, ITE 346, UMBC

Underwater acoustic networks have applications in environmental state monitoring, oceanic profile measurements, leak detection in oil fields, distributed surveillance, and navigation. For these applications, sets of nodes are employed to collaboratively monitor an area of interest and track certain events or phenomena. In addition, it is common to find autonomous underwater vehicles (AUVs) acting as mobile sensor nodes that perform search-and-rescue missions, reconnaissance in combat zones, and coastal patrol. These AUVs must work cooperatively to achieve a desired goal and thus need to be able to establish and sustain communication links in an ad-hoc manner to ensure some desired level of quality of service. Therefore, each node must adapt to environmental changes and overcome broken communication links caused by external noise affecting the communication channel and by node mobility. In addition, since radio waves are quickly absorbed in the water medium, most underwater applications rely on acoustic (sound) rather than radio channels for mid-to-long-range communications. However, acoustic channels pose multiple challenging issues, most notably high transmission delay due to slow signal propagation and limited channel bandwidth due to high-frequency attenuation. Moreover, the inhomogeneous nature of the water medium affects the sound speed profile, while surface and bottom reflections lead to multipath effects.

In this dissertation, we address these networking challenges by developing protocols that take into consideration the underwater physical-layer dynamics. We begin by introducing a novel surface-based reflection scheme (SBR), which takes advantage of the multipath effects of the acoustic channel. SBR works by using reflections from the water surface and bottom to establish non-line-of-sight (NLOS) communication links. SBR makes it possible to incorporate both line-of-sight (LOS) and NLOS links by utilizing directional antennas, which boost the signal-to-noise ratio (SNR) at the receiver while promoting NLOS usage. In our model, we employ a directional underwater acoustic antenna composed of an array of hydrophones whose outputs can be summed at various phases and amplitudes, resulting in a beam-former. We have also adopted a practical multimodal directional transducer concept that generates both directional and omnidirectional beam patterns by combining the fundamental vibration modes of a cylindrical acoustic radiator. This allows the transducer to be electrically controlled and steered by simply adjusting the electrical voltage weights. A prototype acoustic modem was then developed to utilize the multimodal directional transducer for both LOS and NLOS communication. The acoustic modem has also been used as a platform for empirically validating our SBR communication model with measurements in a tank.

Networking protocols have been developed to exploit the SBR communication model. These protocols include node discovery and localization, directional medium access control (D-MAC) and geographical routing. In node discovery and localization, each node will utilize SBR-based range measurements to its neighbors to determine their relative position. The D-MAC protocol utilizes directional antennas to increase the network throughput due to the spatial efficiency of the antenna model. In the proposed reflection-enabled directional MAC protocol (RED-MAC), each source node will be able to determine if an obstacle is blocking the LOS link to the destination and switch to the best NLOS link by utilizing surface/bottom reflections. Finally, we have developed a geographical routing algorithm which aims to establish the best stable route from a source node to a destination node. The optimized route is selected to achieve maximum network throughput. Extensive analysis of the network throughput when utilizing directional antennas is also presented to show the benefits of directional communication on the overall network throughput.

Committee: Mohamed Younis (Chair), Tulay Adali, Charles Laberge, Anupam Joshi, Lisa Marvel, Edward Hua

Professors Mohsenin and Slaughter receive NSF grants for medical applications

CSEE Professors Tinoosh Mohsenin and Gymama Slaughter each received recent grants from NSF to apply their computer engineering expertise to developing new medical technology. What follows is an excerpt from a recent article on their work written by Joel N. Shurkin.

The Body Electric: UMBC researchers forge links between tech and medicine

Monitoring significant developments in a patient’s health outside a hospital setting can be challenging, but two UMBC researchers – Tinoosh Mohsenin and Gymama Slaughter – have won separate grants from the National Science Foundation (NSF) to help meet those challenges.

Mohsenin received a $100,000 grant from the NSF to develop signal processing architecture to detect seizures. The award of $150,000 to Slaughter from the foundation was to pursue work on nanoelectric probe arrays.

Not only does the work done by these two UMBC professors have important implications for basic medical science, but it may also provide more insight into how we think and feel, or improve how people with disabilities navigate the world.

Read the rest here.

talk: The strange world of quantum computing, 2:30 Tue 11/5, ITE 325

Computer Science and Electrical Engineering
Quantum Computing Seminar

The Strange World of Quantum Computing

Samuel Lomonaco, CSEE, UMBC

2:30-3:00 Tuesday, 5 November 2013, ITE 325b

This talk will give an introductory overview of quantum computing in an intuitive and conceptual fashion. No prior knowledge of quantum mechanics will be assumed. This is the first of a series of talks based on the four invited lectures given at Oxford. PowerPoint slides can be found online here.

Samuel J. Lomonaco is a professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County. He is internationally known for his many contributions in mathematics and computer science. His research interests span a wide range of subjects, from knot theory and algebraic and differential topology to algebraic coding theory, quantum computation, and symbolic computation. In quantum cryptography, he has shown how quantum information theory can be used to gain a better understanding of eavesdropping with quantum entanglement. In quantum computation, he has shown how Lie groups can be used to solve problems arising in the study of quantum entanglement. In 2000 Professor Lomonaco organized the first American Mathematical Society short course on quantum computation.

Organizer: Prof. Samuel Lomonaco

Defense: Nguyen on Fast Modular Exponentiation Using Residue Domain Representation, Noon 11/5

M.S. Thesis Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Fast Modular Exponentiation Using Residue Domain Representation:
A Hardware Reference Implementation and Analysis

Christopher D. Nguyen

12:00–2:00pm, Tuesday, 5 November 2013, ITE 228, UMBC

Using field-programmable gate arrays (FPGAs) we engineered and analyzed the first hardware implementation of Phatak’s reduced-precision residue number system (RP-RNS) to perform modular exponentiation.

Residue number systems (RNSs) provide an alternative to the binary representation for the integer arithmetic used in applications such as public-key cryptography and digital signal processing. They offer fully parallel computation for addition, subtraction, and multiplication, improving performance from O(K) to O(lg K) for a K-bit number. However, base extension, division, and sign detection become harder operations.
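The channel-wise parallelism can be sketched with a toy residue system: an integer is represented by its residues modulo a set of pairwise-coprime moduli, each "channel" computes independently with no carries between channels, and the Chinese Remainder Theorem recovers the result. The moduli below are illustrative only, not those of RP-RNS.

```python
from math import prod

moduli = [13, 17, 19, 23]       # pairwise-coprime channel moduli
M = prod(moduli)                # dynamic range is [0, M-1]

def to_rns(x):
    """Represent x by its residue in each channel."""
    return [x % m for m in moduli]

def rns_mul(a, b):
    # Each channel multiplies independently -- no carry propagation
    # between channels, which is what enables the parallel speedup.
    return [(x * y) % m for x, y, m in zip(a, b, moduli)]

def from_rns(r):
    """Chinese Remainder Theorem reconstruction."""
    x = 0
    for ri, m in zip(r, moduli):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(_, -1, m): Python 3.8+
    return x % M

assert from_rns(rns_mul(to_rns(123), to_rns(456))) == 123 * 456
```

Reconstruction is exactly the kind of whole-number operation (like base extension and comparison) that is expensive in an RNS, which is why the hard operations listed above need dedicated algorithms.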

RP-RNS is a new set of algorithms that uses approximation and a time-memory trade-off to address these hard operations. The partial reconstruction (PR) algorithm addresses base extension and the quotient-first scaling (QFS) algorithm addresses scaling; RP-RNS modular exponentiation uses both. RP-RNS improves the performance of modular multiplication in an RNS with range [0, M-1] from the O((lg n)^2) delay of current systems (e.g., Cox-Rower) to a theoretical O(lg n) delay, where n is the word length of M.

Our implementation is based on Phatak’s description and recommended architecture diagrams. We found that even low-end FPGAs can store over 30 channels of logic. Following the recommendation of parallel look-up table (LUT) access, we distributed the LUTs so that each is local to its channel. We found that applying this recommendation to QFS exceeds the capacity of today’s high-capacity FPGAs (e.g., Xilinx Virtex-7) for modest 2,000-bit divisors. We propose several improvements to increase feasibility; one is to store the LUTs external to the FPGA, which would introduce a performance penalty per look-up.

Committee: Alan Sherman (chair), Dhananjay Phatak, Chintan Patel and Ryan Robucci

talk: Acoustic-Tactile Rendering of Visual Information for the Visually Impaired, 2:30 Mon 11/11, ITE325b

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Acoustic-Tactile Rendering of Visual
Information for the Visually Impaired

Thrasyvoulos N. Pappas

Electrical Engineering and Computer Science
Northwestern University

2:30pm Monday, 11 November 2013, ITE 325B, UMBC

After a brief overview of research in the Department of Electrical Engineering and Computer Science at Northwestern University, we will focus on one particular research problem: the use of hearing and touch to convey graphical and pictorial information to visually impaired (VI) people. This problem combines visual, acoustic, and tactile signal analysis with an understanding of human perception and interface design. The main idea is that the user actively explores a two-dimensional layout consisting of one or more objects with a finger on a touch screen. The objects are displayed via sounds and raised-dot tactile patterns. The finger acts as a pointing device and provides kinesthetic feedback. The touch screen is partitioned into regions, each representing an element of a visual scene or graphical display. A key element of our research is the use of spatial sound to facilitate active exploration of the layout. We use the head-related transfer function (HRTF) for rendering sound directionality, and variations of sound intensity and tempo for rendering proximity. Our research has addressed object shape and size perception, as well as the perception of a 2-D layout of simple objects with identical size and shape. We have also considered the rendering of a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a “virtual cane.” Subjective experiments with visually-blocked subjects demonstrate the effectiveness of the proposed approaches. Our research findings are also expected to have an impact in other applications where vision cannot be used, e.g., GPS navigation while driving, fire-fighter operations in thick smoke, and military missions conducted under the cover of darkness.

Thrasos Pappas received the Ph.D. degree in electrical engineering and computer science from MIT in 1987. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He joined the EECS Department at Northwestern in 1999. His research interests are in human perception and electronic media, and in particular, image and video quality and compression, image and video analysis, content-based retrieval, model-based halftoning, and tactile and multimodal interfaces. Prof. Pappas is a Fellow of the IEEE and SPIE. He has served as editor-in-chief of the IEEE Transactions on Image Processing (2010-12), elected member of the Board of Governors of the Signal Processing Society of IEEE (2004-07), chair of the IEEE Image and Multidimensional Signal Processing Technical Committee (2002-03), and technical program co-chair of ICIP-01 and ICIP-09. Since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging.

Host: Janet C. Rutledge, Ph.D.
