From sherman@umbc.edu Thu Apr 30 18:24:52 2009
Date: Thu, 30 Apr 2009 18:24:22 -0400
From: Dr. Alan T. Sherman
To: CSEE ALL
Subject: [Csee-faculty-lecturer] poster abstracts - 2009 CSEE Research Review

CSEE Research Review - Poster Abstracts
Friday, May 1, 2009
Department of Computer Science and Electrical Engineering
University of Maryland, Baltimore County (UMBC)

Note: The students who won research awards also have posters, but they are ineligible for the three poster awards. For their abstracts, see the talk abstracts.

MS Students

1. Wesley Griffin (Advisor: Marc Olano), MAPLE and VANGOGH Labs
Creating Lines on the Geometry Shader

Recent advances in graphics hardware include capabilities that have yet to be fully explored. One such capability is the processing of mesh adjacency information in the programmable geometry shader. Mesh adjacency information is the set of triangles adjacent to the primary triangle currently being processed. Having access to adjacent triangles allows a shader program to compute information normally not available in the graphics pipeline, such as edge orientation or surface curvature data.

A popular domain in non-photorealistic rendering is line drawing. Many techniques have been developed to draw lines in an artistic manner. These techniques can be divided into two groups: image-based and object-based. Image-based techniques render objects to a frame buffer and then process the image to create lines. Object-based techniques, on the other hand, work with the polygonal mesh representation of objects and analyze various aspects of the geometry to create different types of lines. Recent object-based techniques specifically analyze the surface curvature of polygonal meshes to extract lines. Typically these algorithms run on the host computer, and the resulting lines are textured and rendered on the graphics hardware. These algorithms are potentially uniquely suited to the geometry shader's adjacency capabilities. We will develop a system that creates and renders stylized lines for generic models and runs completely in graphics hardware.
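As an illustration of why adjacency matters for object-based line drawing, the sketch below flags silhouette edges of a triangle mesh: an edge is a silhouette when one of the two triangles sharing it faces the viewer and the other faces away. This is only a minimal CPU-side Python sketch of the general idea; the poster's system would perform the equivalent per-primitive test in the geometry shader using the adjacency data supplied by the hardware, and the mesh and view direction here are hypothetical.

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normal of each triangle (faces are index triples into vertices)."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def silhouette_edges(vertices, faces, view_dir):
    """Return edges shared by one front-facing and one back-facing triangle."""
    normals = face_normals(vertices, faces)
    front = normals @ view_dir < 0.0          # triangle faces the viewer
    # Map each undirected edge to the triangles that share it (the adjacency info).
    edge_to_faces = {}
    for f_idx, (i, j, k) in enumerate(faces):
        for e in ((i, j), (j, k), (k, i)):
            edge_to_faces.setdefault(tuple(sorted(e)), []).append(f_idx)
    return [e for e, fs in edge_to_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]

# Hypothetical example: a small tetrahedron viewed along -z.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 2, 1], [0, 1, 3], [1, 2, 3], [0, 3, 2]])
print(silhouette_edges(verts, tris, np.array([0.0, 0.0, -1.0])))
```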
2. Albert Kir and Tejaswini Kavallappa (Advisor: Joel Morris)
Develop statistical and deterministic signal processing algorithms for enhanced mid-IR sensor performance

Mid-IR laser-based gas sensing systems built on laser spectroscopy (LAS) have important applications in numerous fields. We used detection theory to develop a generic statistical analysis model for mid-IR gas sensing systems that computes the relevant performance probabilities. A new signal strength measure, delta SNR, is defined to address the detection problem for the gas sensing system and is used to analyze how improvements to the system affect its detection performance.

PhD Students

3. Lushan Han (Advisors: Tim Finin, Yelena Yesha, and Anupam Joshi), ebiquity Group
Finding the Most Appropriate Semantic Web Terms from Words

The Semantic Web language RDF was designed to unambiguously define and use ontologies to encode data and knowledge on the Web. Many people find it difficult, however, to write complex RDF statements and queries because doing so requires familiarity with the appropriate ontologies and the terms they define. We are developing a system that maps a set of ordinary English words to the most appropriate RDF terms (considering both term consistency and ontology popularity) drawn from an interconnected ontology network. We use the Swoogle Semantic Web search engine to provide RDF term and ontology co-occurrence statistics, the WordNet lexical ontology to resolve synonyms, and a practical three-step approach to find the most suitable ontology context as well as the most appropriate terms for the input words.
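The abstract does not spell out the three-step algorithm, so the following is only a hedged illustration of the kind of scoring it implies: each English word has a set of candidate RDF terms (e.g., after synonym expansion), each candidate has a popularity count, pairs of candidates have a co-occurrence count, and the chosen assignment maximizes a combined popularity-plus-consistency score. The candidate data, weights, and brute-force search are hypothetical stand-ins for the Swoogle statistics and the actual procedure.

```python
import itertools
import math

def assignment_score(assignment, popularity, cooccur, alpha=0.5):
    """Combine per-term popularity with pairwise term consistency."""
    pop = sum(math.log1p(popularity[t]) for t in assignment)
    pairs = itertools.combinations(assignment, 2)
    consistency = sum(math.log1p(cooccur.get(frozenset(p), 0)) for p in pairs)
    return alpha * pop + (1 - alpha) * consistency

def best_terms(candidates, popularity, cooccur):
    """Pick one RDF term per input word, maximizing the combined score."""
    best, best_score = None, float("-inf")
    for assignment in itertools.product(*candidates.values()):
        s = assignment_score(assignment, popularity, cooccur)
        if s > best_score:
            best, best_score = assignment, s
    return dict(zip(candidates.keys(), best))

# Hypothetical candidates and counts standing in for Swoogle statistics.
candidates = {"person": ["foaf:Person", "dbo:Person"],
              "name":   ["foaf:name", "rdfs:label"]}
popularity = {"foaf:Person": 9000, "dbo:Person": 1200,
              "foaf:name": 15000, "rdfs:label": 30000}
cooccur = {frozenset({"foaf:Person", "foaf:name"}): 8000,
           frozenset({"dbo:Person", "rdfs:label"}): 500}
print(best_terms(candidates, popularity, cooccur))
```

In this toy example the co-occurrence (consistency) term pulls the choice toward foaf:Person with foaf:name even though rdfs:label alone is more popular.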
4. Palanivel Kodeswaran (Advisor: Anupam Joshi), ebiquity Group
Towards a Declarative Framework for Managing Application and Network Adaptations

Cross-layer optimizations are increasingly used in a variety of applications to push application intelligence into the network layer, with the overall goal of improving application-specific metrics. However, most of these implementations are ad hoc and performed on a per-application basis. In this paper we propose a declarative framework for managing application and network adaptations. The declarative framework provides a much-needed clean line of separation between high-level goals and low-level implementations. Our framework exposes the tunable features of both the application and the network across layers, which can then be jointly optimized through operator-specified policies. This allows operators to control the adaptation process and retain control over their networks while the application and the network adapt in response to changing conditions. We pursue an ontology-based approach and use Semantic Web languages such as OWL and RDF for the policy and declarative specifications, thereby leveraging the inherent reasoning and conflict-resolution features of these languages. We then describe our simulator, built on top of the NS2 simulator, which demonstrates the utility of our approach in easily implementing cross-layer optimizations through sample application scenarios.

5. John Krautheim (Advisors: Dhananjay Phatak and Alan Sherman), Cyber Defense Lab
Private Virtual Infrastructure

Cloud computing places an organization's sensitive data in the control of a third party, introducing a significant level of risk to the privacy and security of the data. We propose a new management and security model for cloud computing, called the Private Virtual Infrastructure (PVI), that shares the responsibility for security in cloud computing between the service provider and the client, decreasing the risk exposure to both. The PVI datacenter is under the control of the information owner, while the cloud fabric is under the control of the service provider. A cloud Locator Bot pre-measures the cloud for security properties, securely provisions the datacenter in the cloud, and provides situational awareness through continuous monitoring of cloud security. PVI and the Locator Bot provide the tools organizations require to maintain control of their information in the cloud and realize the benefits of cloud computing.

6. Wenjia Li (Advisors: Anupam Joshi and Tim Finin), ebiquity Group
Policy-based Malicious Peer Detection in Mobile Ad Hoc Networks

Mobile Ad Hoc Networks (MANETs) are susceptible to various node misbehaviors due to their unique features, such as highly dynamic network topologies, rigorous power constraints, and error-prone transmission media. While significant research effort has been devoted to detecting misbehavior, little work has been done on distinguishing truly malicious behaviors from simply faulty ones. We are developing a policy-based malicious peer detection framework that collects and uses context information to determine the likely intent of a misbehaving peer. The context information includes features such as communication channel status, buffer status, and transmission power levels. Our simulation results show that the framework can distinguish malicious from faulty peers with high confidence. Moreover, the mechanism converges to a consistent view of malicious nodes among all nodes with limited communication overhead.

7. Justin Martineau (Advisor: Tim Finin), ebiquity Group
Delta TFIDF: An Improved Feature Space for Sentiment Analysis

Mining opinions and sentiment from social networking sites is a popular application of social media systems. Common approaches use a machine learning system with a bag-of-words feature set. We present Delta TFIDF, an intuitive, general-purpose technique for efficiently weighting word scores before classification. Delta TFIDF is easy to compute, implement, and understand. Using Support Vector Machines, we show that Delta TFIDF significantly improves accuracy on sentiment analysis problems across three well-known data sets.
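A hedged sketch of the weighting idea follows. It assumes the commonly cited Delta TFIDF formulation, in which a term's frequency in a document is scaled by the difference between its IDF over the positively labeled training documents and its IDF over the negatively labeled ones, so that words spread evenly across both classes get weights near zero. The smoothing constant, sign convention, and toy corpus below are illustrative choices, not taken from the paper.

```python
import math
from collections import Counter

def delta_tfidf(doc_tokens, pos_docs, neg_docs):
    """Weight each term's count in a document by the difference of its IDFs
    in the positive and negative training corpora (+1 smoothing is illustrative)."""
    def df(term, docs):
        return sum(term in d for d in docs)
    weights = {}
    for term, count in Counter(doc_tokens).items():
        idf_pos = math.log((len(pos_docs) + 1) / (df(term, pos_docs) + 1))
        idf_neg = math.log((len(neg_docs) + 1) / (df(term, neg_docs) + 1))
        weights[term] = count * (idf_neg - idf_pos)  # > 0 leans positive, < 0 negative
    return weights

# Toy labeled training documents (token sets) and a new document to weight.
pos = [{"great", "plot", "acting"}, {"great", "fun"}, {"loved", "acting"}]
neg = [{"boring", "plot"}, {"awful", "boring"}, {"awful", "plot", "acting"}]
print(delta_tfidf(["great", "plot", "plot", "boring"], pos, neg))
```

The resulting feature vector can then be fed to a linear classifier such as an SVM in place of plain TFIDF weights.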
8. Don Miner (Advisor: Marie desJardins), MAPLE Lab
Learning Non-Explicit Control Parameters of Swarm Systems

Swarm-level behavior of a swarm system is easy to measure. For example, the density of a boid flock can be measured by observing the area covered by the flock divided by the number of agents. However, adjusting the Explicit Control Parameters (ECPs) of the system's program to generate specific non-explicit behavior is non-trivial: determining the agent-level parameter values that result in the flock exhibiting a specific density is difficult. Our approach is to use common and novel machine learning techniques to learn correlations between the ECPs and user-defined Non-Explicit Control Parameters (NECPs), which represent more abstract concepts in the system. Users adjust the value of an NECP, which is then translated into values for the ECPs that the program can handle directly. NECPs provide more intuitive and more efficient user control of these swarm systems, since they represent more abstract, swarm-level concepts of the system. In addition, NECPs can be used as predictors to determine how the swarm will behave without running an experiment. Our main contribution is a general framework for defining non-explicit control parameters. Our work focuses on a few popular domains: Reynolds boid flocking, particle swarm optimization, wireless sensor network layout, and traffic simulations. Approaches we have investigated for this purpose include linear regression, gradient descent, perceptrons, and k-nearest neighbors.
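As a hedged illustration of the translation step, the sketch below fits a linear model from sampled ECP settings to a measured swarm-level quantity (a stand-in NECP such as flock density), then picks the ECP setting whose predicted NECP is closest to the value a user requests. The simulated training data, the grid search, and the choice of a linear model are assumptions for illustration; the poster's framework also considers other learners such as k-nearest neighbors and perceptrons.

```python
import numpy as np

def fit_necp_model(ecp_samples, necp_values):
    """Least-squares linear map from ECP vectors to a measured NECP."""
    X = np.hstack([ecp_samples, np.ones((len(ecp_samples), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, necp_values, rcond=None)
    return w

def ecps_for_target(w, candidate_ecps, target_necp):
    """Choose the candidate ECP setting whose predicted NECP is closest to the target."""
    X = np.hstack([candidate_ecps, np.ones((len(candidate_ecps), 1))])
    preds = X @ w
    return candidate_ecps[np.argmin(np.abs(preds - target_necp))]

# Hypothetical training data: (separation, cohesion) ECPs and the flock
# density each setting produced in simulation runs.
rng = np.random.default_rng(0)
ecps = rng.uniform(0.0, 1.0, size=(200, 2))
density = 2.0 * ecps[:, 0] - 1.0 * ecps[:, 1] + rng.normal(0, 0.05, 200)

w = fit_necp_model(ecps, density)
grid = np.array([[s, c] for s in np.linspace(0, 1, 21) for c in np.linspace(0, 1, 21)])
print("ECPs for target density 0.8:", ecps_for_target(w, grid, 0.8))
```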
9. Rory Mulvaney (Advisor: Dhananjay Phatak)
Regularization and Diversification against Overfitting and Over-specialization

In machine learning, regularization against overfitting attempts to solve the problem caused when a learning algorithm assumes that all training patterns are always present together. Cross validation provides a more realistic distribution of training data, since it trains with various subsets, weighted by appropriate probabilities. Basis Function Regularization (BFR) is a new augmentation to an objective function that attempts to efficiently emulate cross validation without requiring cross-validation subsets, and furthermore provides a cross-validation routine with a single regularization parameter to optimize, reducing the risk of overfitting. Experiments confirmed BFR's usefulness for regularizing least-squares regression against overfitting.

We view over-specialization in human researchers as a phenomenon analogous to overfitting. Just as financial advisors recommend broad diversification, we imagine that, in order to diversify, researchers should publish their work in a homogenized "Priority-setting Market," where the "price" of an issue is proportional to its priority. In the spirit of diversification and systems engineering, several other promising projects have taken shape, including: a cache-motivated clustering objective, update lists for relocatable data objects, in-cache B-trees for extension words, a universal programming language framework with loadable syntax modules, and process management note-taking techniques.

10. Zareen Syed (Advisor: Tim Finin), ebiquity Group
Wikitology: A Wikipedia-Derived Hybrid Knowledge Base

We are developing "Wikitology," a novel hybrid knowledge base derived from Wikipedia and other related knowledge resources. It exposes knowledge hidden in different forms, such as RDF triples, links, graphs, tables, and free text, to applications, thereby enabling effective access to and utilization of world knowledge. We have developed and evaluated the Wikitology 1.0 system, which blends statistical and ontological approaches for predicting the concepts in documents. An enhanced version, "Wikitology 2.0," was constructed as a knowledge base of known individuals and organizations, as well as general concepts, for use in the ACE cross-document co-reference task; it incorporates structured data in RDF from DBpedia and Freebase, encoded in an RDFa-like format. The evaluation results showed high precision (0.966) and reasonably high recall (0.72). We are currently working on Wikitology 3.0, focusing on enhancements targeted at the TAC 2009 Knowledge Base Population task for persons, organizations, and locations, which involves extracting information about entities with reference to an external knowledge source. The main tasks are entity linking and ontology slot filling. We have incorporated and integrated data from Freebase, DBpedia, and Wikipedia to construct entity link graphs for persons, organizations, and geo-locations. We plan to implement graph-based algorithms on these entity link graphs to aid both the entity linking and slot filling tasks. We are also employing Wikitology for named entity disambiguation, which supports the entity linking task.

Abstracts from Students Unable to Attend

Patricia Ordonez (Advisor: Marie desJardins), MAPLE Lab
A Multivariate Time-Series Visualization of Clinical and Physiological Data

We present an approach that creates a multivariate time-series representation of physiological and clinical data, a Multivariate Time Series Amalgam (MTSA), that medical providers can interpret visually. It enables medical providers to receive more personalized and well-rounded information about a patient and thus make more informed and efficient decisions about an individual's care. The representation serves as a visual model of a patient's state over time and is organized so that the dependency between the data and the state of four vital organs (the heart, lungs, liver, and kidneys) is emphasized. The objective of the visualization is to provide an integrated, visual patient history, in a time-critical situation, that emphasizes how parameter values change over time.

_______________________________________________
Csee-faculty-lecturer mailing list
Csee-faculty-lecturer@cs.umbc.edu
http://lists.cs.umbc.edu/mailman/listinfo/csee-faculty-lecturer