PhD Defense: Semantically Rich, Policy Based Framework to Automate Lifecycle of Cloud Based Services


Ph.D. Thesis Defense Announcement

Semantically Rich, Policy Based Framework to Automate Lifecycle of Cloud Based Services

Karuna P. Joshi

10:00am 19 November 2012, ITE 325B

 

Managing virtualized services efficiently over the cloud is an open challenge. Traditional models of software development are time consuming and labor intensive in the cloud computing domain, where software (and other) services are acquired on demand and virtualized services are often composed of pre-existing components assembled on an as-needed basis. We have developed a new framework to automate the acquisition, composition, and consumption/monitoring of virtualized services delivered on the cloud. We divide the service lifecycle into five phases (requirements, discovery, negotiation, composition, and consumption) and have developed ontologies, represented in Semantic Web languages, to capture the concepts and relationships of each phase. We have also developed a protocol to automate the negotiation process when acquiring virtualized services; it supports complex relaxation of the constraints being negotiated, based on user-defined policies. In addition, we have developed detailed ontologies to define service level agreements for cloud services.

To illustrate and validate how this framework can automate the acquisition of cloud services, we have built two applications based on real-world scenarios. The Smart Cloud Services application enables users to determine and procure the cloud storage service that best matches their constraints and policies, and our VCL broker application lets users automatically reserve the VCL image that best meets their requirements. We have also developed a framework to measure and semi-automatically track the quality of a virtualized service delivery system. It provides a mechanism to relate the hard metrics typically measured at the backstage of the delivery process to the quality-related hard and soft metrics tracked at the front stage, where the consumer interacts with the service. While this framework is general enough to apply to any type of IT service, in this dissertation we concentrate primarily on the Helpdesk service and include the performance rules we created by mining Helpdesk data.

Thesis Committee:

  • Dr. Yelena Yesha (chair)
  • Dr. Tim Finin (co-chair)
  • Dr. Milton Halem
  • Dr. Yaacov Yesha
  • Dr. Aryya Gangopadhyay

Talk: An architecture for enterprise information interoperability, 11am Nov 9

CSEE Colloquium

Active PURLs: An architecture for enterprise information interoperability

Dr. David Wood
Three Round Stones

11:00am Friday, 9 November 2012, ITE 325b, UMBC

The World Wide Web differed from other early hypertext systems in the removal of "back links" (the ability of a hyperlinked object to link back to a referring resource). The removal of back links allowed the scalability inherent in the Web's design, but sacrificed the knowledge necessary to update links when content moved. Persistent URLs (PURLs) have been used on the Web since 1995 to provide an inexpensive and partial solution to link updates via HTTP redirection: PURLs do not change their URL, but they may change the target they redirect to. Various iterations of the PURL concept have allowed Web addresses to be updated, clients to be notified of permanent changes of address, and directions to be provided to metadata about a requested resource.
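The indirection described above can be sketched as a tiny in-memory registry. This is illustrative only: the class, method names, and URLs below are hypothetical, not the actual PURL service or Callimachus API.

```python
class PurlRegistry:
    """Maps a stable persistent URL to its current redirect target."""

    def __init__(self):
        self._targets = {}

    def register(self, purl, target):
        self._targets[purl] = target

    def update(self, purl, new_target):
        # The PURL itself never changes; only the redirect target does.
        self._targets[purl] = new_target

    def resolve(self, purl):
        # Models an HTTP 302 response: a status code plus a Location header.
        return (302, self._targets[purl])


registry = PurlRegistry()
registry.register("http://purl.example/docs/report",
                  "http://host-a.example/v1/report.html")
print(registry.resolve("http://purl.example/docs/report"))

# Content moves; the published PURL keeps working after one registry update.
registry.update("http://purl.example/docs/report",
                "http://host-b.example/report.html")
print(registry.resolve("http://purl.example/docs/report"))
```

The point of the design is that every citation of the PURL stays valid; only the single registry entry needs maintenance when content moves.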

"Active" PURLs are a relatively new (2007) iteration of the PURL concept that allow PURLs to actively participate in the creation of data returned. The Callimachus Project, an Open Source Linked Data management system, now implements Active PURLs as a means to automate the collection, transformation and provision of information from distributed sources. Active PURLs are implemented in Callimachus by means of a PURL service, a new PURL type and an XML pipeline (XProc) implementation.

This talk will introduce Active PURLs and describe how they may be used to address long standing problems in enterprise architecture, especially those of distributed information interoperability, by facilitating a strong separation of concerns between data producers, publishers, administrators, librarians and consumers.

Dr. David Wood has contributed to the evolution of the World Wide Web since 1999, especially in the formation of standards and technologies for the Semantic Web. He has architected key aspects of the Web, including the Persistent Uniform Resource Locator (PURL) service and several Semantic Web databases and frameworks. David is co-chair of the W3C RDF Working Group, co-chaired the Semantic Web Best Practices and Deployment Working Group, and is a member of the Semantic Web Coordination Group. David has represented international organizations in the evolution of Internet standards at the International Organization for Standardization (ISO), the Internet Engineering Task Force (IETF), and the World Wide Web Consortium. David is a founding and contributing member of many Free/Libre/Open Source Software (FLOSS) projects, including the Mulgara Semantic Store, Persistent URLs (PURL), Freemix, and the Callimachus Project. He is the author of Programming Internet Email (O'Reilly, 1999), editor of Linking Enterprise Data (Springer, 2010) and Linking Government Data (Springer, 2011), and lead author of Linked Data (Manning, anticipated 2013).

Host: Tim Finin

— more information and directions: http://bit.ly/UMBCtalks

talk: Modeling the dynamics of pulsed optical fiber lasers that rely on nonlinear polarization rotation

CSEE Colloquium

Modeling the dynamics of pulsed optical fiber lasers that rely on nonlinear polarization rotation

Brian Marks
Research Scientist
UMBC Computational Photonics Laboratory

1 pm Friday, 2 November 2012, ITE 227, UMBC

 

Ultrashort pulse lasers are important tools in time and frequency metrology, atomic spectroscopy, and medical applications. Passively modelocked fiber lasers are short pulse lasers that have many advantages over non-fiber alternatives — particularly size, weight, and cost. However, fiber lasers can drift due to environmental changes and changes in fiber properties, making robustness a problem. Although fiber modelocked lasers have been studied for decades, until recently modeling these devices has primarily been phenomenological. In this talk, I will discuss how passively modelocked fiber lasers work, improvements in the modeling effort in recent years, challenges for their robustness, and possible improvements for robustness based on our modeling work.

Brian Marks is a research scientist in the computational photonics laboratory at UMBC. He received his Ph.D. in Engineering Sciences and Applied Mathematics at Northwestern University, and B.S.'s in Math and Physics from N. C. State University. He was at UMBC from 2000–2005 in the computational photonics lab, then taught math and statistics at Indiana University in Bloomington for several years, and is now back at UMBC. His research interests include modeling and simulation of photonics and communications systems.

talk: Emerging Challenges in High Performance Computing

CSEE Colloquium

Emerging Challenges in High Performance Computing: Resilience and the Science of Embracing Failure

John T. Daly
Advanced Computing Systems Program at the Department of Defense / Center for Exceptional Computing

1:00 p.m. Friday, 9 November 2012, ITE 227, UMBC

 

Resilience is about keeping the application workload running to a correct solution in a timely and efficient manner in spite of system failures. Future extreme scale supercomputers are likely to suffer more frequent failures than current systems: As devices scale, they are more susceptible to upsets due to radiation and to errors due to manufacturing variances. The probability of multiple bit upsets is growing, since an event is increasingly likely to impact multiple nearby cells. The use of near-threshold voltage in order to reduce power consumption also increases error rates. Thus, we can expect more frequent hardware failures, and a significant rate of undetected soft errors. While it is desirable to have failure-free system hardware and software, this goal may not be achievable at reasonable cost as both hardened components and methodologies to design and test critical software tend to be extremely expensive. The challenge is to construct a system out of less than perfectly reliable hardware and software that nevertheless behaves as a reliable system from the perspective of the user.
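The abstract above does not spell out a mitigation, but the standard response to frequent failures is checkpoint/restart, and a classic back-of-the-envelope result in this area (Young's first-order approximation, later refined by Daly) estimates how much compute time should elapse between checkpoints given the checkpoint cost and the system's mean time between failures. A minimal sketch of that calculation:

```python
import math


def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """First-order optimal compute time between checkpoints:
    T_opt ~= sqrt(2 * delta * M), where delta is the time to write one
    checkpoint and M is the system mean time between failures.
    Valid when delta is small relative to M."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)


# Hypothetical numbers: 5-minute checkpoints, 24-hour system MTBF.
t_opt = optimal_checkpoint_interval(300, 24 * 3600)
print(round(t_opt / 60, 1), "minutes between checkpoints")  # 120.0 minutes
```

As the talk suggests, the trend matters: if extreme-scale systems fail more often (smaller M), the optimal interval shrinks and a growing fraction of machine time goes to checkpointing rather than useful work.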

John T. Daly is a computer systems researcher for the Advanced Computing Systems (ACS) Program at the Department of Defense / Center for Exceptional Computing (CEC). He is focused on the problem of keeping supercomputer applications running toward a correct solution in a timely and efficient manner in the presence of system degradations and failures. His research interests include mathematical modeling and analysis of failure, reliability, fault tolerance, calculational correctness, and throughput for applications at extreme scale. Before coming to the CEC, John was a researcher and resilience technical leader in the High Performance Computing (HPC) division at Los Alamos National Laboratory and a software engineer and application analyst for Raytheon Intelligence and Information Systems. He is a nationally recognized expert in resilience with 25 years of experience developing, porting, and running applications as an early adopter of many of the world's fastest supercomputers. He holds degrees in engineering and applied science and aerospace engineering from Caltech and Princeton University.

 

    — more information and directions: http://bit.ly/UMBCtalks

CYBERInnovation briefing on cybersecurity mergers and acquisitions

Cyber Security Mergers & Acquisitions — Striving for a
Successful Exit: Trends, Preparation, and Lessons Learned

9:00-10:30am, Friday 9 November 2012

RWD Building, UMBC Research Park
5521 Research Park Dr.
Baltimore, MD 21228

The Cyber Incubator at bwtech@UMBC will host a third CYBERInnovation Briefing on Friday 9 November 2012 in the RWD Building of UMBC's Research Park. Registration begins at 8:30am.

Cyber security acquisitions continue to heat up. Join the CyberHive community as we host a distinguished panel of cyber security executives and capital markets experts who will share their recent merger and acquisition experiences in the cyber security industry. Learn from buyers, sellers, and deal-flow managers how to drive a successful deal and be well prepared. Our panel will explore recent trends in activity, acquisition characteristics, attributes that enhance company valuation, lessons learned, process and financial preparation, and retention of key employees, and will offer words of wisdom.

Acquisition activity involving cyber security companies will continue to influence the economic growth of our region, as innovators from the National Security Agency, US Cyber Command, and the Defense Industrial Base launch creative business opportunities. These sessions are very interactive and we look forward to and welcome your participation.

For more information and to RSVP, contact Alexandra Gold.

UMBC ACM Tech Talk Series 10/24: Oates on Machine Learning

 
In the first talk of the UMBC ACM Student Chapter's Tech Talk Series, CSEE Prof. Tim Oates will talk about Machine Learning and how it makes an impact on your daily life.
 
Abstract:
Facebook has one billion users, there are more than 400 million tweets per day, and Google is approaching 5 billion searches per day. These companies and many of their brick-and-mortar counterparts are increasingly interested in what their data can tell them, and are hiring data scientists (people with a background in machine learning or data mining) at an astounding rate. In this talk I will briefly introduce the core concepts of machine learning, and describe some of its most interesting successes and some of the more mundane (though perhaps surprising) ways it impacts your life on a daily basis. Finally, I will conclude with a short overview of some successes of machine learning in my own lab, including producing textual descriptions of people in triage images involved in mass disasters, extracting scripts (stereotypical action sequences) from massive text corpora, and predicting outcomes for victims of traumatic brain injury using vital signs time series.
 
Light refreshments will be served. Please RSVP via the event on Facebook.
 
Where: ITE Building, Room 239
Date: Wednesday, October 24, 2012
Time: 11:45am – 12:45pm

talk: Computational Science at the Argonne Leadership Computing Facility

Center for Hybrid Multicore Productivity Research (CHMPR)
Distinguished Computational Science Lecture Series

Computational Science at the
Argonne Leadership Computing Facility

Paul Messina
Director of Science, Argonne National Laboratory
http://www.alcf.anl.gov

3:00 p.m. Thursday, 1 November 2012, ITE 456, UMBC

 

The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's current production computer has over 150,000 cores, and the system currently being readied for production, Mira, an IBM Blue Gene/Q system, has nearly one million cores. How does one program such systems? Are current software tools such as MPI and OpenMP available for them? Are scientific and engineering applications able to scale to such levels of parallelism? Is resilience a new concern for production codes running on Mira's nearly 1,000,000 cores? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research. Finally, ways to gain access to ALCF resources will be presented.

Paul Messina is Director of Science at the ALCF. Dr. Messina guides the ALCF science teams using the IBM Blue Gene systems. In 2002-2004, he served as Distinguished Senior Computer Scientist at Argonne and as Adviser to the Director General at CERN (European Organization for Nuclear Research). Previously at Caltech, Dr. Messina served as Director of the Center for Advanced Computing Research, as Assistant Vice President for Scientific Computing, and as Faculty Associate for Scientific Computing. He led the Computational and Computer Science component of Caltech’s research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing Initiative. He also acted as Co-principal Investigator for the National Virtual Observatory and TeraGrid. At Argonne, he held a number of positions from 1973-1987 and was the founding Director of the Mathematics and Computer Science Division.

talk: Experiences Teaching Thousands Online

CSEE Colloquium

Experiences Teaching Thousands Online

Professor Michael L. Littman
Computer Science, Brown University

1:00pm Friday, 26 October 2012, ITE 227, UMBC

Last Fall, a pair of well-respected computer scientists at Stanford offered their AI class for free to people everywhere via the Internet. Over 160,000 students signed up, spurring a worldwide conversation on the impact of online teaching on higher education and sending universities throughout the US scrambling to announce initiatives in this space. I had the good fortune to teach a class for one of the startups and will share my experiences.

Michael L. Littman is a professor of computer science at Brown University. He works mainly in reinforcement learning, but has done work in machine learning, game theory, computer networking, partially observable Markov decision process solving, computer solving of analogy problems, and other areas. He has held faculty positions in the computer science departments at Duke University and Rutgers University, where he chaired the department from 2009 to 2012. In the summer of 2012 he taught a massive open online course (MOOC) on graph algorithms.

Host: Tim Finin

more information and directions

talk: Energy Conservation in Biometric Algorithms

CSEE Colloquium

Energy Conservation in Biometric Algorithms

LCDR Robert Schultz
United States Naval Academy

1:00pm Friday, 19 October 2012, ITE 227

Whether using iris recognition to gain access to a secure facility or face recognition to unlock a cell phone, biometric signal processing is rapidly becoming a part of everyday life. Many algorithms are being implemented on portable devices that have a limited battery life. This talk will present some work, conducted at the USNA Center for Biometric Signal Processing, which indicates that significant energy savings can be obtained by using C rather than Java, and integers rather than software floats, in applications written for the Android operating system. A comparison of the effect of using integers versus floats in a modern iris recognition algorithm will also be presented.
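One reason integer arithmetic can carry so much of an iris pipeline: the core matching step in typical iris recognition is a Hamming distance between binary iris codes, which needs only XOR and a bit count. The sketch below is purely illustrative (made-up codes and sizes, not the USNA implementation), shown in Python for clarity even though the talk's measurements concern C and Java on Android.

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    """Number of differing bits between two iris codes packed as integers.
    Uses only integer XOR and a popcount -- no floating point."""
    return bin(code_a ^ code_b).count("1")


def normalized_hd(code_a: int, code_b: int, n_bits: int) -> float:
    # Only this final normalization touches floating point; on a
    # battery-powered device it could be deferred or done in fixed point.
    return hamming_distance(code_a, code_b) / n_bits


# Two hypothetical 16-bit iris codes.
a = 0b1011001110001111
b = 0b1011101010001101
print(hamming_distance(a, b))    # raw count of differing bits
print(normalized_hd(a, b, 16))   # fraction of bits that differ
```

Keeping the hot loop in integer operations, and confining any floating-point work to a single normalization at the end, is the kind of trade-off the energy measurements in the talk quantify.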

LCDR Robert Schultz is a submarine officer who has been assigned as a Junior Permanent Military Professor of Electrical Engineering at the United States Naval Academy. His research interests include hyperspectral and biometric image processing. As a member of the USNA Center for Biometric Signal Processing, he has recently been working to identify more energy-efficient methods for implementing biometric algorithms.

talk: The 'Learning Health System' as the Consummate Informatics Challenge

UMBC Information Systems Department
Fall 2012 Distinguished Lecture Series

The 'Learning Health System'
as the Consummate Informatics Challenge

Dr. Charles P. Friedman
Professor of Information and Public Health
Director of the Michigan Health Informatics Program
University of Michigan

11:00am 19 October 2012, ITE 456, UMBC

It is widely recognized that the nation requires a Learning Health System (LHS) to provide higher quality, safer, and more affordable health care. An LHS is one that can routinely and securely aggregate data from disparate sources, convert the data to knowledge, and disseminate that knowledge, in actionable forms, to everyone who can benefit from it. Achieving a Learning Health System at national scale requires solution of a wide array of technology and policy problems and, as such, is the consummate challenge in health informatics. This presentation will describe the LHS, why it is vital to our future, the specific problems that must be addressed, and a pathway through which the nation might achieve an LHS.

Charles Friedman directs the Health Informatics program at the University of Michigan. Prior to joining the university in 2011, he was chief scientific officer of the Office of the National Coordinator for Health Information Technology in the U.S. Department of Health and Human Services. From 2007 to 2009 he served as the nation's deputy national coordinator for health IT. He has also held federal positions as associate director for research informatics and information technology at the National Heart, Lung, and Blood Institute at the National Institutes of Health and as senior scholar at the National Library of Medicine. He led the creation of informatics programs during his professorships in medicine, information science, and biomedical engineering at the University of Pittsburgh and the University of North Carolina at Chapel Hill. He is the author of a well-known health informatics textbook and serves as associate editor of the Journal of the American Medical Informatics Association.

see http://bit.ly/SVgTEE for more information
