Department of Computer Science and Electrical Engineering
National Science Foundation Graduate Research Fellowship Program Workshop
On October 3, 2019, Dr. Francis
Ferraro presented a workshop for the National Science Foundation Graduate
Research Fellowship Program (NSF GRFP). During
the workshop, Dr. Ferraro covered many topics including scholarship
eligibility, funding, and the application process. He also provided a detailed
application checklist as well as suggestions for developing personal and
research statements. In addition to giving information about the NSF GRFP, Dr.
Ferraro provided an overview of the graduate school experience.
The application deadline for the NSF GRFP is October 22, 2019.
The purpose of the NSF Graduate Research Fellowship Program (GRFP) is to help ensure the vitality and diversity of the scientific and engineering workforce of the United States. The program recognizes and supports outstanding graduate students who are pursuing full-time research-based master’s and doctoral degrees in science, technology, engineering, and mathematics (STEM) or in STEM education. The GRFP provides three years of support for the graduate education of individuals who have demonstrated their potential for significant research achievements in STEM or STEM education. NSF especially encourages women, members of underrepresented minority groups, persons with disabilities, veterans, and undergraduate seniors to apply.
The fellowship provides three years of funding, usable across five years in 12-month blocks: a stipend of $34,000 per year plus $12,000 per year for tuition/education expenses.
Applicants must be U.S. citizens, nationals, or permanent residents, and must be undergraduate seniors or first- or second-year graduate students.
TALK: Computer Aided Assessment of Computed Tomography Screenings
UMBC ACM Chapter Talk
Computer Aided Assessment of Pulmonary Nodule Malignancy from Low Dose Computed Tomography Screenings
Professor David Chapman, CSEE, UMBC
11:30–12:30, Friday 11 October 2019, ITE 346, UMBC
We propose to develop a novel quantitative algorithm to estimate the probability of malignancy of pulmonary nodules from a time series of successive LDCT screenings in patients at high risk of developing lung cancer. Lung cancer kills approximately 200,000 Americans annually and is responsible for 25% of all cancer-related deaths. Imaging with Low Dose Computed Tomography (LDCT) has been proven to reduce Non-Small Cell Lung Cancer (NSCLC) mortality by 20% and has become part of standard clinical guidelines (NLST 2011a,b). These new guidelines have led hospitals, including Mercy Medical Center in Baltimore, to collect an abundance of LDCT images of high-risk individuals since 2014. These LDCT images, along with additional CT/biopsy and PET/CT images collected by Mercy, have now been organized into an IRB-exempt clinical research dataset of anonymized radiology imagery for training and evaluating improved Computer Aided Diagnosis (CAD) algorithms. Imaging biomarkers, including cross-sectional diameter, calcification patterns, irregular margins, and wall thickness, are known to have discriminating power for differentiating benign and malignant pulmonary nodules. Furthermore, temporal changes in the size and biomarker characteristics of pulmonary nodules across multiple images are also highly informative and yield greater ability to differentiate malignancy. The proposed CAD algorithm will detect and quantify temporal changes of imaging biomarkers in order to estimate malignancy probability. It will use convolutional neural networks for feature extraction and recurrent neural networks to analyze the temporal changes in the extracted features. The Mercy hospital dataset contains approximately 30,000 chest CT images. Training of the algorithm will incorporate semi-supervised learning using chest CT images from Mercy as well as the public portion of the NLST dataset.
A fraction of the Mercy images will be designated for evaluating the sensitivity and specificity of the proposed algorithm in determining nodule malignancy. Pulmonary nodules remain a challenging area for clinical management decision-making, and improved analysis of malignancy, including temporal changes of imaging biomarkers, has the potential to reduce patient morbidity and mortality through earlier and more accurate diagnosis.
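As a rough illustration of the proposed pipeline (not code from the project), the sketch below runs a tiny untrained recurrent model over hand-picked biomarker features. In the actual system the per-scan features would come from a convolutional network and the model would be trained on the Mercy/NLST data; all names and numbers here are hypothetical.

```python
import numpy as np

# Hypothetical per-scan biomarker features (stand-ins for CNN-extracted
# features): [diameter_mm, wall_thickness_mm, margin_irregularity].
# Three successive LDCT screenings of one nodule; the nodule is growing.
scans = np.array([
    [4.0, 0.8, 0.1],
    [5.5, 1.0, 0.3],
    [7.5, 1.3, 0.6],
])

def malignancy_probability(scans, Wx, Wh, w_out, b_out):
    """Toy recurrent read-out over the time series of scan features:
    a plain RNN cell followed by a logistic output, mimicking the
    CNN-features -> RNN -> probability pipeline described above."""
    h = np.zeros(Wh.shape[0])
    for x in scans:                   # iterate over successive screenings
        h = np.tanh(Wx @ x + Wh @ h)  # temporal state update
    z = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-z))   # probability of malignancy

# Untrained random weights, for shape/flow illustration only.
rng = np.random.default_rng(0)
Wx, Wh = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
w_out, b_out = rng.normal(size=4), 0.0
p = malignancy_probability(scans, Wx, Wh, w_out, b_out)
```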
Talk: Localization of Brain Activations Based on EEG Recordings and Sparse Signal Recovery Theory
Athina P. Petropulu, Electrical and Computer Engineering, Rutgers University
1:00-2:00pm Friday, 1 November 2019
Room 104, ITE Building, UMBC
Sparse signal recovery is often formulated as an l1-norm minimization problem. However, unless certain conditions are satisfied, there is no guarantee that the least l1-norm solution will also be a sparsest solution. In this talk, we show that by appropriately weighting the sensing matrix, we can formulate an l1-norm minimization problem whose solution is guaranteed to be one of the sparsest solutions. The weights can be obtained based on a low resolution estimate of the sparse signal, obtained for example via a method that does not encourage sparsity.
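The weighted formulation described above can be posed as a linear program. The toy example below (not from the talk) uses the minimum-norm least-squares solution as the low-resolution estimate that sets the weights; all dimensions and parameter choices are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve  min ||diag(w) x||_1  s.t.  A x = b  as a linear program:
    introduce t >= |x| elementwise and minimize sum_i w_i * t_i."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])      # objective acts on t only
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])      # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])   # equality constraint A x = b
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub, b_ub, A_eq, b, bounds=bounds)
    assert res.success
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [3.0, -2.0]                  # sparse ground truth
b = A @ x_true

# Weights from a low-resolution estimate that does not encourage sparsity,
# here the minimum-norm least-squares solution; large weights suppress
# entries the coarse estimate deems small.
x_lowres = np.linalg.lstsq(A, b, rcond=None)[0]
w = 1.0 / (np.abs(x_lowres) + 1e-2)
x_hat = weighted_l1_min(A, b, w)
```

The recovered `x_hat` is feasible and has the smallest weighted l1-norm among all solutions of `A x = b`.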
The proposed weighting approach is a good candidate for Electroencephalography (EEG) sparse source localization, where measurements from sensors placed on a subject’s head are used to localize activations inside the brain. In many cases, the locations of these activations are related to the subject’s reactions or intentions, and estimating them via a non-invasive and inexpensive modality like EEG can find applications in several domains, including cognitive and clinical neuroscience as well as brain-computer interfaces (BCIs). In response to simple tasks, brain activations are sparse, and thus their localization based on EEG recordings can be formulated as a sparse signal recovery problem. In this case, the corresponding basis matrix, referred to as the lead field matrix, has high mutual coherence, which means that the least l1-norm solution will not necessarily recover the brain sources. In spite of the high coherence of the lead field matrix, the proposed weighting approach can still estimate the sources inside the brain. In this talk, this is demonstrated by localizing active sources in the brain corresponding to an auditory task from EEG recordings of human subjects.
Athina P. Petropulu received her undergraduate degree from the National Technical University of Athens, Greece, and the M.Sc. and Ph.D. degrees from Northeastern University, Boston, MA, all in Electrical and Computer Engineering. She is a Distinguished Professor in the Electrical and Computer Engineering (ECE) Department at Rutgers, having served as chair of the department during 2010-2016. Before joining Rutgers in 2010, she was on the faculty at Drexel University. She has held Visiting Scholar appointments at SUPELEC, Université Paris-Sud, Princeton University, and the University of Southern California. Dr. Petropulu’s research interests span statistical signal processing, wireless communications, signal processing in networking, physical layer security, and radar signal processing. Her research has been funded by various government and industry sponsors, including the National Science Foundation (NSF), the Office of Naval Research, the US Army, the National Institutes of Health, the Whitaker Foundation, Lockheed Martin, and Raytheon.
Talk: how algorithms are shaping our lives, 1pm Thu Oct. 17, ITE 104
Alfred Aho, Lawrence Gussman Professor Emeritus of Computer Science, Columbia University
1:00-2:00pm Thursday, 17 October 2019, ITE 104, UMBC
Dr. Aho will explain what algorithms are and how they have evolved over several millennia. Algorithms are now shaping all aspects of our lives, from healthcare to jobs to entertainment. Good algorithms can enrich our lives; unfortunately, bad algorithms can wreak havoc. This raises an important societal question: should we regulate algorithms so they don’t totally distort our lives, and if so, how should we do it? The fundamental nature of algorithms makes this an unusually difficult challenge.
Alfred Aho joined the Department of Computer Science at Columbia in 1995 and served as Chair of the department from 1995 to 1997, and again in the spring of 2003. He has a B.A.Sc. in Engineering Physics from the University of Toronto and a Ph.D. in Electrical Engineering/Computer Science from Princeton University.
Professor Aho won the Great Teacher Award for 2003 from the Society of Columbia Graduates. In 2014 he was again recognized for teaching excellence by winning the Distinguished Faculty Teaching Award from the Columbia Engineering Alumni Association. He has received the IEEE John von Neumann Medal and is a Member of the U.S. National Academy of Engineering and of the American Academy of Arts and Sciences. He is a Fellow of the Royal Society of Canada. He shared the 2017 C&C prize with John Hopcroft and Jeff Ullman. He has received honorary doctorates from the Universities of Helsinki, Toronto and Waterloo, and is a Fellow of the American Association for the Advancement of Science, ACM, Bell Labs, and IEEE.
Professor Aho is a co-inventor of AWK, a widely used pattern-matching language. He also wrote the initial versions of the UNIX string pattern-matching utilities egrep and fgrep; fgrep was the first widely used implementation of what is now called the Aho-Corasick algorithm. His research interests include programming languages, compilers, algorithms, software engineering, and quantum computation.
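The Aho-Corasick algorithm mentioned above can be sketched compactly: build a trie of the patterns, add failure links breadth-first, then scan the text in a single pass. A minimal illustrative implementation (not fgrep's actual code):

```python
from collections import deque

def build_automaton(patterns):
    """Trie of patterns plus BFS-computed failure links and output sets."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # 1. build the trie
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append(set())
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].add(pat)
    queue = deque(goto[0].values())            # 2. failure links, breadth-first
    while queue:
        node = queue.popleft()
        for ch, nxt in goto[node].items():
            queue.append(nxt)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]         # inherit matches ending here
    return goto, fail, out

def search(text, patterns):
    """Return (start_index, pattern) for every match, in one pass."""
    goto, fail, out = build_automaton(patterns)
    node, found = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]                  # follow failure links
        node = goto[node].get(ch, 0)
        for pat in out[node]:
            found.append((i - len(pat) + 1, pat))
    return found

matches = search("ushers", ["he", "she", "his", "hers"])
```

On the classic example above, scanning "ushers" finds "she", "he", and "hers" in one left-to-right pass, without backtracking over the text.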
Talk: Impacting healthcare through collaborative technology innovations, 1:30pm Mon 7 Oct
Impacting healthcare through collaborative technology innovations
Mohanasankar Sivaprakasam, IIT Madras
India’s healthcare scenario presents a set of unique challenges for ensuring effective delivery of care to the large population suffering from various communicable and non-communicable diseases. The medical devices market in India is largely served by imports, most of which were not designed to handle the constraints and requirements of the country’s care delivery system and market. While this presents a significant challenge to established players, it is an exciting opportunity for innovators and entrepreneurs to create and scale indigenous technology solutions tailored to local needs. However, developing affordable technology solutions that create large impact and can achieve scale in India requires a deep understanding of the care delivery system and strong partnerships with various stakeholders in the ecosystem.
The Healthcare Technology Innovation Centre (HTIC) of IIT Madras focuses on developing and commercializing affordable healthcare technologies through its team of over 200 engineers, doctors, researchers, students, and entrepreneurs working with more than 30 medical institutions, industries, and government agencies. The talk will highlight some of its technology successes and the use of AI and machine learning in tackling healthcare challenges. The potential of these technologies beyond the Indian market will also be discussed.
Mohanasankar Sivaprakasam is a faculty member in Electrical Engineering at IIT Madras and Director of the Healthcare Technology Innovation Centre (HTIC), an R&D centre of IIT Madras. After eight years of Ph.D. and postdoctoral research in the US on implantable medical devices, he returned to India with the goal of developing affordable medical technologies in the country. Since 2009, he has built an ecosystem of technologists, clinicians, and industry, culminating in the establishment of HTIC in 2011. Over the years, HTIC has grown into a unique and leading med-tech innovation ecosystem in the country, bringing together more than 20 medical institutions, industry partners, and government agencies to collaborate on developing affordable medical technologies for unmet healthcare needs. He has more than 70 peer-reviewed publications in journals and conferences.
talk: Three Related Takes on Investigating Human-Like Intelligence
Paul Rosenbloom, Professor of Computer Science and Director of Cognitive Architecture Research, Institute for Creative Technologies, University of Southern California
1:00-2:00pm Friday, 11 October 2019, ITE 325b, UMBC
This talk explores a trio of related takes on how to investigate the nature of human-like intelligence. The first concerns cognitive architectures – implemented models of the fixed structure and processes that yield natural and artificial minds – with a drill down to Sigma, an attempt at a deep synthesis across what has been learned over the past four decades on (what started as) high-level symbolic cognitive architectures versus the low-level graphical/network technologies of probabilistic graphical models (such as Bayesian networks) and neural networks. The second concerns a more abstract attempt at specifying a Common Model of Cognition that yields an evolving community consensus over what must be part of any cognitive architecture for human-like intelligence. The final take concerns an even more abstract (and speculative) attempt at understanding more deeply the space of approaches to intelligence – framed as maps resulting from cross products among core cognitive dichotomies – along with how such maps may help to understand and structure the capabilities required for (human-like) intelligence.
Paul Rosenbloom is a professor of computer science in the Viterbi School of Engineering at the University of Southern California (USC) and director for cognitive architecture research at USC’s Institute for Creative Technologies (ICT). He was a member of USC’s Information Sciences Institute for two decades, ending as its deputy director, and earlier was on the faculty at Carnegie Mellon University and Stanford University (where he had a joint appointment in Computer Science and Psychology). His research concentrates on cognitive architectures (models of the fixed structures and processes that together yield a mind), the Common Model of Cognition (an attempt at developing a community consensus concerning what must be part of a human-like mind), and on computing as a scientific domain (understanding the computing sciences as akin to the physical, life and social sciences). He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Cognitive Science Society; and with J. Laird was recently awarded the Herbert A. Simon Prize for Advances in Cognitive Systems. He has served as councilor and conference chair for AAAI; chair of the Association for Computing Machinery Special Interest Group on Artificial Intelligence; and president of the faculty at USC.
talk: Bringing Social, Information, and Natural Sciences together to Understand Human Transformation of Earth
Department of Geography and Environmental Systems
Bringing Social, Information, and Natural Sciences together to Understand Human Transformation of Earth
Dr. Erle Ellis, UMBC
12:00-1:00pm Wednesday, 25 September 2019, ITE 229
The principal investigator of a UMBC-led “massively collaborative” project published in Science magazine will describe how archaeologists, geographers, and information scientists came together to show that human societies began transforming Earth thousands of years earlier than earth scientists had recognized, providing evidence for an earlier Anthropocene.
talk: Analysis of the Secure Remote Password (SRP) Protocol Using CPSA
The UMBC Cyber Defense Lab presents
Analysis of the Secure Remote Password (SRP) Protocol Using CPSA
Erin Lanus, UMBC Cyber Defense Lab
12:00–1:00pm, Friday, 6 September 2019, ITE 227, UMBC
Joint work with Alan Sherman, Richard Chang, Enis Golaszewski, Ryan Wnuk-Fink, Cyrus Bonyadi, Mario Costa, Moses Liskov, and Edward Zieglar
Secure Remote Password (SRP) is a widely deployed password authenticated key exchange (PAKE) protocol used in products such as 1Password and iCloud Keychain. As with other PAKE protocols, the two participants in SRP use knowledge of a pre-shared password to authenticate each other and establish a session key. I will explain the SRP protocol and security goals it seeks to achieve. I will demonstrate how to model the protocol using the Cryptographic Protocol Shapes Analyzer (CPSA) tool and present my analysis of the shapes produced by CPSA.
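For readers unfamiliar with SRP, here is a toy sketch of the exchange (not the talk's CPSA model) over a deliberately tiny group. Real deployments use large safe-prime groups such as those in RFC 5054, and the exact hashing and padding details vary by SRP version.

```python
import hashlib

# Toy SRP-style exchange over a deliberately tiny group (N = 23, g = 5).
# Insecure and simplified -- for intuition only.
N, g = 23, 5

def H(*args):
    """Hash arbitrary values to an integer."""
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

salt, password = "s0", "correct horse"
x = H(salt, password)           # client's long-term secret from the password
v = pow(g, x, N)                # password verifier stored by the server
k = H(N, g) % N                 # multiplier parameter

a, b = 6, 11                    # ephemeral secrets (fixed here for the demo)
A = pow(g, a, N)                # client -> server
B = (k * v + pow(g, b, N)) % N  # server -> client (blinded by k*v)
u = H(A, B)                     # public scrambling parameter

# Both sides derive g^(ab + u*b*x) mod N without sending the password:
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
```

Since the client computes `(g^b)^(a+ux)` and the server computes `(g^a * g^(ux))^b`, both arrive at the same session key, yet only a party knowing the password (or the verifier) can do so.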
Erin Lanus earned her Ph.D. in computer science in May 2019 from Arizona State University. She is currently conducting research in Professor Sherman’s Protocol Analysis Lab at UMBC. Her previous results include using state to enable CPSA to reason about time in forced-latency protocols. Her research has also explored algorithmic approaches to constructing combinatorial arrays for interaction testing, and the creation of a new type of array for attribute distribution that achieves anonymous authorization in attribute-based systems. In October she will begin as a research assistant professor at Virginia Tech’s Hume Center in Northern Virginia.
Support for this research was provided in part by grants to CISA from the Department of Defense, CySP grants H98230-17-1-0387 and H98230-18-0321.
talk: Correlation analysis with small sample sizes, 2pm Tue 6/18, UMBC
Correlation analysis with small sample sizes
Peter Schreier, Univ. of Paderborn, Germany
2:00-3:00pm Tuesday, 18 June 2019, ITE 325B, UMBC
Most common techniques for correlation analysis (e.g., canonical correlation analysis) require sufficiently large sample support, but in many applications only a limited number of samples are available. Correlation analysis with small sample sizes poses some unique challenges. In this talk, I will focus on the problem of determining the correlated components between two or more data sets when the number of samples from these data sets is extremely small. Applications are plentiful, and among them I will discuss the identification of weather patterns in climate science and analyzing the effects of extensive physical exercise on the autonomic nervous system.
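The small-sample pitfall the talk addresses is easy to reproduce: when the number of samples is comparable to the data dimensions, sample canonical correlations come out spuriously close to 1 even for independent data sets. A minimal numpy demonstration (illustrative, not the speaker's method):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Sample canonical correlations between data sets X (n x p) and
    Y (n x q): singular values of Qx^T Qy, where Qx, Qy are orthonormal
    bases of the centered data (a standard QR/SVD route to CCA)."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

rng = np.random.default_rng(0)
p = q = 15

# With only n = 20 samples, two *independent* 15-dimensional data sets
# produce leading sample canonical correlations equal to 1, because their
# column spaces are forced to intersect -- pure overfitting.
corr_small = canonical_correlations(rng.normal(size=(20, p)),
                                    rng.normal(size=(20, q)))

# With n = 2000 samples the spurious correlation largely disappears.
corr_large = canonical_correlations(rng.normal(size=(2000, p)),
                                    rng.normal(size=(2000, q)))
```

The geometric reason: two 15-dimensional column spaces inside the 19-dimensional space of centered 20-sample data must share at least an 11-dimensional intersection, so at least eleven sample canonical correlations equal 1 regardless of the data.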
Peter Schreier was born in Munich, Germany, in 1975. He received a Master of Science from the University of Notre Dame, IN, USA, in 1999, and a Ph.D. from the University of Colorado at Boulder, CO, USA, in 2003, both in electrical engineering. From 2004 until 2011, he was on the faculty of the School of Electrical Engineering and Computer Science at the University of Newcastle, NSW, Australia. Since 2011, he has been Chaired Professor of Signal and System Theory at Paderborn University, Germany. He has spent sabbatical semesters at the University of Hawaii at Manoa, Honolulu, HI, and Colorado State University, Ft. Collins, CO.
From 2008 until 2012, he was an Associate Editor of the IEEE Transactions on Signal Processing, from 2010 until 2014 a Senior Area Editor for the same Transactions, and from 2015 to 2018 an Associate Editor for the IEEE Signal Processing Letters. From 2009 until 2014, he was a member of the IEEE Technical Committee on Machine Learning for Signal Processing, and he currently serves on the IEEE Technical Committee on Signal Processing Theory and Methods. He is the Chair of the Steering Committee of the IEEE Signal Processing Society’s Data Science Initiative, and he serves on the IEEE SPS Regional Committee for Region 8. He was the General Chair of the 2018 IEEE Statistical Signal Processing Workshop in Freiburg, Germany.
talk: Tensor Decomposition of ND data arrays, 2pm 6/13 ITE325
Tensor Decomposition of ND data arrays
Prof. David Brie, University of Lorraine
2:00pm Thursday, 13 June 2019, ITE 325B, UMBC
The goal of this talk is to give an introduction to tensor decompositions for the analysis of multidimensional data. First, we recall some basic notions and operations on tensors. Then two tensor decompositions are presented: the Tucker decomposition (TD) and the Candecomp/Parafac decomposition (CPD). A particular focus is placed on the identifiability conditions of the CPD. Finally, various applications in biology are presented.
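As a concrete illustration of the CPD, here is a minimal alternating-least-squares sketch for a 3-way tensor (illustrative only; production code would use a dedicated tensor library and add factor normalization and convergence checks):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R)."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def cp_als(T, R, iters=50, seed=0):
    """Rank-R Candecomp/Parafac decomposition of a 3-way tensor via
    alternating least squares: T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r]."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(dim, R)) for dim in T.shape)
    for _ in range(iters):  # update each factor with the others fixed
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover an exactly rank-1 tensor from random initialization.
rng = np.random.default_rng(1)
T = np.einsum('i,j,k->ijk', rng.normal(size=4),
              rng.normal(size=5), rng.normal(size=6))
A, B, C = cp_als(T, R=1)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Each update is an ordinary least-squares solve because, with two factors fixed, the unfolded tensor is linear in the remaining factor; this is the standard workhorse for fitting the CPD.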
David Brie received the Ph.D. degree in 1992 and the Habilitation à Diriger des Recherches degree in 2000, both from the Université de Lorraine, France. He is currently a full professor in the Department of Telecommunications and Networking of the Institut Universitaire de Technologie, Université de Lorraine. He has been editor-in-chief of the French journal “Traitement du Signal” since 2013 and is co-general chair of IEEE CAMSAP 2019. His current research interests include vector-sensor-array processing, spectroscopy and hyperspectral image processing, non-negative matrix factorization, multidimensional signal processing, and tensor decompositions.