Capital Area Women in Computing Celebration, 2/24-25

The Capital Area Women in Computing Celebration, sponsored by ACM-W, will be held at Georgetown University on Friday, February 24 and Saturday, February 25.

The celebration will bring together women at the high school, undergraduate, graduate, and professional levels to promote the recruitment, retention, and progression of women in computing fields.

The cost of student attendance is modest: $50 with a shared hotel room, or $25 without a hotel room. Scholarships are also available.

To get more information and to register, visit the CAPWIC 2017 Web site.

Reasons to Attend

  • Share your work and ideas with your peers and experts during the poster session, flash talk, or technical short.
  • Be inspired. Meet technical women like you and celebrate your accomplishments together.
  • Hear success stories of technical women who made it this far!
  • Broaden your skills by attending a workshop.
  • Meet recruiters from business, industry, and academia for internships, jobs, or graduate programs.
  • Find a new job or internship by bringing your resume to our career fair.
  • Did we mention that it is FUN?

Attacking and Defending the Automotive CAN Bus

MS Thesis Defense

Attacking and Defending the Automotive CAN Bus

Jackson Schmandt

12:30pm Thursday, 8 December 2016, ITE 325b, UMBC

The scope and complexity of automotive computer networks have grown drastically in the last decade. Once present only in high-end vehicles, multi-use infotainment systems are now included in base models of some economy vehicles. Frequently connected to drivetrain components, these systems expose multiple network access points, many of which are wireless. This unprecedented access has led to several high-profile exploits from both white-hat hackers and criminals. Although industry members are working toward long-term solutions, current systems suffer from inadequate protocol security and a lack of common-sense design practices. To address the security problem in the short term, this thesis describes a flexible Message Authentication Code that can be retrofitted with software only, as well as implementations on microcontrollers, an FPGA, and an ASIC design. This work shows that message authentication tags can be generated or verified in under 400 microseconds on current embedded controllers and in under 10 microseconds on a special-purpose ASIC.
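The abstract does not give the thesis's exact MAC construction, so the following is only a hedged sketch of the general retrofit idea it describes: authenticate each frame's ID, a replay counter, and its payload with a keyed MAC, truncated so the tag can fit in the spare bytes of a CAN data field. All names, field sizes, and the choice of HMAC-SHA256 here are illustrative assumptions, not the thesis's design.

```python
import hmac
import hashlib
import struct

def can_tag(key: bytes, can_id: int, counter: int,
            payload: bytes, tag_len: int = 4) -> bytes:
    """Compute a truncated MAC over a CAN frame (illustrative sketch).

    The frame ID, a monotonically increasing counter (for replay
    protection), and the data payload are authenticated together;
    truncating to tag_len bytes lets the tag share the 8-byte CAN
    data field with the payload.
    """
    msg = struct.pack(">IQ", can_id, counter) + payload
    return hmac.new(key, msg, hashlib.sha256).digest()[:tag_len]

def verify_tag(key: bytes, can_id: int, counter: int,
               payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking tag bytes via timing.
    expected = can_tag(key, can_id, counter, payload, len(tag))
    return hmac.compare_digest(expected, tag)
```

A receiver that tracks the last-seen counter per frame ID would reject both forged and replayed frames at the cost of a few tag bytes per message.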

Committee Members: Drs. Nilanjan Banerjee (chair), Alan Sherman (co-chair) and Anupam Joshi

PhD defense: Deep Neural Networks in Real-Time Embedded Systems


PhD Dissertation Defense

Deploying Deep Neural Networks in Real-Time Embedded Systems

Adam Page

10:00am Monday, 21 November 2016, ITE 325b

Deep neural networks have been shown to outperform prior state-of-the-art solutions that rely heavily on hand-engineered features coupled with simple classification techniques. In addition to achieving several orders of magnitude improvement, they offer a number of additional benefits, such as the ability to perform end-to-end learning by performing both hierarchical feature abstraction and inference. Furthermore, their success continues to be demonstrated in a growing number of fields for a wide range of applications, including computer vision, speech recognition, biomedicine, and model forecasting. As this area of machine learning matures, a major challenge that remains is the ability to efficiently deploy such deep networks in embedded, resource-bound settings that have strict power and area budgets. While GPUs have been shown to improve throughput and energy efficiency over traditional computing paradigms, they still impose a significant power burden in such low-power embedded settings. In order to further reduce power while still achieving the desired throughput and accuracy, classification-efficient networks are required in addition to optimal deployment onto embedded hardware.

In this dissertation, we target both of these enterprises. For the first objective, we analyze simple, biologically inspired reduction strategies that are applied both before and after training. The central theme of these techniques is the introduction of sparsification to help dissolve away the dense connectivity often found at different levels of neural networks. The sparsification techniques developed include feature compression partition, structured filter pruning, and dynamic feature pruning.
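As a hedged illustration of one of the ideas named above, structured filter pruning, the sketch below ranks a convolutional layer's filters by L1 magnitude and keeps only the strongest, removing whole output channels rather than scattering zeros. This is a generic magnitude-based variant, not necessarily the dissertation's exact criterion; names and the keep ratio are assumptions.

```python
import numpy as np

def prune_filters(weights: np.ndarray, keep_ratio: float = 0.5):
    """Structured filter pruning by L1 magnitude (illustrative sketch).

    weights: conv-layer tensor of shape (out_ch, in_ch, k, k).
    The filters with the smallest L1 norms are removed entirely,
    shrinking the layer's output-channel count; this keeps the
    remaining computation dense and hardware-friendly.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the n_keep largest-norm filters, in original order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep
```

In a full pipeline the downstream layer's input channels would be sliced with the same `keep` indices, and the network fine-tuned to recover accuracy.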

In the second contribution, we propose scalable, hardware-based accelerators that enable deploying networks in such resource-bound settings, both by exploiting efficient forms of parallelism inherent in convolutional layers and by exploiting the sparsification and approximation techniques proposed. In particular, we developed SPARCNet, an efficient and scalable hardware convolutional neural network accelerator, along with a corresponding resource-aware API to reduce, translate, and deploy a pre-trained network. The SPARCNet accelerator has been fully implemented in FPGA hardware, successfully employed for a number of case studies, and evaluated against several existing state-of-the-art embedded platforms, including NVIDIA Jetson TK1/TX1, in real time. A full hardware demonstration with the developed API will be showcased, enabling selection between hardware platforms and state-of-the-art vision datasets while performing real-time power, throughput, and classification analysis.

Committee: Drs. Tinoosh Mohsenin (chair), Anupam Joshi, Tim Oates, Mohamed Younis, Farinaz Koushanfar

Dissertation defense: Cross-Layer Techniques for Boosting Base-Station Anonymity in Wireless Sensor Networks

Dissertation Defense Announcement

Cross-Layer Techniques for Boosting Base-Station Anonymity in Wireless Sensor Networks

Sami Alsemairi

9:30am Wednesday, 9 November 2016, ITE 346

Wireless Sensor Networks (WSNs) provide an effective solution for surveillance and data-gathering applications in hostile environments where human presence is infeasible, risky, or very costly. Examples of these applications include military reconnaissance, guarding borders against human trafficking, security surveillance, etc. A WSN is typically composed of a large number of sensor nodes that probe their surroundings and transmit measurements over multi-hop paths to an in-situ Base-Station (BS). The BS not only acts as a sink for all collected sensor data but also provides network management and serves as a gateway to remote command centers. Such an important role makes the BS a target of adversary attacks that aim to achieve Denial-of-Service (DoS) and nullify the WSN's utility to the application. Even if the WSN applies conventional security mechanisms such as authentication and data encryption, the adversary may apply traffic analysis techniques to locate the BS and target it with attacks. This motivates a significant need to boost BS anonymity in order to conceal its location.

In this dissertation, we address the challenges of BS anonymity and develop a library of techniques to counter the threat of traffic analysis. The focus of our work is on the link and network layers. We first exploit packet combining as a means to vary the traffic density throughout the network. We call this technique combining the data payload of multiple packets (CoDa): a node groups the payload of multiple incoming data packets into a single packet that is forwarded toward the BS. CoDa cuts the number of transmissions that constitute evidence implicating the BS as the destination of all traffic, and thus degrades the adversary's ability to conduct effective traffic analysis.
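A minimal sketch of the payload-combining idea at a forwarding node follows. The length-prefixed layout and group size are illustrative assumptions; the actual CoDa packet format is not specified in this summary.

```python
class CoDaCombiner:
    """Sketch of CoDa-style payload combining at a forwarding node.

    Incoming data payloads are buffered and merged into a single
    outgoing packet once `group_size` payloads have accumulated,
    reducing the number of transmissions an eavesdropper can
    correlate on the path toward the base station.
    """
    def __init__(self, group_size: int = 4):
        self.group_size = group_size
        self.buffer = []

    def receive(self, payload: bytes):
        """Buffer a payload; return one combined packet when the group fills."""
        self.buffer.append(payload)
        if len(self.buffer) < self.group_size:
            return None  # still accumulating; nothing transmitted yet
        # Length-prefix each payload so the BS can split them apart again.
        combined = b"".join(len(p).to_bytes(2, "big") + p
                            for p in self.buffer)
        self.buffer = []
        return combined
```

With a group size of k, a node's outgoing transmission count drops by roughly a factor of k, which is what flattens the traffic gradient toward the BS.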

Next we develop a novel technique for increasing BS anonymity by establishing a sleep/active schedule among the nodes that are far away from the BS, and increasing the traffic density in selected parts of the network in order to give the impression that the BS is located in the vicinity of the sleeping nodes. We call this technique Adaptive Sampling Rate for increased Anonymity (ASRA). Moreover, we develop three novel techniques based on a hierarchical routing topology. The first, which we call Hierarchical Anonymity-aware Routing Topology (HART), forms clusters and an inter-cluster-head routing topology so that a high traffic volume can be observed in areas away from the BS. The second is a novel cross-layer technique that forms a mesh topology; we call it cluster mesh topology to boost BS anonymity (CMBA). CMBA aims to establish a routing topology such that the traffic pattern does not implicate any particular node as a sink.

The third technique creates multiple mesh-based routing topologies among the cluster-heads (CHs). By applying closed space-filling curves, such as the Moore curve, to form a mesh, the CHs are offered a number of choices for disseminating aggregated data to the BS through inter-CH paths. The BS also forwards aggregated data so that it appears to be just one of the CHs. We call this technique boosting the BS anonymity through multiple mesh-based routing topologies (BAMT). We validate the effectiveness of all anonymity-boosting techniques through simulation and highlight the trade-off between anonymity and overhead.

Committee: Drs. Mohamed Younis (Chair), Charles Nicholas, Chintan Patel, Richard Forno and Waleed Youssef

Career and internship opportunities at Google, 9/29-30


Interested in learning more about Google?
Come hear it from Googlers and UMBC alumni!

On Thursday, Sept. 29 and Friday, Sept. 30, Google will host hour-long tech/culture/info talk events on campus for UMBC students to learn more about Google and the internship and career opportunities it offers to students. They will have food, swag, and many internship and full-time opportunities for students.

Check out the details below and register for the event(s) HERE. If you're interested in Google opportunities, make sure to include a soft copy of your resume.


Who: All Computer Science and Engineering students, regardless of the degree they are pursuing, and anyone else with an interest in software development are welcome! The one exception is the first event, at 1pm on Thursday 09/29, which is designed for PhD engineering students.

Why: Learn more about Google’s hiring process, culture, technology, job and/or internship opportunities, and more – directly from a Googler!

What to do next?: Register for the event HERE! Make sure your resume and LinkedIn profile are up to date (feel free to link both in the form above) and, of course, come with lots of good questions!

Here’s information on the four events:

  • What: Info Sharing: Google PhD Info Session for PhD CS/Engineering Students
    When: 9/29, Thursday, 1pm – 3pm
    Where: Commons 318 RSVP: RSVP Form

  • What: Info Sharing: Resume Tips & Tricks for Technical Opportunities
    When: 9/29, Thursday, 4pm – 5pm
    Where: Commons 331 RSVP: RSVP Form

  • What: Talk and Workshop: Google Technical Interview Prep Workshop
    When: 9/30, Friday, 1pm – 2:30pm
    Where: Commons 331 RSVP: RSVP Form

  • What: Tech Talk: Google AppEngine, Simple & Scalable Solution for Startups
    When: 9/30, Friday, 3pm – 4pm
    Where: Commons 329 RSVP: RSVP Form


Are you interested in assistive robotics research?


Kavita Krishnaswamy is a Ph.D. candidate in the UMBC Computer Science program and has Spinal Muscular Atrophy (SMA), a neuromuscular disorder that affects the control of muscle movement.

Her goal is to develop robotics aids to increase independence for people with physical disabilities like herself. As part of her research she is conducting a survey on attitudes toward robotic aids and how they may improve the quality of life for those with physical disabilities, their family members, and their caregivers.

If you have a physical disability, are a caregiver for a person with a physical disability, or are a friend or family member of a person with a physical disability, you can help Kavita with her research by participating in the survey. Participation is voluntary and anonymous. Participants must be 18 years or older. You can access the survey here.

This study has been reviewed and approved by the UMBC Institutional Review Board (IRB). A representative of that Board, from the Office for Research Protections and Compliance, is available to discuss the review process or Kavita’s rights as a research participant. The Office can be contacted at (410) 455-2737.

Consider pursuing an advanced degree in computing


The Computing Research Association has published five short videos to explain the benefits of pursuing a PhD in a computing discipline. The videos showcase young researchers with PhDs who are now working in industry as they talk about what compelled them to pursue a doctorate and how they are using their advanced training in their work. The videos illustrate how a PhD is useful in industry as well as in academia.


Click to watch all five videos or select one below.
  • Video 1: Adrienne Porter Felt (PhD Berkeley) talks about her work on security at Google.
  • Video 2: Hoda Eldardiry (PhD Purdue) talks about her work on predictive analytics, using machine learning and data mining at Palo Alto Research Center (PARC).
  • Video 3: Susanna Ricco (PhD Duke) and Mac Mason (PhD Duke) at Google talk about their work in robotics and vision.
  • Video 4: Richard Socher (PhD Stanford) talks about his work in artificial intelligence at Salesforce.
  • Video 5: Tiffany Chen (PhD Stanford) talks about her work in bioinformatics at Cytobank.

Omar Shehab PhD defense: Solving Mathematical Problems in Quantum Regime, 7/7


Ph.D. Dissertation Defense
Computer Science and Electrical Engineering

Solving Mathematical Problems in Quantum Regime

Omar Shehab

2:00pm Thursday, 7 July 2016, ITE 325b

In this dissertation, I investigate a number of algorithmic approaches in the quantum computational regime for solving mathematical problems. My problems of interest are the graph isomorphism and graph automorphism problems, and the complexity of memory recall in Hopfield networks. I show that the hidden subgroup algorithm, based on quantum Fourier sampling, always fails to construct the automorphism group for the class of cycle graphs. I discuss what we may infer from this result for a few non-trivial classes of graphs. This raises the question, which I also discuss in this dissertation, of whether the hidden subgroup algorithm is the best approach for these kinds of problems. I give a correctness proof of the Hen-Young quantum adiabatic algorithm for graph isomorphism on cycle graphs; to the best of my knowledge, this result is the first of its kind. I also report a proof-of-concept implementation of a quantum annealing algorithm for the graph isomorphism problem on a commercial quantum annealing device, which is also, to the best of my knowledge, the first of its kind, and I discuss the worst case for the algorithm. Finally, I show that quantum annealing helps us achieve exponential capacity for Hopfield networks.

Committee: Drs. Samuel J Lomonaco Jr. (Chair), Milton Halem, Yanhua Shih, William Gasarch and John Dorband

Travel grants for students to attend 2016 Grace Hopper Conference

Google will fund travel grants to the 2016 Grace Hopper Celebration of Women in Computing Conference (GHC) which takes place in Houston, Oct 19-21, 2016. The GHC is the world’s largest gathering of women technologists and offers many valuable resources to students and academics alike, from a Student Opportunity Lab to tracks specifically designed to educate and inspire faculty. Its career fair, one of the largest in the U.S., earns a 97% satisfaction rate from our student survey respondents.

University students and industry professionals in the US and Canada who are excelling in computing and passionate about supporting women in tech can apply for a travel grant to attend the 2016 Grace Hopper conference. Sponsorship includes: conference registration, round trip flight to Houston, TX, arranged hotel accommodations from October 18-22, $75 USD reimbursement for miscellaneous travel costs and a fun social event with your fellow travel grant recipients on one of the evenings of the conference.

Apply by Sunday, July 10 using this online form. The Grace Hopper Travel Grant recipients will be announced by July 27th.

PhD defense: Z. Wang, Learning Representations and Modeling Temporal Signals

Computer Science PhD Dissertation Defense

Learning Representations and Modeling Temporal Signals:
Symbolic Approximation, Deep Learning, Optimization and Beyond

Zhiguang Wang

1:00pm Tuesday, 31 May 2016, ITE 325, UMBC

Most real-world data has a temporal component, whether it consists of measurements of natural or man-made phenomena. Complex, high-dimensional, and noisy temporal data are often difficult to model because the intrinsic temporal/topographic structures are highly non-linear, which makes the learning and optimization procedure more complicated. This talk will cover three correlated but self-contained topics to address the problems of representation learning in time series, deep learning optimization, and unsupervised feature learning.

First, I will show how to incorporate ideas from symbolic approximation with simple NLP techniques to represent and model temporal signals. To improve the symbolic approximation for modeling signals as words, we build a time-delay embedding vector (a.k.a. skip-gram) to extract the dependencies at different time scales, which yields state-of-the-art classification performance with a bag-of-patterns and vector space model. A non-parametric pooling/weighting scheme is proposed to extend the methods to multivariate signals.
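The symbolization-to-bag-of-patterns pipeline described above can be sketched with a standard SAX-style recipe: slide a window over the series, z-normalize it, reduce it to a few piecewise-aggregate segments, and map segment means to letters via Gaussian breakpoints. Parameters, breakpoints, and names below are illustrative assumptions, not the dissertation's exact settings.

```python
import numpy as np
from collections import Counter

def sax_bag_of_patterns(series, word_len=4, alphabet="abcd", window=8):
    """Bag-of-patterns sketch using SAX-style symbolization.

    Each sliding window is z-normalized, averaged down to word_len
    piecewise-aggregate (PAA) segments, and each segment mean is
    mapped to a letter using breakpoints that split the standard
    normal into equiprobable bins (here, four bins).
    """
    breakpoints = np.array([-0.6745, 0.0, 0.6745])  # quartiles of N(0, 1)
    bag = Counter()
    for start in range(len(series) - window + 1):
        w = np.asarray(series[start:start + window], dtype=float)
        w = (w - w.mean()) / (w.std() + 1e-8)       # z-normalize the window
        paa = w.reshape(word_len, -1).mean(axis=1)  # piecewise aggregate means
        word = "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))
        bag[word] += 1
    return bag
```

The resulting word histograms can then feed a vector space model (e.g. tf-idf plus a linear classifier), exactly as a bag of words would in text classification.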

Second, I will show how to encode signals as images to learn and analyze them with deep learning methods. The Gramian Angular Field (GAF) and Markov Transition Field (MTF) are proposed as two novel approaches to encode both the multi-scale spatial correlation and the first-order Markov dynamics of temporal signals as images. These visual representations are shown to work well both for visualization by humans and for pattern recognition using deep learning approaches. This work yields state-of-the-art algorithms for temporal data classification and imputation.
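The GAF construction follows a compact recipe: rescale the series to [-1, 1], map each value to a polar angle via arccos, and form the pairwise-sum cosine image. A minimal sketch follows; the choice of the summation variant (GASF) and the min-max rescaling are assumptions made for illustration.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D series as a Gramian Angular Summation Field image.

    The series is rescaled to [-1, 1], each value is mapped to a polar
    angle phi = arccos(x), and pixel (i, j) of the image is
    cos(phi_i + phi_j), capturing pairwise temporal correlation at
    every scale in a form a 2-D convolutional network can consume.
    """
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))               # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])           # GASF image
```

An n-point series becomes an n-by-n single-channel image, so standard image classifiers apply unchanged.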

Finally, deep learning for image recognition (e.g., pictures or GAF/MTF images) involves high-dimensional non-convex optimization, which is generally intractable. However, I show how to use a set of exponential-form error estimators (NRAE/NAAE) and learning approaches (Adaptive Training) to attack the non-convex optimization problems in training deep neural networks. Both in theory and in practice, they are able to achieve optimality in accuracy and robustness against outliers/noise, providing another perspective on the non-convex optimization problem (especially saddle points) in deep learning.

Committee: Tim Oates (Chair), Matt Schmill (Miner & Kasch), Hamed Pirsiavash, Yun Peng, Kostas Kalpakis
