PhD proposal: Lyrics Augmented Multi-modal Music Recommendation, 1pm 10/30

Lyrics Augmented Multi-modal
Music Recommendation

Abhay Kashyap

1:00pm Friday 30 October, ITE 325b

In an increasingly mobile and connected world, digital music consumption has rapidly increased. More recently, faster and cheaper mobile bandwidth has given the average mobile user the potential to access large troves of music through streaming services like Spotify and Google Music that boast catalogs with tens of millions of songs. At this scale, effective music recommendation is critical for music discovery and personalized user experience.

Recommenders that rely on collaborative information suffer from two major problems: the long tail problem, induced by popularity bias, and the cold start problem, caused by new items with no usage data. In such cases, they fall back on content to compute similarity. For music, content-based features can be divided into acoustic and textual domains. Acoustic features are extracted from the audio signal, while textual features come from song metadata, lyrical content, collaborative tags and associated web text.

Research in content-based music similarity has largely focused on the acoustic domain, while text-based features have been limited to metadata, tags and shallow methods for web text and lyrics. Song lyrics carry information about the sentiment and topic of a song that cannot easily be extracted from the audio. Past work has shown that even shallow lyrical features improved on audio-only features and, in some tasks such as mood classification, outperformed them. In addition, lyrics are easily available, which makes them a valuable resource and warrants deeper analysis.

The goal of this research is to fill the lyrical gap in existing music recommender systems. The first step is to build algorithms to extract and represent the meaning and emotion contained in the song’s lyrics. The next step is to effectively combine lyrical features with acoustic and collaborative information to build a multi-modal recommendation engine.

For this work, the genre is restricted to Rap because it is a lyrics-centric genre, and techniques built for Rap can be generalized to other genres. It was also the most-streamed genre in 2014, accounting for 28.5% of all music streamed. Rap lyrics are scraped from dedicated lyrics websites, while the semantic knowledge base comprising artist, album and song metadata comes from the MusicBrainz project. Acoustic features are used directly from EchoNest, while collaborative information such as tags, plays and co-plays comes from online music services.

Preliminary work involved extracting compositional-style features such as rhyme patterns and density, vocabulary size, and simile and profanity usage from over 10,000 songs by over 150 artists. These features are available for users to browse and explore through interactive visualizations. Song semantics were represented using off-the-shelf neural-language-model-based vector representations (doc2vec). Future work will involve building novel language models for lyrics and latent representations of attributes that are driven by collaborative information for multi-modal recommendation.
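To give a flavor of the shallow compositional-style features mentioned above, here is a toy sketch of two of them, vocabulary size and a crude spelling-based end-rhyme density. This is purely illustrative; the project's actual extractors would use phonetic transcription rather than letter suffixes for rhyme detection.

```python
import re

def style_features(lyrics):
    """Toy compositional-style features for a lyrics string:
    - vocabulary size: number of distinct word tokens
    - end-rhyme density: fraction of adjacent line pairs whose last
      words share a final two-letter suffix (a crude spelling-based
      stand-in for real phonetic rhyme detection)."""
    lines = [l for l in lyrics.lower().splitlines() if l.strip()]
    words = re.findall(r"[a-z']+", lyrics.lower())
    vocab_size = len(set(words))
    # last word of each non-empty line
    last = [re.findall(r"[a-z']+", l)[-1] for l in lines if re.findall(r"[a-z']+", l)]
    rhymes = sum(1 for a, b in zip(last, last[1:]) if a[-2:] == b[-2:])
    density = rhymes / max(len(last) - 1, 1)
    return vocab_size, density
```

For example, three lines ending in "mind", "grind" and "behind" would score an end-rhyme density of 1.0 under this crude measure.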

Committee: Drs. Tim Finin (Chair), Anupam Joshi, Pranam Kolari (WalmartLabs), Cynthia Matuszek and Tim Oates

jobs: Find out about jobs & internships at Google, Oct 29-30

Jobs at Google

Google will be on campus on Thursday and Friday, October 29 and 30 to talk with students about opportunities for full-time positions and internships. See their message below.

Hello UMBC students!

Google’s mission is to organize the world’s information and make it universally accessible and useful. It’s an enormous goal to accomplish and we need great people to help us achieve it!

We invite you to come learn about Google and meet some of our Googlers at this exciting event!

Who: All Computer Science and Engineering students, but anyone with an interest in software development is welcome!

What: Culture at Google and Laying the Groundwork for a Successful Tech Career
Date: Thursday, October 29th
Time: 4:00pm – 8:00pm
Location: PAHB 132

What: Culture at Google and Preparing for Technical Interviews
Date: Friday, October 30th
Time: 12:00pm – 3:00pm
Location: PAHB 132

RSVP here. Have any questions? Check out our FAQs below.


Jonathan Bronson (Google Employee)
Loryn Chen (Google Student Ambassador for UMBC)


“Okay, Google, I’m ready to apply.”

What roles are you hiring for?

Most of our available opportunities for technical students are within our software engineering teams. Check out the roles below for more details. For all other opportunities, visit Google's careers site.

Can I apply for multiple positions?

Yes, you can apply for as many roles and locations as you’d like. We’ll review your resume and transcript to determine the best match.

When are the application deadlines?

Apply now! We encourage you to apply sooner rather than later, since most of our full time roles and internships accept applications on a rolling basis. If there is a deadline for a specific position, it will say so on the job posting.

What do I need to submit when I apply?

Please upload your resume and a copy of your transcript (unofficial is fine).

So I really don’t need a cover letter?

Correct! Have your resume tell your story!

I applied previously and wasn’t selected. May I reapply?

Yes, but we generally recommend that you’ve gained at least six months of additional technical experience and knowledge before reapplying.

Are international students eligible to apply for internships or full-time roles?

Yes, international students can apply for internships and full-time roles.

I’m planning to graduate this academic year, can I apply for an internship?

Unfortunately you aren’t able to do an internship after you graduate, so you’ll need to apply for a full-time role. If you’re graduating, but plan to pursue a graduate degree, then you can apply for an internship.

I want to intern on Android/Maps/[insert Google product here]. How do I apply for those teams?

You’ll first need to pass two technical phone interviews then a recruiter will work with you to determine a project match for the summer. You’ll have the chance to express interest in certain teams, tell us more about your background/skills, etc. once you’ve completed the technical interviews.

I applied online but haven’t heard back from anyone. Help?!

First, make sure you received the confirmation email letting you know we received your application. Second, reply to that email so we can check the status of your application.

NSF Graduate Research Fellowship applications due Oct. 27

If you plan on applying to graduate school for next year, or are currently a graduate student in your first or second year, and are a US citizen or permanent resident, you should consider applying to the National Science Foundation Graduate Research Fellowship Program (GRFP). This program makes approximately 2,000 new fellowship awards each year.

The GRFP program recognizes and supports outstanding graduate students in NSF-supported science, technology, engineering, and mathematics disciplines who are pursuing research-based Master’s and doctoral degrees at accredited United States institutions. Fellows benefit from a three-year annual stipend of $32,000 along with a $12,000 cost of education allowance for tuition and fees, and opportunities for international research and professional development.

GRFP is the country’s oldest national fellowship program directly supporting graduate students in STEM fields. The hallmark features of the program are: 1) the award of fellowships to individuals on the basis of merit and potential, and 2) the freedom and flexibility provided to Fellows to define their own research and choose the accredited U.S. graduate institution that they will attend.

US citizens and permanent residents who are planning to enter graduate school in an NSF-supported discipline next fall, or who are in the first two years of such a graduate program, or who are returning to graduate school after being out for two or more years, are eligible. Applications for computing and engineering fields are due October 27. The applicant information page and the solicitation contain the necessary details.

PhD Defense: Tanvir Mahmood, 2pm 9/24

PhD Dissertation Defense Announcement
Electrical Engineering

Polarization-insensitive all-optical dual pump-phase trans-multiplexing from 2 × 10-GBd OOKs to 10-GBd RZ-QPSK using cross-phase modulation in a passive nonlinear birefringent photonic crystal fiber

Tanvir Mahmood

2:00pm Thursday, 24 September 2015, ITE325b

Considering network size, bit rate, and spectral and channel capacity limitations, different modulation formats may be selectively used in future optical networks. Although traditional metropolitan area networks (MANs) still use the non-return-to-zero on-off keying (NRZ-OOK) modulation format for its technical simplicity and therefore low cost, the QPSK format is more advantageous in spectrally efficient long-haul fiber-optic transmission systems because of its constant power envelope and robustness to various transmission impairments. Consequently, an important problem arises: how to route OOK data streams from MANs to long-haul backbone networks when the state of polarization (SOP) of the remotely generated OOK is unpredictable. Hence, the focus of this dissertation was to investigate a polarization-insensitive (PI) all-optical nonlinear optical signal processing (NOSP) method that can be implemented at the network cross-connect (X-connect) to simultaneously transfer data from a remotely and a locally generated OOK signal to the more effective QPSK format for long-haul transmission. By utilizing cross-phase modulation (XPM) and the inherent birefringence of the device, the work demonstrated, for the first time, PI all-optical data transfer utilizing dual pump-phase transmultiplexing (DPTM) from 2 × 10-GBd OOKs to 10-GBd RZ-QPSK in a passive nonlinear birefringent photonic crystal fiber (PCF). Polarization insensitivity was achieved by scrambling the SOP of the remotely generated OOK pump and launching the locally generated OOK pump and the probe off-axis. To mitigate polarization-induced power fluctuations and detrimental effects from nearby partially degenerate and non-degenerate four-wave mixings, an optimum pump-probe detuning was also utilized. The PI DPTM RZ-QPSK demonstrated a pre-amplified receiver sensitivity penalty of less than 5.5 dB at a 10⁻⁹ bit-error rate (BER), relative to the FPGA-precoded RZ-DQPSK baseline in an ASE-limited transmission system.
The effect of OSNR degradation of the remotely generated OOK pump on the PI DPTM RZ-QPSK was also investigated, and it was established that the 10⁻⁹ BER metric was attainable until the remotely generated OOK pump reached the threshold OSNR limit of 34 dB/0.1 nm. Finally, the DWDM transmission performance of the PI DPTM RZ-QPSK signal was evaluated using a 138-km recirculating loop, and it was demonstrated that the PI DPTM RZ-QPSK can be transmitted over 1,500 km before reaching the ITU-T G.709 7% HD-FEC overhead limit. This propagation distance is well beyond the transmission requirements of any typical metro network (≈ 600 km). Furthermore, it was demonstrated that, within the threshold limit, OSNR degradation of the remotely generated OOK pump had minimal impact on the transmission distance of the PI DPTM RZ-QPSK before it reached the 7% HD-FEC overhead limit.

Committee: Drs. Gary M. Carter (Chair), Anthony M. Johnson, Fow-Sen Choa, Tinoosh Mohsenin, Thomas E. Murphy (ECE,UMCP), William Astar

Proposal: Vatcher, Verifiable Randomness and its Applications, 10:30 9/24


Ph.D. Dissertation Proposal

Verifiable Randomness and its Applications

Christopher Vatcher

10:30am Thursday, 24 September 2015, ITE 325b

We propose to create a public verifiable randomness beacon, integrated with the Random-Sample Voting system, that is secure against adversaries who have almost complete control over the system's source of public randomness, including the entropy source.

By verifiable randomness, we do not mean we can prove a sequence of bits to be random. Instead, verifiability means it is possible to prove: (a) a consumer used uniform bits originating from a specific entropy source and therefore cannot lie about the bits used; and (b) the bits used were unpredictable prior to their generation and, with overwhelming probability, were free of adversarial influence. This is in contrast to ordinary public randomness, where parties must agree to trust some randomness provider, who then becomes a target of corruption. Verifiable randomness is an enhancement of public randomness used to perform random selection in voting, conduct random audits, preserve privacy, generate random challenges for secure multi-party computation, and run public lottery draws. Random-Sample Voting specifically requires verifiable randomness for random voter selection and random audits.

Our work extends the work of Eastlake and Clark and Hengartner by considering (a) adversaries who have fine control over the entropy source and (b) physical entropy sources, which we can make verifiable.

Our specific aims include (a) creating adversary models for three entropy-source abstractions based on trusted providers, sensor networks, and distributed proof-of-work systems; (b) creating a verifiable randomness beacon that integrates each model; (c) integrating our work with the Random-Sample Voting system; and (d) integrating with NIST's beacon and proposing a verifiable randomness standard based on our work.

Our method is to weaken the trust assumption on the entropy source by introducing verifiable entropy sources, which have mechanisms for limiting adversarial influence and accumulating evidence that their outputs obey a known distribution. Combined with an appropriate randomness extractor, we can generate verifiable random bits. Using sources like weather, we will construct a verifiable randomness beacon: a public randomness provider unencumbered by generous and often unfounded trust assumptions. Such a beacon can serve as a singular gateway for accessing and aggregating multiple entropy sources without compromising the randomness provided to consumers.
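As a small, classical illustration of the extractor idea mentioned above (not the proposal's actual construction), the von Neumann extractor turns a stream of independent but biased coin flips into unbiased bits by examining non-overlapping pairs:

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: given independent bits with a fixed
    (unknown) bias, emit unbiased bits. Pairs are read left to right:
    (0,1) -> 0, (1,0) -> 1, and (0,0)/(1,1) are discarded, since the
    two unequal orderings are equally likely regardless of the bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

Practical extractors for the physical sources discussed here (e.g., weather data) need weaker independence assumptions, which is one reason seeded or multi-source randomness extractors are used instead.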

Committee: Drs. Alan T. Sherman (Chair), Konstantinos Kalpakis, Weining Kang (Math/Stat), David Chaum (Random-Sample Voting), Aggelos Kiayias (University of Athens)

PhD proposal: Kulkarni, Secured Embedded Many-Core Accelerator for Big Data Processing

PhD Dissertation Proposal

Secured Embedded Many-Core Accelerator for Big Data Processing

Amey Kulkarni

2:00-4:00pm Friday, 18 September 2015, ITE 325b

I/O bandwidth and stringent delay constraints on processing time limit the use of streaming Big Data for a large variety of real-world problems. On the other hand, examining Big Data in applications such as intelligence, surveillance and reconnaissance unveils sensitive information in the form of hidden patterns or unknown correlations, thus demanding a secure processing environment. In this PhD research, we propose a scalable and secure framework for a many-core accelerator architecture for efficient parallel Big Data processing. We propose to merge a compressive sensing-based framework to reduce I/O bandwidth and a machine learning-based framework to secure many-core communications. Four different reduced-complexity architectures and two different modifications to the Orthogonal Matching Pursuit (OMP) compressive sensing reconstruction algorithm are proposed. We implement the proposed OMP architectures on FPGA, ASIC, CPU/GPU and many-core platforms to investigate hardware overhead cost. To secure communications within the many-core, we propose two different machine learning-based Trojan detection frameworks with minimal hardware overhead. To conclude this work, we aim to implement and evaluate the proposed scalable and secure many-core accelerator hardware for image and multi-channel biomedical signal processing on quad-core and sixteen-core architectures.
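For reference, a minimal textbook version of the OMP reconstruction algorithm that the proposed architectures modify might look like the following. This is an illustrative sketch of standard OMP, not the proposal's reduced-complexity variants.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    such that y ~= A @ x, for a measurement matrix A (m x n, m < n)."""
    residual = y.astype(float).copy()
    support = []                  # indices of selected dictionary columns
    x = np.zeros(A.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        # choose the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected columns
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```

The least-squares re-fit on the full support at each iteration is the step that dominates the hardware cost, which is why reduced-complexity OMP variants target it.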

Committee: Drs. Tinoosh Mohsenin, (Chair), Mohamed Younis, Seung-Jun Kim, Farinaz Koushanfar (Rice University) and Houman Homayoun (George Mason University)

PhD proposal: Zheng Li, Detecting Objects with High Accuracy and in Real-time, 10am 9/15



Ph.D. Proposal

Detecting Objects with High Accuracy and in Real-time: A Vision-based
Scene-specific Object Detector in Mobile Systems with Human-in-the-loop Training

Zheng Li

10:00am 15 September 2015, ITE 325b

In computer vision, researchers seek to train machines to detect objects as well as humans do, with high accuracy and in real time. Though highly intelligent machine vision has been a research target for years, machines still perform worse than humans. Present research continues to investigate robust new feature types that improve detection accuracy. While carefully hand-engineered features usually help, designing a good feature representation requires substantial expert effort. Moreover, real-time performance on the machine end often suffers due to complicated feature extraction and matching. In applications where low latency is as critical as high accuracy, such as unmanned aerial vehicles (UAVs) or assistive guidance and navigation systems for people with visual impairments, approaches that achieve lower execution times are required.

In this proposal, a vision-based Scene-Specific object Detector (SSD) is proposed that transforms the general vision problem into scene-specific sub-problems in order to incorporate scene-specific a priori knowledge and achieve higher detection accuracy and real-time performance. The SSD deeply involves human-in-the-loop training to acquire available a priori knowledge. By combining human-acquired a priori information with real-time information sensed from multiple sensors, a hierarchical coarse-grain to fine-grain search scheme can detect objects efficiently and robustly on a real-time hardware platform. Such a solution can achieve performance exceeding traditional state-of-the-art approaches.

Committee: Drs. Ryan Robucci (chair), Nilanjan Banerjee, Chein-I Chang, Ting Zhu

PhD defense: Yu Wang, Physically-Based Modeling and Animation

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

The Modeling Equation: Solving the Physically-Based
Modeling and Animation Problem with a Unified Solution

Yu Wang

12:00pm Friday, 28 August 2015, ITE 352

Physically-based modeling research in computer graphics is based largely on derivation from, or close approximation of, the physical laws defining material behaviors. From rigid-body dynamics, to various kinds of deformable objects, such as elastic, plastic, and viscous fluid flow, to their interactions, almost every natural phenomenon has a rich history in computer graphics research. Due to the nonlinear nature of almost all real-world dynamics, the mathematical definition of their behavior is rarely linear. As a result, solving for the dynamics of these phenomena involves non-linear numerical solvers, which sometimes introduce numerical instabilities such as volume gain or loss and slow convergence.

The contribution of this project is a unified particle-based model that implements an extended SPH solver for modeling fluid motion, integrated with rigid-body deformation using shape matching. The model handles phase changes between solid and liquid, including melting and solidification, where material rigidity is treated as a function of time and particle distance to the object surface, and solid-fluid coupling, where rigid-body motion causes secondary fluid flow. Due to the stability of the fluid-rigid interplay solver, we can introduce artistic control to the framework, such as rigging, where object motion is predefined by either artistic control or a procedurally generated dynamics path. Interaction with the fluid can be indirectly achieved by rigging the rigid particles, which implicitly handles rigid-fluid coupling. We used marching cubes to extract the surfaces of the objects and applied PN-triangles to replace the planar silhouettes with cubic approximations. We discuss evaluation metrics for physically-based modeling algorithms. In addition, GPU solutions are designed for the physics solvers, isosurface extraction and smoothing.

Committee: Drs. Marc Olano (CSEE; Advisor, Chair), Penny Rheingans (CSEE), Jian Chen (CSEE), Matthias Gobbert (Math), Lynn Sparling (Physics)

PhD proposal: Assistive Contactless Capacitive Electrostatic Sensing System, 12pm 8/21

Ph.D. Proposal

ACCESS: An Assistive Contactless Capacitive
Electrostatic Sensing System

Alexander Nelson

12:00pm Friday, 21 August 2015, ITE 325b

The objective of ACCESS is to develop fabric capacitor sensor arrays as a holistic, wearable, touchless sensing solution. The fabric sensors are lightweight, flexible, and can therefore be integrated into items of everyday use. Further, the capacitive sensing hardware is low-power, unobtrusive, and easily maintainable. The research includes: the construction of fabric sensor prototypes and custom sensing hardware; the development of adaptive signal processing and gesture recognition; and the creation of an assistive cyber-physical interface for mobility impairment. The research is conducted with advisement from medical professionals and private consultants, and evaluated in clinical trials by individuals with upper-extremity mobility impairment. Proposed future work includes evaluation of the assistive device for computational overhead, the inclusion of personal contextual information in gesture recognition and device actuation, and investigation of a dense spatial-resolution capacitor sensor array as a low-resolution greyscale imaging system.

Committee: Drs. Nilanjan Banerjee and Ryan Robucci (Chairs), Chintan Patel, Sandy McCombe-Waller (UMB Medical School)

Opportunities through robotics: Kavita Krishnaswamy ’07

An interview with UMBC Computer Science Ph.D. student Kavita Krishnaswamy appeared in a recent post on the UMBC Alumni Blog.

Every so often, we’ll chat with an alum about what they do and how they got there. Today we’re talking with Kavita Krishnaswamy ’07, mathematics and computer science. Krishnaswamy has spinal muscular atrophy and has not been able to leave her house in six years. Thanks to Beam Telepresence Technology, a robotic program that allows her to remotely view and navigate spaces through her computer screen, she’s presented her doctoral thesis and attended conferences across the country. The current Ph.D. student talks about her experience with the Beam and her research on robotics and accessibility.

Read the full interview on the UMBC alumni blog.
