The multi-armed bandit problem arises when a fixed, limited set of resources must be allocated among competing choices to maximize expected gain, when each choice's properties are only partially known at the time of allocation but may become better understood as time passes.

ArtIAMAS Seminar Series, co-organized by UMBC, UMCP & Army Research Lab

Top-K Ranking Deep Contextual Bandits for Information Selection Systems

Dr. Jade Freeman, Army Research Lab

12-1pm ET Wed. 8 Dec. 2021, Online via Webex

In today’s technology environment, information is abundant, dynamic, and heterogeneous in nature. Automated filtering and prioritization of information rests on distinguishing whether the information adds substantial value toward one’s goal. Contextual multi-armed bandits have been widely used to learn to filter content and prioritize it according to user interest or relevance. The learn-to-rank technique optimizes the relevance ranking of items, allowing content to be selected accordingly. We propose a novel approach to top-K ranking under the contextual multi-armed bandit framework. We model the stochastic reward function with a neural network, allowing a non-linear approximation to learn the relationship between rewards and contexts. We demonstrate the approach and evaluate its learning performance in experiments using real-world data sets in simulated scenarios. Empirical results show that the approach performs well under complex reward structures and high-dimensional contextual features.
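The core idea in the abstract — per context, rank all arms by a learned reward model and select the top K — can be sketched as follows. This toy uses a per-arm linear reward model trained by SGD and epsilon-greedy exploration; these are illustrative assumptions, not the talk's actual neural architecture or exploration strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ARMS, DIM, K = 10, 5, 3

# Hidden "true" reward weights per arm (unknown to the learner).
true_w = rng.normal(size=(N_ARMS, DIM))

# Learner's estimate: one linear reward model per arm.
# (The talk uses a neural network here instead.)
est_w = np.zeros((N_ARMS, DIM))

def select_top_k(context, eps=0.1):
    """Rank arms by predicted reward; explore with probability eps."""
    if rng.random() < eps:
        return rng.choice(N_ARMS, size=K, replace=False)
    scores = est_w @ context              # predicted reward per arm
    return np.argsort(scores)[::-1][:K]   # indices of the K best arms

lr = 0.05
for t in range(2000):
    ctx = rng.normal(size=DIM)
    chosen = select_top_k(ctx)
    for arm in chosen:
        reward = true_w[arm] @ ctx + 0.1 * rng.normal()  # noisy feedback
        pred = est_w[arm] @ ctx
        est_w[arm] += lr * (reward - pred) * ctx         # SGD update
```

After training, the greedy top-K selection should, on average, yield higher true reward than picking K arms at random, since the estimated weights of frequently chosen arms approach the true ones.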

Dr. Jade Freeman is the Chief of the Battlefield Information Systems Branch, DEVCOM U.S. Army Research Laboratory (ARL), overseeing military information systems and analysis research projects. Prior to joining ARL, Dr. Freeman served as the Senior Statistician for the Chief of Staff at the Department of Homeland Security, Office of Cybersecurity and Communications, now known as the Cybersecurity and Infrastructure Security Agency (CISA). Dr. Freeman obtained her Ph.D. in Statistics from George Washington University.