Ph.D. Dissertation Proposal

Convexification/Deconvexification for Training Neural Networks and Recurrent Deep Learning Machines

Yichuan Gui

9:30am Thursday, 16 May 2013, ITE 325b, UMBC

The development of artificial neural networks (ANNs) has been impeded by the local minimum problem for decades. One principal goal of this proposal is to develop a methodology that alleviates the local minimum problem in training ANNs. A new training criterion, the normalized risk-averting error (NRAE) criterion, is proposed to avoid nonglobal local minima in training multilayer perceptrons (MLPs) and deep learning machines (DLMs). Training methods based on the NRAE criterion are developed to achieve global or near-global minima with satisfactory learning errors and generalization capabilities.
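For concreteness, the NRAE criterion in this line of work is usually defined over the per-example squared errors e_k(w) as NRAE_lambda(w) = (1/lambda) ln[(1/K) sum_k exp(lambda e_k(w))]: a large lambda convexifies the training landscape, while lambda -> 0 recovers the ordinary mean squared error. The following minimal NumPy sketch assumes that form; the function name and the log-sum-exp stability shift are illustrative, not part of the proposal.

    import numpy as np

    def nrae(sq_errors, lam):
        # Assumed form: (1/lam) * ln(mean(exp(lam * e_k))) over the
        # per-example squared errors e_k, evaluated with a log-sum-exp
        # shift so exp() does not overflow for large lam.
        e = np.asarray(sq_errors, dtype=float)
        m = e.max()
        return m + np.log(np.exp(lam * (e - m)).mean()) / lam

Training with a large lam and then gradually reducing it toward the mean squared error is the convexification/deconvexification schedule suggested by the title.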

Recent research on ANNs has analyzed many advantages of DLMs and explored effective architectures and training methods for them. However, feedback structures have commonly been ignored in previous research on DLMs. The next objective of this proposal is to develop recurrent deep learning machines (RDLMs) by adding feedback structures to the deep architectures of DLMs, as sketched below. Design and testing work is expected to demonstrate the efficiency and effectiveness of RDLMs with feedback structures compared to feedforward DLMs.
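The specific RDLM architecture is to be designed in the proposed work; as a generic illustration of what a feedback structure adds, a recurrent layer feeds its previous output back into its own input, whereas a feedforward layer does not. A minimal NumPy sketch of the standard recurrent update, with all names illustrative:

    import numpy as np

    def recurrent_layer(xs, W, U, b):
        # Standard recurrent update h_t = tanh(W x_t + U h_{t-1} + b);
        # a purely feedforward layer would drop the feedback term U h_{t-1}.
        h = np.zeros(U.shape[0])
        outputs = []
        for x in xs:  # xs: a sequence of input vectors
            h = np.tanh(W @ x + U @ h + b)
            outputs.append(h)
        return outputs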

Preliminary work presented in this proposal demonstrates the effectiveness of NRAE-based training methods in avoiding nonglobal local minima when training MLPs. Methods based on the NRAE criterion will be tested in training DLMs, and the development and testing of RDLMs will be performed in subsequent work. Moreover, an approach that combines the NRAE criterion with RDLMs will be explored to minimize the training error and maximize the generalization capability. The expected contributions of this proposed research are (1) an effective way to avoid the local minimum problem in training MLPs and DLMs with satisfactory performance; (2) a new type of RDLM with feedback connections for training efficiently on large-scale datasets; (3) an application of the NRAE criterion to training RDLMs that minimizes training errors and maximizes generalization capabilities. These contributions are expected to significantly advance research in the field of ANNs and stimulate new practical applications.

Committee: James Lo (mentor), Yun Peng (mentor), Tim Finin, Tim Oates, Charles Nicholas