Parameter Continuation with Secant Approximation for Deep Neural Networks

Non-convex optimization of deep neural networks is a well-studied problem. We present a novel application of continuation methods to deep learning optimization that can potentially arrive at better solutions. Our method first decomposes the original optimization problem into a sequence of problems using a homotopy method. To achieve this in neural networks, we derive the Continuation (C)-Activation function, a homotopic formulation of existing activation functions such as Sigmoid, ReLU, or Tanh. We then apply a technique that is standard in the parameter continuation domain but, to the best of our knowledge, novel in deep learning: Natural Parameter Continuation with Secant approximation (NPCS), an effective training strategy that may find a superior local minimum for a non-convex optimization problem. Additionally, we extend our work on Step-up GANs, a data continuation approach, by deriving Continuous (C)-SMOTE, an extension of standard oversampling algorithms. We demonstrate the improvements made by our methods and establish a categorization of recent work on continuation methods in the context of deep learning.
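
The core recipe the abstract describes — a homotopic activation swept by a continuation parameter, with each new sub-problem warm-started by a secant extrapolation through the previous two solutions — can be sketched in a few lines of PyTorch. Everything below (the name CActivation, the identity-to-tanh blend, the toy data, and the uniform lambda schedule) is an illustrative assumption, not the thesis's exact construction.

    import torch
    import torch.nn as nn

    # Illustrative sketch only: the class name CActivation, the linear
    # blend between identity and tanh, the toy data, and the lambda
    # schedule below are assumptions, not the thesis's formulation.

    class CActivation(nn.Module):
        """Homotopy between the identity map (lam = 0) and tanh (lam = 1)."""
        def __init__(self):
            super().__init__()
            self.lam = 0.0  # continuation parameter, swept from 0 to 1

        def forward(self, x):
            return (1.0 - self.lam) * x + self.lam * torch.tanh(x)

    def flat_params(model):
        # Flatten all parameters into one vector for secant arithmetic.
        return torch.cat([p.detach().flatten() for p in model.parameters()])

    def load_flat_params(model, flat):
        # Write a flat vector back into the model's parameters.
        i = 0
        for p in model.parameters():
            n = p.numel()
            p.data.copy_(flat[i:i + n].view_as(p))
            i += n

    def train_at_lambda(model, act, lam, x, y, steps=200, lr=1e-2):
        # "Corrector" phase: ordinary training at a fixed lambda.
        act.lam = lam
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        return loss.item()

    # Toy regression problem (assumed, for illustration only).
    torch.manual_seed(0)
    x = torch.randn(256, 4)
    y = torch.sin(x.sum(dim=1, keepdim=True))
    act = CActivation()
    model = nn.Sequential(nn.Linear(4, 16), act, nn.Linear(16, 1))

    lams = [0.0, 0.25, 0.5, 0.75, 1.0]
    history = []  # flattened solutions at each continuation step

    for k, lam in enumerate(lams):
        if k >= 2:
            # Secant predictor: extrapolate through the last two
            # solutions to warm-start training at the next lambda.
            w_prev, w_curr = history[-2], history[-1]
            step = (lams[k] - lams[k - 1]) / (lams[k - 1] - lams[k - 2])
            load_flat_params(model, w_curr + step * (w_curr - w_prev))
        loss = train_at_lambda(model, act, lam, x, y)
        history.append(flat_params(model))
        print(f"lambda={lam:.2f}  loss={loss:.4f}")

With a uniform lambda schedule the predictor reduces to 2·w_k − w_{k−1}; the corrector (ordinary gradient descent) then refines this extrapolated starting point at each step of the homotopy.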

Language
  • English
Identifier
  • etd-120518-215809
Year
  • 2018
Date created
  • 2018-12-05
Resource type
  • Electronic Thesis or Dissertation (ETD)
Last modified
  • 2023-09-19

Permanent link to this page: https://digital.wpi.edu/show/000000239