
Kernel Coherence Encoders


In this thesis, we introduce a novel model based on the idea of autoencoders. Unlike a classic autoencoder, which reconstructs its own inputs through a neural network, our model is closer to Kernel Canonical Correlation Analysis (KCCA): it reconstructs input data from another data set, where the two data sets are assumed to share some, possibly non-linear, dependence. Our model extends traditional KCCA in that the non-linearity of the data is learned by optimizing a kernel function with a neural network. In one of the novelties of this thesis, we do not optimize the kernel with respect to a prediction-error metric, as is classical for autoencoders. Rather, we optimize the kernel to maximize the "coherence" of the underlying low-dimensional hidden layers. This makes our method faithful to the classic interpretation of linear Canonical Correlation Analysis (CCA). As far as we are aware, our method, which we call a Kernel Coherence Encoder (KCE), is the only extant approach that uses the flexibility of a neural network while maintaining the theoretical properties of classic KCCA. In another novelty of our approach, we leverage a modified version of classic coherence that is far more stable in the presence of high-dimensional data, addressing computational and robustness issues in the implementation of a coherence-based deep-learning KCCA.
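Since the KCE is motivated by the classic interpretation of linear CCA, a minimal numerical sketch of canonical correlations between two paired data sets may help fix ideas. The function below is illustrative only (the function name, the synthetic data, and the small ridge term `reg` are assumptions for the sketch, not from the thesis): it whitens each view and reads the canonical correlations off the singular values of the whitened cross-covariance.

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-3):
    """Classic linear CCA: canonical correlations between two views.

    X: (n, p) and Y: (n, q) hold n paired observations.
    `reg` is a small ridge term for numerical stability
    (an illustrative choice, not part of the thesis).
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Sample covariance / cross-covariance matrices.
    Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (n - 1)
    # Whiten each view via the inverse Cholesky factor; the singular
    # values of the whitened cross-covariance are the canonical
    # correlations, each lying in [0, 1].
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    M = Wx @ Cxy @ Wy.T
    return np.linalg.svd(M, compute_uv=False)

# Two synthetic views driven by a shared one-dimensional latent signal:
# the leading canonical correlation should then be close to 1.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
rho = canonical_correlations(X, Y)
```

A KCCA-style method replaces the linear views with feature maps induced by a kernel; the KCE, as described above, goes further and learns that kernel with a neural network while keeping a coherence objective of this CCA form.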

Language
  • English
Identifier
  • etd-042318-222257
Year
  • 2018
Date created
  • 2018-04-23
Last modified
  • 2021-02-01


Permanent link to this page: https://digital.wpi.edu/show/hq37vn730