R202, Astronomy-Mathematics Building, NTU (台灣大學天文數學館 202室)
Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design
C.-C. Jay Kuo (University of Southern California)
Abstract:
Given a convolutional neural network (CNN) architecture, its network parameters are nowadays determined by backpropagation (BP). The underlying mechanism remains a black box despite a large amount of theoretical investigation. In this talk, I describe a new interpretable and feedforward (FF) design, using LeNet-5 as an example. The FF design is a data-centric approach that derives network parameters from training-data statistics, layer by layer, in a single pass. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform, in which the bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully-connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. The classification performance of BP- and FF-trained CNNs is compared on the MNIST and CIFAR-10 datasets. The computational complexity of the FF design is significantly lower than that of the BP design, so the FF-trained CNN is well suited to mobile/edge computing. We also comment on the relationship between the BP and FF designs by examining the cross-entropy values at nodes of intermediate layers.
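For readers curious how the two building blocks fit together, below is a minimal, illustrative NumPy sketch of one Saab-like convolutional stage and a label-guided least-squares FC stage. The function names, patch shapes, hyperparameters, and toy data are assumptions made for illustration only; this is not the speaker's reference implementation.

```python
# Illustrative sketch of the two building blocks named in the abstract:
# a Saab-like transform for one convolutional stage and a label-guided
# least-squares regression (LSR) classifier for one FC stage.
# All names, shapes, and hyperparameters here are assumptions.
import numpy as np


def saab_transform(patches, num_ac_kernels):
    """One Saab-like stage: a DC kernel, PCA-derived AC kernels, and a bias
    chosen so every response is non-negative (a following ReLU then acts as
    the identity, annihilating the nonlinearity).

    patches: (n_samples, patch_dim) array of flattened local patches.
    """
    patch_dim = patches.shape[1]
    # DC kernel: the normalized all-ones vector (local mean direction).
    dc_kernel = np.ones((1, patch_dim)) / np.sqrt(patch_dim)
    dc_response = patches @ dc_kernel.T

    # AC part: principal components of the DC-removed, centered patches.
    ac_patches = patches - dc_response @ dc_kernel
    ac_patches -= ac_patches.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(ac_patches, full_matrices=False)
    ac_kernels = vt[:num_ac_kernels]           # (num_ac_kernels, patch_dim)

    kernels = np.vstack([dc_kernel, ac_kernels])
    responses = patches @ kernels.T
    # Adjusted bias: shift responses so they are all non-negative.
    bias = max(0.0, -responses.min())
    return responses + bias, kernels, bias


def lsr_fc_layer(features, labels, num_classes):
    """Label-guided FC stage: linear least-squares regression from features
    to one-hot label targets, solved in closed form."""
    targets = np.eye(num_classes)[labels]       # one-hot encoding
    # Append a constant 1 so the regression also fits a bias term.
    x = np.hstack([features, np.ones((features.shape[0], 1))])
    weights, *_ = np.linalg.lstsq(x, targets, rcond=None)
    return weights                              # (feat_dim + 1, num_classes)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for 5x5 image patches and 10-class labels.
    patches = rng.normal(size=(1000, 25))
    labels = rng.integers(0, 10, size=1000)

    feats, kernels, bias = saab_transform(patches, num_ac_kernels=5)
    w = lsr_fc_layer(feats, labels, num_classes=10)
    preds = np.argmax(np.hstack([feats, np.ones((len(feats), 1))]) @ w, axis=1)
    print("kernels:", kernels.shape, "bias:", round(bias, 3),
          "train acc (random data, chance ~0.1):", (preds == labels).mean())
```

The point the sketch illustrates is that both stages are obtained in closed form from data statistics (PCA and least squares) in a single forward pass, with no backpropagation.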
Reception:
13:30 - 14:00
Organizers:
National Center for Theoretical Sciences (NCTS)
AI Innovation Research Center Project - International Networking Program (AI創新研究中心專案-國際鏈結計畫)
Abstract: events_1_1903041258106901.pdf