NCTS Workshop on Compressive Sensing + Data Science
 
13:20 - 17:00, May 27, 2016 (Friday)
R101, Astronomy-Mathematics Building, NTU
(Room 101, Astronomy-Mathematics Building, National Taiwan University)
Part I: Sparse Representation for Time Series Classification
Part II: Sparse representations for musical signal processing
Part III: Learning Sparse Representation for Visual Analysis and Classification
Yuh-Jye Lee (National Yang Ming Chiao Tung University)

Title:
Sparse Representation for Time Series Classification
 
Speaker:
Professor Yuh-Jye Lee (NCTU)
 
Time:
13:20 - 14:20
 
Abstract:
 
The problem of time series classification has been studied for over a decade. In the era of the Internet of Things, time series have become a major data type, and much effort has been devoted to this problem. Approaches to time series classification can be categorized into three types: distance-based, model-based, and feature-based. In this research, we focus on feature-based methods, which represent a time series as a set of characteristic values. However, the features generated by most existing representation techniques are not fully interpretable. For this reason, we propose a novel time series representation, the envelope. The envelope is a profile of a set of time series. It is a supervised feature extraction method that encodes a time series using three values, -1, 0, and 1: a value that falls inside the envelope is encoded as 0, while -1 and 1 indicate that the value falls below or above the envelope, respectively. It is always important to find the most discriminating features for data mining tasks, so a good heuristic is needed to choose the size of the envelope in order to perform well in both classification and anomaly detection. Moreover, this representation is sparse, which is an essential property for applying compressed sensing; we therefore benefit from higher transmission efficiency and reduced storage and model complexity. Furthermore, the transformed features are interpretable via visualization: the envelope shows the shape of the time series and defines the similarity between them. We demonstrate the effectiveness of the proposed method on numerous benchmark datasets.
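
A minimal sketch of the envelope encoding described above, under illustrative assumptions (the mean-plus/minus-standard-deviation heuristic for the envelope size, the function names fit_envelope and encode, and the margin parameter are not from the talk):

import numpy as np

def fit_envelope(train_series, margin=1.0):
    """Build a per-timestep envelope (lower, upper) from training series of one
    class, here as mean +/- margin * standard deviation (illustrative heuristic)."""
    X = np.asarray(train_series)              # shape: (n_series, series_length)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return mu - margin * sigma, mu + margin * sigma

def encode(series, lower, upper):
    """Encode a series into {-1, 0, 1}: 0 inside the envelope,
    -1 below it, 1 above it. The result is typically sparse."""
    s = np.asarray(series)
    code = np.zeros_like(s, dtype=int)
    code[s < lower] = -1
    code[s > upper] = 1
    return code

# Example: most values stay inside the envelope, so the code is mostly zeros.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 100))
lower, upper = fit_envelope(train, margin=2.0)
print(encode(rng.normal(size=100), lower, upper))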
 
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 
Title:
Sparse representations for musical signal processing
 
Speaker:
Dr. Li Su (Academia Sinica)
 
Time:
14:30 - 15:30
 
Abstract:

Musical signals are highly structured. The diversity of musical instruments and their sound effects, the multiple-source nature of music, and the harmonic relationships among the different sounds in a musical signal introduce several fundamental challenges in the field of machine listening. Finding an efficient sparse model for musical signals is usually considered an important step toward tackling these challenges. In this talk, two approaches for obtaining sparse representations of musical signals are reviewed: the optimization-based approach and the nonlinear time-frequency approach. Several specific problems in musical signal processing and their recently developed solutions are also introduced and discussed, including (1) using ℓ1-regularized optimization and the LASSO algorithm for feature learning, with applications to genre, instrument, and playing-technique classification, and (2) combining the linear prediction (LP) scheme with non-negative least squares (NNLS) optimization for musical onset detection.
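
As an illustration of the ℓ1-regularized (LASSO) route mentioned above, the sketch below codes a single spectral frame over a dictionary of templates; the random dictionary, the synthetic frame, and the alpha value are placeholders, not the features or settings used in the talk.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(513, 88)))   # placeholder dictionary: 88 spectral templates
c_true = np.zeros(88)
c_true[[10, 40, 64]] = [0.8, 0.5, 0.3]   # a frame built from three "active notes"
x = D @ c_true

# LASSO: minimize ||x - D c||^2 / (2 n) + alpha * ||c||_1, which drives most
# coefficients of c to exactly zero (a sparse representation of the frame).
# positive=True keeps the coefficients non-negative, as is natural for spectra.
lasso = Lasso(alpha=0.01, positive=True, max_iter=10000)
lasso.fit(D, x)
print("active atoms:", np.flatnonzero(lasso.coef_))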
 
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 
Title:
Learning Sparse Representation for Visual Analysis and Classification
 
Speaker:
Dr. Y.C. Frank Wang
 
Time:
16:00 - 17:00
 
Abstract:
 
In computer vision and image processing, sparse representation has been widely applied to learning, analyzing, and classifying visual data such as images and videos. From image denoising to face recognition, several sparse-representation-based algorithms have been proposed that aim to derive proper representations of the observed data, so that the corresponding synthesis or recognition tasks can be addressed accordingly. In this talk, I will cover a number of applications and their solutions that benefit from recent advances in sparse representation, dictionary learning, and low-rank matrix decomposition, with a particular focus on face recognition.
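
One standard way these ideas are applied to face recognition is sparse-representation-based classification (SRC), sketched below with random stand-ins for the face data; the talk's actual algorithms and dictionaries are not reproduced here. A test sample is coded sparsely over the training samples of all classes and assigned to the class with the smallest reconstruction residual.

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, x, alpha=0.01):
    """Sparse-representation-based classification: code x over the training
    dictionary D (columns = training samples), then pick the class whose
    coefficients reconstruct x with the smallest residual."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, x)
    c = coder.coef_
    residuals = {}
    for k in np.unique(labels):
        ck = np.where(labels == k, c, 0.0)   # keep only class-k coefficients
        residuals[k] = np.linalg.norm(x - D @ ck)
    return min(residuals, key=residuals.get)

# Toy usage with random "face" vectors for two classes.
rng = np.random.default_rng(2)
D = rng.normal(size=(200, 40))               # 40 training samples, 200-dim features
labels = np.repeat([0, 1], 20)
x = D[:, 5] + 0.05 * rng.normal(size=200)    # a noisy copy of a class-0 sample
print(src_classify(D, labels, x))            # expected: 0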
 


 
