False Discovery Rate and Multiple Testing - A Selected Review
 
The mini-course consists of 4 lectures, beginning 2:10-4:00 pm on Wednesday, November 30, 2016
R202, Astronomy-Mathematics Building, NTU

Speaker(s):
Inchi Hu (The Hong Kong University of Science and Technology)


Organizer(s):
Mei-Hui Guo (National Sun Yat-sen University)
Jungkai Chen (National Taiwan University)


Abstract

In this talk, I will present a selected review of the false discovery rate literature, beginning with the original contribution of Benjamini and Hochberg (1995) and then following a series of papers by Storey, Efron, Genovese and Wasserman, and Sun and Cai. It seems that each school of thought not only made a uniquely important contribution to the literature but also supported the others in building a more complete theory of multiple testing.
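
As a concrete anchor for the Benjamini-Hochberg (1995) step-up procedure mentioned above, here is a minimal sketch in plain NumPy; the simulated p-values and the FDR level alpha = 0.05 are illustrative choices of mine, not material from the talk.

# Minimal sketch of the Benjamini-Hochberg (1995) step-up procedure.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                       # sort p-values ascending
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds              # step-up comparison p_(i) <= i*alpha/m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True            # reject the k smallest p-values
    return reject

# Example: 90 null p-values (uniform) plus 10 signals with very small p-values.
rng = np.random.default_rng(0)
p_vals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 0.001, size=10)])
print("rejections:", benjamini_hochberg(p_vals, alpha=0.05).sum())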

-------------------------------------------------------------

January 23, 2017, 10:00-12:00

Title: An Outsider's Review of Deep Learning

Abstract

It seems that everyone is talking about deep learning. People appreciate the past successes and progress made by deep learning and have great hopes for its potential and future impact on data science. In this talk, I will review deep learning from a statistician's point of view. The focus is on useful ideas from deep learning, especially those unfamiliar to statisticians.
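
Since the review takes a statistician's point of view, the sketch below (my own illustration, not material from the talk) casts a small one-hidden-layer network as a nonlinear regression fitted by gradient descent on squared error; the simulated data, layer width, and learning rate are arbitrary choices.

# One-hidden-layer network as nonlinear regression, fitted by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X) + 0.1 * rng.normal(size=(200, 1))    # simulated regression data

# 16 tanh hidden units: y_hat = W2 * tanh(W1 x + b1) + b2
W1, b1 = rng.normal(size=(1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)

lr = 0.05
for step in range(2000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    y_hat = H @ W2 + b2
    err = y_hat - y                           # residuals
    # Backpropagation: chain rule applied to the squared-error loss.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1, b1, W2, b2 = W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2

print("training MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))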

 

February 22, 2017, 14:00-16:00

Title: An Outsider's Review of Deep Learning – Part II

Abstract

I will review two specific neural networks that have been extremely successful in their intended applications. Convolutional networks and recurrent networks have been applied with great success to image data and sequential data, respectively. They are worthy of special attention not only because of their established successes in practice but also because they probably represent two fundamental aspects of learning, one with a prominent space component and the other with a prominent time component. One goal of this review is to explain the ideas that power these two neural networks.
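
To make the space/time contrast concrete, the following NumPy sketch (my own illustration under stated assumptions, not material from the lectures) shows the weight sharing behind each architecture: a 1-D convolution reuses one filter across positions in space, while a simple recurrent cell reuses one transition matrix across steps in time.

# Weight sharing in space (convolution) versus time (recurrence), in plain NumPy.
import numpy as np

def conv1d(x, w):
    """Slide a single filter w over the sequence x (no padding, stride 1)."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def rnn(xs, W_h, W_x):
    """Elman-style recurrence: the same (W_h, W_x) is applied at every step."""
    h = np.zeros(W_h.shape[0])
    for x_t in xs:
        h = np.tanh(W_h @ h + W_x @ x_t)      # hidden state carries the past forward
    return h

rng = np.random.default_rng(1)
signal = rng.normal(size=20)                  # toy "spatial" input
print("conv output:", conv1d(signal, np.array([0.25, 0.5, 0.25])))

inputs = rng.normal(size=(5, 3))              # 5 time steps of 3-dim input
print("final hidden state:", rnn(inputs, rng.normal(size=(4, 4)) * 0.1,
                                 rng.normal(size=(4, 3)) * 0.1))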

 

Video Playlist





