<p>From the reviews:</p><p>“PhD level students, and researchers and practitioners in statistical learning and machine learning. … text assumes a thorough training in undergraduate statistics and mathematics. Computed examples that include R code are scattered through the text. There are numerous exercises, many with commentary that sets out guidelines for exploration. … The over-riding reason for staying with the independent, symmetric unimodal error model is surely that no one book can cover everything! Within these bounds, this book gives a careful treatment that is encyclopedic in its scope.” (John H. Maindonald, International Statistical Review, Vol. 79 (1), 2011)</p><p>“It is an appropriate textbook for a PhD level course and can also be used as a reference or for independent reading. … an excellent resource for researchers and students interested in DMML. … the authors have done an outstanding job of covering important topics and providing relevant statistical theory and computational resources. I can see myself teaching a statistical learning class using this book and comfortably recommend it to any researcher with a solid mathematical background who wants to be engaged in this field.” (Jeongyoun Ahn, Journal of the American Statistical Association, Vol. 106 (493), March, 2011)</p><p>“This book provides an encyclopedic monograph on this field from a statistical point of view. … A salient feature of this book is its coverage of theoretical aspects of DMML techniques. … Additionally, plenty of exercises and computational examples with R codes are provided to help one brush up on the technical content of the text.” (Kazuho Watanabe, Mathematical Reviews, Issue 2012 i)</p>

The idea for this book came from the time the authors spent at the Statistical and Applied Mathematical Sciences Institute (SAMSI) in Research Triangle Park in North Carolina starting in fall 2003. The first author was there for a total of two years, the first year as a Duke/SAMSI Research Fellow. The second author was there for a year as a Post-Doctoral Scholar. The third author has the great fortune to be in RTP permanently. SAMSI was – and remains – an incredibly rich intellectual environment with a general atmosphere of free-wheeling inquiry that cuts across established fields. SAMSI encourages creativity: It is the kind of place where researchers can be found at work in the small hours of the morning – computing, interpreting computations, and developing methodology. Visiting SAMSI is a unique and wonderful experience. The people most responsible for making SAMSI the great success it is include Jim Berger, Alan Karr, and Steve Marron. We would also like to express our gratitude to Dalene Stangl and all the others from Duke, UNC-Chapel Hill, and NC State, as well as to the visitors (short and long term) who were involved in the SAMSI programs. It was a magical time we remember with ongoing appreciation.
Contents: Variability, Information, and Prediction; Local Smoothers; Spline Smoothing; New Wave Nonparametrics; Supervised Learning: Partition Methods; Alternative Nonparametrics; Computational Comparisons; Unsupervised Learning: Clustering; Learning in High Dimensions; Variable Selection; Multiple Testing.
This book is a thorough introduction to the most important topics in data mining and machine learning. It begins with a detailed review of classical function estimation and proceeds with chapters on nonlinear regression, classification, and ensemble methods. The final chapters focus on clustering, dimension reduction, variable selection, and multiple comparisons. All these topics have undergone extraordinarily rapid development in recent years, and this treatment offers a modern perspective emphasizing the most recent contributions. The presentation of foundational results is detailed and includes many accessible proofs not readily available outside original sources. While the orientation is conceptual and theoretical, the main points are regularly reinforced by computational comparisons. Intended primarily as a graduate-level textbook for statistics, computer science, and electrical engineering students, this book assumes only a strong foundation in undergraduate statistics and mathematics, and facility with using R packages. The text has a wide variety of problems, many of an exploratory nature. There are numerous computed examples, complete with code, so that further computations can be carried out readily. The book also serves as a handbook for researchers who want a conceptual overview of the central topics in data mining and machine learning.

Bertrand Clarke is a Professor of Statistics in the Department of Medicine, Department of Epidemiology and Public Health, and the Center for Computational Sciences at the University of Miami. He has been on the Editorial Board of the Journal of the American Statistical Association, the Journal of Statistical Planning and Inference, and Statistical Papers. He is co-winner, with Andrew Barron, of the 1990 Browder J. Thompson Prize from the Institute of Electrical and Electronics Engineers.

Ernest Fokoue is an Assistant Professor of Statistics at Kettering University. He has also taught at Ohio State University and been a long-term visitor at the Statistical and Applied Mathematical Sciences Institute (SAMSI), where he was a Post-Doctoral Research Fellow in the Data Mining and Machine Learning Program. In 2000, he was the winner of the Young Researcher Award from the International Association for Statistical Computing.

Hao Helen Zhang is an Associate Professor of Statistics in the Department of Statistics at North Carolina State University. For 2003-2004, she was a Research Fellow at SAMSI, and in 2007 she won a Faculty Early Career Development Award from the National Science Foundation. She is on the Editorial Board of the Journal of the American Statistical Association and Biometrics.
This is a more theoretical book on the same subject as The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman. Request lecturer material: sn.pub/lecturer-material

Product details

ISBN: 9781461417071
Published: 2011-12-02
Publisher: Springer-Verlag New York Inc.
Height: 235 mm
Width: 155 mm
Audience level: Research, UP, 05
Language: English
Format: Paperback