IUKL Library

Data Mining and Business Analytics with R.

By: Ledolter, Johannes.
Material type: Book
Series: New York Academy of Sciences Ser
Publisher: Newark : John Wiley & Sons, Incorporated, 2013
Copyright date: ©2013
Description: 1 online resource (365 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781118593745
Genre/Form: Electronic books
Contents:
Intro -- Preface -- Acknowledgments
1. Introduction -- Reference
2. Processing the Information and Getting to Know Your Data -- 2.1 Example 1: 2006 Birth Data -- 2.2 Example 2: Alumni Donations -- 2.3 Example 3: Orange Juice -- References
3. Standard Linear Regression -- 3.1 Estimation in R -- 3.2 Example 1: Fuel Efficiency of Automobiles -- 3.3 Example 2: Toyota Used-Car Prices -- Appendix 3.A The Effects of Model Overfitting on the Average Mean Square Error of the Regression Prediction -- References
4. Local Polynomial Regression: A Nonparametric Regression Approach -- 4.1 Model Selection -- 4.2 Application to Density Estimation and the Smoothing of Histograms -- 4.3 Extension to the Multiple Regression Model -- 4.4 Examples and Software -- References
5. Importance of Parsimony in Statistical Modeling -- 5.1 How Do We Guard Against False Discovery -- References
6. Penalty-Based Variable Selection in Regression Models with Many Parameters (LASSO) -- 6.1 Example 1: Prostate Cancer -- 6.2 Example 2: Orange Juice -- References
7. Logistic Regression -- 7.1 Building a Linear Model for Binary Response Data -- 7.2 Interpretation of the Regression Coefficients in a Logistic Regression Model -- 7.3 Statistical Inference -- 7.4 Classification of New Cases -- 7.5 Estimation in R -- 7.6 Example 1: Death Penalty Data -- 7.7 Example 2: Delayed Airplanes -- 7.8 Example 3: Loan Acceptance -- 7.9 Example 4: German Credit Data -- References
8. Binary Classification, Probabilities, and Evaluating Classification Performance -- 8.1 Binary Classification -- 8.2 Using Probabilities to Make Decisions -- 8.3 Sensitivity and Specificity -- 8.4 Example: German Credit Data
9. Classification Using a Nearest Neighbor Analysis -- 9.1 The k-Nearest Neighbor Algorithm -- 9.2 Example 1: Forensic Glass -- 9.3 Example 2: German Credit Data -- Reference
10. The Naïve Bayesian Analysis: A Model for Predicting a Categorical Response from Mostly Categorical Predictor Variables -- 10.1 Example: Delayed Airplanes -- Reference
11. Multinomial Logistic Regression -- 11.1 Computer Software -- 11.2 Example 1: Forensic Glass -- 11.3 Example 2: Forensic Glass Revisited -- Appendix 11.A Specification of a Simple Triplet Matrix -- References
12. More on Classification and a Discussion on Discriminant Analysis -- 12.1 Fisher's Linear Discriminant Function -- 12.2 Example 1: German Credit Data -- 12.3 Example 2: Fisher Iris Data -- 12.4 Example 3: Forensic Glass Data -- 12.5 Example 4: MBA Admission Data -- Reference
13. Decision Trees -- 13.1 Example 1: Prostate Cancer -- 13.2 Example 2: Motorcycle Acceleration -- 13.3 Example 3: Fisher Iris Data Revisited
14. Further Discussion on Regression and Classification Trees, Computer Software, and Other Useful Classification Methods -- 14.1 R Packages for Tree Construction -- 14.2 Chi-Square Automatic Interaction Detection (CHAID) -- 14.3 Ensemble Methods: Bagging, Boosting, and Random Forests -- 14.4 Support Vector Machines (SVM) -- 14.5 Neural Networks -- 14.6 The R Package Rattle: A Useful Graphical User Interface for Data Mining -- References
15. Clustering -- 15.1 k-Means Clustering -- 15.2 Another Way to Look at Clustering: Applying the Expectation-Maximization (EM) Algorithm to Mixtures of Normal Distributions -- 15.3 Hierarchical Clustering Procedures -- References
16. Market Basket Analysis: Association Rules and Lift -- 16.1 Example 1: Online Radio -- 16.2 Example 2: Predicting Income -- References
17. Dimension Reduction: Factor Models and Principal Components -- 17.1 Example 1: European Protein Consumption -- 17.2 Example 2: Monthly US Unemployment Rates
18. Reducing the Dimension in Regressions with Multicollinear Inputs: Principal Components Regression and Partial Least Squares -- 18.1 Three Examples -- References
19. Text as Data: Text Mining and Sentiment Analysis -- 19.1 Inverse Multinomial Logistic Regression -- 19.2 Example 1: Restaurant Reviews -- 19.3 Example 2: Political Sentiment -- Appendix 19.A Relationship Between the Gentzkow Shapiro Estimate of "Slant" and Partial Least Squares -- References
20. Network Data -- 20.1 Example 1: Marriage and Power in Fifteenth Century Florence -- 20.2 Example 2: Connections in a Friendship Network -- References
Appendix A: Exercises -- Exercise 1 -- Exercise 2 -- Exercise 3 -- Exercise 4 -- Exercise 5 -- Exercise 6 -- Exercise 7
Appendix B: References
Index.
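
As a minimal sketch of the workflow covered in Chapters 7 and 8 (logistic regression, then classification judged by sensitivity and specificity), the following runs in base R. The simulated data frame and the names credit, x1, and x2 are hypothetical stand-ins, not the book's German Credit Data:

## Simulate a binary response from a known logistic model
## (stand-in data; the book uses real data sets)
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
p  <- 1 / (1 + exp(-(-0.5 + 1.2 * x1 - 0.8 * x2)))
y  <- rbinom(n, 1, p)
credit <- data.frame(y = y, x1 = x1, x2 = x2)

## Fit a logistic regression (Sections 7.1 and 7.5);
## coefficients are on the log-odds scale (Section 7.2)
fit  <- glm(y ~ x1 + x2, data = credit, family = binomial)
summary(fit)

## Classify new cases at a 0.5 probability cutoff (Section 7.4)
phat <- predict(fit, type = "response")   # estimated P(y = 1)
yhat <- as.integer(phat > 0.5)

## Sensitivity and specificity from the confusion table (Section 8.3)
tab  <- table(actual = credit$y, predicted = yhat)
sensitivity <- tab["1", "1"] / sum(tab["1", ])   # true positives / actual positives
specificity <- tab["0", "0"] / sum(tab["0", ])   # true negatives / actual negatives
c(sensitivity = sensitivity, specificity = specificity)

Raising or lowering the 0.5 cutoff trades sensitivity against specificity, which is the theme of Sections 8.2 and 8.3.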
Holdings:
Item type: E-book
Current location: IUKL Library
Collection: Subscription
URL: https://ebookcentral.proquest.com/lib/kliuc-ebooks/detail.action?docID=7103844
Copy number: 1
Status: Available
Total holds: 0


Description based on publisher-supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2022. Available via World Wide Web. Access may be limited to ProQuest Ebook Central-affiliated libraries.
