IUKL Library
Karambelkar, Hrishikesh Vijay.

Apache Hadoop 3 Quick Start Guide : Learn about Big Data Processing and Analytics. - 1 online resource (214 pages)

Cover -- Title Page -- Copyright and Credits -- Dedication -- Packt Upsell -- Contributors -- Table of Contents -- Preface -- Chapter 1: Hadoop 3.0 - Background and Introduction -- How it all started -- What Hadoop is and why it is important -- How Apache Hadoop works -- Resource Manager -- Node Manager -- YARN Timeline Service version 2 -- NameNode -- DataNode -- Hadoop 3.0 releases and new features -- Choosing the right Hadoop distribution -- Cloudera Hadoop distribution -- Hortonworks Hadoop distribution -- MapR Hadoop distribution -- Summary -- Chapter 2: Planning and Setting Up Hadoop Clusters -- Technical requirements -- Prerequisites for Hadoop setup -- Preparing hardware for Hadoop -- Readying your system -- Installing the prerequisites -- Working across nodes without passwords (SSH in keyless) -- Downloading Hadoop -- Running Hadoop in standalone mode -- Setting up a pseudo Hadoop cluster -- Planning and sizing clusters -- Initial load of data -- Organizational data growth -- Workload and computational requirements -- High availability and fault tolerance -- Velocity of data and other factors -- Setting up Hadoop in cluster mode -- Installing and configuring HDFS in cluster mode -- Setting up YARN in cluster mode -- Diagnosing the Hadoop cluster -- Working with log files -- Cluster debugging and tuning tools -- JPS (Java Virtual Machine Process Status) -- JStack -- Summary -- Chapter 3: Deep Dive into the Hadoop Distributed File System -- Technical requirements -- How HDFS works -- Key features of HDFS -- Achieving multi-tenancy in HDFS -- Snapshots of HDFS -- Safe mode -- Hot swapping -- Federation -- Intra-DataNode balancer -- Data flow patterns of HDFS -- HDFS as primary storage with cache -- HDFS as archival storage -- HDFS as historical storage -- HDFS as a backbone -- HDFS configuration files -- Hadoop filesystem CLIs -- Working with HDFS user commands -- Working with Hadoop shell commands -- Working with data structures in HDFS -- Understanding SequenceFile -- MapFile and its variants -- Summary -- Chapter 4: Developing MapReduce Applications -- Technical requirements -- How MapReduce works -- What is MapReduce? -- An example of MapReduce -- Configuring a MapReduce environment -- Working with mapred-site.xml -- Working with Job history server -- RESTful APIs for Job history server -- Understanding Hadoop APIs and packages -- Setting up a MapReduce project -- Setting up an Eclipse project -- Deep diving into MapReduce APIs -- Configuring MapReduce jobs -- Understanding input formats -- Understanding output formats -- Working with Mapper APIs -- Working with the Reducer API -- Compiling and running MapReduce jobs -- Triggering the job remotely -- Using Tool and ToolRunner -- Unit testing of MapReduce jobs -- Failure handling in MapReduce -- Streaming in MapReduce programming -- Summary -- Chapter 5: Building Rich YARN Applications -- Technical requirements -- Understanding YARN architecture -- Key features of YARN -- Resource models in YARN -- YARN federation -- RESTful APIs -- Configuring the YARN environment in a cluster -- Working with YARN distributed CLI -- Deep dive with YARN application framework -- Setting up YARN projects -- Writing your YARN application with YarnClient -- Writing a custom application master -- Building and monitoring a YARN application on a cluster -- Building a YARN application -- Monitoring your application -- Summary -- Chapter 6: Monitoring and Administration of a Hadoop Cluster -- Roles and responsibilities of Hadoop administrators -- Planning your distributed cluster -- Hadoop applications, ports, and URLs -- Resource management in Hadoop -- Fair Scheduler -- Capacity Scheduler -- High availability of Hadoop -- High availability for NameNode -- High availability for Resource Manager -- Securing Hadoop clusters -- Securing your Hadoop application -- Securing your data in HDFS -- Performing routine tasks -- Working with safe mode -- Archiving in Hadoop -- Commissioning and decommissioning of nodes -- Working with Hadoop Metrics -- Summary -- Chapter 7: Demystifying Hadoop Ecosystem Components -- Technical requirements -- Understanding Hadoop's Ecosystem -- Working with Apache Kafka -- Writing Apache Pig scripts -- Pig Latin -- User-defined functions (UDFs) -- Transferring data with Sqoop -- Writing Flume jobs -- Understanding Hive -- Interacting with Hive - CLI, beeline, and web interface -- Hive as a transactional system -- Using HBase for NoSQL storage -- Summary -- Chapter 8: Advanced Topics in Apache Hadoop -- Technical requirements -- Hadoop use cases in industries -- Healthcare -- Oil and Gas -- Finance -- Government Institutions -- Telecommunications -- Retail -- Insurance -- Advanced Hadoop data storage file formats -- Parquet -- Apache ORC -- Avro -- Real-time streaming with Apache Storm -- Data analytics with Apache Spark -- Summary -- Other Books You May Enjoy -- Index.

Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across clusters of machines rather than relying on a single large computer to store and process the data. This book gets you started with the Hadoop ecosystem and introduces its main technical topics, such as MapReduce, YARN, and HDFS.

9781788994347


Apache Hadoop.
Big data.
Data mining.


Electronic books.

QA76.9.D5 .V553 2018

004.36