000 - LEADER |
fixed length control field |
06989nam a22004813i 4500 |
001 - CONTROL NUMBER |
control field |
EBC5573402 |
003 - CONTROL NUMBER IDENTIFIER |
control field |
MiAaPQ |
005 - DATE AND TIME OF LATEST TRANSACTION |
control field |
20220331084434.0 |
007 - PHYSICAL DESCRIPTION FIXED FIELD--GENERAL INFORMATION |
fixed length control field |
cr cnu|||||||| |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION |
fixed length control field |
220328s2018 xx o ||||0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
International Standard Book Number |
9781788994347 |
Qualifying information |
(electronic bk.) |
|
Cancelled/invalid ISBN |
9781788999830 |
035 ## - SYSTEM CONTROL NUMBER |
System control number |
(MiAaPQ)EBC5573402 |
|
System control number |
(Au-PeEL)EBL5573402 |
|
System control number |
(CaPaEBR)ebr11630297 |
|
System control number |
(OCoLC)1063855629 |
040 ## - CATALOGING SOURCE |
Original cataloging agency |
MiAaPQ |
Language of cataloging |
eng |
Description conventions |
rda |
Description conventions |
pn |
Transcribing agency |
MiAaPQ |
Modifying agency |
MiAaPQ |
050 #4 - LIBRARY OF CONGRESS CALL NUMBER |
Classification number |
QA76.9.D5 .V553 2018 |
082 0# - DEWEY DECIMAL CLASSIFICATION NUMBER |
Classification number |
004.36 |
100 1# - MAIN ENTRY--PERSONAL NAME |
Personal name |
Karambelkar, Hrishikesh Vijay. |
245 10 - TITLE STATEMENT |
Title |
Apache Hadoop 3 Quick Start Guide : |
Remainder of title |
Learn about Big Data Processing and Analytics. |
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE |
Place of production, publication, distribution, manufacture |
Birmingham : |
Name of producer, publisher, distributor, manufacturer |
Packt Publishing, Limited, |
Date of production, publication, distribution, manufacture, copyright notice |
2018. |
|
Date of production, publication, distribution, manufacture, copyright notice |
©2018. |
300 ## - PHYSICAL DESCRIPTION |
Extent |
1 online resource (214 pages) |
336 ## - CONTENT TYPE |
Content type term |
text |
Content type code |
txt |
Source |
rdacontent |
337 ## - MEDIA TYPE |
Media type term |
computer |
Media type code |
c |
Source |
rdamedia |
338 ## - CARRIER TYPE |
Carrier type term |
online resource |
Carrier type code |
cr |
Source |
rdacarrier |
505 0# - FORMATTED CONTENTS NOTE |
Formatted contents note |
Cover -- Title Page -- Copyright and Credits -- Dedication -- Packt Upsell -- Contributors -- Table of Contents -- Preface -- Chapter 1: Hadoop 3.0 - Background and Introduction -- How it all started -- What Hadoop is and why it is important -- How Apache Hadoop works -- Resource Manager -- Node Manager -- YARN Timeline Service version 2 -- NameNode -- DataNode -- Hadoop 3.0 releases and new features -- Choosing the right Hadoop distribution -- Cloudera Hadoop distribution -- Hortonworks Hadoop distribution -- MapR Hadoop distribution -- Summary -- Chapter 2: Planning and Setting Up Hadoop Clusters -- Technical requirements -- Prerequisites for Hadoop setup -- Preparing hardware for Hadoop -- Readying your system -- Installing the prerequisites -- Working across nodes without passwords (SSH in keyless) -- Downloading Hadoop -- Running Hadoop in standalone mode -- Setting up a pseudo Hadoop cluster -- Planning and sizing clusters -- Initial load of data -- Organizational data growth -- Workload and computational requirements -- High availability and fault tolerance -- Velocity of data and other factors -- Setting up Hadoop in cluster mode -- Installing and configuring HDFS in cluster mode -- Setting up YARN in cluster mode -- Diagnosing the Hadoop cluster -- Working with log files -- Cluster debugging and tuning tools -- JPS (Java Virtual Machine Process Status) -- JStack -- Summary -- Chapter 3: Deep Dive into the Hadoop Distributed File System -- Technical requirements -- How HDFS works -- Key features of HDFS -- Achieving multi tenancy in HDFS -- Snapshots of HDFS -- Safe mode -- Hot swapping -- Federation -- Intra-DataNode balancer -- Data flow patterns of HDFS -- HDFS as primary storage with cache -- HDFS as archival storage -- HDFS as historical storage -- HDFS as a backbone -- HDFS configuration files -- Hadoop filesystem CLIs. |
|
Formatted contents note |
Working with HDFS user commands -- Working with Hadoop shell commands -- Working with data structures in HDFS -- Understanding SequenceFile -- MapFile and its variants -- Summary -- Chapter 4: Developing MapReduce Applications -- Technical requirements -- How MapReduce works -- What is MapReduce? -- An example of MapReduce -- Configuring a MapReduce environment -- Working with mapred-site.xml -- Working with Job history server -- RESTful APIs for Job history server -- Understanding Hadoop APIs and packages -- Setting up a MapReduce project -- Setting up an Eclipse project -- Deep diving into MapReduce APIs -- Configuring MapReduce jobs -- Understanding input formats -- Understanding output formats -- Working with Mapper APIs -- Working with the Reducer API -- Compiling and running MapReduce jobs -- Triggering the job remotely -- Using Tool and ToolRunner -- Unit testing of MapReduce jobs -- Failure handling in MapReduce -- Streaming in MapReduce programming -- Summary -- Chapter 5: Building Rich YARN Applications -- Technical requirements -- Understanding YARN architecture -- Key features of YARN -- Resource models in YARN -- YARN federation -- RESTful APIs -- Configuring the YARN environment in a cluster -- Working with YARN distributed CLI -- Deep dive with YARN application framework -- Setting up YARN projects -- Writing your YARN application with YarnClient -- Writing a custom application master -- Building and monitoring a YARN application on a cluster -- Building a YARN application -- Monitoring your application -- Summary -- Chapter 6: Monitoring and Administration of a Hadoop Cluster -- Roles and responsibilities of Hadoop administrators -- Planning your distributed cluster -- Hadoop applications, ports, and URLs -- Resource management in Hadoop -- Fair Scheduler -- Capacity Scheduler -- High availability of Hadoop. |
|
Formatted contents note |
High availability for NameNode -- High availability for Resource Manager -- Securing Hadoop clusters -- Securing your Hadoop application -- Securing your data in HDFS -- Performing routine tasks -- Working with safe mode -- Archiving in Hadoop -- Commissioning and decommissioning of nodes -- Working with Hadoop Metric -- Summary -- Chapter 7: Demystifying Hadoop Ecosystem Components -- Technical requirements -- Understanding Hadoop's Ecosystem -- Working with Apache Kafka -- Writing Apache Pig scripts -- Pig Latin -- User-defined functions (UDFs) -- Transferring data with Sqoop -- Writing Flume jobs -- Understanding Hive -- Interacting with Hive - CLI, beeline, and web interface -- Hive as a transactional system -- Using HBase for NoSQL storage -- Summary -- Advanced Topics in Apache Hadoop -- Technical requirements -- Hadoop use cases in industries -- Healthcare -- Oil and Gas -- Finance -- Government Institutions -- Telecommunications -- Retail -- Insurance -- Advanced Hadoop data storage file formats -- Parquet -- Apache ORC -- Avro -- Real-time streaming with Apache Storm -- Data analytics with Apache Spark -- Summary -- Other Books You May Enjoy -- Index. |
520 ## - SUMMARY, ETC. |
Summary, etc. |
Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across clusters of machines, rather than relying on a single large computer to store and process the data. This book gets you started with the Hadoop ecosystem and introduces its main technical topics: MapReduce, YARN, and HDFS. |
588 ## - SOURCE OF DESCRIPTION NOTE |
Source of description note |
Description based on publisher supplied metadata and other sources. |
590 ## - LOCAL NOTE (RLIN) |
Local note |
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2022. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Apache Hadoop. |
|
Topical term or geographic name as entry element |
Big data. |
|
Topical term or geographic name as entry element |
Data mining. |
655 #4 - INDEX TERM--GENRE/FORM |
Genre/form data or focus term |
Electronic books. |
776 08 - ADDITIONAL PHYSICAL FORM ENTRY |
Display text |
Print version: |
Main entry heading |
Karambelkar, Hrishikesh Vijay |
Title |
Apache Hadoop 3 Quick Start Guide |
Place, publisher, and date of publication |
Birmingham : Packt Publishing, Limited, c2018 |
International Standard Book Number |
9781788999830 |
797 2# - LOCAL ADDED ENTRY--CORPORATE NAME (RLIN) |
Corporate name or jurisdiction name as entry element |
ProQuest (Firm) |
856 40 - ELECTRONIC LOCATION AND ACCESS |
Uniform Resource Identifier |
https://ebookcentral.proquest.com/lib/kliuc-ebooks/detail.action?docID=5573402 |
Public note |
Click to View |
942 ## - ADDED ENTRY ELEMENTS (KOHA) |
Source of classification or shelving scheme |
|
Koha item type |
E-book |
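As a quick illustration of how the tagged MARC fields above (tag, indicators, coded subfields) map to structured data, here is a minimal sketch in plain Python. It deliberately uses dicts and tuples rather than a real MARC library such as pymarc; the field values are copied from the 001, 020, and 245 fields of this record, and the `get_subfield` helper is a hypothetical name chosen for illustration.

```python
# Minimal sketch of how the tagged MARC fields above map to structured data.
# Plain dicts and tuples stand in for a real MARC library such as pymarc;
# the values below are copied from the 001, 020, and 245 fields of this record.

record = {
    "leader": "06989nam a22004813i 4500",
    "fields": [
        # Control fields (001-008) carry a single data value and no subfields.
        {"tag": "001", "data": "EBC5573402"},
        # Data fields have two indicator characters plus coded subfields.
        {"tag": "020", "ind": "##", "subfields": [
            ("a", "9781788994347"),        # ISBN
            ("q", "(electronic bk.)"),     # qualifying information
        ]},
        {"tag": "245", "ind": "10", "subfields": [
            ("a", "Apache Hadoop 3 Quick Start Guide :"),
            ("b", "Learn about Big Data Processing and Analytics."),
        ]},
    ],
}

def get_subfield(record, tag, code):
    """Return the first value of subfield `code` in field `tag`, else None."""
    for field in record["fields"]:
        if field["tag"] == tag:
            for sf_code, value in field.get("subfields", []):
                if sf_code == code:
                    return value
    return None

print(get_subfield(record, "245", "a"))   # Apache Hadoop 3 Quick Start Guide :
```

A real catalog system would of course parse the ISO 2709 or MARCXML serialization instead of hand-building dicts, but the lookup pattern (find field by tag, then subfield by code) is the same.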