If you’ve been asked to maintain large and complex Hadoop clusters, this book is a must. Demand for operations-specific material has skyrocketed now that Hadoop is becoming the de facto standard for truly large-scale data processing in the data center. Eric Sammer, Principal Solution Architect at Cloudera, shows you the particulars of running Hadoop in production, from planning, installing, and configuring the system to providing ongoing maintenance. Rather than run through all possible scenarios, this pragmatic operations guide calls out what works, as demonstrated in critical deployments.

* Get a high-level overview of HDFS and MapReduce: why they exist and how they work
* Plan a Hadoop deployment, from hardware and OS selection to network requirements
* Learn setup and configuration details with a list of critical properties
* Manage resources by sharing a cluster across multiple groups
* Get a runbook of the most common cluster maintenance tasks
* Monitor Hadoop clusters, and learn troubleshooting with the help of real-world war stories
* Use basic tools and techniques to handle backup and catastrophic failure
Eric Sammer is currently a Principal Solution Architect at Cloudera, where he helps customers plan, deploy, develop for, and use Hadoop and related projects at scale. His background is in the development and operation of distributed, highly concurrent data ingest and processing systems. He has been active in the open source community and has contributed to a large number of projects over the last decade.