If your organization is about to enter the world of big data, you not only need to decide whether Apache Hadoop is the right platform to use, but also which of its many components are best suited to your task. This field guide makes the exercise manageable by breaking down the Hadoop ecosystem into short, digestible sections. You'll quickly understand how Hadoop's projects, subprojects, and related technologies work together.

Each chapter introduces a different topic—such as core technologies or data transfer—and explains why certain components may or may not be useful for particular needs. When it comes to data, Hadoop is a whole new ballgame, but with this handy reference, you'll have a good grasp of the playing field.

Topics include:

* Core technologies—Hadoop Distributed File System (HDFS), MapReduce, YARN, and Spark
* Database and data management—Cassandra, HBase, MongoDB, and Hive
* Serialization—Avro, JSON, and Parquet
* Management and monitoring—Puppet, Chef, ZooKeeper, and Oozie
* Analytic helpers—Pig, Mahout, and MLlib
* Data transfer—Sqoop, Flume, distcp, and Storm
* Security, access control, and auditing—Sentry, Kerberos, and Knox
* Cloud computing and virtualization—Serengeti, Docker, and Whirr
Kevin Sitto is a Field Solutions Engineer with Pivotal Software, providing consulting services to help folks understand and address their big data needs. He lives in Maryland with his wife and two kids and enjoys making homebrew beer when he's not writing books about big data.

Marshall Presser is a Field Chief Technology Officer for Pivotal and is based in McLean, VA. In addition to helping customers solve complex analytic problems with the Greenplum Database, he leads the Hadoop Virtual Field Team, working on issues of integrating Hadoop with relational databases. Prior to coming to Pivotal (formerly Greenplum), he spent 12 years at Oracle, specializing in high availability, business continuity, clustering, parallel database technology, disaster recovery, and large-scale database systems. Marshall has also worked for a number of hardware vendors implementing clusters and other parallel architectures. His background includes parallel computation, operating system and compiler development, as well as private consulting for organizations in health care, financial services, and federal and state governments. Marshall holds a B.A. in Mathematics and an M.A. in Economics and Statistics from the University of Pennsylvania and an M.Sc. in Computing from Imperial College, London.