Irfan Elahi
Scala Programming for Big Data Analytics
Get Started With Big Data Analytics Using Apache Spark
1st ed. 2019. xxi, 306 pp. 125 b/w illustrations. 235 mm
Publisher/Year: SPRINGER, BERLIN; APRESS 2019
ISBN: 1-484-24809-0 (1484248090)
New ISBN (ISBN-13): 978-1-484-24809-6 (9781484248096)
Gain the key language concepts and programming techniques of Scala in the context of big data analytics and Apache Spark. The book begins by introducing you to Scala and establishes a firm contextual understanding of why you should learn this language, how it compares to Java, and how Scala relates to Apache Spark for big data analytics. Next, you'll set up the Scala environment in preparation for examining your first Scala programs. This is followed by sections on Scala fundamentals including mutable/immutable variables, the type hierarchy system, control flow expressions, and code blocks.
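For a rough sense of what those fundamentals look like in practice, here is a minimal sketch (illustrative only, not a listing from the book) showing immutable and mutable variables, type inference, an if expression, and a code block used as a value:

// Illustrative sketch of Scala fundamentals; names and values are hypothetical.
object FundamentalsSketch extends App {
  val greeting: String = "Hello, Scala"  // immutable variable; reassignment is a compile error
  var counter = 0                        // mutable variable; type Int is inferred
  counter += 1

  // Control flow is expression-based: if yields a value.
  val parity = if (counter % 2 == 0) "even" else "odd"

  // A code block is itself an expression; its last expression is its value.
  val summary = {
    val base = s"$greeting, counter is $parity"
    base.toUpperCase
  }

  println(summary)  // HELLO, SCALA, COUNTER IS ODD
}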
The author discusses functions at length and highlights a number of associated concepts such as functional programming and anonymous functions. The book then delves deeper into Scala's powerful collections system, because many of Apache Spark's APIs bear a strong resemblance to Scala collections.
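As an illustration of that resemblance (a sketch under assumed names, not the book's own example), named and anonymous functions can be combined with the standard collection operators that Spark's APIs later mirror:

// Illustrative only: functions and collections in the style later mirrored by Spark's APIs.
object FunctionsAndCollectionsSketch extends App {
  // A named function...
  def square(x: Int): Int = x * x

  // ...and an anonymous function bound to a value.
  val isEven: Int => Boolean = n => n % 2 == 0

  val numbers = List(1, 2, 3, 4, 5)

  // map/filter/reduce on Scala collections read much like Spark transformations and actions.
  val sumOfEvenSquares = numbers
    .map(square)     // transform each element
    .filter(isEven)  // keep only the even squares
    .reduce(_ + _)   // aggregate to a single value

  println(s"Sum of even squares: $sumOfEvenSquares")  // 20 (4 + 16)
}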
Along the way you'll see the development life cycle of a Scala program. This involves compiling and building programs using the industry-standard Scala Build Tool (SBT). You'll cover guidelines related to dependency management using SBT, as this is critical for building large Apache Spark applications. Scala Programming for Big Data Analytics concludes by demonstrating how you can use these concepts to write programs that run on the Apache Spark framework. These programs take advantage of distributed and parallel computing, which is critical for big data analytics.
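To make that workflow concrete, here is a hedged sketch of what an SBT build definition and a minimal Spark application can look like; the project name, file paths, version numbers, and commands below are assumptions for illustration, not taken from the book:

// build.sbt -- dependency management with SBT (artifact and version numbers are assumptions).
name := "spark-sketch"
scalaVersion := "2.12.18"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.5.1" % "provided"

// src/main/scala/SparkSketch.scala -- a minimal distributed word count on Spark.
import org.apache.spark.sql.SparkSession

object SparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-sketch").getOrCreate()
    import spark.implicits._

    // Read a (hypothetical) text file, split it into words, and count occurrences in parallel.
    val counts = spark.read.textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    counts.show(20)
    spark.stop()
  }
}

// Typical build-and-submit cycle (commands and jar name are illustrative):
//   sbt package
//   spark-submit --class SparkSketch target/scala-2.12/spark-sketch_2.12-0.1.0-SNAPSHOT.jar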
What You Will Learn
See the fundamentals of Scala as a general-purpose programming language
Understand functional programming and object-oriented programming constructs in Scala
Use Scala collections and functions
Develop, package, and run Apache Spark applications for big data analytics
Who This Book Is For
Data scientists, data analysts and data engineers who intend to use Apache Spark for large-scale analytics.
Irfan Elahi is a senior consultant at Deloitte Australia specializing in big data and machine learning. His primary focus is on using big data and machine learning to support business growth, with strong, multifaceted ties to the telecommunications, energy, retail, and media industries. He has worked on a number of projects in Australia to design, prototype, develop, and deploy production-grade big data solutions in Amazon Web Services (AWS) and Azure, supporting use cases that include enterprise data warehousing, ETL offloading, analytics, batch processing, and stream processing, while employing leading commercial Hadoop solutions such as Cloudera and Hortonworks. He has worked closely with clients' systems and software engineering teams, using DevOps to enhance continuous integration and continuous deployment (CI/CD) processes and to manage a Hadoop cluster's operations and security.
In addition to his technology competencies, Irfan has recently presented at the DataWorks Summit in Sydney on in-memory big data technologies and at a number of meetups around the world. He also remains involved in delivering knowledge-transfer sessions, training, and workshops on big data and machine learning, both within his firm and for clients. He has also launched Udemy courses on Apache Spark for big data analytics and on R programming for data science, with more than 18,000 students from 145 countries enrolled.