CSC's trainings and events have moved

Find our upcoming trainings and events at www.csc.fi.

This site is an archive version and is no longer updated.

Analysing large datasets with Apache Spark
Date: 16.11.2017 9:00 - 17.11.2017 16:00
Location details: The event is organised at the CSC Training Facilities located in the premises of CSC at Keilaranta 14, Espoo, Finland. The best way to reach us is by public transportation; more detailed travel tips are available.
Language: English
Lecturers: Apurva Nandan (CSC), Tommi Jalkanen (teaching assistant, CSC)
Price:
  • Free of charge for participants from Finnish academic institutions.
  • Free of charge for others.
The course materials, lunches as well as morning and afternoon coffees are free of charge.

THE COURSE IS FULLY BOOKED!
If you wish to be added to the waiting list, please contact patc@csc.fi. The seats are filled in registration order. If you have registered for this course and are not able to attend, please cancel your registration in advance.
Additional Information
This course is part of the PRACE Advanced Training Centres (PATCs) activity. Please visit the PRACE Training portal for further information about the course. For questions about the course content, please contact apurva.nandan@csc.fi; for practicalities, patc@csc.fi.

Description

As the volume of data used in analysis tasks grows rapidly, processing it with standard single-machine methods becomes increasingly challenging. Enter Spark, a high-performance distributed computing framework that lets us tackle big-data problems by distributing the workload across a cluster of machines.

This two-day course covers Spark's technical architecture and use cases, setting it up for your own work, best practices, and programming. The first day gives an overview of the architectural concepts and of programming with Spark's fundamental data structure, the Resilient Distributed Dataset (RDD). The second day focuses on Spark's SQL module, which lets the user analyse data held in Spark's distributed, table-like collection (DataFrames) using traditional SQL queries.
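The "distribute the workload" idea in the description can be sketched in a few lines of plain Python (no Spark involved; all names here are illustrative, not Spark API): split the data into partitions, process each partition independently, and combine the results. Spark follows the same pattern, but runs each partition on a different machine and handles scheduling and failures for you.

```python
# A minimal sketch of the partition -> process -> combine idea
# (plain Python; the function names are illustrative, not Spark API).

def partition(data, n):
    """Split a list into at most n roughly equal chunks ("partitions")."""
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_partition(part):
    # Each worker would run this on its own slice, independently.
    return [x * x for x in part]

data = list(range(10))
parts = partition(data, 4)                       # slices of the data
results = [process_partition(p) for p in parts]  # in Spark: run in parallel
combined = [x for part in results for x in part]
```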

Learning outcome

After this course you should be able to write simple to intermediate Spark programs using RDDs and DataFrames/SQL.

Prerequisites

Basic programming knowledge is recommended (ideally in Python).

Please note: this is not a regular programming course. Participants will be expected to learn emerging concepts in big data / distributed processing, which may be quite different from the concepts of a general-purpose programming language.
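One example of such a concept is lazy evaluation: a Spark transformation only describes a computation, and nothing is executed until an action forces a result. A rough plain-Python analogy using generators (this is not Spark code):

```python
def lazy_map(func, source):
    # "Transformation": builds a generator, computes nothing yet.
    return (func(x) for x in source)

def lazy_filter(pred, source):
    # Also a "transformation": still no work done.
    return (x for x in source if pred(x))

data = range(1, 6)
pipeline = lazy_map(lambda x: x * 10, lazy_filter(lambda x: x % 2 == 1, data))
# At this point nothing has been computed.
result = list(pipeline)  # the "action": forces evaluation -> [10, 30, 50]
```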

Program

Day 1, Thursday 16.11

  •    09.00 – 09.30 Overview and architecture of Spark
  •    09.30 – 10.15 Basics of RDDs + Demo
  •    10.15 – 10.30 Coffee break
  •    10.30 – 11.00 RDD: Transformations and Actions
  •    11.00 – 12.00 Exercises
  •    12.00 – 13.00 Lunch
  •    13.00 – 13.30 Word Count Example
  •    13.30 – 14.00 Exercises
  •    14.00 – 14.15 Short overview of Spark's machine learning library
  •    14.15 – 14.30 Coffee break
  •    14.30 – 15.30 Exercises
  •    15.30 – 16.00 Summary of the first day & exercise walk-through
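The word-count example in the program above is the classic illustration of the map / reduce-by-key pattern. A plain-Python sketch of that pattern (in Spark, flatMap and reduceByKey play these roles, with the work spread over partitions):

```python
from collections import defaultdict

def word_count(lines):
    # "flatMap" step: one (word, 1) pair per word in every line.
    pairs = [(word.lower(), 1) for line in lines for word in line.split()]
    # "reduceByKey" step: sum the counts for each distinct word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

word_count(["to be or not to be"])  # -> {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```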

Day 2, Friday 17.11

  •    09.00 – 09.30 Spark Dataframes and SQL overview
  •    09.30 – 10.15 Exercises
  •    10.15 – 10.30 Coffee break
  •    10.30 – 10.45 Dataframes and SQL contd.
  •    10.45 – 12.00 Exercises
  •    12.00 – 13.00 Lunch
  •    13.00 – 13.30 Best practices and other useful tips
  •    13.30 – 14.30 Exercises
  •    14.30 – 14.45 Coffee break
  •    14.45 – 15.00 Brief overview of Spark Streaming
  •    15.00 – 15.15 Demo: Processing live Twitter stream data
  •    15.15 – 16.00 Summary of the course & exercise walk-through
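As a small taste of the Day 2 material: the core idea of Spark SQL, running SQL queries over a table-like collection of rows, can be tried out with Python's built-in sqlite3 module. This is only an analogy (a single in-memory table, with made-up example data), whereas Spark runs the same kind of query over DataFrames partitioned across a cluster:

```python
import sqlite3

# In-memory table standing in for a (distributed) DataFrame.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (station TEXT, temp REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [("Espoo", 3.2), ("Espoo", 4.8), ("Oulu", -1.5)],
)

# Group by station and compute the average temperature per station.
rows = conn.execute(
    "SELECT station, AVG(temp) FROM measurements "
    "GROUP BY station ORDER BY station"
).fetchall()
```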