Date: The month of August
Time: 11:00 a.m. EDT
Description: R is well-suited to handle data that fits in memory, but additional tools are needed when the amount of data you want to analyze in R grows beyond the limits of your machine's RAM. A variety of solutions to this problem have emerged over the years; one of the latest is Apache Spark™. Spark is a cluster computing tool that enables analysis of massive, distributed data across dozens or hundreds of servers. RStudio recently announced a new open-source package called sparklyr that connects R to Spark using a full-fledged dplyr backend with support for the entirety of Spark's MLlib library. Because Spark can interact with distributed data with little latency, it is becoming an attractive tool for working with large datasets in an interactive environment. Beyond handling the storage of data, Spark also incorporates a variety of other tools, including stream processing, computing on graphs, and a distributed machine learning framework. Some of these tools are available to R programmers via the sparklyr package. In this four-part series, we'll discuss how to leverage Spark's capabilities in a modern R environment. The Sparklyr Series:
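As a taste of what the dplyr backend looks like in practice, here is a minimal sketch of connecting to Spark from R with sparklyr. It assumes Spark has been installed locally (for example via sparklyr's spark_install()) and uses the built-in mtcars dataset purely for illustration:

```r
# Minimal sketch: connect to a local Spark instance and use dplyr verbs.
# Assumes a local Spark installation (e.g. installed via spark_install()).
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Copy a local data frame into Spark; subsequent dplyr verbs are
# translated to Spark SQL and executed inside Spark, not in R.
mtcars_tbl <- copy_to(sc, mtcars)

mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()   # collect() brings the (small) result back into R

spark_disconnect(sc)
```

The key idea is that the data stays in Spark until collect() is called, so only aggregated results travel back to the R session.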
Logistics: Only 1,000 live attendees are allowed in the webinar, on a first-come, first-served basis. It is typical for many people who register not to attend (which is why registration does not guarantee access). If for any reason you cannot make the webinar or cannot get in, we will provide links to the recording as well as all materials within 48 hours.
Javier Luraschi, Software Engineer – Javier is a software engineer with experience in technologies ranging from desktop, web, mobile, and backend development to augmented reality and deep learning applications. He previously worked for Microsoft Research and SAP and holds a double degree in Mathematics and Software Engineering.
Edgar Ruiz, Solutions Engineer – Edgar has a background in deploying enterprise reporting and business intelligence solutions. He has published multiple articles and blog posts sharing analytics insights and server infrastructure for data science. He lives with his family near Biloxi, MS.
Webinar Recordings: We try to record every webinar we host and post all materials on our website.
http://www.rstudio.com/resources/webinars/
Slides & Code:
We've started a GitHub repository with all webinar materials. Speakers for this webinar and all future webinars will add their materials to the repository.
https://github.com/rstudio/webinars