Data Processing Stack Overflow Data Using Apache Spark on AWS EMR

November 2, 2020

This blog is posted by WeCloudData’s student Sneha Mehrin.

An overview of how to process data in Spark using DataBricks, add the script as a step in AWS EMR, and output the data to Amazon Redshift

This article is part of a series and a continuation of the previous post.


In the previous post, we saw how we can stream the data using Kinesis Firehose, either with StackAPI or with the Kinesis Data Generator.

In this post, let’s see how we can decide on the key processing steps that need to be performed before we send the data to any analytics tool.

The key steps I followed in this phase are as follows:

Step 1: Prototype in DataBricks

I always use DataBricks for prototyping, as it is free and can be connected directly to AWS S3. Since EMR is charged by the hour, DataBricks is the best way to go if you want to use Spark for your projects.

Key Steps in DataBricks

1. Create a cluster

This is fairly straightforward: just create a cluster, and any notebook can then be attached to it.

2. Create a notebook and attach it to the cluster

3. Mount the S3 bucket to databricks

The good thing about DataBricks is that you can connect it directly to S3, but you need to mount the S3 bucket first.

The code below mounts the S3 bucket so that you can access its contents.
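A minimal sketch of what the mount typically looks like, assuming placeholder AWS credentials (ACCESS_KEY and SECRET_KEY are not values from the original post); the bucket and mount-point names follow the %fs command below:

```python
# Hypothetical sketch of mounting an S3 bucket in DataBricks.
# ACCESS_KEY / SECRET_KEY are placeholders for your AWS credentials;
# the secret key must be URL-encoded before it goes into the URI.
from urllib.parse import quote

ACCESS_KEY = "AKIA..."          # placeholder
SECRET_KEY = "your-secret-key"  # placeholder
BUCKET = "stack-overflow-bucket"
MOUNT_POINT = "/mnt/stack-overflow-bucket"

source = f"s3a://{ACCESS_KEY}:{quote(SECRET_KEY, safe='')}@{BUCKET}"

# Inside a DataBricks notebook you would then run:
# dbutils.fs.mount(source, MOUNT_POINT)
print(source)
```

The actual mount call in a DataBricks notebook is dbutils.fs.mount(source, MOUNT_POINT); the sketch only builds the source URI so it stays runnable outside DataBricks.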

This command will show you the contents of the S3 bucket:

%fs ls /mnt/stack-overflow-bucket/

After this, we do some exploratory data analysis in Spark.

The key processing steps performed here are:

Check for and delete duplicate entries, if any.

Convert the date into a Salesforce-compatible format (YYYY/MM/DD).
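The two steps above can be sketched as follows. In PySpark they would typically be df.dropDuplicates([...]) and date_format(col(...), "yyyy/MM/dd"); the same logic is shown here in plain Python, with hypothetical field names, so it runs without a cluster:

```python
# Hypothetical sketch of the two processing steps in plain Python.
# In Spark: df.dropDuplicates(["question_id"]) and
# date_format(col("creation_date"), "yyyy/MM/dd"). Field names are assumptions.
from datetime import datetime, timezone

rows = [
    {"question_id": 1, "creation_date": 1604275200},  # epoch seconds
    {"question_id": 1, "creation_date": 1604275200},  # duplicate entry
    {"question_id": 2, "creation_date": 1604361600},
]

# 1. Drop duplicate entries, keyed on question_id
seen, deduped = set(), []
for row in rows:
    if row["question_id"] not in seen:
        seen.add(row["question_id"])
        deduped.append(row)

# 2. Convert the epoch timestamp into the YYYY/MM/DD format
for row in deduped:
    row["creation_date"] = datetime.fromtimestamp(
        row["creation_date"], tz=timezone.utc
    ).strftime("%Y/%m/%d")

print(deduped)
```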

Step 2: Create a test EMR cluster

The PySpark script that you developed in the DataBricks environment gives you the skeleton of the code.

However, you are not done yet!
You need to ensure that this code won’t fail when you run the script in AWS. For this purpose, I create a dummy EMR cluster and test my code in a Jupyter notebook. This step is covered in detail in this post.

Step 3: Test the prototype code in PySpark

In the Jupyter notebook that you created, make sure to test out the code.

Key points to note here:

Kinesis Firehose outputs the data into multiple files within a day. The file naming depends on the prefix of the S3 location set while creating the delivery stream.

The Spark job runs on the next day and is expected to pick up all the files with yesterday’s date from the S3 bucket.

The function get_latest_filename creates a text string that matches the file names Spark is supposed to pick up.

So in the Jupyter notebook, I test whether Spark is able to process the files without any errors.
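A sketch of what get_latest_filename might look like, assuming the Firehose default YYYY/MM/DD date prefix and the bucket name used earlier; the exact layout in the original project may differ:

```python
# Hypothetical sketch of get_latest_filename: build the S3 path pattern
# that matches yesterday's Firehose output files. Bucket name and prefix
# layout (Firehose's default YYYY/MM/DD) are assumptions.
from datetime import date, timedelta

def get_latest_filename(bucket="stack-overflow-bucket"):
    yesterday = date.today() - timedelta(days=1)
    # e.g. s3://stack-overflow-bucket/2020/11/01/* picks up every file
    # the delivery stream wrote during that day
    return f"s3://{bucket}/{yesterday:%Y/%m/%d}/*"

print(get_latest_filename())
```

Spark can read this wildcard path directly, e.g. spark.read.json(get_latest_filename()), which is what the notebook test exercises.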

Steps 4 & 5: Convert the iPython notebook into a Python script and upload it to S3.

You can use the commands below to convert the iPython notebook to a Python script:

pip install ipython

pip install nbconvert

ipython nbconvert --to script stack_processing.ipynb

After this is done, upload the script to the S3 folder.

Step 6: Add the script as a step to the EMR job using boto3

Before we do this, we need to configure the Redshift cluster details and test that the functionality is working.

This will be covered in the Redshift post.
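For reference, adding the uploaded script as an EMR step with boto3 usually takes the shape below; the step name, cluster ID, and S3 script path are placeholders, not values from the original project:

```python
# Hypothetical sketch: submit the uploaded PySpark script as an EMR step.
# The JobFlowId and the S3 script path below are placeholders.
step = {
    "Name": "process-stack-overflow-data",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": [
            "spark-submit",
            "--deploy-mode", "cluster",
            "s3://stack-overflow-bucket/scripts/stack_processing.py",
        ],
    },
}

# With AWS credentials configured, you would then run:
# import boto3
# emr = boto3.client("emr", region_name="us-east-1")
# emr.add_job_flow_steps(JobFlowId="j-XXXXXXXXXXXXX", Steps=[step])
print(step["Name"])
```

command-runner.jar is EMR's standard way of running spark-submit as a step; the boto3 call itself is commented out so the sketch runs without AWS access.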

To find out more about the courses our students have taken to complete these projects and what you can learn from WeCloudData, view the learning path. To read more posts from Sneha, check out her Medium posts here.