Streaming Stack Overflow Data Using Kinesis Firehose

October 28, 2020

This blog is posted by WeCloudData's student Sneha Mehrin.

An overview of how to ingest Stack Overflow data using Kinesis Firehose and Boto3 and store it in S3.

This article is part of a series and a continuation of the previous post.

Why Use Streaming Data Ingestion?

Traditional enterprises follow a batch-processing methodology: gather the data, load it periodically into a database, and analyse it hours, days, or weeks later.

However, with numerous data sources continuously generating streams of data, it has become imperative for most businesses to process and analyse data at massive scale within a latency of milliseconds.

Apache Kafka and Amazon Kinesis are two of the most widely adopted messaging systems.

Two Main Services Offered by Amazon Kinesis

Kinesis Firehose is the easiest way to persist your streaming data to a supported destination.

The key advantage of Kinesis Streams is that data is available for consumption almost immediately after it is added.

Key Steps in the Data Ingestion Pipeline

Most of the steps in Kinesis Firehose are pretty straightforward, so let's get straight to it.

Pre-Requisites

  1. Set up your AWS account.
  2. Install the AWS CLI (pip install awscli).
  3. Run aws configure to set the AWS Access Key and Secret Access Key.
  4. You can run ls -alf ~ to locate the .aws folder (it is in the hidden files).
  5. Configure AWS credentials for a demo profile:

aws configure --profile bigdata-demo

In all the scripts, I connect to AWS from PyCharm using the profile I configured above.
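
As a quick sanity check, here is a minimal sketch of that connection using the bigdata-demo profile from above; the region is an assumption, not a value from the original post.

import boto3

# Create a session from the "bigdata-demo" profile configured above.
session = boto3.Session(profile_name="bigdata-demo")

# The region is an assumption; use whichever region your stream lives in.
firehose = session.client("firehose", region_name="us-east-1")

# Listing delivery streams verifies that the credentials work.
print(firehose.list_delivery_streams())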

Creating a Delivery Stream

Step 1: Log in to the AWS console and choose the Kinesis service. Select Kinesis Firehose.

Step 2: Give the Kinesis Firehose delivery stream a name.

Step 3: Choose Direct PUT or other sources as the source, since we will be streaming using Python and Boto3.

Step 4: Keep the default options for processing records, as we will be using Spark to process them.

Step 5: Choose S3 as the destination and select the S3 bucket.

It is important to give the S3 bucket a prefix, because Spark will process the records from that exact folder location.

S3 Bucket Prefix

StackOverFlow/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}

Error Prefix

myError/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/!{firehose:error-output-type}
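
For example, a record delivered on October 28, 2020 would land under StackOverFlow/year=2020/month=10/day=28/, since Firehose evaluates the !{timestamp:...} expressions at delivery time.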

Step 6: Leave the default options under Configure settings.

Step 7: Choose an IAM role that provides read/write access to Kinesis.

Step 8: Review and create the delivery stream.

Sending Data to Kinesis Firehose

Now that we have the delivery stream created, our next step is to send the data to Firehose.

There are two ways to send data to Firehose: through the Kinesis Agent, or through the AWS SDK using the Firehose API.

In my use case, I will use Python to connect to AWS with Boto3 and then use the Kinesis Firehose API to send the data to Firehose.

Below is the code that streams data from Stack Overflow and sends it to Kinesis Firehose.

<script src="https://gist.github.com/snehamehrin/b1b2db14eb420b7bc398010e31a4e07b.js"></script>
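
In case the embedded gist does not load, here is a minimal sketch of the same idea. The delivery stream name, region, and request parameters are assumptions for illustration, not the exact values from the gist.

import json
import time
import boto3
import requests

# Connect to Firehose with the profile configured earlier (region assumed).
session = boto3.Session(profile_name="bigdata-demo")
firehose = session.client("firehose", region_name="us-east-1")

# Fetch questions created since UTC midnight, sorted by creation date.
midnight_utc = int(time.time()) // 86400 * 86400
response = requests.get(
    "https://api.stackexchange.com/2.2/questions",
    params={
        "site": "stackoverflow",
        "sort": "creation",
        "order": "desc",
        "fromdate": midnight_utc,
        "pagesize": 100,
    },
)

# Send each question to Firehose as a newline-delimited JSON record,
# so Spark can later read the S3 objects line by line.
for question in response.json().get("items", []):
    firehose.put_record(
        DeliveryStreamName="stackoverflow-stream",  # assumed stream name
        Record={"Data": (json.dumps(question) + "\n").encode("utf-8")},
    )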

Key Tips

We will stream only the data created today and then output it to the S3 bucket.

sort="creation" orders the Stack Overflow data by creation date, so we can pull only the questions created today.

A Spark job that runs the next day will process the previous day's data from the S3 bucket and append it to Redshift.
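
For illustration, here is a sketch of how that daily Spark job might locate yesterday's partition; the bucket name is hypothetical, and the Redshift append is covered in the next article.

from datetime import date, timedelta
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stackoverflow-daily-batch").getOrCreate()

# Build the path for yesterday's partition written by Firehose.
yesterday = date.today() - timedelta(days=1)
path = (
    "s3://my-stackoverflow-bucket/StackOverFlow/"  # hypothetical bucket; use s3a:// outside EMR
    f"year={yesterday:%Y}/month={yesterday:%m}/day={yesterday:%d}/"
)

# Each Firehose record was written as one JSON line.
questions = spark.read.json(path)
questions.show(5)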

The Stack Overflow API has a lot of limitations and can only serve 10,000 requests per day. So, in order to get more data, I used the Kinesis Data Generator.

Once I got an idea of what the actual data looked like using the script above, I used the Kinesis Data Generator to generate a lot of fake data for our analytics purposes.

You can check out the article on generating data using the Kinesis Data Generator here.

The next step is to process this data using Spark. This is covered in detail in this article.

To find out more about the courses our students have taken to complete these projects and what you can learn from WeCloudData, view the learning path. To read more posts from Sneha, check out her Medium posts here.
