This blog post was written by WeCloudData Big Data course student Laurent Risser.
Toronto is known for its crazy housing market. It’s getting harder and harder to find an affordable and convenient place. Searching for “How to find an apartment in Toronto” on Google leads to dozens of pages of advice, which is a pretty good indicator that apartment hunting is a painful process.
As a Data Scientist trainee, I was sure that I could alleviate this pain a bit and simplify the way people search for a place to live. The project I came up with aims to find out the relationships between the price of an apartment in the Greater Toronto Area (GTA), its location, surface, and number of bedrooms. The business idea behind this project is to help apartment seekers find the best deal across different neighbourhoods in the GTA.
To conduct this project, I decided to use the popular website Craigslist. My idea was to extract the data from the website using Beautiful Soup, a Python (version 3.7.4) library for web scraping.
To keep everyone here awake, I have divided this project into two parts: the first part is the Web scraping and Data Frame generation, and the second part focuses on the Analysis and Predictions [coming sooner than you expect…].
So, what can I extract from Craigslist?
Craigslist apartment listings for Toronto are located at https://toronto.craigslist.org/d/apts-housing-for-rent/search/apa
To begin, I needed the website's URL. To make it cleaner, I filtered out the posts without pictures, to narrow down the search just a little. Even though it's not a 'true' base URL, it is still good for our purpose here.
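In code, that 'base' URL is simply the search URL with the picture filter appended. The `hasPic=1` query parameter below is my shorthand for the filter Craigslist adds when you tick "has image":

```python
# Search URL for GTA apartment listings, keeping only posts with a picture.
# hasPic=1 reflects Craigslist's 'has image' filter at the time of scraping.
base_url = 'https://toronto.craigslist.org/d/apts-housing-for-rent/search/apa?hasPic=1'
```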
Then I created my attack plan in four steps:
- Understand the data
- Collect the data
- Create a dataset
- Clean the dataset
Before digging into each step, a quick word on tooling: I used several Python packages in this project, but I will only touch upon the most relevant ones. The key one is Beautiful Soup from bs4, the module that parses the HTML of the web page retrieved from the server. Once a page is parsed, I can check the type and length of the resulting collection of posts to make sure it matches the number of posts shown on the page (the default is 120 posts per page).
In case you are interested in details, here is a list of the packages needed for this project:
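Roughly, the imports look like this (the exact set and versions are in the GitHub repo):

```python
from requests import get          # fetch each results page
from bs4 import BeautifulSoup     # parse the HTML returned by the server
from time import sleep            # pause between requests
from random import randint        # randomize the pause length
from warnings import warn         # flag non-200 status codes
import numpy as np                # NaN handling
import pandas as pd               # build and clean the final data frame
```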
Understand the Data (the website)
I used the get function from the requests package in Python. I defined a variable response and assigned it the result of calling get on the base URL. What I mean by 'base URL' is the URL of the first page you want to pull data from.
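A minimal sketch of that step, reusing the filtered URL from above:

```python
from requests import get

# Page one of the filtered search results - the 'base' URL described above.
base_url = 'https://toronto.craigslist.org/d/apts-housing-for-rent/search/apa?hasPic=1'

response = get(base_url)
print(response.status_code)  # 200 means the page was retrieved successfully
```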
Then, to do the scraping correctly, I needed to understand how the website was organized. To do that, I performed a basic search on Craigslist and inspected the page's HTML. Looking at the screenshot below, you can see on the right side <li class="result-row">. This is the tag you want to find for a single post; it is the box that contains all the elements I needed!
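With the page in hand, Beautiful Soup can parse it and collect every result-row element; checking the length confirms it matches the 120 posts per page mentioned earlier:

```python
from requests import get
from bs4 import BeautifulSoup

response = get('https://toronto.craigslist.org/d/apts-housing-for-rent/search/apa?hasPic=1')

# Parse the raw HTML into a navigable tree.
html_soup = BeautifulSoup(response.text, 'html.parser')

# Each listing sits inside an <li class="result-row"> element.
posts = html_soup.find_all('li', class_='result-row')

# Sanity check: the length should match the number of posts on the page
# (Craigslist shows 120 results per page by default).
print(type(posts))   # <class 'bs4.element.ResultSet'>
print(len(posts))    # 120
```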
Collect the data
To make an initial quick test, I worked in the following way: I grabbed the first post and all the variables I wanted from it, and made sure I knew how to access each of them before looping over the whole page. In the same manner, I then made sure I could successfully scrape one page before adding the loop that goes through all pages.
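Continuing from the snippet above, that quick test on the first post looked roughly like this (the class names reflect Craigslist's markup at the time):

```python
# Grab the first post from the result set and pull out each field.
post_one = posts[0]

title_tag = post_one.find('a', class_='result-title hdrlnk')
title = title_tag.text
link = title_tag['href']
price = post_one.find('span', class_='result-price').text
neighbourhood = post_one.find('span', class_='result-hood').text
date_time = post_one.find('time', class_='result-date')['datetime']

print(title, link, price, neighbourhood, date_time)
```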
So what does the loop I designed to extract the data look like? Here is the outline of the 'for' loop I used in my project (a Python sketch follows the list):
- For each page in pages:
  - If the page returns a status code other than 200, send a warning
  - For each post in posts:
    - If the post is not missing neighbourhood information:
      - Add the posting date-time to the list of date-times
      - Add the post neighbourhood to the list of neighbourhoods
      - Add the post title to the list of post titles
      - Add the link to the post to the list of links
      - Add the cleaned post price to the list of prices
      - Add the surface to the list of surfaces
      - Add the number of bedrooms to the list of bedrooms
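Here is a Python sketch of that loop. The variable names are mine, the number of pages crawled and the selectors are assumptions based on Craigslist's layout at the time of scraping; the exact code is in the GitHub repo.

```python
from random import randint
from time import sleep
from warnings import warn

import numpy as np
from bs4 import BeautifulSoup
from requests import get

# Craigslist paginates results 120 at a time via the 's' offset parameter
# (an assumption about the site's URL scheme at the time of scraping).
base_url = 'https://toronto.craigslist.org/d/apts-housing-for-rent/search/apa?hasPic=1'
pages = range(0, 1080, 120)

date_times, neighbourhoods, post_titles = [], [], []
post_links, post_prices, surfaces, bedroom_counts = [], [], [], []

for offset in pages:
    response = get(base_url + '&s=' + str(offset))

    # Warn (but keep going) if the page did not come back cleanly.
    if response.status_code != 200:
        warn('Request returned status code {}'.format(response.status_code))

    sleep(randint(1, 5))  # pause between requests to be polite to the server

    page_soup = BeautifulSoup(response.text, 'html.parser')
    posts = page_soup.find_all('li', class_='result-row')

    for post in posts:
        # Skip posts with no neighbourhood information.
        if post.find('span', class_='result-hood') is None:
            continue

        date_times.append(post.find('time', class_='result-date')['datetime'])
        neighbourhoods.append(post.find('span', class_='result-hood').text.strip())

        title_tag = post.find('a', class_='result-title hdrlnk')
        post_titles.append(title_tag.text)
        post_links.append(title_tag['href'])

        # Price arrives as a string like '$1,900'; the '$' is stripped later.
        post_prices.append(post.find('span', class_='result-price').text.strip())

        # The 'housing' span holds bedrooms and surface, e.g. '2br - 700ft2'.
        housing = post.find('span', class_='housing')
        tokens = []
        if housing is not None:
            tokens = [t.strip() for t in housing.text.split('-') if t.strip()]

        # Remove the 'ft2' suffix and store the surface as an integer.
        surface = next((t[:-3] for t in tokens if t.endswith('ft2')), None)
        surfaces.append(int(surface) if surface else np.nan)

        # Remove the 'br' suffix from the number of bedrooms.
        bedrooms = next((t[:-2] for t in tokens if t.endswith('br')), None)
        bedroom_counts.append(bedrooms if bedrooms else np.nan)
```

The random pause between requests is simply there to avoid hammering Craigslist's servers.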
Feel free to access the full code on GitHub; click here.
I also included some data-cleaning steps in the loop, such as pulling the 'datetime' attribute, removing the 'ft2' suffix from the square-footage variable and converting that value to an integer, and stripping 'br' from the number of bedrooms, which was scraped along with it.
With these additional steps, I started the data cleaning with some work already done, which is always good, right?
Create a dataset
After I extracted the data with the loop above, I saved it into a data frame with the following columns: Date Posted, Neighbourhood, Post Title, URL, Price, Surface, Number of Bedrooms.
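Assuming the lists from the loop sketch above, the data frame can be assembled like this:

```python
import pandas as pd

# Assemble the scraped lists into a single data frame with the columns above.
df = pd.DataFrame({
    'Date Posted': date_times,
    'Neighbourhood': neighbourhoods,
    'Post Title': post_titles,
    'URL': post_links,
    'Price': post_prices,
    'Surface': surfaces,
    'Number of Bedrooms': bedroom_counts,
})

print(df.shape)
```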
Clean the dataset
Next, I needed to further clean the dataset by modifying the class of some objects and removing others. Here are the additional steps required (a pandas sketch follows the list):
- Turned DateTime string into a DateTime object.
- Removed $ and converted Price to an integer.
- Converted Bedroom to class float.
- Removed the () from the Neighbourhood column.
- Changed missing values in Price and Surface to NaN type and removed them.
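A pandas sketch of these steps, assuming the data frame from the previous snippet is named df. In this sketch I only drop rows with a missing Price and leave missing Surface values as NaN; the exact dropping logic is in the GitHub code.

```python
import pandas as pd

# Turn the DateTime string into a proper datetime object.
df['Date Posted'] = pd.to_datetime(df['Date Posted'])

# Strip '$' and thousands separators from Price.
df['Price'] = (df['Price'].str.replace('$', '', regex=False)
                          .str.replace(',', '', regex=False))

# Coerce the numeric columns; anything unparseable becomes NaN.
df['Price'] = pd.to_numeric(df['Price'], errors='coerce')
df['Number of Bedrooms'] = pd.to_numeric(df['Number of Bedrooms'], errors='coerce')

# Remove the parentheses Craigslist wraps around neighbourhood names.
df['Neighbourhood'] = df['Neighbourhood'].str.strip().str.strip('()')

# Drop rows with no price, then finish casting Price to an integer.
# Rows without a surface value keep NaN there (hence the smaller surface sample below).
df = df.dropna(subset=['Price'])
df['Price'] = df['Price'].astype(int)
```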
Surprisingly, after I did all this cleaning, I ended up with 101 rows, and only 53 rows with values for the surface. This is not the ideal sample size we would like, but let’s see what we can get from it.
Now that the dataset is ready to go, I can analyze it.
To find out more about the courses our students have taken to complete these projects and what you can learn from WeCloudData, click here to see the learning path. To read more posts from Laurent, check out his Medium posts here.