Our client is one of the largest news publishers in North America. Their print and digital formats reach millions of readers every week, and they lead the national discussion by engaging audiences through prestigious coverage of news, politics, business, investing, and lifestyle topics across multiple platforms.
The WeCloudData team worked with the client’s digital marketing and data analytics team on an audience segmentation and expansion project for customer acquisition.
WeCloudData helped the client set ML strategies for generating look-alike users: finding users with behaviours or interests similar to certain customer types, and then guiding marketing and bidding decisions with the most up-to-date and precise information.
The key challenge of the project was that the client collects hundreds of millions of session records, generated by millions of readers, on a daily basis. To drive subscriptions, the client hoped to target anonymous users who are likely to become high-LTV subscribers. Preliminary data cleaning and analysis had to be done first. Hence, we focused our work on the following aspects:
- Preliminary data transformation and analysis
- Look-alike model development
- Model evaluations and testing
- Workflow automation
Tools used: Snowflake, Spark on Databricks, AWS (S3, EC2, Airflow), Machine Learning
- Similarity-based Look-alike Model: Nearest Neighbors (NN) + Clustering
- Simple and easy to understand
- Difficult to test (A/B testing required)
- No feature importance to interpret
- Not highly precise, but effective at separating “neighbors” (targeted customers from the pool) from “strangers” (unwanted customers for this segment) using a defined similarity score
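The similarity-based approach can be sketched as a centroid-based nearest-neighbour search. This is a minimal illustration, not the client's production code: the function name, feature shapes, and threshold are all assumptions.

```python
import numpy as np

def find_lookalikes(seed_profiles, candidate_profiles, top_n, threshold=0.0):
    """Rank anonymous candidates by cosine similarity to a seed segment.

    seed_profiles:      (s, d) array -- feature vectors of known customers
    candidate_profiles: (c, d) array -- feature vectors of anonymous users
    Returns indices and scores of the top_n candidates whose similarity
    score (cosine similarity to the seed centroid) exceeds the threshold.
    """
    centroid = seed_profiles.mean(axis=0)
    # Cosine similarity of each candidate to the segment centroid
    norms = np.linalg.norm(candidate_profiles, axis=1) * np.linalg.norm(centroid)
    scores = candidate_profiles @ centroid / np.where(norms == 0, 1.0, norms)
    order = np.argsort(-scores)                    # best "neighbors" first
    keep = order[scores[order] > threshold][:top_n]
    return keep, scores[keep]

# Toy example: seed users cluster near (1, 1); one candidate is a clear
# neighbor and the other is a "stranger" pointing the opposite way.
seeds = np.array([[1.0, 1.0], [0.9, 1.1]])
cands = np.array([[1.0, 0.9],    # neighbor
                  [-1.0, -1.0]]) # stranger
idx, scores = find_lookalikes(seeds, cands, top_n=1)
print(idx, scores)  # first candidate wins with similarity close to 1.0
```

The threshold is what makes the neighbor/stranger cut explicit: a stranger may still be the nearest candidate in a sparse pool, but a score floor keeps it out of the segment.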
To solve the scalability problem, we also introduced a hashing algorithm, Locality-Sensitive Hashing (LSH), to reduce the computational cost of the distance calculations.
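One common LSH family for cosine distance is random-hyperplane hashing, sketched below on synthetic vectors (the exact LSH variant and parameters the project used are not specified here; this is an illustrative example):

```python
import numpy as np

def lsh_signatures(X, n_planes=16, seed=0):
    """Random-hyperplane LSH: users whose feature vectors point in similar
    directions (high cosine similarity) fall on the same side of most
    random hyperplanes, so they receive the same bit signature and land
    in the same hash bucket -- no pairwise distance computation needed."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_planes))
    bits = (X @ planes) >= 0                       # one bit per hyperplane
    # Pack the bits into a single integer bucket key per user
    return bits.astype(np.int64) @ (1 << np.arange(n_planes))

rng = np.random.default_rng(1)
base = rng.standard_normal(32)                 # a seed user's features
near = base + 0.001 * rng.standard_normal(32)  # an almost identical user
far = -base                                    # a maximally different user
sigs = lsh_signatures(np.vstack([base, near, far]))
# Near-duplicates usually share a bucket; opposite vectors never do.
print(sigs[0] == sigs[1], sigs[0] == sigs[2])
```

Instead of comparing every candidate against every seed user, distances only need to be computed within each bucket, which is what makes the approach tractable at hundreds of millions of sessions.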
Precision vs. recall: in this use case, “the cost of targeting the wrong user is much smaller than the cost of failing to target the right user”, which favours recall. That said, we also did not want to waste resources on the wrong users, so finding a balance was important.
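One standard way to encode that balance is the F-beta score with beta > 1, which weights recall more heavily than precision. A small self-contained sketch (the numbers are made up for illustration):

```python
def precision_recall_fbeta(y_true, y_pred, beta=2.0):
    """Precision, recall, and F-beta. beta > 1 weights recall higher,
    matching a case where failing to target a right user costs more
    than targeting a wrong one."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta * beta
    fbeta = ((1 + b2) * precision * recall / (b2 * precision + recall)
             if precision + recall else 0.0)
    return precision, recall, fbeta

# 4 actual subscribers; the model finds 3 of them plus 2 wrong users.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0, 0]
p, r, f2 = precision_recall_fbeta(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f2, 2))  # 0.6 0.75 0.71
```

Tuning beta (or the similarity-score threshold it evaluates) is how the business trade-off between wasted impressions and missed subscribers gets expressed numerically.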
- Classification Models
- More explanatory power – Feature importance and confusion matrix
- Randomly sampling users is difficult and can introduce bias – training the models on the full dataset in Spark improves reliability significantly
- Easier to evaluate results on test data
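The extra explanatory power can be illustrated with a tiny from-scratch logistic regression on synthetic data, where learned weight magnitudes serve as a rough stand-in for feature importance and a confusion matrix summarizes test behaviour. This is only a sketch; the client's actual models were trained in Spark on real session features.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain logistic regression by gradient descent; the learned weight
    magnitudes give a rough, illustrative proxy for feature importance."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * (p - y).mean()                # gradient step on bias
    return w, b

def confusion_matrix(y_true, y_pred):
    """2x2 matrix: rows = actual class (0/1), columns = predicted (0/1)."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

rng = np.random.default_rng(0)
n = 400
# Feature 0 drives the label; feature 1 is pure noise.
X = rng.standard_normal((n, 2))
y = (X[:, 0] + 0.3 * rng.standard_normal(n) > 0).astype(float)
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(np.abs(w))                            # weight on feature 0 dominates
print(confusion_matrix(y.astype(int), pred))
```

Unlike the similarity-based model, both outputs are directly interpretable: the weights say *which* behaviours matter, and the confusion matrix says *where* the model errs on held-out users.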
- Model Deployment and Data Flow
- The automated model workflow:
- Audience segment creation in the cloud
- A batch job runs daily or hourly to find look-alikes and augment the segment size (real-time list generation is also possible)
- The user selects the number of look-alikes based on the similarity score
- New users are appended back to the original segment and sent to the third-party Ad Manager
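One pass of that batch job can be sketched as a single function (in production this would be an Airflow task; the data shapes, names, and centroid scoring here are illustrative assumptions, not the deployed pipeline):

```python
import numpy as np

def augment_segment(segment, pool, top_n):
    """One batch-job pass: score every anonymous user in the pool against
    the seed segment, take the user-chosen top_n look-alikes, and append
    them to the segment before it is pushed to the third-party Ad Manager.

    segment / pool: dicts of user_id -> feature vector (illustrative).
    Returns the ids of the newly added users.
    """
    centroid = np.mean(list(segment.values()), axis=0)

    def score(v):
        denom = np.linalg.norm(v) * np.linalg.norm(centroid) or 1.0
        return float(v @ centroid) / denom       # cosine similarity score

    ranked = sorted(pool, key=lambda uid: score(pool[uid]), reverse=True)
    new_ids = ranked[:top_n]                     # user-selected list size
    segment.update({uid: pool[uid] for uid in new_ids})
    return new_ids

segment = {"u1": np.array([1.0, 0.0]), "u2": np.array([0.9, 0.1])}
pool = {"a1": np.array([1.0, 0.1]), "a2": np.array([0.0, 1.0])}
added = augment_segment(segment, pool, top_n=1)
print(added, sorted(segment))  # ['a1'] ['a1', 'u1', 'u2']
```

Running the same function on an hourly or daily schedule is what keeps the segment growing as fresh session data arrives.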
- Ability to adjust the metrics that determine the “similarity score” based on future business needs
- Testing on different segments and larger samples as more data is gathered
- Continued feature engineering to improve model interpretation
- Optimizing the AI data pipeline
Beam Data successfully delivered this half-year project in the digital media industry. It demonstrated our capability to handle large amounts of data and provide data-driven insights in new areas. Throughout the project, one of the biggest challenges was acquiring a wide variety of domain knowledge in a short time and communicating with cross-functional teams to align on the tasks. In addition, we quickly adapted to the client’s tech stack to deliver compatible work smoothly.
Beam Data thereby gained the client’s trust and continued the relationship with the same project team on further work, such as AI model optimization, pipeline design, and other data inquiries.