Sales prospecting is difficult if you're relying on the same tools as every other salesperson. Most...
Customer Segmentation: Getting department size for cold email with Clay
If you're building lists and only relying on the filters in your lead database, there is a better way. Try multi-point segmentation instead.
In this video, I talk through a campaign we did for a client that spans web scraping, a lead database, and a new tool in our toolkit.
The client in this case makes an analytics product.
First, we built an initial broad fit ICP list in Apollo.io. We used filters like company size, industry, and geography.
Then, we scraped social media posts from companies that fit the bill to see whether they were using analytics links in their posts.
Once we confirmed they were posting these links, we were off to the races.
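As a minimal sketch of that filtering step (the actual link pattern depends on the client's product; I'm using UTM-tagged links here as a hypothetical stand-in):

```python
import re

# Hypothetical stand-in pattern: the real campaign matched the client's
# specific analytics link format, not UTM tags in general.
ANALYTICS_LINK = re.compile(r"https?://\S+[?&]utm_source=", re.IGNORECASE)

def uses_analytics_links(posts):
    """Return True if any scraped post contains an analytics-style link."""
    return any(ANALYTICS_LINK.search(post) for post in posts)

posts = [
    "Big news! Read more: https://example.com/launch?utm_source=linkedin",
    "We're hiring across the board.",
]
print(uses_analytics_links(posts))  # True -> this company stays on the list
```

Companies where this returns False get dropped before any enrichment spend.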
The next thing we did was use LinkedIn data with Clay to determine department size. With a regex, we calculated a ratio and put the companies into bins based on the size of their marketing department relative to total headcount.
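The ratio-and-bin step can be sketched like this (the cutoffs below are illustrative, not the ones used in the campaign):

```python
def marketing_ratio(total_employees, marketing_employees):
    """Share of headcount in marketing roles."""
    return marketing_employees / total_employees if total_employees else 0.0

def size_bin(ratio, small=0.05, large=0.15):
    # Illustrative cutoffs: under 5% of headcount -> "small" marketing
    # team, 5-15% -> "midsize", over 15% -> "large".
    if ratio < small:
        return "small"
    elif ratio < large:
        return "midsize"
    return "large"

companies = [
    {"name": "Acme", "total": 200, "marketing": 6},
    {"name": "Globex", "total": 80, "marketing": 20},
]
for c in companies:
    c["bin"] = size_bin(marketing_ratio(c["total"], c["marketing"]))
    print(c["name"], c["bin"])  # Acme small, Globex large
```

Each bin then maps to its own copy variant downstream.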
Then, we went to Lavender to write copy that speaks to each of these different company sizes—for emails that will actually be read from beginning to end.
Have a look and let me know what you think.
What's interesting? What would you do differently?
[00:00:00] The model of pulling a bunch of leads from a single database is, I think, going to be less and less relevant. And particularly with changes to deliverability, the way a lot of people are doing it today is not going to keep working.
[00:00:12] The approach we've been taking with clients, and it's getting more and more technical, is one where we're spanning multiple data sources.
[00:00:19] And creating richer segmentation between those different data sources.
[00:00:23] First we built a list of companies that would potentially fit using Apollo.
[00:00:30] Once we had that list, we then scraped the social media profiles for companies on that list to see if they were using a certain type of analytics link, basically, in their posts.
[00:00:45] We then filtered from that initial list of broad ICP fits,
[00:00:53] found the ones who were using this certain type of analytics link in their social media posts,
[00:00:59] and got rid of the rest.
[00:01:00] The next thing that we did, using Clay, was some enrichment.
[00:01:05] We used LinkedIn to find the current total number of employees.
[00:01:10] Then, from that list of employees, we checked how many were in marketing roles.
[00:01:15] This segmentation, I think, is interesting because what we did here is basically put together a ratio, right? How big is their marketing team relative to the total company?
[00:01:25] The copywriting that we did therefore is based on these bins.
[00:01:29] So you've got small, midsize, or large teams relative to their total headcount. We wrote different copy based on those different sizes and then layered in the specific research we'd already done scraping their social media posts to determine that it was a good fit.
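The copy-by-bin idea, in sketch form (the templates and merge fields here are made up for illustration, not the client's actual copy):

```python
# Hypothetical templates keyed by the marketing-team-size bin.
TEMPLATES = {
    "small": "Hi {first_name}, looks like a lean marketing team at {company}...",
    "midsize": "Hi {first_name}, with a growing marketing org at {company}...",
    "large": "Hi {first_name}, coordinating analytics across a big team at {company}...",
}

def draft_email(lead):
    """Pick the template for the lead's bin and layer in the scraped research."""
    body = TEMPLATES[lead["bin"]].format(**lead)
    return body + " Noticed your post: " + lead["research_note"]

lead = {
    "first_name": "Dana",
    "company": "Acme",
    "bin": "small",
    "research_note": "the launch link you shared on LinkedIn.",
}
print(draft_email(lead))
```

The point is that the personalization comes from data you already gathered, so it scales to the whole list.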
[00:01:47] Where I think you can go with this or what I think you can apply here is that what we're trying to do is really tap multiple data sources to get a richer profile or understanding of our prospect. And then write better copy.
[00:01:58] We wrote the copy in Lavender, which if you're not familiar with it, it's a really helpful tool to write better cold email.
[00:02:05] And we built this list programmatically.
[00:02:07] There could be five, 10, a thousand, 10,000 leads on this, and we would be able to get this level of specificity on who they are.
[00:02:13] And who's a good fit. I'm doing more and more list building like this, but I wanted to share it because I thought it was a pretty good example: spanning a lead database, web scraping, LinkedIn, and ultimately writing good copy and being really confident that this company's offer and product are going to be a fit for those prospects.
[00:02:34] Because if you're not getting good replies, it doesn't really matter how many emails you send. And I think we've got a much higher likelihood of getting responses here based on what we put together.
[00:02:42] Let me know: what would you do differently? How would you change this? Any other ideas, we'd love to hear them. Thanks.