Lead scoring should tell you who is ready to buy right now, not who matches a profile in your CRM. Most B2B scoring models assign points for firmographic fit: job title gets 10 points, company size gets 15, industry match gets 20. These scores measure how well someone looks on paper. They say nothing about whether that company is actively evaluating solutions this week.
Signal-based lead scoring with Claude and Clearcue replaces static property scores with real-time buying behavior analysis. Claude processes every signal from Clearcue's MCP, applies your custom rules for recency, diversity, and strength, and outputs a 0-100 score with plain-English reasoning for each company. The entire scoring run processes hundreds of companies in under 5 minutes.
This guide covers the exact scoring methodology we use, the reasoning behind each rule, and the prompts you can copy and customize for your business.
Why Traditional Lead Scoring Breaks Down
Most teams score leads using CRM properties in HubSpot, Salesforce, or similar tools. The model looks something like this:
| Property | Points | Logic |
|---|---|---|
| Job title contains "VP" or "Director" | +20 | Senior = more likely to buy |
| Company size 50-500 | +15 | Mid-market = sweet spot |
| Industry = SaaS | +10 | ICP match |
| Opened 3+ emails | +10 | Engaged with our content |
| Visited pricing page | +25 | High intent action |
This model has three fundamental problems.
It measures fit, not timing. A VP of Sales at a 200-person SaaS company scores 45 points whether they are actively evaluating tools or just signed a 3-year contract with your competitor. The score cannot distinguish between "perfect fit, wrong time" and "perfect fit, buying now."
It ignores external buying behavior. CRM scoring only sees what happens inside your own ecosystem: email opens, page visits, form fills. It misses everything that happens on LinkedIn, X, Reddit, industry events, and competitor platforms. That is where most of the buying journey actually occurs. Research from Gartner and Forrester consistently shows that B2B buyers are 60-70% through their evaluation before they ever visit your website.
It decays without anyone noticing. A lead who visited your pricing page 6 months ago still carries those 25 points. The score does not reflect that their buying window closed, their budget was reallocated, or they chose a competitor. Stale scores create false confidence.
Signal-based scoring solves all three problems by measuring real-time buying behavior from external platforms, weighting recency heavily, and updating scores continuously as new signals arrive.
Who Should Use Signal-Based Lead Scoring
This scoring methodology is for B2B teams who need to prioritize leads based on buying intent, not just profile fit:
- Sales teams drowning in leads that all "look good on paper" but most never convert
- Revenue ops looking to replace gut-feel prioritization with a repeatable, data-driven scoring model
- Founder-sellers who need to spend their limited outreach time on the companies most likely to buy this week
- Outbound agencies managing lead scoring across multiple client ICPs who need a consistent framework
- Teams using HubSpot or Salesforce scoring that want to layer real-time intent signals on top of their existing CRM scores
If your current lead scoring feels like a coin flip, signal-based scoring will change how your team prioritizes.
The Three Pillars of Signal-Based Scoring
Effective signal scoring evaluates three dimensions for each company. Each dimension answers a different question about buying readiness.
Pillar 1: Signal Recency
Question it answers: Is this company showing buying behavior right now, or did they show it months ago?
Recency is the single most important factor in signal scoring. A prospect who engaged with competitor content yesterday is actively evaluating options. A prospect who did the same thing three months ago has likely made their decision already.
Recency scoring bands:
| Signal Age | Weight Multiplier | Reasoning |
|---|---|---|
| 0-7 days | 1.0x (full value) | Active evaluation window |
| 8-14 days | 0.8x | Still relevant, slightly cooled |
| 15-30 days | 0.5x | May still be evaluating, but urgency drops |
| 31-60 days | 0.2x | Likely moved on, minimal value |
| 60+ days | 0.05x | Stale, near-zero value |
A company with a single competitor engagement from yesterday (1.0x multiplier) often deserves more attention than a company with 10 engagements from two months ago (0.2x multiplier each). Buying windows close fast. Fresh signals indicate current opportunity.
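The recency bands above reduce to a small lookup function. Here is an illustrative Python sketch (not Clearcue's implementation); the band boundaries and multipliers mirror the table:

```python
def recency_multiplier(age_days: int) -> float:
    """Map a signal's age in days to a weight multiplier (bands from the table above)."""
    if age_days <= 7:
        return 1.0   # active evaluation window
    if age_days <= 14:
        return 0.8   # still relevant, slightly cooled
    if age_days <= 30:
        return 0.5   # urgency dropping
    if age_days <= 60:
        return 0.2   # likely moved on
    return 0.05      # stale, near-zero value

# One fresh signal can outweigh many stale ones:
fresh_weight = 1 * recency_multiplier(1)    # 1 signal at full value = 1.0
stale_weight = 10 * recency_multiplier(60)  # 10 signals at 0.2x = 2.0
```

Note how steeply the curve drops: ten two-month-old engagements carry barely twice the weight of a single signal from yesterday.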
Pillar 2: Signal Diversity
Question it answers: Is this company showing one type of behavior, or multiple indicators of buying intent?
Signal diversity measures how many different categories of buying behavior a company exhibits. One signal type repeated many times is weaker than multiple signal types appearing together.
Why diversity beats volume:
A company where one person liked 20 posts from a single competitor could be a fan, a friend, or even a customer of that competitor. The volume is high but the diversity is zero. There is one signal type (competitor engagement) from one source.
A company where the CEO viewed your profile, a sales manager downloaded your lead magnet, and an SDR commented on a competitor comparison post shows three different signal types from three different people. The volume is lower (3 vs 20) but the diversity is maximum. This pattern indicates organizational awareness and active category evaluation.
Signal diversity categories:
| Category | Examples | What It Indicates |
|---|---|---|
| Brand engagement | Liked your post, viewed your profile, commented on your content | Awareness of your solution |
| Competitor engagement | Engaged with competitor content, followed competitor page | Evaluating the category |
| Lead magnet interaction | Downloaded a guide, registered for a webinar, signed up for a tool | Active learning and research |
| Pain point expression | Complained about current tool, asked for recommendations | Experiencing a problem you solve |
| Hiring signals | Posted relevant job openings, leadership changes | Organizational change creating buying opportunity |
| Event signals | Attending industry conference, speaking at relevant event | Investing time in the category |
A company showing signals in 3+ categories within 14 days is a strong buying indicator. A company showing signals in only one category, regardless of volume, is ambiguous.
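Measuring diversity is simply counting distinct categories inside the lookback window. A minimal sketch, assuming signals arrive as (category, age_in_days) pairs — a hypothetical shape, not Clearcue's actual data model:

```python
def diversity_within_window(signals, window_days=14):
    """Count distinct signal categories seen within the lookback window.

    `signals` is a list of (category, age_days) tuples - an assumed shape
    for illustration only.
    """
    recent_categories = {cat for cat, age in signals if age <= window_days}
    return len(recent_categories)

# 20 likes of one competitor -> diversity 1 (a possible fan, friend, or customer)
fan = [("competitor_engagement", age) for age in range(1, 21)]

# 3 signals across 3 categories -> diversity 3 (organizational awareness)
buyer = [("brand_engagement", 3), ("lead_magnet", 5), ("competitor_engagement", 8)]
```

Despite having a fraction of the volume, `buyer` scores three categories to `fan`'s one, which is exactly the pattern the pillar is designed to reward.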
Pillar 3: Signal Strength
Question it answers: How strong is each individual signal as an indicator of purchase intent?
Not all signals carry equal weight. Commenting "we need to switch tools" on a competitor post is dramatically stronger than liking a generic industry article. Your scoring model should reflect this.
Signal strength tiers:
| Strength | Signal Examples | Score Contribution |
|---|---|---|
| Very high | Asked for recommendations, complained about current tool, registered for competitor demo | 25-30 points |
| High | Downloaded lead magnet, engaged with comparison content, multiple competitor interactions | 15-25 points |
| Medium | Liked brand content, attended industry event, profile view | 8-15 points |
| Low | Liked generic top voice content, single competitor like, old connection request | 2-8 points |
| Noise | Bot-like behavior, engagement from irrelevant roles, very old single signals | 0-2 points |
The combination of all three pillars creates a score that reflects both readiness to buy and fit for outreach.
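One way to combine the pillars numerically — a hedged sketch, not the exact formula Claude applies. Base points come from the strength tiers, each signal is discounted by its recency, and a bonus rewards multiple recent categories:

```python
# Midpoints of the strength tiers above (assumed values for illustration)
STRENGTH_POINTS = {"very_high": 28, "high": 20, "medium": 11, "low": 5, "noise": 1}

def recency_multiplier(age_days):
    """Recency bands from Pillar 1."""
    for limit, mult in [(7, 1.0), (14, 0.8), (30, 0.5), (60, 0.2)]:
        if age_days <= limit:
            return mult
    return 0.05

def score_company(signals):
    """signals: list of (category, strength, age_days) tuples - illustrative shape only."""
    base = sum(STRENGTH_POINTS[s] * recency_multiplier(age) for _, s, age in signals)
    recent_categories = {c for c, _, age in signals if age <= 14}
    diversity_bonus = 10 * max(0, len(recent_categories) - 1)  # reward 2+ recent categories
    return min(100, round(base + diversity_bonus))

# Four fresh signal types across four people (like Example 1 later in this guide)
hot = [("brand", "medium", 3), ("lead_magnet", "high", 5),
       ("competitor", "high", 8), ("hiring", "medium", 12)]

# Eight low-strength likes of one competitor, all 20 days old
fan = [("competitor", "low", 20)] * 8
```

The tier midpoints and the 10-point diversity bonus are assumptions to make the sketch concrete; tune them against your own conversion data.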
The Complete Scoring Prompt
Here is the exact prompt we use to score companies with Claude and Clearcue. Copy it and customize the rules for your business.
```
Score each company 0-100 based on how hot a lead they are right now. 100 = hot, 0 = not interested.

What makes a hot lead: Recent interactions (last 1-2 weeks) with Martin/Ralitsa/Clearcue content, PLUS engagement with lead magnets or competitors. The best leads show multiple signal types (direct engagement + lead magnet + competitor interaction) rather than the same signal repeated many times.

Recency matters most: Fresh signals (≤14 days) score highest. Older Martin/Ralitsa/Clearcue interactions can still matter IF paired with recent competitor or lead magnet activity. Old signals alone = low score.

Weak signals to penalize:
- Only top voice interactions with nothing else → cap at 15
- Only many interactions with a single competitor and nothing else → cap at 20
- Only stale (30+ day) signals → cap at 10

Output per company:
Company: [Name]
Score: [0-100]
Reasoning: [1-2 sentence human-readable summary]
```
How to customize this prompt:
Replace "Martin/Ralitsa/Clearcue content" with your own brand signals. Replace the penalty caps with rules that match your business. If you sell enterprise software, you might cap companies under 50 employees at 30 regardless of signal strength. If you sell regionally, you might penalize companies outside your target geography.
The critical elements to keep:
- Explicit recency weighting. Tell Claude that fresh signals matter more than old ones.
- Diversity over volume. Tell Claude that multiple signal types beat repeated single signals.
- Penalty caps for noise patterns. Define specific scenarios that should never score above a threshold.
- Human-readable reasoning. The one-sentence explanation per company lets you quickly validate the scoring logic and spot patterns.
Scoring in Action: Real Examples
Here is how the scoring logic plays out on actual companies. These examples show why the three-pillar approach surfaces genuinely hot leads while filtering noise.
Example 1: High Score (85/100)
Company: Mid-market SaaS, 120 employees, Tier 1 ICP
| Signal | Category | Age | Strength |
|---|---|---|---|
| Head of Sales liked 2 Clearcue posts | Brand engagement | 3 days | Medium |
| VP Marketing downloaded lead magnet | Lead magnet | 5 days | High |
| SDR commented on competitor comparison | Competitor engagement | 8 days | High |
| Company posted 2 SDR job openings | Hiring signal | 12 days | Medium |
Reasoning: Four signal types from four different people, all within 14 days. Brand awareness confirmed, actively researching category, investing in sales team growth. High diversity, high recency, strong individual signals.
Action: Outreach immediately with personalized message referencing their hiring and research activity.
Example 2: Medium Score (45/100)
Company: Enterprise, 800 employees, Tier 2 ICP
| Signal | Category | Age | Strength |
|---|---|---|---|
| Junior marketer liked 8 competitor posts | Competitor engagement | Various, 5-25 days | Low per signal |
| No brand engagement | - | - | - |
| No lead magnet interaction | - | - | - |
Reasoning: High volume but single signal type from one person. Could be a personal follower of the competitor. No brand awareness, no lead magnet interaction, no diversity. Recency is mixed.
Action: Add to monitoring list. Do not outreach yet. Wait for a second signal type to appear.
Example 3: Capped Score (15/100)
Company: Small startup, 15 employees, Tier 3 ICP
| Signal | Category | Age | Strength |
|---|---|---|---|
| Founder liked 12 top voice posts | Top voice only | Various, 10-45 days | Very low |
| No competitor engagement | - | - | - |
| No lead magnet interaction | - | - | - |
Reasoning: Only top voice interactions with nothing else. Capped at 15 per scoring rules. Top voice engagement is the weakest signal because many people engage with top voice content for visibility or personal interest without any buying intent.
Action: Ignore for outreach. This pattern does not indicate purchase consideration.
Building Your Custom Scoring Rules
Every business needs different scoring rules. Here is how to define yours.
Step 1: Identify Your Strongest Buying Indicators
Ask yourself: which signals have historically preceded closed deals?
Tools like Apollo and LinkedIn Sales Navigator provide contact data, but they lack the behavioral signals needed for accurate scoring. Clearcue captures these behaviors across LinkedIn, X, Reddit, news, job boards, podcasts, and events in real time.
Step 2: Define Your Penalty Caps
Penalty caps prevent noise from creating false positives. Start with these defaults and adjust based on your experience:
| Pattern | Cap | Why |
|---|---|---|
| Only top voice engagement, nothing else | 15 | Top voice likes are low-intent engagement |
| Only single-competitor engagement, nothing else | 20 | Could be a customer or personal contact |
| Only stale signals (30+ days), nothing else | 10 | Buying window likely closed |
| Only brand engagement from one person | 25 | Individual interest, not organizational |
| Signals only from non-ICP roles | 20 | Company may not be in-market for your solution |
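Penalty caps are post-processing: compute the raw score first, then clamp it if a noise pattern matches. A minimal sketch of the first four defaults, assuming signals are dicts with "category", "age_days", "person", and (for competitor signals) "competitor" keys — an invented shape, not Clearcue's schema:

```python
def apply_penalty_caps(raw_score, signals):
    """Clamp raw scores for the noise patterns in the table above."""
    categories = {s["category"] for s in signals}
    people = {s["person"] for s in signals}
    competitors = {s.get("competitor") for s in signals if s["category"] == "competitor"}

    if categories == {"top_voice"}:
        return min(raw_score, 15)  # low-intent engagement only
    if categories == {"competitor"} and len(competitors) == 1:
        return min(raw_score, 20)  # could be a customer or personal contact
    if signals and all(s["age_days"] >= 30 for s in signals):
        return min(raw_score, 10)  # buying window likely closed
    if categories == {"brand"} and len(people) == 1:
        return min(raw_score, 25)  # individual interest, not organizational
    return raw_score
```

The cap order is a judgment call; here staleness wins over the single-person brand cap because a closed buying window is the stronger disqualifier.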
Step 3: Set Outreach Thresholds
Define what score triggers what action. Consistency matters more than the exact numbers.
| Score Range | Label | Action | Timeline |
|---|---|---|---|
| 70-100 | Hot | Personal, signal-referenced outreach | Same day |
| 50-69 | Warm | Targeted outreach sequence via HeyReach or Lemlist | Within the week |
| 30-49 | Interested | Add to nurture campaign, monitor for new signals | Ongoing |
| 10-29 | Low | No action, re-score next week | Weekly re-score |
| 0-9 | Cold | Remove from active list | Monthly cleanup |
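The thresholds in the table map cleanly to a lookup you can drop into any routing script. An illustrative sketch using the defaults above; swap in your own floors and actions:

```python
# (floor, label, action) - highest floor checked first
THRESHOLDS = [
    (70, "hot", "Personal, signal-referenced outreach same day"),
    (50, "warm", "Targeted outreach sequence within the week"),
    (30, "interested", "Add to nurture campaign, monitor for new signals"),
    (10, "low", "No action, re-score next week"),
    (0, "cold", "Remove from active list"),
]

def triage(score):
    """Return the (label, action) pair for a 0-100 signal score."""
    for floor, label, action in THRESHOLDS:
        if score >= floor:
            return label, action
    return "cold", "Remove from active list"  # guards against negative input
```

Keeping the thresholds in one data structure makes the weekly tuning step a one-line change rather than a hunt through conditionals.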
Step 4: Iterate Based on Results
Scoring rules are not set-and-forget. Track which scores convert to meetings and deals. If companies scoring 40-50 are converting at the same rate as companies scoring 60-70, your thresholds need adjustment. If a specific signal type consistently predicts deals, increase its weight.
Run the full scoring workflow weekly. Review the results. Adjust one rule at a time. Within 2-3 iterations, your scoring model will reflect your actual buying patterns rather than assumptions.
Signal Scoring vs Traditional Intent Data Providers
Traditional intent data from Bombora, 6sense, or ZoomInfo Intent operates at the account level. They track anonymous content consumption across publisher networks and report that "Company X is researching CRM software." The data is directional but vague.
| Dimension | Traditional Intent Data | Signal-Based Scoring |
|---|---|---|
| Granularity | Account level | Person + company level |
| Signal source | Anonymous web browsing | Public platform activity (LinkedIn, X, Reddit, events) |
| Specificity | "Interested in CRM" | "VP of Sales liked competitor comparison post and downloaded your lead magnet" |
| Recency | Weekly or monthly updates | Real-time, within hours |
| Actionability | Low (no individual contacts) | High (specific people with specific behaviors) |
| Cost | $25,000-35,000/year | €79-439/month |
| Scoring | Binary (surge vs no surge) | 0-100 with custom rules |
Signal-based scoring gives you individual-level data with specific behaviors and clear sources. You know exactly who did what, when, and on which platform. This specificity powers personalized outreach that traditional intent data cannot support.
Automating Your Scoring Workflow
Once your scoring rules are validated, automate the full flow as a Claude scheduled task.
Daily scoring prompt (for new signals):
```
With Clearcue, get all companies with new signals from the past 24 hours. Score each 0-100 using these rules:
- Multiple signal types (brand + competitor + lead magnet) score highest
- Fresh signals (≤14 days) get full weight, 15-30 days half weight, 30+ near zero
- Cap: only top voice → max 15, only single competitor → max 20, only stale → max 10
Tag companies scoring 70+ as "hot-lead" and 50-69 as "warm-lead" in Clearcue.
```
Weekly full re-score prompt (for all active companies):
```
With Clearcue, get all Tier 1 and Tier 2 companies. Re-score each 0-100 based on current signal data. Compare to last week's scores. Flag any company that moved up 20+ points (heating up) or down 20+ points (cooling down). Update tags in Clearcue accordingly.
```
Schedule the daily version to run every morning at 9 AM in Claude. Your computer needs to be on and Claude open. The weekly re-score can run on Monday mornings to set priorities for the week.
Connecting Scores to Outreach
Scoring is only valuable if it drives action. Here is how scores connect to your outreach tools:
For LinkedIn outreach (HeyReach):
Tag hot leads in Clearcue. Export tagged leads or push them directly to HeyReach via MCP. Claude can generate personalized connection requests referencing each prospect's specific signals.
For email outreach (Lemlist or Salesloft):
Export scored leads with signal context. Use the reasoning field from Claude's scoring output as personalization input. "Your team has been actively evaluating sales automation tools" lands differently than "I noticed you match our ICP."
For CRM updates (HubSpot or Salesforce):
Sync signal scores to your CRM as a custom property. This gives your sales team a real-time intent layer on top of their existing pipeline view. Reps can sort by signal score to prioritize their day.
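If you do not have a direct CRM integration, a flat file is enough to carry scores across. A stdlib-only sketch; the column names are assumptions, so map them to your CRM's custom properties during import. It assumes Claude's output has already been parsed into dicts upstream (parsing not shown):

```python
import csv
import io

def scored_leads_to_csv(leads):
    """Serialize scored leads for a CRM import.

    `leads` is a list of dicts with "company", "score", and "reasoning"
    keys - the fields from the scoring prompt's output format.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["company", "signal_score", "signal_reasoning"])
    writer.writeheader()
    for lead in leads:
        writer.writerow({
            "company": lead["company"],
            "signal_score": lead["score"],
            "signal_reasoning": lead["reasoning"],
        })
    return buf.getvalue()
```

Carrying the reasoning field into the CRM matters as much as the score itself: it is what lets a rep see at a glance why a company is hot before writing the first line of outreach.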
Start Scoring Your Leads Today
The setup takes under 30 minutes:
- Create your Clearcue account and set up at least 3 signal types
- Connect Clearcue MCP to Claude under Settings > Integrations > MCP for Claude
- Copy the scoring prompt from this article and replace the signal names and penalty caps with your own
- Run your first scoring pass on your existing signal data
- Set outreach thresholds and act on the results
For the full prompt library including ICP tiering, decision maker identification, meeting preparation, and daily lead monitoring, visit our prompt library.