Your lead scoring model has a problem, and it's probably not the one you think.
Most B2B companies using HubSpot have a lead scoring model. Most of those models don't work. Not because the points are wrong or the thresholds are miscalibrated, but because the model was built by marketing in isolation, sales never agreed to the criteria, and nobody goes back to check whether high-scoring leads are actually converting.
The result: marketing proudly delivers "qualified" leads that sales ignores. Sales complains the leads are garbage. Marketing points to the score as proof of quality. And the MQL-to-SQL conversion rate quietly tells the real story: it's terrible, and nobody's fixing it because each side thinks the other is the problem.
A lead scoring model that actually works isn't just a set of point values. It's an agreement between marketing and sales about what "qualified" means, encoded in your CRM, automated through workflows, and calibrated against real conversion data.
Key takeaway: Lead scoring is a collaboration problem disguised as a technical problem. The model is only as good as the shared definition behind it and the willingness to iterate when the data says it's not working.
Lead scoring is a system that assigns numerical values to contacts based on how likely they are to become customers. In HubSpot, it combines two dimensions: fit (who the person is) and engagement (what the person does), and uses the combined score to determine when a lead is ready for sales.
Manual lead scoring (HubSpot Score): You define the criteria and assign point values based on contact properties, behaviors, and engagement patterns. This is the approach most B2B companies use and the one this article focuses on. Available on Marketing Hub and Sales Hub Professional and Enterprise.
Predictive lead scoring (Likelihood to Close): HubSpot's AI analyzes your existing customer data and automatically identifies patterns that predict conversion. It scores leads based on how closely they resemble your current customers. Available on Marketing Hub Enterprise and Sales Hub Enterprise. HubSpot's documentation on understanding the lead scoring tool covers both approaches in detail.
You build scoring rules using positive and negative criteria: positive criteria add points when a contact meets them, and negative criteria subtract points.
When a contact's total score crosses a threshold you define (e.g., 70 points), they're considered a Marketing Qualified Lead. A workflow then transitions their lifecycle stage to MQL and notifies sales.
The mechanics are straightforward. The hard part is deciding what to score, how much to weight it, and where to set the threshold, which is where most models go wrong.
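If it helps to see the logic spelled out, here's a minimal sketch in Python of how positive and negative criteria combine into a threshold check. The rule names and point values are hypothetical examples (consistent with the ranges discussed later in this article), not HubSpot's actual implementation.

```python
# Minimal sketch of manual lead scoring mechanics -- illustrative logic only,
# not HubSpot's implementation. All rules, points, and the threshold are
# hypothetical examples.

MQL_THRESHOLD = 70

SCORING_RULES = {
    "job_title_matches_icp": 15,     # fit: positive
    "company_size_matches_icp": 15,  # fit: positive
    "requested_demo": 20,            # engagement: high intent
    "visited_pricing_page": 15,      # engagement: high intent
    "downloaded_content": 10,        # engagement: medium intent
    "competitor_email_domain": -50,  # negative: will never buy
    "visited_careers_page": -10,     # negative: likely job seeker
}

def score_contact(triggered: set[str]) -> int:
    """Sum the points for every rule the contact currently meets."""
    return sum(pts for rule, pts in SCORING_RULES.items() if rule in triggered)

def is_mql(triggered: set[str]) -> bool:
    """A contact qualifies when their total score crosses the threshold."""
    return score_contact(triggered) >= MQL_THRESHOLD

# A good-fit, engaged contact qualifies (15 + 15 + 20 + 15 + 10 = 75)...
print(is_mql({"job_title_matches_icp", "company_size_matches_icp",
              "requested_demo", "visited_pricing_page", "downloaded_content"}))  # True

# ...while an engaged competitor does not (20 + 15 - 50 = -15).
print(is_mql({"requested_demo", "visited_pricing_page", "competitor_email_domain"}))  # False
```

Notice how the negative criteria do the filtering: the competitor triggers two high-intent actions but never gets anywhere near the threshold.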
Effective B2B lead scoring evaluates two distinct dimensions: fit (does this person match your ideal customer profile?) and engagement (is this person showing buying intent?). You need both. A perfect-fit lead with zero engagement isn't ready for sales. A highly engaged lead with no fit is a time-waster.
Fit criteria are based on demographic and firmographic data, the attributes that tell you whether this person could be a customer regardless of whether they're actively interested right now.
High-value fit signals (assign more points): demographic and firmographic attributes that closely match your ideal customer profile, such as industry, company size, and job title.
Negative fit signals (subtract points): attributes that mark contacts who will never buy, such as a competitor's email domain.
Engagement criteria are based on behavioral data, the actions that signal a contact is actively evaluating solutions and moving toward a buying decision.
High-intent engagement signals (assign more points): actions like demo requests and pricing page visits.
Medium-intent engagement signals (assign moderate points): actions like content downloads and webinar attendance.
Low-intent or ambiguous signals (assign minimal or no points): actions like email opens, which Apple's Mail Privacy Protection has made unreliable (more on this below).
A common mistake is over-indexing on fit criteria and under-weighting engagement — a pattern HubSpot's own lead scoring guide confirms is widespread. In practice, the best-performing B2B scoring models allocate roughly 40% of the total possible score to fit and 60% to engagement. The logic: fit tells you the lead could buy, but engagement tells you they want to buy. Intent matters more than identity.
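If you want to check where your own model sits against that split, a few lines of code over your rule definitions will tell you. A sketch, assuming each positive rule is tagged with its dimension; the rules and values here are hypothetical:

```python
# Sanity-check the fit/engagement split -- a hypothetical helper, assuming
# each rule is tagged "fit" or "engagement".
def split_ratio(rules: dict[str, tuple[str, int]]) -> dict[str, float]:
    """Return each dimension's share of the total positive points."""
    totals = {"fit": 0, "engagement": 0}
    for dimension, points in rules.values():
        if points > 0:  # only positive points define the score ceiling
            totals[dimension] += points
    grand_total = sum(totals.values())
    return {dim: pts / grand_total for dim, pts in totals.items()}

rules = {
    "job_title_matches_icp": ("fit", 15),
    "company_size_matches_icp": ("fit", 15),
    "requested_demo": ("engagement", 20),
    "visited_pricing_page": ("engagement", 15),
    "downloaded_content": ("engagement", 10),
}
print(split_ratio(rules))  # {'fit': 0.4, 'engagement': 0.6}
```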
After reviewing lead scoring models across dozens of HubSpot portals, these are the mistakes I see most frequently, and each one erodes the model's credibility with sales.
Building the model without sales input is the #1 reason lead scoring models fail. Marketing designs the model based on what it thinks qualified looks like, assigns points to marketing activities, sets a threshold, and starts routing "MQLs" to sales. Sales gets leads they don't recognize as qualified, starts ignoring them, and trust in the model collapses.
The fix: build the model collaboratively. Sit down with sales and ask: "Think about your last 10 closed-won deals. What did those contacts have in common? What did they do before they became opportunities?" Build the scoring criteria from those answers, not from marketing's assumptions.
If your model only adds points and never subtracts them, scores only go up. A competitor who opens every email will eventually score as an MQL. A job seeker who visits your careers page daily will look like a hot lead. Negative scoring is essential for filtering out contacts who accumulate engagement signals but will never buy.
Apple's Mail Privacy Protection has made email open tracking unreliable since iOS 15. Apple devices pre-load email content, which registers as an "open" even if the person never actually read the email. If your scoring model awards points for email opens, you're inflating scores for a significant portion of your database. Score clicks instead; they're a far more reliable indicator of genuine interest.
If your MQL threshold is 30 points and a single form submission awards 25, almost anyone who fills out a form becomes an MQL regardless of fit. That's not qualification; that's lead passing. Set your threshold high enough that reaching it requires a combination of both fit and engagement signals. For most B2B models, a threshold between 60 and 80 points works well.
Lead interest isn't permanent. A contact who was highly engaged three months ago but has gone silent isn't the same quality lead as someone actively engaging today. Without score decay — the automatic reduction of points over time for inactive contacts — your database fills up with stale MQLs who are no longer in-market, compounding the data quality problems that undermine everything downstream.
HubSpot's native scoring handles this partially: points are removed when a contact no longer meets the criteria. But for time-based decay (e.g., "subtract 5 points if no email click in 90 days"), you may need to build supplemental workflows that adjust scores based on recency of engagement.
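Here's a sketch of what that supplemental decay logic looks like, using the 5-points-per-90-days example above. This is the rule you'd replicate on a schedule, not actual HubSpot workflow syntax:

```python
from datetime import date, timedelta

# Sketch of time-based score decay. The 5-point / 90-day values come straight
# from the example in the text; everything else is hypothetical.
DECAY_POINTS = 5
INACTIVITY_WINDOW = timedelta(days=90)

def apply_decay(score: int, last_email_click: date | None,
                today: date | None = None) -> int:
    """Subtract points when the contact's last click is older than the window."""
    today = today or date.today()
    inactive = (last_email_click is None
                or today - last_email_click > INACTIVITY_WINDOW)
    return max(score - DECAY_POINTS, 0) if inactive else score

# A 72-point contact who last clicked 120 days ago drops to 67 on this run.
print(apply_decay(72, date.today() - timedelta(days=120)))
```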
The scoring model you build today won't be accurate six months from now. Your product evolves, your ICP shifts, your content strategy changes, and your sales process adapts. A scoring model that isn't regularly reviewed against actual conversion data drifts out of calibration, quietly delivering worse results while appearing to function normally.
Sales buy-in isn't achieved by presenting a finished model for approval. It's built by involving sales in the design process and giving them a mechanism to provide feedback that actually changes the model.
Before you assign a single point value, meet with your sales team (or sales leadership) and work through these questions together: What did your last closed-won deals have in common? What did those contacts do before they became opportunities? What makes a lead not worth your time?
These conversations generate the criteria that matter, and because sales helped define them, they're far more likely to trust the output.
Build a simple mechanism for sales to flag leads that are incorrectly scored. This can be as straightforward as a lead status option like "Score too high — not qualified" or a monthly 30-minute meeting where marketing and sales review the MQLs from the previous month and discuss which ones converted and which ones didn't.
The insight from this feedback loop is invaluable: it tells you which scoring criteria are actually predictive and which ones are just noise. Over time, this feedback makes the model more accurate and gives sales evidence that their input matters.
Publish a monthly report showing MQL-to-SQL conversion rates, average score of converted leads vs. non-converted leads, and time-to-conversion. Share this with both marketing and sales; this same data feeds into how you prove marketing ROI to leadership. Transparency about what's working and what isn't prevents the "your leads are bad" / "you're not following up" stalemate and replaces it with a shared commitment to improving the system.
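If you're assembling that report from an export, the calculation is simple. A sketch, assuming a list of last month's MQLs with hypothetical fields for outcome, score at qualification, and timing:

```python
from statistics import mean

# Sketch of the monthly scorecard -- the field names are hypothetical and
# depend on how you export MQL data from your portal.
def monthly_scorecard(mqls: list[dict]) -> dict:
    converted = [m for m in mqls if m["became_sql"]]
    not_converted = [m for m in mqls if not m["became_sql"]]
    return {
        "mql_to_sql_rate": len(converted) / len(mqls),
        "avg_score_converted": mean(m["score_at_mql"] for m in converted),
        "avg_score_not_converted": mean(m["score_at_mql"] for m in not_converted),
        "avg_days_to_sql": mean(m["days_to_sql"] for m in converted),
    }

mqls = [
    {"became_sql": True, "score_at_mql": 82, "days_to_sql": 9},
    {"became_sql": True, "score_at_mql": 74, "days_to_sql": 14},
    {"became_sql": False, "score_at_mql": 71, "days_to_sql": None},
]
print(monthly_scorecard(mqls))  # mql_to_sql_rate: 0.67, avg scores, timing
```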
If you're implementing lead scoring for the first time or rebuilding a model that sales doesn't trust, run a pilot. Score leads for 30 days without changing any workflows. At the end of the pilot, review the results together: which high-scoring leads were genuinely qualified? Which low-scoring leads turned out to be good opportunities? Adjust the model based on what you find, then turn on automation.
A scoring model is a hypothesis about what predicts conversion. Like any hypothesis, it needs to be tested against reality. If you're not reviewing your model quarterly, it's almost certainly drifting out of calibration.
Run a retrospective analysis: pull a list of contacts who became customers in the last 6 months and examine their scoring history. What criteria did they trigger? What was their score when they became MQLs? Compare this to contacts who scored high but never converted. The gap between these two groups reveals which criteria are actually predictive and which are inflating scores without driving real outcomes.
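Here's a sketch of that comparison, assuming you've exported each contact's score at MQL and the criteria they triggered (the field names and cohorts are hypothetical):

```python
from collections import Counter

# Sketch of the retrospective cohort comparison -- customers vs. high-scoring
# contacts who never converted. Data shapes are hypothetical.
customers = [
    {"score_at_mql": 85, "criteria": {"requested_demo", "visited_pricing_page"}},
    {"score_at_mql": 75, "criteria": {"requested_demo", "downloaded_content"}},
]
high_scorers_no_deal = [
    {"score_at_mql": 80, "criteria": {"downloaded_content", "attended_webinar"}},
]

def criteria_hit_rates(cohort: list[dict]) -> dict[str, float]:
    """Share of the cohort that triggered each scoring criterion."""
    counts = Counter(c for contact in cohort for c in contact["criteria"])
    return {crit: n / len(cohort) for crit, n in counts.items()}

# Criteria common among customers but rare among non-converters are the
# genuinely predictive ones; the reverse pattern flags score inflators.
print(criteria_hit_rates(customers))
print(criteria_hit_rates(high_scorers_no_deal))
```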
Adjust point values and thresholds based on what you find. Then share the changes with sales, explain the rationale, and reset expectations for the next quarter.
Lead scoring only creates value when it triggers action. The score itself is just a number; it's the workflows and lifecycle stage transitions connected to it that turn scoring into a working qualification system.
The most fundamental connection: when a contact's score crosses your MQL threshold, a workflow automatically updates their lifecycle stage from Lead to MQL. This transition should also trigger a notification to the assigned sales rep or the sales team queue, depending on your routing logic.
For companies with multiple sales teams or territories, the scoring model can feed into routing logic. High-scoring enterprise leads route to senior AEs. Mid-scoring leads route to the SDR team for further qualification. Low-scoring leads stay in nurture workflows until their engagement increases.
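The routing decision itself reduces to a few ordered rules. A sketch; the thresholds, company-size cutoff, and queue names are hypothetical examples, not a prescription:

```python
# Hypothetical routing rules layered on top of the score.
def route_lead(score: int, company_size: int) -> str:
    if score >= 70 and company_size >= 1000:
        return "senior_ae_queue"       # high-scoring enterprise leads
    if score >= 40:
        return "sdr_queue"             # mid-scoring: further qualification
    return "nurture_workflow"          # low-scoring: keep nurturing

print(route_lead(85, 2500))  # senior_ae_queue
print(route_lead(55, 120))   # sdr_queue
```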
Contacts who are engaged but haven't reached the MQL threshold should be placed in targeted nurture workflows designed to increase their engagement. The nurture content should address the specific gaps in their scoring profile: if a contact has high fit but low engagement, send them high-value content designed to drive website visits and deeper exploration. If they have high engagement but low fit data, use progressive profiling to collect the missing firmographic information.
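If you track fit and engagement as separate score properties, that branch decision is straightforward to encode. A sketch with hypothetical cutoffs:

```python
# Sketch of the nurture-branch decision -- assumes fit and engagement are
# tracked as separate score properties; the cutoffs are hypothetical.
def nurture_branch(fit_score: int, engagement_score: int) -> str:
    if fit_score >= 25 and engagement_score < 20:
        return "high_value_content_track"     # good fit, needs engagement
    if engagement_score >= 30 and fit_score < 15:
        return "progressive_profiling_track"  # engaged, needs firmographic data
    return "general_nurture_track"

print(nurture_branch(fit_score=30, engagement_score=10))  # high_value_content_track
print(nurture_branch(fit_score=5, engagement_score=40))   # progressive_profiling_track
```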
Contacts whose scores decay below a certain level after reaching MQL should be recycled, moved back into nurture workflows with their lead status updated to reflect the disposition (e.g., "Nurture" or "Bad Timing"). This prevents stale MQLs from cluttering your pipeline while keeping them in the system for future re-engagement.
Manual lead scoring (HubSpot Score) is available on Marketing Hub Professional and Enterprise, and Sales Hub Professional and Enterprise. Predictive lead scoring (Likelihood to Close) requires Marketing Hub Enterprise or Sales Hub Enterprise. You can create up to 25 custom score properties on Enterprise plans.
There's no universal answer, but a practical approach is to work backward from your MQL threshold. If your threshold is 70 points, design the model so that reaching 70 requires a meaningful combination of fit and engagement. A single action shouldn't be enough to qualify a lead. As a general guide: high-intent actions (demo request, pricing page visit) might be worth 15–20 points, medium-intent actions (content download, webinar attendance) might be worth 5–10 points, and fit criteria might be worth 10–15 points each for top-priority attributes.
If you have enough historical data (at least 200–300 closed-won deals in HubSpot) and an Enterprise subscription, predictive scoring is a powerful complement to manual scoring. It surfaces patterns you might not have identified manually. However, for most mid-market companies, manual scoring provides more control and transparency: you know exactly why a lead scored the way it did, which makes it easier to troubleshoot and get sales buy-in. Many companies use both: manual scoring for qualification workflows and predictive scoring as a secondary prioritization signal.
This is a common challenge for companies with short forms or limited progressive profiling. Focus on behavioral scoring first: website activity, content consumption, and email engagement don't require the contact to tell you anything. Then use enrichment tools (HubSpot's Breeze Intelligence, Clearbit, ZoomInfo) to auto-fill firmographic data for scoring. As contacts engage more, use progressive profiling on forms to gradually collect the demographic data your fit scoring needs.
Review quarterly at minimum. A full review includes: checking MQL-to-SQL and MQL-to-opportunity conversion rates, analyzing the scores of recently converted customers vs. non-converters, adjusting point values and thresholds based on findings, and discussing results with sales to gather qualitative feedback. Between quarterly reviews, monitor for sudden changes in MQL volume or conversion rates that might signal a model issue.
You can build the most sophisticated scoring model in the world: multi-dimensional, AI-enhanced, perfectly calibrated to your conversion data. But if your sales team doesn't trust it, they won't use it. And a scoring model that sales ignores is just marketing theater.
Trust comes from three things: involving sales in the design, sharing conversion data transparently, and iterating based on feedback. When sales sees that high-scoring leads actually convert at higher rates, and when they have a mechanism to flag the ones that don't, the model becomes a tool they rely on instead of one they work around.
That alignment between scoring, lifecycle stages, and sales workflows is exactly the kind of operational infrastructure that transforms marketing from a lead generation function into a revenue engine — and it's the work a fractional HubSpot consultant is built to deliver.
Want to build a scoring model your sales team actually uses? Book a free discovery call and I'll review your current lead scoring setup, what's working, what's not predictive, and what it would take to build a model that both marketing and sales trust.
Anna Connolly is a HubSpot Solutions Consultant and marketing operations strategist with 9+ years of experience helping B2B marketing and RevOps teams fix broken CRM systems, clean up messy data, and build automation that scales.