When Meridian Retail Group approached InsightCore in early 2024, they had a problem that was simultaneously expensive, measurable, and invisible: an 18% annual loyalty program churn rate that their analytics team could describe in retrospect but could not predict in advance.
Meridian operates 340 retail locations across 12 countries with a loyalty program serving 4.8 million active members. Their annual churn translated to roughly 864,000 members lapsing each year — members who, on average, had spent $210 in the previous 12 months. The retention problem was costing the business an estimated $180 million in recoverable lifetime value annually. The existing analytics team could tell leadership exactly how bad last year's churn was. They could not tell the customer success team which members were about to leave.
The Diagnostic Phase
InsightCore's implementation began with a four-week diagnostic phase focused on identifying the behavioral signals that preceded loyalty program lapse. Meridian had 36 months of transaction history, app engagement logs, email click data, support interactions, and store visit frequency by member. The raw data existed — it had simply never been assembled into a unified member profile and analyzed for churn signal.
The diagnostic surfaced several counterintuitive findings. First, the strongest predictors of churn were not purchase frequency or transaction value — they were behavioral inflection points: specifically, a reduction in app session frequency of more than 40% over a rolling 90-day window, combined with the absence of any points redemption in a 120-day period. Members exhibiting both signals had a 73% probability of lapsing within 60 days.
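The two-signal rule can be expressed directly in code. The sketch below is a minimal illustration, assuming hypothetical field names for the per-member rollups (the source does not describe Meridian's actual schema):

```python
from dataclasses import dataclass

@dataclass
class MemberActivity:
    # Illustrative per-member rollups; field names are assumptions, not Meridian's schema.
    sessions_prev_90d: int      # app sessions in the 90 days before the current window
    sessions_curr_90d: int      # app sessions in the current rolling 90-day window
    days_since_redemption: int  # days since the member last redeemed points

def shows_both_churn_signals(m: MemberActivity) -> bool:
    """Flag members with a >40% session-frequency drop AND no redemption in 120 days."""
    if m.sessions_prev_90d == 0:
        return False  # no baseline activity to measure a drop against
    session_drop = 1 - m.sessions_curr_90d / m.sessions_prev_90d
    return session_drop > 0.40 and m.days_since_redemption > 120
```

Per the diagnostic, members for whom this function returns `True` had a 73% probability of lapsing within 60 days.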
Second, the churn signal was different by member segment. High-value members (top 20% by lifetime spend) showed different behavioral patterns before lapsing than mid-tier members. They were more likely to have had an unresolved customer service interaction, and less likely to have received a personalized offer in the preceding quarter. Building separate models for each segment increased prediction accuracy by 18 percentage points compared to a single unified model.
Third, the intervention window was wider than the team had assumed. Conventional wisdom held that lapsing members had to be reached within days of their last purchase. The model revealed that for most segments, the predictive signal appeared 45-60 days before lapse — providing ample time for a thoughtful intervention sequence rather than a last-ditch retention offer.

Building the Predictive Model
InsightCore's AutoML engine evaluated fourteen model architectures against Meridian's historical data, ultimately selecting a gradient boosting model with 47 input features for the primary segments and a logistic regression model for the long tail of infrequent purchasers, where training data was limited.
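The split between model families comes down to how much labeled history each segment has. A minimal sketch of that routing logic, assuming a hypothetical 5,000-sample cutoff (the source does not state the actual threshold):

```python
# Hypothetical routing rule: segments with enough labeled history get the
# gradient boosting model; sparse long-tail segments fall back to logistic
# regression. The 5,000-sample cutoff is an illustrative assumption.
MIN_SAMPLES_FOR_BOOSTING = 5_000

def assign_architectures(segment_sizes: dict) -> dict:
    """Map each member segment to a model family based on training-set size."""
    return {
        segment: ("gradient_boosting" if n >= MIN_SAMPLES_FOR_BOOSTING
                  else "logistic_regression")
        for segment, n in segment_sizes.items()
    }
```

The design rationale: gradient boosting benefits from large training sets, while logistic regression degrades more gracefully when labeled examples are scarce.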
The feature engineering step was critical. Raw transaction counts and recency scores had limited predictive power. What mattered were derived features: rate of change in visit frequency, ratio of redemptions to earned points, time-since-last app engagement relative to cohort average, and cross-category purchase breadth (members who purchase across multiple product categories churn at significantly lower rates than single-category purchasers).
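The four derived features described above can be sketched as a single transformation over raw rollups. Key names and the input shape are illustrative assumptions, not Meridian's actual schema:

```python
def derive_features(member: dict, cohort_avg_recency: float) -> dict:
    """Turn raw per-member rollups into the derived features described above.
    Keys on `member` are illustrative assumptions, not Meridian's schema."""
    visit_change = (
        (member["visits_curr_q"] - member["visits_prev_q"])
        / max(member["visits_prev_q"], 1)
    )
    redemption_ratio = member["points_redeemed"] / max(member["points_earned"], 1)
    relative_recency = member["days_since_app_use"] / max(cohort_avg_recency, 1.0)
    return {
        "visit_freq_change": visit_change,             # rate of change in visit frequency
        "redemption_ratio": redemption_ratio,          # redeemed vs. earned points
        "relative_recency": relative_recency,          # engagement recency vs. cohort average
        "category_breadth": len(member["categories"]),  # cross-category purchase breadth
    }
```

Note that `relative_recency` normalizes against the cohort average, so a 30-day gap reads as alarming for a weekly shopper but unremarkable for a quarterly one.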
Model validation used a 6-month holdout period — intentionally chosen to span a major seasonal event — to test whether the model's predictions were genuinely robust or merely capturing seasonal patterns. The model achieved 74% recall on actual churners in the holdout set with a false-positive rate of 22%. In the context of a loyalty intervention program where the cost of a false positive is a modestly discounted offer sent to a healthy member, this recall/false-positive tradeoff was comfortably acceptable.
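For reference, the two validation metrics are computed from the holdout confusion counts as follows (a plain-Python sketch; a library such as scikit-learn would normally be used):

```python
def recall_and_false_positive_rate(y_true, y_pred):
    """Compute recall (share of actual churners caught) and false-positive rate
    (share of healthy members incorrectly flagged) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return recall, fpr
```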
The Intervention Architecture
Prediction without intervention is just a more expensive form of reporting. Meridian's customer success team designed a three-step intervention sequence triggered automatically when a member crossed the model's churn probability threshold:
At 45 days before predicted lapse, a personalized reengagement email was sent with curated product recommendations based on purchase history. No discount was offered at this stage — the signal was used only to trigger relevance-optimized content. Open rates on this triggered email were 2.3x higher than Meridian's standard newsletter, confirming that the model was identifying members who had not yet mentally disengaged.
At 30 days before predicted lapse, members who had not engaged with the first email received a push notification with a targeted points bonus offer (earn 3x points on their next purchase). The offer was personalized to the category where each member had historically shown the highest purchase probability.
At 15 days before predicted lapse, members who had still not engaged received a direct outreach from a member services representative — automated message for mid-tier members, personal phone outreach for high-value members — with a retention-specific offer calibrated to the member's estimated lifetime value.
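The three-step sequence above reduces to a simple dispatch rule: a member who engages at any step exits the sequence, and escalation depends on days to predicted lapse and member tier. A minimal sketch, with action names chosen for illustration:

```python
def next_intervention(days_to_predicted_lapse: int,
                      engaged_with_prior_step: bool,
                      tier: str):
    """Return the next step in the three-step sequence, or None if no action is due.
    `tier` is 'high' or 'mid'; thresholds mirror the sequence described above."""
    if engaged_with_prior_step:
        return None  # member re-engaged; the sequence stops
    if days_to_predicted_lapse <= 15:
        # direct outreach, channel calibrated to member value
        return "phone_outreach" if tier == "high" else "automated_message"
    if days_to_predicted_lapse <= 30:
        return "push_notification_points_bonus"   # 3x points on next purchase
    if days_to_predicted_lapse <= 45:
        return "personalized_reengagement_email"  # no discount at this stage
    return None
```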
Results After 12 Months
After 12 months of live deployment, Meridian's overall loyalty churn rate had declined from 18% to 11% — a 39% relative reduction. The model was identifying at-risk members with consistent accuracy, and the intervention sequence was converting approximately 38% of reached members who would otherwise have lapsed.
The financial impact was direct and measurable. At an average recovery value of $210 per retained member, and with approximately 220,000 members retained through the program in year one, the gross recovered revenue was $46.2 million against a total program cost — including InsightCore licensing, implementation, and offer redemption — of approximately $43 million. That is a modest but positive year-one return of roughly $3.2 million net, before accounting for lifetime value multiplication: a retained loyalty member typically returns 2.4x their immediate-year spend in subsequent years.
More importantly, the program created a flywheel: each cohort of retained members provided new training data that improved subsequent model iterations. The churn prediction model running today is 12 percentage points more accurate than the model deployed at launch, with no additional feature engineering — purely the result of 12 additional months of labeled outcomes being used for continuous retraining.