The Agent: Three Layers of Intelligence for Smarter Playlist Pitching

Matt / May 7, 2026 / Agentforce, AI, Development, Integration, Music, Personal, Projects, Salesforce

The AI Recommendation Engine to handle your music promotion

I Built an AI Music Promotion Agent Inside My CRM – Here’s What It Can Actually Do

There’s a moment every independent artist knows well. You’re staring at a spreadsheet – or a Notion doc, or a pile of emails – trying to figure out which playlist curator to pitch next. You’ve got ten tracks. There are thousands of playlists. Some of them have already said no. Some of them would be perfect fits you haven’t thought of yet. And some of them look right but would be a waste of everyone’s time.

I’m a guitarist, singer, and producer. I’m not particularly talented at any of these, but I’ve been writing and recording music for over 20 years, and last year I finally started publishing it. I quickly learned that the biggest source of friction was the promotional grind that comes with it. At some point I stopped asking “which playlists should I pitch?” and started asking a different question: what would it take to answer that scientifically?

The answer turned out to be a Salesforce CRM, a Python audio analysis pipeline, and an Agentforce AI agent. This is what I built – and what it taught me about the gap between raw data and actual intelligence.

THE FOUNDATION: MEASURING WHAT A TRACK ACTUALLY SOUNDS LIKE

The previous posts in this series cover the full build: the Music Intelligence Engine that analyzes 30-second preview clips and computes thirteen audio features per track, the Scoring Engine that condenses those features into three composite dimensions (Energy Adjusted, Mood Score, Rhythm Score), and the playlist snapshot pipeline that builds a matching profile for every playlist by averaging those same dimensions across all of its current tracks. The result is a six-number fingerprint for any track-playlist pair, and from that a Fit Score – a single number ranking every track-playlist combination in the catalog.
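The post doesn’t spell out the Fit Score formula, but the shape of the computation follows from the description: three composite dimensions per side, six numbers total, condensed into one score. Here is a minimal distance-based sketch in Python; the field names, the 0–100 normalization, and the Euclidean distance are all assumptions, not the real Scoring Engine:

```python
import math

# Assumed: each composite dimension is normalized to 0-100, so the maximum
# possible distance between a track and a playlist profile is 100 * sqrt(3).
DIMENSIONS = ("energy_adjusted", "mood_score", "rhythm_score")

def fit_score(track: dict, playlist: dict) -> float:
    """Condense the six-number fingerprint (three dimensions per side)
    into a single score: 100 minus the scaled Euclidean distance."""
    distance = math.sqrt(sum((track[d] - playlist[d]) ** 2 for d in DIMENSIONS))
    return round(100 * (1 - distance / (100 * math.sqrt(3))), 1)

track = {"energy_adjusted": 72, "mood_score": 55, "rhythm_score": 80}
playlist = {"energy_adjusted": 68, "mood_score": 60, "rhythm_score": 75}
print(fit_score(track, playlist))  # a near-match scores in the mid-90s
```

A distance metric like this has the useful property that the score decomposes back into per-dimension deltas, which matters for the explanation features described later.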

THE AGENT: THREE LAYERS OF INTELLIGENCE

The Scoring Engine generates the data; the Recommendation Engine makes it conversational. It’s embedded directly in the CRM on both the Track and Playlist record pages – meaning when I’m looking at a specific track, I can ask questions about that track in context, and when I’m looking at a playlist, I can ask questions about that playlist.

There are three areas where it’s genuinely useful.

PLAYLIST INTELLIGENCE

The most obvious starting point: which playlists should I pitch this track to?

But the agent doesn’t just return a ranked list of fit scores. It filters out playlists I’ve already pitched, accounts for playlists where I was rejected recently, and prioritizes based on a combination of audio fit and curator responsiveness. I can ask it to explain its reasoning – why this playlist over that one, what’s the gap in the score, what would need to be different for a track to rank higher.

This alone eliminates hours of manual cross-referencing.
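The shortlist logic above can be sketched in a few lines. Everything here is illustrative – the record shape, the 90-day rejection cooldown, and the fit-times-responsiveness ranking key are my assumptions about how such a filter would look, not the actual Salesforce implementation:

```python
from datetime import date, timedelta

REJECTION_COOLDOWN = timedelta(days=90)  # assumed cooldown window

def shortlist(candidates, today=None):
    """Drop playlists already pitched or recently rejected, then rank the
    rest by audio fit weighted by curator responsiveness (0-1, the
    fraction of past pitches the curator answered)."""
    today = today or date.today()
    eligible = [
        p for p in candidates
        if not p["already_pitched"]
        and (p["last_rejected"] is None
             or today - p["last_rejected"] > REJECTION_COOLDOWN)
    ]
    return sorted(eligible, key=lambda p: p["fit_score"] * p["responsiveness"],
                  reverse=True)

candidates = [
    {"name": "Indie Chill", "fit_score": 92, "responsiveness": 0.1,
     "already_pitched": False, "last_rejected": None},
    {"name": "Late Night Drive", "fit_score": 85, "responsiveness": 0.8,
     "already_pitched": False, "last_rejected": None},
    {"name": "Fresh Finds", "fit_score": 95, "responsiveness": 0.9,
     "already_pitched": True, "last_rejected": None},
]
print([p["name"] for p in shortlist(candidates)])
# Late Night Drive outranks Indie Chill despite a lower raw fit score,
# and Fresh Finds is filtered out entirely.
```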

CATALOG-LEVEL INTELLIGENCE

This is where it gets more interesting. The agent can reason across the catalog, not just about a single track.

If I’m looking at a playlist and I ask “which of my tracks is the best fit for this?” – it compares every scored track against that playlist’s profile and returns a ranked list with explanations. More useful than that: it can identify when a track I haven’t been actively promoting is a better fit for a playlist I’ve been pitching a different track to. That’s a real insight that’s impossible to surface manually when you have a deep catalog.
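The catalog-level query is, at its core, a ranking over precomputed track-playlist scores plus a comparison against whatever is currently being pitched. A hypothetical sketch (track names and the precomputed-score dictionary are invented for illustration):

```python
def rank_for_playlist(scores: dict, currently_pitching: str):
    """scores maps track name -> precomputed fit score for one playlist.
    Returns the full ranking plus any tracks that outscore the one
    currently being pitched -- the cross-promotion insight."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    better = [t for t in ranked if scores[t] > scores[currently_pitching]]
    return ranked, better

scores = {"Track A": 78, "Track B": 91, "Track C": 64}
ranked, better = rank_for_playlist(scores, currently_pitching="Track A")
print(ranked, better)  # Track B is a better fit than the track being pitched
```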

The other catalog-level query that matters: ROI ranking by playlist. Not every high-follower playlist is worth the effort. Some curators are selective to the point where acceptance probability is near zero. The agent weighs follower count against historical acceptance difficulty – playlists where the audio fit is strong, the popularity tier matches, and the curator has historically been receptive rank higher than prestige playlists that are effectively closed doors.
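One plausible way to express that weighting is as an expected value: reach if accepted, scaled by a rough acceptance probability from the curator’s history. This heuristic is mine, not the post’s – the log-scaled follower count and the field names are assumptions:

```python
import math

def pitch_roi(playlist: dict) -> float:
    """Expected value of a pitch: log-scaled reach x historical acceptance
    rate x audio fit. Log-scaling keeps mega-playlists from dominating
    purely on follower count."""
    accept_rate = playlist["accepted"] / max(playlist["pitches_received"], 1)
    reach = math.log10(playlist["followers"] + 1)
    return reach * accept_rate * (playlist["fit_score"] / 100)

prestige = {"followers": 500_000, "accepted": 1, "pitches_received": 400,
            "fit_score": 90}
receptive = {"followers": 8_000, "accepted": 30, "pitches_received": 100,
             "fit_score": 85}
print(pitch_roi(receptive) > pitch_roi(prestige))  # the open door wins
```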

FIT SCORING INTELLIGENCE

The most technically rich layer. When a track scores poorly against a playlist, a yes/no answer is useless. What I need to know is why – and specifically, what the gap looks like across each dimension.

The agent can break down the score component by component: the energy delta, the mood delta, the rhythm delta, where the popularity mismatch is, how the track compares against the playlist’s BPM range. If a track is close on energy and mood but has a weak beat salience score for a playlist that runs at 128 BPM, that’s actionable – I know what kind of track would actually fit, which informs production decisions, not just promotion decisions.
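The breakdown itself is simple once the composite dimensions exist: per-dimension deltas plus a BPM range check. A sketch with illustrative field names (the real explanation lives in the Scoring Engine and agent prompt):

```python
def explain_gap(track: dict, playlist: dict) -> dict:
    """Per-dimension delta (track minus playlist profile) plus whether the
    track's BPM falls inside the playlist's observed BPM range."""
    deltas = {
        dim: round(track[dim] - playlist[dim], 1)
        for dim in ("energy_adjusted", "mood_score", "rhythm_score")
    }
    lo, hi = playlist["bpm_range"]
    deltas["bpm_in_range"] = lo <= track["bpm"] <= hi
    return deltas

track = {"energy_adjusted": 70, "mood_score": 62, "rhythm_score": 41, "bpm": 112}
playlist = {"energy_adjusted": 68, "mood_score": 60, "rhythm_score": 75,
            "bpm_range": (120, 132)}
print(explain_gap(track, playlist))
# Close on energy and mood; the rhythm delta and out-of-range BPM are the gap.
```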

The subtler insight: curator tastes drift. A playlist that was mellow indie-electronic two years ago might have shifted toward darker, more aggressive sounds as the curator evolved. Comparing a track against recently added songs rather than the playlist average is a meaningfully different question, and the agent can answer both.
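The drift check amounts to building two profiles from the same playlist: one averaged over all current tracks, one over only the most recent adds. A sketch, with the ten-track recency window as an assumed parameter:

```python
from statistics import mean

N_RECENT = 10  # assumed recency window

def profile(tracks):
    """Average each composite dimension across a set of tracks."""
    return {dim: mean(t[dim] for t in tracks)
            for dim in ("energy_adjusted", "mood_score", "rhythm_score")}

def drift(playlist_tracks):
    """playlist_tracks sorted oldest-first; returns (overall, recent)
    profiles so a track can be scored against either."""
    return profile(playlist_tracks), profile(playlist_tracks[-N_RECENT:])

# Illustrative: a playlist that started mellow and turned aggressive.
tracks = ([{"energy_adjusted": 40, "mood_score": 70, "rhythm_score": 50}] * 20
          + [{"energy_adjusted": 75, "mood_score": 35, "rhythm_score": 65}] * 10)
overall, recent = drift(tracks)
print(overall["energy_adjusted"], recent["energy_adjusted"])
# The recent-adds profile is markedly more energetic than the average.
```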

WHAT I ACTUALLY LEARNED

Building this revealed something I didn’t expect: the bottleneck in music promotion isn’t information. It’s interpretation.

I had access to most of this data before – follower counts, genre tags, playlist URLs. What I didn’t have was a framework for combining it into a decision. The Scoring Engine provides the framework. The Recommendation Engine makes it conversational enough that the framework actually gets used.

The other thing I learned is that “fit” is multidimensional in ways that genre labels completely miss. I have tracks that are sonically identical to what’s on a playlist – same key, same tempo, same energy profile – but differ on mood in a way that makes them feel wrong. The Circumplex Model catches that. Genre tags don’t.
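The Circumplex Model places mood on a valence (negative-to-positive) by arousal (calm-to-intense) plane, so “feels wrong despite identical sonics” becomes measurable as distance in that plane. A sketch with illustrative coordinates:

```python
import math

def mood_distance(a, b):
    """Euclidean distance on the valence/arousal plane, both axes in [-1, 1]."""
    return math.hypot(a["valence"] - b["valence"], a["arousal"] - b["arousal"])

playlist_mood = {"valence": 0.6, "arousal": 0.3}   # warm, relaxed
track_a = {"valence": 0.55, "arousal": 0.35}       # sits right in the pocket
track_b = {"valence": -0.5, "arousal": 0.4}        # same arousal, wrong valence
print(mood_distance(track_a, playlist_mood) < mood_distance(track_b, playlist_mood))
```

Track B could match the playlist on key, tempo, and energy and still land far away in mood space – exactly the mismatch a genre tag can’t see.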

WHAT’S NEXT

The trend scoring component – weighting tracks that are gaining streaming momentum – requires connecting the pipeline to streaming analytics, which is next on the roadmap.

The bigger question is whether the fit score can be improved by learning from outcomes. Every submission has a status – accepted, rejected, no response. That’s a labeled dataset. Whether there’s enough signal in the scores to train a lightweight classifier is something I’ll find out once there are enough resolved submissions to work with.
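The “lightweight classifier” idea can be prototyped without any ML library at all: a tiny logistic regression over features like normalized fit score and curator responsiveness, with submission outcomes as labels. The sketch below uses purely synthetic, illustrative data – it only shows the shape of the experiment, not any real results:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(rows, epochs=2000, lr=0.1):
    """Plain SGD logistic regression. rows: list of (features, label)."""
    w = [0.0] * (len(rows[0][0]) + 1)  # feature weights + bias (last slot)
    for _ in range(epochs):
        for x, y in rows:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            err = p - y
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            w[-1] -= lr * err
    return w

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])

# features: (fit score / 100, curator responsiveness); label: 1 = accepted.
# Synthetic rows for illustration only.
rows = [((0.90, 0.8), 1), ((0.85, 0.7), 1), ((0.40, 0.2), 0),
        ((0.50, 0.1), 0), ((0.95, 0.9), 1), ((0.30, 0.3), 0)]
w = train(rows)
print(predict(w, (0.9, 0.85)) > 0.5 > predict(w, (0.35, 0.2)))
```

Whether real resolved submissions carry enough signal for even this small a model is exactly the open question – the sketch just shows how cheap the first experiment would be.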

For now, the system does what I built it to do: it turns a catalog of tracks and a database of playlists into a ranked, explainable set of decisions. That’s a significant upgrade from a spreadsheet.

The Music Intelligence Engine, Scoring Engine, and Recommendation Engine are all custom-built on top of Salesforce and Agentforce. If you’re a developer curious about the technical implementation – the audio feature formulas, the scoring model design, or the Salesforce data architecture – feel free to reach out.


About Matt

Matt McGuire is a Salesforce architect, AI builder, and punk musician based in Toronto. He’s Canada’s #1 certified Salesforce professional, 43× certified across architecture, development, AI, and a wide range of platform products. He’s been building on Salesforce for 17 years and currently spends most of his time at the intersection of AI and the platform. The Music Intelligence Engine is his most interesting project to date. He thinks you should read the whole series.
