Sound Search #2: Jenn Anderson-Miller, Audiosocket
Jenn Anderson-Miller has been building ahead of the market for a while. In 2011, her company Audiosocket announced "music as a service." The product suite followed in 2012. The idea was sound; the ecosystem wasn't ready to recognize the rights they were issuing. Seventeen years on, Jenn is still CEO and co-founder, and Audiosocket has grown into three distinct arms: ASX for high-end sync with music supervisors, agencies, and brands; a self-serve marketplace for small businesses and creators; and Sync Hits, which powers music on platforms like Canva, CapCut, and TikTok.
We talked to Jenn about why she thinks music tech needs to stop patching the past and start building for the future, her vision for turning music licensing from relational to transactional, and what 17 years of persistence have taught her.
You've talked about music shifting from a relational model to a transactional one. Can you walk through what each of those means, and what the transactional version actually looks like?
Two years ago, I put together a deck describing a vision I'd been working on mentally. One of the slides said: turning music from relational to transactional. The vision is built on a hypothesis — that by gating the content, we are actually creating a system that's capping the revenue potential. When we create rules that are frictionless, we enable reach and revenue to grow exponentially.
Relational means: which platform is it going on? It's going on YouTube. Okay, but what if somebody reposts it to TikTok? That's a different price. And what territory? We think it's just going to be in the US. But what if it gets put in Europe as well? Different price, different license. With every decision, there's a new outcome.
Transactional means the decisions don't matter. The outcome is that we monetize at every single touch point. If you build a product on AWS, for example, you know you're going to pay X based on how much you use. If we turn music into a transactional model, my belief is we're going to monetize where we're not monetizing today, because today we're monetizing imperfectly, and there's a lot of room for leakage. Each use creates an event, which is monetizable. And we'll see those artists who are able to operate in that way reaching a far greater audience, because their content's not being gated at every turn. It's being encouraged to be shared.
What do you think are the blockers for creating that frictionless experience?
Oh my gosh, at the moment, there's a lot. I've always said this vision is another career. I only just started working on turning it into reality in November, December. I see it as truly another seven-to-ten-year cycle.
The first thing is the rights themselves. Most music is still quite fragmented and the rights are not clean. It's really tough to find music that can actually move in a frictionless way. There are companies like Audiosocket that can enable this today, but broadly, that’s not the case, and the biggest blocker is a rights issue.
The second thing is proof. If we can demonstrate that revenue and reach grow when friction is removed, that will be a compelling argument for people to release music in this way in the future. But initially, we'll start with what's there.
The third thing is mindset. I just don't think the music industry historically has been the tech innovator. If you look at the companies that deliver music — Spotify, Amazon, Apple — those are people who went to MIT and Stanford and tech schools. They were not musicians. They were not driving this vision from within the music industry. That shift — that the music industry can build its own technologies — is still a shift.
Let's move from the future into the present. Walk me through a real week at ASX. How do briefs come in, who does what, and how does the team decide what to pitch?
Mostly ASX is working with the highest end of the curators of music experiences. These are people doing trailers, big launch video games, TV commercials, TV series. We're really talking about music supervisors and agencies for the most part. Sometimes brands, but typically the brands are coming in through an agency or a music supervisor too.
Usually a brief comes in. Sometimes it's vague, other times very specific. Since we integrated with AIMS, the team tends to rely on it heavily when reference tracks are given. That's a quick start, a first pass. They also know the catalog really well, so after the AIMS first pass they'll look to see what else they might pull in that isn't so obvious, and then they'll apply filters based on what the brief said.
There's definitely a human component. ASX services the highest end of the human side of music, in the sense that music supervisors and brands care a lot about how the music impacts the work they're creating. Our team genuinely tries to imagine themselves within whatever project they're working on — within that scene, within that commercial. A lot of that is feeling based. The quick starts get us from 100,000 songs down to maybe 75. That's where our team starts to build a human experience into it. Ultimately they're only delivering five to ten songs per brief.
Audiosocket also runs Sync Hits, which serves a very different user than ASX. How does the discovery experience differ?
They're entirely different. Music supervisors heavily rely on curation teams within companies like Audiosocket to deliver them five to ten songs. I don't know supervisors or ad agencies who go into a catalog of 100,000 songs and start there. They trust their go-tos to bring them top picks. They also know exactly what they want and they can guide the results.
On Canva, for example, music is not something most people know how to talk about or even how to guide. I always describe it this way to business and product people, because they don't necessarily know the nuances of searching for music: with music, more is not better. More content can work for visuals like photos, where you can scroll through hundreds in a minute. But music is such an engaged process, where you have to click every single song, listen, and scrub. It's intense. The right thing is better, and the right experience is best.
For tools, I lean into AI. I encourage product teams to think about a more curated set, and the right tools to get users to the music they're thinking of but don't know how to describe. Even if you give people filters like genres, moods, and tempos, very few people really know how to use them effectively.
How are you thinking about that internally? Creating an almost magical experience for a user who can't yet describe what they're looking for.
We're actually building an entirely new front end that starts with the assumption you don't necessarily know how to talk about music. We're building all of AIMS' new AI tools into it. When you land on our new site, which will probably be available in July, it's basically a choose your own adventure experience. It says to the user: how do you want to proceed? Do you want to talk to me like you do with AI? Do you want to deliver a brief? Do you want to upload a song, or use a YouTube or Spotify link? Or do you want to take an adventure with me where I ask you questions to guide your search?
I love the "guide my search." It's built on something we did last year with AIMS — an integration with Monotype, where we worked out how to marry font and music. Through the process of building that tool, we discovered the right types of questions to take people from "I know I need a song" to "this is the song." What are you building? Great, you're working on an ad. From there, let's talk about the sentiment. What's the energy that moves you through this piece of media? What kind of pace? What's the end impact you want to leave? It just prompts the right questions. So if people don't have a starting place, they can still end with a song that suits.
Seventeen years in, what's the thing founders in music tech consistently get wrong?
I've seen a lot of companies trying to solve the same problems, which is really patchworking the rights complexities. It sounds a bit sad, but I don't think we're going to fix the problems of the past. Trying to build solutions on top of fragmented rights and systems with friction — that's not the path forward. It's rethinking how music is created, shared and consumed in the future.
This is self-serving, but back to what we're doing: we're not thinking about how to fix the past. We're thinking about what the future looks like, especially as machines are coding now. Machines aren't going to negotiate a contract. Machines are going to see if something exists that is usable, and then keep building. It'll grab an API key and keep building. Agentic AI is here.
Founders need to understand that trying to fix the past is always going to be problematic. If you look at it as a slow roll, a track-by-track approach going forward, then you can build really effective systems for the future. We can consider how it ideally works, and build for that.
What's the best piece of advice you've received over 17 years of running Audiosocket?
Persistence pays off.
We put out press releases in 2011 about MaaS, music as a service, and launched the product suite in 2012. It was too soon. The product was needed, but the ecosystem wasn't ready. We were issuing rights when there weren't systems that recognized the rights being issued. It was really frustrating.
Audiosocket came out as a music company that was innovating in tech. We hit bottleneck after bottleneck. Now I feel like all of that has finally led me to where I am today, to be positioned to build for the future. And the time is finally right. The idea of persistence: try, try, try again.
This interview has been edited for length and clarity.
Sound Search is AIMS' interview series with music professionals on how technology is changing the way we discover and work with music.
Have someone in mind we should talk to? Reach out to us on LinkedIn.
