DataArt’s Sergey Bludov, writing for the Medium publication Hackernoon, has posted an interesting rundown, “Music Tech Trends to Bolster The Music Industry in 2019.” He writes that 2018 may have been one of the more pivotal years for technology in music, both in innovation and adoption, which sets things up nicely for 2019. The article explores the potential of sexier tech topics like artificial intelligence, VR/AR, and wearables, but the area I’m most excited about might seem mundane in comparison. Sergey places it at the top of his list, so I think he shares my enthusiasm. We’re talking about using music recognition technology as a tool to calculate accurate performance royalty payments from song play in venues. I swear — this is super-exciting:
The music industry faces a massive challenge when it comes to monitoring and tracking where and how often a song is being played. Without effective Music Recognition Technology (MRT), artists, publishers, and other rights owners lose their royalties each time music is played in a club, bar, or any other venue. And, of course, this is a very serious problem, with some estimating that 25–35% of mechanical licenses in the U.S. are unrecognized on streaming platforms alone. Fortunately, a range of experts around the world are working diligently to solve this major issue through MRT innovations and implementation.
Automatic music recognition isn’t new. In fact, Broadcast Data Systems (BDS) was widely deployed by the early 1990s for recognizing songs played on U.S. radio stations. However, even though the core algorithm for recognizing music has existed for decades, a large percentage of venues are still not adequately equipped with MRT. The good news is that many companies, such as DJ Monitor, are heading up the technology side. And of course, once the music is effectively recognized, the data is sent to the performance rights organizations (PROs) that handle payment distribution. Paris-based Yacast is another tech company working in this space, as well as SoundHound Inc.’s Houndify, Google’s Sound Search, and others.
I’ve written about this before. Music played in venues (restaurants, nightclubs, hair salons, etc.) cannot be accurately tracked unless someone’s taking notes and submitting tracklists to the PROs. So, historically, the payments venues make to the PROs (mainly BMI and ASCAP here in the states) go into a pool. The top artists of that quarter — who the PROs assume are getting the most venue-play — receive payments from this pool. Of course, this is ludicrous — though there hasn’t been any other realistic solution — and causes frustration for the gothic club or the hipster coffeehouse that’s never playing ‘top artists.’
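To make the pool model concrete, here is a minimal sketch of the payout logic described above. All artist names, dollar amounts, and chart shares are invented for illustration; the point is simply that the pool is split by assumed chart performance, so what a niche venue actually played never enters the calculation.

```python
# Hypothetical illustration of the "pool" payout model described above.
# All names and amounts are made up for the sake of the example.

venue_fees = 100_000  # total license fees collected from venues this quarter

# The PRO assumes venue play mirrors chart performance, so the pool is
# split pro-rata by chart-derived weights, regardless of actual venue play.
assumed_chart_shares = {"Top Artist A": 0.5, "Top Artist B": 0.3, "Top Artist C": 0.2}

# What the gothic club or hipster coffeehouse actually played is never reported:
actual_venue_plays = {"Gothic Club Band": 400, "Indie Coffeehouse Act": 350}

payouts = {artist: venue_fees * share for artist, share in assumed_chart_shares.items()}
print(payouts)  # the chart artists split the entire pool
print({artist: 0 for artist in actual_venue_plays})  # the artists actually played get nothing
```

The frustration is baked right into the arithmetic: the second dictionary could hold any play counts at all and the payouts would not change.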
Shazam-like technology offers hope of solving the problem. With a device installed in venues, the music coming from the speakers can be monitored 24/7, with the info sent to the PROs. Theoretically (and realistically) that info is used to pay out accurate venue royalties to the artists receiving play.
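For the curious, here is a toy sketch of the recognition step, assuming a Shazam-style fingerprint-and-match approach. Real systems hash constellations of spectral peaks; this simplified stand-in just takes the dominant FFT bin per frame and matches a noisy snippet against a tiny “catalog” of pure tones. Every song title and signal here is synthetic.

```python
import numpy as np

def fingerprint(signal, frame=1024):
    """Toy fingerprint: dominant FFT bin per frame (a simplified stand-in
    for Shazam-style spectral-peak hashing)."""
    hashes = []
    for start in range(0, len(signal) - frame, frame):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        hashes.append(int(np.argmax(spectrum[1:])) + 1)  # skip the DC bin
    return hashes

def match(sample_fp, catalog):
    """Return the catalog title whose fingerprint overlaps the sample most."""
    def overlap(fp):
        return sum(1 for a, b in zip(sample_fp, fp) if a == b)
    return max(catalog, key=lambda title: overlap(catalog[title]))

# Build a tiny "catalog" of reference tones (stand-ins for real songs).
rate = 8000
t = np.arange(0, 2.0, 1.0 / rate)
catalog_audio = {
    "song_a": np.sin(2 * np.pi * 440 * t),  # A4
    "song_b": np.sin(2 * np.pi * 660 * t),  # E5
}
catalog = {title: fingerprint(audio) for title, audio in catalog_audio.items()}

# "Record" a noisy one-second snippet from the venue speakers and identify it.
snippet = catalog_audio["song_a"][:rate] + 0.1 * np.random.default_rng(0).normal(size=rate)
print(match(fingerprint(snippet), catalog))  # song_a
```

The matching survives the added noise because the dominant frequency bin in each frame is unchanged — the same robustness property that lets a real device identify music over clinking glasses and crowd chatter.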
A few countries and PROs in Europe are already testing this — PRS in the UK and GEMA in Germany are working to implement this technology — and it can’t come soon enough for the US and the rest of the world. However, most countries only have one performance rights organization, which makes it easy to select and install the device and have it report back to the appropriate party. The US is an outlier (go figure) in that there are technically four competing PROs. It may be a battle to get these companies to agree on a single device that will report data to each. I’m sure each fork of that data will need to be a private and trusted stream so, for example, BMI can’t see how ASCAP is faring. If they can’t agree then the untenable status quo may hold or — even sillier — venues may be asked to install a separate listening device for each PRO.
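The “private and trusted stream” idea can be sketched in a few lines: one monitoring device fans detected plays out to competing PROs, with each PRO receiving only the plays for works it administers. The song titles and affiliations below are invented for illustration; in practice, work registrations would determine the routing.

```python
# Hypothetical sketch: one device, four competing PROs, separate streams.
# Song-to-PRO affiliations are made up for the example.

affiliations = {
    "song_a": "ASCAP",
    "song_b": "BMI",
    "song_c": "SESAC",
    "song_d": "GMR",
}

detected_plays = ["song_a", "song_b", "song_a", "song_d"]

# Each PRO sees only the plays for works it administers, so BMI can't
# tell how ASCAP's catalog is faring, and vice versa.
streams = {}
for song in detected_plays:
    pro = affiliations[song]
    streams.setdefault(pro, []).append(song)

print(streams)  # {'ASCAP': ['song_a', 'song_a'], 'BMI': ['song_b'], 'GMR': ['song_d']}
```

The partitioning itself is trivial; the hard part, as noted above, is the business problem of getting four competitors to trust a single device to do it.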
The impact of virtual reality and A.I. on music over the next few years will be fascinating to watch. But, to be honest, I am a lot more curious to see how this song-tracking technology develops.