5 Q’s for Shiva Rajaraman, Vice President of Product at Spotify

by Paul MacDonnell

The Center for Data Innovation spoke with Shiva Rajaraman, vice president of product at Spotify, the music, video, and podcast streaming service based in Stockholm. Rajaraman discussed how Spotify uses algorithms to learn users’ tastes and how it could take advantage of crowdsourcing to improve its services.

This interview has been edited.

Paul MacDonnell: The number of Spotify users is now approaching 80 million, 20 million of whom use the paid version of the service. How has technology supported this growth since Spotify started in 2006?

Shiva Rajaraman: We first launched with just a subscription model offering access to a music library in the cloud on your computer. We’ve built on this in two ways. First, we’ve made advances in understanding listening preferences and in curating content, which let us learn what users like to listen to so we can recommend new music to them. At the same time, we’ve made Spotify available through new channels and in new locations. Mobile phones, smart TVs, and gaming consoles now carry the service; on PlayStation you can use it as a game soundtrack. Recently we’ve made it available on Amazon Echo, which lets users select the music or podcasts they want to hear with voice commands. It’s also available via the Starbucks app. Making the service available in different situations and contexts, and giving users more knowledge and control, is the direction we’ve been taking.

MacDonnell: Curation of music has become a hot topic in the past couple of years. What are the challenges of recommending new music to users?

Rajaraman: The challenge is to get the right balance between what’s new and what’s familiar. So, for example, our music recommendation service, Discover Weekly, relies on algorithms that look at other users’ listening preferences and at other curated playlists. It also uses our analytics technology, The Echo Nest, which pulls data from different websites to analyze information about music trends and artists. In the past, this was the kind of thing only a knowledgeable friend could do, but because Discover Weekly combines music knowledge from many other listeners, music fans, and professional curators, it is much more reliable at curating content. And its reliability improves over time as it learns from all of these sources and from how users’ listening preferences change.

Another challenge is understanding the context in which users listen to music. You may be playing a special playlist to help your young kids get to sleep at night, so we need to know that these kids’ songs are different from your own tastes. Our algorithms can pay attention to the time of day you use these playlists and whether you play them regularly at that time. We also notice the names you give them, such as “Kids’ Bedtime,” which helps us avoid basing recommendations to you on them.

MacDonnell: What new kinds of technology are available or are being developed to enhance the service? For example, you recently acquired the Soundwave social networking app, which helps users find and share music and connect with like-minded fans.

Rajaraman: There are two parts to this question: how much can we tap the crowd, and what can we learn algorithmically to improve? We have invested in machine learning tools to better understand users’ tastes. This does not replace our editorial curators, who input tags that make machine-generated recommendations better. In fact, the key input here is what our own human editors say about a piece of music. A question worth asking is whether we should be using just our own editors or whether we should crowdsource this information. For example, could we crowdsource the correction of music tagging errors, or add information about our content in the same way that Wikipedia was created by volunteers?

MacDonnell: While Spotify has simplified the front-end of music distribution, the back-end is still pretty old-fashioned and complex. What kind of services or tools do you provide for artists to make their lives easier?

Rajaraman: We provide artists with a dashboard that gives them tools to understand their fans and manage their own careers. So, for example, an artist can find out which of their tracks are most popular, who and where their most loyal fans are, and can plan performances using this information. Artists can also plan promotional campaigns based on this data, and can better communicate with fans and get valuable feedback about their likes and dislikes.

With podcasts, we’re able to measure how often and how soon users abandon a podcast and can provide feedback to help creators improve. For example, based on when users stop listening, we can recommend they improve the introduction of their podcast to encourage users to keep listening, much as a producer would advise the presenter of a radio show.

MacDonnell: Do you have a vision for what users will be able to do with Spotify in, say, 15 years? What will the experience be like and what technology will make it happen?

Rajaraman: First of all, there may not be a Spotify app. The technology may disappear into the background, into buildings or devices connected to the Internet of Things. It may be a voice-only user experience where you talk to the software in a connected device in your home, as we’re beginning to see with the Amazon Echo. So you could ask a device to play you music for dinner or breakfast time and it would find exactly what you want.

We are beginning to look at technologies that can select music based on individual context. For example, music could be played based on data from wearables that detect your heartbeat and, if desired, help you relax, sleep, or wake up.

There is a lot of potential to choose music for specific moments. This will create a thriving economy for music and content creators and will democratize the production and consumption of music. Movies could have personalized soundtracks, for example. I see an important role for personalized music in virtual reality (VR) content. Instead of recreating an experience, what if we could use VR to create another layer of experience on top of our everyday one? Instead of just experiencing a concert, could VR help you experience a concert on the moon with a million other people?
