Clouds, Streams, and Ground (Truths)
Developing Methods for Studying Algorithmic Music Ecosystems
March 7-8, 2026
University of California, Berkeley

Call for Proposals
We are pleased to announce a call for proposals for the conference “Clouds, Streams, and Ground (Truths): Developing Methods for Studying Algorithmic Music Ecosystems,” to be held at the University of California, Berkeley, March 7-8, 2026.
“Fail fast, fail forward” echoes throughout Silicon Valley. The phrase validates (and often financially rewards) companies that pursue rapid technological development over more considered approaches. But this future-oriented vision has also made the resulting systems difficult to study: like the metaphorical “stream,” they are constantly in flux. One consequence: digital music, streaming platforms, and cloud infrastructures have existed for decades, yet scholars lack a consensus on how to study them. Access to the past is often foreclosed by the relentless pursuit of digital futures.
Our aim is to bring together an interdisciplinary group of scholars, researchers in the music industry, and legal practitioners to discuss the challenges of studying these digital systems and to develop ways to make them more knowable. We are soliciting proposals for presentations (20 minutes, plus 10 minutes of discussion) from scholars in musicology, critical data studies, media studies, and related disciplines. Possible topics of interest include:
- How does metaphorical language like “clouds” and “streams” shape how we perceive the affordances of different music technologies?
- What kind of knowledge can we generate about these systems by taking a historical approach? An ethnographic one?
- Quantitative vs. qualitative: What can we learn about these systems by studying them at scale, and what can we learn from case studies?
- What are the implications of a rapidly changing political economy of music? Have we seen comparable economic shifts in the past?
- What can recent (or not so recent) litigation reveal about these companies or their technologies?
- What kind of musical data is publicly available, and what can we do with it?
- Most commercial music recommendation systems were developed in North American and European contexts. These systems were largely trained on popular music, but with an eye to universal applications. How might we mitigate the biases in this training data? Is a universal approach to music recommendation and generation even possible?
The conference will feature keynotes by Bob Sturm (KTH Royal Institute of Technology), Anna Huang (MIT), and Chris White (UMass Amherst), along with roundtables with researchers from the music industry and the legal sphere. We anticipate having financial support available to help defray travel and lodging costs for accepted participants, particularly graduate students and independent scholars.
If interested, please send proposals of 250 words to conference (at) algorithmicmusicmethods.com by 11:59 PM on August 22.
Program decisions will be announced no later than September 19.
Keynote Speakers

Bob L.T. Sturm (KTH)
I am the PI of the MUSAiC project (ERC-2019-COG No. 864189). Before becoming an Associate Professor of Computer Science at KTH in 2018, I was a Lecturer in Digital Media at the Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London. Before joining QMUL in 2014, I was a lektor (associate professor) at the Department of Architecture, Design and Media Technology, Aalborg University Copenhagen. Before joining AAU in 2010, I was a postdoc at “Lutheries – Acoustique – Musique” (LAM) at the Institut Jean le Rond d’Alembert, Paris 6. I received my PhD in Electrical and Computer Engineering in 2009 from the University of California, Santa Barbara.
Keywords: machine listening for music and audio, evaluation, music modeling and generation, ethics of AI, machine learning for music, digital signal processing for sound and music, folk music, Irish traditional music, English Morris dancing, Scandinavian folk music, accordion, caricature, painting
Chris White (UMass)
Christopher White is Associate Professor of Music Theory at the University of Massachusetts Amherst, having previously taught at The University of North Carolina at Greensboro and Harvard University. Chris received his PhD from Yale University and has also attended Queens College–CUNY, and Oberlin College Conservatory of Music.
Chris’s research uses big data and computational techniques to study how we hear and write music. He has published widely in venues such as Music Perception, Music Theory Online, and Music Theory Spectrum. His first book, The Music in the Data (Routledge, 2022), investigates how computer-aided research techniques can sharpen how we think about music’s structure and expressive content. His second book, The AI Music Problem (Routledge, 2025), outlines ways that music poses difficulties for contemporary generative AI. He has also published in popular press venues, including Slate, The Daily Beast, and The Chicago Tribune, on a wide range of topics, including music analysis, computational modeling, and artificial intelligence. Chris also remains an avid organist, performing and collaborating across New England.
Anna Huang (MIT)
In Fall 2024, I started a faculty position at the Massachusetts Institute of Technology (MIT), shared between Electrical Engineering and Computer Science (EECS) and Music and Theater Arts (MTA). For the eight years before that, I was a researcher at Magenta, in Google Brain and then Google DeepMind, working on generative models and interfaces to support human-AI partnerships in music making.
I am the creator of the ML model Coconet that powered Google’s first AI Doodle, the Bach Doodle. In two days, Coconet harmonized 55 million melodies from users around the world. In 2018, I created Music Transformer, a breakthrough in generating music with long-term structure, and the first successful adaptation of the transformer architecture to music. Our ICLR paper is currently the most cited paper in music generation.
I was a Canada CIFAR AI Chair at Mila, and I continue to hold an adjunct professorship at the University of Montreal. I was a judge and then an organizer for the AI Song Contest from 2020 to 2022. I did my PhD at Harvard University, my master’s at the MIT Media Lab, and a dual bachelor’s in music composition and computer science at the University of Southern California.
Sponsors
University of California, Berkeley Department of Music
UC Berkeley Townsend Center
IASPM-US
Organizers

Allison Jerzak
UC Berkeley
Allison Jerzak is a PhD candidate at the University of California, Berkeley, where she studies the history of digital music and music recommendation. Allison is also a keyboardist and currently plays harpsichord and organ for UC Berkeley’s Baroque Ensemble.

Ravi Krishnaswami
Brown University
Ravi Krishnaswami is a PhD candidate at Brown University researching AI and automation in music for media. He is a composer and sound designer for advertising, television, and games, and co-founder of the award-winning production company COPILOT Music + Sound. He plays guitar in The Smiths Tribute NYC and has studied sitar with Srinivas Reddy. He is the Valentine Visiting Assistant Professor of Music at Amherst College.