To many people, the vast majority of the 500 million tweets sent per day are nothing but a lot of noise. To some people, however, they represent the basis for music. Scott Lindroth, a professor of music at Duke University, “sonified” tweets that mentioned “osama” shortly after Osama bin Laden was killed in 2011. The rhythms and pitches were generated by algorithms, and accompanying marimba sounds were composed to complement the auto-generated music.
Composer Peter Gregson and developer Daniel Jones created The Listening Machine, which generated a continuous stream of music from May 2012 to January 2013 based on the tweets of 500 different people in the U.K. Their algorithms translated the topics, emotions and tones of the tweets into music.
In 2012, Kingsley Ash created a sonification of Twitter music trend data, called Affected States. Reviews and news about musicians trending on Twitter were gathered and analyzed for emotion, energy and quality. Trending artists with higher quality and energy scores generated sounds with higher pitches and longer durations.
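The mapping described above can be sketched roughly as follows. This is only an illustration, not Affected States' actual code: the score names, ranges and the decision to tie quality to pitch and energy to duration are all assumptions.

```python
# Illustrative sketch: map 0-1 "quality" and "energy" scores (hypothetical
# names and ranges, not taken from Affected States) to a note's pitch and
# duration, so higher scores yield higher pitches and longer notes.

def trend_to_sound(quality, energy):
    """Return a (pitch_hz, duration_s) pair for a trending artist's scores."""
    pitch_hz = 220 + quality * 660      # higher quality -> higher pitch
    duration_s = 0.25 + energy * 1.75   # higher energy -> longer duration
    return pitch_hz, duration_s

print(trend_to_sound(0.0, 0.0))  # quietest end of both ranges
print(trend_to_sound(1.0, 1.0))  # highest pitch, longest note
```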
Sam Harmon’s Twinthesism, a Twitter-powered synthesizer for Macs, is “an attempt to sonify the human randomness being generated on the service.” Every 30 seconds a tweet is fetched and its individual letters are converted to numbers, which the synthesizer uses to generate sound. Users can make their own original music by using faders to mix the sounds. If you’re happy with what you produce, you should, of course, tweet it out to the world.
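The letter-to-number step could work along these lines. Twinthesism's actual mapping isn't documented here, so this sketch simply uses each letter's alphabet position and scales it into a frequency; the base and step values are made up for illustration.

```python
# Hypothetical sketch of converting a tweet's letters into frequencies a
# synthesizer could play. The mapping (alphabet position scaled linearly
# into Hz) is an assumption, not Twinthesism's real algorithm.

def tweet_to_frequencies(tweet, base_hz=110.0, step_hz=25.0):
    """Map each letter (a=1 .. z=26) to a frequency in Hz; skip non-letters."""
    numbers = [ord(c) - ord("a") + 1 for c in tweet.lower() if c.isalpha()]
    return [base_hz + n * step_hz for n in numbers]

print(tweet_to_frequencies("Hi"))  # [310.0, 335.0]
```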