The White Noise Research
Sound or Music
Perception
Aaron Klenke, 649097, 31 March 2025
Humboldt University of Berlin
Faculty of Humanities and Social Sciences
Department of Musicology and Media Studies
Course: Einführung in die Systematische Musikwissenschaft (53482 Winter Term
2024/25)
Lecturer: Gina Emerson
Structure
Introduction
Contextualization, Methodologies and Literature Review
The (Im-)Probability of White Noise Being Perceived as Music
The Neuroscience of Perceiving Sound as Music
My Experience of Perceiving Sound as Music
An Infinitely Improbable Concept of Perceiving Sound as Music
An Outlook on Perceiving Sound as Music
Conclusion
References
Introduction
White noise is a random sound that contains all frequencies at equal intensity. It offers a
simple center for my contemporary research in musicology: since it contains all sounds, it
theoretically also contains music. Sound or Music? After writing down my general idea for
this extensive project in the first paper, this one examines perception. I investigate the
difference between sound and music by considering two articles.
The first one presents an argument originating in stochastics: there is a probability that
white noise could be perceived as music, since it theoretically contains every sound. Collins
(2024) calculates this to be more of an improbability: the chance of it happening is almost
infinitely low. He concludes that there is a vast amount of sound that is not perceived as
music. This is very interesting to me, as it offers a perspective on music being discovered
in continuously existing sound rather than being made.
The second article examines the neuroscience behind perceiving sound as music. It
explores sensing and processing sound, showing the plasticity of our brain and nervous
system and their connection to perception (Koelsch, 2011). Together, these two works suggest
that there is a vast amount of sound to connect with through our plasticity, understanding
music as the human connection to sound - discovering it through hearing and making it.
Contextualization, Methodologies and Literature Review
I want to quickly state the methodologies, which are tools for The White Noise Research.
At the center of this research are interdisciplinary approaches (Denhardt, 2005) and mixed-
method approaches (Tashakkori & Teddlie, 2022), combining qualitative (Denzin & Lincoln,
2024) and quantitative approaches (Kaplan, 2004). These contemporary methodologies allow
me to expand my simple idea of researching white noise. This aligns with my initial idea of
having such a simple center of research that many fields of study easily open up.
In this paper, the interdisciplinary approaches I employ allow me to bridge musicology,
stochastics, and neuroscience. The integration of multiple disciplines enables more
comprehensive research within musicology. It provides a framework for discussing topics
through a variety of lenses, offering a deeper understanding (Born, 2010). This
interdisciplinary perspective is essential for connecting intriguing findings in the stochastics
of white noise to the neurological mechanisms underlying the human ability to perceive sound
as music.
While Collins and Koelsch contribute quantitative findings to my research, sharing my
experience of perceiving sound as music is a qualitative - more precisely, an
autoethnographic - approach. It bridges the subjective and the academic, allowing for self-
reflection on my research (Adams et al., 2017). As a saxophonist and composer, I have always -
perhaps even unknowingly - explored my own plasticity, perceiving sound as music and
engaging in a complex connection to sound, and by extension, to music.
Collins’ research highlights the rarity of music within sound and raises the question of
how much sound is still left to be perceived as music. Koelsch examines the processes of
perceiving sound as music. By discussing these two articles, this paper explores the
distinction between sound and music, examining our perception.
The (Im-)Probability of White Noise Being Perceived as Music
In his paper, Collins presents an intriguing idea. He calculates how likely it is for white
noise to form a music-like sound. His results and the conditions he sets for his stochastic
calculation provide interesting insights into the difference between sound and music and how
this distinction is defined by our perception. He starts off by explaining that a white noise
signal can have any possible configuration of values. This is determined by the stochastic
properties inherent to the signal. Theoretically it contains every possible sound. Simply put,
this means that white noise could, at any time, sound like music - for example, like an existing
song.
He further explains that, statistically, over many samples, the signal tends toward a
uniform spectral distribution. This is what we hear as a static, constant sound. Deviations
from this are highly unlikely, although theoretically possible. To make
a stochastic calculation following his question of how likely it is for white noise to sound like
music, Collins necessarily has to define the difference between two outcomes. One of these is
not music - in the words of this paper, sound not perceived as music - and the second outcome
is sound perceived as music.
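Collins' statistical point can be illustrated with a small sketch. The following is my own toy illustration, not Collins' actual procedure (the Gaussian distribution and signal length are my assumptions): it generates one realization of white noise and compares the average spectral magnitude in the lower and upper halves of the frequency band, which come out roughly equal.

```python
import numpy as np

rng = np.random.default_rng(0)

# One realization of white noise: independent Gaussian samples.
noise = rng.normal(0.0, 1.0, 2**16)

# Magnitude spectrum of the signal.
spectrum = np.abs(np.fft.rfft(noise))

# Averaged over many frequency bins, the spectrum is roughly flat:
half = len(spectrum) // 2
low_band = spectrum[:half].mean()
high_band = spectrum[half:].mean()
ratio = low_band / high_band  # close to 1 for white noise
```

Any single realization fluctuates from bin to bin; only the average over many bins - over many samples, as Collins puts it - is uniform.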
Beyond the results of his research, this part is particularly interesting to me, since what
Collins defines as parameters for these two outcomes provides insights into the qualities of
the two differentiated sound categories. In other words, the parameters of music and non-
music.
Collins focuses on two necessary conditions observed in musical audio signals to
establish this distinction. He introduces the zero crossing rate (ZCR). The ZCR of an audio
signal measures how often the waveform crosses the zero amplitude line within a given time
frame. A low crossing rate is associated with a more structured sound with fewer chaotic
oscillations, whereas higher rates are linked to more chaotic sounds. Secondly, he describes
that when signals move to nearby sample values more often, they are more continuous.
Collins states that sounds with low zero crossing rates and high proximate movement are
typically found in examples of what we perceive as music.
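The ZCR is straightforward to compute. In the sketch below (my own illustration; the sample rate, tone frequency, and uniform noise distribution are assumptions, not taken from Collins), white noise changes sign on roughly half of all sample steps, while a sine tone, as a highly structured periodic signal, crosses zero far less often.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44100  # assumed sample rate in Hz

def zero_crossing_rate(signal):
    # Fraction of successive sample pairs whose signs differ.
    signs = np.sign(signal)
    return float(np.mean(signs[:-1] != signs[1:]))

# White noise: independent uniform samples in [-1, 1].
noise = rng.uniform(-1.0, 1.0, sr)

# A 440 Hz sine tone: a highly structured periodic signal.
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

zcr_noise = zero_crossing_rate(noise)  # around 0.5
zcr_tone = zero_crossing_rate(tone)    # around 0.02 (two crossings per cycle)
```

The gap between the two values - roughly a factor of twenty-five here - is exactly the kind of structural difference Collins uses to separate music-like signals from noise.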
This reveals fundamental acoustic elements that define music as structured sounds.
However, it also shows that sound perceived as music is not necessarily structured in one
specific way but rather exists on a spectrum of structure - more or less structured, or simply
structured differently. What remains are the two extremes of structure and randomness.
The results of his calculations show that the amount of sounds perceived as music -
concrete sounds that meet the conditions stated earlier - is almost infinitely low. He reached
this conclusion by calculating the probability of white noise forming a signal that meets these
conditions - thus forming music. Since white noise is defined as a signal containing all
possible sounds, his second key finding is that music constitutes only a vanishingly small
fraction of all possible sounds.
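The flavor of such a calculation can be shown with a toy version - my own construction, not Collins' actual model; the bit depth, duration, independence assumption, and "proximity" threshold d are all assumptions. If each of the roughly 44,100 successive steps in one second of 16-bit audio must independently stay within d quantization levels of its neighbor, the log-probability collapses to an astronomically negative number.

```python
from math import log10

# Toy model, not Collins' calculation: 16-bit samples, one second
# at 44.1 kHz, each step independent and uniformly distributed.
levels = 2**16   # number of possible 16-bit sample values
d = 1000         # assumed "proximate movement" threshold, in levels
samples = 44100  # one second of audio

p_step = (2 * d + 1) / levels          # chance one step stays proximate
log_p = (samples - 1) * log10(p_step)  # log10 of all steps doing so
# log_p is on the order of -66000: a probability far smaller than
# anything physically meaningful, echoing Collins' rarity result.
```

Even with a generous threshold, structured signals occupy a vanishing corner of the space of all possible signals.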
Following Collins’ research, I argue that by expanding our perception and processing of
structures in sound, we would expand our definition of music - perceiving, and through this
naming, more sounds as music. We would discover more sounds to be music. Collins provides
an interesting outlook on the role of AI-generated sounds, considering how much of the space
of possible music AI could potentially populate. This suggests that by engaging with AI, we
might expand our own perception.
The Neuroscience of Perceiving Sound as Music
In his paper, Koelsch reviews a large body of research on the neuroscience of our
connection to sound and of perceiving it as music. His paper shows how important, inherent,
and involuntary sensing sound is for our system. It shows that seeking a connection to sound
is an automatic process of our system, which analyzes, decodes, organizes, connects, and
memorizes qualities of sound. The article presents music as the result of a natural and
inherent human connection to sound. An important point is the close kinship of music and
language, both being results of human auditory sensing, processing, and creating.
Additionally, Koelsch shows that this connection is not static; he discusses its
plasticity. Neuroplasticity refers to the brain's ability to form new neural connections.
While perceiving sound, our brain automatically reorganizes and structures itself
differently. These adaptive changes can enable someone to "hear" music where others might
not. Koelsch demonstrates these differences in sound perception by reviewing various
experiments. He explains that mismatch negativity (MMN) studies have demonstrated the impact
of long-term musical training on various aspects of sound perception, including pitch
discrimination of chords, temporal acuity, the temporal window of integration, changes in
sound localization, and the detection of spatially peripheral sounds. One MMN study using
MEG even revealed effects of just three weeks of training. Sound perception emerges from a
combination of bottom-up auditory processing and top-down cognitive, emotional, and
predictive mechanisms. We involuntarily perceive, analyze, organize, predict, and emotionally
engage with sound, transforming it into music. Since Koelsch describes music as similar
to language, one could say that the more we engage with sound, the more fluent we become in
the language of music. If we connect more with sound, this connection becomes more complex
and deeper.
My Experience of Perceiving Sound as Music
My perception of sound as music has been deeply shaped by my long engagement with
it. One of my earliest connections to music comes from listening to the radio on top of our
fridge. My father turned it on every morning, and it would run the whole day with seemingly
no one listening to it. This was one of the few musical sounds in our house, alongside a small
collection of my mother’s records. This connection deepened as I first learned the recorder
and later the saxophone, an instrument I have played for many years now.
Practicing the saxophone - playing written music, unwritten music, or improvising -
became central to my connection to sound. My teacher introduced me to jazz, which became
very important to me for years. Jazz offered me a vast range of ways to discover sound in a
passionate environment. Through this, I realized that engaging with sound more deeply, and
practicing better and more, is tied to getting better on my instrument.
Unknowingly, I was making use of the plasticity referred to earlier. Practicing music, and
specifically an instrument, is mainly driven by this plasticity. It involves not only refining
finger movements but also engaging the entire body to produce sound. Over time, my
perception of musical structures and my ability to create them became increasingly precise
through deliberate and specifically designed practice. Transcribing music and embodying the
transcribed material to improve as a musician was a crucial part of my saxophone studies and
remains essential today. This process expands my musical language and opens new
possibilities.
When I practice, I engage in an active process of listening and mimicking. This requires
attuning myself to the structures of sound, like pitch and its relations to harmony, rhythm,
timbre, and dynamics. Through repetition, my brain learns to recognize subtle variations,
reinforcing my ability to distinguish musical qualities within initially unknown sound
environments. Ear training plays a crucial role in this, shaping my brain’s predictive
capacities and allowing me to analyze and internalize sounds. Ear training is immensely
important for hearing more precisely and imagining music.
Beyond playing an instrument, I have always been interested in other aspects of sound.
Composing, expanding, and developing my ability to express the connection to sound - for
me, this is connected to playing an instrument and cannot be separated. The process of
composing feels more like an ongoing search in sound to me rather than inventing music.
While composition is often viewed as an act of creation, I also experience it as an act of
discovery. It involves listening for music within sound, selecting elements that shape what I
am looking for. In this sense, composing is like exploring something unknown. Its forms and
structures reveal themselves through attention and intuition. Even the word "composing" itself
captures this process: it is about bringing elements together to find music within sound.
Studying music theory, sociocultural contexts, and now musicology has further deepened my
understanding, allowing me to keep expanding while always staying connected to the center:
music.
An Infinitely Improbable Concept of Perceiving Sound as Music
Through Collins, Koelsch, and my own experience, it becomes clear that our perception
of sound is what defines music. Music is not an inherent property of sound but rather a
product of human connection to sound, shaped by our neurological processes and
sociocultural contexts - in other words, by our perception. Since both neurological processes
and sociocultural contexts are not static but plastic, our perception is as well. The future may
hold possibilities for expanding our perception in many ways. Through evolution, we might
develop new ways of hearing, extracting structure, and finding meaning in sound. Brain-
computer interfaces, machine learning models, and neuroplasticity research could contribute
to this transformation, allowing individuals to perceive sounds as music that previously were
not.
In an extreme thought experiment, white noise - a fully randomized sum of sound,
currently perceived as undifferentiated chaos - could become an infinite reservoir of music. In
this way, white noise is the sonic equivalent of visual white, which is the sum of all colors and
the color of light - an abstract theoretical extreme, impossible to fully realize. Philosophically
and spiritually, this extreme might connect to ideas of transcendence. The notion of music as
an endeavor to transcend is not new, but approaching it from this perspective might shift our
understanding of music’s meaning. In this scenario, everyone would be a musician, much like
Joseph Beuys’ statement that everyone is an artist. These musicians would discover music in
sound rather than simply listening to or creating music.
Ultimately, this thought experiment understands music as a human process of making
sense of sound - whether rationally, emotionally, culturally, scientifically, or even
transcendentally. Music, in this way, is the human interaction with sound, the human
exploration of sound, and therefore immensely important to our lives. It has been so since the
beginning of humanity and remains so today. Although often seen as art or entertainment,
music is much more than that.
This thought experiment raises extreme, almost ideological or fanatical questions: Can
we cultivate a world where everything is music? Can we find a connection to every sound,
fully exploring the world of sound? Could the distinction between sound and music dissolve,
leading to every sound being perceived as music? Could we fully transcend into sound itself?
While this remains a thought experiment - and in reality, not everything is music - the
possibility of expanding our perception offers a different perspective on what music is and
what it could become.
An Outlook on Perceiving Sound as Music
While this paper focused on perception from a stochastic and neuroscientific standpoint,
discussing inherent features of sound perceived as music, the next step could shift toward the
sociocultural contexts that shape and constrain what we perceive as music. The reasons why
we do not hear all sound as music also lie in sociocultural contexts. History, economy, power
structures, and many more define music. A better understanding of neurological limitations is
connected not only to inherent biology but is also shaped by sociocultural factors, and vice
versa, since everything is interconnected.

The avant-garde has long challenged the boundaries of music, yet its survival often depends
on economic systems that resist its innovations. Capitalism thrives on commodification,
making experimental, disruptive, or politically charged music difficult to sustain.
Historically, musical movements that diverged from dominant tastes struggled for
institutional and financial support, reinforcing the paradox that true progress in music is
often unpopular because of its lack of economic value. The same forces that claim to support
artistic growth - grants, funding bodies, academia - often impose their own limitations,
privileging accessibility over radical innovation.

This leads to a broader critique: the limitations placed on music are not just economic but
ideological. Music as a field has been shaped by historical structures of patriarchy,
heteronormativity, and colonialism, defining who gets to be heard, whose music is preserved,
and which methodologies are deemed valid. Feminist, queer, and anti-capitalist approaches
offer ways to break from these restrictions, creating spaces where new music is not filtered
through profitability, but through collective meaning and alternative value systems.

To evolve music, the future could bring decentralization - rejecting institutional
gatekeeping in favor of community-driven practices, open-access resources, and alternative
economies of exchange. By moving away from the capitalist model that demands music be a
product rather than an evolving, shared practice, new possibilities could emerge. These
methodologies do not just expand music's sonic possibilities, but its very function in
society, enabling more freedom in life through an expanded perception.
Conclusion
This exploration of perception has revealed that music is perceived in sound by us, rather
than being an inherent quality of it. Collins demonstrates that while white noise theoretically
contains all possible sounds, the likelihood of it being perceived as music is infinitely small.
This result, along with the conditions defining music that he uses for his calculation, shows
that music is a question of hearing - of our perception, both neurologically and
socioculturally. Koelsch’s research further supports this, illustrating how the brain actively
and plastically engages with sound. This is not a fixed process, which means our perception -
and, by extension, music - can be expanded. This aligns with my autoethnographic
reflections: through musical practice, one experiences the plasticity of perception. The
extreme thought that random chaos - white noise - could be perceived as music, both
neurologically and socioculturally, might seem abstract or a step too far, but it serves as
an opportunity for a different perspective on what music is and what it means for us humans.
Ultimately, this study explored the distinctions between sound not perceived as music
and music, advocating for a broader understanding of perception. Perception is shaped and
not static, which means we are not static but continuously evolving. Music is not fixed but
evolving and inherently connected to everything we are as humans. By engaging with white
noise as a research tool, I examined the difference between sound and music. My answer is
simple: Perception.
References
Adams, T. E., Ellis, C., & Holman Jones, S. (2017). Autoethnography. In J. Matthes
(Ed.), The International Encyclopedia of Communication Research Methods. John Wiley & Sons. https://doi.org/10.1002/9781118901731.iecrm0011
Born, G. (2010). For a relational musicology: Music and interdisciplinarity, beyond the
practice turn: The 2007 Dent Medal address. Journal of the Royal Musical Association,
135(2), 205-243. https://doi.org/10.1080/02690403.2010.506265
Collins, N. (2024). The rarity of musical audio signals within the space of possible audio
generation. arXiv.
Denhardt, R. A. (2005). Handbook of interdisciplinary research. SAGE Publications.
Denzin, N. K., & Lincoln, Y. S. (2024). The SAGE handbook of qualitative research (6th
ed.). SAGE Publications.
Kaplan, D. (2004). The SAGE handbook of quantitative methodology for the social
sciences. SAGE Publications.
Koelsch, S. (2011). Toward a neural basis of music perception – A review and updated
model. Frontiers in Psychology, 2, 1–20. https://doi.org/10.3389/fpsyg.2011.00278
Tashakkori, A., & Teddlie, C. (2022). The SAGE handbook of mixed methods in social
& behavioral research (3rd ed.). SAGE Publications.