Hearing Seminars
CCRMA hosts a weekly Hearing Seminar (aka Music 319). All areas related to perception are discussed, but the group emphasizes topics that help us understand how the auditory system works. Speakers are drawn from the group and from visitors to the Stanford area. Most attendees are graduate students, faculty, or local researchers interested in psychology, music, engineering, neurophysiology, and linguistics. Stanford students can optionally receive credit for attending by enrolling in Music 319, "Research Seminar on Computational Models of Sound Perception." Meetings are usually held from 10:30AM to 12:20PM (or so, depending on questions) on Friday mornings in the CCRMA Seminar Room.
The current schedule is announced via a mailing list. To subscribe, please visit https://cm-mail.stanford.edu/mailman/listinfo/hearing-seminar. If you have any questions, please contact Malcolm Slaney at hearing-seminar-admin@ccrma.stanford.edu.
Upcoming Hearing Seminars
Jill Kries - How the brain encodes speech and language with aging and aphasia
Date: Fri, 01/17/2025, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Free and open to the public.
Recent Hearing Seminars
Nat Condit-Schultz on Tempo, Tactus, Rhythm, Flow: Computational Hip Hop Musicology in Theory and Practice
Date: Fri, 11/15/2024, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Computational musicology is not just for classical music. In this talk, I will review a variety of computational investigations of hip hop based on my dataset, the Musical Corpus of Flow (MCFlow). Using MCFlow, we can characterize the "norms" of rap flow and investigate how they have changed historically, including changes in tempo and the density of rhyme over time. I will also discuss the methodological challenges of computational musicology in general, and of hip-hop/pop musicology specifically, and demonstrate tools and methods I have developed.
Gerald Schuller: Perceptual and higher-level loss and distance functions for machine learning in audio and acoustics
Date: Thu, 10/31/2024, 3:30pm - 5:00pm
Location: CCRMA Classroom / Zoom
Event Type: Hearing Seminar
Prof. Gerald Schuller will report on the potentially transformative role of perceptual loss functions and distance metrics in enhancing audio and acoustic machine learning models, and their applications. He will cover the theoretical foundations of perceptual loss functions, which mimic human auditory perception, as well as more abstract, higher-level representations, and explore how these functions, along with novel distance metrics, significantly improve the performance of audio processing tasks. Applications involving loss functions for room impulse responses, audio similarity, and audio representations for cochlear implants will be discussed.
Prof. Marina Bosi will be hosting his visit.
Join us on Zoom if you cannot make it in person!
Measuring Acoustic Transfer Functions - Swapan Gandhi and Juan Sierra (Meyer Sound)
Date: Fri, 10/18/2024, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
We often want to characterize an audio system, but we might not have access to the underlying input signal. This occurs in common devices that have their own clocks and unknown latency. Our friends at Meyer Sound have this problem and are proposing a solution based on a "virtual reference." Grounded in their experience measuring and tuning room acoustics, they will share with us the details of their method and give a live demonstration of its use to characterize some popular consumer audio devices.
Who: Swapan Gandhi and Juan Sierra (Meyer Sound)
What: Transfer Function Measurements When the Reference Signal is Known but not Accessible
When: Friday, October 18 at 10:30AM
Prof. Dan Bowling - Music for Mental Health
Date: Fri, 10/11/2024, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
I'm happy to welcome a new faculty member, Dr. Dan Bowling, to Stanford and the Hearing Seminar. He'll be talking about his research on music and health at the next Hearing Seminar. Please join us.
Who: Dr. Dan Bowling, Stanford Psychiatry's Division of Interdisciplinary Brain Sciences
What: Music and Health: Biological Foundations and Applications
When: Friday October 11th at 10:30AM
Where: CCRMA Seminar Room, Top Floor of the Knoll at Stanford
Purnima Kamath on Generative Models for Sound Design
Date: Fri, 09/13/2024, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Large language models (LLMs) such as ChatGPT are making striking changes to how we think about words and intelligence. Generative models (https://developers.google.com/machine-learning/gan/generative) take these ideas a step further by creating new data from a text prompt. Can an LLM and a generative model create new kinds of sounds? It is easy to imagine a system that lets you generate dog sounds, for example. But how would you build a system that lets you ask for a dog sound with a touch of wolf? With steerability, or morphing, the sound landscapes become much more interesting. Can we control a generative model to make both big and small changes to the sound we generate?
Demo of Personalized 3D Sound System
Date: Fri, 08/16/2024, 12:00pm - 6:00pm
Location: DoubleTree Hotel, 275 South Airport Blvd, South San Francisco, California
Event Type: Hearing Seminar
Leslie Famularo on Differentiating and Optimizing an Auditory Model
Date: Fri, 08/09/2024, 12:00am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
One of the shortcomings of current AI work is the inability to tie the results back to known physics. Doing so is useful both to help explain the results and to constrain the optimal solution to the known physical properties of the system. Neural networks are hard: they are big, and often the results are inscrutable. What can be done?
New software paradigms such as JAX and PyTorch allow one to specify arbitrary computations in a way that can be differentiated. And if we can differentiate a function, we can optimize it. Hurray. How can we express an auditory model in a differentiable fashion?
Robert L. White's Cochlear Implants - Repeat Seminar
Date: Tue, 07/02/2024, 4:00pm - 5:30pm
Event Type: Hearing Seminar
Cochlear implants (CIs) are amazing. Squirt a little current into a cochlea and you hear a buzzing sound. It is even more amazing that the right currents sound like speech. What was it like to first convey speech to new cochlear implant users?
This is a repeat of the May 31 seminar, for those wishing to join from another time zone. It will be online only and recorded. The recording is available on YouTube at this URL: https://www.youtube.com/watch?v=9hoY24bVTZw
Robert L. White's Cochlear Implants
Date: Fri, 05/31/2024, 10:30am - 12:00pm
Location: Biomedical Innovations Building, BMI 1021, 240 Pasteur Drive, Stanford, CA
Event Type: Hearing Seminar
Join us for a special Stanford Hearing Seminar on the invention of the cochlear implant speech processor: May 31st at 10:30AM in Stanford BMI 1021.
Senyuan Fan on Exploring Implicit Neural Audio Representation
Date: Fri, 05/24/2024, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
This week at the CCRMA Hearing Seminar, Senyuan Fan and Prof. Marina Bosi claim that implicit neural audio representations require less training data and achieve higher compression rates than other approaches. How do they do that?
Who: Senyuan Fan and Marina Bosi
What: Exploring Neural Audio Coding Methods