Seminars

Past talks

Speaker: Dr. Carolina Brum Medeiros (Fliprl CEO, IDMIL/McGill, Google ATAP)
Date and time: Tuesday, September 6, 2016 - 15:00
Place: Room 132-A, IME/USP
Abstract: In the past decade, various consumer electronic devices have been launched as gestural controllers, several of which have been used for musical expression. Despite the variety of these devices, academic and industrial institutions continue to research and develop new ones every so often. Why? In this conversation, I’d like to open a discussion about the reasons why we are not satisfied with existing gestural controllers: Natural human unsettledness? Consumerism and the market? Technological evolution, allowing for the creation of more efficient devices? The search for new means of expression? Or maybe we are aiming to abstract away established physical objects and structures? We are going to discuss and review some new gestural controllers, based on readings from the following authors: Marcelo Wanderley, Alva Noë, Ivan Poupyrev, Oliver Sacks, John Milton, and Ana Solodkin.


(Video presentation in Portuguese)

Speaker: Ivan Eiji Simurra
Date and time: Wednesday, June 1, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract: In this seminar we will present an overview of research relating sound perception to the verbal correlates used to describe instrumental timbres. In our presentation we will contrast three works by Asterios Zacharakis ("An Investigation of Musical Timbre", "An Interlanguage Study of Musical Timbre Semantic Dimensions and Their Acoustic Correlates", and "An Interlanguage Unification of Musical Timbre: Bridging Semantic, Perceptual and Acoustic Dimensions") with two works by Vinoo Alluri ("Effect of Enculturation on the Semantic and Acoustic Correlates of Polyphonic Timbre" and "Exploring Perceptual and Acoustical Correlates of Polyphonic Timbre"). Our goal is to highlight the characteristics of each study and how they can inform our own research on timbre and emotions.


(Video presentation in Portuguese)

Speaker: Thilo Koch
Date and time: Wednesday, May 18, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract: In our daily lives we are exposed to many kinds of sound: traffic noise, people talking, crowds, music, etc. Consequently, awareness of audio quality and its perception is increasing. Although everyone has some individual understanding of what audio quality is, research aimed at quantifying perceived audio quality is a relatively new scientific field. Objective quantification of perceived audio quality is a complex topic involving a number of technical and scientific issues, from audio recording, signal processing, and room acoustics to statistics and experimental psychology. In this seminar we will give an introduction to audio quality evaluation, an overview of how experiments are planned and executed, and a look at how their results are analyzed and interpreted.


(Video presentation in Portuguese)

Speaker: Rodrigo Borges
Date and time: Wednesday, May 4, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract:

Music Recommender Systems are computational techniques for suggesting music to a specific user according to their personal interests. They operate over large collections of music files and, depending on the information available as input, may apply Collaborative Filtering, Context-Based, or Content-Based approaches.

Collaborative Filtering makes recommendations to the current user based on items that other users with similar tastes liked in the past. Contextual Music Recommendation refers to the situation of the user when listening to the recommended tracks (e.g., time, mood, current activity, the presence of other people). Music Content can be understood as musical features computed directly from audio, or as semantic features inferred or predicted by machine learning techniques.
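The collaborative-filtering idea above can be sketched in a few lines. This is not code from the talk; the ratings data and function names are hypothetical, and real systems use far larger matrices and more robust similarity measures:

```javascript
// Minimal user-based collaborative filtering sketch.
// Hypothetical ratings: each user maps track names to scores (1-5).
const ratings = {
  ana:   { trackA: 5, trackB: 3, trackC: 4 },
  bruno: { trackA: 4, trackB: 3, trackC: 5, trackD: 4 },
  clara: { trackA: 1, trackB: 5, trackD: 2 },
};

// Cosine similarity computed over the tracks both users have rated.
function similarity(u, v) {
  const shared = Object.keys(u).filter(t => t in v);
  if (shared.length === 0) return 0;
  let dot = 0, nu = 0, nv = 0;
  for (const t of shared) {
    dot += u[t] * v[t];
    nu  += u[t] ** 2;
    nv  += v[t] ** 2;
  }
  return dot / (Math.sqrt(nu) * Math.sqrt(nv));
}

// Score tracks the target user has not rated yet, weighting each
// other user's ratings by how similar their taste is to the target's.
function recommend(target, allRatings) {
  const scores = {};
  for (const [user, theirs] of Object.entries(allRatings)) {
    if (user === target) continue;
    const sim = similarity(allRatings[target], theirs);
    for (const [track, score] of Object.entries(theirs)) {
      if (track in allRatings[target]) continue;
      scores[track] = (scores[track] || 0) + sim * score;
    }
  }
  return Object.entries(scores)
    .sort((a, b) => b[1] - a[1])
    .map(([track]) => track);
}

console.log(recommend('ana', ratings)); // trackD, rated by bruno and clara
```

Context- and content-based approaches would replace the ratings matrix with situational metadata or with audio features, but the scoring-and-ranking skeleton stays the same.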

Unlike recommender systems for other domains, such as books, movies, or news, music recommenders have specific characteristics: they allow recommendation of repeated items, and consumption time is comparatively fast. This leads us to differentiate between parallel (album) and serial (playlist) recommendation.

Finally, preliminary feature extraction results are presented, computed over a temporary database of popular Brazilian music.
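As an illustration of the kind of low-level features such an extraction stage computes, here is a sketch of two classic descriptors over a raw sample buffer. The synthetic sine signal is only an example; the talk's actual feature set is not specified here:

```javascript
// Synthetic test signal: one second of a 440 Hz sine at 8 kHz.
const sampleRate = 8000;
const freq = 440;
const samples = Array.from({ length: sampleRate }, (_, n) =>
  Math.sin(2 * Math.PI * freq * n / sampleRate));

// Root-mean-square energy: a rough measure of loudness.
function rms(buf) {
  return Math.sqrt(buf.reduce((s, x) => s + x * x, 0) / buf.length);
}

// Zero-crossing rate: sign changes per sample, a crude correlate
// of noisiness and spectral brightness.
function zeroCrossingRate(buf) {
  let crossings = 0;
  for (let i = 1; i < buf.length; i++) {
    if ((buf[i - 1] >= 0) !== (buf[i] >= 0)) crossings++;
  }
  return crossings / buf.length;
}

console.log(rms(samples));              // ~0.707 for a full-scale sine
console.log(zeroCrossingRate(samples)); // ~2 * 440 / 8000 = 0.110
```

Content-based recommenders typically stack many such descriptors (plus spectral and timbral ones) into a feature vector per track and compare tracks in that space.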


(Video presentation in Portuguese)

Speaker: Fábio Goródscy
Date and time: Wednesday, April 6, 2016 - 12:00
Place: CCSL Auditorium
Abstract:

A tutorial about the Web Audio API.

We present some simple examples explaining how the Web Audio API works and how its structure is defined. The examples are available at: https://github.com/fabiogoro/webaudio

During the presentation we show:

  • How to create Web Audio objects
  • How to use an oscillator to create sounds at different frequencies
  • How to control the volume of the sound and its waveform
  • How to combine all of this to build a piano in the browser
  • And, finally, how to use peer.js or PubNub to exchange Web Audio information between browsers

A live version of the tutorial is online at: http://webaudiotutorial.herokuapp.com/
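The first steps above can be sketched as follows. This is not code from the tutorial repository; `midiToFreq` is a hypothetical helper (equal temperament, A4 = 440 Hz), and the Web Audio calls are guarded because `AudioContext` only exists in a browser:

```javascript
// Convert a MIDI note number to a frequency in Hz
// (equal temperament, A4 = note 69 = 440 Hz).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

if (typeof AudioContext !== 'undefined') {
  const ctx  = new AudioContext();        // 1. create the Web Audio context
  const osc  = ctx.createOscillator();    // 2. an oscillator source node
  const gain = ctx.createGain();          // 3. a gain node for volume control

  osc.type = 'square';                    // waveform: sine, square, sawtooth, triangle
  osc.frequency.value = midiToFreq(60);   // middle C, ~261.63 Hz
  gain.gain.value = 0.2;                  // keep the volume low

  // Audio graph: oscillator -> gain -> speakers.
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.5);        // play for half a second
}
```

A browser piano, as in the tutorial, boils down to creating one such oscillator per key press with `midiToFreq` of the corresponding note.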

