Popularity indices are instruments traditionally used in the music industry to compare artists. Despite their wide acceptance, little is known about the methodology behind such indices, which at times makes them difficult to understand and/or criticize.
In this seminar, we will present the method and the main difficulties faced in the construction of the Playax index, an index with a transparent methodology that aggregates popularity signals derived from heterogeneous data sources (e.g. radio, Spotify, YouTube, Instagram, Twitter, etc.) into a single number. In particular, we will discuss the solutions and trade-offs adopted when reconciling contradictory signals that capture aspects of different dimensions of the popularity phenomenon across distinct media. Our hope with this work is to start a healthy process of discussion and criticism that eventually leads to a practical, coherent, and transparent instrument for comparing artists, and to other kinds of collaboration related to music-context data.
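To give a rough flavor of the kind of aggregation such an index involves (the actual Playax methodology is the subject of the seminar and is not reproduced here), the minimal sketch below normalizes each source signal to a common scale and combines them with a weighted sum; all source names, figures, and weights are hypothetical.

```python
import numpy as np

def zscore(x):
    """Standardize a signal so sources with very different scales become comparable."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical weekly popularity signals for four artists, one array per source.
# Raw scales differ wildly (radio plays vs. streaming counts vs. video views).
signals = {
    "radio":   np.array([120, 40, 5, 300]),
    "spotify": np.array([2.0e6, 8.0e5, 1.0e4, 5.0e6]),
    "youtube": np.array([9.0e5, 3.0e6, 2.0e4, 7.0e6]),
}

# Hypothetical weights expressing how much each medium should count.
weights = {"radio": 0.3, "spotify": 0.4, "youtube": 0.3}

# Aggregate: normalize each source, then take the weighted sum per artist.
index = sum(w * zscore(signals[src]) for src, w in weights.items())
print(index)  # one comparable number per artist
```

Even this toy version exposes the trade-offs the abstract alludes to: the choice of normalization and of weights already encodes a stance on which medium "matters" when sources disagree.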
Since 1957, computers have been used for synthesizing and processing audio signals. Some of the first techniques used were additive synthesis, AM/FM synthesis, and subtractive synthesis. An alternative to these techniques is physical modeling, which uses mathematical descriptions of sound waves and of physical components such as strings, tubes, and membranes to create musical signals. This seminar will present the main techniques for physical modeling of instruments, with special attention to waveguides, lumped models, and state-space models.
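As a concrete taste of physical modeling, the sketch below implements Karplus-Strong plucked-string synthesis, a simple relative of the digital waveguide models mentioned above: a delay line initialized with noise is fed back through an averaging low-pass filter, mimicking a vibrating string losing high-frequency energy. The sample rate and decay factor are illustrative choices, not values from the seminar.

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, decay=0.996):
    """Pluck a virtual string: a noise-filled delay line with low-pass feedback."""
    n = int(sr / freq)                      # delay-line length determines the pitch
    buf = np.random.uniform(-1.0, 1.0, n)   # initial 'pluck' excitation
    out = np.empty(int(sr * duration))
    for i in range(out.size):
        out[i] = buf[i % n]
        # two-point average acts as the low-pass loop filter (string damping)
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

samples = karplus_strong(220.0, 1.0)        # one second of an A3 'string'
```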
Humans can easily identify segments of singing voice in audio containing a mixture of sound sources; identifying such segments computationally, however, is not a trivial task. This seminar will present the fundamentals of the problem of singing voice detection in polyphonic audio signals, a brief description of the techniques used to solve it, and its applications in other music information retrieval (MIR) tasks. Finally, some challenges regarding the improvement of performance in the automatic detection of segments with singing voice will be highlighted.
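One common family of techniques frames this as per-frame classification; the sketch below, a minimal illustration rather than the seminar's method, extracts MFCC features with librosa and trains a classifier to label each frame as vocal or non-vocal. The audio and the training labels here are random placeholders; a real system would use an annotated corpus.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

# Placeholder audio: in practice, load a real mixture, e.g. librosa.load("song.wav").
sr = 22050
y = np.random.randn(sr * 5).astype(np.float32)   # 5 seconds of noise as a stand-in

# Frame-level timbral features; MFCCs are a common baseline for this task.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape: (20, n_frames)
X = mfcc.T                                           # one feature vector per frame

# Placeholder frame labels (1 = singing voice present); real labels would come
# from human annotations, not random draws.
labels = np.random.randint(0, 2, X.shape[0])

clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
is_vocal = clf.predict(X)   # per-frame vocal/non-vocal decision
```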
The emergence of musical patterns via repetition/similarity is paramount in making sense of and understanding music. Yet, despite the efforts made towards its systematic description, musical similarity remains an elusive concept, resisting robust formalisation. Why does the introduction of well-established, powerful pattern matching techniques (exact or approximate) in the musical domain usually end up with rather limited/partial/fragmentary results? Why is it so difficult to create a general model of musical similarity that may capture musically and cognitively plausible patterns? In this presentation, we will focus on three sources of difficulty in describing musical similarity. Firstly, it is not always easy to get a musical sequence per se on which to apply pattern matching techniques; especially in non-monophonic music (i.e., most music), it is anything but trivial to derive cognitively meaningful auditory images/streams within which patterns may emerge. Secondly, it is most important to decide how a coherent sequence of musical entities may be represented; representation in music is complex due to the multi-dimensional and hierarchic nature of musical data. Thirdly, it is vital to define the nature of a certain similarity process, as special models may have to be devised (rather than use of standard off-the-shelf algorithms). In this presentation, examples and techniques from recent research on musical pattern discovery, in melodic, harmonic and rhythmic contexts, will be presented to highlight the importance of looking in detail at the musical and cognitive aspects of music pattern discovery tasks before attempting to use/develop specific pattern matching algorithms.
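The second difficulty, representation, can be made concrete with a small example (ours, not the speaker's): the sketch below compares two melodies by edit distance, once on absolute MIDI pitches and once on pitch intervals. The interval representation makes a transposed repetition match exactly, while the absolute one misses it entirely; the melodies are illustrative.

```python
def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i-1][j] + 1,                     # deletion
                          d[i][j-1] + 1,                     # insertion
                          d[i-1][j-1] + (a[i-1] != b[j-1]))  # substitution
    return d[len(a)][len(b)]

def intervals(pitches):
    """Re-represent a melody as successive pitch intervals (transposition-invariant)."""
    return [q - p for p, q in zip(pitches, pitches[1:])]

theme  = [60, 62, 64, 65, 64, 62, 60]   # a melodic figure (MIDI pitches)
answer = [67, 69, 71, 72, 71, 69, 67]   # the same figure transposed up a fifth

print(edit_distance(theme, answer))                        # 7: no pitch matches
print(edit_distance(intervals(theme), intervals(answer)))  # 0: identical contour
```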
Emilios Cambouropoulos is Associate Professor in Musical Informatics at the School of Music Studies, Aristotle University of Thessaloniki. He studied Physics, Music, and Music Technology before obtaining his PhD in 1998 on Artificial Intelligence and Music at the University of Edinburgh. He worked as a research associate at King’s College London (1998-1999) on a musical data-retrieval project and was employed at the Austrian Research Institute for Artificial Intelligence (OeFAI) in Vienna on the project Artificial Intelligence Models of Musical Expression (1999-2001). Recently he was principal investigator for the EU FP7 project Concept Invention Theory COINVENT (2013-2016). His research interests cover topics in the domain of cognitive and computational musicology (CCM Group - ccm.web.auth.gr), and he has published extensively in this field in scientific journals, books, and conference proceedings. Homepage: http://users.auth.gr/emilios/