In this master's work we explore different possibilities of real-time digital audio processing using platforms that are widely available and have relatively low cost.
Arduinos are minimal structures for interacting with ATmega microcontrollers and are generally used as control interfaces for other electric or electronic devices. Because they have pins capable of ADC and of DAC-like output (typically via PWM), they can be used to capture, process and emit analog signals.
GPUs are parallel processing cards whose structure evolved from the traditional graphics processing pipeline. They have hundreds of processors that operate in parallel on their own memory. Thus, they can operate on many channels at the same time, or exploit the inherent parallelism of some audio processing algorithms.
Mobile devices are becoming increasingly common and, because they are capable of capturing and emitting audio, they can be explored as platforms for real-time audio processing. In this context, it is interesting to analyse the performance of different mobile devices on common audio processing tasks.
In this seminar, we will present the platforms described above and the results obtained when performing real-time audio processing on each of them.
AudioLazy = DSP (Digital Signal Processing) + expressiveness + real time + pure Python. It is a package designed for audio processing, analysis and synthesis, intended both for prototyping and simulation tasks and for applications with real-time requirements. This seminar aims to present AudioLazy, its design goals, aspects of the digital representation of sound and their impacts, and the relationship between expressiveness and implementation, as well as several examples of applications. Among the topics discussed are strategies for containers and value evaluation, gammatone filters, the DTFT and Z transforms, MIR (Music Information Retrieval) and working with sequences of symbols.
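As a hint of the expressiveness the talk focuses on, the minimal sketch below defines a one-pole filter directly from its Z-transform expression and applies it lazily to a periodic input stream; it relies on the z filter object, the Stream class and the take method as documented by AudioLazy, and exact behavior may vary between versions of the package:

    # Minimal AudioLazy sketch (assumes the package's documented z and
    # Stream objects; details may differ between versions).
    from audiolazy import Stream, z

    # One-pole filter written directly from its Z-transform:
    #   H(z) = 1 / (1 - 0.5 * z^-1)
    filt = 1 / (1 - 0.5 * z ** -1)

    # A periodic, lazily evaluated input stream: 1, 0, 0, 0, 1, 0, 0, 0, ...
    signal = Stream(1, 0, 0, 0)

    # Filtering is also lazy; take() evaluates only the requested samples.
    print(filt(signal).take(8))

The filter definition reads almost like the mathematical expression it implements, which illustrates the relationship between expressiveness and implementation mentioned above.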
This talk will present an experience of applying computer music to theater, realized in 1981 at the "Festival of Two Worlds" in Spoleto, Italy. The aim was to increase the interaction between sounds, scenery and actors on stage by distributing the control of the generation of musical events. The seminar will address the structure of the performance and the music that was generated, as well as the technical apparatus built to perform the work.
In this seminar we present the Raspberry Pi, from its basic specifications to its use with some operating systems. Case studies and comparisons with other devices will be presented, together with useful accessories that provide a better user experience. Aiming to validate (or not) its use in artistic performances, there will be demonstrations of real-time signal processing on this credit-card-sized single-board computer.
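As a rough illustration (not the seminar's actual demonstration) of what block-based real-time processing on such a board can look like in Python, the hypothetical sketch below captures audio with PyAudio and plays it straight back, leaving a spot where processing code would go:

    # Hypothetical real-time pass-through sketch using PyAudio; buffer size
    # and sample rate are arbitrary choices, not values from the seminar.
    import pyaudio

    RATE = 44100   # samples per second
    CHUNK = 256    # block size in frames (smaller -> lower latency, more CPU)

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, output=True, frames_per_buffer=CHUNK)

    try:
        while True:
            block = stream.read(CHUNK)   # capture one block of samples
            # ... real-time DSP on `block` would happen here ...
            stream.write(block, CHUNK)   # play the (processed) block back
    except KeyboardInterrupt:
        pass
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()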
Musical Multiagent Systems are useful for solving inherently complex and distributed problems. This seminar will present an overview of Multiagent Systems and their applications in the field of Computer Music.