In life, some say: “If the problem is easy, solve it directly. If not, decompose it into smaller parts.” In computer science, this is called Divide & Conquer (D&C). But what type of problems should one divide in order to conquer? In research, communication, collaboration… in life? Carolina Brum Medeiros, a PhD candidate at the IDMIL Laboratory/McGill University, sketches possibilities for dividing and conquering in human motion analysis and, why not, in research itself. In this talk, she will discuss sensor fusion as a method for closing the D&C open chain. The talk will be followed by an open discussion about research methods that Divide, Conquer, Collaborate and Go Beyond.
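As a rough, generic illustration of sensor fusion in motion analysis (a sketch only, not the method presented in the talk), the snippet below blends gyroscope and accelerometer readings with a complementary filter to estimate a tilt angle; the sensor values and the blending factor are made up.

    import math

    def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
        """Fuse a gyroscope rate and an accelerometer-derived angle (radians).

        The gyroscope is accurate over short intervals but drifts; the
        accelerometer is noisy but drift-free. Blending the two is one of
        the simplest sensor-fusion schemes used in motion tracking.
        """
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

    # Hypothetical readings: gyro rate in rad/s, accelerometer components in g.
    angle = 0.0
    dt = 0.01  # 100 Hz sampling
    for gyro_rate, (ax, az) in [(0.5, (0.05, 0.99)), (0.4, (0.06, 0.98))]:
        accel_angle = math.atan2(ax, az)   # tilt angle implied by gravity
        angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
    print(angle)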
Bio:
Carolina Brum Medeiros was born in Pelotas, Brazil. She received the B.S. degree in electrical engineering in 2006 and the M.S. degree in mechanical engineering in 2009 from the Universidade Federal de Santa Catarina, Florianópolis, Brazil. She is currently working towards the Ph.D. degree in the Music Technology Program at the Schulich School of Music and is a Research Assistant at the Input Devices and Music Interaction Laboratory (IDMIL), McGill University, Montreal, Canada. In 2013, she was a Visiting Student with the Responsive Environments Group, MIT Media Laboratory. She collaborates with the sportsemble project, which acquires and analyzes baseball pitching data together with the MIT Media Lab, Harvard Medical School and C-Motion. Her research interests include sensor fusion techniques, sensor and signal conditioning, user interfaces, and human motion analysis. Ms. Brum Medeiros is a recipient of a full doctoral scholarship from Capes/Brazil. She is a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) and the CREATE-NSERC Integrated Sensor Systems Program.
Impedance describes the opposition that a system presents to a flow. Electrical impedance can be represented as a complex quantity whose real part is resistance and whose imaginary part is reactance. Acoustic impedance, on the other hand, depends on frequency and can be calculated as a function of pressure, particle velocity and surface area. It is also possible to consider the impedance of a medium or of a component in a sound system.
In this seminar we will discuss concepts related to impedance and its effects on the manipulation of sound signals. Some examples will be presented, focusing on interference in signal transmission caused by impedance. Finally, we will highlight some precautions to take when connecting components in such systems.
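As a minimal illustration of the electrical side of this (not necessarily one of the seminar's examples), the sketch below computes the complex impedance of a series RLC circuit, with resistance as the real part and reactance as the imaginary part; the component values are made up.

    import math

    def series_rlc_impedance(r, l, c, freq_hz):
        """Complex impedance of a series RLC circuit at a given frequency.

        The real part is the resistance; the imaginary part (reactance)
        combines the inductive term (omega * L) and the capacitive term
        (-1 / (omega * C)).
        """
        omega = 2 * math.pi * freq_hz
        return complex(r, omega * l - 1.0 / (omega * c))

    # Example: 8 ohm, 1 mH, 100 uF evaluated at 1 kHz (illustrative values).
    z = series_rlc_impedance(8.0, 1e-3, 100e-6, 1000.0)
    print(abs(z), math.degrees(math.atan2(z.imag, z.real)))  # magnitude, phase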
Writing multimedia applications from scratch is hard work. There are many aspects to consider: I/O, video and audio formats, conversions, real-time constraints, synchronization, (de)multiplexing, effective control and so on. The GStreamer framework tries to help with this.
GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback and audio/video streaming to complex audio (mixing) and video (non-linear editing) processing. Applications can take advantage of advances in codec and filter technology transparently. Developers can add new codecs and filters by writing a simple plugin with a clean, generic interface.
This seminar will give a short introduction to the concepts and structures of the GStreamer framework and will present examples and tools.
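As a minimal sketch of the kind of graph GStreamer builds (shown here with the GStreamer 1.0 Python bindings; the file name is only a placeholder), the example below plays an Ogg/Vorbis file through a simple pipeline:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Build a simple Ogg/Vorbis playback graph from a textual description:
    # file source -> Ogg demuxer -> Vorbis decoder -> converter -> audio sink.
    pipeline = Gst.parse_launch(
        "filesrc location=example.ogg ! oggdemux ! vorbisdec ! "
        "audioconvert ! autoaudiosink"
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Block until an error occurs or the stream ends, then shut down.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)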
The popularization of computer networks, the growth in computational resources and their use in music production have raised interest in using computers for synchronous communication of music content. This communication may allow a new level of interactivity between machines and people in music production processes, including the distribution of activities, resources and people within a networked music environment. In this context, this work presents a solution for synchronous communication of audio and MIDI streams over computer networks.
Besides allowing communication, the proposed solution simplifies the connection of music resources and allows the integration of heterogeneous systems, such as different operating systems, audio architectures and encoding formats, transparently in a distributed environment.
To accomplish this, we mapped requirements and desirable features for this application domain through interaction with musicians and analysis of related software. Based on these requirements and features, we designed a system architecture for the specific domain of synchronous communication of music content. Using this architecture as a reference, we implemented a library that comprises the essential functionalities for this domain.
In order to integrate this library with different audio and MIDI libraries, we developed a tool set that matches the proposed requirements and allows users to employ network connections in several music tools.
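To give a rough sense of the transport problem such a system deals with (a generic illustration only, not Medusa's API; the host, port and block size are made up), the sketch below sends fixed-size PCM blocks over UDP with a sequence number so that a receiver could detect loss and reordering:

    import socket
    import struct

    # Toy sender: push fixed-size PCM blocks over UDP, each prefixed with a
    # sequence number. In a real system the payload would come from the
    # audio callback of the host's audio architecture.
    HOST, PORT = "127.0.0.1", 5004
    BLOCK_FRAMES = 256           # frames per network block
    CHANNELS = 2
    SAMPLE_BYTES = 2             # 16-bit PCM

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    silence = b"\x00" * (BLOCK_FRAMES * CHANNELS * SAMPLE_BYTES)

    for seq in range(100):
        packet = struct.pack("!I", seq) + silence  # 4-byte sequence header
        sock.sendto(packet, (HOST, PORT))
    sock.close()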
In this seminar, we will introduce Medusa and present its concepts and development.