Music recommender systems typically use historical listening information to make personalized recommendations. This approach, however, greedily keeps highly rated songs as the best candidates. We present a strategy for balancing safe (exploitation) and novel (exploration) recommendations in order to prevent suboptimal performance over the long term. The proposed solution is based on a reinforcement learning problem called the multi-armed bandit, which models a situation where a player faces several slot machines and needs to maximize their winnings. The player starts without any knowledge about the machines and has to choose, at each turn, between the current best machine and new possibilities. Practical results from the literature are presented, showing improved long-term recommendation as well as mitigation of the problem of new items added to the dataset (the cold-start problem).
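As a minimal sketch of the exploitation/exploration trade-off described above, the epsilon-greedy strategy is one of the simplest multi-armed bandit policies: with a small probability it recommends a random (novel) item, otherwise it recommends the item with the best average reward so far. The class and parameter names below are illustrative, not taken from the source.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy bandit: with probability `epsilon` explore a
    random arm (novel song); otherwise exploit the arm with the best
    average reward observed so far (safe recommendation)."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # number of plays per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        # exploit: arm with the highest mean reward so far
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental update of the mean reward for this arm
        self.values[arm] += (reward - self.values[arm]) / n
```

In a recommendation setting the "reward" could be, for example, 1 if the user listens to the recommended song to the end and 0 if they skip it; a new song added to the catalog simply becomes a new arm, which exploration will eventually try.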
We present strategies for processing audio for query-by-humming, with the goal of matching melodies from MIDI files to melodies hummed in audio files. A query-by-humming application is an interface where a user hums a melody as he or she remembers it and the application returns melodies from a MIDI repertoire with some degree of similarity, depending on what the user expects to be similar, so that the user can discover more information about the hummed melody.
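On the MIDI side of this matching task, each file must first be reduced to a comparable melodic representation. The sketch below assumes the `pretty_midi` library and a hypothetical helper name `midi_melody`; the highest-note-per-onset rule is a common rough heuristic, not necessarily the method used in the source.

```python
import pretty_midi  # assumed available; any MIDI parser would do

def midi_melody(path):
    """Reduce a MIDI file to a monophonic pitch sequence (MIDI note
    numbers) by keeping the highest sounding note at each onset."""
    midi = pretty_midi.PrettyMIDI(path)
    notes = [n for inst in midi.instruments if not inst.is_drum
             for n in inst.notes]
    # sort by onset time, highest pitch first at equal onsets
    notes.sort(key=lambda n: (n.start, -n.pitch))
    melody, last_onset = [], None
    for n in notes:
        if last_onset is None or n.start > last_onset:
            melody.append(n.pitch)
            last_onset = n.start
    return melody
```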
We give a brief review of key concepts and examples of transcription using algorithms such as Melodia and ASyMuT. We discuss metrics for matching these sound representations and show results obtained with the presented strategies.
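The text does not fix a specific matching metric, so as one common illustration, the hummed and MIDI melodies can be compared as interval contours (making the match invariant to the key the user hums in) under dynamic time warping (tolerating local tempo differences). The function names here are hypothetical.

```python
def interval_contour(pitches):
    """Represent a melody by its successive pitch intervals, so the
    comparison does not depend on the absolute key."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two interval
    sequences; lower means more similar."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Ranking the MIDI repertoire by `dtw_distance(interval_contour(query), interval_contour(candidate))` then yields the candidates most similar to the hummed query.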