Segmentation to Sound Conversion
IOSR Journal of Computer Engineering
We address the task of unsupervised topic segmentation of speech data, operating directly over raw acoustic information. In contrast to existing algorithms for topic segmentation of speech, our approach does not require input transcripts. The method predicts topic changes by analyzing the distribution of recurring acoustic patterns in the speech signal of a single speaker, and robustly handles the noise inherent in acoustic matching by aggregating similarity information from multiple local comparisons. Our experiments show that audio-based segmentation compares favorably with transcript-based segmentation computed over noisy transcripts. These results demonstrate the utility of our method in applications where a speech recognizer is unavailable or its output has a high word error rate. This paper also describes a method for automatically locating points of significant change in music or audio by analyzing local self-similarity. This method can find individual note boundaries as well as natural segment boundaries such as verse/chorus or speech/music transitions, even in the absence of cues such as silence. Because the approach uses the signal to model itself, it relies on no particular acoustic cues and requires no training. We present a wide variety of applications, including indexing, segmentation, and beat tracking of music and audio; the method works well on a wide variety of audio sources.
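The self-similarity analysis described above can be sketched as follows: build a frame-by-frame similarity matrix from per-frame audio features, then slide a checkerboard kernel along its main diagonal so that peaks in the resulting novelty curve mark candidate segment boundaries. This is an illustrative sketch, not the paper's exact implementation; the feature representation, kernel size, and cosine similarity measure are assumptions for the example.

```python
import numpy as np

def novelty_curve(features, kernel_size=8):
    """Novelty score along the diagonal of a self-similarity matrix.

    features: (n_frames, n_dims) array of per-frame audio features
    (e.g. spectral or cepstral coefficients -- any frame-level
    representation works for this sketch).
    """
    # Cosine self-similarity matrix: S[i, j] compares frame i with frame j.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    S = unit @ unit.T

    # Checkerboard kernel: positive in the within-segment quadrants,
    # negative in the cross-segment quadrants.
    half = kernel_size // 2
    sign = np.ones(kernel_size)
    sign[:half] = -1.0
    kernel = np.outer(sign, sign)

    # Correlate the kernel along the main diagonal; a high score means
    # the frames before and after position i are internally similar but
    # dissimilar to each other, i.e. a likely boundary.
    n = S.shape[0]
    novelty = np.zeros(n)
    for i in range(half, n - half):
        window = S[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(window * kernel)
    return novelty
```

Boundary candidates are then read off as local maxima of the novelty curve, which requires no training data and no transcript, matching the abstract's claim that the signal is used to model itself.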