
Exploration Of The Correspondence Between Visual And Acoustic Parameter Spaces

David Gerhard, Daryl Hepting, Matthew Mckague
2004 Zenodo  
Since both visual and acoustic elements in the example are generated from concise specifications, the potential of this approach to create new works through parameter space exploration is accentuated, however  ...  This approach is intended to provide new tools to facilitate both collaboration between visual artists and musicians and examination of perceptual issues between visual and acoustic media.  ...  ACKNOWLEDGEMENTS The authors wish to acknowledge the Natural Sciences and Engineering Research Council of Canada Discovery Grant and Undergraduate Student Research Award programs.  ... 
doi:10.5281/zenodo.1176603 fatcat:i3hbe2obv5bjpow4zkyyujpyra

Collaborative Computer-Aided Parameter Exploration for Music and Animation [chapter]

Daryl H. Hepting, David Gerhard
2005 Lecture Notes in Computer Science  
The main piece of software in this development is the system which allows exploration of parameter mappings.  ...  Although many artists have worked to create associations between music and animation, this has traditionally been done by developing one to suit the pre-existing other, as in visualization or sonification  ...  This work was supported by the University of Regina and the Natural Sciences and Engineering Research Council of Canada.  ... 
doi:10.1007/978-3-540-31807-1_13 fatcat:nqxm5bauy5a45epkioy2xjvhv4

Towards A Perceptual Framework For Interface Design In Digital Environments For Timbre Manipulation

Sean Soraghan, Alain Renaud, Ben Supper
2016 Zenodo  
A review is given of existing research into semantic descriptors of timbre, as well as corresponding acoustic features of timbre. Discussion is also given on existing interface design techniques.  ...  'Perceptually motivated' in this context refers to the use of common semantic timbre descriptors to influence the digital representation of timbre.  ...  It is argued that the exploration of parameter spaces in digital environments could be visually guided using a perceptually motivated visualisation framework that makes use of mappings between acoustic  ... 
doi:10.5281/zenodo.1176129 fatcat:lovjzw3icrcblft2bsia7qnohi

Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults

Lucie Ménard, Corinne Toupin, Shari R. Baum, Serge Drouin, Jérôme Aubin, Mark Tiede
2013 Journal of the Acoustical Society of America  
The goal of the present study is to further investigate the articulatory effects of visual deprivation on vowels produced by 11 blind and 11 sighted adult French speakers.  ...  Trade-offs between lip and tongue positions were examined. Results are discussed in the light of the perception-for-action control theory.  ...  ACKNOWLEDGMENTS This work was supported by the Social Sciences and Humanities Research Council of Canada and the Natural Sciences and Engineering Research Council of Canada.  ... 
doi:10.1121/1.4818740 pmid:24116433 fatcat:6ozlx6aek5cf3kbjthhirmyh3e

Improving acoustic event detection using generalizable visual features and multi-modality modeling

Po-Sen Huang, Xiaodan Zhuang, Mark Hasegawa-Johnson
2011 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
To allow the flexibility of audio-visual state asynchrony, we explore effective CHMM training via HMM state-space mapping, parameter tying and different initialization schemes.  ...  Acoustic event detection (AED) aims to identify both timestamps and types of multiple events and has been found to be very challenging.  ...  Following a transformation strategy based on state-space mapping and parameter tying [8] , we can convert a CHMM to an equivalent HMM, whose hidden states each correspond to the state of the system described  ... 
doi:10.1109/icassp.2011.5946412 dblp:conf/icassp/HuangZH11 fatcat:tvzxgu5ls5dmjgc2bej3tvqdlq

Analysis and Modeling of Affective Audio Visual Speech Based on PAD Emotion Space

Shen Zhang, Yingjin Xu, Jia Jia, Lianhong Cai
2008 2008 6th International Symposium on Chinese Spoken Language Processing  
This paper explores the connection between PAD emotion space and acoustic/visual features respectively.  ...  This paper analyzes acoustic and visual features for affective audio-visual speech based on PAD (Pleasure-Arousal-Dominance) emotion space.  ...  We aim to model the correlation between PAD emotion space and acoustic/visual feature vector space, and then try to predict the affective acoustic features and visual features for affective talking face  ... 
doi:10.1109/chinsl.2008.ecp.82 dblp:conf/iscslp/ZhangXJC08 fatcat:xoudfpbr5vdknoiuakdatx7em4

Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

Daniel L. Valente, Jonas Braasch, Shane A. Myrbeck
2012 Journal of the Acoustical Society of America  
The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance.  ...  Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness.  ...  The acoustically treated laboratory space, in which the recordings of the musical instruments and the psychophysical experiments took place, was funded by the New York State Foundation for Science, Technology  ... 
doi:10.1121/1.3662055 pmid:22280585 pmcid:PMC3283897 fatcat:4dttjyd5jzfgxocedcxcxodwni

Animating Timbre - A User Study

Sean Soraghan
2014 Proceedings of the SMC Conferences  
Acknowledgments The author would like to thank all of the participants for volunteering to take part in the study.  ...  This study explores mappings in 3D visual space and is focussed on visual representations of timbre features.  ...  CONCLUSION The aim of this study was to combine findings about verbal timbre descriptors and acoustic timbre features and explore preferred mappings between the two.  ... 
doi:10.5281/zenodo.850623 fatcat:ter6hi7zdvbd7kepjeqkv2gk5u

Building a talking baby robot: A contribution to the study of speech acquisition and evolution [chapter]

Jihène Serkhane, Jean-Luc Schwartz, Pierre Bessière
2009 Benjamins Current Topics  
The articulatory model delivers sagittal contour, lip shape and acoustic formants from seven input parameters, which characterize the configurations of the jaw, the tongue, the lips and the larynx.  ...  Learning involves Bayesian programming, in which there are two phases: (i) specification of the variables, decomposition of the joint distribution and identification of the free parameters through exploration  ...  Acknowledgements This work was prepared with support from the European ESF Eurocores program OMLL, and from the French funding programs CNRS STIC Robea and CNRS SHS OHLL, and MESR ACI Neurosciences Fonctionnelles  ... 
doi:10.1075/bct.13.12ser fatcat:qeupnjjaiva4rhqtnx65t764ey

Seeing [u] aids vocal learning: Babbling and imitation of vowels using a 3D vocal tract model, reinforcement learning, and reservoir computing

Max Murakami, Bernd Kroger, Peter Birkholz, Jochen Triesch
2015 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)  
We present a model of imitative vocal learning consisting of two stages. First, the infant is exposed to the ambient language and forms auditory knowledge of the speech items to be acquired.  ...  Moreover, we find that acquisition of [u] is impaired if visual information is discarded during imitation.  ...  ACKNOWLEDGMENT This work was supported by the Quandt foundation.  ... 
doi:10.1109/devlrn.2015.7346142 dblp:conf/icdl-epirob/MurakamiKBT15 fatcat:nu5vxg4rmjadbmnema3jhd67xi

Building a talking baby robot: A contribution to the study of speech acquisition and evolution

2005 Interaction Studies  
The articulatory model delivers sagittal contour, lip shape and acoustic formants from seven input parameters that characterize the configurations of the jaw, the tongue, the lips and the larynx.  ...  Learning involves Bayesian programming, in which there are two phases: (i) specification of the variables, decomposition of the joint distribution and identification of the free parameters through exploration  ...  Acknowledgements This work was prepared with support from the European ESF Eurocores program OMLL, and from the French funding programs CNRS STIC Robea and CNRS SHS OHLL, and MESR ACI Neurosciences Fonctionnelles  ... 
doi:10.1075/is.6.2.06ser fatcat:rpbepzfboneb7cb3aagryj5tde

Goal babbling of acoustic-articulatory models with adaptive exploration noise

Anja Kristina Philippsen, Rene Felix Reinhart, Britta Wrede
2016 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)  
Ambient speech influences learning on two levels: it organizes the learning process because it is used to generate a space of goals in which exploration takes place.  ...  The bottom plot shows how the distance decreases between the goal space position of the interpolated sound and the goal space position of the target vowel shape.  ... 
doi:10.1109/devlrn.2016.7846793 dblp:conf/icdl-epirob/PhilippsenRW16 fatcat:yyiqqnaymzb6jpsuy3bpekiy7u

Music and Architecture: Bonds, Interrelations, Transductions

Miriam Bessone, Ricardo Pérez Miró
2007 International Journal of Architectural Computing  
Acknowledgements We gratefully thank the researchers and especially the architects, musicians and student authors who participated in the controlled experiments:  ...  Listening to music, then finding and selecting joining parameters, appears to be the necessary activity to establish analogies between music parameters and visual form.  ...  As a general result, spaces with a high level of experience were generated, emphasizing the enriching interaction of perceptions and knowledge between the composers of electro-acoustic music while composing  ... 
doi:10.1260/147807707782581828 fatcat:673wl3yfofdo3d4uhd64xz5rfy

How Can Acoustic-To-Articulatory Maps Be Constrained?

Yves Laprie, Petros Maragos, Jean Schoentgen
2008 Zenodo  
Publication in the conference proceedings of EUSIPCO, Lausanne, Switzerland, 2008  ...  The objective has thus been to build a combined tessellation of the acoustic and articulatory spaces [15] .  ...  An additional source of discrepancies between inferred and observed vocal tract shapes is the simulation of the acoustic wave propagation, which does not represent the acoustical behavior of the real vocal  ... 
doi:10.5281/zenodo.41139 fatcat:5e223kza4bgltnhchstcifywz4

Fast Insight into High-Dimensional Parametrized Simulation Data

Daniel Butnaru, Benjamin Peherstorfer, Hans-Joachim Bungartz, Dirk Pfluger
2012 2012 11th International Conference on Machine Learning and Applications  
However, in order to allow the engineer to thoroughly explore the design space and fine-tune parameters, many -usually very time-consuming -simulation runs are necessary.  ...  In this paper, we address the two-fold problem: First, instantly provide simulation results if the parameter configuration is changed, and, second, identify specific areas of the design space with concentrated  ...  In this section, we address the second issue, namely, which area of the large parameter space should the explorer mostly consider.  ... 
doi:10.1109/icmla.2012.189 dblp:conf/icmla/ButnaruPBP12 fatcat:sp3xkyseijbf5cct2m6m6a4qk4