Structured Knowledge and Kernel-based Learning: the case of Grounded Spoken Language Learning in Interactive Robotics

Roberto Basili, Danilo Croce
2016 International Conference of the Italian Association for Artificial Intelligence  
Recent results achieved by statistical approaches involving Deep Neural Learning architectures suggest that semantic inference tasks can be solved by adopting complex neural architectures and advanced optimization techniques, even while simplifying the representation of the targeted phenomena. The idea that the representation of structured knowledge is essential to reliable and accurate semantic inference seems to be implicitly denied. However, the Neural Networks (NNs) underlying such methods rely on complex and beneficial representational choices for the input to the network (e.g., in the so-called pre-training stages), and sophisticated design choices regarding the NNs' inner structure are still required. While optimization provides strong mathematical tools that are crucially useful, in this work we examine the role of the representation of information and knowledge. In particular, we claim that representation is still a major issue, and discuss it in the light of the Spoken Language capabilities required by a robotic system in the domain of service robotics. The result is that adequate knowledge representation is central for learning machines in real applications. Moreover, learning mechanisms able to properly characterize it, through expressive mathematical abstractions (e.g., trees, graphs, or sets), constitute a core research direction towards robust, adaptive and increasingly autonomous AI systems.
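As a minimal illustration of kernel-based learning over the structured abstractions mentioned above (trees, in this case), the following sketch computes a simplified Collins–Duffy style tree kernel: the similarity between two parse trees is the number of shared subtree fragments, computed recursively. The tree encoding, function names, decay factor `lam`, and the example command parses are illustrative assumptions, not taken from the paper; the node-matching test here compares labels and arity rather than full grammar productions.

```python
# Simplified tree kernel in the Collins-Duffy style (illustrative sketch).
# Trees are plain (label, children) tuples; `lam` is a decay factor that
# down-weights larger fragments. Matching is by label and arity, a
# simplification of the full production-rule match.

def delta(n1, n2, lam=1.0):
    """Weighted count of common subtree fragments rooted at n1 and n2."""
    label1, kids1 = n1
    label2, kids2 = n2
    if label1 != label2 or len(kids1) != len(kids2):
        return 0.0
    if not kids1:               # matching leaves
        return lam
    prod = lam
    for c1, c2 in zip(kids1, kids2):
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def tree_kernel(t1, t2, lam=1.0):
    """Sum delta over all pairs of nodes drawn from the two trees."""
    def nodes(t):
        yield t
        for child in t[1]:
            yield from nodes(child)
    return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))

# Two tiny (hypothetical) parses of spoken robot commands:
# "take the mug" vs. "take the book"
t1 = ("VP", [("V", [("take", [])]),
             ("NP", [("Det", [("the", [])]), ("N", [("mug", [])])])])
t2 = ("VP", [("V", [("take", [])]),
             ("NP", [("Det", [("the", [])]), ("N", [("book", [])])])])
```

A kernel of this kind can be plugged into any kernel machine (e.g., an SVM with a precomputed Gram matrix), letting the learner operate directly on syntactic structure rather than on a flattened feature vector; this is the sense in which expressive abstractions remain central to the learning process.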