Evolutionary concept learning from cartoon videos by multimodal hypernetworks

Beom-Jin Lee, Jung-Wo Ha, Kyung-Min Kim, Byoung-Tak Zhang
2013 IEEE Congress on Evolutionary Computation
Concepts have been widely used for categorizing and representing knowledge in artificial intelligence. Previous research on concept learning has focused on unimodal data, usually in linguistic domains in a static environment. Concept learning from multimodal stream data, such as videos, remains a challenge due to its dynamic changes and high dimensionality. Here we propose an evolutionary method that simulates the process of human concept learning from multimodal video streams. Two key ideas of evolutionary concept learning are representing concepts as a large collection (population) of hyperedges, i.e., a hypergraph, and incrementally learning from video streams with an evolutionary approach. The hypergraph is learned "evolutionarily" by repeating a generation and selection process over hyperedge concepts drawn from the video data. The advantage of this evolutionary learning process is that the population-based distributed coding allows flexible and robust tracing of changing concept relations as the video story unfolds. We evaluate the proposed method on a suite of children's cartoon videos with a total playing time of 517 minutes. Experimental results show that the proposed method effectively represents visual-textual concept relations and that our evolutionary concept learning method effectively models conceptual change as an evolutionary process. We also investigate the structural properties of the constructed concept networks.
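The generate-and-select loop described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a toy encoding in which each video instance is a dict of multimodal (feature, value) pairs, a hyperedge is a small set of such pairs, and fitness is simply how often a hyperedge matches recent instances in a sliding window:

```python
import random

def sample_hyperedges(instance, order, n):
    """Generation step: randomly sample n hyperedges (feature subsets of the
    given order) from one multimodal instance, e.g. visual + textual features."""
    feats = list(instance.items())
    return [tuple(random.sample(feats, order)) for _ in range(n)]

def matches(hyperedge, instance):
    """A hyperedge matches an instance if all its (feature, value) pairs agree."""
    return all(instance.get(f) == v for f, v in hyperedge)

def evolve(stream, order=2, per_instance=20, keep=100, seed=0):
    """Incremental generation-and-selection over a stream: generate candidate
    hyperedges from each new instance, score the population against a recent
    window of data, and keep only the fittest hyperedges (the selection step)."""
    random.seed(seed)
    population = []
    for t, instance in enumerate(stream):
        population += sample_hyperedges(instance, order, per_instance)
        recent = stream[max(0, t - 4): t + 1]  # sliding evaluation window
        scored = [(sum(matches(h, x) for x in recent), h) for h in population]
        scored.sort(key=lambda s: -s[0])
        population = [h for _, h in scored[:keep]]
    return population
```

Because selection repeatedly favors hyperedges that still match incoming instances, the surviving population drifts to follow the story, which is the "flexible and robust tracing" property the abstract claims; the window size, hyperedge order, and population cap are hypothetical knobs chosen here for illustration.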
doi:10.1109/cec.2013.6557700 dblp:conf/cec/LeeHKZ13 fatcat:q4trwieypnctzebnojfde7bfcq