28 Hits in 0.86 sec

Multiple Style Transfer via Variational AutoEncoder [article]

Zhi-Song Liu, Vicky Kalogeiton, Marie-Paule Cani
2021 arXiv   pre-print
Zhi-Song Liu, Vicky Kalogeiton and Marie-Paule Cani, LIX, École Polytechnique, CNRS, IP Paris https://www.lix.polytechnique.fr/geovic/project-pages/icip-style-transfer ADDITIONAL RESULTS We present additional  ... 
arXiv:2110.07375v1 fatcat:xxxrkkbh6vgsfefzmdn5bndym4

Face, Body, Voice: Video Person-Clustering with Multiple Modalities [article]

Andrew Brown, Vicky Kalogeiton, Andrew Zisserman
2021 arXiv   pre-print
The objective of this work is person-clustering in videos -- grouping characters according to their identity. Previous methods focus on the narrower task of face-clustering, and for the most part ignore other cues such as the person's voice, their overall appearance (hair, clothes, posture), and the editing structure of the videos. Similarly, most current datasets evaluate only the task of face-clustering, rather than person-clustering. This limits their applicability to downstream applications such as story understanding, which require person-level, rather than only face-level, reasoning. In this paper we make contributions to address both these deficiencies: first, we introduce a Multi-Modal High-Precision Clustering algorithm for person-clustering in videos using cues from several modalities (face, body, and voice). Second, we introduce a Video Person-Clustering dataset, for evaluating multi-modal person-clustering. It contains body-tracks for each annotated character, face-tracks when visible, and voice-tracks when speaking, with their associated features. The dataset is by far the largest of its kind, and covers films and TV-shows representing a wide range of demographics. Finally, we show the effectiveness of using multiple modalities for person-clustering, explore the use of this new broad task for story understanding through character co-occurrences, and achieve a new state of the art on all available datasets for face and person-clustering.
arXiv:2105.09939v1 fatcat:2iqx4cqf2nb43ns76j2abikjky
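The abstract above does not spell out how the multi-modal cues are combined, so the following is only a minimal, hypothetical sketch of one way to fuse per-track face, body, and voice embeddings and cluster them. The averaging fusion, the cosine agglomerative clustering, and the threshold value are assumptions for illustration, not the paper's Multi-Modal High-Precision Clustering algorithm.

```python
# Hypothetical sketch: cluster person-tracks by fusing face/body/voice embeddings.
# Assumes scikit-learn >= 1.2 (for the `metric` keyword); all values are toy data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def fuse_track_embedding(face=None, body=None, voice=None):
    """Average whichever modality embeddings are available for one track."""
    available = [e for e in (face, body, voice) if e is not None]
    fused = np.mean(available, axis=0)
    return fused / np.linalg.norm(fused)  # L2-normalise for cosine distance

def cluster_tracks(track_embeddings, distance_threshold=0.6):
    """Group tracks into identities with agglomerative clustering on cosine distance."""
    X = np.stack(track_embeddings)
    clusterer = AgglomerativeClustering(
        n_clusters=None, metric="cosine", linkage="average",
        distance_threshold=distance_threshold)
    return clusterer.fit_predict(X)

# Toy usage: three tracks, two of them belonging to the same (hypothetical) character.
rng = np.random.default_rng(0)
a = rng.normal(size=128); b = a + 0.05 * rng.normal(size=128); c = rng.normal(size=128)
tracks = [fuse_track_embedding(face=a, voice=a),
          fuse_track_embedding(body=b),
          fuse_track_embedding(face=c, body=c, voice=c)]
print(cluster_tracks(tracks))  # e.g. [0 0 1]
```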

Programmable Crossbar Quantum-dot Cellular Automata Circuits [article]

Vicky S. Kalogeiton, Dim P. Papadopoulos, Orestis Liolis, Vassilios A. Mardiris, Georgios Ch. Sirakoulis, Ioannis G. Karafyllidis
2016 arXiv   pre-print
Vicky S. Kalogeiton, Dim P. Papadopoulos, Orestis Liolis, Georgios Ch. Sirakoulis and Ioannis G.  ... 
arXiv:1604.07803v1 fatcat:77a4qrbmjnemrf3zmq6zq46mbu

Joint Learning of Object and Action Detectors

Vicky Kalogeiton, Philippe Weinzaepfel, Vittorio Ferrari, Cordelia Schmid
2017 2017 IEEE International Conference on Computer Vision (ICCV)  
While most existing approaches for detection in videos focus on objects or human actions separately, we aim at jointly detecting objects performing actions, such as cat eating or dog jumping. We introduce an end-to-end multitask objective that jointly learns object-action relationships. We compare it with different training objectives, validate its effectiveness for detecting objects-actions in videos, and show that both tasks of object and action detection benefit from this joint learning. Moreover, the proposed architecture can be used for zero-shot learning of actions: our multitask objective leverages the commonalities of an action performed by different objects, e.g. dog and cat jumping, enabling the detection of actions of an object without training on these object-action pairs. In experiments on the A2D dataset [50], we obtain state-of-the-art results on segmentation of object-action pairs. We finally apply our multitask architecture to detect visual relationships between objects in images of the VRD dataset [24].
doi:10.1109/iccv.2017.219 dblp:conf/iccv/KalogeitonWFS17 fatcat:zhjxvt6unnfedipyg4b75yjfla
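As a concrete illustration of what a joint object-action multitask objective can look like, here is a short, hedged PyTorch sketch with two classification heads over shared region features. The layer sizes, class counts, and the simple sum of cross-entropy losses are assumptions, not the exact architecture or loss of the paper.

```python
# Minimal sketch of a joint object-action multitask objective (PyTorch).
import torch
import torch.nn as nn

class JointObjectActionHead(nn.Module):
    def __init__(self, feat_dim=256, num_objects=10, num_actions=9):
        super().__init__()
        self.object_cls = nn.Linear(feat_dim, num_objects)   # e.g. cat, dog, ...
        self.action_cls = nn.Linear(feat_dim, num_actions)   # e.g. eating, jumping, ...

    def forward(self, region_feats):
        # Both heads share the same region features, which is what couples the tasks.
        return self.object_cls(region_feats), self.action_cls(region_feats)

head = JointObjectActionHead()
ce = nn.CrossEntropyLoss()

feats = torch.randn(4, 256)             # 4 candidate regions (dummy features)
obj_gt = torch.tensor([1, 2, 1, 0])     # ground-truth object labels (toy)
act_gt = torch.tensor([3, 3, 5, 0])     # ground-truth action labels (toy)

obj_logits, act_logits = head(feats)
loss = ce(obj_logits, obj_gt) + ce(act_logits, act_gt)  # joint multitask objective
loss.backward()
```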

A Survey on Reinforcement Learning Methods in Character Animation [article]

Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, C. Karen Liu, Julien Pettré, Michiel van de Panne, Marie-Paule Cani
2022 arXiv   pre-print
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions, and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment, and receive appropriate rewards which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive characters in simulators, video games or virtual reality environments. This paper surveys the modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
arXiv:2203.04735v1 fatcat:usnqama2frfwxijpctt6bipivu
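Since the abstract summarizes the generic reinforcement-learning loop (act, receive rewards, improve the policy network), a self-contained REINFORCE sketch in PyTorch is given below. The toy two-state environment, the network size, and the fixed-length episodes are invented purely for illustration; they stand in for whatever simulator or game engine a character-animation setup would use.

```python
# Minimal policy-gradient (REINFORCE) loop: a policy network repeatedly acts,
# receives rewards, and is updated from the collected experience.
import torch
import torch.nn as nn

class ToyEnv:
    """Toy environment: reward +1 for picking the action that differs from the state."""
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        reward = 1.0 if action != self.state else 0.0
        self.state = 1 - self.state
        return self.state, reward

policy = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyEnv()

for episode in range(200):
    state, log_probs, rewards = env.reset(), [], []
    for t in range(10):                                # fixed-length episodes
        obs = torch.eye(2)[state]                      # one-hot observation
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        state, reward = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)  # reward-to-go
    loss = -(torch.stack(log_probs) * returns).sum()           # REINFORCE objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```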

Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval [article]

Andrew Brown, Weidi Xie, Vicky Kalogeiton, Andrew Zisserman
2020 arXiv   pre-print
Andrew Brown [0000−0002−9556−2633], Weidi Xie [0000−0003−3804−2639], Vicky Kalogeiton [0000−0002−7368−6993], and Andrew Zisserman [0000−0002−8945−8573], Visual Geometry Group, University of Oxford {abrown,weidi,vicky,az}@robots.ox.ac.uk https://www.robots.ox.ac.uk/~vgg/research/smooth-ap/ Table Of Contents Of Algorithm  ...  Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval Supplementary Material  ... 
arXiv:2007.12163v2 fatcat:xfw3kzvzt5bttem434htfgqtru

Action Tubelet Detector for Spatio-Temporal Action Localization

Vicky Kalogeiton, Philippe Weinzaepfel, Vittorio Ferrari, Cordelia Schmid
2017 2017 IEEE International Conference on Computer Vision (ICCV)  
Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework [19]. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB [12] and UCF-101 [31] datasets, in particular at high overlap thresholds.
doi:10.1109/iccv.2017.472 dblp:conf/iccv/KalogeitonWFS17a fatcat:djg52a4k7vanvcwvltxwphib4y
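To make the anchor-cuboid and temporal-stacking idea more tangible, here is a minimal PyTorch sketch in which per-frame features are stacked along the channel dimension and a single head outputs cuboid class scores plus one regressed box per frame (a tubelet). The stand-in backbone, channel counts, anchor numbers, and class count are assumptions, not the actual SSD-based ACT-detector.

```python
# Illustrative tubelet head: score an anchor cuboid and regress K per-frame boxes
# from features stacked over a K-frame sequence.
import torch
import torch.nn as nn

class TubeletHead(nn.Module):
    def __init__(self, K=6, feat_channels=64, num_classes=24, num_anchors=4):
        super().__init__()
        self.per_frame = nn.Conv2d(3, feat_channels, 3, padding=1)        # stand-in backbone
        stacked = K * feat_channels                                       # temporal stacking
        self.cls = nn.Conv2d(stacked, num_anchors * (num_classes + 1), 3, padding=1)
        self.reg = nn.Conv2d(stacked, num_anchors * 4 * K, 3, padding=1)  # 4 coords per frame
        self.K = K

    def forward(self, clip):                          # clip: (B, K, 3, H, W)
        B, K, C, H, W = clip.shape
        feats = self.per_frame(clip.view(B * K, C, H, W))
        feats = feats.view(B, -1, *feats.shape[-2:])  # stack the K frames along channels
        return self.cls(feats), self.reg(feats)       # cuboid scores, per-frame box offsets

clip = torch.randn(2, 6, 3, 64, 64)                   # 2 dummy clips of 6 frames
scores, boxes = TubeletHead()(clip)
print(scores.shape, boxes.shape)                      # per-anchor scores and tubelet regressions
```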

Name Your Style: An Arbitrary Artist-aware Image Style Transfer [article]

Zhi-Song Liu, Li-Wen Wang, Wan-Chi Siu, Vicky Kalogeiton
2022 arXiv   pre-print
Image style transfer has attracted widespread attention in the past few years. Despite its remarkable results, it requires additional style images available as references, making it less flexible and inconvenient. Using text is the most natural way to describe the style. More importantly, text can describe implicit abstract styles, like styles of specific artists or art movements. In this paper, we propose a text-driven image style transfer (TxST) that leverages advanced image-text encoders to control arbitrary style transfer. We introduce a contrastive training strategy to effectively extract style descriptions from the image-text model (i.e., CLIP), which aligns stylization with the text description. To this end, we also propose a novel and efficient attention module that explores cross-attentions to fuse style and content features. Finally, we achieve an arbitrary artist-aware image style transfer to learn and transfer specific artistic characters such as Picasso, oil painting, or a rough sketch. Extensive experiments demonstrate that our approach outperforms the state-of-the-art methods on both image and textual styles. Moreover, it can mimic the styles of one or many artists to achieve attractive results, thus highlighting a promising direction in image style transfer.
arXiv:2202.13562v2 fatcat:vbhnuyar2fc2ncsfakip42enry
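The cross-attention fusion described in the abstract can be pictured with the short PyTorch sketch below, where flattened content features attend to a text-derived style embedding (for example, one produced by CLIP). The projection layer, dimensions, and residual fusion are illustrative assumptions rather than the TxST module itself.

```python
# Hedged sketch of style/content fusion via cross-attention: content features are
# queries; a text-derived style embedding provides keys and values.
import torch
import torch.nn as nn

class StyleCrossAttention(nn.Module):
    def __init__(self, content_dim=256, style_dim=512, heads=4):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, content_dim)  # map text embedding into content space
        self.attn = nn.MultiheadAttention(content_dim, heads, batch_first=True)

    def forward(self, content_feats, style_embedding):
        # content_feats: (B, H*W, C) flattened content features
        # style_embedding: (B, 1, style_dim) style token, e.g. encoding "a Picasso painting"
        style = self.style_proj(style_embedding)
        fused, _ = self.attn(query=content_feats, key=style, value=style)
        return content_feats + fused                          # residual fusion of style into content

content = torch.randn(1, 32 * 32, 256)   # dummy content features
style = torch.randn(1, 1, 512)           # dummy CLIP-like text embedding
out = StyleCrossAttention()(content, style)
print(out.shape)                          # torch.Size([1, 1024, 256])
```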

Action Tubelet Detector for Spatio-Temporal Action Localization [article]

Vicky Kalogeiton, Philippe Weinzaepfel, Vittorio Ferrari, Cordelia Schmid
2017 arXiv   pre-print
Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds.
arXiv:1705.01861v3 fatcat:mnoeyukkivdjbmur7yaygkkujm

Automatic summarization and annotation of videos with lack of metadata information

Dim P. Papadopoulos, Vicky S. Kalogeiton, Savvas A. Chatzichristofis, Nikos Papamarkos
2013 Expert systems with applications  
As far as the video summarization method is concerned, it should be noted that the video abstraction problem is expanded to a single query image retrieval problem (Kalogeiton et al., 2010; Papadopoulos et al., 2011  ... 
doi:10.1016/j.eswa.2013.02.016 fatcat:f57bidqcjbhrxaok4iogujwc5i

LAEO-Net: Revisiting People Looking at Each Other in Videos

Manuel J. Marin-Jimenez, Vicky Kalogeiton, Pablo Medina-Suarez, Andrew Zisserman
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Figure 1: Intimacy or hostility? Head pose, along with body pose and facial expressions, is a rich source of information for interpreting human interactions. Being able to automatically understand the non-verbal cues provided by the relative head orientations of people in a scene enables a new level of human-centric video understanding. Green and red/orange heads represent LAEO and non-LAEO cases, respectively. Video source of second row: https://youtu.be/B3eFZMvNS1U
Abstract: Capturing the 'mutual gaze' of people is essential for understanding and interpreting the social interactions between them. To this end, this paper addresses the problem of detecting people Looking At Each Other (LAEO) in video sequences. For this purpose, we propose LAEO-Net, a new deep CNN for determining LAEO in videos. In contrast to previous works, LAEO-Net takes spatio-temporal tracks as input and reasons about the whole track. It consists of three branches, one for each character's tracked head and one for their relative position. Moreover, we introduce two new LAEO datasets: UCO-LAEO and AVA-LAEO. A thorough experimental evaluation demonstrates the ability of LAEO-Net to successfully determine if two people are LAEO and the temporal window where it happens. Our model achieves state-of-the-art results on the existing TVHID-LAEO video dataset, significantly outperforming previous approaches.
doi:10.1109/cvpr.2019.00359 dblp:conf/cvpr/Marin-JimenezKM19 fatcat:xn2j23jiy5da7kntdjipcjznra
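For readers wanting a rough picture of the three-branch design, the following PyTorch sketch applies a shared head-track branch to each person and a small branch to their relative position before a binary LAEO classifier. The layer choices, track length, and input encodings are assumptions for illustration, not the published LAEO-Net architecture.

```python
# Rough three-branch sketch: two shared head-track branches plus a relative-position
# branch, fused into a binary LAEO / non-LAEO score.
import torch
import torch.nn as nn

class LAEOSketch(nn.Module):
    def __init__(self, T=10, embed=64):
        super().__init__()
        # Shared 3D-conv branch applied to each head track (T frames of 64x64 crops).
        self.head_branch = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, embed))
        # Branch encoding the relative position/scale of the two heads over time.
        self.relpos_branch = nn.Sequential(nn.Linear(T * 3, embed), nn.ReLU())
        self.classifier = nn.Linear(3 * embed, 2)  # LAEO vs non-LAEO

    def forward(self, head1, head2, relpos):
        f1, f2 = self.head_branch(head1), self.head_branch(head2)
        fr = self.relpos_branch(relpos)
        return self.classifier(torch.cat([f1, f2, fr], dim=1))

h1 = torch.randn(1, 3, 10, 64, 64)     # head track 1: (B, C, T, H, W), dummy data
h2 = torch.randn(1, 3, 10, 64, 64)     # head track 2
rp = torch.randn(1, 30)                # per-frame (dx, dy, scale), flattened
print(LAEOSketch()(h1, h2, rp).shape)  # torch.Size([1, 2])
```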

Analysing domain shift factors between videos and images for object detection [article]

Vicky Kalogeiton, Vittorio Ferrari, Cordelia Schmid
2016 arXiv   pre-print
Object detection is one of the most important challenges in computer vision. Object detectors are usually trained on bounding-boxes from still images. Recently, video has been used as an alternative source of data. Yet, for a given test domain (image or video), the performance of the detector depends on the domain it was trained on. In this paper, we examine the reasons behind this performance gap. We define and evaluate different domain shift factors: spatial location accuracy, appearance diversity, image quality and aspect distribution. We examine the impact of these factors by comparing performance before and after factoring them out. The results show that all four factors affect the performance of the detectors and their combined effect explains nearly the whole performance gap.
arXiv:1501.01186v3 fatcat:7nsxymvnb5dg7corjsfzdoid74

Analysing Domain Shift Factors between Videos and Images for Object Detection

Vicky Kalogeiton, Vittorio Ferrari, Cordelia Schmid
2016 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Kalogeiton is with the CALVIN team at the University of Edinburgh and with the THOTH team,  ... 
doi:10.1109/tpami.2016.2551239 pmid:27071159 fatcat:z34afj3iknd3bmtw3d2xcbfeiq

LAEO-Net++: revisiting people Looking At Each Other in videos

Manuel Jesus Marin-Jimenez, Vicky Kalogeiton, Pablo Medina-Suarez, Andrew Zisserman
2020 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Vicky Kalogeiton and Andrew Zisserman are with the University of Oxford. Emails: vicky@robots.ox.ac.uk and az@robots.ox.ac.uk means equal contribution. • Manuel J.  ...  Emails: mjmarin@uco.es and i42mesup@uco.es • Vicky Kalogeiton is at LIX, École Polytechnique and Andrew Zisserman at the University of Oxford.  ... 
doi:10.1109/tpami.2020.3048482 pmid:33382648 fatcat:abpxckm5k5dx7ghewkvmaf6sny

Area Chairs

2021 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Fredrik Kahl, Vicky Kalogeiton  ... 
doi:10.1109/cvprw53098.2021.00007 fatcat:dem3dlavzzeeppjp4bmo7svh4y
Showing results 1 — 15 out of 28 results