5 Hits in 3.2 sec

CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation [article]

Aditya Sanghi and Hang Chu and Joseph G. Lambourne and Ye Wang and Chin-Yi Cheng and Marco Fumero and Kamal Rahimi Malekshan
2022 arXiv   pre-print
We present a simple yet effective method for zero-shot text-to-shape generation that circumvents such data scarcity.  ...  We not only demonstrate promising zero-shot generalization of the CLIP-Forge model qualitatively and quantitatively, but also provide extensive comparative evaluations to better understand its behavior  ...  Leveraging the progress of text-to-image generation, we present CLIP-Forge, a two-stage training method for zero-shot text-to-shape generation.  ... 
arXiv:2110.02624v2 fatcat:zb4nyevqjjgzxd3cbiibxz3rwi
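
The CLIP-Forge abstract above describes a two-stage recipe in which a shape generator is trained without paired text data and then conditioned on CLIP embeddings, so text prompts work zero-shot at inference. Below is a minimal sketch of that inference path, assuming OpenAI's `clip` package; `ShapeDecoder` is a hypothetical stand-in for the paper's pretrained generator, not its real API.

```python
# Minimal sketch (not the paper's code) of CLIP-conditioned shape generation.
# Assumes OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

class ShapeDecoder(torch.nn.Module):
    """Hypothetical stand-in for a pretrained shape generator:
    CLIP embedding (512-d for ViT-B/32) -> 32^3 occupancy grid."""
    def __init__(self, cond_dim=512, grid=32):
        super().__init__()
        self.grid = grid
        self.net = torch.nn.Sequential(
            torch.nn.Linear(cond_dim, 1024),
            torch.nn.ReLU(),
            torch.nn.Linear(1024, grid ** 3),
        )

    def forward(self, cond):
        logits = self.net(cond)
        return logits.view(-1, self.grid, self.grid, self.grid).sigmoid()

decoder = ShapeDecoder().to(device)  # would be trained on shapes alone

with torch.no_grad():
    tokens = clip.tokenize(["a round table with four legs"]).to(device)
    text_emb = clip_model.encode_text(tokens).float()
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    occupancy = decoder(text_emb)  # zero-shot: no paired (text, shape) data

print(occupancy.shape)  # torch.Size([1, 32, 32, 32])
```

The zero-shot property leans on CLIP's shared image-text embedding space: a generator conditioned on that space during training can accept text embeddings it never saw at test time.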

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars [article]

Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, Ziwei Liu
2022 arXiv   pre-print
To democratize this technology to a larger audience, we propose AvatarCLIP, a zero-shot text-driven framework for 3D avatar generation and animation.  ...  Remarkably, AvatarCLIP can generate unseen 3D avatars with novel animations, achieving superior zero-shot capability.  ...  Fortunately, recent advances in vision-language models pave the way toward zero-shot text-driven generation. CLIP is a vision-language pre-trained model trained with large-scale image-text pairs.  ...
arXiv:2205.08535v1 fatcat:ybbcbmjs2fckljkkx3pqgymejy
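
The last snippet above notes that CLIP is pre-trained on large-scale image-text pairs, which is what makes zero-shot text-driven supervision possible. Here is a minimal sketch of the basic zero-shot scoring CLIP provides; the image path and candidate prompts are placeholders.

```python
# Minimal CLIP zero-shot scoring sketch; "avatar_render.png" is a placeholder.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("avatar_render.png")).unsqueeze(0).to(device)
prompts = ["a tall muscular knight", "a slim ninja in black"]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(tokens)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)  # unit vectors, so the
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)  # dot product is cosine
    sims = (img_f @ txt_f.T).squeeze(0)

for p, s in zip(prompts, sims.tolist()):
    print(f"{s:.3f}  {p}")
```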

Zero-Shot Text-Guided Object Generation with Dream Fields [article]

Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole
2022 arXiv   pre-print
Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web.  ...  We combine neural rendering with multi-modal image and text representations to synthesize diverse 3D objects solely from natural language descriptions.  ...  to novel concepts zero-shot.  ... 
arXiv:2112.01455v2 fatcat:z3mktn6omfcg7hhwcfnowg2p7a
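
The Dream Fields snippet says generation is guided by a pre-trained image-text model rather than 3D supervision. The sketch below shows the shape of that optimization loop under a strong simplification: a learnable image stands in for the differentiable NeRF renderer the paper actually uses, and CLIP's input normalization and view augmentations are omitted.

```python
# Sketch of CLIP-guided optimization; a learnable 224x224 image replaces the
# paper's differentiable NeRF renderer so the loop stays self-contained.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()              # avoid fp16/fp32 dtype issues on GPU
for p in model.parameters():
    p.requires_grad_(False)        # CLIP stays frozen; only the scene moves

scene = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([scene], lr=0.05)

tokens = clip.tokenize(["a bowl of fruit on a table"]).to(device)
with torch.no_grad():
    txt = model.encode_text(tokens)
    txt = txt / txt.norm(dim=-1, keepdim=True)

for step in range(200):
    img = model.encode_image(scene.clamp(0, 1))
    img = img / img.norm(dim=-1, keepdim=True)
    loss = -(img * txt).sum()      # maximize image-text similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real pipelines additionally apply CLIP's pixel normalization and random crops or augmentations at each step; without them, a bare loop like this tends to drift toward adversarial textures rather than recognizable objects.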

Text2Mesh: Text-Driven Neural Stylization for Meshes [article]

Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, Rana Hanocka
2021 arXiv   pre-print
Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which conform to a target text prompt.  ...  In order to modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP.  ...  Clip-Forge: Towards zero-shot text-to-shape generation.  ...
arXiv:2112.03221v1 fatcat:2mfgjh37lna5hnjj6pvms6zuey
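
The Text2Mesh snippet defines its training signal as a CLIP similarity between a style prompt and the stylized mesh. Since one mesh yields many 2D views, that score is naturally averaged over renders; the sketch below uses a random placeholder batch in place of actual differentiable mesh renders from sampled camera poses.

```python
# Multi-view CLIP score sketch; `views` is a placeholder for differentiable
# renders of the stylized mesh from several camera poses.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()

views = torch.rand(8, 3, 224, 224, device=device)  # 8 placeholder renders
tokens = clip.tokenize(["a chair made of woven wicker"]).to(device)

with torch.no_grad():
    img = model.encode_image(views)
    txt = model.encode_text(tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    score = (img @ txt.T).mean()   # one scalar score across all views

print(f"style similarity: {score.item():.3f}")
```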

Polar Bears" in the Land of Lice and Snow: The American Soldier Experience in North Russia

Jake Zellner
unpublished
McKenzie Papers, 38 (1/7/19). 132 BHL: Cleo Colburn, "Diary", 3. 133 BHL: Douma, "Diary", 13 (1/20/19). 134 BHL: Douma, "Diary", 18 (4/8-13/19).  ...  138 BHL: Arkins, newspaper clipping, "Forging ahead after Bolsheviki: E.  ...
fatcat:so5owvzcojf2thcib63xj2pt2q