"This is my unicorn, Fluffy": Personalizing frozen vision-language representations
[article]
2022
arXiv
pre-print
..., 2021), through image generation (Gal et al., 2021; Patashnik et al., 2021) and segmentation (Zabari & Hoshen, 2021; Li et al., 2022), to robotic manipulation (Shridhar et al., 2022). ...
arXiv:2204.01694v1
fatcat:dzhe2h5tlzcf5eybgi7t25uuv4
LARGE: Latent-Based Regression through GAN Semantics
[article]
2021
arXiv
pre-print
We propose a novel method for solving regression tasks using few-shot or weak supervision. At the core of our method is the fundamental observation that GANs are incredibly successful at encoding semantic information within their latent space, even in a completely unsupervised setting. For modern generative frameworks, this semantic encoding manifests as smooth, linear directions which affect image attributes in a disentangled manner. These directions have been widely used in GAN-based image editing. We show that such directions are not only linear, but that the magnitude of change induced on the respective attribute is approximately linear with respect to the distance traveled along them. By leveraging this observation, our method turns a pre-trained GAN into a regression model, using as few as two labeled samples. This enables solving regression tasks on datasets and attributes which are difficult to produce quality supervision for. Additionally, we show that the same latent-distances can be used to sort collections of images by the strength of given attributes, even in the absence of explicit supervision. Extensive experimental evaluations demonstrate that our method can be applied across a wide range of domains, leverage multiple latent direction discovery frameworks, and achieve state-of-the-art results in few-shot and low-supervision settings, even when compared to methods designed to tackle a single task.
arXiv:2107.11186v1
fatcat:vsvubx2ijnfurkpzhisliuhaci
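A minimal Python sketch of the calibration step this abstract describes, assuming a semantic latent direction has already been found by some discovery framework: with the attribute varying roughly linearly along the direction, two labeled latent codes suffice to fit a one-dimensional regressor. The names (direction, project, predict) and the random stand-in latents are illustrative assumptions, not the authors' implementation.

# Hedged sketch: distances travelled along a semantic latent direction vary
# (approximately) linearly with the attribute, so two labeled samples are
# enough to calibrate a regressor.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# A (pretend) semantic direction found by some discovery method, e.g. SeFa or GANSpace.
direction = rng.normal(size=latent_dim)
direction /= np.linalg.norm(direction)

def project(w):
    """Signed distance of a latent code along the semantic direction."""
    return float(w @ direction)

# Two labeled anchor latents (latent code, attribute value), e.g. age in years.
w_a, y_a = rng.normal(size=latent_dim), 20.0
w_b, y_b = rng.normal(size=latent_dim), 60.0

# Fit the 1-D linear map y = slope * projection + intercept from the two anchors.
p_a, p_b = project(w_a), project(w_b)
slope = (y_b - y_a) / (p_b - p_a)
intercept = y_a - slope * p_a

def predict(w):
    return slope * project(w) + intercept

w_query = rng.normal(size=latent_dim)  # latent of a new (inverted) image
print("predicted attribute:", predict(w_query))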
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
[article]
2021
arXiv
pre-print
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
arXiv:2108.00946v2
fatcat:lnn4ydsoenauxbpu6ijpm3ccn4
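The kind of text-guided shift described above is typically driven by a loss defined in CLIP space; below is a hedged PyTorch sketch of a directional objective that aligns the image-space change between a frozen and a trainable generator with the text-space change between a source and a target prompt. The encode_text and encode_image functions are placeholders returning pseudo-random unit vectors, standing in for a real CLIP encoder; this is a sketch of the idea, not the paper's code.

# Hedged sketch of a CLIP-style "directional" loss: the direction between the
# source and target text embeddings should match the direction between images
# produced by the frozen generator and its trainable copy.
import torch
import torch.nn.functional as F

def encode_text(prompt: str) -> torch.Tensor:
    # Placeholder: a real implementation would call clip_model.encode_text().
    torch.manual_seed(abs(hash(prompt)) % (2 ** 31))
    return F.normalize(torch.randn(512), dim=0)

def encode_image(img: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real implementation would call clip_model.encode_image().
    return F.normalize(img.flatten()[:512], dim=0)

def directional_loss(img_frozen, img_trainable, src_prompt, tgt_prompt):
    text_dir = F.normalize(encode_text(tgt_prompt) - encode_text(src_prompt), dim=0)
    img_dir = F.normalize(encode_image(img_trainable) - encode_image(img_frozen), dim=0)
    return 1.0 - torch.dot(text_dir, img_dir)  # 1 - cosine similarity

# Usage: images produced from the same latent by the frozen generator and its fine-tuned copy.
frozen_out = torch.randn(3, 256, 256)
trainable_out = torch.randn(3, 256, 256, requires_grad=True)
loss = directional_loss(frozen_out, trainable_out, "photo", "sketch")
loss.backward()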
SWAGAN: A Style-based Wavelet-driven Generative Model
[article]
2021
arXiv
pre-print
In recent years, considerable progress has been made in the visual quality of Generative Adversarial Networks (GANs). Even so, these networks still suffer from degradation in quality for high-frequency content, stemming from a spectrally biased architecture, and similarly unfavorable loss functions. To address this issue, we present a novel general-purpose Style and WAvelet based GAN (SWAGAN) that implements progressive generation in the frequency domain. SWAGAN incorporates wavelets throughout its generator and discriminator architectures, enforcing a frequency-aware latent representation at every step of the way. This approach yields enhancements in the visual quality of the generated images, and considerably increases computational performance. We demonstrate the advantage of our method by integrating it into the StyleGAN2 framework, and verifying that content generation in the wavelet domain leads to higher quality images with more realistic high-frequency content. Furthermore, we verify that our model's latent space retains the qualities that allow StyleGAN to serve as a basis for a multitude of editing tasks, and show that our frequency-aware approach also induces improved downstream visual quality.
arXiv:2102.06108v1
fatcat:zreadqhmufhsznwbkcb6gp2yrm
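To picture "generation in the frequency domain", the sketch below shows a single toy synthesis step in NumPy: four wavelet sub-bands (as a generator block might emit) are combined by an inverse Haar transform into an image at twice the resolution. The function and the random sub-bands are purely illustrative and do not reproduce the SWAGAN architecture.

# Hedged sketch: a generator predicts a low-pass band plus three detail bands,
# and pixels are obtained by an inverse wavelet (here: Haar) synthesis step,
# so high-frequency content is modelled explicitly.
import numpy as np

def inverse_haar_2d(ll, d1, d2, hh):
    """Reconstruct a (2H, 2W) image from four (H, W) Haar sub-bands."""
    h, w = ll.shape
    img = np.zeros((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + d1 + d2 + hh) / 2.0
    img[0::2, 1::2] = (ll - d1 + d2 - hh) / 2.0
    img[1::2, 0::2] = (ll + d1 - d2 - hh) / 2.0
    img[1::2, 1::2] = (ll - d1 - d2 + hh) / 2.0
    return img

# Pretend these four bands were produced by a generator block from a latent code.
rng = np.random.default_rng(0)
ll, d1, d2, hh = (rng.normal(size=(64, 64)) for _ in range(4))
image = inverse_haar_2d(ll, d1, d2, hh)
print(image.shape)  # (128, 128): one frequency-aware upsampling step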
Self-Conditioned Generative Adversarial Networks for Image Editing
[article]
2022
arXiv
pre-print
Generative Adversarial Networks (GANs) are susceptible to bias, learned from either the unbalanced data, or through mode collapse. The networks focus on the core of the data distribution, leaving the tails - or the edges of the distribution - behind. We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating away from the distribution's core. Building on this observation, we outline a method for mitigating generative bias through a self-conditioning process, where distances in the latent-space of a pre-trained generator are used to provide initial labels for the data. By fine-tuning the generator on a re-sampled distribution drawn from these self-labeled data, we force the generator to better contend with rare semantic attributes and enable more realistic generation of these properties. We compare our models to a wide range of latent editing methods, and show that by alleviating the bias they achieve finer semantic control and better identity preservation through a wider range of transformations. Our code and models will be available at https://github.com/yzliu567/sc-gan
arXiv:2202.04040v1
fatcat:q5dfsdnllzhcznk4kuxi2zh3ui
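A hedged NumPy sketch of the self-conditioning step described above: latent codes of generated samples are clustered to obtain pseudo-labels, and inverse-frequency weights then over-sample rare clusters when drawing fine-tuning batches. The tiny k-means and the exact weighting rule are illustrative assumptions, not the paper's recipe.

# Hedged sketch: pseudo-label samples by latent-space distances, then
# re-sample so that rare clusters (the distribution's tails) are seen more
# often during fine-tuning.
import numpy as np

rng = np.random.default_rng(0)
w_codes = rng.normal(size=(2000, 512))  # stand-in latents of generated samples

# Tiny k-means in latent space to obtain self-labels.
k, iters = 8, 10
centers = w_codes[rng.choice(len(w_codes), size=k, replace=False)]
for _ in range(iters):
    dists = np.stack([np.linalg.norm(w_codes - c, axis=1) for c in centers], axis=1)
    labels = dists.argmin(axis=1)
    centers = np.stack([w_codes[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])

# Inverse-frequency sampling weights: rare clusters get over-sampled.
counts = np.maximum(np.bincount(labels, minlength=k), 1).astype(float)
weights = (1.0 / counts)[labels]
weights /= weights.sum()

# A re-balanced batch of indices to fine-tune the generator/discriminator on.
batch_idx = rng.choice(len(w_codes), size=64, p=weights)
print(np.bincount(labels[batch_idx], minlength=k))  # roughly uniform over clusters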
MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part Disentanglement
[article]
2020
arXiv
pre-print
We present MRGAN, a multi-rooted adversarial network which generates part-disentangled 3D point-cloud shapes without part-based shape supervision. The network fuses multiple branches of tree-structured graph convolution layers which produce point clouds, with learnable constant inputs at the tree roots. Each branch learns to grow a different shape part, offering control over the shape generation at the part level. Our network encourages disentangled generation of semantic parts via two key ingredients: a root-mixing training strategy which helps decorrelate the different branches to facilitate disentanglement, and a set of loss terms designed with part disentanglement and shape semantics in mind. Of these, a novel convexity loss incentivizes the generation of parts that are more convex, as semantic parts tend to be. In addition, a root-dropping loss further ensures that each root seeds a single part, preventing the degeneration or over-growth of the point-producing branches. We evaluate the performance of our network on a number of 3D shape classes, and offer qualitative and quantitative comparisons to previous works and baseline approaches. We demonstrate the controllability offered by our part-disentangled generation through two applications for shape modeling: part mixing and individual part variation, without receiving segmented shapes as input.
arXiv:2007.12944v1
fatcat:7twrpqwk5feftkughbzykwui4m
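To make the multi-rooted structure concrete, here is a hedged PyTorch sketch in which each branch owns a learnable constant root, emits one part of the point cloud, and roots can be permuted across samples to mimic root-mixing. The plain MLP branches, layer sizes, and noise conditioning are illustrative stand-ins for the paper's tree-structured graph-convolution branches and are not the authors' implementation.

# Hedged sketch of a multi-rooted point-cloud generator: one learnable root
# per branch, one part per branch, union of parts as the final shape.
import torch
import torch.nn as nn

class MultiRootGenerator(nn.Module):
    def __init__(self, n_roots=4, root_dim=96, z_dim=64, points_per_part=256):
        super().__init__()
        self.roots = nn.Parameter(torch.randn(n_roots, root_dim))  # learnable constants
        self.points_per_part = points_per_part
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(root_dim + z_dim, 256), nn.ReLU(),
                nn.Linear(256, points_per_part * 3),
            )
            for _ in range(n_roots)
        ])

    def forward(self, z, root_perm=None):
        # root_perm lets us "mix" roots across branches (e.g. a random permutation).
        roots = self.roots if root_perm is None else self.roots[root_perm]
        parts = []
        for branch, root in zip(self.branches, roots):
            inp = torch.cat([root.expand(z.size(0), -1), z], dim=1)
            parts.append(branch(inp).view(z.size(0), self.points_per_part, 3))
        return torch.cat(parts, dim=1)  # (batch, n_roots * points_per_part, 3)

gen = MultiRootGenerator()
z = torch.randn(8, 64)
cloud = gen(z)                               # regular generation
mixed = gen(z, root_perm=torch.randperm(4))  # root-mixing variant
print(cloud.shape, mixed.shape)              # torch.Size([8, 1024, 3]) twice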
Stitch it in Time: GAN-Based Facial Editing of Real Videos
[article]
2022
arXiv
pre-print
The ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing. However, replicating their success with videos has proven challenging. Sets of high-quality facial videos are lacking, and working with videos introduces a fundamental barrier to overcome - temporal coherency. We propose that this barrier is largely artificial. The source video is already temporally coherent, and deviations from this state arise in part due to careless treatment of individual components in the editing pipeline. We leverage the natural alignment of StyleGAN and the tendency of neural networks to learn low frequency functions, and demonstrate that they provide a strongly consistent prior. We draw on these insights and propose a framework for semantic editing of faces in videos, demonstrating significant improvements over the current state-of-the-art. Our method produces meaningful face manipulations, maintains a higher degree of temporal consistency, and can be applied to challenging, high quality, talking head videos which current methods struggle with.
arXiv:2201.08361v2
fatcat:3xnfbxo6ijamnowimqh4yay2hu
State-of-the-Art in the Architecture, Methods and Applications of StyleGAN
[article]
2022
arXiv
pre-print
Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, and Daniel Cohen-Or
arXiv:2202.14020v1
fatcat:qu3plbdnszdujcwxwq3zizysje
Stitch it in Time: GAN-Based Facial Editing of Real Videos
[article]
2022
The ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing. However, replicating their success with videos has proven challenging. Sets of high-quality facial videos are lacking, and working with videos introduces a fundamental barrier to overcome - temporal coherency. We propose that this barrier is largely artificial. The source video is already temporally coherent, and deviations from this state arise in part due to careless treatment of individual components in the editing pipeline. We leverage the natural alignment of StyleGAN and the tendency of neural networks to learn low frequency functions, and demonstrate that they provide a strongly consistent prior. We draw on these insights and propose a framework for semantic editing of faces in videos, demonstrating significant improvements over the current state-of-the-art. Our method produces meaningful face manipulations, maintains a higher degree of temporal consistency, and can be applied to challenging, high quality, talking head videos which current methods struggle with.
doi:10.48550/arxiv.2201.08361
fatcat:35hrt3ndb5grxdqh6nuq5depo4
Renal xenotransplantation: Acute vascular rejection
2008
Actas Urológicas Españolas
Baboon, pig, kidney. ...
In the US in 2005, the waiting list for organ transplants (kidney, heart, liver, lung, pancreas) stood at 94,419. ...
In the studies performed, no α-Gal-specific or non-α-Gal-specific xenoreactive antibodies were detected, and the kidneys showed no signs of rejection. ...
doi:10.4321/s0210-48062008000100015
fatcat:26owci44traclnidxtpko2ajsy
hDAF pig-to-baboon renal xenotransplantation: Experience and review
2004
Actas Urológicas Españolas
(Figure 3 fragment: IgM, anti-Gal IgG, APA.) ...
Upon completion of the transplant, bilateral nephrectomy of the native kidneys is performed and, finally, a biopsy of the transplanted kidney. ...
doi:10.4321/s0210-48062004000300001
fatcat:i4sivbxo2je5vp5fnxk7ac3ppq
Acute cellular rejection in an ex vivo pig-to-human renal xenotransplantation model
2004
Actas Urológicas Españolas
ABSTRACT. OBJECTIVES: Development of an ex vivo model of perfusion of pig kidneys with human blood, attempting to reproduce ...
METHODS: Perfusion of pig kidneys for 3 hours with pig blood (group 1; n=5), human blood (group 2; n=5), heat-decomplemented human blood (group 3; n=5), platelet-depleted human blood ...
Natural human antibodies that bind to pig cells are directed mainly against the disaccharide galactose-α1-3 galactose (Gal α1-3 Gal) [9]. ...
doi:10.4321/s0210-48062004000200006
fatcat:ntq6huzqrncarpslfedt6tizey
From allotransplantation to xenotransplantation: Donor-recipient antigenic compatibility through the Major Histocompatibility Complex (MHC)
2006
Nova
One possible solution lies in inducing tolerance to porcine Gal-α-1,3-Gal antigens, so that T lymphocytes do not act and therefore do not mount rejection against the organ. ...
As a strategy to reduce organ rejection, the possibility of suppressing Gal-α-1,3-Gal expression by means of enzymes that can interfere with the expression of this ...
doi:10.22490/24629448.364
fatcat:idd3uoxv4vd4la3qrxkgrkadba
Fabry disease
2021
Revista Colombiana de Cardiología
...), due to mutations in the gene encoding the αGAL protein. ...
Abstract: Fabry disease (FD) is an X-linked lysosomal storage disorder caused by reduced or absent activity of the hydrolase α-galactosidase A (αGAL) enzyme due to mutations in the gene encoding the αGAL ...
Since 2001, enzyme replacement therapy with recombinant human αGAL (rhαGAL, recombinant human α-galactosidase A) has been available to treat FD. ...
doi:10.24875/rccar.m21000037
fatcat:c6xvfaqrybewnkn6xbjktgzfiy
Cardiovascular involvement in Fabry disease
2008
Revista Colombiana de Cardiología
In adults, the most affected organs are the heart, kidneys, and brain. Cardiac involvement is particularly important, as it is one of the main causes of morbidity and mortality. ...
In adults, the most involved organs are the heart, the kidneys, and the brain. ...
Involvement is systemic, preferentially affecting the kidney, heart, and brain. ...
doaj:736fd7568f12469e8a8b255021967d97
fatcat:fjh6jo52f5hshl5qop2ajyqaxm
Showing results 1 — 15 out of 210 results