
Null-sampling for Interpretable and Fair Representations [article]

Thomas Kehrenberg, Myles Bartlett, Oliver Thomas, Novi Quadrianto
2020 arXiv   pre-print
We propose to learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness.  ...  To address this problem, we introduce an adversarially trained model with a null-sampling procedure to produce invariant representations in the data domain.  ...  We are grateful to NVIDIA for donating GPUs.  ... 
arXiv:2008.05248v1 fatcat:pnlcct4dsvakhpwsj6rr76zdui
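The snippet above describes an adversarially trained model that produces representations invariant to a sensitive attribute. As a minimal, generic sketch of adversarial invariance training (not the paper's exact null-sampling architecture; all module shapes and names here are illustrative assumptions), an encoder can be trained against a discriminator that tries to recover the sensitive attribute through a gradient-reversal layer:

```python
# Illustrative sketch of adversarial invariance training (hypothetical shapes,
# not the paper's null-sampling procedure).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
task_head = nn.Linear(16, 2)   # predicts the task label y
adversary = nn.Linear(16, 2)   # tries to predict the sensitive attribute s
opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
                        *adversary.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

def training_step(x, y, s):
    z = encoder(x)
    loss_task = ce(task_head(z), y)
    # The adversary sees z through gradient reversal, so minimizing its loss
    # pushes the encoder toward representations uninformative about s.
    loss_adv = ce(adversary(GradReverse.apply(z)), s)
    loss = loss_task + loss_adv
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```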

On the Global Optima of Kernelized Adversarial Representation Learning

Bashir Sadeghi, Runyi Yu, Vishnu Boddeti
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Numerical experiments on UCI, Extended Yale B and CIFAR-100 datasets indicate that, (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation  ...  We then extend this solution and analysis to non-linear functions through kernel representation.  ...  Practically, we also demonstrate the utility of Linear-ARL and Kernel-ARL for "imparting" provable invariance to any biased pre-trained data representation.  ... 
doi:10.1109/iccv.2019.00806 dblp:conf/iccv/SadeghiYB19 fatcat:3o6rdzwr3nevzefkdotkkrkpp4
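The snippet mentions "imparting" invariance to a biased pre-trained representation. A common linear baseline for this idea, shown below only as an illustrative sketch (it is not the paper's closed-form Linear-ARL optimum, and the data and names are made up), is to project fixed features onto the null space of the direction that best linearly predicts the sensitive attribute:

```python
# Minimal sketch: removing the linearly decodable sensitive direction from a
# fixed, pre-trained representation (illustrative baseline only).
import numpy as np

def debias_linear(Z, s):
    """Z: (n, d) pre-trained features; s: (n,) sensitive attribute.
    Removes the least-squares direction in Z that predicts s."""
    s_centered = s - s.mean()
    w, *_ = np.linalg.lstsq(Z, s_centered, rcond=None)  # direction encoding s
    w = w / (np.linalg.norm(w) + 1e-12)
    return Z - np.outer(Z @ w, w)                       # project it out

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))
s = (Z[:, 0] > 0).astype(float)   # sensitive attribute leaks via feature 0
Z_fair = debias_linear(Z, s)
# Correlation of the leaky feature with s, before vs. after the projection.
print(np.corrcoef(Z[:, 0], s)[0, 1], np.corrcoef(Z_fair[:, 0], s)[0, 1])
```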

On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations [article]

Yang Trista Cao and Yada Pruksachatkun and Kai-Wei Chang and Rahul Gupta and Varun Kumar and Jwala Dhamala and Aram Galstyan
2022 arXiv   pre-print
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.  ...  language representation models.  ...  We evaluate 19 popular pre-trained language models.  ...
arXiv:2203.13928v1 fatcat:c3z6xau625gapo3iyc5u5scrua

Biometrics: Trust, but Verify [article]

Anil K. Jain, Debayan Deb, Joshua J. Engelsma
2021 arXiv   pre-print
Finally, we provide insights into how the biometric community can address core biometric recognition systems design issues to better instill trust, fairness, and security for all.  ...  ), iii) uncertainty over the bias and fairness of the systems to all users, iv) explainability of the seemingly black-box decisions made by most recognition systems, and v) concerns over data centralization  ...  , 3) Explainability and Interpretability, 4) Biasness and Fairness, and 5) Privacy.  ... 
arXiv:2105.06625v2 fatcat:gnii3qxufzevzlkfimhmmwof7i

Executive Function: A Contrastive Value Policy for Resampling and Relabeling Perceptions via Hindsight Summarization? [article]

Chris Lengerich, Ben Lengerich
2022 arXiv   pre-print
This is made feasible by the use of a memory policy and a pretrained network with inductive biases for a grammar of learning and is trained to maximize evolutionary survival.  ...  minimize attended prediction error, similar to an online prompt engineering problem.  ...  Although specifically how to learn causal world models is still an open research question, imparting inductive biases for sparse representations has been useful for improving neural network performance  ... 
arXiv:2204.12639v1 fatcat:x7oxt3otxzez3prdrrivvutfky

Accelerating the acquisition of knowledge structure to improve performance in internal control reviews

A. Faye Borthick, Mary B. Curtis, Ram S. Sriram
2006 Accounting, Organizations and Society  
We demonstrated that knowledge structure training is effective in imparting transaction flow and control objective knowledge structures and that knowledge structure mediates the relationship between structure training and performance in internal control reviews.  ...  For helpful comments, the authors are indebted to the editor, two anonymous reviewers, Michael Bamber, Karen Braun, Scott Butterfield, Bryan Church, Kathryn Epps, Lyn Graham, Steve Kaplan, Lisa Koonce,  ...
doi:10.1016/j.aos.2005.12.001 fatcat:ulqnp6pgjzgfpdgdszqwgsbiou

TARA: Training and Representation Alteration for AI Fairness and Domain Generalization [article]

William Paul, Armin Hadzic, Neil Joshi, Fady Alajaji, Phil Burlina
2021 arXiv   pre-print
We propose a novel method for enforcing AI fairness with respect to protected or sensitive factors.  ...  via adversarial independence to suppress the bias-inducing dependence of the data representation from protected factors; and b) training set alteration via intelligent augmentation to address bias-causing  ...  All adversarial modules were pre-trained for 100 epochs with the prediction module frozen.  ... 
arXiv:2012.06387v4 fatcat:q27pshiwnnae7jkbqxusttd24q
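The snippet names two components: adversarial independence and training-set alteration via augmentation. The fragment below sketches only the second idea in its simplest generic form, rebalancing the joint (label, protected-attribute) distribution by oversampling; it is an assumption-laden illustration, not TARA's specific "intelligent augmentation".

```python
# Generic rebalancing of the (label, protected-attribute) cells by oversampling.
import random
from collections import defaultdict

def rebalance(samples):
    """samples: list of dicts with keys 'x', 'y', 'protected'.
    Oversamples each (y, protected) cell up to the size of the largest cell."""
    cells = defaultdict(list)
    for item in samples:
        cells[(item["y"], item["protected"])].append(item)
    target = max(len(cell) for cell in cells.values())
    balanced = []
    for cell in cells.values():
        balanced.extend(cell)
        balanced.extend(random.choices(cell, k=target - len(cell)))
    random.shuffle(balanced)
    return balanced
```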

Addressing Artificial Intelligence Bias in Retinal Disease Diagnostics [article]

Philippe Burlina, Neil Joshi, William Paul, Katia D. Pacheco, Neil M. Bressler
2020 arXiv   pre-print
This study evaluated generative methods to potentially mitigate AI bias when diagnosing diabetic retinopathy (DR) resulting from training data imbalance, or domain generalization which occurs when deep learning systems (DLS) face concepts at test/inference time they were not initially trained on.  ...  Since this study aimed to keep class balance across diseased and healthy retinas, so as not to impart additional artificial bias, the combined number of training and validation fundi consisted of a total  ...
arXiv:2004.13515v4 fatcat:r2bm7pwtgbe6lnluqyjjshpkga

On the Global Optima of Kernelized Adversarial Representation Learning [article]

Bashir Sadeghi, Runyi Yu, Vishnu Naresh Boddeti
2019 arXiv   pre-print
Numerical experiments on UCI, Extended Yale B and CIFAR-100 datasets indicate that, (a) practically, our solution is ideal for "imparting" provable invariance to any biased pre-trained data representation  ...  We then extend this solution and analysis to non-linear functions through kernel representation.  ...  This setup serves as an example to illustrate how invariance can be "imparted" to an existing biased pre-trained representation.  ... 
arXiv:1910.07423v2 fatcat:h7vmk7hzxrfhtpdjazvv5zcwoe

Dual-branch Hybrid Learning Network for Unbiased Scene Graph Generation [article]

Chaofan Zheng, Lianli Gao, Xinyu Lyu, Pengpeng Zeng, Abdulmotaleb El Saddik, Heng Tao Shen
2022 arXiv   pre-print
However, most de-biasing methods overemphasize the tail predicates and underestimate head ones throughout training, thereby wrecking the representation ability of head predicate features.  ...  Thus, these de-biasing SGG methods can neither achieve excellent performance on tail predicates nor satisfying behaviors on head ones.  ...  Next, we map each probability distribution of predicates and objects to a 200-dimensional vector with a pre-trained word embedding model (GloVe) to obtain the predicate semantic representation s_p and  ...
arXiv:2207.07913v1 fatcat:pioyuakglnh37dd7wlbjhn2tzi
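The snippet describes mapping a probability distribution over predicate classes to a 200-dimensional semantic vector using pre-trained GloVe embeddings. A minimal sketch of that step is shown below; the probability-weighted sum and the embedding table are assumptions for illustration, and the paper's exact fusion may differ.

```python
# Sketch: probability-weighted GloVe embedding of predicate predictions.
import torch

num_predicates, dim = 50, 200
# Assumed to be loaded from pre-trained GloVe vectors, one row per predicate class.
glove_table = torch.randn(num_predicates, dim)

def predicate_semantic_repr(pred_logits):
    """pred_logits: (batch, num_predicates) raw scores from the classifier."""
    probs = torch.softmax(pred_logits, dim=-1)
    # s_p is the probability-weighted sum of class embeddings: (batch, 200).
    return probs @ glove_table

s_p = predicate_semantic_repr(torch.randn(4, num_predicates))
print(s_p.shape)  # torch.Size([4, 200])
```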

InvGAN: Invertible GANs [article]

Partha Ghosh, Dominik Zietlow, Michael J. Black, Larry S. Davis, Xiaochen Hu
2021 arXiv   pre-print
Despite numerous efforts to train an inference model or design an iterative method to invert a pre-trained generator, previous methods are dataset (e.g. human face images) and architecture (e.g.  ...  Our key insight is that, by training the inference and the generative model together, we allow them to adapt to each other and to converge to a better quality model.  ...  InvGAN addresses this problem and can be used to support representation learning [8, 9] , data augmentation [5, 10] and algorithmic fairness [11] [12] [13] .  ... 
arXiv:2112.04598v2 fatcat:ivzs54nkmzacxjwo66bmcrasuy
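The key insight quoted above is that the inference (encoder) network and the generator are trained together. As a rough sketch of that idea (hypothetical shapes and loss weights, not InvGAN's exact objective; the discriminator update is omitted), a generator step can combine the usual adversarial term with reconstruction terms in both image and latent space:

```python
# Sketch of jointly training an encoder with a GAN generator.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
E = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 32))
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam([*G.parameters(), *E.parameters()], lr=2e-4)

def generator_step(real_x):
    z = torch.randn(real_x.size(0), 32)
    fake_x = G(z)
    adv = bce(D(fake_x), torch.ones(real_x.size(0), 1))  # fool the discriminator
    recon = (G(E(real_x)) - real_x).pow(2).mean()         # invert real images
    cycle = (E(fake_x) - z).pow(2).mean()                  # invert generated images
    loss = adv + recon + cycle
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```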

Developing a framework to address health equity and racism within pharmacy education: Rx-HEART

Lakesha M. Butler, Vibhuti Arya, Nkem P. Nonyel, Terri Smith Moore
2021 American Journal of Pharmaceutical Education  
The five-phase framework, Pharmacy Health Equity Anti-Racism Training (Rx-HEART) provides guidance on how to accomplish the objectives described in this paper and the theme issue on social injustice.  ...  To identify gaps in health equity and anti-racism education across the pharmacy curriculum, define the key health equity and anti-racism concepts that are suggested to be included across the pharmacy curriculum  ...  through pre-clinical and clinical education," and "infuse clinical training with a structural focus … coupled with medical models for structural change." 19, 23 According to the authors, the five core  ... 
doi:10.5688/ajpe8590 pmid:34301560 pmcid:PMC8655143 fatcat:aohh4j47a5e2zgm2eo6svmvxyy

Automated experiment in 4D-STEM: exploring emergent physics and structural behaviors [article]

Kevin M. Roccapriore, Ondrej Dyck, Mark P. Oxley, Maxim Ziatdinov, Sergei V. Kalinin
2022 arXiv   pre-print
With this, efficient and "intelligent" probing of dissimilar structural elements to discover desired physical functionality is made possible.  ...  We verify the approach first on pre-acquired 4D-STEM data, and further implement it experimentally on an operational STEM.  ...  We also note there are opportunities to increase the rate of the training by using pre-acquired data to train invariant variational autoencoders, 39 and then use the pre-trained weights to initialize  ... 
arXiv:2112.04479v2 fatcat:l2h3ppjdfjcjdhupzeivkwqwqi
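The snippet notes that pre-acquired data can be used to pre-train an (invariant variational) autoencoder whose weights then initialize the model used during the live experiment. The fragment below sketches only the weight-transfer mechanics with a plain autoencoder encoder; the architecture and file names are hypothetical.

```python
# Sketch: warm-starting the live model from weights pre-trained on archived data.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(nn.Linear(1024, 128), nn.ReLU(), nn.Linear(128, 8))

# 1) Pre-train on previously acquired 4D-STEM patterns (training loop omitted).
pretrained = make_encoder()
torch.save(pretrained.state_dict(), "pretrained_encoder.pt")

# 2) During the automated experiment, initialize from the saved weights.
live_encoder = make_encoder()
live_encoder.load_state_dict(torch.load("pretrained_encoder.pt"))
```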

Effectiveness of Communication Skills Training in Medical Students Using Simulated Patients or Volunteer Outpatients

Adlene I Adnan
2022 Cureus  
Study design, sample selection, and biases were scrutinized for each study. Various adult learning theories were used to correlate the effects of the communication skills training.  ...  However, it remains unclear which types of patients to use for better development of practical communication skills training.  ...  Sample selection and biases: The sampling method of studies by Clever et al. [3] and Elley et al. [4] aimed to achieve demographic variation for a good representation of the population.  ...
doi:10.7759/cureus.26717 fatcat:z4fxzxku7jg7vp46krefgoe4pm

Building machines that learn and think for themselves

Matthew Botvinick, David G. T. Barrett, Peter Battaglia, Nando de Freitas, Darshan Kumaran, Joel Z Leibo, Timothy Lillicrap, Joseph Modayil, Shakir Mohamed, Neil C. Rabinowitz, Danilo J. Rezende, Adam Santoro (+7 others)
2017 Behavioral and Brain Sciences  
Under the approach we advocate, high-level prior knowledge and learning biases can be installed not only at the level of representational structure, but also through larger-scale architectural and algorithmic  ...  However, it is not clear why their pre-installed model is to be preferred over knowledge acquired through pretraining.  ... 
doi:10.1017/s0140525x17000048 pmid:29342685 fatcat:j35s4wku5zagppz4pgc32spawm