164 Hits in 1.1 sec

Fractional ridge regression: a fast, interpretable reparameterization of ridge regression [article]

Ariel Rokem, Kendrick Kay
2020 arXiv   pre-print
Ridge regression (RR) is a regularization technique that penalizes the L2-norm of the coefficients in linear regression. One of the challenges of using RR is the need to set a hyperparameter (α) that controls the amount of regularization. Cross-validation is typically used to select the best α from a set of candidates. However, efficient and appropriate selection of α can be challenging, particularly where large amounts of data are analyzed. Because the selected α depends on the scale of the data and predictors, it is not straightforwardly interpretable. Here, we propose to reparameterize RR in terms of the ratio γ between the L2-norms of the regularized and unregularized coefficients. This approach, called fractional RR (FRR), has several benefits: the solutions obtained for different γ are guaranteed to vary, guarding against wasted calculations, and automatically span the relevant range of regularization, avoiding the need for arduous manual exploration. We provide an algorithm to solve FRR, as well as open-source software implementations in Python and MATLAB (https://github.com/nrdg/fracridge). We show that the proposed method is fast and scalable for large-scale data problems, and delivers results that are straightforward to interpret and compare across models and datasets.
arXiv:2005.03220v1 fatcat:kpqlyt2wdvbhvcy6bbup6zlfla
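The γ-reparameterization described in the abstract can be sketched in a few lines of numpy. This is a hypothetical illustration only, not the paper's fracridge algorithm (which solves the problem far more efficiently by reusing a single SVD and interpolating over α); the helper names are made up. It brute-forces a grid search for the α whose ridge solution has the target norm ratio γ relative to the unregularized (OLS) solution.

```python
import numpy as np

def ridge_coefs(X, y, alpha):
    """Ridge solution via SVD: beta = V diag(s / (s^2 + alpha)) U^T y.
    With alpha = 0 this reduces to the (full-rank) OLS solution."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s**2 + alpha)
    return Vt.T @ (d * (U.T @ y))

def alpha_for_gamma(X, y, gamma, grid=np.logspace(-6, 6, 200)):
    """Find the alpha on a log-spaced grid whose solution norm is
    closest to gamma * ||OLS solution||. The norm ratio decreases
    monotonically in alpha, so the closest grid point is well defined."""
    ols_norm = np.linalg.norm(ridge_coefs(X, y, 0.0))
    ratios = np.array([np.linalg.norm(ridge_coefs(X, y, a)) for a in grid]) / ols_norm
    return grid[np.argmin(np.abs(ratios - gamma))]
```

For example, `alpha_for_gamma(X, y, 0.5)` returns an α whose ridge solution has roughly half the L2-norm of the OLS solution, regardless of the scale of the data, which is what makes γ interpretable across datasets.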

Deep-Learning Based, Automated Segmentation Of Macular Edema In Optical Coherence Tomography [article]

Cecilia S. Lee, Ariel J. Tyring, Nicolaas P. Deruyter, Yue Wu, Ariel Rokem, Aaron Y. Lee
2017 bioRxiv   pre-print
Evaluation of clinical images is essential for diagnosis in many specialties and the development of computer vision algorithms to analyze biomedical images will be important. In ophthalmology, optical coherence tomography (OCT) is critical for managing retinal conditions. We developed a convolutional neural network (CNN) that detects intraretinal fluid (IRF) on OCT in a manner indistinguishable from clinicians. Using 1,289 OCT images, the CNN segmented images with a 0.911 cross-validated Dice coefficient, compared with segmentations by experts. Additionally, the agreement between experts and between experts and CNN were similar. Our results reveal that CNN can be trained to perform automated segmentations.
doi:10.1101/135640 fatcat:c6quepy55zc6vpl57s2weizae4
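The Dice coefficient used to score these segmentations is a standard overlap metric, 2|A∩B| / (|A| + |B|). A minimal numpy sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect agreement by convention)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A score of 0.911 therefore means the predicted and expert IRF masks overlapped on roughly 91% of their combined area.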

The interaction of orientation-specific surround suppression and visual-spatial attention [article]

Ariel Rokem, Ayelet Nina Landau
2016 bioRxiv   pre-print
often focused on the orientation-selective component of surround suppression (OSSS), by comparing suppression for parallel and orthogonal gratings on a within-subject basis (Kosovicheva, Sheremata, Rokem  ...  Non-invasive recordings of human brain activity support this: measurements of surround suppressive effects related to perception have been conducted using fMRI (Kay, Winawer, Rokem, Mezer, & Wandell,  ... 
doi:10.1101/091553 fatcat:3gby62vhjnblxomsfx7c6u7re4

Combining citizen science and deep learning to amplify expertise in neuroimaging [article]

Anisha Keshavan, Jason Yeatman, Ariel Rokem
2018 bioRxiv   pre-print
Research in many fields has become increasingly reliant on large and complex datasets. "Big Data" holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining each one of the samples. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approaches lack the accuracy of examination by highly trained scientists and this may introduce major errors, sources of noise, and unforeseen biases into these large and complex datasets. Our proposed solution is to 1) start with a small, expertly labeled dataset, 2) amplify labels through web-based tools that engage citizen scientists, and 3) train machine learning on the amplified labels to emulate expert decision making. As a proof of concept, we developed a system to quality control a large dataset of three-dimensional magnetic resonance images (MRI) of human brains. An initial dataset of 200 brain images labeled by experts was amplified by citizen scientists to label 722 brains, with over 80,000 ratings done through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on a combination of the citizen scientist labels that accounts for differences in the quality of classification by different citizen scientists. In an ROC analysis (on left-out test data), the deep learning network performed as well as a state-of-the-art, specialized algorithm (MRIQC) for quality control of T1-weighted images, each with an area under the curve of 0.99. Finally, as a specific practical application of the method, we explore how brain image quality relates to the replicability of a well-established relationship between brain volume and age over development. Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in emerging disciplines where specialized, automated tools do not already exist.
doi:10.1101/363382 fatcat:4ekagsmxpbbnjma4vtt6dklxlq
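The area under the ROC curve reported above is equivalent to the probability that a randomly chosen positive example is ranked above a randomly chosen negative one (the Mann-Whitney U statistic). A minimal, illustrative sketch of that computation (not the paper's analysis code):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    example receives the higher score; ties count as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.99 thus means that in 99% of pairs, a genuinely good-quality image was ranked above a genuinely bad-quality one.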

pulse2percept: A Python-based simulation framework for bionic vision [article]

Michael Beyeler, Geoffrey M. Boynton, Ione Fine, Ariel Rokem
2017 bioRxiv   pre-print
By 2020 roughly 200 million people worldwide will suffer from photoreceptor diseases such as retinitis pigmentosa and age-related macular degeneration, and a variety of retinal sight restoration technologies are being developed to target these diseases. One technology, analogous to cochlear implants, uses a grid of electrodes to stimulate remaining retinal cells. Two brands of retinal prostheses are currently approved for implantation in patients with late stage photoreceptor disease. Clinical experience with these implants has made it apparent that the vision restored by these devices differs substantially from normal sight. To better understand the outcomes of this technology, we developed pulse2percept, an open-source Python implementation of a computational model that predicts the perceptual experience of retinal prosthesis patients across a wide range of implant configurations. A modular and extensible user interface exposes the different building blocks of the software, making it easy for users to simulate novel implants, stimuli, and retinal models. We hope that this library will contribute substantially to the field of medicine by providing a tool to accelerate the development of visual prostheses.
doi:10.1101/148015 fatcat:yttoj4cojfeq3abcgnkje2jnhy

Model-based recommendations for optimal surgical placement of epiretinal implants [article]

Michael Beyeler, Geoffrey M. Boynton, Ione Fine, Ariel Rokem
2019 bioRxiv   pre-print
A major limitation of current electronic retinal implants is that in addition to stimulating the intended retinal ganglion cells, they also stimulate passing axon fibers, producing perceptual 'streaks' that limit the quality of the generated visual experience. Recent evidence suggests a dependence between the shape of the elicited visual percept and the retinal location of the stimulating electrode. However, this knowledge has yet to be incorporated into the surgical placement of retinal implants. Here we systematically explored the space of possible implant configurations to make recommendations for optimal intraocular positioning of the electrode array. Using a psychophysically validated computational model, we demonstrate that better implant placement has the potential to reduce the spatial extent of axonal activation in existing implant users by up to ~55%. Importantly, the best implant location, as inferred from a population of simulated virtual patients, is both surgically feasible and is relatively stable across individuals. This study is a first step towards the use of computer simulations in patient-specific planning of retinal implant surgery.
doi:10.1101/743484 fatcat:opo6ub4p4jgj5lncoam4vamcwa

Interactions of cognitive and auditory abilities in congenitally blind individuals

Ariel Rokem, Merav Ahissar
2009 Neuropsychologia  
Ariel Rokem was partially supported through a scholarship from the Vera and David Finkel Student Aid Endowment Fund.  ...  We thank Galit Hasan-Rokem for help in locating references on the folklore of blindness.  ... 
doi:10.1016/j.neuropsychologia.2008.12.017 pmid:19138693 fatcat:uuiizh7sqzeltdy34ni4apidi4

Sex differences in sleep-dependent perceptual learning

Elizabeth A. McDevitt, Ariel Rokem, Michael A. Silver, Sara C. Mednick
2014 Vision Research  
Motion PL has previously been examined across multiple days of training (Ball & Sekuler, 1987; Rokem & Silver, 2010 , with nocturnal sleep occurring between training sessions.  ...  In particular, visual PL of motion direction discrimination is specific to the direction of motion and visual field location used for training (Ball & Sekuler, 1987; Rokem & Silver, 2010) .  ... 
doi:10.1016/j.visres.2013.10.009 pmid:24141074 pmcid:PMC4704702 fatcat:icxxdgoaejhdfdee6yd2gau23q

Generating retinal flow maps from structural optical coherence tomography with artificial intelligence [article]

Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem, Nicolaas P. Deruyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang, Aaron Y. Lee
2018 arXiv   pre-print
Despite significant advances in artificial intelligence (AI) for computer vision, its application in medical imaging has been limited by the burden and limits of expert-generated labels. We used images from optical coherence tomography angiography (OCTA), a relatively new imaging modality that measures perfusion of the retinal vasculature, to train an AI algorithm to generate vasculature maps from standard structural optical coherence tomography (OCT) images of the same retinae, both exceeding the ability and bypassing the need for expert labeling. Deep learning was able to infer perfusion of microvasculature from structural OCT images with similar fidelity to OCTA and significantly better than expert clinicians (P < 0.00001). OCTA suffers from the need for specialized hardware, laborious acquisition protocols, and motion artifacts, whereas our model works directly from standard OCT images, which are ubiquitous and quick to obtain, and allows unlocking of large volumes of previously collected standard OCT data in both existing clinical trials and clinical practice. This finding demonstrates a novel application of AI to medical imaging, whereby subtle regularities between different modalities are used to image the same body part and AI is used to generate detailed and accurate inferences of tissue function from structural imaging.
arXiv:1802.08925v1 fatcat:cr6u62fho5bnbpzureuffjratu

Generating perfusion maps from structural optical coherence tomography with artificial intelligence [article]

Cecilia S Lee, Ariel J Tyring, Yue Wu, Sa Xiao, Ariel S Rokem, Nicolaas P Deruyter, Qinqin Zhang, Adnan Tufail, Ruikang K Wang, Aaron Lee
2018 bioRxiv   pre-print
Despite significant advances in artificial intelligence (AI) for computer vision, its application in medical imaging has been limited by the burden and limits of expert-generated labels. We used images from optical coherence tomography angiography (OCTA), a relatively new imaging modality that measures perfusion of the retinal vasculature, to train an AI algorithm to generate vasculature maps from standard structural optical coherence tomography (OCT) images of the same retinae, both exceeding the ability and bypassing the need for expert labeling. Deep learning was able to infer perfusion of microvasculature from structural OCT images with similar fidelity to OCTA and significantly better than expert clinicians (P < 0.00001). OCTA suffers from the need for specialized hardware, laborious acquisition protocols, and motion artifacts, whereas our model works directly from standard OCT images, which are ubiquitous and quick to obtain, and allows unlocking of large volumes of previously collected standard OCT data in both existing clinical trials and clinical practice. This finding demonstrates a novel application of AI to medical imaging, whereby subtle regularities between different modalities are used to image the same body part and AI is used to generate detailed and accurate inferences of tissue function from structural imaging.
doi:10.1101/271346 fatcat:k6xphuf3q5cefpq3gyhledncxq

Deep-learning based, automated segmentation of macular edema in optical coherence tomography

Cecilia S. Lee, Ariel J. Tyring, Nicolaas P. Deruyter, Yue Wu, Ariel Rokem, Aaron Y. Lee
2017 Biomedical Optics Express  
Evaluation of clinical images is essential for diagnosis in many specialties and the development of computer vision algorithms to analyze biomedical images will be important. In ophthalmology, optical coherence tomography (OCT) is critical for managing retinal conditions. We developed a convolutional neural network (CNN) that detects intraretinal fluid (IRF) on OCT in a manner indistinguishable from clinicians. Using 1,289 OCT images, the CNN segmented images with a 0.911 cross-validated Dice coefficient, compared with segmentations by experts. Additionally, the agreement between experts and between experts and CNN were similar. Our results reveal that CNN can be trained to perform automated segmentations.
doi:10.1364/boe.8.003440 pmid:28717579 pmcid:PMC5508840 fatcat:nmgzm3vt3rhdjcaodzivh6hezm

Human blindsight is mediated by an intact geniculo-extrastriate pathway

Sara Ajina, Franco Pestilli, Ariel Rokem, Christopher Kennard, Holly Bridge
2015 eLife  
747 account for the diffusion signal as a combination of signals from different 748 bundles of nerve fibres provide better estimates of tracking directions in these 749 locations (Frank, 2001, 2002; Rokem  ...  was a crossing, diffusion tensor model (Basser et al., 1994; Pierpaoli and Basser, 807 1996) can be inappropriate for tracking, it is an accurate representation of the 808 signal and its statistics (Rokem  ... 
doi:10.7554/elife.08935 pmid:26485034 pmcid:PMC4641435 fatcat:nzmplwkls5hxfbgferjmorvy6e

MRI2MRI: A deep convolutional network that accurately transforms between brain MRI contrasts [article]

Sa Xiao, Yue Wu, Aaron Y Lee, Ariel Rokem
2018 bioRxiv   pre-print
Different brain MRI contrasts represent different tissue properties and are sensitive to different artifacts. The relationship between different contrasts is therefore complex and nonlinear. We developed a deep convolutional network that learns the mapping between different MRI contrasts. Using a publicly available dataset, we demonstrate that this algorithm accurately transforms between T1- and T2-weighted images, proton density images, time-of-flight angiograms, and diffusion MRI images. We demonstrate that these transformed images can be used to improve spatial registration between MR images of different contrasts.
doi:10.1101/289926 fatcat:nmcrf5qtqnbptayxcuviskjera

White matter plasticity and reading instruction: Widespread anatomical changes track the learning process [article]

Elizabeth Huber, Patrick M Donnelly, Ariel Rokem, Jason Yeatman
2018 bioRxiv   pre-print
White matter tissue properties correlate with children's performance across domains ranging from reading, to math, to executive function. We use a longitudinal intervention design to examine experience-dependent growth in reading skills and white matter in a group of grade school aged, struggling readers. Diffusion MRI data were collected at regular intervals during an 8-week, intensive reading intervention. These measurements reveal large-scale changes throughout a collection of white matter tracts, in concert with growth in reading skill. Additionally, we identify tracts whose properties predict reading skill but remain fixed throughout the intervention, suggesting that some anatomical properties may stably predict the ease with which a child learns to read, while others dynamically reflect the effects of experience. These results underscore the importance of considering recent experience when interpreting cross-sectional anatomy-behavior correlations. Widespread changes throughout the white matter may be a hallmark of rapid plasticity associated with an intensive learning experience.
doi:10.1101/268979 fatcat:pylss5m4drgxdoobt3zzaj3inu

Diffusion Weighted Image Co-registration: Investigation of Best Practices [article]

David Qixiang Chen, Flavio Dell'Acqua, Ariel Rokem, Eleftherios Garyfallidis, David J Hayes, Jidan Zhong, Mojgan Hodaie
2019 bioRxiv   pre-print
The registration or alignment of diffusion weighted images (DWI) with other imaging modalities is a critical step in neuroimaging analysis. Within-subject T1 to DWI coregistration is particularly instrumental. DWI-derived scalar images are commonly used as intermediates for T1 to DWI coregistration, and the resulting registration transforms are applied to all other scalar images for analysis. The ideal registration intermediate should register well to T1 and other multimodal images and be practically easy to obtain. It is, however, currently unclear which DWI-derived scalar image serves as the best intermediate. We aim to determine the best, practical intermediate for image coregistration. T1 and DWI images were acquired from 20 healthy subjects. DWIs were acquired with 60 directions. Six DWI-derived scalar images were compared, including: 1) fractional anisotropy (FA); 2) generalized FA (GFA); 3) B0 images; 4) mean DWIs with the B0 image (MDWI); 5) anisotropic power (AP) images. AP showed the smallest variability in registration improvements across all the tested DWI-derived scalar images, and showed the highest average percent changes with the CC registration cost function (CC=1.2%, MI=15%). In contrast, the FA and GFA transforms resulted in significantly poorer registration across DWI types. The AP image was the DWI-derived scalar image that provided the most consistent registration to all other images. Practically, it is generated easily and so could be implemented in basic and clinical research pipelines currently using other intermediates. Given these findings, it is recommended that AP images be used for T1 to DWI coregistration, and that FA and GFA images in particular be avoided.
doi:10.1101/864108 fatcat:yn4w6n5e3ja3tpxzuadmdx4xb4
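The MI (mutual information) cost function compared against CC above can be illustrated with a histogram-based estimate between two images: MI is high when one image's intensities predict the other's, even when the contrasts differ, which is why it is a standard multimodal registration objective. This is a hypothetical sketch, not the registration pipeline evaluated in the paper:

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Histogram-based mutual information between two images:
    MI = sum_xy p(x, y) * log(p(x, y) / (p(x) * p(y))),
    estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(np.ravel(img1), np.ravel(img2), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over img2 bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over img1 bins
    nz = pxy > 0                          # skip empty bins (log(0))
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration optimizer would transform one image and maximize this quantity; identical or well-aligned images score much higher than independent ones.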