
Approximating net interactions among rigid domains

Pouya Tavousi, Claudio M Soares
2018 PLoS ONE  
Software: Pouya Tavousi. Validation: Pouya Tavousi. Visualization: Pouya Tavousi. Writing - original draft: Pouya Tavousi. Writing - review & editing: Pouya Tavousi. ... Formal analysis: Pouya Tavousi. Investigation: Pouya Tavousi. Methodology: Pouya Tavousi. Project administration: Pouya Tavousi. Resources: Pouya Tavousi. ... Author Contributions. Conceptualization: Pouya Tavousi. Fig 1. Existing techniques. ...
doi:10.1371/journal.pone.0195618 pmid:29630635 pmcid:PMC5891034 fatcat:juwzbrcqnbf5llxd5a67yr4bi4

Can age-distribution be an indicator of the goodness of COVID-19 testing? [article]

Amirhoshang Hoseinpour Dehkordi, Reza Nemati, Pouya Tavousi
2020 medRxiv   pre-print
It has become evident that faster, more accurate, and more comprehensive testing can help policymakers assess the real impact of COVID-19 and decide when, and how strictly, mitigation policies should be applied. Nevertheless, the exact number of infected individuals cannot be measured due to the lack of comprehensive testing. In this paper, we first investigate the relation between COVID-19 transmission and age by observing time-series data from multiple countries. We then compare the COVID-19 CFR with age-demography data, and as a result we propose a method for estimating a lower bound on the number of positive cases using the reported data for the oldest age group and the regions' population age-distributions. The proposed estimation method improves the expected similarity between the age-distribution of positive cases and that of the regions' populations. Using publicly accessible data for several developed countries, we show how the improvement of testing over the course of several months has made it clear that different age groups are equally prone to becoming COVID-positive. The results show that the age-demography of COVID-19 cases approaches the age-demography of the population as the CFR declines over time. In addition, countries with a lower CFR, owing to more comprehensive testing, have COVID-19 age-distributions more similar to their population age-distributions than countries with a higher CFR. This leads to a better estimation of positive cases under different testing strategies, and knowing this helps policymakers enforce more effective policies for controlling the spread of the virus.
doi:10.1101/2020.12.21.20248690 fatcat:rmzpycwrufbvfi5jbmfetdzwka
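A minimal sketch of the lower-bound estimate described in this abstract, assuming (as the abstract suggests) that the oldest age group is the most thoroughly tested, so its case count divided by its population share bounds the true total from below. The function name and all numbers are hypothetical.

```python
# Lower-bound estimate of total positive cases from the oldest age group.
# Assumption: attack rates are roughly equal across ages, and the oldest
# group is tested most thoroughly. All figures below are hypothetical.

def lower_bound_positive_cases(cases_oldest: int, pop_fraction_oldest: float) -> float:
    """Lower bound on total positives implied by the oldest age group's count."""
    if not 0 < pop_fraction_oldest <= 1:
        raise ValueError("population fraction must be in (0, 1]")
    return cases_oldest / pop_fraction_oldest

# Hypothetical example: 2,000 confirmed cases in the 80+ group,
# which makes up 4% of the region's population.
print(lower_bound_positive_cases(2_000, 0.04))  # -> 50000.0
```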

AI-based Brain Image Segmentation Using Synthesized Data

Pouya Tavousi, Zahra Shahbazi, Sina Shahbazmohamadi
2020 Microscopy and Microanalysis  
The success of connectomics research in mapping neural interconnections, as needed for understanding brain functions, largely depends on advances in microscopy and image analysis. Miniature features in the brain govern its functions, yet large regions of interest (ROIs) must be scouted for a proper understanding of the neural network system. This requires obtaining high-resolution tomographies of relatively large brain samples [1]. Regardless of which tomography technique is used, a critical step is the segmentation of the produced images to identify the boundaries of the constituent objects within the brain sample of interest. For that, each pixel (voxel) in the 2D (3D) image is assigned a label that is shared among all the pixels (voxels) of the same object, resulting in a 2D (3D) image that is partitioned into several groups of connected pixels (voxels) [2]. Doing so is necessary for follow-up interpretation steps, but it faces significant challenges. Conventionally, the segmentation process is conducted manually, where trained individuals spend months segmenting volumes on the order of cubic millimeters [1]. Aside from the extensive labor and cost involved, such a manual process entails human error, and the double-checking and triple-checking practices needed to eliminate that error add to the required time and effort. Existing attempts to crowdsource the manual segmentation task using volunteers face the challenge of volunteers' lack of motivation [3]. On top of this, a major drawback of the state-of-the-art software packages for manual segmentation is that the user, at each point in time, has only a 2D perception of the brain sample, which stands in the way of a realistic segmentation. Automated segmentation techniques aim to address these issues but fall short of providing a reliable solution [4]. Conventional computer-vision-based segmentation methods, such as thresholding, are often only useful for aiding manual segmentation, while machine-learning-based segmentation algorithms [5] suffer from a shortage of ground-truth data, especially because the data required for supervised training must be produced manually.
doi:10.1017/s143192762001555x fatcat:wc63hgkw75hb3llh2q56sbaube
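For context, a minimal sketch of the conventional computer-vision baseline this abstract contrasts itself with: global thresholding followed by connected-component labeling of voxels. The synthetic volume and the threshold value are illustrative assumptions, not the authors' pipeline.

```python
# Threshold a gray-scale volume, then group connected foreground voxels
# into labeled objects (one integer label per object).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))          # stand-in for a gray-scale 3D image
foreground = volume > 0.98                 # simple global threshold (assumed value)

# Assign one integer label per connected group of foreground voxels.
labels, n_objects = ndimage.label(foreground)
print(n_objects, labels.shape)
```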

A deeper look at COVID-19 CFR: health care impact and roots of discrepancy [article]

Amirhoshang Hoseinpour Dehkordi, Reza Nemati, Pouya Tavousi
2020 medRxiv   pre-print
Intensive care capacity and proper testing play a paramount role in the COVID-19 Case Fatality Rate (CFR). Nevertheless, the real impact of such important measures has not been appreciated due to the lack of proper metrics. In this work, we propose a method for estimating a lower bound on the number of positive cases using the reported data for the oldest age group and the regions' population age-distributions. The proposed estimation method improves the expected similarity between the age-distribution of positive cases and that of the regions' populations. Further, we provide a quantitative measure of the impact of intensive care on critical cases by comparing the CFR among those who did and did not receive intensive care. Our findings show that the chance of survival among non-ICU receivers is less than half that of ICU receivers (~24% vs ~60%).
doi:10.1101/2020.04.22.20071498 fatcat:ykdio63sxvgmdcd5c72wjoiiti
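A back-of-envelope sketch of the ICU versus non-ICU comparison; the case counts below are hypothetical, and only the ratio logic (~24% vs ~60% survival) mirrors the abstract.

```python
# Compare survival among critical cases with and without intensive care.
# Counts are hypothetical; only the reported rates are taken from the abstract.
def survival_rate(recovered: int, deceased: int) -> float:
    return recovered / (recovered + deceased)

non_icu = survival_rate(recovered=240, deceased=760)   # ~0.24
icu = survival_rate(recovered=600, deceased=400)       # ~0.60
print(f"non-ICU: {non_icu:.0%}, ICU: {icu:.0%}, ratio: {non_icu / icu:.2f}")
```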

Material prediction from confocal images of lasered samples

Hongbin Choi, Adrian Phoulady, Nicholas May, Sina Shahbazmohamadi, Pouya Tavousi
2021 Microscopy and Microanalysis  
Effective use of a laser for fine machining of material requires fine-tuning of the laser-machining parameters. Based on the machining requirements and the composition of the material to be ablated, proper lasering/scanning parameters must be used to achieve satisfactory results. Nevertheless, accurate a priori information about the material composition of the sample of interest is often not at hand, and thus the material composition must be inferred during the laser-machining process. Existing non-trial-and-error methods that could be used for this purpose include energy dispersive spectroscopy (EDS) and laser-induced breakdown spectroscopy (LIBS), but the complexity of integrating such techniques with laser machining often prohibits their use. Herein, we report on the development of a new technique that can predict material composition while laser machining is taking place, using confocal images obtained from the surface of lasered samples together with knowledge of the lasering parameters. A multilayer fully connected neural network was trained on a training data set to predict the material composition of unseen samples that have undergone laser machining followed by confocal imaging. Note that, although lasering must start before the material composition can be detected (which is also the case for LIBS), the amount of lasering needed for this purpose is minimal.
doi:10.1017/s1431927621009673 fatcat:fnfuqylyd5bthajwwgx2eearpa
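A rough sketch of the "multilayer fully connected neural network" idea, using scikit-learn's MLPClassifier as a stand-in: features extracted from a confocal measurement plus lasering parameters are mapped to a material class. The feature choices, layer sizes, and random data are assumptions, not the authors' trained model.

```python
# Fully connected classifier mapping confocal-derived features and lasering
# parameters to a material class. Data and architecture are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Hypothetical features: [mean ablation depth, surface roughness, pulse energy, scan speed]
X = rng.random((n, 4))
y = rng.integers(0, 3, size=n)              # three hypothetical material classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~chance on random data
```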

Model for predicting surface properties of lasered samples

Adrian Phoulady, Hongbin Choi, Nicholas May, Bahar Ahmadi, Pouya Tavousi, Sina Shahbazmohamadi
2021 Microscopy and Microanalysis  
Introduction: Ultrashort pulsed (USP) lasers offer athermal material ablation, which makes them a popular technology for fine machining jobs. Nevertheless, due to the lack of a mechanistic understanding of the laser/matter interaction, laser machining practice is often trial-and-error, with no systematic method for generating proper machining recipes. In this work, we present a model for predicting the surface properties of a sample from the lasering/scanning parameters as well as the material composition of the sample of interest. Development of such a model is the first critical step towards constructing a recipe-generator model that can prescribe the right set of lasering/scanning parameters for achieving the desired results. We established an interpolator that predicts two surface properties, depth of cut (DOC) and surface roughness (Sq), from the lasering parameters and material type.
doi:10.1017/s1431927621011016 fatcat:cmgpmm7bobe67lskkqfozytjs4
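A minimal sketch of an interpolator that maps lasering parameters to the two surface properties named in the abstract (DOC and Sq). The parameter grid, the toy response values, and the choice of scipy's LinearNDInterpolator are assumptions; the abstract does not specify the interpolation scheme.

```python
# Interpolate [DOC, Sq] from lasering parameters using scattered sample points.
# Sample parameters and responses are synthetic stand-ins.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(2)
# Hypothetical parameters: [pulse energy (uJ), scan speed (mm/s)]
params = rng.uniform([1.0, 100.0], [10.0, 1000.0], size=(50, 2))
# Toy responses: [depth of cut, surface roughness Sq]
doc_sq = np.column_stack([0.5 * params[:, 0], 0.01 * params[:, 0] / params[:, 1]])

predict = LinearNDInterpolator(params, doc_sq)
# Query at an unseen parameter pair (returns nan outside the convex hull of samples).
print(predict([[5.0, 500.0]]))
```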

A Novel analog approach for fast evaluation of affinity between ligand and receptor in scaled up molecular models [article]

Pouya Tavousi, Sina Shahbazmohamadi
2018 bioRxiv   pre-print
Abstract: Rational structure-based drug design aims at identifying ligand molecules that bind to the active site of a target molecule with high affinity (low binding free energy), to promote or inhibit certain biofunctions. Thus, it is essential to be able to evaluate such affinity for predicted molecular complexes in order to design drugs effectively. A key observation is that binding affinity is proportional to the geometric fit between the two molecules, so having a way to assess the quality of the fit enables one to rank potential drug solutions. Apart from experimental methods, which involve excessive time, labor, and cost, several in silico methods have been developed for this purpose. However, a main challenge of any computation-based method is that, no matter how efficient the technique, a trade-off between accuracy and speed is inevitable; given today's computational power, one or both is often compromised. In this paper, we propose a novel analog approach to address this limitation of computation-based algorithms by simply taking advantage of Kirchhoff's circuit laws. Ligand and receptor are represented by 3D-printed molecular models that account for the flexibility of the ligand. Upon contact between the ligand and the receptor, an electrical current is produced that is proportional to the number of representative contact points between the two scaled-up molecular models. The affinity between the two molecules is then assessed by identifying the number of representative contact points from the measured total electrical current. The simple yet accurate proposed technique, in combination with our previously developed model, Assemble-And-Match, can be a breakthrough in the development of tools for drug design. Furthermore, the proposed technique can be practiced more broadly in any application that involves assessing the quality of the geometric match between two physical objects.
doi:10.1101/452367 fatcat:h5e4rjpqu5fojkhdanj46j7jhu
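A small sketch of the readout step implied by Kirchhoff's current law: if every representative contact point closes an identical parallel path, the measured total current divided by the per-contact current gives the number of contacts. The drive voltage and per-contact resistance below are hypothetical.

```python
# Infer the number of closed contacts from the measured total current,
# assuming identical parallel paths (Kirchhoff's current law).
def contact_count(total_current_a: float, voltage_v: float, contact_resistance_ohm: float) -> int:
    """Number of closed contacts implied by the measured total current."""
    per_contact_current = voltage_v / contact_resistance_ohm
    return round(total_current_a / per_contact_current)

# Hypothetical: 5 V drive, 1 kOhm per contact, 15 mA measured.
print(contact_count(total_current_a=0.015, voltage_v=5.0, contact_resistance_ohm=1_000.0))  # -> 3
```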

Training AI-Based Feature Extraction Algorithms, for Micro CT Images, Using Synthesized Data

Matthew Konnik, Bahar Ahmadi, Nicholas May, Joseph Favata, Zahra Shahbazi, Sina Shahbazmohamadi, Pouya Tavousi
2021 Journal of nondestructive evaluation  
Abstract: X-ray computed tomography (CT) is a powerful technique for non-destructive volumetric inspection of objects and is widely used for studying the internal structures of a large variety of sample types. The raw data obtained through an X-ray CT practice is a gray-scale 3D array of voxels. This data must undergo a geometric feature extraction process before it can be used for interpretation purposes. Such a feature extraction process is conventionally done manually, but with the ever-increasing trend of image data sizes and the interest in identifying more miniature features, automated feature extraction methods are sought. Given that conventional computer-vision-based methods, which attempt to segment images into partitions using techniques such as thresholding, are often only useful for aiding the manual feature extraction process, machine-learning-based algorithms are becoming popular for developing fully automated feature extraction processes. Nevertheless, machine-learning algorithms require a huge pool of labeled data for proper training, which is often unavailable. We propose to address this shortage through a data synthesis procedure. We do so by fabricating miniature features with known geometry, position, and orientation on thin silicon wafer layers using a femtosecond laser machining system, stacking these layers to construct a 3D object with internal features, and finally obtaining the X-ray CT image of the resulting 3D object. Given that the exact geometry, position, and orientation of the fabricated features are known, the X-ray CT image is inherently labeled and is ready to be used for training machine learning algorithms for automated feature extraction. Through several examples, we showcase: (1) the capability of synthesizing features of arbitrary geometries and their corresponding labeled images; and (2) the use of the synthesized data for training machine-learning-based shape classifiers and feature parameter extractors.
doi:10.1007/s10921-021-00758-w fatcat:thp3ekw35rc3fpipzzmzhmei7a
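A minimal sketch of why synthesized data arrives "inherently labeled": because the feature geometry is chosen, the label volume is known by construction and only the gray-scale image needs to be generated. Note this is a purely digital analogue; the procedure in the abstract fabricates physical features and CT-scans them. The sphere feature, sizes, and noise level are illustrative assumptions.

```python
# Generate a (noisy) gray-scale volume together with its ground-truth label,
# which is known by construction from the chosen geometry.
import numpy as np

def synthesize_sphere_volume(shape=(64, 64, 64), center=(32, 32, 32), radius=10, noise=0.1):
    zz, yy, xx = np.indices(shape)
    label = ((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2) <= radius**2
    rng = np.random.default_rng(0)
    image = label.astype(float) + rng.normal(0.0, noise, shape)   # gray-scale "CT" volume
    return image, label                                           # training pair (X, y)

image, label = synthesize_sphere_volume()
print(image.shape, int(label.sum()), "labeled voxels")
```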

Assemble-And-Match: A Novel Hybrid Tool for Enhancing Education and Research in Rational Structure Based Drug Design

Pouya Tavousi, Reza Amin, Sina Shahbazmohamadi
2018 Scientific Reports  
Author Contributions. Pouya Tavousi: Contributed to the development of the idea and writing of the manuscript. Reza Amin: Contributed to the development of the idea and writing of the manuscript.  ... 
doi:10.1038/s41598-017-18151-x pmid:29339792 pmcid:PMC5770410 fatcat:f6petpolcvdcpebgvt42ys7vwu

AI-based Feature Detection in X-ray-CT Images Using Synthesized Data

Matthew Konnik, Bahar Ahmadi, Nicholas May, Joseph Favata, Zahra Shahbazi, Sina Shahbazmohamadi, Pouya Tavousi
2020 Microscopy and Microanalysis  
Nondestructive volumetric analysis of samples, enabled by X-ray computed tomography (CT), has attracted scientists and engineers from a wide spectrum of disciplines who are interested in the identification and measurement of miniature internal features of their samples [1]. While obtaining X-ray CT images of arbitrary objects has become a straightforward procedure, requiring only the adjustment of a few imaging parameters (e.g., energy, number of projections), the interpretation of the resulting 3D images is still a challenging task [2]. For proper interpretation of an X-ray CT image, one must be able to extract well-defined geometric features from the raw data, where the raw data is a gray-scale 3D array of voxels [3]. Conventionally this task is performed manually by subject matter experts (SMEs). The extensive time and effort, as well as the human error, associated with manual processes call for automated methods that can extract features accurately and with high throughput. The most common approach is the use of computer-vision (CV) techniques to segment the images into distinct partitions [6-14], which can hopefully be used for extracting meaningful geometric features. For example, in thresholding, a common CV technique, intensity values and a preset threshold constant are used to assign a label to each pixel (voxel) in the 2D (3D) image. The label is shared among all the pixels (voxels) of the same partition, and the result of the segmentation process is a 2D (3D) image partitioned into several groups of connected pixels (voxels). Although CV techniques may offer an automated process in the absence of image noise (i.e., features that are of no interest), their performance drops drastically in the presence of noise, which is prevalent in any image obtained from an X-ray CT practice [15-17]. The produced noise can be mitigated, but not completely removed. Therefore, in practice, CV methods are only used to assist the manual feature extraction process and cannot provide a fully automated one. The success of machine learning (ML) algorithms in automating tasks that are not analytically well-defined suggests their use for automated feature extraction as a superior alternative to CV-based methods. The idea is to train a machine learning algorithm with sufficient ground-truth data [18-20] and then use it for automated feature extraction. Here, the ground-truth data are obtained from labeled X-ray CT images; each data point consists of (1) raw data in the form of a gray-scale 3D array of voxels and (2) the corresponding feature. The caveat is that proper training of a machine learning algorithm demands huge amounts of labeled data. This is a multifold challenge: labeling the raw data manually makes the process extremely tedious, if not impractical; the labeling is subjective, with different outcomes expected from different SMEs; and such a manual process is subject to error, with double-checking and triple-checking practices to eliminate that error adding to the required time and effort.  ... 
doi:10.1017/s1431927620020498 fatcat:kykpeyd3ijgknn55cm46av6xlm
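A small numerical illustration of the claim above that fixed-threshold segmentation degrades with noise: the same threshold mislabels many voxels once Gaussian noise is added. The cubic feature, noise level, and threshold value are synthetic assumptions.

```python
# Count voxels mislabeled by a fixed threshold after adding noise to a
# clean synthetic volume containing one bright cubic feature.
import numpy as np

rng = np.random.default_rng(3)
clean = np.zeros((32, 32, 32))
clean[10:20, 10:20, 10:20] = 1.0                      # one bright cubic feature
noisy = clean + rng.normal(0.0, 0.4, clean.shape)

threshold = 0.5
mislabeled = np.count_nonzero((noisy > threshold) != (clean > threshold))
print(f"{mislabeled} of {clean.size} voxels mislabeled at threshold {threshold}")
```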

Single Image Composite Tomography Utilizing Large Scale Femtosecond Laser Cross-sectioning and Scanning Electron Microscopy

Nicholas May, Adrian Phoulady, Hongbin Choi, Pouya Tavousi, Sina Shahbazmohamadi
2022 Microscopy and Microanalysis  
doi:10.1017/s1431927622003889 fatcat:rajlpchfuzhwvidpqd4qvnftnm

Forward modeling of volume electron microscopy (vEM) of stained resin-embedded biological samples

Yu Yuan, Sabrina Clusiau, Raynald Gauvin, Christopher Bleck, Adrian Phoulady, Pouya Tavousi, Sina Shahbazmohamadi, Nicolas Piché, Mike Marsh
2021 Microscopy and Microanalysis  
Image formation in the scanning electron microscope (SEM) is a complicated process that starts with the interaction between the incident electron beam and the sample being imaged. The physics of electron-matter interaction has been described and modeled by MC-Xray in the past, although only recently has that modeling been integrated into the user-friendly software framework Dragonfly and accelerated and extended from 0-dimensional point simulations to 2D simulations that mimic the 2D rastering of the e-beam in an SEM [1]. Understanding image formation and forward-modeling of images in the SEM offers two benefits. First, knowing a priori how variations in microscope conditions affect contrast and resolution permits a microscopist to optimize imaging parameters in advance. Second, having a realistic phantom paired with the corresponding forward-modeled images equips machine learning researchers with suitable ground truth for training deep learning and other artificial intelligence models for automatic segmentation of the microstructure observed in new micrographs. We show here image simulation of 2D micrographs from a simple 3D phantom structure of heavy-metal-stained resin-embedded biological tissue and demonstrate that the simulated images strongly resemble the experimental micrographs, both qualitatively and quantitatively. The forward modeling requires a phantom, or digital structure, of known geometry and material composition. In this work, we derive a 3-phase phantom from SEM images of murine rod internal segments, prepared by high-pressure freezing and freeze-substitution with osmium tetroxide, followed by Durcupan resin embedding at room temperature. This sample was selected because it demonstrates some of the structural complexity of stained biological samples, while the image histogram suggests that the material composition is rather simple. We infer only three major phases: stained membranes at the highest backscattered electron (BSE) signal, a lower BSE signal for the intracellular and intra-organelle areas, and an even lower BSE signal for the extracellular space (see Figure 1A). A stack of 2D images was collected by FIB-SEM serial sectioning on a Zeiss Crossbeam, using the energy-selective backscatter (EsB) detector. The stack of images was segmented into three phases: plasma membrane and organelle membranes, intracellular, and extracellular resin. The input conditions for the simulation include the total count and the landing voltage of the incident electrons at every point along the surface of the 3D phantom. We match the experimental landing voltage (1.5 keV) and use a similar fluence of electrons. Because our simulation results encode the energy of the backscattered electrons, we simulate the experimental EsB detector by choosing not to accumulate in our image any of the electrons whose energy is below a filter threshold (1.0 keV). The remaining free parameter of the simulation is the elemental weight-fraction for each of the phases encoded in our 3D phantom. The precise elemental weight-fraction in biological samples like these is difficult to determine experimentally. For our simulations, we visually observed image contrast and noise similar to the experimental images when we used an osmium weight-fraction of approximately 20%. To further refine the quality of our simulation, we varied two parameters: (1) the weight-fraction of osmium, from 10% to 40%, and (2) the noise level in the phantom; that noise level allows us to encode non-uniform osmium binding throughout the three material phases. We find that 20% weight-fraction osmium is qualitatively similar to the experimental  ... 
doi:10.1017/s1431927621003093 fatcat:7j4qpgtmubbnbk5dlgyy5qsofu
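A minimal sketch of the energy-selective accumulation step described above: backscattered electrons whose energy falls below the 1.0 keV filter threshold are excluded from the simulated EsB signal. The electron energies here are random stand-ins for Monte Carlo output, not actual simulation results.

```python
# Exclude backscattered electrons below the EsB filter threshold before
# accumulating the simulated signal. Energies are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(4)
bse_energies_kev = rng.uniform(0.0, 1.5, size=10_000)   # stand-in BSE energies (1.5 keV landing)
filter_threshold_kev = 1.0

accumulated = bse_energies_kev[bse_energies_kev >= filter_threshold_kev]
print(f"{accumulated.size} of {bse_energies_kev.size} electrons contribute to the EsB signal")
```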

A New Fast Helium Ion Imaging Technique Through Rapid Acquiring and Restoring Using the Point Spread Function Deconvolution Method

Pouya Tavousi, Bahar Ahmadi, Nicholas May, Sunshine Snider-Drysdale, Zahra Shahbazi, Daniel Di Mase, Sina Shahbazmohamadi
2020 Microscopy and Microanalysis  
Scanning helium ion microscopy (HIM) offers superior resolution (up to 0.5 nm) and depth of field (up to 5X greater) compared to scanning electron microscopy (SEM), thanks to high source brightness, low energy spread, and small diffraction effects [10]. HIM can also accommodate nonconductive samples without the use of conductive coatings. Compared to other ion beams, it causes less ion damage during imaging due to the light mass of the helium ions. However, one of the drawbacks of HIM is its speed: on average, HIM is 5 times slower than SEM, which can be a deterring factor when larger areas need to be imaged. In this study, we explore the use of the point spread function (PSF) deconvolution method as a means to speed up the HIM imaging process. PSF deconvolution has successfully been used to restore SEM images, but its effectiveness for ion-based imaging is less explored. Here we study whether faster HIM imaging using shorter dwell times can be restored by PSF deconvolution, and we assess the quality of the images using quantitative methods.
doi:10.1017/s1431927620019133 fatcat:g5lgn4i2mzc5hcujyooxaflxhq
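A minimal sketch of PSF-based restoration, assuming a Gaussian PSF and using Richardson-Lucy deconvolution from scikit-image as a stand-in for whatever deconvolution variant the study used; the test image, PSF width, and noise level (standing in for a fast, short-dwell-time scan) are illustrative assumptions.

```python
# Degrade a reference image with an assumed Gaussian PSF plus noise,
# then restore it by deconvolving with the same PSF.
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

image = data.camera().astype(float) / 255.0

# Hypothetical 2D Gaussian PSF (sigma = 2 px)
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 2.0**2))
psf /= psf.sum()

rng = np.random.default_rng(5)
degraded = fftconvolve(image, psf, mode="same") + rng.normal(0.0, 0.01, image.shape)
degraded = np.clip(degraded, 0.0, 1.0)                 # keep values valid for Richardson-Lucy

restored = restoration.richardson_lucy(degraded, psf, 30)   # 30 iterations
print(restored.shape, float(restored.max()))
```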

Three-Dimensional Reconstruction of Printed Circuit Boards: Comparative Study between 3D Femtosecond Laser Serial Sectioning and Optical Imaging versus 3D X-Ray Computed Tomography

Nicholas May, Hongbin Choi, Adrian Phoulady, Pouya Tavousi, Sina Shahbazmohamadi
2022 Microscopy and Microanalysis  
doi:10.1017/s1431927622001945 fatcat:foxq4a2as5eobiqvlbms4clpum

A Novel analog approach for fast evaluation of affinity between ligand and receptor in scaled-up molecular models

Pouya Tavousi
2019 SDRP Journal of Computational Chemistry & Molecular Modelling  
doi:10.25177/jccmm.3.2.ra.541 fatcat:fkl67etyz5ellmsqtjtvj6ejom
Showing results 1 — 15 out of 18 results