Author Contributions: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing: Pouya Tavousi. Fig 1. Existing techniques. doi:10.1371/journal.pone.0195618 pmid:29630635 pmcid:PMC5891034
It has become evident that faster, more accurate, and more comprehensive testing can help policymakers assess the real impact of COVID-19 and decide when, and how strictly, mitigation policies should be applied. Nevertheless, the exact number of infected individuals could not be measured due to the lack of comprehensive testing. In this paper, we first investigate the relation between COVID-19 transmission and age by observing timed data from multiple countries. Then, we compare the CFR with the age-demography data, and as a result we propose a method for estimating a lower bound for the number of positive cases, using the reported data on the oldest age group and the regions' population age-distributions. The proposed estimation method improved the expected similarity between the age-distribution of positive cases and that of the regions' populations. Thus, using publicly accessible data for several developed countries, we show how the improvement of testing over the course of several months has made it clear that different age groups are equally prone to becoming COVID-positive. The results show that the age demography of COVID-19 cases grows similar to the age demography of the population as the CFR declines over time. In addition, countries with lower CFR have a COVID-19 age-distribution more similar to their population's, a consequence of more comprehensive testing, than countries with higher CFR. This leads to a better estimation of positive cases under different testing strategies. Knowledge of this fact helps policymakers enforce more effective policies for controlling the spread of the virus. doi:10.1101/2020.12.21.20248690
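The lower-bound idea described in this abstract can be sketched in a few lines. This is a minimal illustration, assuming (as the abstract argues) that all age groups are equally prone to infection, so the oldest, best-tested group's case count can be scaled by the inverse of that group's population share; all numbers below are illustrative, not from the paper.

```python
# Hedged sketch of the lower-bound estimator: scale the case count of
# the oldest (best-tested) age group by the inverse of its population
# share. Assumes equal prevalence across age groups, per the abstract.

def lower_bound_positive_cases(cases_oldest, pop_share_oldest):
    """Estimate a lower bound for total positive cases.

    cases_oldest     -- reported positive cases in the oldest age group
    pop_share_oldest -- that group's share of the total population (0..1)
    """
    if not 0 < pop_share_oldest <= 1:
        raise ValueError("population share must be in (0, 1]")
    return cases_oldest / pop_share_oldest

# Example: 5,000 reported cases in a group making up 4% of the
# population implies at least 125,000 cases overall.
estimate = lower_bound_positive_cases(5_000, 0.04)
print(estimate)  # 125000.0
```

Because older groups are tested most thoroughly, their reported count is closest to their true count, which is what makes the scaled figure a lower bound rather than a point estimate.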
The success of connectomics research in mapping neural interconnections, as needed for understanding brain functions, largely depends on advances in microscopy and image analysis. Miniature features in the brain rule its functions. Yet, large regions of interest (ROIs) must be scouted for a proper understanding of the neural network system. This requires obtaining high-resolution tomographies of relatively large brain samples. Regardless of what tomography technique is used, a critical step is the segmentation of the produced images to identify the boundaries of the constituent objects within the brain sample of interest. For that, each pixel (voxel) in the 2D (3D) image is assigned a label that is shared among all the pixels (voxels) of the same object, resulting in a 2D (3D) image that is partitioned into several groups of connected pixels (voxels). Doing so is necessary for follow-up interpretation steps, but it faces significant challenges. Conventionally, the segmentation process is conducted manually, where trained individuals spend months segmenting volumes on the order of cubic millimeters. Aside from the extensive labor and cost involved, such a manual process entails human error. Double-checking and triple-checking practices to eliminate such error would add to the required time and effort for conducting the manual segmentation. Existing attempts to crowdsource the manual segmentation task using volunteers face the challenge of volunteers' lack of motivation. On top of these, a major drawback of the state-of-the-art software packages that enable manual segmentation is that the user, at each point in time, has only a 2D perception of the brain sample, which acts as a prohibitive factor in the way of conducting a realistic segmentation. Automated segmentation techniques tend to address these issues, but they come short of providing a reliable solution. Conventional computer-vision-based segmentation methods, such as thresholding, are often only useful for aiding manual segmentation, and machine-learning-based segmentation algorithms suffer from a shortage of ground truth data, especially because the required data for supervised training of these algorithms must be produced manually. doi:10.1017/s143192762001555x
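The labeling step described here (assigning each pixel a label shared by all pixels of the same object) can be sketched with thresholding followed by connected-component labeling. This is a generic baseline, not the paper's method; the threshold value and 4-connectivity choice are illustrative.

```python
import numpy as np
from collections import deque

# Minimal sketch: binarize a 2D image with a preset threshold, then
# flood-fill 4-connected foreground regions so each object's pixels
# share one label, as the text describes.

def segment(image, threshold):
    """Return an integer label map for 4-connected foreground components."""
    mask = image >= threshold
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # already part of a component
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels

img = np.array([[9, 9, 0, 0],
                [9, 0, 0, 8],
                [0, 0, 8, 8]])
labels = segment(img, 5)
print(labels.max())  # 2 distinct objects
```

In 3D the same idea applies with voxels and 6-connectivity; production tools typically use optimized labeling routines rather than an explicit flood fill.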
Intensive care capacity and proper testing play a paramount role in the COVID-19 Case Fatality Rate (CFR). Nevertheless, the real impact of such important measures has not been appreciated due to the lack of proper metrics. In this work, we have proposed a method for estimating a lower bound for the number of positive cases by using the reported data on the oldest age group and the regions' population distributions. The proposed estimation method improved the expected similarity between the distribution of positive cases and the regions' populations. Further, we have provided a quantitative measure of the impact of intensive care on critical cases by comparing the CFR among those who did and did not receive intensive care. Our findings showed that the chance of survival among non-ICU receivers is less than half that of ICU receivers (~24% vs. ~60%). doi:10.1101/2020.04.22.20071498
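The ICU comparison reduces to simple rate arithmetic; a minimal sketch follows. The counts are hypothetical stand-ins chosen to reproduce the reported ~60% vs ~24% survival rates, not the paper's data.

```python
# Back-of-the-envelope sketch of the ICU vs non-ICU comparison among
# critical cases. Counts are illustrative, not from the study.

def survival_rate(survived, total):
    return survived / total

icu = survival_rate(60, 100)      # critical cases that received ICU care
non_icu = survival_rate(24, 100)  # critical cases that did not

print(icu, non_icu)               # 0.6 0.24
print(non_icu < icu / 2)          # True: less than half the ICU rate
```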
Effective use of lasers for fine machining of materials requires fine-tuning of laser-machining parameters. Based on the machining requirement and the composition of the material that needs to be ablated, proper lasering/scanning parameters must be practiced to achieve satisfactory results. Nevertheless, oftentimes, accurate a priori information about the material composition of the sample of interest is not at hand, and thus the material composition must be inferred during the laser-machining process. Existing non-trial-and-error methods that could be used for this purpose include energy-dispersive spectroscopy (EDS) and laser-induced breakdown spectroscopy (LIBS). The complexities associated with integrating such techniques with laser machining often act as a prohibitive factor in the way of using them. Herein, we report on the development of a new technique that can predict material composition while laser machining is taking place, using confocal images obtained from the surface of lasered samples together with knowledge of the lasering parameters. A multilayer fully connected neural network was trained, using a training data set, to predict the material composition of samples within a set of unseen data that had undergone laser machining followed by confocal imaging. Note that, although lasering must start before the material composition can be detected (which is also the case for LIBS), the amount of lasering needed for this purpose is minimal. doi:10.1017/s1431927621009673
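The "multilayer fully connected neural network" in this abstract maps a feature vector (confocal-image descriptors plus lasering parameters) to a material-composition class. The sketch below shows only the forward-pass structure of such a network; the layer sizes are illustrative and the weights are untrained random placeholders, not the paper's model.

```python
import numpy as np

# Structural sketch of a small fully connected classifier.
# Weights are random placeholders; in practice they would be fitted
# on labeled (confocal features + lasering parameters) examples.

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())       # subtract max for numerical stability
    return e / e.sum()

# 6 input features -> 16 hidden units -> 3 candidate material classes
W1, b1 = rng.normal(size=(16, 6)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def predict(features):
    """Forward pass: probability distribution over material classes."""
    return softmax(W2 @ relu(W1 @ features + b1) + b2)

probs = predict(rng.normal(size=6))
print(probs.shape)  # (3,)
```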
Introduction: Ultrashort-pulsed (USP) lasers offer athermal material ablation, which makes them a popular technology for fine machining jobs. Nevertheless, due to the lack of a mechanistic understanding of laser/matter interaction, laser-machining practice is often trial-and-error, with no systematic method for generating proper machining recipes. In this work, we present a model for predicting the surface properties of a sample from the lasering/scanning parameters as well as the material composition of the sample of interest. Development of such a model is the first critical step toward constructing a recipe-generator model that can prescribe the right set of lasering/scanning parameters for achieving the desired results. We established an interpolator that predicts two surface properties, depth of cut (DOC) and surface roughness (Sq), from lasering parameters and material type. doi:10.1017/s1431927621011016
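An interpolator of the kind named here maps lasering/scanning parameters to the two surface properties (DOC, Sq). The sketch below uses a plain linear least-squares fit as a stand-in for the paper's model; the feature names (power, scan speed) and all data are illustrative.

```python
import numpy as np

# Sketch: fit a linear map from lasering parameters to (DOC, Sq),
# then predict both properties for a new parameter setting.
# Synthetic noiseless data, so the fit recovers the generating map.

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 2))      # columns: power, scan speed (arbitrary units)
true_W = np.array([[3.0, 0.5],     # hypothetical linear response
                   [-1.0, 2.0]])
Y = X @ true_W                     # columns: DOC, Sq

# Least-squares fit, then prediction for one new setting.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = np.array([0.5, 0.5]) @ W    # predicted (DOC, Sq)
print(np.round(pred, 3))           # [1.   1.25]
```

A real recipe generator would invert this map, searching the parameter space for settings that achieve target DOC and Sq values.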
Abstract: Rational structure-based drug design aims at identifying ligand molecules that bind to the active site of a target molecule with high affinity (low binding free energy), to promote or inhibit certain biofunctions. Thus, it is absolutely essential that one can evaluate such affinity for the predicted molecular complexes in order to design drugs effectively. A key observation is that binding affinity is proportional to the geometric fit between the two molecules. Having a way to assess the quality of the fit enables one to rank the quality of potential drug solutions. Other than experimental methods, which involve excessive time, labor, and cost, several in silico methods have been developed in this regard. However, a main challenge of any computation-based method is that, no matter how efficient the technique, the trade-off between accuracy and speed is inevitable. Therefore, given today's computational power, one or both are often compromised. In this paper, we propose a novel analog approach to address the aforementioned limitation of computation-based algorithms, by simply taking advantage of Kirchhoff's circuit laws. The ligand and receptor are represented by 3D-printed molecular models that account for the flexibility of the ligand. Upon contact between the ligand and the receptor, an electrical current is produced that is proportional to the number of representative contact points between the two scaled-up molecular models. The affinity between the two molecules is then assessed by identifying the number of representative contact points obtainable from the measured total electrical current. The simple yet accurate proposed technique, in combination with our previously developed model, Assemble-And-Match, can be a breakthrough in the development of tools for drug design. Furthermore, the proposed technique can be practiced more broadly in any application that involves assessing the quality of geometric match between two physical objects. doi:10.1101/452367
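The circuit principle behind this abstract is worth one worked line of algebra: if each contact point closes one resistor of resistance R in parallel across a source V, then by Kirchhoff's and Ohm's laws the total current is I = n·V/R, so the contact count is recovered as n = I·R/V. The per-contact resistance and voltage below are illustrative values, not the paper's hardware.

```python
# Sketch of recovering the contact-point count from a measured current,
# assuming n identical contact resistors in parallel: I = n * V / R.

def contact_points(total_current, voltage, contact_resistance):
    """Infer the number of parallel contact points from measured current."""
    return round(total_current * contact_resistance / voltage)

V, R = 5.0, 1000.0              # source voltage, per-contact resistance (illustrative)
I = 7 * V / R                   # current a 7-contact configuration would draw
print(contact_points(I, V, R))  # 7
```

The rounding step reflects that the analog measurement is noisy but the underlying count is an integer, which is what makes the readout robust.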
Abstract: X-ray computed tomography (CT) is a powerful technique for non-destructive volumetric inspection of objects and is widely used for studying the internal structures of a large variety of sample types. The raw data obtained through an X-ray CT practice is a gray-scale 3D array of voxels. This data must undergo a geometric feature extraction process before it can be used for interpretation purposes. Such a feature extraction process is conventionally done manually, but with the ever-increasing trend of image data sizes and the interest in identifying ever more miniature features, automated feature extraction methods are sought. Given that conventional computer-vision-based methods, which attempt to segment images into partitions using techniques such as thresholding, are often only useful for aiding the manual feature extraction process, machine-learning-based algorithms are becoming popular for developing fully automated feature extraction processes. Nevertheless, machine-learning algorithms require a huge pool of labeled data for proper training, which is often unavailable. We propose to address this shortage through a data synthesis procedure. We will do so by fabricating miniature features, with known geometry, position, and orientation, on thin silicon wafer layers using a femtosecond laser machining system, followed by stacking these layers to construct a 3D object with internal features, and finally obtaining the X-ray CT image of the resulting 3D object. Given that the exact geometry, position, and orientation of the fabricated features are known, the X-ray CT image is inherently labeled and is ready to be used for training machine-learning algorithms for automated feature extraction. Through several examples, we showcase: (1) the capability of synthesizing features of arbitrary geometries and their corresponding labeled images; and (2) the use of the synthesized data for training machine-learning-based shape classifiers and feature parameter extractors. doi:10.1007/s10921-021-00758-w
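The core of this data-synthesis idea is that a volume generated from known geometry comes pre-labeled. The sketch below generates binary voxel volumes for two parametric shapes with their class labels attached; the shapes, grid size, and radius are illustrative stand-ins for the laser-fabricated features described above.

```python
import numpy as np

# Sketch: synthesize a labeled 3D voxel feature. Because the geometry
# is generated analytically, the (volume, label) pair needs no manual
# annotation, which is the point of the synthesis procedure.

def synth_feature(shape="sphere", grid=32, radius=8):
    """Return (volume, label): a binary 3D voxel array and its class label."""
    ax = np.arange(grid) - grid / 2
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    if shape == "sphere":
        vol = x**2 + y**2 + z**2 <= radius**2
    elif shape == "cube":
        vol = (np.abs(x) <= radius) & (np.abs(y) <= radius) & (np.abs(z) <= radius)
    else:
        raise ValueError(f"unknown shape: {shape}")
    return vol.astype(np.uint8), shape

vol, label = synth_feature("sphere")
print(vol.shape, label)  # (32, 32, 32) sphere
```

A training set is then just many such pairs with randomized shape parameters, optionally passed through a CT-like forward model to add realistic noise and blur.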
Author Contributions: Pouya Tavousi contributed to the development of the idea and writing of the manuscript. Reza Amin contributed to the development of the idea and writing of the manuscript. doi:10.1038/s41598-017-18151-x pmid:29339792 pmcid:PMC5770410
Nondestructive volumetric analysis of samples, enabled by X-ray computed tomography (CT), has attracted scientists and engineers from a wide spectrum of disciplines who are interested in the identification and measurement of miniature internal features of their samples. While obtaining X-ray CT images of arbitrary objects has become a straightforward procedure, which only requires adjustment of a few imaging parameters (e.g., energy, number of projections), the interpretation of the resulting 3D images is still a challenging task. For proper interpretation of an X-ray CT image, one must be able to extract well-defined geometric features from the raw data, where the raw data is a gray-scale 3D array of voxels. Conventionally, this task is performed manually by subject matter experts (SMEs). The extensive time and effort, as well as the human error, associated with manual processes call for automated methods that can extract features accurately and with high throughput. The most common approach for achieving this goal is the use of computer-vision (CV) techniques to segment the images into distinct partitions, which can then be used for extracting meaningful geometric features. For example, in thresholding, a common CV technique, intensity values and a preset thresholding constant are used to assign a label to each pixel (voxel) in the 2D (3D) image. Such a label is shared among all the pixels (voxels) of the same partition, and the result of the segmentation process is a 2D (3D) image partitioned into several groups of connected pixels (voxels). Although CV techniques may offer an automated process in the absence of image noise (i.e., features that are of no interest), their performance drops drastically in the presence of noise, which is prevalent in any image obtained from an X-ray CT practice. The produced noise can be mitigated, but not completely removed. Therefore, in practice, CV methods are only used to assist the manual feature extraction process and cannot provide a fully automated one. The success of machine learning (ML) algorithms in automating tasks that are not analytically well-defined promises use of these methods for automated feature extraction, as a superior alternative to CV-based methods. The idea is to train a machine learning algorithm with sufficient ground truth data and then use it for automated feature extraction.
Here, the ground truth data are obtained from labeled X-ray CT images. Each data point consists of: (1) raw data in the form of a gray-scale 3D array of voxels and (2) the corresponding feature. The caveat is that proper training of a machine learning algorithm demands huge amounts of labeled data. This is a multifold challenge. The necessity of labeling the raw data manually makes this process extremely tedious, if not impractical. In addition, the labeling process is subjective, with different outcomes expected from different SMEs. Further, such a manual process is subject to error; double-checking and triple-checking practices to eliminate such error would add to the required time and effort. doi:10.1017/s1431927620020498
Image formation in the scanning electron microscope (SEM) is a complicated process that starts with the interaction between the incident electron beam and the sample being imaged. The physics of electron-matter interaction has been described and modeled by MC-Xray in the past, although only recently has that modeling been integrated into a user-friendly software framework, Dragonfly, and accelerated and extended from 0-dimensional point simulations to 2D simulations that mimic the 2D rastering of the e-beam in an SEM. Understanding image formation and forward-modeling of images in the SEM offers two benefits. First, knowing a priori how variations in microscope conditions affect contrast and resolution permits a microscopist to optimize imaging parameters in advance. Second, having a realistic phantom paired with the corresponding forward-modeled images equips machine learning researchers with suitable ground truth for training deep learning and other artificial intelligence models for automatic segmentation of the microstructure observed in new micrographs. We show here image simulation of 2D micrographs from a simple 3D phantom structure of heavy-metal-stained, resin-embedded biological tissue and demonstrate that the simulated images strongly resemble the experimental micrographs, both qualitatively and quantitatively. The forward modeling requires a phantom, or digital structure, of known geometry and material composition. In this work, we derive a 3-phase phantom from SEM images of murine rod internal segments, prepared by high-pressure freezing and freeze-substitution with osmium tetroxide, followed by Durcupan resin embedding at room temperature. This sample was selected because it demonstrates some of the structural complexity of stained biological samples, while the image histogram suggests that the material composition is rather simple. We infer only three major phases: stained membranes at the highest backscattered electron (BSE) signal, a lower BSE signal for the intracellular and intra-organelle areas, and an even lower BSE signal for the extracellular space (see Figure 1A). A stack of 2D images was collected by FIB-SEM serial sectioning on a Zeiss Crossbeam, using the energy-selective backscatter (EsB) detector. The stack of images was segmented into 3 phases: plasma membrane and organelle membranes, intracellular, and extracellular resin.
The input conditions for the simulation include the total count and the landing voltage of the incident electrons at every point along the surface of the 3D phantom. We match the experimental landing voltage (1.5 keV) and use a similar fluence of electrons. Because our simulation results encode the energy of the backscattered electrons, we simulate the experimental EsB detector by choosing not to accumulate in our image any of the electrons whose energy is below a filter threshold (1.0 keV). The remaining free parameter of the simulation is the elemental weight-fraction for each of the phases encoded in our 3D phantom. The precise elemental weight-fraction in biological samples like these is difficult to determine experimentally. For our simulations, we visually observed image contrast and noise similar to the experimental images when we used an osmium weight-fraction of approximately 20%. To further refine the quality of our simulation, we varied two parameters: (1) the weight-fraction of osmium, from 10% to 40%, and (2) the noise level in the phantom; that noise level allows us to encode non-uniform osmium binding throughout the three material phases. We find that a 20% weight-fraction of osmium is qualitatively similar to the experimental images. doi:10.1017/s1431927621003093
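The EsB filtering step described above (accumulating only backscattered electrons above a 1.0 keV threshold) reduces to a simple selection over simulated electron energies. The sketch below is illustrative: the energy distribution is a random placeholder, not MC-Xray output; only the 1.0 keV threshold comes from the text.

```python
import numpy as np

# Sketch of energy-selective backscatter (EsB) accumulation: electrons
# below the filter threshold are discarded, the rest are counted into
# the pixel's signal. Simulated energies here are uniform placeholders.

rng = np.random.default_rng(1)
energies_kev = rng.uniform(0.0, 1.5, size=10_000)  # per-electron BSE energy

threshold_kev = 1.0                                # EsB filter from the text
accumulated = energies_kev[energies_kev >= threshold_kev]
signal = accumulated.size                          # counts for this pixel

print(signal < energies_kev.size)  # True: some electrons were filtered out
```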
Scanning helium ion microscopy (HIM) offers superior resolution (down to 0.5 nm) and depth of field (up to 5× greater) compared to scanning electron microscopy (SEM), thanks to high source brightness, low energy spread, and small diffraction effects. HIM can also accommodate nonconductive samples without the use of conductive coatings. Compared to other ion beams, it causes less ion damage during imaging due to the light mass of helium ions. However, one of the drawbacks of HIM is its speed. On average, HIM is 5 times slower than SEM, which can be a deterring factor when larger areas need to be imaged. In this study, we explore the use of the point spread function (PSF) deconvolution method as a means to speed up the HIM imaging process. PSF deconvolution has successfully been used to restore SEM images, but its effectiveness on ion-based imaging is less explored. Here we have studied whether faster HIM imaging using shorter dwell times can be restored by PSF deconvolution. We have also assessed the quality of images using quantitative methods. doi:10.1017/s1431927620019133
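PSF deconvolution of the kind this study applies can be sketched with a Wiener filter in the Fourier domain. This is a generic restoration sketch, not the study's pipeline: the Gaussian PSF and the regularization constant `k` are illustrative assumptions.

```python
import numpy as np

# Sketch of Wiener deconvolution: given an image blurred by a known
# PSF, divide out the PSF's transfer function in frequency space,
# with a small constant k regularizing against noise amplification.

def gaussian_psf(shape, sigma):
    """Origin-centered (wrap-around) Gaussian PSF, normalized to unit sum."""
    x = np.fft.fftfreq(shape[0]) * shape[0]
    y = np.fft.fftfreq(shape[1]) * shape[1]
    xx, yy = np.meshgrid(x, y, indexing="ij")
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Restore an image blurred by `psf`; k trades sharpness vs noise."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Blur a simple test image with the known PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
print(np.abs(restored - img).mean() < np.abs(blurred - img).mean())
```

For short-dwell-time images the same machinery applies, but `k` must be raised to match the higher shot noise, which is the accuracy/speed trade the study quantifies.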