The television series Sherlock stages a contemporary Holmes who is indebted, in many respects, to the computer culture that has developed over the past twenty years. ... Complete Sherlock Holmes, London, 2009, pp. 867-980. "The Adventure of the Bruce-Partington Plans" (1917a). (1927), The Case-Book of Sherlock Holmes, reprinted in The Penguin Complete Sherlock Holmes ... Sherlock and Elementary thus present a Sherlock Holmes inseparable from his smartphone, which he uses as much to communicate as to consult the Internet. ...
doi:10.18192/analyses.v10i3.1411
In the appendix to Naming and Necessity, Kripke espouses the view that necessarily, Sherlock Holmes is not a person. To date, no compelling argument has been extracted from Kripke's remarks. ... The second is that "Sherlock Holmes" is a name. The third is that "Sherlock Holmes" does not refer to an actual person. ... On Stone's view, we treat "Sherlock Holmes" similarly. ...
doi:10.1111/phib.12039
Before, during, and after the interview Michael "Sherlock" Downing and yours truly, the digitally befuddled Dr. ... David Anderson is Professor of English (retired) at Butler County Community College. He is a native Pittsburgher and has studied the life and works of August Wilson for the past three decades. ...
doi:10.5195/awj.2019.48
Sherlock Holmes, although a fictional character, remains renowned as a great detective. ... INTRODUCTION Sherlock Holmes is renowned as a great detective; indeed, the paradigm (Risinger, 2006). His particular skills are said to have been in deduction. ... After all, as Sherlock Holmes said to Dr Watson in A Study in Scarlet: "They say that genius is an infinite capacity for taking pains, . . ." ...
doi:10.1350/ijps.2009.11.2.123
How could Sir Arthur Conan Doyle, a major figure in detective fiction since the creation of Sherlock Holmes, let himself be taken in by photographs faked by two young girls, depicting ... Abstract: This sounds like a paradox: Sir Arthur Conan Doyle, a major figure in the crime novel since the creation of Sherlock Holmes, being fooled by fake photographs, representing fairies, made by ... none other than Sir Arthur Conan Doyle, made famous by the creation of Sherlock Holmes. ...
doi:10.5902/2179219434982
Broad & David E. Sanger, Race for Latest Class of Nuclear Arms Threatens to Revive Cold War, N.Y. ... Treaty Doc. 103-39 (1994); DAVID ANDERSON, MODERN LAW OF THE SEA: SELECTED ESSAYS 49-61 (2008) [hereinafter ANDERSON ESSAYS]. 126. ...
doi:10.2139/ssrn.2884179
The Sherlock platform provides a simple interface to leverage big-data technologies such as Docker and PrestoDB. ... Sherlock is designed to analyse, process, query and extract information from extremely complex and large data sets. ... The minimal requirements for Sherlock depend on the use case and on where Sherlock will be deployed. ...
doi:10.12688/f1000research.52791.1
Transcriptome sequencing (RNA-Seq) has become the assay of choice for high-throughput studies of gene expression. However, as is the case with microarrays, major technology-related artifacts and biases affect the resulting expression measures. Normalization is therefore essential to ensure accurate inference of expression levels and subsequent analyses thereof. Results: We focus on biases related to GC-content and demonstrate the existence of strong sample-specific GC-content effects on RNA-Seq read counts, which can substantially bias differential expression analysis. We propose three simple within-lane gene-level GC-content normalization approaches and assess their performance on two different RNA-Seq datasets, involving different species and experimental designs. Our methods are compared to state-of-the-art normalization procedures in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq. Conclusions: Our within-lane normalization procedures, followed by between-lane normalization, reduce GC-content bias and lead to more accurate estimates of expression fold-changes and tests of differential expression. Such results are crucial for the biological interpretation of RNA-Seq experiments, where downstream analyses can be sensitive to the supplied lists of genes.
doi:10.1186/1471-2105-12-480 pmid:22177264 pmcid:PMC3315510
Hosted by The Berkeley Electronic Press
... expression (DE) results as well as downstream analyses, such as those involving Gene Ontology (GO). As GC-content varies throughout the genome and is often associated with functionality, it may be difficult to infer true expression levels from biased read count measures. Proper normalization of read counts is therefore crucial to allow accurate inference of differences in expression levels. Herein, we distinguish between two main types of effects on read counts: (1) within-lane gene-specific (and possibly lane-specific) effects, e.g., related to gene length or GC-content, and (2) effects related to between-lane distributional differences, e.g., sequencing depth. Accordingly, within-lane and between-lane normalization adjust for the first and second types of effects, respectively.

Within-lane normalization
The most obvious and well-known selection bias in RNA-Seq is due to gene length.
Bullard et al. and Oshlack & Wakefield show that scaling counts by gene length is not sufficient for removing this bias and that the power of common tests of differential expression is positively correlated with both gene length and expression level. Indeed, the longer the gene, the higher the read count for a given expression level; thus, any method for which precision is related to read count will tend to report more significant DE statistics for longer genes, even when considering per-base read counts. Hansen et al. incorporate length effects on the mean of a Poisson model for read counts using natural cubic splines and adjust for this effect using robust quantile regression. Young et al. propose a method that accounts for gene length bias in Gene Ontology analysis after performing DE tests. Another documented source of bias for the Illumina sequencing technology is GC-content, i.e., the proportion of G and C nucleotides in a region of interest. Several authors have reported strong GC-content biases in DNA-Seq [7, 10] and ChIP-Seq. Yoon et al. propose a GC-content normalization method for DNA copy number studies, which involves binning reads in 100-bp windows and scaling bin-level read counts by the ratio between the overall median and the median for bins with the same GC-content. More recently, Boeva et al. propose a polynomial regression approach, based on binning reads in non-overlapping windows and regressing bin-level counts on GC-content (with a default polynomial degree of three). Still in the context of DNA-Seq, Benjamini & Speed report that read counts are most affected by the GC-content of the actual DNA fragments from the sequence library (vs. that of the sequenced reads themselves) and that the effect of GC-content is sample-specific and unimodal, i.e., both GC-rich and GC-poor fragments are under-represented.
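The bin-median scaling idea described above for GC-content normalization of DNA copy number data can be sketched as follows. This is a minimal illustration of the ratio-of-medians principle, not the published implementation; the function name and the use of NumPy are my own assumptions.

```python
import numpy as np

def gc_median_scale(bin_counts, bin_gc):
    """Scale per-bin read counts (e.g., 100-bp windows) by the ratio of the
    overall median count to the median count of bins with the same GC-content.

    bin_counts: 1-D array of read counts per genomic bin.
    bin_gc:     1-D array of GC fractions per bin, discretized (e.g., to 0.01).
    """
    bin_counts = np.asarray(bin_counts, dtype=float)
    bin_gc = np.asarray(bin_gc)
    overall_median = np.median(bin_counts)
    scaled = np.empty_like(bin_counts)
    for gc in np.unique(bin_gc):
        mask = bin_gc == gc
        gc_median = np.median(bin_counts[mask])
        # Guard against GC strata with zero median counts.
        factor = overall_median / gc_median if gc_median > 0 else 1.0
        scaled[mask] = bin_counts[mask] * factor
    return scaled
```

After scaling, bins in GC-rich and GC-poor strata are brought to a common median, which is the sense in which the method "removes" the GC effect within a sample.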
They develop a method for estimating and correcting for GC-content bias that works at the base-pair level and accommodates library, strand, and fragment length information, as well as varying bin sizes throughout the genome (http://biostats.bepress.com/ucbbiostat/paper291). Sequence composition biases have also been observed in RNA-Seq. Hansen et al. report large and reproducible base-specific read biases associated with random hexamer priming in Illumina's standard library preparation protocol. The bias takes the form of patterns in the nucleotide frequencies of the first dozen or so bases of a read. They provide a re-weighting scheme, where each read is assigned a weight based on its nucleotide composition, to mitigate the impact of the bias and improve the uniformity of reads along expressed transcripts. Roberts et al. also consider the problem of non-uniform cDNA fragment distribution in RNA-Seq and use a likelihood-based approach for correcting for this fragment bias. When analyzing RNA-Seq data from a yeast diploid hybrid for allele-specific expression (ASE), Bullard et al. note that read counts from an orthologous pair of genes might overestimate the expression level of the more GC-rich ortholog. To correct for this confounding effect, they develop a resampling-based method where the significance of differences in read counts is assessed by reference to a null distribution that accounts for between-species differences in nucleotide composition. While there has been general agreement about the need to adjust for GC-content effects when comparing read counts between genomic regions for a given sample (as in DNA-Seq and ChIP-Seq) or between orthologs (as in ASE with RNA-Seq in an F1 hybrid organism), the need to do so was not immediately recognized for standard RNA-Seq DE studies, where one compares read counts between samples for a given gene.
The common belief was that, for a given gene, the GC-content effect was the same across samples and hence would cancel out when considering DE statistics such as count ratios. Pickrell et al. seem to be the first to note the sample-specificity of the GC-content effect in the context of RNA-Seq and the resulting confounding of expression fold-change estimates. To address this problem, they developed a lane-specific correction procedure which involves binning exons according to GC-content, defining for each GC-bin and each lane a relative read enrichment factor as the proportion of reads in that bin originating from that lane divided by the overall proportion of reads in that lane, and scaling exon-level counts by the spline-smoothed enrichment factors. As noted by Hansen et al., this approach suffers from two main drawbacks. Firstly, as the enrichment factors are computed for each lane relative to all others, the procedure equalizes the GC-content effect across lanes instead of removing it. Secondly, by adding counts across exons and lanes, the method does not account for the fact that regions with higher counts also tend to have higher variances. Zheng et al. note that base-level read counts from RNA-Seq may not be randomly distributed along the transcriptome and can be affected by local nucleotide composition. They propose an approach based on generalized additive models to simultaneously correct for different sources of bias, such as gene length, GC-content, and dinucleotide frequencies. In their recent manuscript, Hansen et al. show that GC-content has a strong impact on expression fold-change estimation and that failure to adjust for this effect can mislead differential expression analysis. They develop a conditional quantile normalization (CQN) procedure, which combines both within- and between-lane normalization and is based on a Poisson model for read counts.
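The relative read enrichment factors described above, computed per GC-bin and per lane, can be sketched as below. This is a simplified illustration that omits the spline-smoothing step; the function name and matrix layout are my own assumptions.

```python
import numpy as np

def lane_enrichment_factors(counts, gc_bins):
    """Relative read enrichment factors per GC-bin and lane.

    counts:  (exons x lanes) matrix of exon-level read counts.
    gc_bins: GC-content bin index for each exon.
    Returns an (n_bins x lanes) matrix whose entry (b, l) is the proportion
    of bin-b reads originating from lane l, divided by the overall
    proportion of reads originating from lane l.
    """
    counts = np.asarray(counts, dtype=float)
    gc_bins = np.asarray(gc_bins)
    lane_totals = counts.sum(axis=0)
    overall_prop = lane_totals / lane_totals.sum()
    bins = np.unique(gc_bins)
    factors = np.empty((len(bins), counts.shape[1]))
    for i, b in enumerate(bins):
        bin_counts = counts[gc_bins == b].sum(axis=0)  # reads per lane in this bin
        factors[i] = (bin_counts / bin_counts.sum()) / overall_prop
    return factors
```

A factor above 1 for a (bin, lane) pair means that lane is over-represented in that GC stratum relative to the other lanes, which is why scaling by these factors equalizes, rather than removes, the GC effect.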
Lane-specific systematic biases, such as GC-content and length effects, are incorporated as smooth functions using natural cubic splines and estimated using robust quantile regression. In order to account for distributional differences between lanes, a full-quantile normalization procedure is adopted, in the spirit of that considered in Bullard et al. The main advantage of this approach is that it is lane-specific, i.e., it works independently in each lane, aiming to remove the bias rather than equalize it across lanes. Simultaneously modeling GC-content and length (and in principle other sources of bias) leads to a flexible normalization method. On the other hand, for some datasets, such as the Yeast dataset analysed in the present article, a regression approach may be too weak to completely remove the GC-content effect, and other more aggressive normalization strategies may be needed.

Between-lane normalization
The simplest between-lane normalization procedure adjusts for lane sequencing depth by dividing gene-level read counts by the total number of reads per lane (as in the multiplicative Poisson model of Marioni et al. and the Reads Per Kilobase of exon model per Million mapped reads (RPKM) of Mortazavi et al.). However, this still widely-used approach has proven ineffective, and more beneficial procedures have been proposed [3, 12, 21, 22]. In particular, Bullard et al. consider three main types of between-lane normalization procedures: (1) global-scaling procedures, where counts are scaled by a single factor per lane (e.g., total count as in RPKM, count for a housekeeping gene, or a single quantile of the count distribution); (2) full-quantile (FQ) normalization procedures, where all quantiles of the count distributions are matched between lanes; and (3) procedures based on generalized linear models (GLM).
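The RPKM measure mentioned above is a simple global-scaling formula: counts divided by gene length in kilobases and by lane depth in millions of mapped reads. A one-line sketch (the function name is my own):

```python
def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase of exon model per Million mapped reads:
    the raw count scaled by gene length (kb) and sequencing depth (millions)."""
    return read_count / ((gene_length_bp / 1e3) * (total_mapped_reads / 1e6))
```

For example, 1,000 reads on a 2-kb gene in a lane of 10 million mapped reads gives an RPKM of 50.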
They demonstrate the large impact of normalization on differential expression results; in some contexts, sensitivity varies more between normalization procedures than between DE methods. Standard total-count normalization (cf. RPKM) tends to be heavily affected by a relatively small proportion of highly-expressed genes and can lead to biased DE results, while the upper-quartile (UQ) or full-quantile normalization procedures proposed therein tend to be more robust and improve sensitivity without loss of specificity. In this article, we propose three different strategies to normalize RNA-Seq data for GC-content, following a within-lane (i.e., sample-specific) gene-level approach. We examine their performance on two different types of data: a new RNA-Seq dataset for yeast grown in three different media and well-known benchmarking RNA-Seq datasets for two types of human reference samples from the MicroArray Quality Control (MAQC) Project. For the latter datasets, the gene expression measures from qRT-PCR and Affymetrix chips serve as useful standards for performance assessment of RNA-Seq. We compare our approaches to the state-of-the-art CQN procedure of Hansen et al. (which was shown to outperform competing methods such as that of Pickrell et al.), in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. We demonstrate how properly correcting for GC-content bias, as well as for between-lane differences in count distributions, leads to more accurate estimation of gene expression levels and fold-changes, making statistical inference of differential expression less prone to false discoveries. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq.
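The full-quantile (FQ) between-lane normalization discussed above can be sketched as follows: each lane's sorted counts are replaced by the mean of the sorted counts across lanes, then restored to the lane's original rank order, so that all lanes end up with identical count distributions. This is a generic quantile-normalization sketch under assumed NumPy conventions (ties are broken arbitrarily), not the EDASeq implementation itself.

```python
import numpy as np

def full_quantile_normalize(counts):
    """Match all quantiles of per-lane count distributions.

    counts: (genes x lanes) matrix of read counts.
    Returns a matrix in which every lane (column) has the same distribution:
    the gene ranked k-th within a lane receives the mean of the k-th order
    statistics across all lanes.
    """
    counts = np.asarray(counts, dtype=float)
    order = np.argsort(counts, axis=0)          # positions that would sort each lane
    ranks = np.argsort(order, axis=0)           # rank of each gene within its lane
    mean_quantiles = np.sort(counts, axis=0).mean(axis=1)  # reference distribution
    return mean_quantiles[ranks]
```

Because only ranks within each lane are preserved, FQ is more aggressive than global scaling: it removes all between-lane distributional differences, not just differences in total count.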
Methods

Data
We benchmark our proposed normalization methods on two different types of data: a new RNA-Seq dataset for yeast grown in three different media and the MAQC RNA-Seq datasets. The Yeast dataset addresses a "real" biological question, while the MAQC datasets are rather "artificial", but have the advantage of including qRT-PCR and Affymetrix chip measures for comparison with RNA-Seq. The different experimental designs allow the study of different types of technical and biological effects. By technical replicate lanes, we refer to lanes assaying libraries that differ only by virtue of the sequencing assay (i.e., library preparation, flow-cell, lane), not in terms of the biology (i.e., growth condition or culture for the Yeast dataset, UHR vs. Brain for the MAQC-2 dataset). By biological replicate lanes, we refer to lanes assaying libraries that are distinct independently of/prior to the sequencing assay (i.e., libraries Y1, Y2, Y4, and Y7, for different cultures of the same yeast strain under the same growth condition for the Yeast dataset). There are therefore different levels/types of technical replication, depending on which aspect of the assay is varied (i.e., library preparation, flow-cell, lane). Likewise, there are different levels/types of biological replication. Furthermore, it is possible for biological effects to be confounded with technical effects, as is the case with culture and library preparation effects for the Yeast dataset. The MAQC datasets are useful mainly for examining technical effects, i.e., for understanding the biases and variability introduced at various stages of the assay, as was done in Bullard et al. The Yeast dataset allows the study of both technical and biological effects of interest.
Yeast dataset
Illumina's Genome Analyzer II high-throughput sequencing system was used to sequence RNA from Saccharomyces cerevisiae grown in three different media: standard YP Glucose (YPD, a rich medium), Delft Glucose (Del, a minimal medium), and YP Glycerol (Gly, which contains a non-fermentable carbon source in which cells respire rather than ferment). Specifically, yeast (diploid S288c) were grown at 25 °C to approximately 1-2e7 cells/ml, as determined by a Beckman Coulter Z2 Particle Count and Size Analyzer. Cells were harvested by filtration, frozen in liquid nitrogen, and kept at −80 °C until RNA extraction and purification. RNA was extracted from the cells using a slightly modified version of the traditional hot phenol protocol, followed by ethanol precipitation and washing. Briefly, 5 ml of lysis buffer (10 mM EDTA pH 8.0, 0.5% SDS, 10 mM Tris-HCl pH 7.5) and 5 ml of acid phenol were added to frozen cells and incubated at 60 °C for 1 hour, with occasional vortexing, then placed on ice. The aqueous phase was extracted after centrifuging, and additional phenol extraction steps were performed as needed, followed by a chloroform extraction. Total RNA was precipitated from the final aqueous solution, with 10% volume 3 M sodium acetate pH 5.2 and ethanol, and resuspended in nuclease-free water. Residual DNA was removed from the RNA preparations using the Turbo DNA-free kit (Applied
Microwave ablation (MWA) is increasingly utilized in the treatment of hepatic tumours. Promising single-centre reports have demonstrated its safety and efficacy, but this modality has not been studied in a prospective, multicentre study. Methods: Eighteen international centres recorded operative and perioperative data for patients undergoing MWA for tumours of any origin in a voluntary Internet-based database. All patients underwent operative MWA using a 2.45-GHz generator with a 5-mm antenna. Results: Of the 140 patients, 114 (81.4%) were treated with MWA alone and 26 (18.6%) were treated with MWA combined with resection. Multiple tumours were treated with MWA in 40.0% of patients. A total of 299 tumours were treated in these 140 patients. The median size of ablated lesions was 2.5 cm (range: 0.5-9.5 cm). Tumours were treated with a median of one application (range: 1-6 applications) for a median of 4 min (range: 0.5-30.0 min). A power setting of 100 W was used in 78.9% of cases. Major morbidity was 8.3% and in-hospital mortality was 1.9%. Conclusions: These multi-institution data demonstrate rapid ablation time and low morbidity and mortality rates in patients undergoing operative MWA with a high rate of multiple ablations and concomitant hepatic resection. Long-term follow-up will be required to determine the efficacy of MWA relative to other forms of ablative therapy.
doi:10.1111/j.1477-2574.2011.00338.x pmid:21762302 pmcid:PMC3163281
PURPOSE: To describe the results of an audit of patients who received epidural analgesics postoperatively and the subsequent development of a formal acute pain management service in a community hospital. METHODS: To understand how epidural analgesia was being used to treat postoperative pain at the Peterborough Regional Health Centre, Peterborough, Ontario, a retrospective chart review was performed. Audits were performed on 178 patients who had received epidural analgesia postoperatively from October 1994 to May 1995. Data pertaining to demographics, epidural analgesia, pain scores and side effects were collected. RESULTS: Sixty-one per cent of patients received bupivacaine/fentanyl infusions, and 39% received epidural morphine boluses. More than 60% of patients reported no pain postoperatively. Patients who received bupivacaine/fentanyl were more likely than those who received epidural morphine to also receive coanalgesia and transitional analgesia. Patients who received epidural morphine were more likely than those who received bupivacaine/fentanyl to experience respiratory depression, hypotension and pruritus. Patients were followed by the anesthesiologist who provided the anesthetic. Anesthesiologists practised independently, and formal policies and procedures did not exist. CONCLUSIONS: As a result of the audit, an acute pain management service was developed. This included a team that did daily rounds and consisted of a nurse clinician and an anesthesiologist who was assigned to the service on a weekly basis. A committee was created, and formalized policies and procedures were established. Standardized order sheets, data sheets and a computerized database were developed. Reports for administrative and quality improvement purposes were generated monthly. Education programs were developed. Coanalgesia and transitional analgesia are now part of routine care, and epidural catheter placement close to the site of incision is encouraged. A postoperative nausea and vomiting algorithm, and a treatment regimen for pruritus, have also been implemented.
doi:10.1155/2001/539804 pmid:11854757
Budd-Chiari syndrome is characterized by obstruction of the suprahepatic veins, leading to post-sinusoidal portal hypertension that often evolves to hepatic failure. It is usually related to prothrombotic conditions, such as thrombophilia, myeloproliferative diseases or paroxysmal nocturnal hemoglobinuria. Spontaneous remissions are rare, and less than a third of patients survive one year without treatment. We recommend that anticoagulation should be started as soon as possible with full-dose subcutaneous heparin, postponing warfarin therapy until substantial improvement of ascites and liver congestion. This approach optimizes anticoagulation, decreasing the chances of bleeding. Since January 2000, among 350 patients followed at the Anticoagulation Clinic, three fulfilled the criteria for primary Budd-Chiari syndrome and were started on a scheduled anticoagulation protocol. During three to ten years of follow-up, suprahepatic thrombosis completely resolved in all patients and hepatic function normalized without resorting to invasive procedures or liver transplantation. Neither recurrence of thrombotic events nor serious bleeding events were documented. Scheduled anticoagulation is safe and improves patients' outcomes.
doi:10.1016/0002-9343(67)90178-7 pmid:6061264
She leaves a husband, David. ...
doi:10.1136/bmj.317.7169.1391 pmid:9882113
The Sherlock Holmes symposia have been educating haematologists on the need for prompt recognition, diagnosis and treatment of rare haematological diseases for 10 years. ... The Sherlock Holmes symposia programme includes real-life interactive clinical cases of rare haematological disorders that require awareness from the physician to be diagnosed at an early stage. ... Over the last 10 years, the Sherlock Holmes symposia have raised awareness of rare haematological diseases among thousands of haematologists around the world. ...
doi:10.17925/eoh.2016.12.01.55
With rising incidence and the emergence of effective treatment options, the management of hepatocellular carcinoma (HCC) is a complex multidisciplinary process. There is still little consensus and uniformity about clinicopathological staging systems. Resection and liver transplantation have been the cornerstone of curative surgical treatment, with the recent emergence of ablative techniques. Improvements in diagnostics, surgical techniques, and postoperative care have led to dramatically improved results over the years. The most appropriate treatment plan has to be individualised and depends on a variety of patient- and tumour-related factors. Very small HCCs discovered on surveillance have the best outcomes. Patients with advanced cirrhosis and tumours within the Milan criteria should be offered transplantation. Resection is best for small solitary tumours with preserved liver function. Ablative techniques are suitable for low-volume tumours in patients unfit for either resection or transplantation. The role of downstaging and bridging therapy is not clearly established.
doi:10.4061/2011/686074 pmid:21994867 pmcid:PMC3170839
When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment.
doi:10.1186/1471-2105-5-141 pmid:15458579 pmcid:PMC523853