Illuminated Decision Trees with Lucid
[article]
<span title="2019-09-03">2019</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
The Lucid methods described by Olah et al. (2018) provide a way to inspect the inner workings of neural networks trained on image classification tasks using feature visualization. Such methods have generally been applied to networks trained on visually rich, large-scale image datasets like ImageNet, which enables them to produce enticing feature visualizations. To investigate these methods further, we applied them to classifiers trained to perform the much simpler (in terms of dataset size and visual richness), yet challenging task of distinguishing between different kinds of white blood cell from microscope images. Such a task makes generating useful feature visualizations difficult, as the discriminative features are inherently hard to identify and interpret. We address this by presenting the "Illuminated Decision Tree" approach, in which we use a neural network trained on the task as a feature extractor, then learn a decision tree based on these features, and provide Lucid visualizations for each node in the tree. We demonstrate our approach with several examples, showing how this approach could be useful both in model development and debugging, and when explaining model outputs to non-experts.
arXiv:1909.05644v1 (https://arxiv.org/abs/1909.05644v1)
fatcat:czkyxo5aj5dwlkjxysmdugywbq (https://fatcat.wiki/release/czkyxo5aj5dwlkjxysmdugywbq)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929203158/https://arxiv.org/pdf/1909.05644v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/54/5d/545d81bf7faa4b8bba44f157ee9b441dc9204e17.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.05644v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
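The abstract above describes a concrete three-step pipeline: use the trained network as a feature extractor, fit a decision tree over those features, and render a Lucid-style visualization for each split. A minimal sketch of that pipeline, assuming a trained `feature_extractor` callable and scikit-learn; the helper names are illustrative assumptions, not the paper's code:

```python
# Minimal sketch of the "Illuminated Decision Tree" pipeline described above.
# Assumes a trained CNN `feature_extractor` (e.g. its penultimate layer) and a
# Lucid-style visualization helper; neither is taken from the paper's code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def illuminated_decision_tree(feature_extractor, images, labels, max_depth=4):
    # 1) Use the trained network as a fixed feature extractor.
    features = np.stack([feature_extractor(img) for img in images])

    # 2) Learn a shallow, human-readable decision tree on those features.
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(features, labels)

    # 3) For each internal node, record which feature unit it splits on, so a
    #    Lucid-style feature visualization can be rendered for that unit.
    nodes = []
    for node_id in range(tree.tree_.node_count):
        feat = tree.tree_.feature[node_id]
        if feat >= 0:  # scikit-learn marks leaf nodes with -2
            nodes.append((node_id, feat, tree.tree_.threshold[node_id]))
    return tree, nodes  # visualizing each split's feature "illuminates" the tree
```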
Stakeholders in Explainable AI
[article]
<span title="2018-09-29">2018</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
In our own recent work, we examined explainability and interpretability from the perspective of explanation recipients, of six kinds (Tomsett et al. 2018): system creators, system operators, executors ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.00184v1">arXiv:1810.00184v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/izkonor3urg3recwz4gdvoxnu4">fatcat:izkonor3urg3recwz4gdvoxnu4</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191025191257/https://arxiv.org/pdf/1810.00184v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/76/75/7675f33ced0353c7eaadf3fe2d236d76be2addce.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1810.00184v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
Mapping neuron positions to curved cortical surfaces
[article]
<span title="2014-05-14">2014</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Also, I would like to gratefully thank Richard Tomsett, who always intuitively and patiently gave technical help and support to my project from the proposal phase to the code-scripting period. ...
Preparation: The simulations' model composition, model size, coordinate system and neuron-network topology are adopted from Tomsett et al. (Tomsett RJ). ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1405.3333v1">arXiv:1405.3333v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qq2f2p52svafvasaxrw4lqdepi">fatcat:qq2f2p52svafvasaxrw4lqdepi</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825111222/https://arxiv.org/ftp/arxiv/papers/1405/1405.3333.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/c3/b6/c3b60f43c55269512da0a0d71b3e335eb28888ec.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1405.3333v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
Sanity Checks for Saliency Metrics
[article]
<span title="2019-11-29">2019</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Saliency maps are a popular approach to creating post-hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights important pixels. Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their "fidelity"). We therefore investigate existing metrics for evaluating the fidelity of saliency methods (i.e. saliency metrics). We find that there is little consistency in the literature in how such metrics are calculated, and show that such inconsistencies can have a significant effect on the measured fidelity. Further, we apply measures of reliability developed in the psychometric testing literature to assess the consistency of saliency metrics when applied to individual saliency maps. Our results show that saliency metrics can be statistically unreliable and inconsistent, indicating that comparative rankings between saliency methods generated using such metrics can be untrustworthy.
arXiv:1912.01451v1 (https://arxiv.org/abs/1912.01451v1)
fatcat:monr3t6p7zcvzjb5zfcc6zo4hm (https://fatcat.wiki/release/monr3t6p7zcvzjb5zfcc6zo4hm)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200927130914/https://arxiv.org/pdf/1912.01451v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/5b/e5/5be5f9660410c0aa23416ca005737861879c72dd.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.01451v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
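For context on what a saliency "fidelity" metric computes, here is a minimal sketch of one common deletion-style metric, assuming a generic `model` callable that returns class scores; the baseline value and step size are exactly the kind of unstandardised choices the abstract says can change the measured fidelity:

```python
# Minimal sketch of a deletion-style saliency fidelity metric: remove the
# most-salient pixels first and track how the class score falls. The baseline
# value and step size are unstandardised design choices (the inconsistency the
# abstract highlights); `model` is any callable returning per-class scores.
import numpy as np

def deletion_metric(model, image, saliency, target_class, steps=20, baseline=0.0):
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    perturbed = image.copy()
    scores = [model(perturbed)[target_class]]
    per_step = max(1, (h * w) // steps)
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        perturbed[ys, xs] = baseline  # "delete" the next chunk of pixels
        scores.append(model(perturbed)[target_class])
    # Lower area under this curve suggests the map ranked pixels "well".
    return np.trapz(scores) / len(scores)
```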
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
[article]
<span title="2018-06-20">2018</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Correspondence to: Richard Tomsett <rtomsett@uk.ibm.com>. 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden. Copyright by the author(s). ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1806.07552v1">arXiv:1806.07552v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7lq432d7tjhodmssobcoky6i44">fatcat:7lq432d7tjhodmssobcoky6i44</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200911164343/https://arxiv.org/ftp/arxiv/papers/1806/1806.07552.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/2f/72/2f72e1c9ce526de4aaed2e5c10d4af99c1dccd47.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1806.07552v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
'Take Ten' improving the surgical post take ward round
<span title="">2015</span>
<i title="Elsevier BV">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ucfvmf67ybdrdogcptksyifate" style="color: black;">International Journal of Surgery</a>
</i>
Tomsett, S. Richards. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.ijsu.2015.07.304">doi:10.1016/j.ijsu.2015.07.304</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u4asopg6hfdbvbitz5wignhrnu">fatcat:u4asopg6hfdbvbitz5wignhrnu</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170930084336/http://publisher-connector.core.ac.uk/resourcesync/data/elsevier/pdf/8b9/aHR0cDovL2FwaS5lbHNldmllci5jb20vY29udGVudC9hcnRpY2xlL3BpaS9zMTc0MzkxOTExNTAwNjcwNg%3D%3D.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/d5/fc/d5fc7e82954ddea24fdad8887567f460974a0c73.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.ijsu.2015.07.304">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
elsevier.com
</button>
</a>
Cognitive Analysis in Sports: Supporting Match Analysis and Scouting through Artificial Intelligence
<span title="2021-03-14">2021</span>
<i title="Wiley">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/lbrji2lpbbcojliddu762z3uju" style="color: black;">Applied AI Letters</a>
</i>
In elite sports, there is an opportunity to take advantage of rich and detailed datasets generated across multiple threads of the sporting business. Challenges currently exist due to time constraints to analyse the data, as well as the quantity and variety of data available to assess. Artificial Intelligence (AI) techniques can be a valuable asset in assisting decision makers in tackling such challenges, but deep AI skills are generally not held by those with rich experience in sporting […] Here, we describe how certain commonly available AI services can be used to provide analytic assistance to sports experts in exploring, and gaining insights from, typical data sources. In particular, we focus on the use of Natural Language Processing and Conversational Interfaces to provide users with an intuitive and time-saving toolkit to explore their datasets and the conclusions arising from analytics performed on them. We show the benefit of presenting powerful AI and analytic techniques to domain experts, showing the potential for impact not only at the elite level of sports, where AI and analytic capabilities may be more available, but also at a more grass-roots level where there is generally little access to specialist resources. The work described in this paper was trialled with Leatherhead Football Club, a semi-professional team that, at the time, were based in the English 7th tier of football.
doi:10.1002/ail2.21 (https://doi.org/10.1002/ail2.21)
fatcat:njvfd64divcc5lkl6hk3iyirdi (https://fatcat.wiki/release/njvfd64divcc5lkl6hk3iyirdi)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210717224336/https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ail2.21" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/03/7d/037d54bea49f9b7d7888aaabfe2486a6ace11649.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/ail2.21">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
wiley.com
</button>
</a>
Explaining Motion Relevance for Activity Recognition in Video Deep Learning Models
[article]
<span title="2020-03-31">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
A small subset of explainability techniques developed initially for image recognition models has recently been applied for interpretability of 3D Convolutional Neural Network models in activity recognition tasks. Much like the models themselves, the techniques require little or no modification to be compatible with 3D inputs. However, these explanation techniques regard spatial and temporal information jointly. Therefore, using such explanation techniques, a user cannot explicitly distinguish the role of motion in a 3D model's decision. In fact, it has been shown that these models do not appropriately factor motion information into their decision. We propose a selective relevance method for adapting the 2D explanation techniques to provide motion-specific explanations, better aligning them with the human understanding of motion as conceptually separate from static spatial features. We demonstrate the utility of our method in conjunction with several widely-used 2D explanation methods, and show that it improves explanation selectivity for motion. Our results show that the selective relevance method can not only provide insight on the role played by motion in the model's decision -- in effect, revealing and quantifying the model's spatial bias -- but the method also simplifies the resulting explanations for human consumption.
arXiv:2003.14285v1 (https://arxiv.org/abs/2003.14285v1)
fatcat:24b2igwemnfche3pjsrq2rat3e (https://fatcat.wiki/release/24b2igwemnfche3pjsrq2rat3e)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200402080823/https://arxiv.org/pdf/2003.14285v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.14285v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
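One plausible reading of "selective relevance", keeping only the part of a per-frame relevance map that changes through time, can be illustrated with a small NumPy sketch. The temporal-derivative filter below is an assumption for illustration, not necessarily the authors' exact formulation:

```python
# Minimal sketch of a motion-selective filter over per-frame relevance maps
# R[t]: retain relevance only where it changes through time, separating motion
# evidence from static spatial evidence as the abstract describes. The exact
# filter here is an illustrative assumption, not the paper's published method.
import numpy as np

def selective_relevance(relevance, threshold=0.1):
    # relevance: array of shape (T, H, W) from any 2D explanation method
    # applied frame-by-frame to a video classifier's input.
    temporal_change = np.abs(np.diff(relevance, axis=0))  # shape (T-1, H, W)
    # Normalise change to [0, 1] and zero out static (likely spatial) regions.
    change = temporal_change / (temporal_change.max() + 1e-8)
    mask = change > threshold
    return relevance[1:] * mask  # motion-specific relevance per frame
```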
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
<span title="2020-07-10">2020</span>
<i title="Elsevier BV">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/5cekcl3stvfxxfsctgk6zwoqwe" style="color: black;">Patterns</a>
</i>
... toss or dice roll). Epistemic uncertainty: uncertainty caused by a lack of knowledge, reducible by observing more data. Adapted from Hüllermeier and Waegeman, Lee and See, Nilsson, and Tomsett ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patter.2020.100049">doi:10.1016/j.patter.2020.100049</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/33205113">pmid:33205113</a>
<a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC7660448/">pmcid:PMC7660448</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/znum6ebievgg5aqchwhszjflyy">fatcat:znum6ebievgg5aqchwhszjflyy</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210531234948/http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC7660448&blobtype=pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/b1/ad/b1ad0fef853163959512c0f99084667eeaf28c99.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patter.2020.100049">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
elsevier.com
</button>
</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660448" title="pubmed link">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
pubmed.gov
</button>
</a>
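As a concrete illustration of the aleatoric/epistemic split in the glossary snippet above (a generic sketch, not code from the Patterns paper), Monte Carlo dropout gives one standard way to separate the two:

```python
# Illustration of the epistemic/aleatoric distinction quoted above, via
# Monte Carlo dropout; a generic sketch, not taken from the paper.
# `stochastic_model` is assumed to keep dropout active at inference time and
# to return a class-probability vector on each call.
import numpy as np

def uncertainty_split(stochastic_model, x, n_samples=50):
    probs = np.stack([stochastic_model(x) for _ in range(n_samples)])  # (n, C)
    mean_p = probs.mean(axis=0)
    # Predictive entropy = total uncertainty in the averaged prediction.
    total = -np.sum(mean_p * np.log(mean_p + 1e-12))
    # Expected entropy across samples = aleatoric (irreducible) part.
    aleatoric = -np.sum(probs * np.log(probs + 1e-12), axis=1).mean()
    # The gap (mutual information) is epistemic: it shrinks with more data.
    epistemic = total - aleatoric
    return mean_p, aleatoric, epistemic
```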
Provisioning Robust and Interpretable AI/ML-Based Service Bundles
<span title="">2018</span>
<i title="IEEE">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/5fjspjtqujectfcv6khzqy3mvq" style="color: black;">MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM)</a>
</i>
Coalition operations environments are characterised by the need to share intelligence, surveillance and reconnaissance services. Increasingly, such services are based on artificial intelligence (AI) and machine learning (ML) technologies. Two key issues in the exploitation of AI/ML services are robustness and interpretability. Employing a diverse portfolio of services can make a system robust to 'unknown unknowns'. Interpretability - the need for services to offer explanation facilities to engender user trust - can be addressed by a variety of methods to generate either transparent or post hoc explanations according to users' requirements. This paper shows how a service-provisioning framework for coalition operations can be extended to address specific requirements for robustness and interpretability, allowing automatic selection of service bundles for intelligence, surveillance and reconnaissance tasks. The approach is demonstrated in a case study on traffic monitoring featuring a diverse set of AI/ML services based on deep neural networks and heuristic reasoning approaches.
doi:10.1109/milcom.2018.8599838 (https://doi.org/10.1109/milcom.2018.8599838)
dblp:conf/milcom/PreeceHRTB18 (https://dblp.org/rec/conf/milcom/PreeceHRTB18.html)
fatcat:4k72tzz3jjgc5mlglih34picse (https://fatcat.wiki/release/4k72tzz3jjgc5mlglih34picse)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190427084542/http://orca.cf.ac.uk/116295/1/MILCOM_2018_BPP_P5.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/02/14/0214ef7471e71d8ea2f3a12e184788f263e8c1f5.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/milcom.2018.8599838">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
ieee.com
</button>
</a>
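A toy sketch of the bundle-selection idea in that abstract, using made-up `Service` records: satisfy the task's interpretability requirement, then greedily maximise diversity of underlying techniques as a proxy for robustness to "unknown unknowns". This illustrates the concept only, not the paper's provisioning framework:

```python
# Toy bundle selection in the spirit of the abstract above: filter services by
# an interpretability requirement, then greedily prefer techniques not yet
# represented in the bundle (diversity as a robustness proxy). The Service
# record and the scoring are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    technique: str        # e.g. "deep-net", "heuristic", "rule-based"
    interpretable: bool   # offers transparent or post hoc explanations

def select_bundle(services, size, need_interpretable=True):
    pool = [s for s in services if s.interpretable or not need_interpretable]
    bundle = []
    for _ in range(min(size, len(pool))):
        used = {s.technique for s in bundle}
        # Prefer a technique not yet represented; otherwise take any remaining.
        pick = next((s for s in pool if s.technique not in used and s not in bundle),
                    next(s for s in pool if s not in bundle))
        bundle.append(pick)
    return bundle
```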
Modelling local field potential features during network gamma oscillations
<span title="">2014</span>
<i title="Springer Nature">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hus553nnwfhsbi2wiivfjdwxru" style="color: black;">BMC Neuroscience</a>
</i>
© 2014 Tomsett et al; licensee BioMed Central Ltd. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/1471-2202-15-s1-p131">doi:10.1186/1471-2202-15-s1-p131</a>
<a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC4125023/">pmcid:PMC4125023</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6utfhjf4ffhwboqefg6kuegsfe">fatcat:6utfhjf4ffhwboqefg6kuegsfe</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170830103235/https://link.springer.com/content/pdf/10.1186%2F1471-2202-15-S1-P131.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/bb/b1/bbb1c249227fc4ec6ee8eb54f847b8dac48974ea.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/1471-2202-15-s1-p131">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
springer.com
</button>
</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4125023" title="pubmed link">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
pubmed.gov
</button>
</a>
'Take Ten' improving the surgical post-take ward round: a quality improvement project
<span title="">2018</span>
<i title="BMJ">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2grcxzb5gzdypmklbsgfrneaiu" style="color: black;">BMJ Open Quality</a>
</i>
The surgical post-take ward round is a complex multidisciplinary interaction in which new surgical patients are reviewed and management plans formulated. Its fast-paced nature can lead to poor communication and inaccurate or incomplete documentation, with potential detriment to patient safety. Junior team members often do not fully understand the diagnosis and management plan.
Aims: The aims of this project were to improve both communication and documentation on the surgical post-take ward round, influencing patient safety.
Methods: The ward round was deconstructed to identify individual roles and determine where intervention would have the most impact. Ten important points were identified that should be documented in the management of an acute surgical patient: observations, examination, impression, investigations, antibiotics, intravenous fluids, VTE assessment, nutrition status, estimated length of stay and ceiling of treatment. A 'Take Ten' checklist was devised with these items to be used as a 'time out' after each patient with the whole team for discussion, clarification and clear documentation. Four plan-do-study-act cycles were completed over a period of a year. A retrospective review of post-take documentation preintervention and postintervention was performed, and the percentage of points that were accurately documented was calculated. For further clarification, two weekends were compared: one where the checklist was used and one where it was not.
Results: Documentation postintervention varied between categories, but there was improvement in documentation of VTE assessment, fluids, observations and investigations. On direct comparison of weekends, the checklist showed improved documentation in all categories except length of stay. Junior team members found the checklist improved understanding of diagnosis and management plan, and encouraged a more effective ward round.
Conclusion: The 'Take Ten' checklist has been well received. Three years on from its inception, the checklist has become an integral part of the post-take ward round, thanks to the multidisciplinary engagement in the project.
doi:10.1136/bmjoq-2017-000045 (https://doi.org/10.1136/bmjoq-2017-000045)
pmid:29527575 (https://www.ncbi.nlm.nih.gov/pubmed/29527575)
pmcid:PMC5841505 (https://pubmed.ncbi.nlm.nih.gov/PMC5841505/)
fatcat:4jwhkmsrmnfjbkecjiu43qwdh4 (https://fatcat.wiki/release/4jwhkmsrmnfjbkecjiu43qwdh4)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190428073852/https://bmjopenquality.bmj.com/content/bmjqir/7/1/e000045.full.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/a7/ab/a7abfeb676cf9a99557f8198a647517345d1bb10.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1136/bmjoq-2017-000045">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
bmj.org
</button>
</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5841505" title="pubmed link">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
pubmed.gov
</button>
</a>
Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning
<span title="">2018</span>
<i title="IEEE">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/x3ebkiwkjrg3fclvat7tinimy4" style="color: black;">2018 21st International Conference on Information Fusion (FUSION)</a>
</i>
Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are very sensitive to certain kinds of input perturbations. We argue that these two issues are conceptually linked, and insights in one can provide insights in the other. We illustrate these ideas with relevant examples from the literature and our own experiments.
Index Terms: interpretability, interpretable machine learning, deep learning, adversarial machine learning, adversarial examples, explainable AI, AI alignment, internet of battlefield things
doi:10.23919/icif.2018.8455710 (https://doi.org/10.23919/icif.2018.8455710)
dblp:conf/fusion/TomsettWXCJGRS18 (https://dblp.org/rec/conf/fusion/TomsettWXCJGRS18.html)
fatcat:2abpnopui5fthj7lgphosc3wgi (https://fatcat.wiki/release/2abpnopui5fthj7lgphosc3wgi)
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190426065406/http://discovery.ucl.ac.uk/10070702/1/2374_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/0a/5c/0a5c02babb472ff4a729cdb82d98f5e5560503cd.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.23919/icif.2018.8455710">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
Publisher / doi.org
</button>
</a>
Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making
[article]
<span title="2022-04-04">2022</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
... 2020; Tomsett et al., 2020; Okamura and Yamada, 2020), in this work we analyse reliance miscalibration due to anchoring bias, which has a different mechanism and, hence, different mitigating strategies ...
... sparked crucial research in several directions, such as human trust in algorithmic systems, interpretability, and explainability of machine learning models (Arnold et al., 2019; Zhang et al., 2020; Tomsett ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07938v2">arXiv:2010.07938v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cyy7enue6naz5fyx5x6ued3xde">fatcat:cyy7enue6naz5fyx5x6ued3xde</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220519122101/https://arxiv.org/pdf/2010.07938v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/14/8e/148efaba70165d9faef0dac28d5fa2538cfa662d.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07938v2" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
Modelling spatially realistic local field potentials in spiking neural networks using the VERTEX simulation tool
<span title="">2014</span>
<i title="Springer Nature">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hus553nnwfhsbi2wiivfjdwxru" style="color: black;">BMC Neuroscience</a>
</i>
© 2014 Tomsett et al; licensee BioMed Central Ltd. ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/1471-2202-15-s1-p130">doi:10.1186/1471-2202-15-s1-p130</a>
<a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC4125021/">pmcid:PMC4125021</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/25uqah7qr5g7vmcgif6up2sevm">fatcat:25uqah7qr5g7vmcgif6up2sevm</a>
</span>
<a target="_blank" rel="noopener" href="https://archive.org/download/pubmed-PMC4125021/PMC4125021-1471-2202-15-S1-P130.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
File Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/71/db/71db663cb366170a01e8415ba0f65b48219b1817.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1186/1471-2202-15-s1-p130">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="unlock alternate icon" style="background-color: #fb971f;"></i>
springer.com
</button>
</a>
<a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4125021" title="pubmed link">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
pubmed.gov
</button>
</a>
Showing results 1–15 of 120.