22,587 Hits in 3.6 sec

Concrete Problems in AI Safety [article]

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
2016 arXiv pre-print
In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems … Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI. … In addition, a minority of the work done by Paul Christiano was performed as a contractor for Theiss Research and at OpenAI.
arXiv:1606.06565v2 fatcat:m2dn55wibvafzgnfejxaavczrm

Robust Computer Algebra, Theorem Proving, and Oracle AI [article]

Gopal P. Sarma, Nick J. Hay
2017 arXiv pre-print
… set of problems related to the notion of provable safety that has emerged in the AI safety community. … AI safety. … Some of these issues overlap with the set of problems identified in [15] as examples of concrete problems in AI safety.
arXiv:1708.02553v2 fatcat:hesasayvljb3vmjqsw7sniutdm

Robust Computer Algebra, Theorem Proving, and Oracle AI

Gopal P. Sarma, Nick J. Hay
2017 Social Science Research Network
… set of problems related to the notion of provable safety that has emerged in the AI safety community. … AI safety. … Some of these issues overlap with the set of problems identified in [15] as examples of concrete problems in AI safety.
doi:10.2139/ssrn.3038545 fatcat:w3bzqxgopbdwtbqvsckcsfdjoq

Open Questions in Creating Safe Open-ended AI: Tensions Between Control and Creativity [article]

Adrien Ecoffet and Jeff Clune and Joel Lehman
2020 arXiv pre-print
This paper explains how unique safety problems manifest in open-ended search, and suggests concrete contributions and research questions to explore them. … The idea is that AI systems are increasingly applied in the real world, often producing unintended harms in the process, which motivates the growing field of AI safety. … Problems incurred during transfer relate to robustness problems in AI safety, i.e. due to failures in modeling, the real world differs from the simulated one in ways that an agent ideally would be robust …
arXiv:2006.07495v1 fatcat:whdhok4ztzc7tov55c2x5ggkcq

AI safety: state of the field through quantitative lens [article]

Mislav Juric, Agneza Sandic, Mario Brcic
2020 arXiv pre-print
Equally, there is a severe lack of research into concrete policies regarding AI. … As we expect AI to be one of the main driving forces of change in society, AI safety is the field under which we need to decide the direction of humanity's future. … In this paper we shall divide AI safety into the following hierarchy of sub-fields: 1. technical AI safety – deals with the technical issues of achieving safety and utility. …
arXiv:2002.05671v2 fatcat:avco2ffhbnb7dcfonrvpy5ilv4

Evolutionary Computation and AI Safety: Research Problems Impeding Routine and Safe Real-world Application of Evolution [article]

Joel Lehman
2019 arXiv pre-print
Recent developments in artificial intelligence and machine learning have spurred interest in the growing field of AI safety, which studies how to prevent human-harming accidents when deploying AI systems … This paper thus explores the intersection of AI safety with evolutionary computation, to show how safety issues arise in evolutionary computation and how understanding from evolutionary computational and … EC and Concrete AI Safety Problems: This section explores more concretely how ideas from EC intersect with those from AI safety. We adopt the framework of Amodei et al.
arXiv:1906.10189v2 fatcat:atiwcab35bbwhcgebxsh6i5cz4

X-Risk Analysis for AI Research [article]

Dan Hendrycks, Mantas Mazeika
2022 arXiv pre-print
Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. … concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. … DH is supported by the NSF GRFP Fellowship and an Open Philanthropy Project AI Fellowship.
arXiv:2206.05862v6 fatcat:mdapiamsq5gbhijrkp6shfwflu

From construction site to design: The different accident prevention levels in the building industry

Eduardo Diniz Fonseca, Francisco P.A. Lima, Francisco Duarte
2014 Safety Science
The contribution and originality of this paper are based upon the presentation of a model in three levels of anticipation of problems during the construction phase and its effects on improving production … and safety. … In Table 3, it is still possible to verify that the case of excess in the WC pillar of construction AI (AI-3) is also one of the cases that generate problems for safety.
doi:10.1016/j.ssci.2014.07.006 fatcat:jgeezyznyrakfhrvon2w5y35om

On estimation of occupant safety in vehicular crashes into roadside obstacles using non-linear dynamic analysis

Krzysztof Wilde, Arkadiusz Tilsen, Stanisław Burzyński, Wojciech Witkowski, T. Burczyński, B. Miller, L. Ziemiański
2019 MATEC Web of Conferences
The so-called direct method is mainly based on the HIC (Head Injury Criterion) of a crash test dummy in a vehicle with a passive safety system, while the indirect method uses a European standard approach … The article describes a comparison of two general methods of occupant safety estimation based on numerical examples. … Acknowledgements: This work was supported by the National Centre for Research and Development (NCBiR) and the General Director for National Roads and Motorways (GDDKiA) under the research project "Road Safety …
doi:10.1051/matecconf/201928500022 fatcat:ycousmkmqvdyvddv7aziafi5vy
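The "direct method" in the entry above scores occupant risk with the Head Injury Criterion: the maximum over time windows [t1, t2] of (t2 − t1) · (mean acceleration over the window)^2.5, with acceleration in g and time in seconds. As a hedged illustration only (not the paper's implementation; the 36 ms window cap and the uniformly sampled trace are assumptions), a minimal sketch:

```python
import numpy as np

def hic(t, a, max_window=0.036):
    """Head Injury Criterion from a resultant head-acceleration trace.

    t          -- sample times in seconds (monotonically increasing)
    a          -- acceleration samples in g
    max_window -- longest interval considered (36 ms for HIC36)
    """
    t = np.asarray(t, dtype=float)
    a = np.asarray(a, dtype=float)
    # cumulative integral of a(t) via the trapezoidal rule
    ia = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t) - 1):          # brute-force O(n^2) window search
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt - max_window > 1e-9:   # small tolerance for float round-off
                break
            avg = (ia[j] - ia[i]) / dt   # mean acceleration over [t[i], t[j]]
            best = max(best, dt * avg ** 2.5)
    return best
```

For a constant 100 g pulse sampled every millisecond over the full 36 ms window, this evaluates to 0.036 · 100^2.5 = 3600, well above the commonly cited HIC36 limit of 1000.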

TanksWorld: A Multi-Agent Environment for AI Safety Research [article]

Corban G. Rivera, Olivia Lyons, Arielle Summitt, Ayman Fatima, Ji Pak, William Shao, Robert Chalmers, Aryeh Englander, Edward W. Staley, I-Jeng Wang, Ashley J. Llorens
2020 arXiv pre-print
Fortunately, a landscape of AI safety research is emerging in response to this asymmetry, and yet there is a long way to go. … In this work, we introduce the AI safety TanksWorld as an environment for AI safety research with three essential aspects: competing performance objectives, human-machine teaming, and multi-agent competition … Minimizing Collateral Damage: One of the concrete [12] problems in AI safety is avoiding unintended consequences.
arXiv:2002.11174v1 fatcat:ndc7sc7chvb2nj3dawgv2hnrla

Using Artificial Intelligence Techniques to Predict Punching Shear Capacity of Lightweight Concrete Slabs

Ahmed Ebid, Ahmed Deifalla
2022 Materials
The novelty lies in developing three proposed models for the punching capacity of lightweight concrete slabs using three different AI techniques capable of accurately predicting the strength compared … In addition, the punching shear failure of concrete slabs is dangerous and calls for precise and consistent prediction models. … AI Model Development: Artificial intelligence (AI) techniques are search algorithms that aim to find the best solution for certain problems according to certain criteria within the available time and …
doi:10.3390/ma15082732 pmid:35454424 pmcid:PMC9024571 fatcat:axkrfwa5jjfhxplodc2u52ikwa

Towards Requirements Engineering for Superintelligence Safety

Hermann Kaindl, Jonas Ferdigg
2020 Requirements Engineering: Foundation for Software Quality
To the best of our knowledge, this view of "AI safety" has not been pointed out yet. … Under the headline "AI safety", a wide-reaching issue is being discussed: whether in the future some "superhuman artificial intelligence" / "superintelligence" could pose a threat to humanity. … While other references regarding "AI safety" can be found in [ELH18] and at https://vkrakovna.wordpress.com/ai-safety-resources/, it is mentioned nowhere that it is actually a problem with requirements …
dblp:conf/refsq/KaindlF20 fatcat:ocpel3ntobg6nfvjxevxtqatz4

Safety of Artificial Intelligence: A Collaborative Model

John McDermid, Yan Jia
2020 International Joint Conference on Artificial Intelligence
It sets out a three-layer model, going from top to bottom: system safety/functional safety; "AI/ML safety"; and safety-critical software engineering. … This model gives both a basis for achieving and assuring safety and a structure for collaboration between safety engineers and AI/ML specialists. … The views expressed in this paper are those of the authors and not necessarily those of the NHS or the Department of Health and Social Care.
dblp:conf/ijcai/McDermidJ20 fatcat:u2mzw2tc3famrg46o74nwbfvre

AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values [chapter]

Gopal P. Sarma, Nick J. Hay, Adam Safron
2018 Lecture Notes in Computer Science
Our aim is to ensure that research underpinning the value alignment problem of artificial intelligence has been sufficiently validated to play a role in the design of AI systems. … We propose the creation of a systematic effort to identify and replicate key findings in neuropsychology and allied fields related to understanding human values. … As we see it, there is no shortage of concrete research problems that can be pursued within a familiar academic setting.
doi:10.1007/978-3-319-99229-7_45 fatcat:6hrpwpoonzhxriu3shtxhhhbzy

Hard Choices in Artificial Intelligence [article]

Roel Dobbe, Thomas Krendl Gilbert, Yonatan Mintz
2021 arXiv pre-print
In this paper, we examine the vagueness in debates about the safety and ethical behavior of AI systems. … As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is … Gilbert is funded by the Center for Human-Compatible AI as well as a Newcombe Fellowship.
arXiv:2106.11022v1 fatcat:2sl7xaaq4vgvzn74x7c6jcos54
Showing results 1–15 of 22,587