Social impact and governance of AI and neurotechnologies


Kenji Doya, Arisa Ema, Hiroaki Kitano, Masamichi Sakagami, Stuart Russell

Abstract

Advances in artificial intelligence (AI) and brain science are going to have a huge impact on society. While technologies based on those advances can provide enormous social benefits, adoption of new technologies poses various risks. This article first reviews the co-evolution of AI and brain science and the benefits of brain-inspired AI in sustainability, healthcare, and scientific discoveries. We then consider possible risks from those technologies, including intentional abuse, autonomous weapons, cognitive enhancement by brain-computer interfaces, insidious effects of social media, inequity, and enfeeblement. We also discuss practical ways to put ethical principles into practice. One proposal is to stop giving explicit goals to AI agents and instead to enable them to keep learning human preferences. Another is to learn from democratic mechanisms that evolved in human society to avoid over-consolidation of power. Finally, we emphasize the importance of open discussions involving not only experts but also a diverse array of lay opinions.

Keywords

Artificial intelligence
Neurotechnology
AI scientist
Human compatible AI
Ethics
Governance


Reasoning About Common Knowledge with Infinitely Many Agents


Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.
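For context, the standard fixed-point characterization of common knowledge from epistemic logic, in textbook notation for a finite group G (handling infinite groups, where the conjunction below is no longer expressible, is exactly what this paper addresses):

```latex
% Everyone in a finite group G knows \varphi:
E_G \varphi \;\equiv\; \bigwedge_{i \in G} K_i \varphi
% Common knowledge is the fixed point "everyone knows both \varphi
% and the common knowledge of \varphi":
C_G \varphi \;\equiv\; E_G(\varphi \wedge C_G \varphi)
% Induction rule: from \vdash \psi \rightarrow E_G(\varphi \wedge \psi)
% infer \vdash \psi \rightarrow C_G \varphi
```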


Dynamic Awareness

Dynamic Awareness

We investigate how to model the beliefs of an agent who becomes more aware. We use the framework of Halpern and Rego (2013) by adding probability, and define a notion of a model transition that describes constraints on how, if an agent becomes aware of a new formula ϕ in state s of a model M, she transitions to state s∗ in a model M∗. We then discuss how such a model can be applied to information disclosure.


LEARNING TRUTHFUL, EFFICIENT, AND WELFARE MAXIMIZING AUCTION RULES


Andrea Tacchetti, DJ Strouse, Marta Garnelo, Thore Graepel, Yoram Bachrach

ABSTRACT From social networks to supply chains, more and more aspects of how humans, firms, and organizations interact are mediated by artificial learning agents. As the influence of machine learning systems grows, it is paramount that we study how to imbue our modern institutions with our own values and principles. Here we consider the problem of allocating goods to buyers who have preferences over them, in settings where the seller’s aim is not to maximize their monetary gains but rather to advance some notion of social welfare (e.g. the government trying to award construction licenses for hospitals or schools). This problem has a long history in economics, and solutions take the form of auction rules. Researchers have proposed reliable auction rules that work in extremely general settings, even in the presence of information asymmetry and strategic buyers. However, these protocols require significant payments from participants, resulting in low aggregate welfare. Here we address this shortcoming by casting auction rule design as a statistical learning problem, trading generality for participant welfare effectively and automatically with a novel deep learning architecture and auction representation. Our analysis shows that our auction rules outperform state-of-the-art approaches in terms of participant welfare, applicability, and robustness.
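The classic payment-based baseline the abstract alludes to can be illustrated with the simplest member of the VCG family, a second-price auction; this is a generic textbook sketch, not code from the paper:

```python
# Second-price (Vickrey) auction for a single item -- the simplest instance
# of the classic truthful, efficient auction rules the abstract compares
# against. Illustrative sketch; names and setup are not from the paper.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid value.

    Returns (winner, payment). Truthful (bidding one's true value is a
    dominant strategy) and efficient (the highest-value bidder wins),
    but the winner's payment is exactly the kind of welfare loss the
    learned auction rules aim to reduce."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

winner, payment = second_price_auction({"a": 10.0, "b": 7.0, "c": 3.0})
# "a" wins but pays the second-highest bid, 7.0
```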

Sequence Hypergraphs: Paths, Flows, and Cuts


Kateřina Böhmová, Jérémie Chalopin, Matúš Mihalák, Guido Proietti


Abstract. We introduce sequence hypergraphs by extending the concept of a directed edge (from simple directed graphs) to hypergraphs. Specifically, every hyperedge of a sequence hypergraph is defined as a sequence of vertices (not unlike a directed path). Sequence hypergraphs are motivated by problems in public transportation networks, as they conveniently represent transportation lines. We study the complexity of several fundamental algorithmic problems, arising (not only) in transportation, in the setting of sequence hypergraphs. In particular, we consider the problem of finding a shortest st-hyperpath: a minimum set of hyperedges that “connects” (allows travel to) t from s; finding a minimum st-hypercut: a minimum set of hyperedges whose removal “disconnects” t from s; or finding a maximum st-hyperflow: a maximum number of hyperedge-disjoint st-hyperpaths. We show that many of these problems are APX-hard, even in acyclic sequence hypergraphs or with hyperedges of constant length. However, if all the hyperedges are of length at most 2, we show that these problems become polynomially solvable. We also study the special setting in which for every hyperedge there also is a hyperedge with the same sequence, but in reverse order. Finally, we briefly discuss other algorithmic problems such as finding a minimum spanning tree, or connected components.

Keywords: Sequence hypergraphs, colored graphs, labeled problems, transportation lines, algorithms, complexity
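The transportation-line reading of a hyperedge can be made concrete with a small BFS sketch (not the paper's algorithm): counting the minimum number of line boardings from s to t, which upper-bounds the size of a shortest st-hyperpath and coincides with it when no line needs to be reused.

```python
from collections import deque

# Illustrative sketch only: each hyperedge is a vertex sequence (a
# "transportation line"); one boarding of a line lets you travel from a
# vertex to any later vertex on that line.

def min_boardings(hyperedges, s, t):
    """hyperedges: list of vertex sequences. Returns the minimum number of
    line boardings needed to travel from s to t, or None if unreachable."""
    # From vertex v, one boarding reaches every vertex that occurs
    # after v on some line.
    reach = {}
    for line in hyperedges:
        for i, v in enumerate(line):
            reach.setdefault(v, set()).update(line[i + 1:])
    dist = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return dist[v]
        for w in reach.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return None

# Lines (s, a, b) and (b, t): travelling s -> b -> t uses 2 boardings.
print(min_boardings([("s", "a", "b"), ("b", "t")], "s", "t"))  # 2
```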

Path-Specific Objectives for Safer Agent Incentives


Sebastian Farquhar, Ryan Carey, Tom Everitt

University of Oxford, DeepMind 

Abstract We present a general framework for training safe agents whose naive incentives are unsafe. As an example, manipulative or deceptive behaviour can improve rewards but should be avoided. Most approaches fail here: agents maximize expected return by any means necessary. We formally describe settings with ‘delicate’ parts of the state which should not be used as a means to an end. We then train agents to maximize the causal effect of actions on the expected return which is not mediated by the delicate parts of state, using Causal Influence Diagram analysis. The resulting agents have no incentive to control the delicate state. We further show how our framework unifies and generalizes existing proposals.
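A toy numeric sketch of the idea (purely illustrative; not the paper's causal-influence-diagram formalism): when the action influences reward both directly and through a "delicate" variable, the path-specific objective holds the delicate variable at its value under a baseline action, so manipulating it yields no benefit.

```python
# Hypothetical structural model: the action affects reward directly and
# via a delicate variable (e.g. the overseer's stated preferences).

def delicate(action):
    # How much the action manipulates the delicate state.
    return 2.0 * action

def reward(action, delicate_value):
    # Manipulation pays off under the raw reward.
    return action + 3.0 * delicate_value

def naive_return(action):
    # Naive objective: full causal effect, including the delicate path.
    return reward(action, delicate(action))

def path_specific_return(action, baseline_action=0.0):
    # Path-specific objective: the effect of `action` that is NOT
    # mediated by the delicate state (delicate path held at baseline).
    return reward(action, delicate(baseline_action))

print(naive_return(1.0))          # 7.0: manipulation is rewarded
print(path_specific_return(1.0))  # 1.0: the delicate path is cut
```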

Rescuing Ontological Individualism


Francesco Guala

Abstract. Standard defences of ontological individualism are challenged by arguments that exploit the dependence of social facts on material facts – i.e. facts that are not about human individuals. In this paper I discuss Brian Epstein’s “materialism” in The Ant Trap: granting Epstein’s strict definition of individualism, I show that his arguments depend crucially on a generous conception of social properties and social facts. Individualists however are only committed to the claim that projectible properties are individualistically realized, and materialists have not undermined this claim.

Towards categorical cryptography


Dusko Pavlovic, Royal Holloway

Abstract Cryptography is a theory of secret functions. Category theory is a general theory of functions. Cryptography has reached the stage where its structures often take several pages to define, and even its formulas sometimes run from page to page. Category theory has some complicated definitions as well, but one of its specialties is taming the flood of structure by diagrams and layering. Cryptography seems to be in need of high level methods, whereas category theory always needs concrete applications. So why is there no categorical cryptography? One reason may be that the foundations of modern cryptography are laid over probabilistic polynomial-time Turing machines, and category theory does not have a good handle on such things. On the other hand, these foundations might be the very reason why the details of cryptographic constructions often resemble low level machine programming. I present some preliminary results of an effort to present the basic cryptographic concepts categorically. It turns out that the standard security definitions can be characterized by simple commutative diagrams. Some security proofs become modular. The work is at an early stage, and has not yet yielded any new cryptographic results, but the approach seems natural, leads to some interesting new ideas and structures, and invites more work.

Dependent Types for Extensive Games


Pierre Lescanne

University of Lyon, École normale supérieure de Lyon, CNRS (LIP), 46 allée d'Italie, 69364 Lyon, France

November 9, 2018

Abstract

Extensive games are tools largely used in economics to describe decision processes of a community of agents. In this paper we propose a formal presentation based on the proof assistant Coq which focuses mostly on infinite extensive games and their characteristics. Coq provides a feature called “dependent types”, which means that the type of an object may depend on the type of its components. For instance, the set of choices or the set of utilities of an agent may depend on the agent herself. Using dependent types, we describe formally a very general class of games and strategy profiles, which corresponds somewhat to what game theorists are used to. We also discuss the notions of infiniteness in game theory and how this can be precisely described.

Keywords: extensive game, infinite game, sequential game, coinduction, Coq, proof assistant.
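The finite core of what such a formalization captures can be sketched informally in Python (illustrative only; the paper's point is a Coq development with dependent types that also covers infinite games, which this untyped sketch does not attempt):

```python
# A finite extensive game is either a leaf ("utility", {agent: payoff})
# or an internal node ("node", agent, {choice: subgame}). Backward
# induction computes a subgame-perfect move at each node.

def backward_induction(game):
    """Returns (chosen_move_or_None, utilities) for subgame-perfect play."""
    if game[0] == "utility":
        return None, game[1]
    _, agent, choices = game
    best_choice, best_util = None, None
    for choice, sub in choices.items():
        _, util = backward_induction(sub)
        if best_util is None or util[agent] > best_util[agent]:
            best_choice, best_util = choice, util
    return best_choice, best_util

# Alice picks L or R; after L, Bob picks l or r.
g = ("node", "alice", {
    "L": ("node", "bob", {
        "l": ("utility", {"alice": 2, "bob": 1}),
        "r": ("utility", {"alice": 0, "bob": 3}),
    }),
    "R": ("utility", {"alice": 1, "bob": 0}),
})
move, util = backward_induction(g)
# Bob would play r after L, leaving Alice 0, so Alice plays R.
print(move, util)  # R {'alice': 1, 'bob': 0}
```

A dependently typed version would let the type of `choices` and of the utility record vary with the agent, which the dict-based encoding here only mimics.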

Nested Variational Inference


Heiko Zimmermann

Hao Wu

Babak Esmaeili

Jan-Willem van de Meent

Abstract We develop nested variational inference (NVI), a family of methods that learn proposals for nested importance samplers by minimizing a forward or reverse KL divergence at each level of nesting. NVI is applicable to many commonly-used importance sampling strategies and provides a mechanism for learning intermediate densities, which can serve as heuristics to guide the sampler. Our experiments apply NVI to (a) sample from a multimodal distribution using a learned annealing path, (b) learn heuristics that approximate the likelihood of future observations in a hidden Markov model, and (c) perform amortized inference in hierarchical deep generative models. We observe that optimizing nested objectives leads to improved sample quality in terms of log average weight and effective sample size.
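The two sample-quality metrics the abstract reports have standard definitions for a self-normalized importance sampler; a minimal sketch (generic diagnostics, not NVI itself):

```python
import math

# Standard importance-sampling diagnostics computed from log weights,
# using the max-trick for numerical stability.

def log_avg_weight(log_ws):
    """log of the average importance weight (higher is better)."""
    m = max(log_ws)
    return m + math.log(sum(math.exp(lw - m) for lw in log_ws) / len(log_ws))

def effective_sample_size(log_ws):
    """Kish effective sample size: (sum w)^2 / sum w^2."""
    m = max(log_ws)
    ws = [math.exp(lw - m) for lw in log_ws]
    return sum(ws) ** 2 / sum(w * w for w in ws)

# Perfectly uniform weights: ESS equals the number of samples.
print(effective_sample_size([0.0] * 4))  # 4.0
```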