
Big Data and AI – Ethical and Societal Implications

by Roberto V. Zicari on October 31, 2018

Are computer system designers (i.e. Software Developers, Software Engineers, Data Scientists, Data Engineers, etc.) the ones who will decide what the impact of these technologies will be, and whether they replace or augment humans in society?

Big Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders.

Some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making.

I recently gave a talk at UC Berkeley on the ethical and societal implications of Big Data and AI, and on what designers of intelligent systems can do to take responsibility, rather than leaving these questions only to policy makers and lawyers.

You can find a copy of the presentation here:
http://www.odbms.org/wp-content/uploads/2018/10/Zicari.UCBerkeley.2018.pdf

I am interested in hearing from you and receiving your feedback.

RVZ


61 Comments
  1. Bryn Roberts permalink

    Thanks Roberto – very interesting presentation and many key points raised. A statement on slide 19 caught my eye: “At present we do not really understand how Advanced AI-techniques – such as used in Deep learning (e.g. neural networks) really works”. I think we understand the technical principles of how the networks encode and ‘learn’ but, in the individual instances, we may not be able to determine the heuristics of the system. Perhaps this is what was meant and it’s certainly true that it can be extremely difficult to understand which features of the data the machine used, and how they were weighted, to contribute to the outcome. It is entirely feasible that the machine builds a model that is not extensible or generalizable, which may have ethical implications, as you discuss later in the example of autonomous vehicles. Robust [and standardized?] procedures for testing and validating AIs would be a pragmatic solution, even if we don’t understand fully the heuristics. Perhaps, by extensive testing with actual or synthetic data sets and extreme scenarios, an AI could be validated for its intended purpose, including likely paths of future learning?

    • Thank you Bryn for your valuable feedback!

      I agree with you when you say: even if we don’t fully understand the heuristics, perhaps, by extensive testing with synthetic data sets and extreme scenarios, an AI could be validated for whatever purpose it is designed for, including likely paths of future learning, if it is deployed in that state?

      In fact, I was asked a similar question when I presented the same talk at Uber in San Francisco.

      My thought was that we do not allow kids to drive a car: they need to be at least 16 in the USA and 18 in Europe, and to have attended a driving school class and passed a test.

      Perhaps we can “certify” AIs by the amount of testing with synthetic data sets and extreme scenarios they have gone through before allowing them to drive a car (similar to what happens with airplane pilots)…

      Somebody would need to define what is good enough. And this may be tricky…
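      A minimal sketch of what such scenario-based “certification” could look like in practice (purely illustrative, not something from the talk): a hypothetical driving policy is run against randomly generated extreme scenarios, and its observed failure rate is compared with an agreed threshold. The scenario parameters, the placeholder safety check and the threshold are all invented for illustration.

      import random

      def generate_extreme_scenario(rng):
          # Draw one synthetic edge-case scenario (parameters are illustrative only).
          return {
              "visibility_m": rng.choice([5, 20, 200]),        # fog, dusk, clear
              "pedestrian_distance_m": rng.uniform(1, 30),
              "speed_kmh": rng.uniform(10, 130),
          }

      def is_safe(scenario, decision):
          # Placeholder safety envelope: the vehicle must brake whenever a
          # pedestrian is close and visibility is poor.
          critical = (scenario["pedestrian_distance_m"] < 10
                      and scenario["visibility_m"] < 50)
          return decision == "brake" or not critical

      def certify(decide, n_trials=10_000, max_failure_rate=1e-3, seed=42):
          # Estimate a policy's failure rate over synthetic extreme scenarios
          # and compare it with an agreed certification threshold.
          rng = random.Random(seed)
          failures = 0
          for _ in range(n_trials):
              scenario = generate_extreme_scenario(rng)
              if not is_safe(scenario, decide(scenario)):
                  failures += 1
          rate = failures / n_trials
          return rate <= max_failure_rate, rate

      # Naive example policy: brake whenever a pedestrian is within 15 metres.
      passed, rate = certify(lambda s: "brake" if s["pedestrian_distance_m"] < 15 else "cruise")
      print(passed, rate)

      Defining “good enough” then becomes a matter of agreeing on the scenario distribution and on the acceptable failure rate, which is exactly the tricky part.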

  2. Roberto, thanks a million for the very complete overview of ethical questions arising with AI.

    In Defence, AI used in conjunction with offensive weapons will change warfare. Hence, for some years now, the United Nations has been conducting related ethical debates. (In air defence, AI has been in use for over two decades already, raising zero ethical concerns because of its defensive usage. That’s a bit weird, though.)

    Although AI enables new forms and scenarios of warfare (AI will always and everywhere change the way we do things: business processes, company structures, application concepts), its deployment does not occur in a legal vacuum. In Defence, AI must respect the ethical foundations of International Humanitarian Law (IHL), which preserves, so to speak, fundamental protections in military conflict.

    Two topics are most relevant to the debate on AI in autonomous weapons, and similar questions arise for, e.g., the deployment of autonomous cars.

    1. Is the behavior of the autonomous system proportionate?

    This particularly concerns proper discrimination (combatant vs. non-combatant). Even for humans, discriminating between a combatant and a civilian is difficult, because in modern conflicts combatants are no longer distinguishable from civilians: many warring adversaries are not recognizable as fighters/soldiers because they are dressed as civilians. The decision as to whether an autonomous machine’s behavior is legitimate therefore needs to be made using CONTEXT. When making a similar decision, a human would rely on concepts such as “good faith” or sensus communis. The philosopher Markus Gabriel speaks of the “unified impression” people have of everyday life. This is not comparable to the data salad an autonomous machine needs to fuse into a picture of a situation. Thus, with the requirement of proportionality, the question arises: will an autonomous machine “think” like a human?

    2. Who is accountable?

    In answering this question, the military is ahead of the civilian economy and industry.

    (1) Whoever puts an autonomous system on the market and operates it is accountable. If the Bundeswehr orders and commissions an autonomous offensive system built by Airbus Defence and Space, the Bundeswehr, and not the manufacturer (nor its designers, programmers, etc.), must be held liable; the manufacturer is liable (merely) under statutory product liability.

    (2) If the use of an autonomous offensive system causes damage to the civilian population (whether violation-by-accident or violation-by-design), an individual must be accountable, as a legal entity cannot be accountable (at least not under German law).
    Is it then fair to blame a commander?
    Yes, some nations propose, but: the autonomous system must be extremely well tested. In particular, this comprises STATISTICAL TESTS and Independent Validation and Verification (IV&V). The commander must know the probability distributions of when, for example, civilians may be affected by deploying an autonomous system. If the commander does not know the system’s probability distributions and, intentionally or with gross negligence, uses the autonomous system anyway, and civilians are illegally affected by the system’s decision making, the commander becomes liable to prosecution (for having committed a war crime). For such a legal solution, both IHL and criminal laws would have to be amended.

    (3) From a legal point of view, every loss event is always a case for insurance. For whatever damage is caused by (civil) autonomous systems, it will therefore be necessary to set up an insurance business that also reflects the above-mentioned considerations.
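    As a rough illustration of the statistical-testing point in (2) (an invented sketch with invented numbers, not part of the original comment), IV&V trial outcomes can be turned into a one-sided upper confidence bound on the per-engagement probability of civilians being affected, using a standard Clopper-Pearson bound:

    from scipy.stats import beta

    def harm_probability_upper_bound(failures, trials, confidence=0.99):
        # One-sided Clopper-Pearson upper bound on the per-engagement
        # probability of harm, given the outcomes of IV&V test trials.
        if failures == trials:
            return 1.0
        return beta.ppf(confidence, failures + 1, trials - failures)

    # Illustrative: 0 harm events observed in 5,000 simulated engagements
    # still only supports an upper bound of roughly 9.2e-4 at 99% confidence.
    print(harm_probability_upper_bound(0, 5000))

    A commander could then be required to know such bounds, rather than relying on a vague assurance that the system “was tested”.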

    • Hi Yvonne
      thank you for your detailed feedback!
      Of particular interest is your remark:
      Who is accountable?
      In answering this question, the military is ahead of the civilian economy and industry.

      There is a difference though: the motivation.

  3. Mario A Vinasco permalink

    My key takeaway is the important distinction between the different areas of application of AI and big data; if social networks’ use of AI, for instance, is regulated the same way as medical applications, we as consumers will lose valuable services without gaining any new protection.

    We saw this when Congress summoned Facebook’s CEO last May; senators displayed their ignorance about how the basic internet and AI work.

    This presentation is a great step to clarify, demystify and add context to this discussion.

    • Hi Mario
      thank you for your feedback.
      I believe that ethical and societal issues also occur in AI used for social networks.

  4. Cynthia Saracco permalink

    Thanks for a good introduction to the topic of ethics and AI / Big Data, including coverage of several European initiatives in this broad area. I can certainly relate to the need for “explainable AI” as well as imagine some of the difficulties involved to achieve that. It would be interesting to hear technology leaders and researchers from the commercial domain weigh in with their perspectives on what ethical obligations (if any) they associate with development of their AI-based technologies and how they’re addressing these in their commercial offerings.

  5. This presentation provides a useful introduction to some of the ethical and moral issues around the use of AI. In particular, in terms of using AI within automated systems that are making decisions that would once have been the responsibility of a human decision maker.

    I think some of the questions raised make it clear that the underlying mathematics of machine learning/AI is arguably the easy part of implementing ethical AI solutions in a practical setting. How one deals with the AI-Human interface, the identification and mitigation of bias, explicability of outcomes and the assignment of responsibility when things go wrong are where the constraints on the real-world use of AI systems lie. This applies across a wide spectrum of applications, be it self-driving cars, autonomous weapon systems or deciding who to grant loans to.

    From a design point of view, this means that considerations other than raw decision-making ability (e.g. predictive accuracy of a loan granting system) need to be core considerations during the design phase of an AI project. Ethical issues can’t just be “tacked on” at the end of the delivery. It would be interesting to survey all of the new AI/Machine learning courses that have appeared in our universities in the last few years and see how much time is devoted to ethical and other real-world issues compared to the quantitative algorithmic elements. I would argue that a data scientist’s education is incomplete unless these “soft issues” are fully explored as part of their degrees.

    The question the presentation raises about regulatory frameworks is an interesting one and one should bear in mind that there is more than one perspective as to how to approach this.

    The EU approach (as captured by the GDPR) is very much a rights-based one. The starting point is that your data is yours and it’s your right to decide how that data is used – even if your refusal to allow use results in sub-optimal outcomes/harm. For example, by not allowing your data to be used to support medical research, others may suffer because new treatments will take longer to develop. A similar argument might be that I have a right to drive myself, even if I am less safe than an autonomous vehicle. This contrasts with the more utilitarian perspective, expressed in the quote by Steve Lohr at the start of the presentation, of thinking about data as a raw material. Data is an asset to be harvested and used. From the utilitarian perspective, one seeks to maximize the use of resources for the general good and only take specific actions to prevent mis-use; i.e. do no harm. Both a rights-based approach and a utilitarian perspective have their merits and drawbacks.

    The EU has gone down the rights-based approach and to date the US has been more utilitarian, but it will be interesting to see these things develop across the different regulatory regions of the world over time.

    • Hi Steven
      I can only agree with you when you say:

      ” I would argue that a data scientist’s education is incomplete unless these “soft issues” are fully explored as part of their degrees.”

      • and I really believe in what you say here: “From a design point of view, this means that considerations other than raw decision-making ability (e.g. predictive accuracy of a loan granting system) need to be core considerations during the design phase of an AI project. Ethical issues can’t just be “tacked on” at the end of the delivery.”

  6. John K Thompson permalink

    Roberto,

    I have been reading your interviews and opinion pieces for the last 5+ years now. This presentation is some of your best work. I was and continue to be inspired by the breadth and depth of vision of the possible use of data and analytics.

    I have shared your presentation in our organization, on social media and with colleagues and friends.

    Solid work.

    Best,
    John

  7. Gregory Richards permalink

    Thank you for the slides, Roberto. Most of my work is in government, where strict rules exist for how data are used. We are working to develop a strategic choice model that factors in the ethical aspects of AI algorithms that respect these rules. Other jurisdictions have created guidelines for data ethics that are just finding their way into implementation. So yes, many challenges exist, but progress is being made.

  8. Great overview that will be of interest to those who are at the coalface of AI development as well as those who just want to know more about AI without being overwhelmed – from citizen data scientists to the general public.

  9. A very thoughtful document. Last year, we published a Data Manifesto (https://www.forbes.com/sites/abb/2017/04/13/a-call-to-action-for-the-internet-of-things-industry-lets-write-a-data-bill-of-rights-for-cloud-customers/#2b5c66139a21) to address some key beliefs/principles around the data gathering enabled by ever more connected “things”. We took key ideas from HIPAA and the airline “passenger bill of rights” as a source of inspiration. This data is going to fuel the AI revolution. We need to similarly articulate key principles for AI. We also would need to figure out how can we test AI systems for systemic bias, how to see if they make ethical decisions, reset/reconfigure them when they go astray… So many interesting questions and very timely if we are to avoid a popular backlash against AI (and unbridled use of it by unscrupulous actors).

    • Key point you made: “We also would need to figure out how can we test AI systems for systemic bias, how to see if they make ethical decisions, reset/reconfigure them when they go astray”

  10. Partha Deka permalink

    Thanks for sharing the slides. Overall a very detailed presentation covering all aspects of ethical AI. Innovation and ethical regulation of AI development must go hand in hand. We have been seeing a rapid pace of research and development in the field of Artificial Intelligence recently. It’s the responsibility of governmental and non-governmental agencies to set down ethical regulations around AI to nurture its growth as well as its positive impact on human evolution in the near and foreseeable future.

    • I believe that the designers of AI systems (and their managers too) also need to be involved in the discussion. Here I am talking about software engineers and data scientists.

  11. Daniel permalink

    Very comprehensive thinking about a complicated and quickly evolving subject. In my eyes this highlights a fundamental problem that goes even beyond the ethics of using AI: how can a society of humans, who are socialised mainly around linear concepts, understand the effects of exponential technology, leveraging the good (with all the shades of how good may be defined in different societies) and avoiding the bad? In a world that is split on such fundamental concepts as religion or philosophy, how can we even begin a dialogue about how best to use exponential technology, or should we stay away from it altogether? Would choosing not to leverage the “good” of such technology constitute a moral flaw per se? It’s hard to imagine any kind of societal consensus around any of these questions, which is why the real issue that needs exploration is: what constitutes “truth”, and how can we re-learn the ability to debate concepts that can essentially polarise those engaged in the discussion?

  12. As AI technology evolves, more and more models trained from big data are not easily explainable. This worries users that an AI model may behave unethically or be unsafe. As a big data software vendor, we support explainable AI! We already see a promising advantage from graph database technology, which can contribute to this goal. By visualizing the entities and their relationships in a 2D graph model, it becomes easier for humans to understand some of the features that contribute to the model’s classification accuracy. For example, we think it makes more sense to use domain knowledge to come up with graph features that are explainable for the task at hand than to trust a deep learning framework to blindly digest the data and come up with a good model that is unexplainable.
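    A small illustration of the kind of explainable graph features described above (a toy example using networkx, not the vendor’s product): features such as degree, clustering and triangle counts are readable by a human reviewer, who can point at the exact relationships that drove a prediction.

    import networkx as nx

    # Toy transaction graph of entities and their relationships (purely illustrative).
    g = nx.Graph()
    g.add_edges_from([
        ("acct_A", "acct_B"), ("acct_B", "acct_C"),
        ("acct_C", "acct_A"), ("acct_C", "acct_D"),
    ])

    def graph_features(graph, node):
        # Domain-driven, human-readable features, as opposed to opaque learned ones.
        return {
            "degree": graph.degree(node),              # number of counterparties
            "clustering": nx.clustering(graph, node),  # do the counterparties also interact?
            "triangles": nx.triangles(graph, node),    # closed loops are easy to show and explain
        }

    print(graph_features(g, "acct_C"))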

    • Dear Mingxi

      it is true, as you say, that people are not confident with models that they do not understand.

      However, the point is: what is the target of the model? If the target is to organize data into a logical scheme, graphs are a good tool. If the target is to make a predictive model, we should care first about the accuracy of the model, and in some cases deep learning models are the most accurate. Like any statistical model, deep NN models show correlations, not causation; even Bayesian networks are hardly transformed into causal networks.

      Deep learning models can be explained, as in the well known examples of images. The only point is that the interpretation is not trivial, but can we expect a trivial interpretation of a complex phenomenon?
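      One common way such image explanations are produced is a gradient-based saliency map. A minimal sketch (assuming PyTorch and torchvision, with an untrained network and a random tensor standing in for a real image):

      import torch
      from torchvision import models

      model = models.resnet18().eval()                    # untrained net, just to show the mechanics
      image = torch.randn(1, 3, 224, 224, requires_grad=True)

      scores = model(image)
      scores[0, scores.argmax()].backward()               # gradient of the top-class score w.r.t. the pixels

      # Per-pixel gradient magnitude: which regions most influenced the prediction.
      saliency = image.grad.abs().max(dim=1).values       # shape (1, 224, 224)
      print(saliency.shape)

      The resulting map is not a trivial explanation, but it does show which parts of the input the model relied on.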

  13. A joy to read! I wrote up the idea of blockchain-based caged autonomy. It fits nicely in with the first posts. The blockchain is a deterministic, permanent record of real orders, perfect to border the probabilistic, ever-changing, reality-guessing AI system. AI and blockchain fit together seamlessly. Making a safe smart-contract grid on top of AI, or any system, brings solutions. https://www.linkedin.com/pulse/ai-controlled-blockchain-rules-arnoud-w-berghuis-msc/

    Added to this, I would like to suggest:
    “that the paradigm we discuss is the same for ALL systems and machines”!
    I especially want to emphasize governmental systems and organizations. The reasoning above can be applied to them as well, just as it can to cats and dogs and many humans.

    Adding governmental systems to our discussion makes the discussion easier! We do know when to take them seriously and hold them responsible and accountable. Government is sometimes ABOVE humans. Organizations can be above too. They trust machines at their core. Inside organizations there can be people who just follow blunt orders.

    If you go along with my first point, you can almost automatically conclude
    “that machines can become people, but people can become machines too”!

    This “people become machines” problem was described in the book “Der Proceß”, which Franz Kafka wrote about 100 years ago, around 11 November 1918, the day WW1 ended and the day Europe ended as what it was. There is no individual or country to blame, only a terrible Kafkaesque system. This problem was NEVER really solved. But now we can solve it, by using blockchain, a once-in-a-generation technology that brings 100% traceability. Trace, and “Kafka is solved!” This would really be the best thing of our age! Certainly worth a book too; we should write it first, shouldn’t we? It’s so positive: we solve a real issue. And on the fly, we solve our new issue by understanding the root cause. Here I kick off: understanding the “law of resistance to change” is key to using blockchain. https://www.linkedin.com/pulse/understanding-law-resistance-change-key-use-arnoud-w-berghuis-msc/

  14. Decades ago I posed for an intelligence agency the difficulty of monitoring transformational technologies for ethical consequences when at the same time the structures of ethical thinking, religious ideation and frameworks, and other essential aspects of our humanity were simultaneously going through the looking-glass of transformation too. Religion framed by print text, for example, is not the same as religion framed by digital technologies, nor are the humans who engage with it the same kinds of humans as those who merely read text. This plunge into ethics and the hard questions raised by big data and its mining is a good effort to get those who matter to at the very least think about these issues. My fear is that, as so often, emergent realities happen, then we think about ethics, and it is too late to apply our thinking retroactively. Yet … what else can we do? It was/is very difficult to get software designers to think about building in security, and now we are asking them to build in ethical frames, when real experts at ethics break the rules all the time, but know when to break the rules, a gift beyond machine learning still. I deeply appreciate the issues raised but fear those who should be paying attention are busy doing other things.

    • Very good point when you say: “My fear is that, as so often, emergent realities happen, then we think about ethics, and it is too late to apply our thinking retroactively.”

      You asked: what else can we do? I do not have the solution, but awareness first, and then action, is called for.

  15. When you say “You can use AI technologies either to automate or to augment humans”, this reminds me that I defined “conventional applied computer science” by “augmenting humans”, and AI as letting machines develop themselves. A successful AI is not necessarily pro-human, and we can bet there is some “puberty age” where the machine will be rebellious and “stupid”. An AI is a sort of alien. “AI” is a misnomer. I would say that the universal Turing machine is already and naturally very clever, especially the “Löbian” ones, which are those that know (in some theoretical computer science sense) that they are universal. They are wise and modest. But we don’t use them, because we prefer to “enslave” them instead, and to make them work for us, like when I am sending this post. Only experts listen to them (Gödel, Löb, Solovay). An intelligent machine will build its own ethics, and it will be related to our own as far as we are able to recognise ourselves in them. If not, we will get a “terrible child”. The “singularity” point might be the point where the machine will not be as intelligent as humans, but as stupid as humans. Maybe.

    • I agree when you say that: “A successful AI is not necessarily pro-human, and we can bet there is some ‘puberty age’, where the machine will be rebellious and ‘stupid’.”

      We can argue what “intelligence” really means….

  16. Christoph Zimmermann permalink

    Thanks for sharing the slides, Roberto – much appreciated. Very insightful and detailed food for thought indeed.

    In addition to the excellent comments above, there’s a central point which needs addressing in this context as well: Care should be taken that private companies do not take these matters into their own hands, possibly disregarding ethical and moral basics.

    While I’m not a big proponent of extensive regulation, governments should keep an eye on this. If there’s a conflict of interest, NGOs and other organizations can close this gap. Google’s Project Maven is probably the most recent example where a group of employees forced a major corporation to reverse its decision to advance AI technology destined for military deployment.

    • Hi Christoph
      very interesting reference to Google’s Project Maven.
      I did some research and found this book:
      Army of None: Autonomous Weapons and the Future of War
      by Paul Scharre, a Pentagon defense expert
      http://books.wwnorton.com/books/Army-of-None/

      From the abstract of the book:

      “What happens when a Predator drone has as much autonomy as a Google car? Or when a weapon that can hunt its own targets is hacked? Although it sounds like science fiction, the technology already exists to create weapons that can attack targets without human input. Paul Scharre, a leading expert in emerging weapons technologies, draws on deep research and firsthand experience to explore how these next-generation weapons are changing warfare.

      Scharre’s far-ranging investigation examines the emergence of autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. He spotlights artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II—antecedents of today’s homing missiles—to autonomous cyber weapons, submarine-hunting robot ships, and robot tank armies. Through interviews with defense experts, ethicists, psychologists, and activists, Scharre surveys what challenges might face “centaur warfighters” on future battlefields, which will combine human and machine cognition. We’ve made tremendous technological progress in the past few decades, but we have also glimpsed the terrifying mishaps that can result from complex automated systems—such as when advanced F-22 fighter jets experienced a computer meltdown the first time they flew over the International Date Line.

      At least thirty countries already have defensive autonomous weapons that operate under human supervision. Around the globe, militaries are racing to build robotic weapons with increasing autonomy. The ethical questions within this book grow more pressing each day. To what extent should such technologies be advanced? And if responsible democracies ban them, would that stop rogue regimes from taking advantage? At the forefront of a game-changing debate, Army of None engages military history, global policy, and cutting-edge science to argue that we must embrace technology where it can make war more precise and humane, but without surrendering human judgment. When the choice is life or death, there is no replacement for the human heart. ”

      I plan to read it.

  17. vint cerf permalink

    While this presentation is focused on AI, I came away feeling that the ethical issues cited are applicable to all software, not just AI-based software.

  18. This is an excellent presentation, showing clearly and convincingly the complexity of the AI challenge we are facing today. I agree that safety, human dignity, personal freedom of choice and data autonomy are essential requirements – not the only ones – that must be integrated into system design endowed with ethics. The difficulty is to define what “ethics” is in this context (a useful slide with 5 principles is given) and how to take it into account in the design (technically, organizationally, managerially). I suggest liaising with, and if possible including in the next stages of these reflections, eminent experts like Prof. Dr. Sarah SPIEKERMANN (Institute for Management Information Systems at Vienna University of Economics and Business) and Linda FISHER THORNTON (Leading in Context CEO, author of “7 Lenses”).

  19. Nicolai Pogadl permalink

    I would have welcomed a slide with a focus on the need for diversity in AI research and development teams, as this has a direct impact on ethical considerations regarding AI/ML systems. If AI/ML teams are too homogeneous, the likelihood of group-think and one-dimensional perspectives rises, thereby increasing the risk of leaving the whole AI/ML project vulnerable to inherent biases and unwanted discrimination.

    This is something the leading AI/ML learning conferences are starting to address, and I think we can all do our part to promote the work and thinking by underrepresented groups in the research community.

    Women in Machine Learning (https://wimlworkshop.org/) and Black in AI (https://blackinai.github.io/) are probably good places to start.

    One last related thought on a macro, geopolitical level: I believe diversity and free thought/expression is a key advantage in the global ultra-competitive “AI race” (not a fan of this term/framing though…) and should be leveraged as such in the development of systems using AI/ML.

  20. Roberto,
    Thank you for addressing very real ethical issues that need to be sorted out quickly. I have spent the past 9 years working on answering the question Pedro posed on slide # 17 – “what do we really mean by ethics?” My research revealed that there are 7 dimensions of “ethical” that are all important to consider when making decisions like the ones we face with AI and the Internet of Things. The 7 Lenses model I wrote about in my book is available here: https://leadingincontext.com/7lenses/model/. I am working on a paper that applies these multiple ethical perspectives to some of the big ethical questions your presentation has raised. It seems to me that multiple perspectives are required to get a complete enough picture to make ethical choices and prevent significant unintended consequences.
    Linda

  21. Monica Beltrametti permalink

    Great presentation! I think we first have to define and agree to some overarching ethical principles (see for example https://www.academia.edu/37691951/AI4People_-_An_Ethical_Framework_for_a_Good_AI_Society_Opportunities_Risks_Principles_and_Recommendations). Then each industry has to derive from these overarching ethical principles its own industry specific principles.

    By the way, I think there is more behind Big Data than economic power. See for example “Semantic Capital: Its Nature, Value, and Curation” by Luciano Floridi (Springer Nature B.V., 2018).

  22. There will be a human in the loop for a long time yet, but ultimately it’s a question of whether politics can ever catch up with disruptive tech. IMHO driving a truck isn’t the healthiest option for a human so, under specific circumstances, I don’t see the problem with automating the truck. People hate change, even if the status quo is slowly killing them! So I’m all for disruption, with a universal basic income. A lot of medicine is currently archaic, barbaric even; we need the disruption AI is bringing.

    Re. ethics, the study of AI/AGI is teaching us more and more about human psychology, neuroscience and ethics.

  23. Prof. Zicari has compiled a comprehensive discussion regarding ethics and AI. Ethics is an important problem with significant impacts on the future development of AI. Great work!

  24. Simona Mellino permalink

    Hi Roberto,

    Very interesting piece of work as usual. I particularly agree with the distinction you make between sales/marketing and healthcare, where of course the consequences of something going wrong would be dramatically worse. And of course ethical design principles have to apply across industries, but be specifically tailored or more stringent for particular applications.

  25. Steve Lohr permalink

    It strikes me that building an ethical framework for AI is a lot like writing software, or any other creative endeavor — an iterative process. All these checklists are useful, but only a start.
    I suspect that where we’ll end up with AI is similar to what has happened with other transformative technologies, from the internal combustion engine to atomic weapons. Where you come out is less with solutions to the challenges the technology raises than with accommodations that seem reasonable. And what are the rules and tools to get us there?
    With this technology, I do think the Explainable AI efforts could yield quite useful tools.

  26. Tome Sandevski permalink

    This is a very helpful overview! As someone working at a university, I am wondering how prominently the ethical and societal implications of big data and AI feature in teaching programmes. Do students approach big data and AI as technical/mathematical challenges, or do they reflect on the broader implications?

  27. FYI–
    the draft of the Ethics Guidelines for Trustworthy AI is available online for feedback.
    https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

    This working document was produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). The final version is due in March 2019.

  28. I have found this (MIT Technology Review, Establishing an AI code of ethics will be harder than people think, October 2, 2018):

    “A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behaviour:
    https://people.engr.ncsu.edu/ermurph3/papers/fse18nier.pdf

    Philip Alston, an international legal scholar at NYU’s School of Law, proposes a solution to the ambiguous and unaccountable nature of ethics: reframing AI-driven consequences in terms of human rights.”

    https://www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/

  29. Roberto, thanks for the great presentation. Indeed, it perfectly covers basic problems of ethics of design and performance of autonomous systems, including AI.

    As I’m generally interested in the philosophy of science and technology, I’d like to mention three problems I find interesting in this context. The first one is strictly related to possible regulations and can have strong practical implications in the field of law. The problem is “what system exactly do we consider autonomous?” or, more simply, “what does ‘autonomous’ mean?”. In my opinion, this is a very similar case to the known problem of free will, that is, “what does ‘free’ mean?”. So-called autonomous systems are always interconnected with the rest of the world and depend on software, engineers, regulations, users, the environment, and so on. This problem raises important questions about the responsibility of the system and the sharing of responsibilities in systems of systems (SoS), including the possibility of recognising the system or a part of it as an ‘electronic person’.

    The second problem is values. In my opinion, it’s the case of morality vs ethics. I see morality as a predisposition to keep or to hold given moral values. Ethics, in this point of view, is the way we act in accordance with our morality. Using this scheme, when we talk about the ethics of design of autonomous systems or the ethics of the systems themselves, we are considering ways of applying moral values. So ethics can probably be somehow standardised and even presented as a kind of algorithm. These ethical ways of applying moral values can differ (e.g. virtue ethics, deontology) and, to some extent, using them, we can decide how to deal with the values we want to protect. But equally important is to decide what values exactly our ethics will be dealing with and what these values mean. These meta-ethical questions are rarely raised during the discussion about AI or designing systems, but – as I wrote – in my opinion they are just as important as developing the ethical decisions themselves. This is the problem of finding and defining common values in a broad multicultural context and across the different social and political realities in which a given system will be working. This should be done prior to applying ethics. As ethics deals with values, first we have to know the values themselves and be able to define and explain them in various social environments. From my perspective it is a similar problem to metaphysics vs ontology: whether the second can function without the first.

    The third problem is the influence of the ethics of design on the ethics of the systems themselves. Will an ethically designed system (e.g. one designed in accordance with the upcoming IEEE standard P7000) perform ethically by itself? This connects back to the first problem I mentioned, the problem of autonomy (whether an ethically raised child will act ethically by itself, to what extent it has free will, and what exactly ‘free’ means).

    References:

    1.) IEEE P7000 standard:
    https://standards.ieee.org/project/7000.html

    2.) Zgrzebnicki, P. (2017), Selected Ethical Issues in Artificial Intelligence, Autonomous System Development and Large Data Set Processing, Studia Humana, 6(3), pp. 24–33, DOI: 10.1515/sh-2017-0020,
    https://bit.ly/2T6RyyN

  30. An interesting article that lays out important ethical challenges in relation to AI, and a timely contribution to the debate on the potential benefits and risks of AI.

    There are many unresolved issues regarding the potential unknown effects of self-learning algorithms, and their application to personal data for targeted messaging and marketing is largely unexamined. There are huge regulatory gaps around responsibilities and accountabilities in relation to the potential outcomes of AI. The speed of policy response to developments in AI is breathtakingly slow and poses a major risk to effectively and safely harnessing the capabilities of AI.

  31. I was talking with Alex Beutel (Google Brain) and this is what he pointed out as good references at Google:

    Google has put out some research on machine learning fairness: https://ai.google/education/responsible-ai-practices?category=fairness

    A bunch of the work they are doing takes a similar approach to what, I think, we are suggesting in terms of trying to address concerns during model training: https://arxiv.org/abs/1707.00075 and https://arxiv.org/abs/1809.10610
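    For readers who want to see what “addressing concerns during model training” can mean mechanically, here is a deliberately simplified sketch (not the method of the linked papers): a penalty on the gap between the average scores given to two groups is added to the usual training loss.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def fairness_penalty(logits, group):
        # Penalise the gap between the mean score each group receives
        # (a crude, demographic-parity style regulariser).
        scores = torch.sigmoid(logits).squeeze(1)
        return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

    # Synthetic batch: 10 features, a binary label, a binary sensitive attribute.
    x = torch.randn(256, 10)
    y = torch.randint(0, 2, (256, 1)).float()
    group = torch.randint(0, 2, (256,))

    for _ in range(100):
        optimizer.zero_grad()
        logits = model(x)
        loss = bce(logits, y) + 0.5 * fairness_penalty(logits, group)
        loss.backward()
        optimizer.step()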

  32. I was talking with Solon Isaac Barocas at Cornell.
    I thought you might be interested in a class he developed and teaches at Cornell:

    ETHICS AND POLICY IN DATA SCIENCE

    https://docs.google.com/document/d/1GV97qqvjQNvyM2I01vuRaAwHe9pQAZ9pbP7KkKveg1o/edit

  33. I strongly recommend watching this video!

    Jeff Dean, Google Senior Fellow and SVP Google AI – Deep Learning to Solve Challenging Problems

    Published on Nov 7, 2018

    Jeff Dean discusses the future of artificial intelligence and deep learning. This talk highlights Google research projects in healthcare, robotics, and in developing hardware to bring deep learning capability to smaller devices such as smart phones to enable solutions in remote and under-resourced locations. This talk was part of the AI in Real Life series presented by the Institute for Computational and Mathematical Engineering at Stanford University in Autumn 2018.
    https://www.youtube.com/watch?v=imlp8DGNkk0&index=5&list=PLn62CdVLnT-dDshwuuumF5w3rpaidb2Dm&t=0s

  34. I also think it is interesting to look at this: Google’s principles for AI.

    From: https://ai.google/principles

    Artificial Intelligence at Google
    Our Principles

    Google aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good. We believe that these technologies will promote innovation and further our mission to organize the world’s information and make it universally accessible and useful.
    We recognize that these same technologies also raise important challenges that we need to address clearly, thoughtfully, and affirmatively. These principles set out our commitment to develop technology responsibly and establish specific application areas we will not pursue.
    Objectives for AI Applications

    We will assess AI applications in view of the following objectives. We believe that AI should:

    1. Be socially beneficial.
    The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.
    AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

    2. Avoid creating or reinforcing unfair bias.
    AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

    3. Be built and tested for safety.
    We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

    4. Be accountable to people.
    We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

    5. Incorporate privacy design principles.
    We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

    6. Uphold high standards of scientific excellence.
    Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

    We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

    7. Be made available for uses that accord with these principles.

    Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

    Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use

    Nature and uniqueness: whether we are making available technology that is unique or more generally available

    Scale: whether the use of this technology will have significant impact

    Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

    AI Applications We Will Not Pursue

    In addition to the above objectives, we will not design or deploy AI in the following application areas:

    Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

    Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

    Technologies that gather or use information for surveillance violating internationally accepted norms.

    Technologies whose purpose contravenes widely accepted principles of international law and human rights.

    As our experience in this space deepens, this list may evolve.

  35. I was talking with Fabio Pistella (Former President of CNR, former Board Member of Authority for Electric Energy and Gas, former Director General of ENEA, Italy)

    and this is his feedback:

    Fabio Pistella:
    As you know, ethics is not a fully settled issue in the “traditional” world.
    Such a situation has obvious consequences on the theme of ethics in AI.

    The following considerations deal with similarities between ethics in a human context and in an artificial context.

    In my opinion this theme should be considered as articulated in three subthemes

    A. Ethical rules followed by the programmer

    B. Ethical rules incorporated in the AI agent to dictate how rules are changed as a consequence of experiences

    C. Ethical rules adopted by the different sources of information to which the AI agent has access.

    It’s easy to correlate each subtheme with aspects of human ethics: A is analogous to rules followed by relatives and teachers; B is analogous to education by family and the teaching system.

    Note that A and B do not necessarily coincide. C is analogous to the ethics of the sources of information to which humans have access. It can be noticed that A and B have an inward origin, while C depends on interactions with external “reality” (experiences).

    That’s why, in my opinion, for the sake of transparency AI agents should be provided with a CV documenting the experiences they have undergone. Obviously I am aware that such a prescription is hardly achievable.

    My general approach to AI is to investigate the analogies and differences between human intelligence and artificial intelligence. I’m willing to cooperate in this field.

  36. I am slowly but definitely starting to focus. Here are some thoughts:

    1. Ethics is not something we can fully achieve in general;

    When it comes to AI:

    2. I believe that both the ethical behaviour of machines AND the ethics of people are equally important, especially in the context of the attempt of AI researchers to create a super-intelligence, sometimes called General AI or strong AI.

    As illustrated in 
    https://hackernoon.com/general-vs-narrow-ai-3d0d02ef3e28

    “The ideal of General AI is that the system would possess the cognitive abilities and general experiential understanding of its environments that we humans possess, coupled with the ability to process this data at much greater speeds than mortals. It follows that the system would then become exponentially greater than humans in the areas of knowledge, cognitive ability and processing speed — giving rise to a very interesting species-defining moment in which the human species are surpassed by this (now very) strong AI entity.”

    and this poses severe Ethical concerns….

    And I also believe that AI initiatives such as Neuralink (https://www.neuralink.com/)
    pose serious ethical issues…

    https://www.cnbc.com/2018/09/07/elon-musk-discusses-neurolink-on-joe-rogan-podcast.html

  37. Policy makers are actively working out a legal framework for ethical, trustworthy, transparent AI.
    See for example: https://www.odbms.org/blog/2018/10/on-the-future-of-ai-in-europe-interview-with-roberto-viola/

    As Norbert Walter wrote: “Ethics is dependent on the homogeneity of society and functioning sanctions against the non-compliance of the rules”.

    People’s motivation plays a key role here. With AI, the important questions are how to avoid it going out of control and how to understand how its decisions are made.

    One of the things I am interested to explore is whether #Ethics can be “embedded” into the core of #AI design, rather than reacting to it afterwards… a kind of “Ethics inside”.

    And what measures have to be taken to achieve trustworthy AI.

    I am talking with key AI developers to see if this is possible and meaningful. It is important to link them with policy makers and other relevant stakeholders, who are currently working on AI-related issues.

  38. Some interesting resources from Waymo:

    – AutoML: Automating the design of machine learning models for autonomous driving
    https://medium.com/waymo/automl-automating-the-design-of-machine-learning-models-for-autonomous-driving-141a5583ec2a

    – Learning to Drive: Beyond Pure Imitation
    https://medium.com/waymo/learning-to-drive-beyond-pure-imitation-465499f8bcb2

    This blog post is of great interest. They created a Recurrent Neural Network for Driving.
    They trained the neural network by imitating the “good” and synthesizing the “bad”.

    They write: ” Knowing why an expert driver behaved the way they did and what they were reacting to is critical to building a causal model of driving. For this reason, simply having a large number of expert demonstrations to imitate is not enough. Understanding the why makes it easier to know how to improve such a system, which is particularly important for safety-critical applications.”

    However, I do not believe that we really know WHY and HOW we drive…
    Try it for yourselves: explain to another person how you drive and why you react in certain situations the way you do… And please let me know the result. 🙂

    This is the paper:
    ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
    Mayank Bansal, Alex Krizhevsky, Abhijit Ogale
    https://arxiv.org/pdf/1812.03079.pdf
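    To make “imitating the good and synthesizing the bad” concrete, here is a toy behavioural-cloning sketch in that spirit (a drastic simplification, not ChauffeurNet itself): expert trajectories are perturbed with noise, and a recurrent network is trained to steer back toward the unperturbed expert waypoints.

    import torch
    import torch.nn as nn

    policy = nn.GRU(input_size=2, hidden_size=32, batch_first=True)
    head = nn.Linear(32, 2)
    optimizer = torch.optim.Adam(list(policy.parameters()) + list(head.parameters()), lr=1e-3)

    # Fake "expert" 2-D trajectories (64 drives, 20 waypoints each), for illustration only.
    expert = torch.cumsum(torch.randn(64, 20, 2) * 0.1, dim=1)

    for _ in range(200):
        noisy = expert + torch.randn_like(expert) * 0.05      # synthesized "bad" deviations
        hidden, _ = policy(noisy)
        predicted_next = head(hidden[:, :-1])                 # predict the next expert waypoint
        loss = nn.functional.mse_loss(predicted_next, expert[:, 1:])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    Whether such a model captures the WHY of driving, rather than just the WHAT, is exactly the question raised above.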

    • Source MIT Technology Review

      https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future/

      You mention causality—in other words, grasping not just patterns in data but why something happens. Why is that important, and why is it so hard?

      Yoshua Bengio:

      If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models.

      We can hand-craft them, but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of the reality; that’s why we make a lot of mistakes. But we are much better off at doing this than other animals.

      Right now, we don’t really have good algorithms for this, but I think if enough people work at it and consider it important, we will make advances.

      ##

  39. Just published!

    Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements

    Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, Ed H. Chi
    (Submitted on 14 Jan 2019)

    As more researchers have become aware of and passionate about algorithmic fairness, there has been an explosion in papers laying out new metrics, suggesting algorithms to address issues, and calling attention to issues in existing applications of machine learning. This research has greatly expanded our understanding of the concerns and challenges in deploying machine learning, but there has been much less work in seeing how the rubber meets the road.
    In this paper we provide a case-study on the application of fairness in machine learning research to a production classification system, and offer new insights in how to measure and address algorithmic fairness issues. We discuss open questions in implementing equality of opportunity and describe our fairness metric, conditional equality, that takes into account distributional differences. Further, we provide a new approach to improve on the fairness metric during model training and demonstrate its efficacy in improving performance for a real-world product

    LINK:
    https://arxiv.org/abs/1901.04562
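    As a rough companion to the abstract, here is a minimal sketch of the standard equality-of-opportunity gap, i.e. the difference in true-positive rates between two groups; the paper’s “conditional equality” metric refines this idea to take distributional differences into account.

    import numpy as np

    def equality_of_opportunity_gap(y_true, y_pred, group):
        # Absolute difference in true-positive rates between group 0 and group 1.
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        tpr = []
        for g in (0, 1):
            mask = (group == g) & (y_true == 1)
            tpr.append(y_pred[mask].mean())
        return abs(tpr[0] - tpr[1])

    # Illustrative labels, model decisions and group membership.
    print(equality_of_opportunity_gap(
        y_true=[1, 1, 0, 1, 1, 0, 1, 1],
        y_pred=[1, 0, 0, 1, 1, 0, 0, 0],
        group=[0, 0, 0, 0, 1, 1, 1, 1],
    ))  # ~0.33: qualified members of group 0 are approved twice as often as those of group 1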
