
Big Data and AI – Ethical and Societal Implications

by Roberto V. Zicari on October 31, 2018

Are computer system designers (i.e. software developers, software engineers, data scientists, data engineers, etc.) the ones who will decide what the impact of these technologies will be, and whether they replace or augment humans in society?

Big Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders.

Some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making.

I recently gave a talk at UC Berkeley on the ethical and societal implications of Big Data and AI, and on what designers of intelligent systems can do to take responsibility, rather than leaving these questions to policy makers and lawyers alone.

You can find a copy of the presentation here:
http://www.odbms.org/wp-content/uploads/2018/10/Zicari.UCBerkeley.2018.pdf

I am interested in hearing from you and receiving your feedback.

RVZ


47 Comments
  1. Bryn Roberts

    Thanks Roberto – very interesting presentation and many key points raised. A statement on slide 19 caught my eye: “At present we do not really understand how Advanced AI-techniques – such as used in Deep learning (e.g. neural networks) really works”. I think we understand the technical principles of how the networks encode and ‘learn’ but, in the individual instances, we may not be able to determine the heuristics of the system. Perhaps this is what was meant and it’s certainly true that it can be extremely difficult to understand which features of the data the machine used, and how they were weighted, to contribute to the outcome. It is entirely feasible that the machine builds a model that is not extensible or generalizable, which may have ethical implications, as you discuss later in the example of autonomous vehicles. Robust [and standardized?] procedures for testing and validating AIs would be a pragmatic solution, even if we don’t understand fully the heuristics. Perhaps, by extensive testing with actual or synthetic data sets and extreme scenarios, an AI could be validated for its intended purpose, including likely paths of future learning?

    • Thank you Bryn for your valuable feedback!

      I agree with you when you say: even if we don’t understand fully the heuristics, perhaps, by extensive testing with synthetic data sets and extreme scenarios, an AI could be validated for whatever purpose it was designed for, including likely paths of future learning, if it is deployed in that state?

      In fact, I was asked a similar question when I presented the same talk at Uber in San Francisco.

      My thought was that we do not allow kids to drive a car: they need to be at least 16 in the USA and 18 in Europe, and to have taken a traffic school class and passed a test.

      Perhaps we can “certify” AIs by the amount of testing with synthetic data sets and extreme scenarios they have gone through, before allowing them to drive a car (similar to what happens with airplane pilots)…

      Somebody would need to define what counts as good enough. And this may be tricky…
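
A minimal sketch of what such certification-by-testing could look like, assuming a toy model, hand-written synthetic scenarios, and an invented pass-rate threshold (none of this comes from the presentation):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    inputs: List[float]   # synthetic sensor readings (e.g., normalized distances)
    safe_action: int      # the action a certifier deems acceptable (1 = brake)

def toy_model(inputs: List[float]) -> int:
    """Stand-in for the AI under test: brake (1) if anything is too close."""
    return 1 if min(inputs) < 0.5 else 0

def certify(model: Callable[[List[float]], int],
            scenarios: List[Scenario],
            required_pass_rate: float) -> bool:
    """Run the model over every scenario and certify only if the fraction of
    safe decisions meets the threshold. Choosing that threshold, i.e. deciding
    what counts as good enough, remains the hard, human part."""
    passed = sum(model(s.inputs) == s.safe_action for s in scenarios)
    rate = passed / len(scenarios)
    print(f"{passed}/{len(scenarios)} scenarios passed ({rate:.0%})")
    return rate >= required_pass_rate

# Extreme scenarios are written by hand, like the questions in a driving exam.
suite = [
    Scenario("pedestrian-very-close", inputs=[0.1, 0.9], safe_action=1),
    Scenario("clear-road",            inputs=[0.9, 0.8], safe_action=0),
    Scenario("sensor-glare-extreme",  inputs=[0.45, 0.95], safe_action=1),
    Scenario("stopped-traffic",       inputs=[0.3, 0.2], safe_action=1),
]
print("certified:", certify(toy_model, suite, required_pass_rate=1.0))
```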

  2. Roberto, thanks a million for the very complete overview of ethical questions arising with AI.

    In defence, AI used in conjunction with offensive weapons will change warfare. Hence, for some years now, the United Nations has been conducting related ethical debates. (In air defence, AI has been in use for over two decades already, raising zero ethical concerns because of its defensive usage. That’s a bit weird, though.)

    Although AI allows new forms and scenarios of warfare – AI will always and everywhere change the way we do things: business processes, company structures, application concepts – its deployment does not occur in a legal vacuum. In defence, AI must respect the ethical foundations of International Humanitarian Law (IHL). IHL preserves, so to speak, fundamental laws in military conflict.

    Two topics are most relevant to the debate on AI in autonomous weapons, and similar questions arise for, e.g., the deployment of autonomous cars.

    1. Is the behavior of the autonomous system proportionate?

    This particularly concerns proper discrimination (combatant vs. non-combatant). Even for humans, discriminating between a combatant and a civilian is difficult, because in modern conflicts combatants are no longer distinguishable from civilians. Many warring adversaries are not recognizable as fighters/soldiers because they are dressed as civilians. The decision as to whether an autonomous machine’s behavior is legitimate therefore needs to be made using CONTEXT. When making a similar decision, a human would rely on concepts such as “good faith” or sensus communis. The philosopher Markus Gabriel speaks of the “unified impression” people have of everyday life. This is not comparable to the data salad an autonomous machine needs to fuse into a picture of a situation. Thus, with the requirement of proportionality, the question arises: will an autonomous machine “think” like a human?

    2. Who is accountable?

    In answering this question, the military is ahead of the civilian economy and industry.

    (1) Whoever puts an autonomous system on the market and operates it is accountable. If the Bundeswehr orders and commissions an autonomous offensive system built by Airbus Defence and Space, the Bundeswehr, and not the manufacturer (nor its designers, programmers, etc.), must be held liable – the manufacturer is liable (merely) according to statutory product liability.

    (2) If the use of an autonomous offensive system causes damage to the civilian population – whether violation-by-accident or violation-by-design – an individual must be held accountable, as a legal entity cannot be (at least not under German law).
    Is it then fair to blame a commander?
    Yes, some nations propose, but: the autonomous system must be extremely well tested. In particular, this comprises STATISTICAL TESTS and Independent Verification and Validation (IV&V). The commander must know the probability distributions of when, for example, civilians may be affected by the deployment of an autonomous system. If the commander does not know the system’s probability distributions and, intentionally or with gross negligence, uses the autonomous system anyway, and civilians are illegally affected by the system’s decision making, the commander becomes liable to prosecution (for having committed a war crime). For such a legal solution, both IHL and criminal laws would have to be amended.

    (3) From a legal point of view, every loss event is always a case for insurance. In the case of whatever damage is caused by (civil) autonomous systems, it will therefore be necessary to set up an insurance business, one which also reflects the above-mentioned thoughts.
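
As a purely illustrative aside on the STATISTICAL TESTS point above, here is a Monte Carlo sketch of estimating the probability that an engagement affects civilians, with the confidence interval a commander would need to know. The simulator stub, the assumed risk, and the acceptance threshold are all invented:

```python
import math
import random

def simulated_engagement(rng: random.Random) -> bool:
    """Hypothetical simulator: True if civilians were affected. A real IV&V
    campaign would replace this stub with high-fidelity simulation."""
    return rng.random() < 0.002   # invented 'true' risk, unknown in practice

def estimate_risk(n_trials: int = 100_000, seed: int = 42):
    rng = random.Random(seed)
    hits = sum(simulated_engagement(rng) for _ in range(n_trials))
    p = hits / n_trials
    # 95% normal-approximation interval; adequate for large n_trials.
    half_width = 1.96 * math.sqrt(p * (1 - p) / n_trials)
    return p, max(p - half_width, 0.0), p + half_width

p, lo, hi = estimate_risk()
print(f"estimated risk per engagement: {p:.4%} (95% CI [{lo:.4%}, {hi:.4%}])")

# The legal question is then whether the commander knew and respected a bound:
ACCEPTABLE_RISK = 0.005   # invented acceptance threshold
print("within the declared risk bound:", hi <= ACCEPTABLE_RISK)
```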

    • Hi Yvonne
      thank you for your detailed feedback!
      Of particular interest is your remark:
      Who is accountable?
      In answering this question, the military is ahead of the civilian economy and industry.

      There is a difference though: the motivation.

  3. Mario A Vinasco

    My key takeaway is the important distinction between the different areas of application of AI and Big Data; if social networks’ use of AI, for instance, is regulated the same way as medical applications, we as consumers will lose valuable services without gaining any new protection.

    We saw this when Congress summoned Facebook’s CEO last May; senators displayed their ignorance about how the internet and AI basically work.

    This presentation is a great step to clarify, demystify, and add context to this discussion.

    • Hi Mario
      thank you for your feedback.
      I believe that ethical and societal issues also arise in AI used for social networks.

  4. Cynthia Saracco

    Thanks for a good introduction to the topic of ethics and AI / Big Data, including coverage of several European initiatives in this broad area. I can certainly relate to the need for “explainable AI”, as well as imagine some of the difficulties involved in achieving that. It would be interesting to hear technology leaders and researchers from the commercial domain weigh in with their perspectives on what ethical obligations (if any) they associate with the development of their AI-based technologies and how they’re addressing these in their commercial offerings.

  5. This presentation provides a useful introduction to some of the ethical and moral issues around the use of AI – in particular, the use of AI within automated systems that make decisions that would once have been the responsibility of a human decision maker.

    I think some of the questions raised make it clear that the underlying mathematics of machine learning/AI is arguably the easy part of implementing ethical AI solutions in a practical setting. How one deals with the AI-human interface, the identification and mitigation of bias (a simple check is sketched after this comment), the explicability of outcomes, and the assignment of responsibility when things go wrong is where the constraints on the real-world use of AI systems lie. This applies across a wide spectrum of applications, be it self-driving cars, autonomous weapon systems, or deciding who to grant loans to.

    From a design point of view, this means that considerations other than raw decision-making ability (e.g. the predictive accuracy of a loan-granting system) need to be core considerations during the design phase of an AI project. Ethical issues can’t just be “tacked on” at the end of the delivery. It would be interesting to survey all of the new AI/machine learning courses that have appeared in our universities in the last few years and see how much time is devoted to ethical and other real-world issues compared to the quantitative algorithmic elements. I would argue that a data scientist’s education is incomplete unless these “soft issues” are fully explored as part of their degrees.

    The question the presentation raises about regulatory frameworks is an interesting one and one should bear in mind that there is more than one perspective as to how to approach this.

    The EU approach (as captured by the GDPR) is very much a rights-based one. The starting point is that your data is yours and it’s your right to decide how that data is used – even if your refusal to allow use results in sub-optimal outcomes/harm. For example, by not allowing your data to be used to support medical research, others may suffer because new treatments will take longer to develop. A similar argument might be that I have a right to drive myself, even if I am less safe than an autonomous vehicle. This contrasts with the more utilitarian perspective, expressed in the quote by Steve Lohr at the start of the presentation, of thinking about data as a raw material. Data is an asset to be harvested and used. From the utilitarian perspective, one seeks to maximize the use of resources for the general good and only take specific actions to prevent mis-use; i.e. do no harm. Both a rights-based approach and a utilitarian perspective have their merits and drawbacks.

    The EU has gone down the rights-based approach and to date the US has been more utilitarian, but it will be interesting to see these things develop across the different regulatory regions of the world over time.
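
As an illustrative aside on the bias-identification point in the comment above: even a very simple design-phase check, comparing approval rates across groups alongside raw accuracy, makes the trade-off concrete. The toy loan decisions, the protected groups, and the disparity tolerance below are all invented:

```python
from typing import List

def accuracy(pred: List[int], truth: List[int]) -> float:
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def approval_rate(pred: List[int], groups: List[str], g: str) -> float:
    decisions = [p for p, grp in zip(pred, groups) if grp == g]
    return sum(decisions) / len(decisions)

# 1 = approve the loan, 0 = deny; "A"/"B" is a protected attribute.
predictions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
actual      = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"raw accuracy: {accuracy(predictions, actual):.2f}")
gap = abs(approval_rate(predictions, groups, "A")
          - approval_rate(predictions, groups, "B"))
print(f"demographic parity gap: {gap:.2f}")
# Treated as a design-phase gate rather than something tacked on at the end:
assert gap <= 0.25, "approval rates diverge too much between groups"
```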

    • Hi Steven
      I can only agree with you when you say:

      ” I would argue that a data scientist’s education is incomplete unless these “soft issues” are fully explored as part of their degrees.”

      • and I really believe in what you say here: “From a design point of view, this means that considerations other than raw decision-making ability (e.g. the predictive accuracy of a loan-granting system) need to be core considerations during the design phase of an AI project. Ethical issues can’t just be “tacked on” at the end of the delivery.”

  6. John K Thompson

    Roberto,

    I have been reading your interviews and opinion pieces for the last 5+ years now. This presentation is some of your best work. I was and continue to be inspired by the breadth and depth of the vision of the possible uses of data and analytics.

    I have shared your presentation in our organization, on social media and with colleagues and friends.

    Solid work.

    Best,
    John

  7. Gregory Richards

    Thank you for the slides, Roberto. Most of my work is in government, where strict rules for how data are used exist. We are working to develop a strategic choice model that factors in the ethical aspects of AI algorithms that respect these rules. Other jurisdictions have created guidelines for data ethics that are just finding their way into implementation. So yes, many challenges exist, but progress is being made.

  8. Great overview that will be of interest to those who are at the coalface of AI development as well as those who just want to know more about AI without being overwhelmed – from citizen data scientists to the general public.

  9. A very thoughtful document. Last year, we published a Data Manifesto (https://www.forbes.com/sites/abb/2017/04/13/a-call-to-action-for-the-internet-of-things-industry-lets-write-a-data-bill-of-rights-for-cloud-customers/#2b5c66139a21) to address some key beliefs/principles around the data gathering enabled by ever more connected “things”. We took key ideas from HIPAA and the airline “passenger bill of rights” as a source of inspiration. This data is going to fuel the AI revolution. We need to similarly articulate key principles for AI. We would also need to figure out how we can test AI systems for systemic bias, how to see if they make ethical decisions, and how to reset/reconfigure them when they go astray… So many interesting questions, and very timely if we are to avoid a popular backlash against AI (and unbridled use of it by unscrupulous actors).

    • Key point you made: “We would also need to figure out how we can test AI systems for systemic bias, how to see if they make ethical decisions, and how to reset/reconfigure them when they go astray”

  10. Partha Deka

    Thanks for sharing the slides. Overall, a very detailed presentation covering all aspects of ethical AI. Innovation and ethical regulation of AI development must go hand in hand. We have seen a rapid pace of research and development in the field of Artificial Intelligence recently. It is the responsibility of government and non-government agencies to draw up ethical regulations around AI to nurture its growth as well as its positive impact on human evolution in the near and foreseeable future.

    • I believe that the designers of AI systems (and their managers too) also need to be involved in the discussion. Here I am talking about software engineers and data scientists.

  11. Daniel

    Very comprehensive thinking about a complicated and quickly evolving subject. In my eyes this highlights a fundamental problem that goes even beyond the ethics of using AI: how can a society of humans, who are socialised mainly around linear concepts, understand the effects of exponential technology, leveraging the good (with all shades of how “good” may be defined in different societies) and avoiding the bad? In a world that is split on such fundamental concepts as religion or philosophy, how can we even begin a dialogue about how best to use exponential technology – or should we stay away from it altogether? Would failing to leverage the “good” of such technology constitute a moral flaw per se? It’s hard to imagine any kind of societal consensus around any of these questions, which is why the real issue that needs exploration is: what constitutes “truth”, and how can we re-learn the ability to debate concepts that can essentially polarise those engaged in the discussion?

  12. As AI technology evolves, more and more models trained on big data are not easily explainable. This worries users: an AI model may behave unethically or be unsafe. As a big data software vendor, we support explainable AI! We already see promising advantages from graph database technology, which can contribute in this respect. By visualizing the entities and their relationships in a 2D graph model, it becomes easier for humans to understand some of the features that contribute to the model’s classification accuracy. For example, we think it makes more sense to use domain knowledge to come up with graph features that are explainable for the task at hand (see the sketch below) than to trust a deep learning framework to blindly digest the data and come up with a good metric model that is unexplainable.
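
A minimal sketch of the graph-feature idea described above, assuming the networkx library and an invented fraud-review setting; the features, the toy account graph, and the rule on top of them are deliberately simple enough for a domain expert to audit:

```python
import networkx as nx

# Invented account graph: nodes are accounts, edges are transactions.
G = nx.Graph()
G.add_edges_from([("acct1", "acct2"), ("acct1", "acct3"),
                  ("acct2", "acct3"), ("acct3", "acct4")])

def explainable_features(u: str, v: str) -> dict:
    """Every feature has a plain-language meaning that can be audited."""
    return {
        "degree_u": G.degree(u),   # how many accounts u transacts with
        "degree_v": G.degree(v),
        "common_neighbors": len(list(nx.common_neighbors(G, u, v))),
    }

feats = explainable_features("acct1", "acct2")
print(feats)   # {'degree_u': 2, 'degree_v': 2, 'common_neighbors': 1}

# A transparent rule over transparent features: the 'why' of every flag can
# be read straight off the feature values, unlike a black-box score.
flag = feats["common_neighbors"] >= 1 and min(feats["degree_u"],
                                              feats["degree_v"]) >= 2
print("flag for review:", flag)
```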

    • Dear Mingxi

      it is true, as you say, that people are not confident with models that they do not understand.

      However, the point is: what is the target of the model? If the target is to organize data into a logical scheme, graphs are a good tool. If the target is to make a predictive model, we should care first about the accuracy of the model, and in some cases deep learning models are the most accurate. Like any statistical model, deep NN models show correlations, not causation; even Bayesian networks are hardly transformed into causal networks.

      Deep learning models can be explained, as in the well-known examples with images. The only point is that the interpretation is not trivial – but can we expect a trivial interpretation of a complex phenomenon?
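
For the image examples mentioned in the reply above, one standard interpretation technique is gradient-based saliency: differentiate the predicted class score with respect to the input pixels. A hedged sketch, assuming PyTorch and a stand-in toy model:

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier: 8x8 grayscale "images", 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 10))
model.eval()

x = torch.rand(1, 1, 8, 8, requires_grad=True)   # one fake input image
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                  # d(top score)/d(each pixel)

saliency = x.grad.abs().squeeze()                # |gradient| per pixel, 8x8
print("predicted class:", top_class)
print("most influential pixel (row, col):", divmod(int(saliency.argmax()), 8))
```

As the reply says, the interpretation is not trivial: a saliency map shows where the model looked, not why it decided.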

  13. A joy to read! I wrote up the idea of blockchain-based caged autonomy. It fits nicely in with the first posts. The blockchain is a deterministic, permanent record of real orders, perfect to border the probabilistic, ever-changing, reality-guessing AI system. AI and blockchain fit together seamlessly. Making a safe smart-contract grid on top of AI, or any system, brings solutions. https://www.linkedin.com/pulse/ai-controlled-blockchain-rules-arnoud-w-berghuis-msc/

    Added to this, I would like to suggest:
    “that the paradigm we discuss is the same for ALL systems and machines”!
    I stress governmental systems and organizations. The reasoning above can be applied to them as well – just as well as to cats and dogs and many humans.

    Adding governmental systems to our discussion makes our discussion easier! We do know hen to take them seriously, hold them responsible and accountable Government is sometimes ABOVE humans. Organizations can be above too. They trust machines at their core. Inside organizations can be people who just follow blunt orders.

    If you go along with my first point you almost automatically can conclude
    “That Machines can become people, but people can become machines too”!

    This “people become machines” problem was described in the book “Der Proceß”, which Franz Kafka wrote about 100 years ago, around the time WW1 ended on 11 November 1918. The day Europe ended, for what it was: no individual or country to blame, only a terrible Kafkaesque system. This problem was NEVER really solved. But now we can solve it! By using blockchain, a once-in-a-generation technology that brings 100% traceability. Trace, and “Kafka is solved!” This would really be the best thing of our age! Certainly worth a book too! We should write it first, shouldn’t we? It’s so positive: we solve a real issue! On the fly, we solve our new issue by understanding the root cause. Here I kick off: understanding the “law of resistance to change” is key to using blockchain. https://www.linkedin.com/pulse/understanding-law-resistance-change-key-use-arnoud-w-berghuis-msc/

  14. Decades ago I posed for an intelligence agency the difficulty of monitoring transformational technologies for ethical consequences when at the same time the structures of ethical thinking, religious ideation and frameworks, and other essential aspects of our humanity were simultaneously going through the looking-glass of transformation too. Religion framed by print text, for example, is not the same as religion framed by digital technologies, nor are the humans who engage with it the same kinds of humans as those who merely read text. This plunge into ethics and the hard questions raised by big data and its mining is a good effort to get those who matter to at the very least think about these issues. My fear is that, as so often, emergent realities happen, then we think about ethics, and it is too late to apply our thinking retroactively. Yet … what else can we do? It was/is very difficult to get software designers to think about building in security, and now we are asking them to build in ethical frames, when real experts at ethics break the rules all the time, but know when to break the rules, a gift beyond machine learning still. I deeply appreciate the issues raised but fear those who should be paying attention are busy doing other things.

    • Very good point when you say: “My fear is that, as so often, emergent realities happen, then we think about ethics, and it is too late to apply our thinking retroactively.”

      You asked: what else can we do? … I do not have the solution, but awareness first, and then actions, are called for.

  15. When you say “You can use AI technologies either to automate or to augment humans”, this reminds me that I defined “conventional applied computer science” as “augmenting humans”, and AI as letting machines develop themselves. A successful AI is not necessarily pro-human, and we can bet there is some “puberty age” where the machine will be rebellious and “stupid”. An AI is a sort of alien. “AI” is a misnomer. I would say that the universal Turing machine is already and naturally very clever, especially the “Löbian” ones, which are those that know (in some theoretical computer science sense) that they are universal. They are wise and modest. But we don’t use them, because we prefer to “enslave” them instead and to make them work for us, like when I am sending this post. Only experts listen to them (Gödel, Löb, Solovay). An intelligent machine will build its own ethics, and it will be related to our own as far as we are able to recognise ourselves in them. If not, we will get a “terrible child”. The “singularity” point might be the point where the machine will not be as intelligent as the humans, but as stupid as the humans. Maybe.

    • I agree when you say that: “A successful AI is not necessarily pro-human, and we can bet there is some ‘puberty age’ where the machine will be rebellious and ‘stupid’.”

      We can argue what “intelligence” really means….

  16. Christoph Zimmermann

    Thanks for sharing the slides, Roberto – much appreciated. Very insightful and detailed food for thought indeed.

    In addition to the excellent comments above, there’s a central point which needs addressing in this context as well: Care should be taken that private companies do not take these matters into their own hands, possibly disregarding ethical and moral basics.

    While I’m not a big proponent of extensive regulation, governments should keep an eye on this. If there’s a conflict of interest, NGOs and other organizations can close this gap. Google’s Project Maven is probably the most recent example, where a group of employees forced a major corporation to reverse its decision to advance AI technology destined for military deployment.

    • Hi Christoph
      very interesting reference to Google’s Project Maven.
      I did some research and found this book:
      Army of None: Autonomous Weapons and the Future of War
      by Paul Scharre, a Pentagon defense expert
      http://books.wwnorton.com/books/Army-of-None/

      From the abstract of the book:

      “What happens when a Predator drone has as much autonomy as a Google car? Or when a weapon that can hunt its own targets is hacked? Although it sounds like science fiction, the technology already exists to create weapons that can attack targets without human input. Paul Scharre, a leading expert in emerging weapons technologies, draws on deep research and firsthand experience to explore how these next-generation weapons are changing warfare.

      Scharre’s far-ranging investigation examines the emergence of autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. He spotlights artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II—antecedents of today’s homing missiles—to autonomous cyber weapons, submarine-hunting robot ships, and robot tank armies. Through interviews with defense experts, ethicists, psychologists, and activists, Scharre surveys what challenges might face “centaur warfighters” on future battlefields, which will combine human and machine cognition. We’ve made tremendous technological progress in the past few decades, but we have also glimpsed the terrifying mishaps that can result from complex automated systems—such as when advanced F-22 fighter jets experienced a computer meltdown the first time they flew over the International Date Line.

      At least thirty countries already have defensive autonomous weapons that operate under human supervision. Around the globe, militaries are racing to build robotic weapons with increasing autonomy. The ethical questions within this book grow more pressing each day. To what extent should such technologies be advanced? And if responsible democracies ban them, would that stop rogue regimes from taking advantage? At the forefront of a game-changing debate, Army of None engages military history, global policy, and cutting-edge science to argue that we must embrace technology where it can make war more precise and humane, but without surrendering human judgment. When the choice is life or death, there is no replacement for the human heart. ”

      I plan to read it.

  17. vint cerf

    While this presentation is focused on AI, I came away feeling that the ethical issues cited are applicable to all software, not just AI-based software.

  18. This is an excellent presentation, showing clearly and convincingly the complexity of the AI challenge we are facing today. I agree that safety, human dignity, personal freedom of choice, and data autonomy are essential requirements – not the only ones – that must be integrated into system design endowed with ethics. The difficulty is to define what “ethics” is in this context (a useful slide with 5 principles is given) and how to take it into account in the design (technically, organizationally, managerially). I suggest liaising with, and if possible including in the next stages of these reflections, eminent experts like Prof. Dr. Sarah SPIEKERMANN (Institute for Management Information Systems at Vienna University of Economics and Business) and Linda FISHER THORNTON (Leading in Context CEO, author of “7 Lenses”).

  19. Nicolai Pogadl

    I would have welcomed a slide focusing on the need for diversity in AI research and development teams, as this has a direct impact on ethical considerations regarding AI/ML systems. If AI/ML teams are too homogeneous, the likelihood of groupthink and one-dimensional perspectives rises, thereby increasing the risk of leaving the whole AI/ML project vulnerable to inherent biases and unwanted discrimination.

    This is something the leading AI/ML learning conferences are starting to address, and I think we can all do our part to promote the work and thinking by underrepresented groups in the research community.

    Women in Machine Learning ( https://wimlworkshop.org/ ) and Black in AI ( https://blackinai.github.io/ ) are probably good places to start.

    One last related thought on a macro, geopolitical level: I believe diversity and free thought/expression are a key advantage in the global, ultra-competitive “AI race” (not a fan of this term/framing, though…) and should be leveraged as such in the development of systems using AI/ML.

  20. Roberto,
    Thank you for addressing very real ethical issues that need to be sorted out quickly. I have spent the past 9 years working on answering the question Pedro posed on slide #17: “what do we really mean by ethics?” My research revealed that there are 7 dimensions of “ethical”, all of which are important to consider when making decisions like the ones we face with AI and the Internet of Things. The 7 Lenses model I wrote about in my book is available here: https://leadingincontext.com/7lenses/model/. I am working on a paper that applies these multiple ethical perspectives to some of the big ethical questions your presentation has raised. It seems to me that multiple perspectives are required to get a complete enough picture to make ethical choices and prevent significant unintended consequences.
    Linda

  21. Monica Beltrametti

    Great presentation! I think we first have to define and agree on some overarching ethical principles (see for example https://www.academia.edu/37691951/AI4People_-_An_Ethical_Framework_for_a_Good_AI_Society_Opportunities_Risks_Principles_and_Recommendations). Then each industry has to derive its own industry-specific principles from these overarching ones.

    By the way, I think there is more behind Big Data than economic power. See for example: Luciano Floridi, “Semantic Capital: Its Nature, Value, and Curation”, Springer Nature B.V., 2018.

  22. There will be a human in the loop for a long time yet, but ultimately it’s a question of whether politics can ever catch up with disruptive tech. IMHO driving a truck isn’t the healthiest option for a human, so, under specific circumstances, I don’t see the problem with automating the truck. People hate change, even if the status quo is slowly killing them! So I’m all for disruption, with a universal basic income. A lot of medicine is currently archaic, barbaric even; we need the disruption AI is bringing.

    Re. ethics, the study of AI/AGI is teaching us more and more about human psychology, neuroscience and ethics.

  23. Prof. Zicari has compiled a comprehensive discussion regarding ethics and AI. Ethics is an important problem with significant impacts on the future development of AI. Great work!

  24. Simona Mellino

    Hi Roberto,

    Very interesting piece of work, as usual. I particularly agree with the distinction you make between sales/marketing and healthcare, where of course the consequences of something going wrong would be dramatically worse. And of course ethical design principles have to apply across industries, but be specifically tailored, or more stringent, for particular applications.

  25. Steve Lohr

    It strikes me that building an ethical framework for AI is a lot like writing software, or any other creative endeavor — an iterative process. All these checklists are useful, but only a start.
    I suspect that where we’ll end up with AI is similar to what has happened with other transformative technologies, from the internal combustion engine to atomic weapons: where you come out is not so much with solutions to the challenges the technology raises as with accommodations that seem reasonable. And what are the rules and tools to get us there?
    With this technology, I do think the Explainable AI efforts could yield quite useful tools.

  26. Tome Sandevski

    This is a very helpful overview! As someone working at a university, I wonder how prominently the ethical and societal implications of Big Data and AI feature in teaching programmes. Do students approach Big Data and AI as technical/mathematical challenges, or do they reflect on the broader implications?

  27. FYI: the draft of the Ethics Guidelines for Trustworthy AI is available online for feedback.
    https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

    This working document was produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). The final version is due in March 2019.
