On AI, Governance, Ethics, and Societal Impact. Interview with Lambert Hogenhout
“There is too little attention being given to the effect of all this emerging technology in the medium to long term, let’s say 5–10 years. The effects on how we work, how we learn, communicate, form connections and self-identify.”
Q1. How do the challenges of implementing responsible AI differ across varying contexts (developed vs. developing nations), and what fundamental principles remain constant regardless of a country’s technological maturity or resources?
Lambert Hogenhout: In advanced economies, the primary challenges tend to be around algorithmic bias embedded in legacy systems, regulatory complexity, and managing the pace of adoption across large, entrenched institutions. In developing countries, the challenges are more foundational: limited digital infrastructure, smaller pools of technical talent, weaker data ecosystems, and the risk that AI solutions designed elsewhere are imported without sufficient adaptation to local realities, languages, and cultural contexts. The fundamental principles, however, are the same: transparency (people should understand when AI is being used and how it affects them); accountability (someone must be answerable when things go wrong); fairness (AI should not entrench or amplify inequalities); and agency (the people affected by AI-driven decisions should have meaningful recourse).
Q2. What misconceptions about AI governance do you encounter most frequently at the international level?
Lambert Hogenhout: The illusion that AI safety and innovation are mutually exclusive. The idea that if you govern AI responsibly, you necessarily slow down progress and lose competitive advantage. The evidence does not support that. In fact, organizations and countries that invest in trustworthy AI frameworks tend to foster greater adoption, because users, businesses, and governments are more willing to rely on systems they can trust.
Another misconception is that governance of AI is a technology issue. It is not. It is about values, power, and inclusion: who decides, whose interests are represented, and who bears the consequences when things go wrong.
Q3. How has the conversation around AI ethics and responsible tech evolved over the past 20+ years?
Lambert Hogenhout: As we have gradually digitized a large part of our lives, compute power has grown and algorithms have advanced, and both the potential for useful applications and the risk of undesirable effects have grown with them. Policy needs to capture that at a high level, and strategy needs to determine how this all affects us and what’s next. In the early days of big data, the conversation was largely about privacy and data protection—who has access to our information and what are they doing with it. As machine learning matured, the focus shifted to bias and fairness—we realized that models trained on historical data could perpetuate and even amplify discrimination. Now, with generative AI, the conversation has broadened dramatically to include questions about misinformation, intellectual property, the nature of creativity, and even what it means to have an autonomous system making consequential decisions. What has also evolved is who is part of the conversation. Twenty years ago, these were largely technical discussions among specialists. Today, AI ethics is debated in parliaments, boardrooms, classrooms, and living rooms. That democratization of the discourse is healthy, even if it makes governance more complex.
Q4. What lessons from earlier technology waves are we forgetting as we rush to deploy generative AI, and what genuinely new ethical challenges does GenAI present?
Lambert Hogenhout: What is new is that the challenges have become more complex. Even a designer or regulator with full power to make AI responsible will have a hard time foreseeing the risks of the outputs and decisions of AI systems. Part of the reason is that, unlike previous technologies, today’s AI is inherently non-deterministic. Another part is that it is increasingly a general-purpose technology: it is not always clear at the outset exactly how an AI system will be used, and therefore what the risks are.
One lesson we are forgetting is the importance of deploying gradually and learning as we go. As the speed of innovation increases, the pressure to adopt quickly has led many organizations to deploy widely before they fully understand the risks. Another forgotten lesson is that technology alone does not solve organizational problems—you need to change processes, train people, and build governance structures alongside the technology. The new challenges include the sheer scale of potential misuse—the ability to generate convincing disinformation, deepfakes, and synthetic content at unprecedented volume and speed.
Data privacy concerns have been brought to a whole new level with the increased capabilities to collect, correlate and process data. For instance, I have been working recently on Facial Privacy, which is under threat from facial recognition built into cameras, smartphones and AI glasses (and, unlike a password, we cannot change our face when it is compromised!). There is also the question of intellectual property: the existing regulations and norms (e.g. “fair use”) were not designed for the current reality of massive data and AI, and it will take time to adjust them. In the meantime, we find ourselves in an IP grey zone that is ungoverned and probably unfair. And the increasingly capable forms of generative AI blur the line between human and machine output in ways that raise deep questions about authenticity, trust, and accountability.
Q5. What are the critical components of effective data literacy that go beyond “understanding what data is” to actually empowering people to make better decisions with data?
Lambert Hogenhout: In my experience, the most effective data literacy programs are anchored in real work. People learn best when they can immediately apply what they have learned to problems they care about. Second, effective programs do not focus only on technical skills but also cultivate a mindset of thinking critically about data. That means teaching people to ask the right questions: Where did this data come from? What is missing? What are the limitations? What decisions will this inform, and what are the consequences of getting it wrong? It is also important to realize that data literacy is not a one-time effort. It requires ongoing practice, peer learning, support (tools and communities of practice), and clear data governance so people know what data they can use and how.
Q6. How should organizations think about data literacy differently in the age of AI?
Lambert Hogenhout: The data, the models, the reasoning processes, the outputs and decisions, and the UI used to steer these processes are all part of the same system. Feeding bad data to an AI will result in unreliable outputs or wrong decisions, just as bad prompts will deliver poor results. This means data literacy must evolve into something broader—what I would call AI literacy. It is not enough to understand data in isolation, nor is it enough to focus just on prompting skills, for instance. People need to understand how data flows into models, how models generate outputs, and where the opportunities for error, bias, or hallucination exist along that chain. They need to develop an intuition for when to trust AI outputs and when to question them. As the building of AI systems and AI agents is increasingly democratized, the design of an AI agent also depends on the user’s understanding of how AI works, from the data layer to the result. When anyone can build an AI agent, the consequences of poor understanding are no longer limited to a bad spreadsheet. They can cascade through automated systems in ways that are difficult to trace and correct.
Q7. How do you see the relationship between legal compliance (privacy regulations like GDPR, CCPA) and ethical responsibility?
Lambert Hogenhout: As with data privacy, accountability for the safety of AI systems is shared among governments (through regulation), model providers, builders of AI applications, and end users. None of these parties alone can guarantee AI safety. For model providers and creators of AI applications, building in ethics by design—with regard to training data, algorithms and guardrails—is the right decision in the long run: not only morally, but also for business. As happened with data privacy, where citizens became increasingly concerned about their personal data, I see the same happening with AI: consumers will become more critical about which AI systems they are willing to use, and about how and where they want to use them.
Q8. Can organizations be fully compliant yet still deploy technology irresponsibly? How should leaders navigate this tension?
Lambert Hogenhout: Yes, they can. For most organizations, the more valuable currency is their reputation and the trust of their customers, partners and their own employees. Each of these groups has expectations about what is acceptable within societal norms. Betraying that trust and those expectations for the sake of efficiencies created with AI is always a bad strategy. Examples include targeted advertising that exploits psychological vulnerabilities, or AI-driven hiring tools that are technically non-discriminatory by legal standards but systematically disadvantage certain communities in practice.
Conversely, there are situations where doing the ethically right thing may create tension with strict regulatory interpretation—for instance, using health data in ways that could save lives but push the boundaries of consent frameworks designed for a different era. My advice to leaders is this: do not let your legal team be the sole arbiter of what is acceptable. Build an ethics function that works alongside compliance, brings diverse perspectives to the table, and asks the harder questions—not just “can we do this?” but “should we do this?” And engage your stakeholders—your employees, your customers, and the communities you affect—in that conversation.
Q9. What are the biggest gaps between what technologists understand about policy and what policymakers understand about technical realities? How can we create better dialogue?
Lambert Hogenhout: The pace and complexity of technology, and its pervasiveness in society and business, make it hard for regulators to understand what they regulate. In some industries (e.g. finance) we have seen voluntary standards evolve. I would like to see that in tech as well. However, given the pace of development and the large amounts of investment, many Big Tech companies are hesitant to slow themselves down too much for the sake of ethical concerns. On the other side, many technologists underestimate the complexity of policymaking. They tend to think of governance as a binary—regulate or do not regulate—and miss the nuance of how policy is negotiated, implemented, and enforced across different jurisdictions and cultures. They sometimes dismiss governance as bureaucratic overhead rather than recognizing it as a mechanism that can actually create the conditions for sustainable innovation.
To bridge this gap, I believe we need three things. First, we need more people who can speak both languages—technologists who understand policy and policymakers who understand technology. These translators are rare and valuable. Second, we need structured forums where technical experts and policymakers can engage in genuine dialogue—not lobbying, not adversarial testimony, but collaborative problem-solving. The model of regulatory sandboxes, where new technologies can be tested within a governed environment, is a promising approach. Third, we need the private sector to engage more constructively. Voluntary standards, industry-led certification, and genuine self-regulation—not as an alternative to public governance, but as a complement to it. The industries that have done this well, like aviation safety, show that it is possible to innovate rapidly while maintaining strong safety cultures. The question is whether the tech sector has the will to follow that example.
Q10. Looking ahead to 2030–2035, what emerging AI capabilities will fundamentally reshape governance, ethics, and societal impact? Are we preparing adequately?
Lambert Hogenhout: This is exactly what keeps me awake at night and what I often speak about: so much is happening right now that it takes our full attention to deal with the Now, with tomorrow and next week. There is too little attention being given to the effect of all this emerging technology in the medium to long term, let’s say 5–10 years. The effects on how we work, how we learn, communicate, form connections and self-identify. The convergence of AI with biotechnology, brain-computer interfaces, and robotics will raise questions about human identity and autonomy that we are barely beginning to consider. And the increasing use of AI in defense and security applications creates risks that are existential in nature.
A worst-case scenario is one where technology ends up making us unhappier, lonely, unfulfilled and unproductive. I think that by making more intentional choices in how we adopt technology we can increase the chances of a future where humans thrive. No, we are not preparing adequately. We are governing yesterday’s AI while tomorrow’s is being built. To change that, we need to invest far more in foresight—not prediction, but structured thinking about possible futures and their implications. And we need to embed that long-term thinking into the organizations and institutions that shape our collective future.
Q11. What should organizations and policymakers be doing now to prepare for AI capabilities that don’t yet exist in production systems?
Lambert Hogenhout: The Canadian ice-hockey legend Wayne Gretzky famously said: “Don’t skate to where the puck is; skate to where it is going to be.” While I recognize this is challenging in a landscape that shifts by the month, policymakers can focus on building adaptive governance frameworks—regulations that are principles-based rather than prescriptive, so they remain relevant as the technology evolves. They can invest in technical expertise within government so they are not entirely dependent on industry to explain what is happening. And they can establish international coordination mechanisms now, before the technology outpaces our ability to govern it collectively.
Similarly, leaders of organizations can invest in building organizational resilience and adaptability. This means developing AI governance structures that can evolve, training their workforce not just for today’s tools but for the capacity to learn continuously, and building strong ethical foundations that will guide decision-making regardless of what specific technologies emerge. The organizations that will navigate the next decade successfully are those that see responsible AI not as a compliance burden but as a core strategic capability.
Q12. What practical advice would you give to organizations trying to implement AI responsibly? What does the organizational structure, governance framework, and decision-making process of a truly responsible AI deployer look like?
Lambert Hogenhout: Start with clarity about your values and your risk appetite, not with the technology. The organizations that struggle most are those that adopt AI tools first and then try to retrofit governance and ethics around them. By that point, the technology has created its own momentum, and course correction becomes much harder. A truly responsible AI deployer has several characteristics: it has clear accountability (usually a senior leader or body with real authority); it embeds ethical review into the development and deployment lifecycle (ethics by design); it invests in diverse teams, because the blind spots that lead to harmful AI outcomes are most often the result of homogeneous thinking; and it includes feedback loops (continuous monitoring).
…………………………………………………………………

Lambert Hogenhout is Chief of Data and AI at the United Nations Secretariat.
He is also an author, keynote speaker and advisor on AI and the responsible use of technology. He has 25 years of experience working both in the private sector and with international organizations such as the World Bank and the United Nations. He leads governance and strategy in the areas of data and AI and oversees their practical implementation. He has published on data privacy, data governance, the societal implications of technology and the responsible use of AI.
……………………………..

