“Today, AI can be a cluster bomb. Rich people reap the benefits while poor people suffer the result. Therefore, we should not wait for trouble to address the ethical issues of our systems. We should alleviate and account for these issues at the start.” — Ricardo Baeza-Yates.
Q1. What are your current projects as Director of Research at the Institute for Experiential AI of Northeastern University?
Ricardo Baeza-Yates: I am currently involved in several applied research projects in different stages at various companies. I cannot discuss specific details for confidentiality reasons, but the projects relate predominantly to aspects of responsible AI such as accountability, fairness, bias, diversity, inclusion, transparency, explainability, and privacy. At EAI, we developed a suite of responsible AI services based on the PIE model that covers AI ethics strategy, risk analysis, and training. We complement this model with an on-demand AI ethics board, algorithmic audits, and an AI systems registry.
Q2. What is responsible AI for you?
Ricardo Baeza-Yates: Responsible AI aims to create systems that benefit individuals, societies, and the environment. It encompasses all the ethical, legal, and technical aspects of developing and deploying beneficial AI technologies. It includes making sure your AI system does not interfere with human agency, cause harm, discriminate, or waste resources. We build Responsible AI solutions to be technologically and ethically robust, encompassing everything from data to algorithms, design, and user interface. We also identify the humans with real executive power who are accountable when a system goes wrong.
Q3. Is it the use of AI that should be responsible and/or the design/implementation that should be responsible?
Ricardo Baeza-Yates: Design and implementation are both significant elements of responsible AI. Even a well-designed system could be a tool for illegal or unethical practices, with or without ill intention. We must educate those who develop the algorithms, train the models, and supply/analyze the data to recognize and remedy problems within their systems.
Q4. How is responsible AI different/similar to the definition of Trustworthy AI – for example from the EU High Level Experts group?
Ricardo Baeza-Yates: Responsible AI focuses on responsibility and accountability, while trustworthy AI focuses on trust. However, if the output of a system is not correct 100% of the time, we cannot trust it. So, we should shift the focus from the percentage of time the system works (accuracy) to the portion of time it does not (false positives and negatives). When that happens and people are harmed, we have ethical and legal issues. Part of the problem is that ethics and trust are human traits that we should not transfer to machines.
Q5. How do you know when an application may harm people?
Ricardo Baeza-Yates: This is a very good question, as in many cases harm occurs in unexpected ways. However, we can mitigate a good percentage of it by thinking about possible problems before they happen. Exactly how to do this is an area of current research, but we can already do many things:
- Work with the stakeholders of your system from design to deployment. That includes your power users, your non-digital users, regulators, civil society, etc. They should be able to check your hypotheses, your functional requirements, your fairness measures, your validation procedures, etc. They should be able to contest you.
- Analyze and mitigate bias in the data (e.g., gender and ethnic bias), in the results of the optimization function (e.g., data bias is amplified or an unexpected group of users is discriminated against), and/or in the feedback loop between the system and its users (e.g., exposure and popularity bias).
- Do an ethical risk assessment and/or a full algorithmic audit that covers not only the technical components but also the impact of your system on your users.
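The bias analysis in the second point can be made concrete with a very small sketch. The data, group names, and fairness measure (demographic parity gap) below are illustrative choices, not a prescription:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group positive-outcome rates from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: (group, 1 = approved, 0 = denied)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(selection_rates(data))         # group A approved at 0.75, group B at 0.25
print(demographic_parity_gap(data))  # 0.5
```

In practice one computes several complementary measures (equalized odds, calibration, etc.) and, as noted above, involves stakeholders in deciding which measures matter and what gap is acceptable.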
Q6. What is your take on the EU proposed AI law?
Ricardo Baeza-Yates: Among the many details of the law, I think the proposed AI regulation has two significant flaws. First, we should not regulate the use of technology but focus instead on the problems and sectors in a way that is independent of the technology. Rather than restrict technology that may harm people, we can approach it the same way as food or health regulations, which work for all possible technologies. Otherwise, we will need to regulate distributed ledgers or quantum computing in the near future.
The second flaw is that risk is a continuous variable. Dividing AI applications into four risk categories (one is implicit, the no-risk category) is a problem because those categories do not really exist (see The Dangers of Categorical Thinking). Plus, when companies self-evaluate, there is a conflict of interest and a bias toward choosing the lowest risk level possible.
Q7. You mentioned that “we should not regulate the use of technology, but focus instead on the problems and sectors in a way that is independent of the technology”. AI seems to introduce an extra complexity, that is, the difficulty in many cases of explaining the output of an AI system. If you are making a critical decision that can affect people based on an AI algorithm for which you do not know why it produced an output, it would be, in your analogy, equivalent to allowing a particular medicine to be sold that produces lethal side effects. Do we want this?
Ricardo Baeza-Yates: No, of course not. However, I do not think it is the best analogy, as the studies required for a new medicine must establish why the side effects occur, and only after that do you do an ethical risk assessment to approve it (i.e., whether the benefits of the medicine justify the lethal side effects). But the analogy is better for the solution: we may need something similar to the FDA in the U.S.A., which approves each medicine or device via a three-phase study with real people. Of course, this is needed only for systems that may harm people.
Today, AI can be a cluster bomb. Rich people reap the benefits while poor people suffer the result. Therefore, we should not wait for trouble to address the ethical issues of our systems. We should alleviate and account for these issues at the start. To help companies confront these problems, I compiled 10 key questions that a company should ask before using AI. They address competence, technical quality, and social impact.
Q8. Ethics principles have been established long ago, well before AI and new technology were invented. Laws are often running behind technology, and that is why we need ethics. Do you agree?
Ricardo Baeza-Yates: Ethics always runs behind technology too. It happened with chemical weapons in World War I and nuclear bombs in World War II, to mention just two examples. And I disagree, because ethics is not something that we need; ethics is part of being human. It is associated with feeling disgust when you know that something is wrong. So, ethics in practice existed before the first laws. It is the other way around: laws exist because there are things so disgusting (or unethical) that we do not want people doing them. However, in the Christian world, Bentham and Austin proposed the separation of law and morals in the 19th century, which in a way implies that ethics applies only to issues not regulated by law (and then the separation boundary is different in every country!). Although this view started to change in the middle of the 20th century, the separation still exists, which for me does not make much sense. I prefer the Muslim view, where ethics applies to everything and law is a subset of it.
Q9. A recent article you co-authored “is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals.” Can you please elaborate a bit on this? What are the key messages you want to convey?
Ricardo Baeza-Yates: The main message of the open article that you refer to is freedom for research in AI ethics, even in industry. This was motivated by what happened with the Google AI Ethics team more than a year ago. In the article we first give a short history of AI ethics and the key problems that we have today. Then we point to the dangers: losing research independence, dividing the AI ethics research community in two (academia vs. industry), and the lack of diversity and representation. Then we propose 11 actions to change the current course, hoping that at least some of them will be adopted.
……………………………………………………………

Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University. He is also a part-time Professor at Universitat Pompeu Fabra in Barcelona and Universidad de Chile in Santiago. Before that, he was CTO of NTENT, a semantic search technology company based in California, and prior to these roles, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, and his areas of expertise are web search and data mining, information retrieval, bias and ethics in AI, data science, and algorithms in general.
Regarding the topic of this interview, he is actively involved as an expert in many initiatives, committees, and advisory boards related to Responsible AI around the world: the Global AI Ethics Consortium, the Global Partnership on AI, IADB’s fAIr LAC Initiative (Latin America and the Caribbean), the Council of AI (Spain), and ACM’s Technology Policy Subcommittee on AI and Algorithms (USA). He is also a co-founder of OptIA in Chile, an NGO devoted to algorithmic transparency and inclusion, and a member of the editorial committee of the new AI and Ethics journal, where he co-authored an article highlighting the importance of research freedom in ethical AI.
…………………………………………………………………………
Resources
– AI and Ethics: Reports/Papers classified by topics
– Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF
– White Paper: On Artificial Intelligence – A European approach to excellence and trust. European Commission, Brussels, 19.2.2020, COM(2020) 65 final. Link to .PDF
– Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. LINK
– Recommendation on the Ethics of Artificial Intelligence. UNESCO, November 2021. LINK
– Recommendation of the Council on Artificial Intelligence. OECD, 22/05/2019. LINK
– How to Assess Trustworthy AI in Practice, Roberto V. Zicari, Innovation, Governance and AI4Good, The Responsible AI Forum Munich, December 6, 2021. DOWNLOAD .PDF: Zicari.Munich.December6,2021
Related Posts
– On Responsible AI. Interview with Kay Firth-Butterfield, World Economic Forum. ODBMS Industry Watch. September 20, 2021
Follow us on Twitter: @odbmsorg
“People are our biggest asset, and we have been continually investing in and advancing our People digital and data science capabilities.” –Sastry Durvasula.
I sat down with Sastry Durvasula, Global Chief Technology & Digital Officer, and John Almasan, Distinguished Engineer, Technology & Digital Leader, at McKinsey to learn how the firm is leveraging AI, cloud, and data & analytics to power digital colleague experiences and client service capabilities in the new normal of hybrid work.
RVZ
Q1: Can you explain the role of technology and digital capabilities at McKinsey? What is your strategy for advancing the firm in the new normal?
SD: The firm has experienced significant growth over the last few years, with nearly 40K colleagues serving clients across 150 global locations. Our technology and digital strategy is focused on powering the future of the firm with a range of innovative capabilities, platforms, and experiences. Our strategic shifts include doubling our innovation in digital client service, firm-wide cloud transformations of all our platforms and applications, next-gen capabilities for AI and knowledge management, and leading-edge colleague-facing technology and hybrid experiences.
As per our recent study, cloud is a trillion-dollar opportunity for businesses, and we are very actively working with our clients to advance their cloud journeys. Earlier this year, we acquired the cloud consultancy Candid and its accomplished team of 100+ technical experts, helping us accelerate our clients’ end-to-end cloud transformations.
5K+ technologists at the firm are organized across our global guilds — which include Design, Product Management, Engineering & Architecture, Data Science, Cyber, etc. — and they provide digital transformation solutions to our clients and drive the development of assets and internal capabilities. Our agile Ways of Working (WoW) and build-buy-partner models are central to our product development, empowering teams to innovate at speed and scale, with the psychological safety to experiment and learn.
Q2: What roles do cloud, data science, and AI play in your strategy? Can you provide some examples?
SD: AI and data science are central to this strategy in both serving our clients and transforming our internal capabilities. Thanks to the significant technological advancements in AI/ML powering our data science capabilities, we are unlocking innovative client-service and colleague digital experiences. We are building and advancing a hybrid and multi-cloud ecosystem to power distinctive solutions and assets for our clients, which includes strategic partnerships and integrations with leading industry hyperscalers and software products.
As an example, on the client-service side, we are completely transforming our core knowledge and expertise platforms leveraging cloud-native technologies and AI/ML. Similarly, McKinsey.com and the McKinsey Insights mobile app serve up strategic insights, analytics, studies, and content to a broad range of users across the globe — including the C-suite and aspiring students alike. Our cloud transformation of these iconic platforms enables innovation, scale, and speed in publishing, smart search, audience engagement, subscriber experience, and reach & relevance efforts.
On the colleague experience side, AI and AR/VR powered digital workplace capabilities, colleague-facing chatbots, and hybrid-in-a-box tools are a huge focus, as well as predictive and proactive services to detect and service technology issues for our global workforce. People analytics, recruiting, and onboarding journeys are also key areas where we are leading with distinctive capabilities and tools supported by data and AI-driven HR, allowing us to achieve a substantial step up from HR 2.0 to HR 3.0.
Q3: Can you elaborate on knowledge and expertise management, and the role AI plays in shaping this space at the firm?
SD: We have a unique and proprietary knowledge management platform that codifies decades of wisdom and integrates the firm’s extensive insights, studies, industry domain content, knowledge, structured and unstructured data, and analytics with a wide range of artifacts using secure, role-based access. This platform is widely used by our colleagues across the globe, creating profound impact for our clients as well as the firm’s business functions. We have been advancing this platform and the surrounding ecosystem by leveraging AI and cloud technologies for semantic search and auto-curated, personalized results. Also worth mentioning are our AI-powered chatbots with NLP, which provide valuable intelligence for our colleagues in various industry practices. Using graph database technologies and data science modeling for contextual understanding significantly enhances our knowledge search capabilities, including video scanning, speech-to-text, summarization, and the ability to index topics of interest.
For finding expertise, we are also making use of ML ontologies to uncover behaviors and relationships between various types of “skills” and Subject Matter Experts (SMEs) to manage, govern, and dynamically connect colleagues with the best domain experts based on desired skills and/or knowledge needs. Our colleague-facing “Know” mobile app provides on-the-go access to our curated knowledge databases and domain experts, integrating with all our internal communication channels and collaboration tools, and AI-driven recommendations.
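The idea of matching colleagues to experts by skills can be pictured with a deliberately simple sketch. The names, skill sets, and Jaccard-overlap scoring below are hypothetical stand-ins for the ML ontologies described above:

```python
def jaccard(a, b):
    """Overlap between two skill sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def rank_experts(needed_skills, experts):
    """Rank subject matter experts by how well their skills match a request."""
    scored = [(jaccard(needed_skills, skills), name)
              for name, skills in experts.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical expert profiles
experts = {
    "Alice": {"graph databases", "nlp", "search"},
    "Bob":   {"cloud", "security"},
    "Carol": {"nlp", "speech to text"},
}
print(rank_experts({"nlp", "search"}, experts))  # Alice first, then Carol, then Bob
```

A production system would of course work over an ontology of skill relationships (synonyms, specializations) rather than exact string overlap, which is precisely what makes the ML approach described above valuable.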
Q4: Can you expand a little bit more on how AI and data science are powering the HR 3.0 agenda?
SD: People are our biggest asset, and we have been continually investing in and advancing our People digital and data science capabilities. For example:
People analytics play a vital role, and we consider them a stairway to impact with growing maturity in data, engineering, and data science capabilities. Our transformation to HR 3.0 relies on globally rich datasets, cloud capabilities, advanced analytics, and first-class data science and engineering teams, along with integrated operational processes. By using hybrid cloud-based graph databases and R, Python, Julia, etc. to join disparate sources of data, our data engineering teams have assembled not only one of the highest-quality data ecosystems in the firm, but also a very resilient one. Since, in general, 80% of data science effort goes into data cleaning, our strategy removes such roadblocks and ensures an analytically ready, understandable data solution, so our data scientists can focus on delivering people analytics rather than on data curation and sanitization.
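The "join disparate sources" step can be pictured with a toy example. The employee records below are made up, and a real pipeline would use the graph databases and languages mentioned above rather than an in-memory dict join; the point is that surfacing missing values explicitly is what makes the data "analytically ready":

```python
import csv
import io

# Two hypothetical sources keyed by employee id: an HR extract and a survey export.
hr_csv = "emp_id,office\n1,Barcelona\n2,Boston\n3,Santiago\n"
survey_csv = "emp_id,engagement\n1,4.5\n3,3.9\n"

def load(text):
    """Index CSV rows by their emp_id column."""
    return {row["emp_id"]: row for row in csv.DictReader(io.StringIO(text))}

hr, survey = load(hr_csv), load(survey_csv)

# Left join on emp_id, with an explicit marker (None) for missing survey data
joined = []
for emp_id, record in hr.items():
    record["engagement"] = survey.get(emp_id, {}).get("engagement")
    joined.append(record)

print(joined)  # employee 2 has engagement=None, flagging a data-quality decision
```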
On the recruiting and onboarding front, given our scale of hiring talent every year across the globe — both fresh talent from innovative academic institutions and experienced hires from various industries with a wide range of skills — we have significantly invested in AI-driven capabilities for identifying, recruiting, and hiring talented individuals. As an example, our intelligent NLP-driven “Resume Processing Review,” built with deep learning models, enables us to process over 750,000 resumes annually and to identify characteristics of successful applicants. By making use of intelligent guidance with dynamic, customizable questions, activities like scoring, prioritizing, and sorting candidates are simplified, while the overall process timeline is tremendously reduced. Ensuring that solutions avoid AI bias in recruiting is also a major focus. Additionally, these AI capabilities enable a smooth and personalized onboarding experience for candidates.
Our recent report on the workforce of the future highlights emerging trends and insights, including flexibility and continuous learning opportunities to foster and retain an engaged workforce. Our “Job-to-Job Matching” ML system accelerates the discovery and matching of jobs with those looking for another opportunity. AI-driven learning is another big priority; it enables highly personalized learning tracks for our colleagues based on their skills, engagements, and aspirations as part of our proprietary platforms.
Q5: Can you share some insights and details on your technology ecosystem and how it powers your internal and external platforms and products?
JA: To power our global product development solutions and innovations, we focused on transforming the firm’s core technology architecture with a more robust yet flexible 7-layer stack. This new framework is based on hybrid and multi-cloud platforms, secure-by-design engineering capabilities, and futuristic tools to propel delivery at scale and speed.
Developer experience is a core focus, providing premium software engineering tools, APIs, and services across hyperscalers. Our modularized platform-as-a-service, consolidated into a service catalog, gives developers the flexibility and agility to customize complex computing and infrastructure designs to address any internal or external tech ecosystem. Our AI-driven CI/CD pipeline enables interoperability across a wide range of technologies and identifies SDLC vulnerabilities in real time to reduce potential risk and improve overall software quality. Data scientists play a vital role across a range of studies and client service. The stack includes a specialized studio for data scientists with state-of-the-art MLOps and AIOps tools and libraries.
We have developed a cloud security framework, enabling our E2E solutions to be built with secure-by-design and “zero trust” principles in mind, meeting or exceeding the industry “security posture” standards and regulatory needs. Lastly, our global presence demands proactive planning and innovative technologies to ensure that our internal and external platforms and products exist in ecosystems that comply with various country and region-level regulations.
Q6: Tell me about how AI powered colleague experiences helped during the pandemic.
JA: Digital colleague experience has been more crucial than ever during the pandemic and in a hybrid world. We are employing AI to enable seamless capabilities, tools, and rapid response time to client-service requests and issue handling.
First, let me start with CASEE (Caring And Smart Engineered Entity), our colleague-facing chatbot, which provides intelligent technology support and services across the globe. CASEE leverages conversational NLP, leading open-source frameworks, and off-the-shelf tool integrations, with the ability to learn from every interaction and support request. It has been a huge help during the pandemic, when our global workforce switched to remote work with an unprecedented spike in demand while we were also dealing with the effects on our global servicing teams. As an example, CASEE was specifically trained in less than a week to respond to and handle 90% of the questions regarding remote working and common device and network issues. It has also been integrated with our digital collaboration tools as well as our incident response systems.
Another example is the intelligent automation of our Global Helpdesk capabilities, which we turbo-charged during the pandemic and which is widely recognized in the industry and by our clients as a go-and-see reference. We’ve augmented our tools with AI-driven services that can intelligently detect hardware and/or software deterioration on our users’ machines and proactively fix or mitigate these problems. The system is capable of initiating a laptop replacement, performing driver updates, triggering software patching, or even removing or stopping glitched software.
Q7: I heard about the firm’s open source efforts. Can you elaborate?
JA: We recognized that McKinsey tech has a great opportunity to support and give back to the open source community. Kedro, for example, is a powerful ML framework for creating reproducible, maintainable, and modular data science code. It seamlessly blends software engineering concepts like modularity, separation of concerns, and versioning, and applies them to ML code. Kedro has proved to be one of our most valuable ML solutions, successfully used across more than 50 projects to date, providing a set of best practices and a revolutionized workflow for complex analytics projects. We’ve open-sourced Kedro to support our clients and non-clients alike, and to foster ML and software engineering innovation within the developer community. Our approach is to start with our global guilds first, and then contribute to open source. Stay tuned for more exciting developments in this space.
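The node/pipeline modularity that frameworks like Kedro bring to data science code can be sketched in a few lines of plain Python. This is an illustration of the pattern only, not Kedro's actual API; the function and dataset names are made up:

```python
def node(func, inputs, outputs):
    """A named processing step: reads inputs from a catalog, writes one output."""
    return {"func": func, "inputs": inputs, "outputs": outputs}

def run_pipeline(nodes, catalog):
    """Execute nodes in order, threading datasets through a shared catalog."""
    for n in nodes:
        args = [catalog[name] for name in n["inputs"]]
        catalog[n["outputs"]] = n["func"](*args)
    return catalog

# Two small, separately testable steps instead of one monolithic script
def clean(raw):
    return [x for x in raw if x is not None]

def summarize(rows):
    return sum(rows) / len(rows)

pipeline = [node(clean, ["raw"], "clean_rows"),
            node(summarize, ["clean_rows"], "mean")]
catalog = run_pipeline(pipeline, {"raw": [3, None, 5, 4]})
print(catalog["mean"])  # 4.0
```

The payoff of the pattern is exactly the reproducibility and separation of concerns mentioned above: each step can be versioned, tested, and swapped independently of the others.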
Q8: How are you attracting and developing talent in this highly competitive market?
SD: As you can see, we have some very exciting and interesting problems across a wide-range of technologies, industries, geographies, and next horizon initiatives. We are constantly focused on attracting inquisitive and continuous learners. We have also been fostering deep strategic relationships with universities and industry networks across the globe.
We have been expanding our global hubs, adding new locations and advancing our hybrid/remote workforce capabilities across the US, Europe, Asia, and Latin America, with several hundred active open jobs as we speak. We are also opening a major new center in Atlanta, which will be home to more than 600 technologists and professionals, with a strong sense of diversity, inclusivity, and community. We are partnering with leading non-profits, including Girls in Tech globally, Czechitas in Prague, and Black Girls Code and Historically Black Colleges and Universities (HBCUs) in the US.
We launched personalized development programs for our colleagues, including certifications in cloud, cyber, and other emerging technologies. Over 60% of our developers are certified in one or more cloud ecosystems. We’re proud of being recognized by Business Insider as one of the 50 most attractive employers for engineering and technology students around the world. At #19, we are the highest-ranked professional services firm on the list.
……………………………………………

Sastry Durvasula is the Global Chief Technology and Digital Officer, and Partner at McKinsey. He leads the strategy and development of McKinsey’s differentiating digital products and capabilities, internal and client-facing technology, data & analytics, AI/ML and Knowledge platforms, hybrid-cloud ecosystem, and open-source efforts. He serves as a senior expert advisor on client engagements, co-chairs the Firm’s technology governance board, and leads strategic partnerships with tech and digital companies, academia, and research groups.
Previously, Sastry held Chief Digital Officer, Chief Data & Analytics, CIO, and global technology leadership roles at Marsh and American Express and worked as a consultant at Fortune Global 500 companies, with a breadth of experience in the technology, payments, financial services, and insurance domains.
Sastry is a strong advocate for diversity, chairs DE&I at McKinsey’s Tech & Digital, and is on the Board of Directors for Girls in Tech, the global non-profit dedicated to eliminating the gender gap. He championed industry-wide initiatives focused on women in tech, including #ReWRITE and Half the Board. He holds a Master’s degree in Engineering, is credited with 30+ patents, and has been the recipient of several honors and awards as an innovator and industry influencer.

John Almasan is a Distinguished Engineer, Technology & Digital Leader at McKinsey. He is a hands-on, accomplished technology executive with 20+ years of experience in leading global tech teams and building large-scale data, analytics, and cloud platforms. He has deep expertise in hybrid multi-cloud big data engineering, machine learning, and data science. John is currently focused on engineering solutions for the firm’s transformation and the build of the next gen data analytics platform.
Previously John held engineering leadership roles with Nationwide Insurance, American Express, and Bank of America focusing on cloud, data & analytics, AI and ML in financial services and insurance domains. He gives back through his pro bono consultancy work for the Arizona Counterterrorism Center, the Rocky Mountain Information Center, and as a member of the Arizona State University’s Board of Advisors.
John holds a Master’s degree in Engineering, a Master of Public Administration, and a Doctor of Business Administration. He is an AWS Educate Cloud Ambassador, Certified AWS Data Analytics & ML engineer, GCP ML Certified. John is credited with 10+ patents and has been the recipient of several awards.
Resources
The state of AI in 2021 – December 8, 2021 | Survey. The results of our latest McKinsey Global Survey on AI indicate that AI adoption continues to grow and that the benefits remain significant — though in the COVID-19 pandemic’s first year, they were felt more strongly on the cost-savings front than the top line. As AI’s use in business becomes more common, the tools and best practices to make the most out of AI have also become more sophisticated.
The search for purpose at work, June 3, 2021 | Podcast. By Naina Dhingra and Bill Schaninger. In this episode of The McKinsey Podcast, Naina Dhingra and Bill Schaninger talk about their surprising discoveries about the role of work in giving people a sense of purpose. An edited transcript of their conversation follows.
……………………………………………
On Designing and Building Enterprise Knowledge Graphs. Interview with Ora Lassila and Juan Sequeda
“The limits of my language mean the limits of my world.” – Ludwig Wittgenstein
I have interviewed Ora Lassila, Principal Graph Technologist in the Amazon Neptune team at AWS and Juan Sequeda, Principal Scientist at data.world. We talked about knowledge graphs and their new book.
RVZ
Q1. You wrote a book titled “Designing and Building Enterprise Knowledge Graphs”. What was the main motivation for writing such a book?
Ora Lassila and Juan Sequeda: We wanted to tackle the topic of knowledge graphs more broadly than just from the technology standpoint. There is more than just technology (e.g., graph databases) when it comes to successfully building a knowledge graph.
Time and time again we see people thinking about knowledge graphs and jumping to the conclusion that they just need a graph database and start there. Not only is there more technology you need, but there are issues with people, processes, organizations, etc.
Q2. What are knowledge graphs and what are they useful for?
Ora Lassila and Juan Sequeda: We see knowledge graphs as a vehicle for data integration and to make data accessible within an organization. Note that when we say “accessible data”, we really mean this: accessible data = physical bits + semantics. The semantics part is really important, since no data is truly accessible unless you also understand what the data means and how to interpret it. We call this issue the “knowledge/data gap”; Chapter 1 of our book gets deep into this.
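The "accessible data = physical bits + semantics" point can be illustrated with a toy triple store. The subjects, predicates, and the schema triple below are hypothetical; the idea is that the graph carries its own interpretation alongside the raw facts:

```python
# A tiny in-memory "knowledge graph" as (subject, predicate, object) triples.
triples = {
    ("alice", "worksFor", "acme"),
    ("acme", "type", "Company"),
    ("worksFor", "domain", "Person"),  # schema triple: who can "work for"
}

def query(s=None, p=None, o=None):
    """Pattern match over the graph; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# The bits alone say alice -> worksFor -> acme; the schema triple supplies the
# semantics needed to interpret it (worksFor applies to Persons).
print(query(p="worksFor"))
```

Without the schema triple, a consumer of the data would have to guess what "worksFor" means; storing the semantics in the same graph is what closes the knowledge/data gap the authors describe.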
You could say that knowledge graphs are a way to “democratize” data: make data more accessible and understandable to people who are not technology experts.
Q3. Why connect relational databases with knowledge graphs?
Ora Lassila and Juan Sequeda: Frankly, the majority of enterprise data is in relational databases, so this seemed like a very good way to scope the problem. At the beginning of our book we show examples of how data is connected today and frankly, it’s a pain. And it’s not just a technical pain, there are important social and organizational aspects to this.
Juan Sequeda: Understanding the relationship between relational databases and the semantic web/knowledge graphs has been my quest since my undergraduate years. The title of my PhD dissertation is “Integrating Relational Databases with the Semantic Web”. Therefore I can say that this is a passion of mine.
Q4. Does it make more sense to use a native graph database instead or a NoSQL database?
Ora Lassila and Juan Sequeda: There is always the question “why use X instead of Y?”… and the answer almost always is “it depends”. We even bring this up in the foreword: As computer scientists we understand that there are many technologies that can be used to solve any particular problem. Some are easier, more convenient, and others are not. Just because you can write software in assembly language does not mean you shouldn’t seek to use a high-level programming language. Same with databases: find one that suits your purpose best.
Q5. What are the typical roles within an organization responsible for the knowledge graph?
Ora Lassila and Juan Sequeda: Organizations really need to get into the mindset of treating data as a product. When you acknowledge this, you realize you need the roles for designing, implementing and managing products, in this case data products. We see upcoming roles such as data product managers and knowledge scientists (i.e. Knowledge Engineers 2.0). We get into this in Chapter 4 of our book.
Q6. Data and knowledge are often in silos. Sharing knowledge and data is sometimes hard in an enterprise. What are the technical and non-technical reasons for that?
Ora Lassila and Juan Sequeda: Technical problems are solvable, and many solutions exist. That said, we think knowledge graphs are really addressing this issue nicely.
The non-technical issues are an interesting challenge, and in many ways more difficult: people and process, organizational structure, centralization vs decentralization, etc. One specific issue that shows up all the time is this: If you want to share knowledge within a broader organization, you have to cross organizational boundaries, and that lands you on someone else’s “turf”. There is a great deal of diplomacy that is needed to tackle these kinds of issues.
Q7. When is it more appropriate to use RDF graph technologies instead of native property graph technologies?
Ora Lassila and Juan Sequeda: First, we object to the notion of “native” when it comes to property graphs; they are no more native than RDF graphs.
These are two slightly different approaches to building graphs. Ultimately, the question is not all that interesting. A more interesting question is: When should you use a graph as opposed to something else? If you do decide to use a graph, there are a lot of considerations and modeling decisions before you even come to the question of RDF vs. property graphs.
Of course, RDF is better suited to some situations (e.g., when you use external data, or have to merge graphs from different sources). Try using property graphs there and you merely end up re-inventing mechanisms that are already part of RDF. On the other hand, property graphs often appeal more to software developers, thanks to available access mechanisms and programming language support (e.g., Gremlin).
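One reason RDF suits merging is that a ground RDF graph (one without blank nodes) is just a set of triples, so combining graphs from different sources reduces to set union, and shared IRIs line up automatically. A small Python sketch (all IRIs below are invented for illustration):

```python
# Merging two ground RDF graphs is simply set union: triples that use
# the same IRIs line up automatically, and duplicates collapse.
# All IRIs and names below are hypothetical examples.

graph_a = {
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", "Alice"),
}
graph_b = {
    ("ex:bob", "foaf:name", "Bob"),
    ("ex:alice", "foaf:knows", "ex:bob"),  # duplicate of a triple in graph_a
}

merged = graph_a | graph_b
# The duplicate triple collapses, leaving three distinct statements.
```

This is the mechanism you would otherwise re-invent on top of a property graph, where there is no standard global identifier scheme to make two nodes from different sources the same node.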
Q8. How can enterprises successfully adopt knowledge graphs to integrate data and knowledge, without boiling the ocean?
Ora Lassila and Juan Sequeda: First of all, you can’t build enterprise knowledge graphs in a “boil the ocean” approach. No chance in hell. You first need to break the problem into smaller pieces, by business units and use cases. This ultimately is a people and process problem. The tech is already here.
That said, there is a certain “build it and they will come” aspect to knowledge graphs. You should think of them more as a platform rather than as an application. Start by knowing some use cases, and gradually generalize and widen your scope. But you need to be solving some pressing problems for the business. Spend time understanding the problems, the limitations of their current solutions (assuming they are somewhat viable) and finding a champion (i.e. “if you can solve this problem better/faster/etc, I’m all ears!”). Also try to avoid educating on the technology: Business units don’t care if their problem is solved with technology A, B or C… all they want is for their problem to be solved.
Q9. Knowledge graphs and AI. Is there a relationship between them?
Ora Lassila and Juan Sequeda: Yes. Knowledge Graphs are a modern solution to a long-time (and in some ways, “ultimate”) goal in computer science: to integrate data and knowledge at scale. For at least the past half century, we’ve seen independent and integrated contributions coming from the AI community (namely knowledge representation, a subfield of classical AI) and the data management community. See section 1.3 of the book.
Qx. Anything else you wish to add?
Ora Lassila and Juan Sequeda: We see a lot of what Albert Einstein gave as the definition of insanity: Doing the same thing over and over, and expecting different results. We need to do something truly different. But this is challenging for many reasons, not least because of this:
“The limits of my language mean the limits of my world.” – Ludwig Wittgenstein
For example, if SQL is your language, it may be very hard for you to see that there are some completely different ways of solving problems (case in point: graphs and graph databases).
Another challenge is that there are hard people and process issues, but as technologists we are wired to focus on technology, and to seek how to scale and automate.
Finally, we think the “graph industry” needs to evolve past the RDF vs. property graphs issue. Most people do not care. We need graphs. Period.
………………………………………..

Dr. Ora Lassila, Principal Graph Technologist in the Amazon Neptune team at AWS, mostly focusing on knowledge graphs. Earlier, he was a Managing Director at State Street, heading their efforts to adopt ontologies and graph databases. Before that, he worked as a technology architect at Pegasystems, as an architect and technology strategist at Nokia Location & Commerce (aka HERE), and prior to that he was a Research Fellow at the Nokia Research Center Cambridge. He was an elected member of the Advisory Board of the World Wide Web Consortium (W3C) in 1998-2013, and represented Nokia in the W3C Advisory Committee in 1998-2002. In 1996-1997 he was a Visiting Scientist at MIT Laboratory for Computer Science, working with W3C and launching the Resource Description Framework (RDF) standard; he served as a co-editor of the RDF Model and Syntax specification.

Juan Sequeda, Principal Scientist at data.world. He holds a PhD in Computer Science from The University of Texas at Austin. Juan’s goal is to reliably create knowledge from inscrutable data. His research and industry work has been on designing and building Knowledge Graph for enterprise data integration. Juan has researched and developed technology on semantic data virtualization, graph data modeling, schema mapping and data integration methodologies. He pioneered technology to construct knowledge graphs from relational databases, resulting in W3C standards, research awards, patents, software and his startup Capsenta (acquired by data.world). Juan strives to build bridges between academia and industry as the current co-chair of the LDBC Property Graph Schema Working Group, past member of the LDCB Graph Query Languages task force, standards editor at the World Wide Web Consortium (W3C) and organizing committees of scientific conferences, including being the general chair of The Web Conference 2023. Juan is also the co-host of Catalog and Cocktails, an honest, no-bs, non-salesy podcast about enterprise data.
Resources

Designing and Building Enterprise Knowledge Graphs Synthesis Lectures on Data, Semantics, and Knowledge August 2021, 165 pages, (https://doi.org/10.2200/S01105ED1V01Y202105DSK020) Juan Sequeda, data.world; Ora Lassila, Amazon
Related Posts
Fighting Covid-19 with Graphs. Interview with Alexander Jarasch ODBMS Industry Watch, June 8, 2020
Follow us on Twitter: @odbmsorg
##
“I think that many companies need to understand that their customers are worried about the use of AI, and then act accordingly. I believe they should set up ethics advisory boards, or internal teams to advise on what they should do, and then follow that advice.”
–Kay Firth-Butterfield
I have interviewed Kay Firth-Butterfield, Head of Artificial Intelligence and member of the Executive Committee at the World Economic Forum. We talked about Artificial Intelligence (AI) and in particular, we discussed responsible AI, trustworthy AI and AI ethics.
RVZ
Q1. You are the Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. What is your mission at the World Economic Forum?
Kay Firth-Butterfield: We are committed to improving the state of the world.
Q2. Could you summarize for us what are in your opinion the key aspects of the beneficial and challenging technical, economic and social changes arising from the use of AI?
Kay Firth-Butterfield: The potential benefits of AI being used across government, business and society are huge. For example, using AI to help find ways of educating the uneducated, giving healthcare to those without it, and helping to find solutions to climate change. Both embodied in robots and in our computers, it can help keep the elderly in their homes and create adaptive energy plans for air conditioning so that we use less energy and help keep people safe. Apparently some 8,800 people died of heat in the US last year, but only around 450 from hurricanes. AI also helps with cyber security and fighting corruption. On the other side, we only need to look at the facts that over 190 organisations have created AI principles, the EU is aiming to regulate the use of AI, and the OHCHR has called for a ban on AI which affects human rights, to know that there are serious problems with the way we use the tech, even when we are careful.
Q3. The idea of responsible AI is now mainstream. But why when it comes to operationalizing this in the business, companies are lagging behind?
Kay Firth-Butterfield: I think they are worried about what regulations will come, and about the R&D they might lose from entering the market too soon. Also, many companies don’t know enough about the reasons why they need AI. CEOs are not envisaging the future of the company with AI; that thinking, if it happens at all, is often left to a CTO. It is still hard to buy the right AI for you, and to know whether it is going to work in the way it is intended or leave an organisation with an adverse impact on its brand. Boards often don’t have technologists who can help the CEO think through the use of AI for good or ill. Finally, it is hard to find people with the right skills. I think this may be helped by remote working, when people don’t have to relocate to a country which is reluctant to issue visas.
Q4. What is trustworthy AI?
Kay Firth-Butterfield: The design, development and use of AI tools which do more good for society than they do harm.
Q5. The Forum has developed a board tool kit to help board member on how to operationalize AI ethics. What is it? Do you have any feedback on how useful is it in practice?
Kay Firth-Butterfield: It provides Boards with information which allows them to understand how their role changes when their company uses AI, and therefore gives them the tools to develop their governance and other roles to advise on this complex topic. Many Boards have indicated that they have found it useful, and it has been downloaded more than 50,000 times.
Q6. Let´s talk about standards for AI. Does it really make sense to standardize an AI system? What is your take on this?
Kay Firth-Butterfield: I have been working with the IEEE on standards for AI since 2015, and I am still the Vice-Chair. I think that we need to use all types of governance for AI, from norms to regulation, depending on risk. Standards provide us with an excellent tool in this regard.
Q7. There are some initiatives for Certification of AI. Who has the authority to define what a certification of AI is about?
Kay Firth-Butterfield: At the moment there are many who are thinking about certification. There is no regulation and no way of being certified to certify! This needs to be done, or there will be a proliferation and no one will be able to understand which is good and which is bad. Governments have a role here; for example, Singapore’s work on certifying people to use their Model AI Governance Framework.
Q8. What kind of incentives are necessary in your opinion for helping companies to follow responsible AI practices?
Kay Firth-Butterfield: I think that many companies need to understand that their customers are worried about the use of AI, and then act accordingly. I believe they should set up ethics advisory boards, or internal teams to advise on what they should do, and then follow that advice. In our Responsible Use of Technology work we have considered this in detail.
Q9. Do you think that soft government mechanisms would be sufficient to regulate the use of AI or would it be better to have hard government mechanisms?
Kay Firth-Butterfield: Both.
Q10. Assuming all goes well, what do you think a world with advanced AI would look like?
Kay Firth-Butterfield: I think we have to decide what trade-offs of privacy we want to allow for humans to develop while harnessing AI. I believe that it should be up to each of us, but sadly one person deciding to use surveillance via a doorbell surveils many. I believe that we will work with robots and AI so that we can do our jobs better. Our work on positive futures with AI is designed to help us better answer this question. Report out next month! Meanwhile, here is an agenda.
…………………………………………………………

Kay Firth-Butterfield is a lawyer, professor, and author specializing in the intersection of business, policy, artificial intelligence, international relations, and AI ethics.
Since 2017, she has been the Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum and is one of the foremost experts in the world on the governance of AI. She is a barrister, former judge and professor, technologist and entrepreneur and vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles, is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI and AI4All.
She regularly speaks to international audiences addressing many aspects of the beneficial and challenging technical, economic and social changes arising from the use of AI.
Resources
- Empowering AI Leadership: An Oversight Toolkit for Boards of Directors. World Economic Forum.
- Ethics by Design: An organizational approach to responsible use of technology. White Paper December 2020. World Economic Forum.
- A European approach to artificial intelligence, European Commission.
- The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Related Posts
On Digital Transformation and Ethics. Interview with Eberhard Schnebel. ODBMS Industry Watch. November 23, 2020
On the new Tortoise Global AI Index. Interview with Alexandra Mousavizadeh. ODBMS Industry Watch, April 7, 2021
Follow us on Twitter: @odbmsorg
##
I have interviewed Ryan Betts, VP of Engineering at InfluxData. We talked about time series databases, InfluxDB and the InfluxData stack. RVZ
“Time series databases have key architectural design properties that make them very different from other databases. These include time-stamped data storage and compression, data lifecycle management, data summarization, ability to handle large time-series-dependent scans of many records, and time-series-aware queries.“–Ryan Betts
Q1. What is time series data?
Ryan Betts: Time series data consists of measurements or events that are captured and analyzed, often in real time, to operate a service within an SLO, detect anomalies, or visualize changes and trends. Common time series applications include server metrics, application performance monitoring, network monitoring, and sensor data analytics and control loops. Metrics, events, traces and logs are examples of time series data.
Q2. What are the hard database requirements for time series applications?
Ryan Betts: Managing time series data requires high-performance ingest (time series data is often high-velocity, high-volume), real-time analytics for alerting and alarming, and the ability to perform historical analytics against the data that’s been collected. Additionally, many time series applications apply a lifecycle policy to the data collected — perhaps downsampling or aggregating raw data for historical use.
With time series, it’s common to perform analytics queries over a substantial amount of data. Time series queries commonly include columnar scans, grouped and windowed aggregates, and lag calculations. This kind of workload is difficult to optimize in a distributed key value store. InfluxDB uses columnar database techniques to optimize for exactly these use cases, giving sub-second query times over swathes of data and supporting a rich analytics vocabulary.
While time series data is typically structured, it often has dynamic properties that aren’t well-suited to strict schema enforcement. Time series databases often specify the structure of data but allow schema-on-write. Another way of saying this is that time series databases often support arbitrary dimension data to decorate the contents of the fact table. This allows developers to create new instrumentation or collect metrics from new sources without performing frequent schema migrations. Document databases and column-family stores similarly allow flexible schema in their own contexts. The motivation with time series is similar — optimizing for developer productivity.
In addition to high-performance ingest, non-trivial analytics queries, and flexible schema, TSDBs also need to bridge real-time analytics to real-time action. There’s little point doing real-time monitoring if you can’t also automate real-time responses. So time series databases, like other real-time analytics systems, need to provide the analytics function and the ability to tie into real-time operations. That means integrating automated alerting, alarming, and API invocations with the query analytics performed for monitoring.
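The bridge from analytics to action described above comes down to evaluating a windowed aggregate against a rule and invoking a callback when it crosses a threshold. This standalone Python sketch shows the pattern only, not InfluxDB’s actual alerting API; the metric name and threshold are invented:

```python
# Pattern sketch: evaluate a windowed aggregate over the most recent
# points and fire a callback when it crosses a threshold.
# Not InfluxDB's API; metric, window size, and threshold are hypothetical.

def check_window(points, window, threshold, alert):
    """Alert if the mean of the last `window` points exceeds `threshold`."""
    recent = points[-window:]
    mean = sum(recent) / len(recent)
    if mean > threshold:
        alert(mean)  # real systems would page, POST a webhook, etc.
    return mean

fired = []
cpu_busy = [23, 25, 90, 95, 97]  # the last three readings spike
mean = check_window(cpu_busy, window=3, threshold=80, alert=fired.append)
# mean of [90, 95, 97] is 94.0, so the alert fires once
```

In a real deployment this evaluation runs continuously inside the database or a stream-processing layer, and the callback is a webhook, pager, or API call rather than a list append.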
Q3. How do you manage the massive volumes and countless sources of time-stamped data produced by sensors, applications and infrastructures?
Ryan Betts: The InfluxData stack is optimized for both regular (metrics often gathered from software or hardware sensors) and irregular time series data (events driven either by users or external events), which is a significant differentiator from other solutions like Graphite, RRD, OpenTSDB, or Prometheus. Many services and time series databases support only the regular time series metrics use case.
InfluxDB lets users collect from multiple and diverse sources, store, query, process and visualize raw high-precision data in addition to the aggregated and downsampled data. This makes InfluxDB a viable choice for applications in science and sensors that require storing raw data.
At the storage level, InfluxDB organizes data into a columnar format and applies various compression algorithms, typically reducing storage to a fraction of the raw uncompressed size. Time series applications are “append-mostly”. The majority of arriving data is appended. Late arriving data and deletes occur with some frequency — but primarily writes result in appending to the fact table. The database uses a log structured merge tree architecture to meet these requirements. Deletes are recorded first as tombstones and are later removed through LSM compaction.
Q4. Can you give us some time series examples?
Ryan Betts: Time series data, also referred to as time-stamped data, is a sequence of data points indexed in time order. Time-stamped data is data collected at different points in time.
These data points typically consist of successive measurements made from the same source over a time interval and are used to track change over time.
Weather records, step trackers, and heart rate monitors all produce time series data. If you look at the stock exchange, a time series tracks the movement of data points, such as a security’s price, over a specified period of time, with data points recorded at regular intervals.
InfluxDB has a line protocol for sending time series data which takes the following form:
<measurement name>,<tag set> <field set> <timestamp>
The measurement name is a string, the tag set is a collection of key/value pairs where all values are strings, and the field set is a collection of key/value pairs where the values can be int64, float64, bool, or string. The measurement name and tag sets are kept in an inverted index which makes lookups for specific series very fast.
For example, if we have CPU metrics:
cpu,host=serverA,region=uswest idle=23,user=42,system=12 1549063516
Timestamps in InfluxDB can be by second, millisecond, microsecond, or nanosecond precision. The micro and nanosecond scales make InfluxDB a good choice for use cases in finance and scientific computing where other solutions would be excluded. Compression is variable depending on the level of precision the user needs.
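Since the line protocol is just a string format, it is easy to generate from any language. The helper below is an illustrative sketch rather than an official InfluxDB client (real clients also handle character escaping and field type suffixes), but it reproduces the CPU example above:

```python
# Build an InfluxDB line-protocol string from its parts:
#   <measurement>,<tag set> <field set> <timestamp>
# Illustrative helper only, not an official client: it omits the
# escaping and field-type suffixes a production writer would need.

def to_line(measurement, tags, fields, timestamp):
    tag_set = ",".join(f"{k}={v}" for k, v in tags.items())
    field_set = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_set} {field_set} {timestamp}"

line = to_line(
    "cpu",
    {"host": "serverA", "region": "uswest"},
    {"idle": 23, "user": 42, "system": 12},
    1549063516,
)
# line == "cpu,host=serverA,region=uswest idle=23,user=42,system=12 1549063516"
```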
Q5. The fact that time series data is ordered makes it unique in the data space because it often displays serial dependence. What does it mean in practice?
Ryan Betts: Serial dependence occurs when the value of a datapoint at one time is statistically dependent on another datapoint at another time.
Though there are no events that exist outside of time, there are events where time isn’t relevant. Time series data isn’t simply about things that happen in chronological order — it’s about events whose value increases when you add time as an axis. Time series data sometimes exists at high levels of granularity, as frequently as microseconds or even nanoseconds. With time series data, change over time is everything.
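In practice, serial dependence is what autocorrelation measures: how strongly each point is predicted by the one before it. A short Python sketch (the two series below are made up to show the contrast) computes the lag-1 autocorrelation:

```python
# Serial dependence in practice: lag-1 autocorrelation measures how
# strongly each value is predicted by the previous one.
# Both series below are invented for illustration.

def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i - 1] - mean) for i in range(1, n))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

trend = [1, 2, 3, 4, 5, 6, 7, 8]            # strongly serially dependent
alternating = [1, -1, 1, -1, 1, -1, 1, -1]  # each value flips the last

assert lag1_autocorr(trend) > 0.5       # positive: values persist
assert lag1_autocorr(alternating) < 0   # negative: values reverse
```

A series with no serial dependence (pure noise) would have an autocorrelation near zero, which is exactly why ordering, and hence a time axis, adds no analytical value there.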
Q6. How is time series data understood and used?
Ryan Betts: Time series data is gathered, stored, visualized and analyzed for various purposes across various domains:
- In data mining, pattern recognition and machine learning, time series analysis is used for clustering, classification, query by content, anomaly detection and forecasting.
- In signal processing, control engineering and communication engineering, time series data is used for signal detection and estimation.
- In statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics, time series analysis is used for forecasting.
Time series data can be visualized in different types of charts to facilitate insight extraction, trend analysis, and anomaly detection. Time series data is used in time series analysis (historical or real-time) and time series forecasting to detect and predict patterns — essentially looking at change over time.
Q7. You also handle two other kinds of data, namely cross-section and panel data. What are these? How do you handle them?
Ryan Betts: Cross-sectional data is a collection of observations (behaviors) for multiple entities at a single point in time. For example: Max Temperature, Humidity and Wind (all three behaviors) in New York City, SFO, Boston, Chicago (multiple entities) on 1/1/2015 (a single instance).
Panel data is usually called cross-sectional time series data, as it is a combination of both time series data and cross-sectional data (i.e., collection of observations for multiple subjects at multiple instances).
This collection of data can be combined in a single series, or you can use Flux lang to combine and review this data to gather insights.
Q8. There are several time series databases available in the market. What makes InfluxDB time series database unique?
Ryan Betts: When doing a comparison, the entire InfluxDB Platform should be taken into account. There are multiple types of databases that get brought up for comparison: mostly distributed databases like Cassandra, or more time-series-focused databases like Graphite or RRD. When comparing InfluxDB with Cassandra or HBase, there are some stark differences. Those databases require a significant investment in developer time and code to recreate the functionality provided out of the box by InfluxDB, and developers will also have to create an API to write and query their new service.
Developers using Cassandra or HBase need to write tools for data collection, introduce a real-time processing system and write code for monitoring and alerting. Finally, they’ll need to write a visualization engine to display the time series data to the user. While some of these tasks are handled with other time series databases, there are a few key differences between the other solutions and InfluxDB. First, other time series solutions like Graphite or OpenTSDB are designed with only regular time series data in mind and don’t have the ability to store raw high-precision data and downsample it on the fly.
While with other time series databases, the developer must summarize their data before they put it into the database, InfluxDB lets the developer seamlessly transition from raw time series data into summarizations.
InfluxDB also has key advantages for developers over Amazon Timestream. Among them:
- InfluxData is first and foremost an open source company. It is committed to sharing ideas and information openly, collaborating on solutions and providing full transparency to drive innovation.
- Hybrid cloud and on-premises support. Every business has specific functionalities, and a hybrid cloud system offers the flexibility to choose services that best fit their needs, whether to support GDPR regulatory requirements or teams that are spread across multiple providers.
Q9. What distinguishes the time series workload?
Ryan Betts: Time series databases have key architectural design properties that make them very different from other databases. These include time-stamped data storage and compression, data lifecycle management, data summarization, ability to handle large time-series-dependent scans of many records, and time-series-aware queries.
For example: with a time series database, it is common to request a summary of data over a large time period. This requires going over a range of data points to perform some computation, like the percentile increase this month of a metric over the same period in the last six months, summarized by month. This kind of workload is very difficult to optimize for with a distributed key-value store. TSDBs are optimized for exactly this use case, giving millisecond-level query times over months of data.
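The per-month percentile summary described above can be sketched in plain Python to show what the database is optimizing. A TSDB does this server-side over compressed columnar data; here the grouping and the nearest-rank percentile are spelled out explicitly, and the data points are invented:

```python
# Sketch of the workload described: group points by month, then take a
# percentile of each group. The points are invented; a TSDB performs
# this over compressed columnar storage rather than Python lists.

from collections import defaultdict

def monthly_percentile(points, pct):
    """points: iterable of (month, value); returns {month: pct-th percentile}."""
    groups = defaultdict(list)
    for month, value in points:
        groups[month].append(value)
    out = {}
    for month, values in groups.items():
        values.sort()
        # nearest-rank percentile
        rank = max(0, round(pct / 100 * len(values)) - 1)
        out[month] = values[rank]
    return out

points = [("2021-01", v) for v in (10, 20, 30, 40)] + \
         [("2021-02", v) for v in (15, 25, 35, 45)]
result = monthly_percentile(points, 50)
# result == {"2021-01": 20, "2021-02": 25}
```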
Q10. Let’s talk about integrations. Software services don’t work alone. Suppose an application relies on Amazon Web Services, or monitors Kubernetes with Grafana or deploys applications through Docker, how easy is it to integrate them with InfluxDB?
Ryan Betts: InfluxData provides tools and services that help you integrate your favorite systems across the spectrum of IT offerings, from applications to services, databases to containers. We currently offer 200+ Telegraf plugins to allow these seamless integrations. Developers using the InfluxDB platform build their applications with less effort, less code, and less configuration with the use of a set of powerful APIs and tools. InfluxDB client libraries are language-specific tools that integrate with the InfluxDB API and can be used to write data into InfluxDB as well as query the stored data.
………………………………………………..

Ryan Betts is VP of Engineering at InfluxData. Ryan has been building high performance infrastructure software for over twenty years. Prior to InfluxData, Ryan was the second employee and CTO at VoltDB. Before VoltDB, he spent time building SOA security and core networking products. Ryan holds a B.S. in Mathematics from Worcester Polytechnic Institute and an MBA from Babson College.
Resources
influxdata/influxdb: Scalable datastore for metrics – GitHub
Introduction to Time Series Databases | Getting Started [1 of 7] YouTube
Related Posts
COVID-19 Tracking Using Telegraf and InfluxDB Dashboards
On Big Data Benchmarking. Q&A with Richard Stevens
The 2021 AI Index report (HAI Stanford University)
Follow us on Twitter: @odbmsorg
##
“The most dangerous pitfall is when you solve the wrong problem.” –Joyce Weiner
I have interviewed Joyce Weiner, Principal AI Engineer at Intel Corporation. She recently wrote a book on Why AI/Data Science Projects Fail.
RVZ
Q1. In your book you start by saying that 87% of Artificial Intelligence/Big Data projects don’t make it into production, meaning that most projects are never deployed. Is this still the case?
Joyce Weiner: I can only provide the anecdotal evidence that it is still a topic of conversation at conferences and an area of concern. A quick search doesn’t provide me with any updated statistics. The most recent data point appears to be the Venture Beat reference (VB Staff, 2019). Back in 2019, Gartner predicted that “Through 2022, only 20% of analytic insights will deliver business outcomes.” (White, 2019)
Q2. What are the common pitfalls?
Joyce Weiner: I specifically address the common pitfalls that are in the control of the people working on the project. Of course, there can be other external factors that will impact a project’s success. But just focusing on what you can control and change:
- The scope of the project is too big
- The project scope increased in size as the project progressed (scope creep)
- The model couldn’t be explained
- The model was too complex
- The project solved the wrong problem
Q3. You mention five pitfalls. Which of the five is most frequent, and which is the most dangerous for a project?
Joyce Weiner: Of the five pitfalls, scope creep has been the one I have seen the most in my experience. It’s an easy trap to fall into: you want to build the best solution, and there is a tendency to add features as they come to mind without assessing the amount of value they add, or whether it makes sense to add them right now. The most dangerous pitfall is solving the wrong problem. In that case, not only have you spent time and effort on a solution; once you realize that you solved the wrong problem, you need to go and redo the project to target the correct problem. Clearly, that can be demoralizing for the team working on the project, not to mention the potential business impact from the delay in delivering a solution.
Q4. You suggest five methods to avoid such pitfalls. What are they?
Joyce Weiner: The five methods I discuss in the book to avoid the pitfalls mentioned previously are:
- Ask questions – this addresses the project scope as well as providing information to decide on the amount of explainability required, and most importantly, ensures you are solving the correct problem.
- Get alignment – working with the project stakeholders and end users, starting as early as the project definition and continuing throughout the project, addresses problems with project scope and makes sure you are on track to solve the correct problem
- Keep it simple – this addresses model explainability and model complexity
- Leverage explainability – obviously directly related to model explainability, and addresses the pitfall of solving the wrong problem
- Have the conversation – continually discussing the project, expected deliverables, and sharing mock-ups and prototypes with your end users as you build the project addresses all 5 of the project pitfalls.
Q5. How do you apply and measure the effectiveness of these methods in practice?
Joyce Weiner: Well, the most immediate measurement is if you were able to deploy a solution into production. As a project progresses, you can measure things that will help you stay on track. For example, having a project charter to document and communicate your plans becomes a reference point as you build a project so that you recognize scope creep. A project charter is also useful when having conversations with project stakeholders to document alignment on deliverables.
Q6. Throughout your book you use the term “data science projects” as an all-encompassing term that includes Artificial Intelligence (AI) and Big Data projects. Don’t you think that this is a limitation of your approach? Might Big Data projects have different requirements and challenges than AI projects?
Joyce Weiner: Well, that is true: Big Data projects do have additional challenges, especially around the data pipeline. The five pitfalls still apply, though, and those are the biggest challenges to getting a project into deployment, based on my experience.
Q7. In your book you recommend as part of the project charter to document the expected return on investment for the project. You write that assessing the business value for your project will help get resources and funding. What metrics do you suggest for this?
Joyce Weiner: I propose several metrics in my book, which depend on the type of project you are delivering. For example, a common data science project is performing data analysis. Deliverables for this type of project are root cause determination, problem solving support, and problem identification. Metrics include productivity, which can be measured as time saved; time to decision, which is how long it takes to gather the information needed to make a decision; decision quality; and risk reduction due to improved information or consistency in the information used to make decisions.
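As an illustration of the first of these metrics, a back-of-the-envelope time-saved calculation can be sketched in a few lines of Python. Every figure and name below is a hypothetical assumption, not taken from the book:

```python
# Hypothetical ROI sketch for a data analysis project that saves analyst time.
# All inputs (hours, runs per year, hourly rate, project cost) are illustrative.

def time_saved_roi(hours_saved_per_run, runs_per_year, hourly_rate, project_cost):
    """Return yearly savings and the ratio of savings to project cost."""
    yearly_savings = hours_saved_per_run * runs_per_year * hourly_rate
    return yearly_savings, yearly_savings / project_cost

savings, roi = time_saved_roi(hours_saved_per_run=3, runs_per_year=250,
                              hourly_rate=80, project_cost=40_000)
print(f"Yearly savings: ${savings:,.0f}, ROI: {roi:.1f}x")
# → Yearly savings: $60,000, ROI: 1.5x
```

The same shape works for the other metrics: pick a baseline, measure the delta the project delivers, and convert it into time or money.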
Q8. You also write that in acquiring data, there are two cases. One, when the data are available already either in internal systems or from external sources, and two, when you don’t have the data. How do you ensure the quality (and for example the absence of Bias) of the existing data?
Joyce Weiner: The easiest way to ensure you have high-quality data is to automate data collection as much as possible. If you rely on people to provide information, make it easy for them to enter the data. I have found that if you require a lot of fields for data entry, people tend not to fill things in, or they don’t fill things in completely. If you can collect the data from a source other than a human, say by ingesting a log file from a program, your data quality is much higher. Checking for data quality by examining the data set before beginning any model building is an important step. You can see if there are a lot of empty fields or gaps, or one-word responses in free text fields – things that call the quality of the data into question. You also get a sense of how much data cleaning you’ll need to do.
Bias is something that you need to be aware of. For example, if your data set is made solely of failing samples, you have no information on what makes something good or bad. You can only examine the bad. Building a model from those data that “predicts” good samples would be wrong. I’ve found that thinking through the purpose of the data, and doing so as early as possible in the process, is key. Although it’s tempting to say, “given these data, what can I do?” it’s better to start from a problem statement and then ensure you are collecting the proper data related to the problem to avoid having a biased data set.
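The two checks described above, looking for empty fields and for a label mix made solely of failing samples, can be sketched in plain Python. The field names and records here are made up for illustration:

```python
# Minimal pre-modeling data check: count missing values per field and tally
# the label balance. Field names ("sensor", "comment", "result") are invented.

def data_quality_report(records, label_field="result"):
    """Return (missing counts per field, label counts) for a list of dicts."""
    missing = {}
    labels = {}
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                missing[field] = missing.get(field, 0) + 1
        label = rec.get(label_field)
        labels[label] = labels.get(label, 0) + 1
    return missing, labels

records = [
    {"sensor": 1.2, "comment": "", "result": "fail"},
    {"sensor": None, "comment": "ok", "result": "fail"},
    {"sensor": 0.9, "comment": "noisy", "result": "fail"},
]
missing, labels = data_quality_report(records)
print(missing)  # fields with empty values -- flags data entry gaps
print(labels)   # every sample is "fail": no information about what "good" looks like
```

A report like this makes both problems visible before any model building starts.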
Q9. What do you do if you do not have any data?
Joyce Weiner: Well, it makes it very difficult to do a data science project without any data. The first thing to do is to identify what data you would want if you could have them. Then, develop a plan for collecting those data. That might be building a survey or that might mean adding sensors or other instruments to collect data.
Q10. How do you know when an AI/Big Data Project is ready for deployment?
Joyce Weiner: In my experience a project is ready for deployment when you have aligned with the end user and have completed all the items needed to deliver the solution they want. This includes things like a maintenance plan, metrics to monitor the solution, and documentation of the solution.
Q11. Can you predict if a project will fail after deployment?
Joyce Weiner: If a project doesn’t start well, meaning if you aren’t thinking about deployment as you build the solution, it doesn’t bode well for the project overall. Without a deployment plan, and without planning for things like maintainability as you build the project, it is likely the project will fail after deployment. And by failure I include a dashboard that doesn’t get used, or a model that stops working and can’t be fixed by the current team.
Q12. What measures do you suggest to monitor a BigData/AI project after it is deployed?
Joyce Weiner: The simplest measure is usage. If the solution is a report, are users accessing it? If it’s a model, then also track predicted values versus actual measurements. In the book, I share a tool called a SIPOC, or supplier-input-process-output-customer, which helps identify the metrics the customer cares about for a project. Some examples are timeliness, quality, and support level agreements.
Q13. In your book you did not address the societal and ethical implications of using AI. Why?
Joyce Weiner: I didn’t address the societal and ethical implications of AI for two reasons. One, it isn’t my area of expertise. Two, it is such a big topic that it warrants its own book.
……………………………………
Joyce Weiner is a Principal AI Engineer at Intel Corporation. Her area of technical expertise is data science and using data to drive efficiency. Joyce is a black belt in Lean Six Sigma. She has a BS in Physics from Rensselaer Polytechnic Institute, and an MS in Optical Sciences from the University of Arizona. She lives with her husband outside Phoenix, Arizona.
References
VB Staff. (2019, July 19). Why do 87% of data science projects never make it into production? Retrieved from VentureBeat: https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/
White, A. (2019, Jan 3). Our Top Data and Analytics Predicts for 2019. Retrieved from Gartner: https://blogs.gartner.com/andrew_white/2019/01/03/our-top-data-and-analytics-predicts-for-2019/
“We like to say that we have the biggest database on companies and sole proprietors in Spain. We handle 7 million national economic agents, and the database undergoes more than 150,000 daily information updates. We have been active since 1992, so our historic file is massive. The database as a whole exceeds 40 Terabytes.” –Carlos Fernández
I have interviewed Carlos Fernández, Deputy General Manager at INFORMA Dun & Bradstreet. We talked about their use of the LeanXcale database.
RVZ
Q1. Could you describe in a few words what Informa Dun & Bradstreet is and what its figures are?
Carlos Fernández: Informa D&B is the leading business information services company for customer and supplier acquisitions, analyses and management. We maintain this leadership in the three markets in which we compete: Spain, Portugal and Colombia.
We like to say that we have the biggest database on companies and sole proprietors in Spain. We handle 7 million national economic agents, and the database undergoes more than 150,000 daily information updates. We have been active since 1992, so our historic file is massive. The database as a whole exceeds 40 Terabytes.
To maintain and update this massive database, we invest 12 million euros every year in data and data handling procedures and systems, and we have 130 data specialists that take care of every single piece of information that we load into the database. Data quality, accuracy and timeliness as well as the coherence between different sources are essential for us.
Q2. I understand that Informa D&B has begun a profound update of its data architecture in order to continue being a market leader for another 10 years. What does the update consist of?
Carlos Fernández: We really began updating when gigabytes were insufficient for our needs. Now we see that terabytes will follow the same path. Petabytes are the future, and we need to be prepared for it. We usually say that when you need to travel to another continent, you need an airplane, not a car.
What does this mean in practical terms? Our customers are used to online responses to their needs. However, these needs have become more complex and require greater data depth.
If you are able to store hundreds of terabytes, use them very quickly and use complex analytic models to easily find the answer to your question, then you are in good shape.
To fulfill these requirements, a Data Lake orientation is really a must, and solutions like LeanXcale will become key factors in our new architectural approach.
Q3. You mentioned that you have found a new database manager, LeanXcale, to address the challenges for your data platform. What kind of database manager were you using before and why are you replacing it?
Carlos Fernández: INFORMA was, and still is, an “Oracle” company. Having said that, the more we began to move into a Data Lake design, the more new solutions and new names came into play. Mongo, Cassandra, Spark …
So, having come from an SQL-oriented environment featuring many lines of code, we wondered if we could fulfill our new requirements with the old technology. The answer to that query is a clear NO. Can we rewrite INFORMA as a whole? The answer is again NO. Can we meet our new requirements by increasing our computing capacity? Once more, the answer is NO.
We needed to be smart and find a solution that could bring positive outcomes in an affordable technical environment.
Q4. According to you, one of the main improvements has been the acceleration of the process through leveraging the interfaces of LeanXcale with NoSQL and SQL. Can you elaborate on how it helped you?
Carlos Fernández: As I mentioned before, we have quite challenging business and product performance requirements. On the other hand, business rules are also complex and difficult to rewrite for different environments.
Can we solve our issues without a huge investment in expensive servers? Can we also accommodate these requirements in a scalable fashion?
LeanXcale and its NoSQL and SQL interfaces were the perfect match for our needs.
Q5. What are the technical and business benefits of having a linear scaling database such as LeanXcale?
Carlos Fernández: We have many customers. They range from the biggest Spanish companies to small businesses and sole proprietors. They have completely different needs, but, at the same time, they share many requirements, with the main one being immediate response time.
Of course, the amount of data and model complexity involved in generating a response can vary quite a lot, depending on the size of the company and its portfolio.
Only by being able to accommodate such demands with a scalable solution can we provide the required services under a valid cost structure.
Q6. How was your experience with LeanXcale as a provider?
Carlos Fernández: For us, this has been quite an experience. From the very beginning, the LeanXcale team acted as though they worked for INFORMA.
We started with a POC, and it was not an easy one. We had the feeling that the best parts of the company were involved in the project. Well, not just a feeling, since that really was the case.
The key factor, however, was the team’s knowledge, that is, the depth of their technical approach, the extent to which they understood our needs and their ability to reshape many aspects to make our requirements a reality.
Q7. You said that LeanXcale has a high impact on reducing total cost of ownership. Could you provide us with figures comparing it to the previous scenario?
Carlos Fernández: LeanXcale has reduced our processing time by a factor of more than 72. The standard LeanXcale licensing and support price means savings of around 85%. In our case, we have maximized these savings by signing an unlimited license agreement for the next five years.
Additionally, this improved performance reduces the infrastructure used in our hybrid cloud by the same factor of 72.
However, these savings are less crucial than the operational risk reduction and the enablement of new services. Being ready to react to any unexpected event quickly makes our business more reliable. New services will allow us to maintain our market leadership for the next decade.
Q8. How will this new technology affect the services offered to the customer?
Carlos Fernández: I think that we can consider two time frames in the answer.
Right now, we are able to improve the features of our current product range. We can deliver updated external databases faster and more frequently and offer a better customer experience in many areas. We can provide more data and more complex solutions to a wider range of customers.
For the future, we are discovering new ways to design new products and services. When you break down barriers, new ideas come up quite easily. Our marketing team is really excited about the new capabilities we will have. I am sure that we will shortly see many new things coming from us.
QX. Anything else you wish to add?
Carlos Fernández: INFORMA D&B is a company that has put innovation at the top of its strategy. We never stop and will find new opportunities through using LeanXcale. We are very pleased and very sure that we will be a market leader for many years to come!
——————————————
Carlos Fernández holds a Superior Degree in Physics and an MBA from the “Instituto de Empresa” in Madrid. His professional career has included stints at companies such as Saint Gobain, Indra, Reuters and Fedea.
At the present time, he is Deputy General Manager at INFORMA and a member of the board of the XBRL Spanish Jurisdiction. In addition, he is a member of the Alcobendas City Council’s Open Data Advisory Board. This entity is firmly committed to continue advancing and publishing information in a reusable format to generate social and economic value.
Furthermore, he is a former member of various boards, including the boards of ASNEF Logalty, ASFAC Logalty and CTI.
He is a former member of GREFIS (Financial Information Services Group of Experts) and a current member of XBRL CRAS (Credit Risk Services), for which he is Vice President of the Technical Working Group. He is also a former member of the Information Technologies Advisory Council (CATI) and the AMETIC Association (Multi-Sector Partnership of Electronics, Communications Technology, Telecommunications and Digital Content Companies).
Resources
YouTube: LeanXcale’s success story on Informa D&B by Carlos Fernández Iñigo, CTO at Informa D&B
NewSQL: principles, systems and current trends Tutorial, IEEE Big Data 2019, Los Angeles, USA, 12 December 2019 by Patrick Valduriez and Ricardo Jimenez-Peris.
Related Posts
Follow us on Twitter: @odbmsorg
“Like it or not, debugging is part of programming. There is a lot of research and cool technology about preventing bugs (programming language features or design decisions that make certain bugs impossible) or catching bugs very early (through static or dynamic analysis or better testing), and all this is of course laudable and good stuff. But I’ve often been struck by how little attention is placed on making it easier to fix those bugs when they inevitably do happen.” — Greg Law
Q1: You are a prolific speaker at C++ conferences and podcasts. In your experience, who is still using C++?
Greg Law: C++ is used widely and its use is growing. I see a lot of C++ usage in Data Management, Networking, Electronic Design Automation (EDA), Aerospace, Games, Finance, etc.
It’s probably true that use of some other languages – particularly JavaScript and Python – is growing even faster, but those languages are weak where C++ is strong and vice versa. Go is growing a lot and Rust is getting a lot of attention right now and has some very attractive properties. 10-15 years ago, it felt almost like programming languages were “done” but these days, we’re seeing a lot of innovation both in terms of new or newish languages, and development of older languages. Even plain old C is seeing a bit of a resurgence. We are going to continue living in a multi-language world; I expect C++ to remain an important language for a long while yet.
Q2: In my interview with Bjarne Stroustrup last year, he spoke about the challenge of designing C++ in the face of contradictory demands of making the language simpler, whilst adding new functionality and without breaking people’s code. What are your thoughts on this?
Greg Law: I totally agree. I think all engineering is about two things – minimising mistakes and making tradeoffs (i.e. judgements). Mistakes might be a miscalculation when designing a bridge so that it won’t stand up or an off-by-one error in your program – those are clearly undesirable, we don’t want those. A tradeoff might be between how expensive the bridge is to build and how long it will last, or how long the code takes to write and how fast it runs.
But tradeoffs are relevant when it comes to reducing errors too – what price should we pay to avoid errors in our programs? How much extra time are we prepared to spend writing or testing it to get the bugs out? How far do we go tracking down those flaky 1-in-a-thousand failures in the test-suite? Are we going to sacrifice runtime performance by writing it in a higher-level and less error-prone language? Alternatively, we could choose to make that super-clever optimisation about which it’s hard to be confident it is correct today and even harder to be sure it will remain correct as the code around it changes; but is the runtime performance gain worth it, given the uncertainty that has been introduced? It’s counterintuitive, but actually there is an optimal bugginess for any program – we inevitably trade off cost of implementation and performance against potential bugs.
It’s probably fair to say, however, that most programs have more bugs than is optimal! I think it’s also true that human nature means we tend to under-invest in dealing with bugs early, particularly flaky tests. We always feel “this week is particularly busy, I’ll park that and take a look next week when I’ll have a bit more time”; and of course next week turns out to be just as bad as this week.
Q3: I understand Undo helps software engineering teams with debugging complex C/C++ code bases. What is the situation with debugging C/C++? What are you seeing on the ground?
Greg Law: Like it or not, debugging is part of programming. There is a lot of research and cool technology about preventing bugs (programming language features or design decisions that make certain bugs impossible) or catching bugs very early (through static or dynamic analysis or better testing), and all this is of course laudable and good stuff. But I’ve often been struck by how little attention is placed on making it easier to fix those bugs when they inevitably do happen. The situation is not unlike medicine in that prevention is better than cure, and the earlier the diagnosis the better; but no matter what we do, we will always need cure (unlike medicine we have the balance wrong the other way round – in medicine we spend way too much on cure vs prevention!).
It’s all about tradeoffs again. All else being equal, we’d ensure there are no bugs in the first place; but all else never is equal, and how high a price can we afford on prevention? And actually if you make diagnosis and fixing cheaper, that further reduces how much you need to spend on prevention.
The harsh reality is that close to none of the software out there today is truly understood by anyone. Humans just aren’t very good at writing code, and economic pressure and other factors mean we add and fix tests until our fear of delivering late outweighs our fear of bugs. This is compounded as code ages; people move on from the project, bugs get fixed by adding a quick hack, further increasing the spaghettification. Like frogs in boiling water, we’ve kind of become so used to it that we don’t notice how awful it is any more!
People routinely just disable flaky failing tests because they can’t root-cause them. Over a third of production failures can be traced back directly or indirectly to a test that was failing and was ignored.
Q4: You have designed a time travel debugger for C/C++. What is it for?
Greg Law: Debugging is really answering one question: “what happened?”. I had certain expectations for what my code was going to do and all I know is that reality diverged from those expectations. Traditional debuggers are of limited help here – they don’t tell you what happened, they just tell you what is happening right now. You hit a breakpoint, you can look around and see what state everything is in, and either it looks all good or you can see something wrong. If it’s good, set another breakpoint and continue. If it’s bad… well, now you want to know what happened, how it became bad. The odds of breaking just at the right point and stepping your code through the badness are pretty long. So you run again, and again, if you’re lucky vaguely the same thing happens each time so you can home in on it; if not, well… you’re in trouble.
With a time travel debugger like UDB, it’s totally different – you see some piece of state is bad, you can just go backwards to find out why. Watchpoints (aka data breakpoints) are super powerful here – you can watch the bad piece of data and run backwards and have the debugger take you straight to the line of code that last modified it. We have customers who have been trying to fix something for literally years who with a couple of watch + reverse-continue operations had it nailed in an hour.
Time travel debuggers are really powerful for any bug where a decent amount of time passes between the bug itself and the symptoms (assertion failure, segmentation fault, bad results produced). They are particularly useful when there is any kind of non-determinism in the program – when the bug only occurs one time in a thousand and/or every time you run the program it fails at a different point or in a different way. Most race conditions are examples of this; so are many memory or state corruption bugs. It can also help to diagnose complex memory leaks. Most leak detectors or static analysis help with the trivial issues (say, you returned an error and forgot to add a free) but not the hard ones (for example, when you have a reference counting bug, so the reference never hits zero and the resources don’t get cleaned up).
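The hard reference-counting case described here can be illustrated with a small sketch. It is written in Python for brevity (the original context is C/C++), and the class and function names are invented:

```python
# A reference-counting leak of the kind leak detectors struggle with: one code
# path takes a reference but an early return skips the matching release, so the
# count never reaches zero and the resource is never freed.

class Resource:
    def __init__(self):
        self.refcount = 1
        self.freed = False

    def acquire(self):
        self.refcount += 1

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # stands in for actually freeing the resource

def use_resource(res, fail=False):
    res.acquire()
    if fail:
        return  # bug: the early return skips release()
    res.release()

res = Resource()
use_resource(res, fail=True)
res.release()                   # the owner drops its reference
print(res.refcount, res.freed)  # → 1 False  (the count never hit zero: a leak)
```

With time travel debugging, a watchpoint on `refcount` run backwards would take you straight to the acquire whose release never happened.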
This new white paper provides more insight into what kind of bugs time travel debugging helps with *. It’s not uncommon for software engineers to spend half their time debugging, so it’s a must-read for anyone who wants to increase development team productivity.
By the way, Time Travel Debugging is also sometimes known as Replay Debugging or Reverse Debugging.
Q5: Since you say it lets you see what happened, could it help with code exploration too?
Greg Law: Funny you say that. This is a use case it wasn’t initially designed for, but many engineers are using it to explore unfamiliar codebases they didn’t write. They use it to observe program behaviour by navigating forwards and backwards in the program’s execution history, examining registers to find the address of an object, and so on. They say there’s a huge productivity benefit in being able to go backwards and forwards over the same section of code until you fully understand what it does. Especially when you’re trying to understand a certain piece of code and there are often millions of lines you don’t care about right now, it’s easy to get lost. When that happens you can go straight back to where you were and continue exploring.
Debugging is about answering “what did the code do” (ref. cpp.chat podcast on setting a breakpoint in the past **); but there are other activities that involve asking that same question. As I say, most code out there is not really understood by anyone.
Q6: What are your tips on how to diagnose and debug complex C++ programs?
Greg Law: The hard part about debugging is figuring out the root cause. Usually, once you’ve identified what’s wrong, the fix is quite simple. We once had a bug that sunk literally months of engineering time to root cause, and the fix was a single character – that’s extreme but the effect it’s illustrating is very common.
Identifying the problem is an exercise in figuring out what the code really did as opposed to what you expected. Somewhere reality has diverged from your expectations – and that point of divergence is your bug. If you’re lucky, the effects manifest soon after the bug – maybe a NULL pointer is dereferenced and you needed a check for NULL right before it. But more often that pointer should never have been NULL in the first place; the problem is earlier.
The answer to this is multi-pronged:
1. Liberal use of assertions to find problems as close to their root cause as possible. I reckon that 50% of assert fails are just bogus assertions, which is annoying but cheap to fix because the problem is at the very line of code that you notice. The other 50% will save you a lot of time.
2. If you see something not right, do not sweep it under the carpet. This is sometimes referred to as ‘smelling smoke’. Maybe it’s nothing, but you better go and look and see if there’s a fire. When you’re smelling smoke, you’re getting close to the root cause. If you ignore it, chances are that whatever the underlying cause of the weirdness is, it will come back and bite you in a way that gives you much less of a clue as to what’s wrong, and it’ll take you a lot longer to fix it. Likewise don’t paper over the cracks – if you don’t understand how that pointer can be NULL, don’t just put a check for NULL at the point the segv happened.
This most often manifests itself in people ignoring flaky test failures. 82% of software companies report having failing tests that were not investigated that went on to cause production failures *** (the other 18% are probably lying!). Working in this way requires discipline – following that smell of smoke or fixing that flaky test that you know isn’t your fault will be a distraction from your proximate goal. But when something is not right, or not understood, ignoring it now is going to cost you a lot of time in the long run.
3. Provide a way to know what your code is really doing. The trendy term is observability. This can be good old printf or some more fancy logging. An emerging technique is Software Failure Replay, which is related to Time-Travel Debugging. Here you record the program execution (a failed process), such that a debugger can be pointed at the execution history and you can go back to any line of code that executed and see full program state. This is like the ultimate observability. Discovering where reality diverged from your expectations becomes trivial.
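Prongs 1 and 3 above, liberal assertions plus plain logging, can be sketched briefly. The example is in Python for brevity and all the names are illustrative:

```python
# Minimal observability sketch: stdlib logging plus an assertion that fires as
# close to the root cause as possible, rather than at the distant symptom.
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def process(item):
    log.debug("processing item=%r", item)
    assert item is not None, "item should never be None here"  # fail near the cause
    result = item * 2
    log.debug("result=%r", result)
    return result

process(21)
```

If `item` ever arrives as `None`, the assertion points at the first line where expectations and reality diverge, instead of letting a bad value propagate until something else falls over.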
————————————-
Dr Greg Law is the founder of Undo, the leading Software Failure Replay platform provider. Greg had 20 years’ experience in the software industry prior to founding Undo, holding development and management roles at companies including Solarflare and the pioneering British computer firm Acorn. Greg holds a PhD from City University, London, and is a regular speaker at CppCon, ACCU, QCon, and DBTest.
Resources
* White Paper: Increase Development Productivity with Time Travel Debugging
** cpp.chat podcast – Setting a Breakpoint in the Past
*** Freeform Dynamics Analyst Report – Optimizing the software supplier and customer relationship