ODBMS Industry Watch – Trends and Information on Big Data, New Data Management Technologies, Data Science and Innovation.

On European Data Protection. Interview with Giovanni Buttarelli
http://www.odbms.org/blog/2019/03/on-european-data-protection-interview-with-giovanni-buttarelli/
Wed, 06 Mar 2019

“AI is now the most fashionable pretext for collecting data. The long honeymoon with big tech is over. But they need to be part of the solution, no longer part of the problem.” –Giovanni Buttarelli

I have interviewed Giovanni Buttarelli, the European Data Protection Supervisor. We talked about the mission of the European Data Protection Supervisor (EDPS), digital ethics, AI, China, the USA, tech companies, democracy and much more.

RVZ

Q1. What is the mission of the European Data Protection Supervisor (EDPS)?

Giovanni Buttarelli: The EDPS is an independent supervisory authority and EU institution. We monitor the processing of personal data by the EU institutions and bodies, advise on policies and legislation that affect privacy and cooperate with similar authorities to ensure consistent data protection.
As of 25 May 2018, we are a full member of the European Data Protection Board and in addition provide its secretariat. Since we launched our five-year strategy in 2015, our aim has been for the EDPS to be a hub for discussion on the next generation of protections for digital rights.
But I want to focus on two very strong initiatives of this mandate.

In our 2016 opinion on a “coherent enforcement of fundamental rights in the age of big data”, we established the Digital Clearinghouse, a voluntary network of regulatory bodies to share information, voluntarily and within the bounds of their respective competences, about possible abuses in the digital ecosystem and the most effective way of tackling them.
The network keeps growing and is finding legitimacy among regulators.

In 2015, we launched a debate on digital ethics. The EDPS set up the Ethics Advisory Group with the aim of advancing the debate about ‘digital life’ beyond the requirements of the current law and helping to find a new approach. After three years of discussion and the report of the EAG, in 2018 the 40th International Conference of Data Protection and Privacy Commissioners, hosted by the EDPS, provided the ground for an open and global discussion on the need for an ethical approach towards new technologies.
Our motto is ‘leading by example’, and the EDPS has become the centre of gravity in data protection and privacy matters, as well as a hub for discussion in the data-driven society.

Q2. How many of the EDPB representatives have a technical background?

Giovanni Buttarelli: We have an IT Policy team composed of half a dozen computer scientists and technical experts who monitor and advise on emerging technologies. We are also in the process of recruiting someone with a data scientist background.

Q3. You recently organised the 2018 International Conference of Data Protection and Privacy Commissioners in Brussels. What are the main messages that came out from such a conference?

Giovanni Buttarelli: The conference’s success built on the fact that we identified a need to take the data protection debate to a new level and discuss the subject matter in its broader context. Data protection can no longer be isolated from developments in AI, machine learning, big data, the internet of things, biometrics and so on.
Everyone who is a first-hand witness of this, from tech developers to data protection authorities, has a responsibility to acknowledge that this unprecedented digital shift is a historical moment and that new approaches are needed in light of the challenges it brings about. Whenever technological innovation has come with risks and dangers, ethics has been key to addressing and preventing them; ethics can also help us now to find a path into a digital future that re-affirms and protects our long-standing culture of values and rights.
The conference also showed that a collective effort is necessary to move towards internationally recognised and respected standards. What is ethical for whom and how can we agree upon common standards? From tech developers and service providers, to regulators and supervisory authorities, ethicists and anthropologists, civil society organisations and human rights defenders, and representatives of these from all regions of our planet, everyone must engage in this debate.

Q4. And what are the main challenges ahead?

Giovanni Buttarelli: The overall challenge we are facing is to ensure we gain the maximum benefit from new technologies without undermining fundamental rights and long-standing ethical principles. This requires a collective intelligence exercise on a scale that has perhaps never happened before: it requires farsighted, long-term thinking, precaution and risk-awareness.
This will take us into abstract ethical deliberations about the meaning of human autonomy and self-determination in the digital age, it will require us to realistically think through various scenarios of how emerging tech could affect our lives in the future, and it will require us to ensure a timely prevention of their potential detrimental effects – without hindering innovation.

In the language of examples: we do not want an overly restrictive stem cell research regulation because of unfounded fears – this has been a criticism of Canada’s approach. On the other hand, we do not want to introduce smart cars which constantly send location information to the government – as is the case in China.

Striking the right balance is the challenge. And getting everyone on board, most importantly, the leading tech developers and providers, and regions with different ethical standards.

Certain technologies – facial recognition, autonomous weapons, smart glasses – imply such profound and unpredictable consequences for society that ethics may demand a general prohibition unless there is a clear benefit for society and clear controls on their use, with accountability where something goes wrong.

Q5. Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders. Why do you think it is necessary to define a global digital ethics?

Giovanni Buttarelli: The questions raised by AI are myriad – many legal, but more importantly ethical. Indeed, at the World Economic Forum in Davos last January, where AI was the big subject, privacy was flagged as the biggest concern surrounding the development of AI systems.
The last few years have demonstrated that digital markets cannot be left entirely to their own devices. Doubts surfaced with a string of high-profile data breaches, like the Ashley Madison incident in 2015, controversial not so much for the volume of data as for the sensitivity of the type of data. It culminated with the Facebook / Cambridge Analytica case.
Big tech has a major responsibility here. But the problem is systemic. The dominant business model dictates that to be successful you need to track everyone, profile and target. So if you want to develop autonomous vehicles it is fine, but you have to take responsibility for the data collection and surveillance which seems to be needed to train these systems.
AI is now the most fashionable pretext for collecting data. It requires personal data on a huge scale until it becomes intelligent enough to teach itself. In this context, a number of ethical questions have to be raised: for instance, how do we control bias in AI systems? How do we keep control of AI developments and algorithms? Who decides what is right or wrong?
Recent events have shown how our democracies can be affected by the unethical use of personal data, and if our democracies are at stake we are undermining the foundation of our society.

Q6. Is it really feasible? How do you plan to engage nations outside Europe, such as USA, China, India, to name a few?

Giovanni Buttarelli: Is this feasible? I don’t think there is a simple ‘yes’ or ‘no’ answer in this case. First we need to build a global consensus on what is and is not acceptable. We don’t have that consensus now. Look at how the UN is not able even to start to discuss whether to ban killer drones.
We will continue discussions this year with our programme of teleconferences and podcasts on digital ethics. Like with the conference, we will involve experts from all regions of the world including China and India.
After the International Conference, I remain optimistic. I believe that in the years to come Europe will not be alone and other countries will be more and more involved in such a key debate.

Q7. How does such digital ethics interact with the law? How has it materialised in fields like the life sciences and what role does it play in resolving public policy dilemmas?

Giovanni Buttarelli: GDPR represents a landmark in data protection law not only at EU level, but globally. With the new regulation, the European Union set the highest standards and now many countries are trying to emulate it. What makes GDPR future-proof for at least 15 years are the ethical principles that it incorporates.
The accountability principle, privacy by default and privacy by design, for instance, are first steps towards the adoption of a more ethical approach. However, an effective implementation of the legal principle of data protection by design and by default is a necessary yet not sufficient milestone towards responsible technology and data governance at the service of humans, and should be framed within the wider concept of “ethics by design”.
We need to clarify that ethics is not a substitute for robust, clear, simple and well enforced legislation. Companies and governments need to reflect on the impact of their use of technology on individuals, groups, society and the environment, as well as respecting the spirit and the letter of the law.
It is interesting how the Social Credit System in China – a complex programme involving multiple government agencies at all levels – seems to elevate the “ethical” notion of trustworthiness in a harmonious society above the established traditions of rule of law and human rights in the People’s Republic. That seems dangerous, and an instructive example of the ethics and law debate we need to have.

Q8. What if a decision made using an AI-driven algorithm harmed somebody, and you cannot explain how the decision was made?

Giovanni Buttarelli: This is exactly what I addressed at the International Conference of Data Protection and Privacy Commissioners. The first session was dedicated on purpose to identifying what we mean by ethics and recognising relevant experiences before moving on to speak of digital ethics.
AI and automated decision making have raised a number of ethical concerns: technology should serve humankind and not the other way round. The EDPS wanted to have a global discussion on data-driven technologies because they constantly affect people’s lives.
Self-driving cars, for instance, are increasingly spreading worldwide. They are controlled by algorithms which analyse the surroundings and take actions or decisions. Nevertheless, in the case of an unavoidable accident that may involve a child crossing the street or a third person, who decides what is the best decision to make? If the car has to decide between potentially killing the driver and potentially killing the pedestrian, what is the right choice? Is it right that a machine can determine human life? The same applies to self-guided drones and many other technologies ready to be launched soon.
These technologies are growing worldwide, therefore I believe that a global consensus on what is feasible or acceptable is possible.

Q9. Pedro Domingos (Professor at the University of Washington) said in an interview: “So maybe AI will force us to confront what we really mean by ethics before we can decide how we want AIs to be ethical.” What do you think of this?

Giovanni Buttarelli: AI – in the sense of autonomous machines – has been with us for several decades. It has not developed in a vacuum – algorithms have been trained by people with their own conscious and unconscious biases. Increasingly, the market for AI, like most digital services, is becoming concentrated in the hands of the tech giants. So you cannot separate the technology from the power structure and the prejudices and inequalities which already exist in society – and which seem to be getting bigger.
AI is potentially very powerful, but it is more urgent to debate what the AI is supposed to achieve, who will benefit, who will suffer, and who will take responsibility when something goes wrong. These are ethical questions.
Plus we need to avoid hyperbolic statements about AI, and instead force AI investment to take place within the existing legal framework. In data protection terms, that means clear lines of responsibility, purpose limitation, data minimisation, respect for the rights of data subjects and privacy by design.

Q10. Are computer system designers (i.e. software developers, software engineers, data scientists, data engineers, etc.) the ones who will decide what the impact of these technologies will be and whether to replace or augment humans in society?

Giovanni Buttarelli: Not alone. At the moment there is too much power in the hands of a few mega tech companies and governments. We need to decentralise the internet and give more power to people over their digital lives.
Engineers have a valid voice but they need to be part of a conversation with lawyers, ethicists, experts from the humanities. IPEN, our initiative, seeks to do this.

Q11. How is it possible to define incentives for using an ethical approach to software development, especially in the area of AI?

Giovanni Buttarelli: Societal awareness is indeed increasing and many people are getting more privacy-conscious.
A recent US study says that since Cambridge Analytica, over half of (adult) Facebook users have adjusted their privacy settings, around 40% have taken a break from checking the platform for a period of several weeks or more, and around a quarter say they have deleted the Facebook app from their phone. Facebook’s share price seems to have taken a tumble too.
The long honeymoon with big tech is over. But they need to be part of the solution, no longer part of the problem.
There are three lessons to be taken away by any company.
First, your clients will lose their trust in you and leave you if you do not respect their rights and dignity.
Second, you face sanctions and reputational damage.
And third, your business model is not successful in the long term. So I repeat: real innovation is responsible innovation. Trust in big tech companies is decreasing, and at the same time we can measure an increasing number of people using data protection or privacy-oriented services.
—————————


Giovanni Buttarelli, European Data Protection Supervisor.

Mr. Giovanni Buttarelli (1957) has been European Data Protection Supervisor since December 2014. He was appointed by a joint decision of the European Parliament and the Council on 4 December 2014 for a term of five years. He previously served as Assistant EDPS from January 2009 until December 2014. Before joining the EDPS, he worked as Secretary General to the Italian Data Protection Authority, a position he occupied between 1997 and 2009. A member of the Italian judiciary with the rank of Cassation judge, he has participated in many initiatives and committees on data protection and related issues at international level.

Resources

2018 EDPS Annual Report

Related Posts

– On Artificial Intelligence, Machine Learning, and Deep Learning. Interview with Pedro Domingos. ODBMS Industry Watch, June 18, 2018

– Will Democracy Survive Big Data and Artificial Intelligence? Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., van den Hoven, J., Zicari, R. V., & Zwitter, A. (2017). Scientific American (February 25, 2017).

Follow us on Twitter: @odbmsorg

##

On Kubernetes. Interview with Eric Tune
http://www.odbms.org/blog/2019/02/on-kubernetes-interview-with-eric-tune/
Mon, 18 Feb 2019

“Perhaps less obvious is how role definitions in an organization change as scale increases. Once rare tasks that were just a small part of one team’s responsibilities become so common that they are a full-time job for someone. At that point, one either needs to create automation for the task, or a new team needs to be assembled (or hired) to perform that task full time. ” — Eric Tune

I have interviewed Eric Tune, Senior Staff Engineer at Google. We talked about Kubernetes. Eric has been a Kubernetes contributor since 1.0.

RVZ

Q1. What are the main technical challenges in implementing massive-scale environments?

Eric Tune: Whether working at small or massive scale, the high-level technical goals don’t change: security, developer velocity, efficiency in use of compute resources, supportability of production environments, and so on.

As scale increases, there are some fairly obvious discontinuities, like moving from an application that fits on a single-machine to one that spans multiple machines, and from a single data center or zone to multiple regions. Quite a bit has been written about this. Microservices in particular can be a good fit because they scale well to more machines and more regions.

Perhaps less obvious is how role definitions in an organization change as scale increases. Once rare tasks that were just a small part of one team’s responsibilities become so common that they are a full-time job for someone. At that point, one either needs to create automation for the task, or a new team needs to be assembled (or hired) to perform that task full time. Sometimes, it is obvious how to do this. But, when this repeats many times, one can end up with a confusing mess of automation and tickets, dragging down development velocity and confounding attempts to analyze security and debug systemic failure.

So, a key challenge is finding the right separation of responsibilities so that multiple pieces of automation and multiple human teams collaborate well. Doing it requires not only having a broad view of an organization’s current processes and responsibilities around development, operations, and security, but also of which assumptions behind those are no longer valid.

Kubernetes can help here by providing automation for exactly the types of tasks that become toilsome as scale increases. Several of its creators have lived through organic growth to massive scale. Kubernetes is built from that experience, with awareness of the new roles that are needed at massive scale.

Q2. What is Kubernetes and why is it important?

Eric Tune: First, Kubernetes is one of the most popular ways to deploy applications in containers. Containers make the act of maintaining the machine & operating system a largely separate process from installing and maintaining an application instance – no more worrying about shared library or system utility version differences.

Second, it provides an abstraction over IaaS: VMs, VM images, VM types, load balancers, block storage, auto-scalers, etc. Kubernetes runs on numerous clouds, on-premises, and on a laptop. Many complex applications, such as those consisting of many microservices, can be deployed onto any Kubernetes cluster regardless of the underlying infrastructure. For an organization that may want to modernize their applications now, and move to cloud later, targeting Kubernetes means they won’t need to re-architect when they are ready to move.

Third, Kubernetes supports infrastructure-as-code (IaC). You can define complex applications, including storage, networking, and application identity, in a common configuration language, called the Kubernetes Resource model. Unlike other IaC systems, which mostly support a “single-user” model, Kubernetes is designed for multiple users. It supports controlled delegation of responsibility from an ops team to a dev team.
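As a small illustration of the infrastructure-as-code idea, here is a minimal sketch using the official Kubernetes Python client; it expresses what a YAML manifest in the Kubernetes Resource model would express. The names, labels, image, and replica count are assumptions made up for this example, and it presumes a reachable cluster with a local kubeconfig.

    # A hedged sketch: define and create a small Deployment as code.
    from kubernetes import client, config

    config.load_kube_config()  # inside a cluster, use config.load_incluster_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    # The same object model is what an ops team can delegate to a dev team per namespace.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)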

Fourth, it provides an opinionated way to build distributed system control planes, and to extend the APIs and infrastructure-as code type system. This allows solution vendors and in-house infrastructure teams to build custom solutions that feel like they are first class parts of Kubernetes.

Q3. Who should be using Kubernetes?

Eric Tune: If your organization runs Linux-based microservices and has explored container technology, then you are ready to try Kubernetes.

Q4. You have been a Kubernetes contributor since 1.0 (4 years). What did you work on specifically?

Eric Tune: During the first year, I worked on whatever needed to be done, including security (namespaces, service accounts, authentication and authorization, resource quota), performance, documentation, testing, API review and code review.

In those first years, people were mostly running stateless microservices on Kubernetes. In the second year, I worked to broaden the set of applications that can run on Kubernetes. I worked on the Job and CronJob APIs of Kubernetes, which support basic batch computation, and the StatefulSet API, which supports databases and other stateful applications. Additionally, I worked with the Helm project on Charts (easy-to-install applications for Kubernetes), and with the Spark open source community to get Spark running on Kubernetes.

Starting in 2017, Kubernetes interest was growing so quickly that the project maintainers could accept only a fraction of the new features that were proposed. The answer was to make Kubernetes extensible so that new features could be built “out of the core.” I worked to define the extensibility story for Kubernetes, particularly for Custom Resource Definitions (CRDs) and Webhooks. The extensibility features of Kubernetes have enabled other large projects, such as Istio and Knative, to integrate with Kubernetes with lower overhead for the Kubernetes project maintainers.

Currently, I lead teams which work on both Open Source Kubernetes and Google Cloud.

Q5. What are the main challenges of migrating several microservices to Kubernetes?

Eric Tune: Here are three challenges I see when migrating several microservices to Kubernetes, and how I recommend handling them:

  • Remove Ordering Dependencies: Say microservice C depends on microservices A and B to function normally. When migrating to declarative configuration and Kubernetes, the startup order for microservices can become variable, where previously it was ordered (e.g. by a script). This can cause unexpected behaviors. For example, microservice C might log errors at a high rate or crash if A is not ready yet. A first reaction is sometimes “How can I guarantee ordering of microservice startup?” My advice is not to impose order, but to change the problematic behavior. For example, C could be changed to return some response for a request even when A and B are unreachable. This is not really a Kubernetes-specific requirement – it is a good practice for microservices, as it allows for graceful recovery from failures and for autoscaling.
  • Don’t Persist Peer Network Identity: Some microservices permanently record the IP addresses of their peers at startup time, and then never expect them to change. That’s not a great match for the Kubernetes approach to networking. Instead, resolve peer addresses using their domain names and re-resolve after disconnection (see the sketch after this list).
  • Plan ahead for Running in Parallel: When migrating a complex set of microservices to Kubernetes, it’s typical to run the entire old environment and the new (Kubernetes) environment in parallel. Make sure you have load replay and response diffing tools to evaluate a dual environment setup.
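To make the first two points concrete, here is a minimal Python sketch that resolves a peer by its DNS name on every connection attempt and retries instead of crashing when the peer is not ready yet. The service name, port, and the connect() callback are assumptions made for illustration, not anything Kubernetes provides.

    import socket
    import time

    def connect_with_reresolve(host, port, connect, retries=5, backoff=1.0):
        """Resolve the peer's DNS name on every attempt, so pod IP churn is tolerated."""
        for attempt in range(retries):
            try:
                infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
                addr = infos[0][4]          # address of the freshly resolved peer
                return connect(addr)        # e.g. open a TCP or gRPC connection
            except OSError:
                # The peer may simply not be up yet (no startup ordering guarantee):
                # degrade gracefully and retry instead of crashing.
                time.sleep(backoff * (attempt + 1))
        raise ConnectionError(f"could not reach {host}:{port} after {retries} attempts")

    # Hypothetical usage with a Kubernetes Service name:
    # connect_with_reresolve("svc-a.default.svc.cluster.local", 8080, my_connect)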

Q6. How can Kubernetes scale without increasing ops team?

Eric Tune: Kubernetes is built to respond to many types of application and infrastructure failures automatically – for example slow memory leaks in an application, or kernel panics in a virtual machine. Previously this kind of problem may have required immediate attention. With Kubernetes as the first line of defense, ops can wait for more data before taking action. This in turn supports faster rollouts, as you don’t need to soak as long if you know that slow memory leaks will be handled automatically, and you can fix by rolling forward rather than back.

Some ops teams also face multiple deployment environments, including multi-cloud, hybrid, or varying hardware in on-premises datacenters. Kubernetes hides some differences between these, reducing the number of configuration variations that are needed.

A pattern I have seen is role specialization within ops teams, which can bring efficiencies. Some members specialize in operating the Kubernetes cluster itself, what I call a “Cluster Operations” role, while others specialize in operating a set of applications (microservices). The clean separation between infrastructure and application – in particular the use of Kubernetes configuration files as a contract between the two groups – supports this separation of duties.

Finally, if you are able to choose a hosted version of Kubernetes such as Google Kubernetes Engine (GKE), then the hosted service takes on much of the Cluster Operations role. (Note: I work on GKE.)

Q7. On-premises, hybrid, or public cloud infrastructure: which solutions would you think is it better for running Kubernetes?

Eric Tune: Usually factors unrelated to Kubernetes will determine if an application needs to run on-premises, such as data sovereignty, latency concerns or an existing hardware investment. Often some applications need to be on-premises and some can move to public cloud. In this case you have a hybrid Kubernetes deployment, with one or more clusters on-premises, and one or more clusters on public cloud. For application operators and developers, the same tools can be used in all the clusters. Applications in different clusters can be configured to communicate with each other, or to be separate, as security needs dictate. Each cluster is a separate failure domain. One does not typically have a single cluster which spans on-premises and public cloud.

Q8. Kubernetes is open source. How can developers contribute?

Eric Tune: We have 150+ associated repositories that are all looking for developers (and other roles) to contribute. If you want to help but aren’t sure what you want to work on, then start with the Community ReadMe, and come to the community meetings or watch a rerun. If you think you already know what area of Kubernetes you are interested in, then start with our contributors guide, and attend the relevant Special Interest Group (SIG) meeting.

—————————–


Dr. Eric Tune is a Senior Staff Engineer at Google. He leads dozens of engineers working on Kubernetes and GKE. He has been a Kubernetes contributor since 1.0. Previously at Google he worked on the Borg container orchestration system, drove company-wide compute efficiency improvements, created the Google-wide Profiling system, and helped expand the size of Google’s search index. Prior to Google, he was active in computer architecture research. He holds computer engineering degrees (PhD, MS, BS) from UCSD.

On gaining Knowledge of Diabetes using Graphs. Interview with Alexander Jarasch
http://www.odbms.org/blog/2019/02/on-gaining-knowledge-of-diabetes-using-graphs-interview-with-alexander-jarasch/
Mon, 04 Feb 2019

“The challenge is that we have to combine lots of different types of data, simultaneously, depending on genetics, epigenetics, different subject matter areas such as lipidomics, metabolomics, the lifestyle and behaviour of the patient and looking at people in different cultures and environments.  The variety of data we need to analyse is a major challenge, which is why from a data perspective we use graph. It is here we can make the links to answer biomedical queries.” –Alexander Jarasch.

I have interviewed Alexander Jarasch, head of data and knowledge management at the German Center for Diabetes Research (DZD). We discussed what are the main challenges in trying to understand more about diabetes, and how diabetes researchers are using graph database technology in order to create knowledge graphs and find hidden connections in medical data.

RVZ

Q1. You are the head of data and knowledge management at the German Center for Diabetes Research (DZD). What are your main tasks?

Alexander Jarasch: There are several responsibilities that my team fulfils within the DZD. These include IT infrastructure, which encompasses databases, data transfer services and data management, with knowledge management as the second part of our remit.

Q2. Diabetes is one of the most widespread diseases worldwide. What are the main challenges in trying to understand more about diabetes?

Alexander Jarasch: Diabetes is a metabolic disease, and a complex area to understand. It is not yet obvious what causes type 2 diabetes, but it is clearly linked to obesity. Here, we try to understand the molecular mechanisms, where diabetes starts and how we can try and prevent it. The challenge is that we have to combine lots of different types of data, simultaneously, depending on genetics, epigenetics, different subject matter areas such as lipidomics, metabolomics, the lifestyle and behaviour of the patient and looking at people in different cultures and environments.

All these dependencies are connected to each other. Metabolism is connected to the environment, genetics, epigenetics and so forth. The big challenge is to see this not just from one perspective, but from as many perspectives at the same time as we can get.

From a data management point of view it is not easy to bring all this patient-related data together with basic research data, and then to combine it with publicly available data, all held in disparate data stores and sources. We need to bring this heterogeneous data together and connect it in a very clear way.

Q3. What is the status of research in treating and preventing the disease?

Alexander Jarasch: Diabetes is not currently curable. We have to distinguish between type 1 and type 2 diabetes. Preventing type 1 is not relevant, as it is genetic and one inherits it. Preventing type 2 is very complicated. Obviously it is suggested that patients lead a healthier life, play more sport and drink less alcohol. But some patients don’t respond to lifestyle interventions.

The research itself is very complex and diverse. You can look at it from the patient side, the basic research side or the animal model side. Preventing diabetes is a complicated field and the research is ongoing. There is no clear outcome for the patient at present.

Q4. How do you gain knowledge of diabetes from the datasets and the databases you already have?

Alexander Jarasch: We have different types of data – patient data from clinical trials, animal models, basic research – basically all the data from the various omics. We analyse this to gain more knowledge by connecting this data and viewing it all simultaneously. We also look to gain knowledge from the large data sets by applying machine learning. On the database side we have introduced graph databases in the form of Neo4j in order to create knowledge graphs.

Q5. If you look at the characteristics of Big Data: Volume, Variety, Velocity, Veracity; which ones are relevant for you?

Alexander Jarasch: I would not highlight any one of these, as they all have the same level of importance. If we don’t have enough data we don’t have statistical significance; if we don’t have a variety of data we can’t distinguish between its different states. If we don’t have high-quality data we can’t keep up with the velocity necessary to answer the questions. The variety of data we need to analyse is a major challenge, which is why from a data perspective we use graph. It is here we can make the links to answer biomedical queries.

Q6. What are the main benefits when you start connecting patients’ data?

Alexander Jarasch: The main benefit of connecting a patient’s data, which could also incidentally be an animal model, is that you can see the data from a number of perspectives. The more parameters you have the more complete the puzzle can be. The benefit here is being able to see the patient from many different sides. One discipline is not sufficient to answer the biomedical questions or help in the prevention of diabetes.

We can also connect between different centers. Diabetes, for example, has co-complications with other diseases. These include cancer, cardiovascular disease and Alzheimer’s. We can now connect and look at these different types of data and better understand how symptoms and causes interconnect.

Q7. In one of your use cases you have studied prediabetes, using graphs to connect data from animal models, genetics, metabolomics and literature to deduce causes of prediabetes in humans. What results have you obtained so far?

Alexander Jarasch: We have connected different types of public data and our own data. One result is the hypothesis of seven metabolites that overlap between human genomic data and that seen in a prediabetes pig model. This is now under further investigation and we will dig deeper. The question is which pathways do these metabolites follow and how are they regulated in the body? It is in itself a very complex question.

Q8. What is your experience so far in using graph technology and specifically Neo4j?

Alexander Jarasch: We are now at a point with graph databases where we can easily connect different types of data – where the drawings and brainstorming sessions with researchers come very close to the data model. This makes it much easier to query the data, and even non-computer scientists can answer questions. When it comes to Neo4j, it is easy to install and implement. The query language, Cypher, is easy to understand, and the visualisation software is again very promising for non-computer scientists. Essentially, it makes it far easier for us to combine different types of data.
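For readers who want to see what such a query looks like in practice, here is a minimal sketch using the official neo4j Python driver. The connection details, the Patient and Metabolite labels, the HAS_MEASUREMENT relationship, and the property names are all hypothetical, invented for illustration rather than taken from the DZD data model.

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Which patients have measurements for a given set of metabolites?
    cypher = """
    MATCH (p:Patient)-[:HAS_MEASUREMENT]->(m:Metabolite)
    WHERE m.name IN $metabolites
    RETURN p.id AS patient, collect(m.name) AS metabolites
    """

    with driver.session() as session:
        for record in session.run(cypher, metabolites=["glycine", "isoleucine"]):
            print(record["patient"], record["metabolites"])

    driver.close()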

Q9. What are the main benefits in using graph technology in your area of work?

Alexander Jarasch: The main benefit of graph technology is its ability to connect heterogeneous data across different locations and species. This is possible with relational databases, but it is very complicated. We do still use relational databases, as they are connected to different devices and recurring processes and are fit for purpose in these roles. It is in combining and connecting heterogeneous data where graph technology has the greatest impact. This is a situation where relational databases are rather limited.

Q10. Do you think that connecting data and applying modern machine learning techniques will help scientists getting closer to understand this complex disease and hopefully help to care for patients in the future?

Alexander Jarasch: Yes, I would definitely agree with this. Connecting different types of data is key to modern data analysis, especially in the life science / health care industry. Of course this makes the process much more complex and far bigger. Applying machine learning techniques can help to cope with this and to gain knowledge from many data sources. This provides us with a better understanding of diseases in general, I would say. We are applying ML techniques on our big data sets. One example would be to cluster patient groups in order to identify different subtypes of diabetes.

The question is how we can distinguish between patient groups and treat people individually when they get diabetes. Some people, for example, don’t react to lifestyle intervention when it comes to diabetes. We have tall, lean people who have diabetes, obese people with diabetes, but also obese people who don’t have diabetes. Obviously, the mechanisms behind that must be quite different from each other, and thus a single therapy or prevention for all people is most likely not going to work. That’s why we connect data sources and try to cluster our patients into subgroups to come up with individual treatments or suggested interventions. Graph technology provides us with a way of connecting the relevant data sources.

———————–


Alexander Jarasch currently works at the German Center for Diabetes Research (DZD) and is responsible for data and knowledge management. Before that, he worked in Pharma Research and Early Development (pRED) at Roche. Alexander does research in computing in mathematics, natural science, engineering and medicine, databases and data mining.

Resources

– Graphs to Fight Diabetes – Dr. Alexander Jarasch, DZD (link to YouTube Video), GraphConnect 2018, New York.

– Graphs to Fight Diabetes – Dr. Alexander Jarasch (link to Slides), GraphConnect 2018, New York.

– Using Graphs to Fight Diabetes: A Podcast with Alexander Jarasch, 2018/10/25.

– Artificial Intelligence Methodologies and Their Application to Diabetes. J Diabetes Sci Technol. 2018 Mar; 12(2): 303–310. Published online 2017 May 25. doi: 10.1177/1932296817710475

– Beyond data integration. Drug Discovery Today, February 2008.

Related Posts

– On using AI and Data Analytics in Pharmaceutical Research. Interview with Bryn Roberts , ODBMS Industry Watch, 2018-09-10

– Beyond the Molecule and Beyond the Device: Machine Learning and the Future of Healthcare, ODBMS.org, EXPERT ARTICLES, 24 AUG, 2017

Follow us on Twitter: @odbmsorg

##

On SQL++ and Couchbase N1QL for Analytics. Interview with Mike Carey
http://www.odbms.org/blog/2019/01/on-sql-and-couchbase-n1ql-for-analytics-interview-with-mike-carey/
Wed, 16 Jan 2019

“N1QL for Analytics is the first commercial implementation of SQL++.” –Mike Carey

I have interviewed Michael Carey, Bren Professor of Information and Computer Sciences and Distinguished Professor of Computer Science at UC Irvine, where he leads the AsterixDB project, as well as a Consulting Architect at Couchbase. We talked about SQL++, the AsterixDB project, and the Couchbase N1QL for Analytics.

RVZ

Q1. You are Couchbase’s Consulting Chief Architect. What are your main tasks in such a role?

Mike Carey: This came about when Couchbase began working on the effort that led to the recently released Couchbase Analytics Service, a service that was born when Ravi Mayuram (Couchbase’s Senior VP of Engineering and CTO) and I realized that Couchbase and the AsterixDB project shared a common vision regarding what future data management systems ought to look like. Rather than making me quit my day job, I was given the opportunity to participate in a consulting role and build a team within Couchbase to make the Analytics Service happen — using AsterixDB as a starting point. I guess now I’m kind of a mini-CTO for database-related issues; I primarily focus on the Analytics Service, but I also pay attention to the Query Service and the Couchbase Data Platform as a whole, especially when it comes to things like its query capabilities. I spend one day a week up at Couchbase HQ, at least most weeks. It’s really fun, and this keeps me connected to what’s happening in the “real world” outside academia.

Q2. What is SQL++ ? And what is special about it?

Mike Carey: SQL++ is a language that came out of work done by Prof. Yannis Papakonstantinou and his group at UC San Diego. Prior to SQL++, in the AsterixDB project, we had invented and implemented a full query language for semi-structured data called AQL (short for Asterix Query Language) based on a data model called ADM (short for Asterix Data Model). ADM was the result of realizing back in 2010 that JSON was coming in a pretty big way — we looked at JSON from a database data modeling perspective and added some things inspired by object databases that were missing. Most notable were the option to specify schemas, at least partially, if desired, and the ability to have multisets as well as arrays as multi-valued fields. AQL was the result of looking at XQuery, since it had been designed by a group of world experts to deal with semi-structured data, and then throwing out its “XML cruft” in order to gain a nice query language for ADM. To make AQL a bit more natural for SQL users, we also allowed some optional keyword substitutions (such as SELECT for RETURN and FROM for FOR). We had a pretty reasonable technical explanation for users as to why AQL was what it was — why it wasn’t just a SQL extension. Users listened and learned AQL, but they always seemed to wistfully sigh and continue to wish that AQL was more directly like SQL (in its syntax and not just its query power).

More or less in parallel, Yannis and friends were building a data integration system called FORWARD to integrate data of varied shapes and sizes from heterogeneous data stores. The FORWARD view of data was based on a semi-structured data model, and SQL++ was the SQL-based language framework that Yannis developed to classify the query capabilities of the stores. It also served as the integration language for FORWARD’s end users. At some point he approached us with a draft of his SQL++ framework paper, getting our attention by saying nice things about AQL relative to the other JSON query languages (:-)), and we took a look. Pretty quickly we realized that SQL++ was very much like AQL, but with a SQL-based syntax that would make those wistful AQL users much happier. Yannis did a very nice job of extending and generalizing SQL, allowing for a few differences where needed, such as where SQL had made “flat-world” or schema-based assumptions that no longer hold for JSON, and exploiting the generality of the nested data model, like adding richer support for grouping and de-mystifying grouped aggregation.

We have since “re-skinned” Apache AsterixDB to use SQL++ as the end-user query language for the system. This was actually relatively easy to do since all of the same algebra and physical operators work for both. We recently deprecated AQL altogether as an end-user language.

Q3. What is N1QL for Analytics?

Mike Carey: The Couchbase Analytics service is a component of the Couchbase Data Platform that allows users to run analytical-sized queries over their Couchbase JSON data. N1QL for Analytics is the product name for the end-user query language of Couchbase Analytics. It’s a dialect of SQL++, which itself is a language framework; the framework includes a number of choices that a SQL++ implementer gets to pin down about details like data types, missing information, supported functions, and so on. N1QL for Analytics could have been called “Couchbase SQL++”, but N1QL (non-1NF query language) is what Couchbase originally called the SQL-inspired query language for its Query service. A decision was made to keep the N1QL brand name, while adding “for Query” or “for Analytics” to more specifically identify the target service. Over time both N1QLs will be converging to the same dialect of SQL++. The bottom line is that N1QL for Analytics is the first commercial implementation of SQL++.
By the way, there’s a terrific new book available on Amazon called “SQL++ for SQL Users: A Tutorial.” It was written by Don Chamberlin, of SQL fame, for folks who want to learn more about SQL++ (from one of the world’s leading query language experts).

Q4. Is N1QL for Analytics based entirely on the SQL++ framework?

Mike Carey: Indeed it is. As I mentioned, N1QL for Analytics is really a dialect of SQL++, having chosen a particular combination of detailed settings that the framework provides options for. In the future it may gain other extensions, e.g., support for window queries, but right now, N1QL for Analytics is based entirely on the SQL++ framework.

Q5. How is new Couchbase Analytics influenced by the open-source Apache AsterixDB project?

Mike Carey: You’ve probably seen those computer ads in magazines that say “Intel Inside,” yes? In this case, the ad would say “Apache AsterixDB Inside”… :-)

Q6. Specifically, did you re-use the Apache AsterixDB query engine? Or else?

Mike Carey: Specifically, yes. The Couchbase Data Platform, internally, is based on a software bus that the Data service (the Key/Value store service) broadcasts all data events on — and components like the Index service, Full Text service, Cross Datacenter Replication service, and others are all bus listeners. The Analytics service is a listener as well, and it manages a real-time replica of the KV data in order to make that data immediately available for analysis in a performance-isolated manner. Performance isolation is needed so that analytical queries don’t interfere with the front-end applications. Under the hood, the Analytics service is based on Apache AsterixDB — its storage facilities are used to store and manage the data, and its query engine powers the parallel query processing. The developers at Couchbase contribute their work on those components back to the Apache AsterixDB open source, and these days they’re among its most prolific committers. Couchbase Analytics also has some extensions that are only available from Couchbase — including integrated system management, cluster resizing, and a nice integrated query console — but the core plumbing is the same.

Q7. SQL does not provide an efficient solution for querying JSON or semi-structured data in JSON form. Can you explain how Couchbase Analytics analyzes data in JSON format? What is that capability useful for?

Mike Carey: Couchbase Analytics supports a JSON-based “come as you are” data model rather than requiring data to be normalized and schematized for analysis. We like to say that this gives users “NoETL for NoSQL.” You can perhaps think of it as being a data mart for Couchbase application data. The application folks think about their data naturally; if it’s nested, it’s allowed to be nested (e.g., an order object can contain a nested set of line items and a nested shipping address), and if it’s heterogeneous, it’s allowed to be heterogeneous (e.g., an electronic product can have different descriptive data than a clothing product or a furniture product). Couchbase Analytics allows data analysis on data that looks like that — data can “come as it is” and SQL++ is ready to query it in that “as is” form. You can do all the same analyses that you could do if you first designed a relational schema and wrote a collection of ETL scripts to move the data into a parallel SQL DBMS — but without having to do all that. Instead, you can now “have your data and query it too” in its original, natural, front-end JSON structure.
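As a small illustration of querying JSON “as is”, the sketch below sends a SQL++ statement with an UNNEST over nested line items to the Analytics service over HTTP from Python. The endpoint URL, credentials, the orders dataset, and its custId/lineItems fields are all assumptions made for this example, not details from the interview.

    import requests

    # Aggregate over nested line items without flattening or ETL-ing the data first.
    statement = """
    SELECT o.custId, SUM(li.qty * li.price) AS total
    FROM orders o UNNEST o.lineItems AS li
    GROUP BY o.custId;
    """

    resp = requests.post(
        "http://localhost:8095/analytics/service",   # assumed Analytics service endpoint
        json={"statement": statement},
        auth=("Administrator", "password"),
    )
    resp.raise_for_status()
    for row in resp.json().get("results", []):
        print(row)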

Q8. Can you please explain the architecture behind Couchbase`s MPP engine for JSON data?

Mike Carey: Sure, that’s easy — I can pretty much just refer you to the body of literature on parallel relational data management. (For an overview, see the classic DeWitt and Gray CACM paper on parallel database systems.)
Under the hood, the query engine for Couchbase Analytics and Apache AsterixDB looks like a best-practices parallel relational query engine. It uses hash partitioning to scale out horizontally in an MPP fashion, and it uses best-practices physical operators (e.g., dynamic hash join, broadcast join, index join, parallel sort, sort-based and hash-based grouped aggregation, …) to deal gracefully with very large volumes of data. The operator set and the optimizer rules have just been extended where needed to accommodate nesting and schema optionality. Data is hash-partitioned on its primary key (the Couchbase key), with optional local secondary indexes on other fields, and queries run in parallel on all nodes in order to support linear speed-up and/or scale-up.
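A toy sketch of the hash-partitioning idea follows: records are routed to a partition by hashing their primary key, and the same query plan then runs on every partition in parallel. The partition count, key format, and hash choice here are illustrative only, not the engine's actual scheme.

    import hashlib

    NUM_PARTITIONS = 4

    def partition_for_key(key: str) -> int:
        """Deterministically map a primary key to one of NUM_PARTITIONS partitions."""
        digest = hashlib.md5(key.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

    partitions = {p: [] for p in range(NUM_PARTITIONS)}
    for record in ({"id": "order::1001"}, {"id": "order::1002"}, {"id": "order::1003"}):
        partitions[partition_for_key(record["id"])].append(record)

    # Each partition would then execute the same physical plan (scan, join, aggregate)
    # over its local data, and the partial results are merged to answer the query.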

Q9. Do you think other database vendors will implement their own version/dialect of SQL++ ?

Mike Carey: Indeed I do. It’s a really nice language, and it makes a ton of sense as the “right” answer to querying the more general data models that one gets when one lets down their relational guard. It’s a whole lot cleaner than the “JSON as a column type” approach to adding JSON support to traditional RDBMSs in my opinion.

Qx. Anything else you wish to add?

Mike Carey: I teach the “Introduction to Data Management” class at UC Irvine as part of my day job. Our class sizes these days are exceeding 400 students per quarter — database systems are clearly not dead in students’ eyes! For the past few years I’ve been spending the last bit of the class on “NoSQL technology” — which to me means “no schema required” — and I’ve used SQL++ for the associated hands-on homework assignment. It’s been great to see how quickly and easily (relatively new!) SQL users can get their heads around the more relaxed data model and the query power of SQL++. Some faculty friends at the University of Washington have done this as well, and their experience there has been similar. I would like to encourage others to do the same! With SQL++, richer data no longer has to mean writing get/put programs or effectively hand-writing query plans, so it’s a very nice platform for teaching future generations about the emerging NoSQL world and its concepts and benefits.

———————


Michael Carey received his B.S. and M.S. degrees from Carnegie-Mellon University and his Ph.D. from the University of California, Berkeley. He is currently a Bren Professor of Information and Computer Sciences and Distinguished Professor of Computer Science at UC Irvine, where he leads the AsterixDB project, as well as a Consulting Architect at Couchbase, Inc. Before joining UCI in 2008, he worked at BEA Systems for seven years and led the development of their AquaLogic Data Services Platform product for virtual data integration. He also spent a dozen years at the University of Wisconsin-Madison, five years at the IBM Almaden Research Center working on object-relational databases, and a year and a half at e-commerce platform startup Propel Software during the infamous 2000-2001 Internet bubble. He is an ACM Fellow, an IEEE Fellow, a member of the National Academy of Engineering, and a recipient of the ACM SIGMOD E.F. Codd Innovations Award. His current interests center around data-intensive computing and scalable data management (a.k.a. Big Data).

Resources

SQL++ For SQL Users: A Tutorial, Don Chamberlin, September 2018 (Free Book 143 pages)

AsterixDB Overview (Link to .PDF)

Related Posts

– On Engagement Database. Interview with Ravi Mayuram, ODBMS Industry Watch, September 24, 2018

Follow us on Twitter: @odbmsorg

##

On Learned Index Structures. Interview with Alex Beutel
http://www.odbms.org/blog/2018/12/on-learned-index-structures-interview-with-alex-beutel/
Mon, 24 Dec 2018

” Learned indexes are able to learn from and benefit from patterns in the data and the workload. Most previous data structures were not designed to optimize for a particular distribution of data.” –Alex Beutel

I have interviewed Alex Beutel, Senior Research Scientist in the Google Brain SIR team. We talked about “Learned Index Structures“- data structures thought of as performing prediction tasks- their difference with respect to traditional index structures and their main benefits.

RVZ

 Q1. What is your role at Google?

Alex Beutel: I’m a research scientist within Google AI, specifically the Google Brain team. I focus on a mixture of recommender systems, machine learning fairness, and machine learning for systems. While these may sound quite different, I think they are all areas of machine learning application with unique, rich challenges and opportunities driving from understanding the data distribution.

Q2. You recently published a paper on so called Learned Index Structures [1]. In the paper, you stated that Indexes (e.g B-Tree-Index, Hash-Index, BitMap-Index) can be replaced with other types of models, including deep-learning models, which you term learned indexes. Why do you want to replace well known Index-structures?

Alex Beutel: Traditional index structures are fundamental to databases and computer science in general, so they are important to study and have been deeply studied for a long time. I think whenever you can find a new perspective on such a well-studied area, it is worth exploring. In this case, we challenge the assumptions in data structure design by jumping from the more traditional discrete structures to continuous, stochastic components that can make mistakes. However, by taking this perspective, we find that we now have at our disposal a whole breadth of tools from the machine learning, data mining, and statistics communities that we can bring to bear on databases and more broadly data systems problems. Personally, rethinking these fundamental tasks with this new lens has been extremely exciting and fun.

Q3. What is the key idea for learned indexes?

Alex Beutel: The key idea for learned indexes is that many data structures can be thought of as performing prediction tasks, and as a result rather than building a discrete structure, use machine learning to build a model for the task [1].

Q4. What are the main benefits of learned indexes? Which applications could benefits from such learned indexes?

Alex Beutel: I want to separate what are the possible benefits and when or why can learned indexes realize those benefits. At a high level, using machine learned models lets us build data structures from a new broader set of tools. We have found that depending on the learned index configuration, we are able to get improvements in latency (speed), memory usage, and computational cost of running the index structure. Depending on the application, we can tune the learned index to get more savings in one or more of these dimensions. For example, in the paper we propose a hierarchical model structure, and we show that we can build a larger hierarchy and use more memory to get an even faster lookup or use a much smaller hierarchy to save memory and still not make the system too slow.

Why and when we are able to realize these benefits is a much more complicated question. One of the big advantages is that machine learning models make use of floating point operations which can be more easily parallelized with modern hardware, and with the growth of GPUs and TPUs, we may be able to build bigger and more accurate models without increasing latency.

Another aspect that I find exciting is that learned indexes are able to learn from and benefit from patterns in the data and the workload. Most previous data structures were not designed to optimize for a particular distribution of data. Rather, they often assume a worst-case distribution or ignore it entirely. But data structures aren’t being used in the abstract — they are being used on real data, which as we know from other areas of research, have many significant patterns. So one could ask, how can we make use of the patterns in the data being stored or processed to improve the efficiency of systems? ML models are extremely effective in adapting to those varying data distributions.

I think any application that is processing large amounts of data stands to benefit from taking this perspective. We focused on index structures in databases, but we have already seen multiple papers being published applying this perspective to new systems.

Q5. How can learned indexes learn the sort order or structure of lookup keys and use this signal to predict the position or existence of records?

Alex Beutel: B-Trees are already predicting the positions of records: they are built to give the block in which a record lies, and they do this just by processing the key. Learned indexes can do the same thing where they predict approximately where the record is. For example, if the keys are all even integers from 100 to 1000 (that is, key=100 has position 0, key=102 has position 1, key=104 has position 2, etc.), then the model f(key) = (key – 100)/2 will perfectly map from keys to positions. If the data aren’t exactly the even integers but on average we see one key every 2 spots (for example, keys: 100, 101, 105, 106, 109, 110, …) then f(key) above is still a pretty good model and for any key the model will almost find the exact position. Even if the data follow a more complicated pattern, we can learn a model to understand the distribution. It turns out that this is learning the cumulative distribution function, which has long been studied in statistics. This is exciting in that for those examples above, lookups become a constant-time operation, rather than growing with the size of the data; and more generally, this could change how we think about the complexity of these functions.

One challenge is that we can’t just return the approximate position; these data structures need to return the actual record being searched for. Typically, B-Trees will then scan through the block where the key is to find the exact right position. Likewise, when using a learned index, the model may not give the exact right position, but instead a close by one.
To return exactly the correct record, we search near the predicted position to find it; and the more accurate the model is, the faster the search will be.
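
To make the toy example above concrete, here is a minimal Python sketch (not the authors' implementation) of the two steps just described: a simple model predicts an approximate position, and a short local search around that prediction returns the exact record. The fixed error window of ±3 is an assumption chosen to fit this toy data.

```python
import bisect

# Toy data from the example: even integers 100..1000 at positions 0, 1, 2, ...
dense_keys = list(range(100, 1001, 2))

def predict_position(key):
    # f(key) = (key - 100) / 2 maps a key to its (approximate) position.
    return (key - 100) // 2

assert dense_keys[predict_position(104)] == 104  # exact for perfectly regular data

# The same model is still a good approximation for less regular data that
# averages one key every two slots:
sparse_keys = [100, 101, 105, 106, 109, 110, 114, 116]

def lookup(sorted_keys, key, max_error=3):
    # Clamp the prediction to a valid index, then search a small window
    # around it; the more accurate the model, the smaller this window.
    pos = min(max(predict_position(key), 0), len(sorted_keys) - 1)
    lo, hi = max(0, pos - max_error), min(len(sorted_keys), pos + max_error + 1)
    i = bisect.bisect_left(sorted_keys, key, lo, hi)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return None

print(lookup(sparse_keys, 109))  # -> position 4
```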

Knowing if a record exists is quite different. Traditionally, Bloom filters have been used for this task; given a key, the Bloom filter will tell you if the key exists in the dataset, and if the key isn’t in the dataset the Bloom filter will mistakenly tell you it is with some small probability, called the false positive rate (FPR). This is a binary prediction problem: given a key, predict whether it’s in the dataset. Unlike traditional Bloom filters, we learn a model that tries to learn if there is some systematic difference between keys in the dataset and other questions (queries) asked of the Bloom filter. That is, if the dataset has all positive integers less than 1000, there is a trivial model g(key) := 1000 > key > 0 that can perfectly answer any query. If the dataset has all positive integers less than 1000 except for 517 then this is still a pretty good model with very few mistakes (FPR = 0.1%). If the dataset is malware URLs, these patterns are less obvious, but in fact lots of researchers have been studying what patterns are indicative of malware URLs (and distinguish them from normal webpage URLs), and we can build models to make use of these systematic differences.

From an accuracy perspective, Bloom filters have stringent requirements about no false negatives and low FPR, and so we build systems that combine machine learning classifiers and traditional Bloom filters to meet these requirements.
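
As a rough sketch of that combination: a toy classifier stands in for the learned model, and a small backup Bloom filter stores only the keys the model would wrongly reject, so the combined structure never returns a false negative. The threshold classifier and the filter sizes below are arbitrary choices for illustration, not anything from the paper.

```python
import hashlib

class BloomFilter:
    """Tiny, illustrative Bloom filter (not tuned for a target FPR)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all((self.bits >> pos) & 1 for pos in self._positions(key))

# Toy dataset from the text: positive integers below 1000, except 517.
dataset = {k for k in range(1, 1000) if k != 517}

def model_says_in(key):
    # Stand-in for a learned classifier; deliberately imperfect so the
    # backup filter has something to do (it misses keys >= 990).
    return 0 < key < 990

backup = BloomFilter()
for key in dataset:
    if not model_says_in(key):   # the model's false negatives
        backup.add(key)

def exists(key):
    # Never a false negative: every true key passes the model or the backup.
    return model_says_in(key) or backup.might_contain(key)

assert all(exists(k) for k in dataset)
print(exists(517))  # True here: a false positive, mirroring the FPR in the text
```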

Q6. Under which conditions do learned indexes outperform traditional index structures?

Alex Beutel: As mentioned above, I think there are a few key conditions for learned indexes being beneficial. First and foremost, it depends on the patterns of the data and workload being processed. In the range query case (B-Trees), if the data follow a linear pattern then learned indexes will easily excel; more complex data distributions may require more complex model structures, which may not fit the latency or memory budget of the application at hand. For existence indexes, the success of the model depends on how easily it can distinguish between keys in the dataset and real queries to the Bloom filter; distinguishing between even and odd integers is easy, but if the dataset is entirely random keys this will be very difficult.

In addition to making use of patterns in the data and workload, learned indexes depend on the environment they are being used in. For example, we study in-memory databases in our paper, and more recently we have found that disk-based systems require new techniques. For our learned Bloom filters we assume that saving memory is most important, but if there is a strict latency requirement, then the model design may need to change. If GPUs and TPUs were readily available, the learned index design would likely change dramatically.

Q7. What are the main challenges in designing learned index structures?

Alex Beutel: I think there are interesting challenges both in system design and in machine learning.
For systems, machine learned models provide much looser guarantees about accuracy than traditional data structures.
As a result, making use of ML models’ noisy predictions requires building systems that are robust to those errors.
In the B-Tree case we studied different local search strategies. For existence indexes we coupled the model with a Bloom filter to guarantee no false negatives. Interestingly, new research by Michael Mitzenmacher has shown that sandwiching the model between two Bloom filters does even better [2]. I believe there are lots of interesting questions about (a) what is the right prediction task for machine learning models when incorporated in a system and (b) how should these models be safely integrated in the system.

On the machine learning side there are numerous challenges in building models that match the needs of these systems.
For example, most machine learning models are expected to execute on the order of milliseconds or slower; for learned indexes we often need the model to execute thousands of times faster. Tim Kraska, the first author on our paper, did a lot of optimizations for very fast execution of the model. In most of machine learning, overfitting is bad; for learned indexes that is not true — how should that change model design? How do I build model families that can trade-off memory and latency?
How do I build models that match the hardware they are running on, from parallelization to caching effects?

While these are challenges to making learned indexes work, they also present opportunities for interesting research from different communities working together.

Q8. How do learned index structures using neural nets compare with traditional cache-optimized B-Trees?

Alex Beutel: We found some really great benefits. Depending on the use case, learned indexes were up to 3 times faster and in some cases used only 1% of the memory of a traditional B-Tree.

Q9. What is the implication of replacing core components of a data management system through learned models for future systems designs?

Alex Beutel:  As I mentioned above, there have already been multiple papers applying these ideas to new core components, and we have been studying how to extend these ideas to a wide range of areas from indexing multidimensional data to sorting algorithms [3]. We have seen similar opportunities and excitement in systems beyond databases, such as research for scheduling and caching.

My hope is that more folks building data management systems, and really any system that is processing data, think about if there are patterns in the data and workload the system is processing. Because there most likely are patterns, and I believe building new systems that can be customized and optimized for those patterns will greatly improve the systems’ efficiency.

——————————

Alex Beutel is a Senior Research Scientist in the Google Brain SIR team working on neural recommendation, fairness in machine learning, and ML for Systems. He received his Ph.D. in 2016 from Carnegie Mellon University’s Computer Science Department, and previously received his B.S. from Duke University in computer science and physics. His Ph.D. thesis on large-scale user behavior modeling, covering recommender systems, fraud detection, and scalable machine learning, was given the SIGKDD 2017 Doctoral Dissertation Award Runner-Up. He received the Best Paper Award at KDD 2016 and ACM GIS 2010, was a finalist for best paper in KDD 2014 and ASONAM 2012, and was awarded the Facebook Fellowship in 2013 and the NSF Graduate Research Fellowship in 2011. More details can be found at alexbeutel.com.

Resources

[1] Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, Neoklis Polyzotis. The Case for Learned Index Structures. SIGMOD, 2018.

[2] Michael Mitzenmacher. A Model for Learned Bloom Filters, and Optimizing by Sandwiching. NeurIPS, 2018.

[3] Tim Kraska, Mohammad Alizadeh, Alex Beutel, Ed H. Chi, Jialin Ding, Ani Kristo, Guillaume Leclerc, Samuel Madden, Hongzi Mao, Vikram Nathan. SageDB: A Learned Database System. CIDR, 2019.

Stanford Seminar – The Case for Learned Index Structures. EE380: Computer Systems. Speakers: Alex Beutel and Ed Chi, Google, Published on Oct 18, 2018 (LINK to YouTube Video)

Related Posts

On Data, Exploratory Analysis, and R. Q&A with Ronald K. Pearson, ODBMS.org, April 13, 2018

On Apache Kafka®. Q&A with Gwen Shapira,  ODBMS.org, March 26, 2018.

How to make Artificial Intelligence fair, transparent and accountable, ODBMS.org, January 27, 2018

Follow us on Twitter: @odbmsorg

##

On in-database machine learning. Interview with Waqas Dhillon. http://www.odbms.org/blog/2018/11/on-in-database-machine-learning-interview-with-waqas-dhillon/ http://www.odbms.org/blog/2018/11/on-in-database-machine-learning-interview-with-waqas-dhillon/#comments Sat, 17 Nov 2018 18:51:10 +0000 http://www.odbms.org/blog/?p=4859

“The goal of in-database machine learning is to bring popular machine learning algorithms and advanced analytical functions directly to the data, where it most commonly resides – either in a data warehouse or a data lake.” — Waqas Dhillon.

I have interviewed Waqas Dhillon, Product Manager – Machine Learning at Vertica. We talked about in-database machine learning and the new machine learning features of Vertica.

RVZ

Q1. What is in-database machine learning?

Waqas Dhillon: The goal of in-database machine learning is to bring popular machine learning algorithms and advanced analytical functions directly to the data, where it most commonly resides – either in a data warehouse or a data lake. While machine learning is a common mechanism used to develop insights across a variety of use cases, the growing volume of data has increased the complexity of building predictive models, since few tools are capable of processing these massive datasets. As a result, most organizations down-sample their data, which can impact the accuracy of machine learning models and adds unnecessary steps to the predictive analytics process.

In-database machine learning changes the scale and speed at which these machine learning algorithms can be trained and deployed, removing common barriers and accelerating time to insight on predictive analytics projects. To that end, we’ve built machine learning and data preparation functions natively into Vertica, so the computational processes can be parallelized across nodes – scaling out to address performance requirements, larger data volumes, and many concurrent users. Vertica in-database machine learning eliminates the need to download and install separate packages, purchase third-party tools, or move data out of the database. Unlike traditional statistical analysis tools, we’ve given users the ability to archive and manage machine learning models inside the database, so they can train, deploy, and manage their models with a few simple lines of SQL.

Q2. What problem domains are most suitable for using Predictive Analytics?

Waqas Dhillon: Most organizations are realizing the role that predictive analytics can play in addressing certain business challenges to create a competitive advantage. While simple business intelligence and reporting has played a key role in understanding how an organization operates and where improvements can be made, the volume of data available combined with the power of machine learning is driving the adoption of forward-looking, predictive analytics projects. This adoption is compounded by an increase in end-user/customer demand for applications with embedded intelligence that no longer just identify ‘what happened’ but predict ‘what will happen’.

In general, machine learning models using linear regression, logistic regression, naïve Bayes, etc. are better suited for problem domains involving structured data analysis. Beyond this, the most suitable domains for using predictive analytics are driven by the use cases and business applications that drive new revenue opportunities, increase operational efficiencies, or both.

Q3. Can you give us some examples?

Waqas Dhillon: In-database machine learning and the use of predictive analytics can drive tangible business benefits across a broad range of industries. Below are some of the most common industries and use cases where I’ve seen an adoption of predictive analytics capabilities:

• Financial services organizations can discover fraud, detect investment opportunities, identify clients with high-risk profiles, or determine the probability of an applicant defaulting on a loan.
• Communication service providers can leverage a variety of network probe and sensor data to analyze network performance, predict capacity constraints, and ensure quality service delivery to end customers.
• Marketing and sales organizations can use machine learning to analyze buying patterns, segment customers, personalize the shopping experience, and predict which targeted marketing campaigns will be most effective.
• Oil and gas organizations can leverage machine learning to analyze minerals to find new energy sources, streamline oil distribution for increased efficiency and cost effectiveness, or predict mechanical or sensor failures for proactive maintenance.
• Transportation organizations can analyze trends and identify patterns that can be used to enhance customer service, optimize routes, and increase profitability.

Q4. How do you handle machine learning on Big Data using an in-database approach?

Waqas Dhillon: The Vertica Analytics Platform has always been designed specifically for Big Data analytics and other analytical workloads where speed, scalability, and simplicity are crucial requirements.
Since we had spent years building out such a high-performance, scalable SQL engine, we started to ask ourselves, “Why should we limit the scope of our platform to standard SQL functions and descriptive analytics? Why not extend the power of Vertica to include more advanced analytics and machine learning functions?”

While some solutions might be limited by inherent architectural problems, such as lacking a shared-nothing-cluster architecture suitable for big data analytics, Vertica has an incredible engine for performing analytics on large scale data. That’s why we felt it was such an obvious choice to build machine learning functions natively into the platform. By building these machine learning capabilities on top of a foundation that already provides a tested, reliable distributed architecture and columnar compression, customers can now leverage these core features for advanced and predictive analytics use cases.

In Vertica, we have implemented all in-database algorithms from scratch to run in parallel across multiple nodes in a cluster. Using parallel execution for model training, as well as scoring, not only results in extremely fast performance but also extends the capability of these algorithms to run on much larger datasets in comparison to traditional machine learning tools.

Using Vertica for machine learning provides another great advantage born from the fact that the computation engine and data storage management system are combined – this combination eliminates the need to move data between a database and a statistical analysis tool. You can build, share and deploy your machine learning pipelines in-place, where the data lives. This is a very important consideration when working with Big Data since it’s not just difficult, but sometimes outright impossible to move data at that scale between different tools.

Q5. How does Vertica support the machine learning process? Can you give some examples?

Waqas Dhillon: Vertica supports the entire machine learning workflow from data exploration and preparation to model deployment.
Users can explore their data using native database functions. As an analytics database, Vertica includes a large number of functions to support data exploration, and many more have recently been added to the machine learning library. Users can also prepare data with functions for normalization, outlier detection, sampling, imbalanced data processing, missing value imputation and many other native SQL and extended functions. They can also train and test advanced machine learning models like random forests and support vector machines on very large data sets.

There are multiple model evaluation metrics, like ROC, lift table, and AUC, which can be used to assess your existing trained models. Any models built within Vertica can be stored inside the platform, shared with other users on the same instance of Vertica, or exported to other Vertica databases. This can be quite useful when training models in test clusters and then moving them to production clusters. Training and managing models inside the database also reduces the overhead of transferring data into another system for analysis, along with the maintenance of that system.
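
As a rough illustration of what this workflow can look like from a client application, here is a hypothetical Python sketch using the vertica_python client. The table, column, and model names are invented, and the ML function calls are assumptions based on the description above, so the exact names and parameters should be checked against Vertica's documentation.

```python
import vertica_python  # Vertica's Python client library

conn_info = {"host": "vertica.example.com", "port": 5433, "user": "dbadmin",
             "password": "***", "database": "analytics"}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# 1. Train a random forest inside the database -- no data leaves Vertica.
#    (Function name and argument layout are illustrative assumptions.)
cur.execute("""
    SELECT RF_CLASSIFIER('churn_model', 'train_data', 'churned',
                         'tenure, monthly_charges, support_calls');
""")

# 2. Score new rows in place, where the data lives.
cur.execute("""
    SELECT customer_id,
           PREDICT_RF_CLASSIFIER(tenure, monthly_charges, support_calls
                                 USING PARAMETERS model_name='churn_model')
    FROM new_customers;
""")
for customer_id, prediction in cur.fetchall():
    print(customer_id, prediction)

conn.close()
```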

Q6. How did you take advantage of a Massively Parallel Processing (MPP) Architecture, when implementing in-database machine learning in Vertica?

Waqas Dhillon: Vertica’s MPP architecture provided a great foundation on top of which we built a range of in-database machine learning functions, from data ingestion to model storage and scoring capabilities.

For data ingestion, there was already an extremely fast copy command used to move data in parallel into Vertica, where it’s stored on multiple nodes in a cluster. When we were writing our distributed machine learning algorithms, we could already rely on the data distribution across various nodes and instead focus our engineering efforts on the computation logic used to parallelize model training. We have also used a built-in distributed file system to maintain intermediate results as well as the final, trained models. These machine learning functions are mainly developed using Vertica’s C++ SDK, and are executed with Vertica’s distributed execution-engine.

To give an example of a machine learning algorithm used natively within Vertica leveraging the MPP architecture, let’s look at Random Forests. Random Forests is a popular algorithm among data scientists for training predictive models that can be applied to both regression and classification problems. It provides good prediction performance, and is quite robust against overfitting. The running time and memory footprint of this algorithm in the R randomForest package or Python’s scikit-learn can be a major hurdle when working with large data volumes.
Our distributed implementation of Random Forest overcomes these obstacles. Model training is distributed across multiple nodes, with different trees trained on different nodes, and the results are then combined into a single classification model. This model can then be used to perform scoring in parallel on data that might be distributed across many nodes (possibly hundreds) in a cluster.
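
The following is a conceptual Python sketch of that idea, not Vertica's actual implementation: each simulated "node" trains its share of trees on its own data partition, and the trees are then combined into a single voting ensemble for scoring. The synthetic dataset, the 4-node split, and the tree counts are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10_000, n_features=10, random_state=0)

# Pretend the rows are sharded across 4 nodes of a cluster.
partitions = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

# Each "node" trains its trees locally on its own shard (in parallel in a
# real system); here we simply loop over the shards.
forest = []
for i, (X_part, y_part) in enumerate(partitions):
    for t in range(10):  # 10 trees per node -> 40 trees overall
        rng = np.random.RandomState(100 * i + t)
        rows = rng.choice(len(X_part), size=len(X_part), replace=True)  # bootstrap
        tree = DecisionTreeClassifier(max_features="sqrt", random_state=t)
        forest.append(tree.fit(X_part[rows], y_part[rows]))

# Scoring can likewise run in parallel; here we majority-vote over all trees.
def predict(forest, X_new):
    votes = np.stack([tree.predict(X_new) for tree in forest])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print(predict(forest, X[:5]), y[:5])
```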

Q7. You offer SQL-based machine learning functions. Is this an extension to SQL? Can you give us some examples?

Waqas Dhillon: Although Vertica follows the SQL standard, it offers multiple SQL extensions, such as windowing functions and pattern matching. In-database machine learning algorithms are now part of the database’s analytical toolset, allowing users to write SQL-like commands to run machine learning processes. They go beyond other, simpler SQL extensions users will find within Vertica.
For example, a simpler SQL extension in Vertica would be event series pattern matching. Event patterns are simply a series of events that occur in an order, or pattern that you specify. Vertica evaluates each row in your table, looking for the event you define. When Vertica finds a sequence of rows that conform to your pattern among a dataset of possibly hundreds of billions of rows or more, it outputs the rows that contribute to the match.
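
To make the idea concrete, here is a small, purely illustrative Python sketch of event-series matching; it is not Vertica's own SQL syntax, the event rows and pattern are invented, and a real query would also partition the rows by user or session.

```python
# Ordered event rows, e.g. from a clickstream.
events = [
    {"user": 1, "action": "browse"},
    {"user": 1, "action": "add_to_cart"},
    {"user": 1, "action": "purchase"},
    {"user": 2, "action": "browse"},
    {"user": 2, "action": "add_to_cart"},
]
pattern = ["browse", "add_to_cart", "purchase"]

def find_matches(rows, pattern):
    matches, window = [], []
    for row in rows:
        if row["action"] == pattern[len(window)]:
            window.append(row)
        else:
            # Restart; the current row may begin a new candidate sequence.
            window = [row] if row["action"] == pattern[0] else []
        if len(window) == len(pattern):
            matches.append(window)
            window = []
    return matches

print(len(find_matches(events, pattern)))  # -> 1: only user 1 completes the pattern
```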

An example of a SQL extension for machine learning would be support vector machines (SVM). SVM is a very powerful algorithm that can be applied to large data sets for both classification and regression problems. For instance, an SVM model can be trained to predict the sales revenue of an e-commerce platform. There are many other extended SQL functions in Vertica as well to support a typical machine learning workflow from data preparation to model deployment.

Q8. What are the common barriers to Applying Machine Learning at Scale?

Waqas Dhillon: There are several challenges when it comes to applying machine learning to massive volumes of data. Predictive analytics can be complex, especially when big data is added to the mix. Since larger data sets yield more accurate results, high-performance, distributed, and parallel processing is required to obtain insights at a reasonable speed suitable for today’s business.

Traditional machine learning tools require data scientists to build and tune models using only small subsets of data (called down-sampling) and move data across different databases and tools, often resulting in inaccuracies, delays, increased costs, and slower access to critical insights:

• Slower development: Delays in moving large volumes of data between systems increases the amount of time data scientists spend creating predictive analytics models, which delays time-to-value.
• Inaccurate predictions: Since large data sets cannot be processed due to memory and computational limitations with traditional methods, only a subset of the data is analyzed, reducing the accuracy of subsequent insights and putting at risk any business decisions based on these insights.
• Delayed deployment: Owing to complex processes, deploying predictive models into production is often slow and tedious, jeopardizing the success of big data initiatives.
• Increased costs: Additional hardware, software tools, and administrator and developer resources are required for moving data, building duplicate predictive models, and running them on multiple platforms to obtain the desired results.
• Model management: Archiving and managing the machine learning models is a challenge when using most of the data science tools as they usually lack a mechanism for model management.

Q9. How do you overcome such barriers in Vertica?

Waqas Dhillon: Capable of storing large amounts of diverse data while also providing key built-in machine learning algorithms, Vertica eliminates or minimizes many of these barriers. Built from the ground up to handle massive volumes of data, Vertica is designed specifically to address the challenges of big data analytics using a balanced, distributed, compressed columnar paradigm.

Massively parallel processing enables data to be handled at petabyte scale for your most demanding use cases. Column store capabilities provide data compression, reducing big data analytics query times from hours to minutes or minutes to seconds, compared to legacy technologies. In addition, as a full-featured analytics system, Vertica provides advanced SQL-based analytics including pattern matching, geospatial analytics and many more capabilities.

As an optimized platform enabling advanced predictive modeling to be run from within the database and across large data sets, Vertica eliminates the need for data duplication and processing on alternative platforms—typically requiring multi-vendor offerings—that add complexity and cost. Now that same speed, scale, and performance used for SQL-based analytics can be applied to machine learning algorithms, with both running on a single system for additional simplification and cost savings.

————————————-
Waqas Dhillon, Product Manager – Machine Learning, Vertica

Waqas is the product management lead for machine learning with Vertica. In his current role, he drives the strategy and implementation of advanced analytics and machine learning features in the Vertica MPP platform. Waqas holds a bachelor’s degree in computer software engineering from NUST and a master’s degree in management from Harvard University.
Prior to his current role, Waqas worked in multiple positions where he applied data analytics and machine learning to consumer research and revenue growth for companies in the consumer packaged goods and telecommunication industries.

Resources

– Vertica in-database machine learning: product page.

– Vertica in-database machine learning: full documentation.

– Try Vertica for free

Related Posts

– On using AI and Data Analytics in Pharmaceutical Research. Interview with Bryn Roberts  ODBMS Industry Watch, Published on 2018-09-10

– On AI and Data Technology Innovation in the Rail Industry. Interview with Gerhard Kress ODBMS Industry Watch, Published on 2018-07-31

– On Artificial Intelligence, Machine Learning, and Deep Learning. Interview with Pedro Domingos ODBMS Industry Watch, Published on 2018-06-18

Follow us on Twitter: @odbmsorg

##

Big Data and AI– Ethical and Societal implications http://www.odbms.org/blog/2018/10/big-data-and-ai-ethical-and-societal-implications/ http://www.odbms.org/blog/2018/10/big-data-and-ai-ethical-and-societal-implications/#comments Wed, 31 Oct 2018 15:34:43 +0000 http://www.odbms.org/blog/?p=4851

Are computer system designers (i.e. Software Developers, Software Engineers, Data Scientists, Data Engineers, etc.) the ones who will decide what the impact of these technologies will be and whether they will replace or augment humans in society?

Big Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders.

Some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making.

I recently gave a talk at UC Berkeley on the Ethical and Societal implications of Big Data and AI, and on what designers of intelligent systems – not only policy makers and lawyers – can do to take responsibility.

You can find a copy of the presentation here:
http://www.odbms.org/wp-content/uploads/2018/10/Zicari.UCBerkeley.2018.pdf

I am interested to hear from you and receive your feedback.

RVZ

On the Future of AI in Europe. Interview with Roberto Viola http://www.odbms.org/blog/2018/10/on-the-future-of-ai-in-europe-interview-with-roberto-viola/ http://www.odbms.org/blog/2018/10/on-the-future-of-ai-in-europe-interview-with-roberto-viola/#comments Tue, 09 Oct 2018 18:36:10 +0000 http://www.odbms.org/blog/?p=4829

“Creating a suitable ethical and legal framework is key to our European approach on AI and draft AI ethics guidelines will be developed by the end of the year.”– Roberto Viola

I have interviewed Roberto Viola, Director General of DG CONNECT (Directorate General of Communication Networks, Content and Technology) at the European Commission. We talked about the future of AI in Europe, and the new initiatives of the European Commission to foster public and private investment in AI, and to create a “Digital Europe programme”.

RVZ

Q1. Companies with big data pools do have great economic power. Today, that shortlist includes USA companies such as Google, Microsoft, Facebook, Amazon, Apple, and Chinese companies such as Baidu. None of these companies are European.
USA and China are ahead of Europe in developing Data-driven services and solutions, often based on AI. Would you like to comment on this?

Roberto Viola: Europe is quite strong in many areas of AI: it is home to world-class researchers, labs and start-ups, and we have a strong industrial base that can be at the forefront of the adoption of AI. We can capitalise on our assets and strengthen European leadership by supporting excellence in research, particularly in areas where we already lead e.g. in robotics.

However, it is true that, overall Europe is behind in private investment in AI, compared to North America and Asia, and that is why it is crucial that the EU creates an environment that stimulates investment. Our goal is to build on our strengths and support the European entrepreneurial spirit. We must also ensure broader and easier access to services for citizens and industry and address socio-economic and legal issues, based on strong European values such as privacy and data protection.

It is important for European countries and various stakeholders to work together when trying to accomplish these things.
That is why we created the European AI Alliance. Here, everyone with an interest in AI can imagine its future shape, discuss how to maximise the benefits for everyone or debate how to develop ethical AI. I would also like to use this opportunity to invite everyone with an expertise or interest in AI to join the AI Alliance and actively participate in it.

Q2. What are in your opinion the main challenges in the adoption of AI in Europe?

Roberto Viola: The biggest challenge is the adoption of AI all over Europe by organisations of any size and in all fields, not just in the tech sector. This is a key priority for us. AI is already in use in many areas in Europe, and surveys show that the benefits of adopting AI are widely recognised by European businesses. However, only a fraction of European companies have already adopted digital technologies. This situation is particularly acute in the SME category: last year for example, only 10% of SMEs in the EU were using big data analytics, which could in turn be used to build AI technologies.

Europe can only reap the full benefits of AI if all have easy access to the technology and to related knowledge and services. That is why we focus on facilitating access for all potential users to AI technologies, in particular SMEs, companies from non-tech sectors and public administrations, and encourage them to test AI solutions. We aim to achieve this by setting up an AI-on-demand platform and via a network of Digital Innovation Hubs (DIHs). This includes both an existing network of more than 400 DIHs and a new dedicated network of AI-focused DIHs.

Q3. AI technologies can be used either to automate or to augment humans. In the first case, machines replace people, in the second case machine complements people (at least in theory). What is your take on this?

Roberto Viola: I believe that AI cannot only make the lives of workers easier, for example by helping with repetitive, strenuous or dangerous tasks but that it can also provide new solutions by supporting more people to participate and remain in the labour market, including people with disabilities. It is estimated, for example, that around 90% of road accidents are caused by human errors. AI can help to reduce this number. It is vital, however, that these new developments and uses of AI are carried out in an environment of trust and accountability. Creating a suitable ethical and legal framework is key to our European approach on AI and draft AI ethics guidelines will be developed by the end of the year.

AI will both create and destroy jobs, and it will certainly transform many of the existing jobs. AI, like other new technologies before it, is expected to change the nature of work and transform the labour market. It remains unclear what the net effect will be, and studies of the subject differ widely. However, it is obvious that our workforce will have to re-skill and up-skill to be able to master these changes. The ICT sector has created 1.8 million jobs since 2011 and the need for ICT specialists continues to grow. There are now at least 350,000 vacancies for such professionals in Europe pointing to significant skills gaps. Preparing for these socioeconomic changes is one of the three main dimensions of the EU initiative on AI: we need to prepare society as a whole, help workers in jobs that are most likely to be transformed or to disappear, and train more specialists in AI.

Q4. The European Commission has recently proposed an approach to increase public and private investment in AI in Europe. Can you elaborate on this?

Roberto Viola: Our ambitious proposals for investment in AI include a total of EUR20 billion in public and private funding for the period 2018-2020, and then reaching a yearly average of EUR20 billion in the decade after 2020.
The Commission is stepping up its own investment to roughly EUR1.5 billion by the end of 2020 – an increase of around 70%.
The total amounts that we have proposed can be achieved if Member States and the private sector make similar investment efforts, and we are working closely with the Member States on a coordinated action plan on AI to be agreed by the end of 2018, with a view to maximising the impact of such investments at EU and national level.

Under the next multiannual budget of the EU, the Commission plans to increase its investment in AI further, mainly through two programmes: the research and innovation framework programme Horizon Europe, and a new programme called Digital Europe.
Out of a total of nearly EUR100 billion for 2021-2027 under Horizon Europe, the Commission proposes to invest EUR15 billion in the Digital and Industry cluster, which also includes AI as a key activity.

We intend to fund both research and innovation and the accelerating adoption of AI. We will support basic and industrial research, and breakthrough market-creating innovation. Building on Member States’ efforts to establish joint AI-focused research centres, the objective is to strengthen AI excellence centres across Europe by facilitating collaboration and networking between them. Furthermore, the Commission will provide support for testing and experimentation infrastructures that are open to businesses of all sizes and from all regions.

Q5. What instruments do you have to assess the impact of such a plan?

Roberto Viola: We will monitor the adoption of AI across the economy and identify potential shifts in industrial value chains caused by AI as well as societal and legal developments and the situation on the labour market.
We will also regularly evaluate progress towards our objectives. This will involve a systematic analysis of AI-related developments such as advances in AI capabilities, policy initiatives in the Member States, the application of AI solutions in different sectors of the economy, and the effects that the spread of AI applications will have on labour markets.

Q6. Professional codes of ethics do little to change peoples’ behaviour. How is it possible to define incentives for using an ethical approach to software development, especially in the area of AI?

Roberto Viola: AI has great potential benefits – ranging from making our IT systems more efficient to solving some of the world’s greatest challenges, but it also comes with considerable challenges and risks. Some AI applications may indeed raise new ethical and legal questions, for example related to liability or potentially biased decision-making.
For example, algorithms are used to review loan applications, recruit new employees and assess potential customers, and if the data are skewed the decisions recommended by such algorithms may be discriminatory against certain categories or groups.
Given such risks, there is strong demand for the EU to ensure that AI is developed and applied within an appropriate framework that promotes innovation but at the same time also protects our values and fundamental rights.

As a first step, we have initiated the process of drawing up draft AI ethics guidelines with the involvement of all relevant stakeholders. The Commission has set up a new High-Level Expert Group on Artificial Intelligence and a European AI Alliance that brings together a large number of stakeholders. They will work together in close cooperation with representatives from EU Member States to prepare draft AI ethics guidelines that will cover aspects such as the future of work, fairness, safety, security, social inclusion and algorithmic transparency.

While self-regulation can be a first stage in applying an ethical approach, public authorities must ensure that the regulatory framework that applies to AI technologies is fit for purpose and in line with our values and fundamental rights.
For example, the Commission is currently assessing the safety and national and EU liability frameworks in light of the new challenges, and we will examine whether any legislative changes are required. Evaluations of the Product Liability Directive and the Machinery Directive have already been conducted. On the evaluation of the Product Liability Directive, the Commission will issue an interpretative guidance document by mid-2019. The Commission has also carried out an initial assessment of the current liability frameworks. An expert group will help the Commission to analyse these challenges further. We will publish a report, by mid-2019, on the broader implications for, potential gaps in, and orientations for the liability and safety frameworks for AI, Internet of Things and robotics.

Q7. The European Commission has also proposed to create a “Digital Europe programme”. What is it? What are the areas that the Commission will support under such program?

Roberto Viola: Digital Europe is a new programme that builds on the EU’s Digital Single Market strategy launched in 2015 and its achievements so far, and it is aimed at aligning the next multiannual EU budget with increasing digital challenges. The total amount proposed under Digital Europe is €9.2 billion, targeting five areas of investment: digital skills, cybersecurity, high performance computing, artificial intelligence, and public administration.

€2.5 billion of Digital Europe are earmarked for AI: the funding will target in particular testing and experimentation facilities and data platforms. Digital Europe also provides for investing €700 million in supporting the development of advanced digital skills, and €1.3 billion in support for deployment projects, notably in areas like AI.

Q9. Data, AI and Intelligent systems are becoming sophisticated tools in the hands of a variety of stakeholders, including political leaders. “Under the label of “nudging,” and on massive scale, some governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism.
The magic phrase is “big nudging“, which is the combination of big data with nudging.” Is the European Commission doing anything to avoid this in Europe?

Roberto Viola: Like every technology or tool, AI generates new opportunities, but also poses new challenges and risks. Such risks will be addressed in the draft AI ethics guidelines that will be prepared by the High-Level Expert Group on Artificial Intelligence. AI systems have to be developed and used within a framework of trust and accountability.
Citizens and businesses alike need to be able to trust the technology they interact with, and have effective safeguards protecting fundamental rights and freedoms. In order to increase transparency and minimise the risk of bias, AI systems should be developed and deployed in a manner that allows humans to understand the basis of their actions. Explainable AI is an essential factor in the process of strengthening people’s trust in such systems.

Q10. Do we need to regulate the development of artificial intelligence?

Roberto Viola: The Commission closely monitors all relevant developments related to AI and, if necessary, we will review our existing legal framework. The EU has a strong and balanced regulatory framework to build on in order to develop a sustainable approach to AI technologies. This includes high standards in terms of safety and product liability, EU-wide rules on network and information systems security and stronger protection of personal data that came into force in May 2018.

——————————————————

Roberto Viola is Director General of DG CONNECT (Directorate General of Communication Networks, Content and Technology) at the European Commission.

He was the Deputy Director-General of DG CONNECT, European Commission from 2012 to 2015.

Roberto Viola served as Chairman of the European Radio Spectrum Policy group (RSPG) from 2012 to 2013, as Deputy Chairman in 2011 and Chairman in 2010. He was a member of the BEREC Board (Body of European Telecom Regulators), and Chairman of the European Regulatory Group (ERG).

He held the position of Secretary General in charge of managing AGCOM, from 2005 to 2012. Prior to this, he served as Director of Regulation Department and Technical Director in AGCOM from 1999 to 2004.

From 1985-1999 he served in various positions including Head of Telecommunication and Broadcasting Satellite Services at the European Space Agency (ESA).

Roberto Viola holds a Doctorate in Electronic Engineering and a Masters in Business Administration (MBA).

Resources

– European AI Alliance

– High Level Expert group on AI

– Communication Artificial Intelligence for Europe 

– European Commission website on AI policy

Digital Europe:

Link to press release: http://europa.eu/rapid/press-release_IP-18-4043_en.htm

Link to regulation page: https://ec.europa.eu/info/law/better-regulation/initiatives/com-2018-434_fr

– According to McKinsey (2016), European companies operating at the digital frontier only reach a digitisation level of 60% compared to their US peers. Source: https://ec.europa.eu/digital-single-market/digital-scoreboard.

Related Posts

– On Artificial Intelligence, Machine Learning, and Deep Learning. Interview with Pedro Domingos, ODBMS Industry Watch, June 6, 2018

– On Technology Innovation, AI and IoT. Interview with Philippe Kahn, ODBMS Industry Watch, January 27, 2018

– Will Democracy Survive Big Data and Artificial Intelligence? — Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari and Andrej Zwitter, Scientific American, February 25, 2017

Follow us on Twitter: @odbmsorg

##

On Engagement Database. Interview with Ravi Mayuram http://www.odbms.org/blog/2018/09/on-engagement-database-interview-with-ravi-mayuram/ http://www.odbms.org/blog/2018/09/on-engagement-database-interview-with-ravi-mayuram/#comments Mon, 24 Sep 2018 17:48:21 +0000 http://www.odbms.org/blog/?p=4814

“The efficacy of any recommendations or results is entirely dependent on ensuring the right data is being fed into purpose-built models — not simply enabling a connection to Google TensorFlow or Apache Spark MLlib.”– Ravi Mayuram

I have interviewed Ravi Mayuram, Senior Vice President of Engineering and CTO of Couchbase. The main topics of the interview are: how the latest technology trends are influencing the database market, what an engagement database is, and how Couchbase plans to extend its data platform.

RVZ

Q1. How are the latest technology trends- such as for example cloud-native, containers, IoT, edge computing- influencing the database market?

Ravi Mayuram: Businesses today are tasked with solving much harder technological problems than ever before.
A massive amount of data is being generated at an unprecedented pace, and companies are pursuing several technology trends (cloud-native architectures, containerization, IoT data management solutions, edge computing) to maintain or uncover new competitive advantages.

This wide array of trends requires a combination of several types of solutions. The common approach of adding yet another backend toolkit is no longer competitive. Instead, bringing the power of high-speed data interaction out of the database and into the hands of users has stretched developers and the tools they use.

What’s becoming more apparent is that while the latest technologies can certainly address capturing and managing this data explosion, the hard part is to minimize database sprawl by meeting different use cases in a consolidated platform. Only then can you get the full benefit of intelligently combining different data sources and technologies. And that’s precisely where I see these trends influencing the database market: a need to consolidate multiple point solutions into a single platform that will allow us to cover a much wider range of use cases, and at the same time, contain the sprawl.

With a database technology like Couchbase, we’ve recognized this challenge and built a single platform to manage that convergence, giving you access to your data in a flexible, intuitive way. The database itself is more intelligent than ever before – self-managing, more easily deployable, and handling failures better. We’ve focused on introducing new features that allow developers to extract more value (intelligently!) from their data sources via new analytics, eventing, and text search services – all in a single platform. The end result is a more seamless experience across a wider range of technology trends and endpoints, helping our customers gain actionable insights from data captured and stored in Couchbase yet pushed out to the edge to enable user interaction better than ever before.

Q2. How have databases changed over the last 5 years?

Ravi Mayuram: Over the last 7-8 years, the NoSQL movement has matured tremendously. Initially, there was a vast divide in what traditional database systems offered and what NoSQL databases held promise to deliver. While the new databases solved the scale and performance problems, they were not mature in their industrial strength or were not enterprise-grade. These issues have been addressed, and more and more business-critical data now sits in NoSQL systems. These modern database systems are also getting battle tested under production workloads, across every industry imaginable. This has made our engagement database increasingly robust and dependable for developers to stand up far more complex applications, while delivering significant value to the customers they serve.

Q3. What are the main use cases where organisations will benefit in transitioning workloads from relational databases to non relational multi-cloud environments?

Ravi Mayuram: Enterprises have chosen Couchbase to run their most mission critical applications for its rich set of capabilities – from the cloud to the edge.

Today’s database capabilities are increasingly defined by the end user application of the tool. For example, due to the dynamic nature of applications as they mature, the database must have a flexible schema that can adapt as needed. Similarly, it must support both clustered server environments and “always on” mobile applications. The database must also be able to grow and scale as needed, along with supporting highly available environments and global, replicated environments.

At a high level, our customers are building user profiles, session stores, operational dashboarding, and personalization services for their Customer 360, Field Service, Catalog & Inventory Management, and IoT Data Management solutions.
And that’s because relational databases can’t keep up with the demands of these types of applications anymore. More data than ever before is now being generated at every single customer and employee touch point, and the ability to capture new types of data on the fly, and securely move, query, and analyze that data, requires a flexible, geo-distributed, robust data platform. Couchbase Data Platform consolidates many tiers into one – caching, database, full text search, database replication technologies, mobile back end services, and mobile databases. This consolidation of tiers enables architects and developers to build and deliver applications that have not been brought to market before, and at the same time, modernize existing applications efficiently and quickly.

Specifically, some use-cases include content entitlement, site monitoring, shopping cart, inventory/pricing engine, recommendation engine, fleet tracking, identity platform, work order management, and mobile wallet to name a few.

Q4. How do you define an end-to-end platform?

Ravi Mayuram: From a technical requirements perspective, there are six key concerns I believe a true end-to-end platforms solves for:

  1. Intuitive: Accessing data has to be easy. It must follow industry conventions that are familiar to SQL database users. Using standard SQL query patterns allows applications to be developed faster by leveraging existing database knowledge whether for ad hoc querying, real-time analytics, or text search.
  2. Cloud: The platform must be built for any type of cloud: private, public, hybrid, on-premises. And it has to be global, always available, all the time.
  3. Scale: The platform must be built for scale. This is a given. As your user demand spikes, your data platform needs to support that.
  4. Mobile: The platform must be seamlessly mobile. Data must be available at the point of interaction in today’s digital world, and that has grown ever-so important as more customers and employees have moved to mobile devices for their everyday activities.
  5. Always-on availability: The platform should always be on (five nines availability), and always be fast. No downtime, because who can afford downtime in today’s global economy?
  6. Security across the stack: The platform needs to be secure, end-to-end. A lot of customer and business data sits in these databases. You must be able to encrypt, audit, protect, and secure your data wherever it lives – on the device, over the internet, in the cloud.

Based on these criteria, I’d define an end-to-end platform as one just like Couchbase provides.

Q5. You have positioned Couchbase as the ‘engagement database’. How would you define an engagement database? What are the competitive differentiators compared with other types of databases?

Ravi Mayuram: An engagement database makes it easier to capture, manipulate, and retrieve the data involved in every interaction between customers, employees, and machines. The exponential rise of big data is making it more costly and technically challenging for massively interactive enterprises to process – and leverage – those interactions, especially as they become richer and more complex in terms of the data, documents, and information that are shared and created.

Many organizations have been forced to deploy a hard-to-manage collection of disparate point solutions. These overly complex systems are difficult to change, expensive to maintain, and slow, and that ultimately harms the customer experience.

An engagement database enhances application development agility by capitalizing on a declarative query language, full-text search, and adaptive indexing capabilities, plus seamless data mobility. It offers unparalleled performance at scale – any volume, volatility, or speed of data, any number of data sources, and any number of end users with an in-memory dataset process, smart optimization, and highly performant indexing. And it does all this while remaining simple to configure and set up, easy to manage across the multi-cloud environments common in today’s enterprises, as well as globally reliable and secure in context of the stringent uptime requirements for business-critical applications.

Q6. Often software vendors offer managed services within their own cloud environments. Why did you partner with Rackspace instead?

Ravi Mayuram: One of the key tenets of Couchbase Managed Cloud was to offer our customers maximum flexibility without compromise – with respect to performance, security and manageability. By deploying within the customer’s cloud environment, we can achieve all three without any compromises:

  • Co-locating applications and databases within the same cloud environment eliminates expensive hops of traversing cloud environment boundaries thus offering the maximum performance possible at the lowest possible latency.
  • Enables the infrastructure and data to reside within the security boundaries defined by the customer to ensure a consistent security and compliance enforcement across their entire cloud infrastructure.
  • Lastly, it gives our customers the choice and flexibility to get the best pricing on their cloud infrastructure from their provider of choice, without a vendor in the middle charging a premium for the same infrastructure, as some of our competitors do.

Along with our design principles, it also became evident to us early on that instead of building this on our own we would serve our customers better by partnering with someone who has developed significant managed services expertise. We quickly zeroed in on Rackspace – a pioneer in the managed services industry – as our partner of choice. We believe this best-of-breed combination of Couchbase’s database expertise and powerful NoSQL technology with Rackspace’s fanatical support model and dev-ops expertise offers our customers a compelling option, as evidenced by the overwhelming response to the product since its launch.

Q7. What technical challenges do developers need to overcome as they begin to integrate emerging technologies such as AI, machine learning and edge computing into their applications?

Ravi Mayuram: AI/ML brings together multiple disciplines, from data engineering to data science, and the cross-disciplinary nature of these implementations is often at the core of the technical challenges for developers. Combining knowledge of how the models and algorithms work with a firm grounding in the data being fed into those models is critical yet challenging. Moreover, with machine learning we have a fundamentally difficult debugging problem, rooted in requisite modeling creativity and extensive experimentation. Thus the efficacy of any recommendations or results is entirely dependent on ensuring the right data is being fed into purpose-built models — not simply enabling a connection to Google TensorFlow or Apache Spark MLlib.

Add in edge computing, and we are further confounded by the challenges of big data, from streaming analytics requiring active queries where the answers update in real-time as the data changes, to long-term storage and management of real-time data, both on the cloud and on the edge.

Q8. Talking with your customers, what are their biggest unmet, underserved needs?

Ravi Mayuram: For many of our customers, it comes down to a matter of scale. Information architectures in enterprises have evolved over time to include many solutions, all aimed at different needs. That makes it hard to really capitalize on the data that is now an asset for every business. As traffic grows, it can be impossible to adequately scale performance, a headache to manage multiple complex software solutions and avoid duplication of data, and difficult to quickly develop applications that meet modern expectations for user experience.

As we continue to evolve our platform, we look for opportunities to solve for these challenges. We architected the platform from the ground up to meet the demands of enterprise performance. We are consolidating more services in the database tier, bringing logic to the data layer to make sure these businesses are more efficient about how they capitalize on their data assets. We make sure we leverage language familiar to developers and we contribute to and build toward industry standards.

Ultimately, we want to provide a data platform that both empowers architects to solve their near-term issues and supports their long-term digital strategy, whatever that may be.

Q9. What advice would you offer enterprises for managing database sprawl?

Ravi Mayuram: “Database sprawl” has continued to be one of the biggest issues facing companies today.
As applications continue to evolve, rapidly changing requirements have led to a growing number of point solutions at the data layer. The organization is then forced to stitch together a broad array of niche solutions and manage the complexity of changing APIs and versions. Without a platform to contain this sprawl, companies are moving data between systems, inexplicably duplicating data, and changing the data model or format to suit each individual technology, while working to learn the internal skills necessary to manage all of them. That’s why so many companies are choosing a platform like Couchbase to consolidate these technologies, enabling them to bring their solutions to market faster with streamlined data management.

Q10. How do you plan to extend your platform?

Ravi Mayuram: As our customers continue to converge data technologies onto Couchbase, we will remain steadfast in building the most robust, highly performant enterprise platform for data management. At the same time, systems are expected to become more and more intelligent. As we automate more and more database services, we envision increasingly autonomous systems that can self-manage and self-heal. We’ve already built tools like our Autonomous Operator for Kubernetes that help with the heavy lifting in cloud environments. We’re providing new capabilities like the Couchbase Analytics service, which allows users to get real-time analytics from their operational data, and Couchbase Eventing for server-side processing.

Meanwhile, as the amount of data grows, so does the need to extract more value from that data. We are aiming to further decrease the total cost of ownership by reducing operational complexity and supporting more multi-tenancy and high application density scenarios. All of these features will extend our platform into a more manageable, responsive, and intelligent system for our users.

———————————————-

Ravi Mayuram

As Senior Vice President of Engineering and CTO, Ravi is responsible for product development and delivery of the Couchbase Data Platform, which includes Couchbase Server and Couchbase Mobile. He came to Couchbase from Oracle, where he served as senior director of engineering and led innovation in the areas of recommender systems and social graph, search and analytics, and lightweight client frameworks. Also while at Oracle, Ravi was responsible for kickstarting the cloud collaboration platform. Earlier in his career, Ravi held senior technical and management positions at BEA, Siebel, Informix, HP, and the startup BroadBand Office. Ravi holds a Master of Science degree in Mathematics from the University of Delhi.

Resources

– Couchbase Announces First Commercial Implementation of SQL++ with N1QL for Analytics

– Couchbase Brings NoETL to NoSQL with New Analytics Service in Latest Release of Couchbase Data Platform

– Don Chamberlin’s SQL++ Tutorial (LINK to .PDF registration required)

– Couchbase Autonomous Operator 1.0 provides the first native integration of Kubernetes with the Couchbase Data Platform (Release Notes)

Related Posts

– YCSB-JSON: Benchmarking JSON databases by extending YCSB, by Keshav Murthy. Couchbase Blog, September 10, 2018

Follow us on Twitter: @odbmsorg

##

On using AI and Data Analytics in Pharmaceutical Research. Interview with Bryn Roberts  http://www.odbms.org/blog/2018/09/on-using-ai-and-data-analytics-in-pharmaceutical-research-interview-with-bryn-roberts/ http://www.odbms.org/blog/2018/09/on-using-ai-and-data-analytics-in-pharmaceutical-research-interview-with-bryn-roberts/#comments Mon, 10 Sep 2018 03:53:49 +0000 http://www.odbms.org/blog/?p=4794

“I’m intrigued by the general trend towards empowering individuals to share their data in a secure and controlled environment. Democratisation of data in this way has to be the future. Imagine what we will be able to do in decades to come, when individuals have access to their complete healthcare records in electronic form, paired with high quality data from genomics, epigenetics, microbiome, imaging, activity and lifestyle profiles, etc., supported by a platform that enables individuals to share all or parts of their data with partners of their choice, for purposes they care about, in return for services they value – very exciting!” –Bryn Roberts

I have interviewed Bryn Roberts, Global Head of Operations for Roche Pharmaceutical Research & Early Development, and Site Head in Basel. We talked about using AI and Data Analytics in Pharmaceutical Research.

RVZ

Q1. What are your responsibilities as Global Head of Operations for Roche Pharmaceutical Research & Early Development, and Site Head in Basel?

Bryn Roberts: I have a broad range of responsibilities that center on creating and operating a highly innovative global R&D enterprise, Roche pRED, where excellent scientific decision making is optimised alongside efficiency, effectiveness, sustainability and compliance. Informatics is my largest department and includes workflow platforms in discovery and early development, architecture, infrastructure and software development, data science and digital solutions.

Facilities, infrastructure and end-to-end lab services, including three new R&D center building projects, provide state-of-the-art innovation centers and labs that integrate the latest architectural concepts, instrumentation, automation, robotics and supply chain to facilitate cutting-edge science.

These more tangible assets are complemented by a number of business operations teams, who oversee quality, compliance, risk management and business continuity, information and knowledge management, research contracts, academic and industrial collaborations, change and transformation, procurement, safety-health-environment, etc.

Finally, I lead our Personalised Healthcare and Medical Genomics strategies and initiatives, ensuring close collaboration and insight sharing across the Roche Group.

As the Site Head for the Roche Innovation Center in Basel, I am, together with my local leadership team, accountable for the engagement and well-being of more than a thousand Research and Early Development colleagues at our headquarters site.
Our task is to create a vibrant environment that attracts, motivates and equips world-class talent. Initiatives range from scientific meetings, wellbeing programmes, workplace improvements, communication and knowledge sharing, celebrations and social events, to engagement of local academic and governmental organizations, sponsorship of local scientific conferences, and contribution to the overall Roche site development in Basel and Kaiseraugst.

Q2. Understanding a disease now requires integrated data and advanced analytics. What are the most common problems you encounter when integrating data from different sources and how you solve them?

Bryn Roberts: The challenges often relate to the topics represented by the FAIR acronym. These are Findability, Accessibility, Interoperability and Reusability. As with all organizations where large data assets have been generated and acquired over a long period of time, across many departments and projects, it is challenging to establish and maintain these FAIR Data principles. We have been committed to FAIR data for many years and continue to increase our investment in ‘FAIRification’, with particular emphasis currently on clinical trial and real-world data. Addressing the challenges requires a well thought through, and robustly implemented, information architecture incorporating data catalogues based on high quality meta-data, a holistic terminology service that enables semantic data integration, curation processes supporting data quality and annotation, appropriate application of data standards, etc.
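
As a small, hypothetical illustration of the semantic-integration step, the sketch below maps free-text source terms onto a shared vocabulary before integration. The mapping table and ontology codes are illustrative assumptions, not Roche's actual terminology service.

# Minimal sketch of semantic data integration via a terminology service:
# local source terms are mapped to shared ontology terms before integration,
# so the same concept is queryable across sources. The mapping table and
# ontology codes are illustrative assumptions.

TERMINOLOGY = {
    # local term            -> (preferred label, ontology code)
    "nsclc":                  ("non-small cell lung carcinoma", "NCIT:C2926"),
    "non small cell lung ca": ("non-small cell lung carcinoma", "NCIT:C2926"),
    "t2dm":                   ("type 2 diabetes mellitus",      "MONDO:0005148"),
}

def harmonise(term):
    """Map a free-text source term onto the shared vocabulary, or flag it."""
    key = term.strip().lower()
    if key not in TERMINOLOGY:
        return {"source": term, "status": "needs curation"}
    label, code = TERMINOLOGY[key]
    return {"source": term, "label": label, "code": code, "status": "mapped"}

print(harmonise("NSCLC"))  # -> mapped to the shared label and code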

On the advanced analytics side, it is very helpful to establish mechanisms for sharing algorithms and analysis pipelines, such as code repositories, and for annotating derived data and insights. Applying the FAIR principles to algorithms and analysis pipelines, as well as datasets, is an excellent way of sharing knowledge and leveraging expertise in an organization.

We are currently implementing a ‘Data Commons’ architecture framework to facilitate data management, integration, ‘FAIRification’, and to enable analysts of different types to leverage fully the data, as well as insights and analyses from their colleagues. Frameworks like this are essential in a large R&D enterprise, utilising complex high-dimensional data (e.g. genomics, imaging, digital monitoring), requiring federation of data and/or analyses, robust single-point-of-truth or master data management, access control, etc. In this regard, we are in our second generation architecture for our platform supporting disease understanding. My colleague, Jan Kuentzer, presented an excellent overview at the PRISME Forum last year and the slides are available if people would like to learn more (Roche Data Commons).

[Figure: Roche Data Commons architecture overview]

Q3. How do you judge the quality of data, before and after you have done data integration?

Bryn Roberts: This question of data quality is even more complicated than it may first appear. Although there are some more elaborate models out there, conceptualising data quality in two broad perspectives may help.

Firstly, there is what we might call prescriptive quality, where we can test data against pre-determined standards such as vocabularies, ontologies, allowed values or ranges, counts, etc. This is an obvious step in data quality assessment and can be automated to a large degree, including through database schemas and constraints. A very challenging aspect of prescriptive quality is judging the upstream processes involved in data collection and pre-processing: for example, determining whether analytical data have been associated with the correct samples, whether a manual entry was read and typed correctly, or whether data from a collaborator have been falsified. The probability of quality issues such as these can be reduced through robust protocols, in-process QA steps, automation, algorithms to detect systematic anomalies, etc. In the standard data quality models, we might consider prescriptive quality as covering dimensions such as accuracy, integrity, completeness, conformity, consistency and validity.
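
A minimal sketch of such prescriptive checks, assuming illustrative column names, allowed vocabularies and ranges:

# Minimal sketch of prescriptive quality checks: values are tested against
# pre-determined vocabularies, ranges and completeness rules before loading.
# Column names, ranges and vocabularies here are illustrative assumptions.

RULES = {
    "smoking_status": {"allowed": {"current", "former", "never"}},
    "age":            {"min": 0, "max": 120},
    "sample_id":      {"required": True},
}

def check_record(record):
    """Return a list of human-readable issues; empty means the record passes."""
    issues = []
    for column, rule in RULES.items():
        value = record.get(column)
        if rule.get("required") and value in (None, ""):
            issues.append(f"{column}: missing required value")
            continue
        if "allowed" in rule and value not in rule["allowed"]:
            issues.append(f"{column}: {value!r} not in allowed vocabulary")
        if "min" in rule and value is not None and not rule["min"] <= value <= rule["max"]:
            issues.append(f"{column}: {value} outside range {rule['min']}-{rule['max']}")
    return issues

print(check_record({"sample_id": "S-17", "smoking_status": "former", "age": 57}))   # -> []
print(check_record({"sample_id": "", "smoking_status": "unknown", "age": 140}))     # -> three issues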

Secondly, what we might call the interpretive quality perspective, relating to the way the data will be interpreted and used for decision making. For example, the smoking status of patients with lung diseases may be recorded simply as: current, former or never. Despite the data meeting the prescribed standards, being accurate, complete and conforming to the model, they may not be of sufficient quality to address the complexities of the biology underlying the diseases, where one might need information describing the number of cigarettes smoked per day and the time since the individual smoked their last cigarette.
Similarly, when working with data derived from an algorithm, one may need to understand the training set and the boundaries of the model to judge how far the derived data can be interpreted for specific input conditions. One can address some of these issues with meta-data describing how the data were generated, whether in the lab, in the clinic or in silico.

Q4. What are the criteria you use to select the data to be integrated?

Bryn Roberts: Certainly the quality aspects above play a key role. We have, for example, discarded historical laboratory results when, after careful consideration, we decided that the meta-data (lab protocols, association with target information, etc.) were insufficient for anyone to make meaningful use of them. Data derived from old technologies, despite being valuable at some point in the past, may have been superseded or may not meet today’s requirements, so will have lower priority for integration, although may still be archived for specialist reference. Relevance is another critical factor – we prioritise the integration of data relating to our current molecules or disease targets, and data that we deem to have the most valuable content.

Q5. Is there a risk that data integration introduces unwanted noise or bias? If yes, how do you cope with that?

Bryn Roberts: I’m not too concerned about these aspects when the above architectures and principles are applied. There is clearly a risk of bias when the integrated landscape is incomplete, so understanding what you have, and what you don’t, when searching is important. Storing only aggregated or derived data can be risky, as aggregation can mask properties such as skewness and outliers, and there are obviously benefits in having the ability to access and re-analyse upstream and raw data, as models and algorithms improve or an analyst has a specific use-case. Integration, if performed well, should not introduce additional noise, although noise reduction may potentially mask signals in data when they are aggregated or transformed in other ways.
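
A tiny illustration of why storing only aggregates is risky: the two series below share the same mean, but only the raw values reveal the outlier and the skew.

# Small illustration of why storing only aggregated data is risky: two series
# with the same mean have very different shapes, and the outlier that matters
# disappears once only the average is kept. Values are made up for illustration.
from statistics import mean, stdev

stable = [100, 101, 99, 100, 100, 100]
spiky  = [80, 80, 80, 80, 80, 200]      # same mean, driven by one outlier

for name, series in (("stable", stable), ("spiky", spiky)):
    print(name, "mean:", round(mean(series), 1),
          "stdev:", round(stdev(series), 1), "max:", max(series))
# Both report a mean of 100, but only the raw data reveals the outlier.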

I often hear people talking about Data Lakes and it certainly seems to be one of the hype terms of the last couple of years. This approach to data ‘integration’ does concern me if not implemented thoughtfully, especially for the complex scientific and clinical data used in R&D. Given that the Big Data stack allows for data to be poured into the metaphorical ‘Lake’ with a very low cost of entry, it is tempting to throw everything in, getting caught up with KPIs such as volume captured, with little thought to the backend use-cases and costs incurred when utilising the data. I also wouldn’t advocate the opposite extreme of RDBMS-only data Warehousing, where the front-end costs and timelines escalate to unreasonable levels and the models struggle to incorporate new data types. There is a pragmatic middle-ground, where up-front work on the majority of data has a positive return on investment, but challenging data are not excluded from the integration. This complementary Warehouse+Lake approach allows for continuous refinement, based on ongoing use of the data, to maximize value over the longer-term.
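
A rough sketch of that complementary Warehouse+Lake routing idea, with hypothetical paths and a simple schema-based rule (all names, formats and the rule itself are assumptions):

# Sketch of the complementary Warehouse+Lake idea: well-understood data with an
# agreed schema gets up-front modelling (the "warehouse" path), while data that
# cannot yet be modelled lands in cheap raw storage (the "lake" path) for later
# refinement. Paths, formats and the routing rule are illustrative assumptions.
import datetime
import json
import pathlib

CURATED = pathlib.Path("warehouse")   # modelled, schema-checked zone
RAW     = pathlib.Path("lake/raw")    # schema-on-read zone

def land(dataset_name, records, schema=None):
    """Route a dataset: curated zone if every record meets the agreed schema, raw zone otherwise."""
    stamp = datetime.date.today().isoformat()
    if schema and all(set(schema) <= set(r) for r in records):
        target = CURATED / f"{dataset_name}_{stamp}.json"
    else:
        target = RAW / f"{dataset_name}_{stamp}.json"   # keep for later refinement
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(records))
    return target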

Q6. What specific data analytics techniques are most effective in pharma R&D?

Bryn Roberts: We have so many data types and use-cases, spanning chemistry, biology, clinic, business, etc. that we apply almost every analytic technique you can think of. Classical statistical methods and visual analytics have broad application, as do modelling and simulation. The latter being used extensively in areas from molecular simulations and computational chemistry to genotype-phenotype associations, pharmacokinetics and epidemiology. We are increasingly using Artificial Intelligence (AI), Machine Learning and Deep Learning, in applications such as image analysis, large scale clinico-genomic analysis, phenotypic profiling and analysis of high dimensional time-series data.

Q7. What are your current main projects in the so called “Precision Medicine”?

Bryn Roberts: In Roche we tend to use the term Personalised Healthcare rather than Precision Medicine, since we have both Pharmaceuticals and Diagnostics divisions. However, the intention is similar in that we want to identify which treatments and other interventions will be effective and safe for which patients, based on profiling, which may include genetics, genomics, proteomics, imaging, etc. We have many initiatives ongoing in research, development and for established products. Developing a deeper understanding of how the mutational and immunological status of tumours influences response to targeted therapeutics, immunotherapies and combinations is one example. Forward and reverse translation in such examples is critical: we design clinical trials and select participants, then, in turn, inform new research initiatives based on data fed back from the clinic. We have made considerable headway in this space thanks to progress in genomic analysis and quantitative digital pathology, supported by collaborations across the Roche Group, including organizations such as Tissue Diagnostics, Foundation Medicine and Flatiron.

Another example is in ophthalmology, where we are trying to identify biomarkers that predict disease progression and drug response to enable appropriate treatment selection, including prophylaxis.

A third quite different example is our application of mobile and sensor technology to monitor symptoms, disease progression and treatment response – the so called “Digital Biomarkers”. We have our most advanced programmes in Multiple Sclerosis (MS) and Parkinson’s Disease (PD), with several more in development. Using these tools, a longitudinal real-world profile is built that, in these complex syndromes, helps us to identify signals and changes in symptoms or general living factors, which may have several potential benefits. In clinical trials we hope to generate more sensitive and objective endpoints with high clinical relevance, with the potential to support smaller and shorter studies, and possibly validate targets in earlier studies that might otherwise be overlooked. In the general healthcare setting, tools like these may have great value for patients, physicians and healthcare systems if they are used to inform tailored treatment regimens and enable supportive interventions such as the timing of home visits or provision of walking aids to reduce falls. For those interested in learning more about our work in MS there is more information available online about our Floodlight Open programme.

Q8. You have been working on trying to detect Parkinson's disease. What is your experience of using Deep Learning for that purpose?

Bryn Roberts: The data we collect with the digital biomarker apps fall into two classes: 1) active test data, where the subject performs specific tasks on a daily basis, and 2) continuous passive monitoring data, where the subject carries the device (e.g. smartphone) with them as they go about their daily lives, while sensors such as accelerometers and gyroscopes collect data continuously. These latter data form complex time series, with acceleration and rotation each measured in 3 axes, many times per second. From these data, we build a picture of the individual's daily activities and performance, which is ultimately what we hope to improve for patients with our new therapies. We apply Deep Learning to do this activity-performance classification, or Human Activity Recognition (HAR), using deep artificial neural networks trained on well-annotated datasets. Since the data are time-series, the network uses Long Short-Term Memory (LSTM) layers to provide recurrence, hence the name "Recurrent Neural Network" or RNN. Examples of what we might study here are how well a patient with PD is able to stand up from a chair or climb a staircase.
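
For readers who want a concrete picture, here is a minimal sketch of an LSTM-based HAR classifier over windows of tri-axial accelerometer and gyroscope signals, broadly in the spirit of the approach described above. The window length, layer sizes, class count and the toy training data are illustrative assumptions, not the production model.

# Minimal sketch of an LSTM-based human activity recognition (HAR) classifier
# over tri-axial accelerometer + gyroscope windows. Shapes, layer sizes and the
# class count are illustrative assumptions.
import numpy as np
import tensorflow as tf

TIMESTEPS, CHANNELS, N_CLASSES = 128, 6, 5   # e.g. 3 accel + 3 gyro axes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),                      # recurrence over the sensor window
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-in for annotated sensor windows (label = activity class).
x = np.random.randn(256, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)

model.fit(x, y, validation_split=0.1, epochs=2, batch_size=32)  # 90/10 train/validation split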

The advantages of using digital, mobile and AI technologies in this way, compared to infrequent in-clinic assessments, are that they are highly objective and sensitive, can detect day-to-day symptom fluctuations, are performed in the real-world setting and therefore have increased relevance, place a relatively low burden on patients, and produce data that can be assessed by the patient and/or physician in near real-time, so both become better informed and empowered.

Extending the application beyond clinical trials, disease monitoring and management, these technologies have the potential, in some disorders, to deliver solutions with a direct beneficial effect that can be measured objectively through improved outcomes. Thus, this work is also laying the foundation for advances in digital therapeutics, or “digiceuticals”, where we’ve seen a huge increase in interest, and the first regulatory approvals, over the last year or so.

With great power comes great responsibility, so we work closely with the participants and regulators to ensure that data protection and privacy are upheld to the highest standards and that participants are fully informed and consent.
As we have developed the platform over the past few years, the establishment of robust end-to-end data processes and the building of trust has run side-by-side with the technology innovation.

Q9. What kind of public data did you use for training your models?

Bryn Roberts: The Human Activity Recognition (HAR) model was initially trained using two independent public datasets of everyday activity from normal individuals. The first from Stisen et al. (“Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition”, Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, 2015.) and the second from Weiss et al. (“The impact of personalization on smartphone-based activity recognition”, AAAI Workshop on Activity Context Representation: Techniques and Languages, 2012.). From these data, 90% were used to train the model and 10% for model validation.

Q10. What results did you obtain so far?

Bryn Roberts: In passive monitoring of gait and mobility we have published, for example, significant differences between healthy subjects and PD patients in parameters such as sitting-to-standing transitions, turning speed when walking and in overall gait parameters. In the active test panel, we have demonstrated correlation with the current standard rating scale for PD (MDS-UPDRS) in symptom areas such as tremor, dexterity, balance and postural stability. However, in some measurements (e.g. rest tremor) the digital biomarker appears to be more sensitive at detecting low-intensity symptoms than the in-clinic rating, and corresponds better with patients’ self-reported data.

For more information, see, for example: “Evaluation of Smartphone-Based Testing to Generate Exploratory Outcome Measures in a Phase 1 Parkinson’s Disease Clinical Trial”, Lipsmeier et al., Movement Disorders, 2018.

Q11. Are there any other technological advances on the horizon that you are excited about?

Bryn Roberts: There's a lot of activity in the healthcare and pharma sector at the moment around blockchain. Some of the use-cases are of potential interest to us in R&D, such as the secure sharing of genomic and medical data.
I don't think blockchain is a requirement to do this effectively, but it may be an enabler, especially if it gains broad adoption.

I’m intrigued by the general trend towards empowering individuals to share their data in a secure and controlled environment. Democratisation of data in this way has to be the future. Imagine what we will be able to do in decades to come, when individuals have access to their complete healthcare records in electronic form, paired with high quality data from genomics, epigenetics, microbiome, imaging, activity and lifestyle profiles, etc., supported by a platform that enables individuals to share all or parts of their data with partners of their choice, for purposes they care about, in return for services they value – very exciting!

This vision, and even the large datasets available today, are driving a paradigm shift in data management and compute for us. The need to federate, both data and compute, across multiple locations and organisations is a change from the recent past, when we could internalise all the data of interest into our own data centers. Cloud, Hadoop, containers and other technologies that support federation are maturing quickly and are a great enabler to big data and advanced analytics in R&D.

What I’m particularly excited about just now is the potential of universal quantum computing (QC). Progress made over the last couple of years gives us more confidence that a fault-tolerant universal quantum computer could become a reality, at a useful scale, in the coming years. We’ve begun to invest time, and explore collaborations, in this field. Initially, we want to understand where and how we could apply QC to yield meaningful value in our space. Quantum mechanics and molecular dynamics simulation are obvious targets, however, there are other potential applications in areas such as Machine Learning.
I guess the big impacts for us will follow “quantum inimitability” (to borrow a term from Simon Benjamin from Oxford) in our use-cases, possibly in the 5-15 year timeframe, so this is a rather longer-term endeavour.

————————————————–

Dr Bryn Roberts

Bryn gained his BSc and PhD in pharmacology from the University of Bristol, UK. Following post-doctoral work in neuropharmacology, he joined Organon as Senior Scientist in 1996. A number of roles followed with Zeneca and AstraZeneca, including team and project leader roles in high throughput screening and research informatics. In 2004 he became head of Discovery Informatics at the AstraZeneca sites in Cheshire, UK.

Bryn joined Roche in Basel in 2006, and his role as Global Head of Informatics was expanded in 2014 to Global Head of Operations for Pharma Research and Early Development. He is also the Centre Head for the Roche Innovation Centre Basel.

Beyond Roche, Bryn is a Visiting Fellow at the University of Oxford, where he is a member of the External Advisory Board of the Dept. of Statistics and the Scientific Management Committee for the Systems Approaches to Biomedical Sciences Centre for Doctoral Training. He is a member of the Advisory Board to the Pistoia Alliance. Bryn was recognized in the Fierce Biotech IT list of Top 10 Biotech Techies 2013 and in the Top 50 Big Data Influencers in Precision Medicine by the Big Data Leaders Forum in 2016.

Resources

– How artificial intelligence is changing drug discovery. Nature, 30 May 2018

Related Posts

– AI, Mental Health & Youper. Q&A with Jose Hamilton Vargas. ODBMS.org, August 23, 2018

– On Precision Medicine. Q&A with Todd Winey. ODBMS.org, February 14, 2018

– Beyond the Molecule and Beyond the Device: Machine Learning and the Future of Healthcare. By Shomit Ghose, Onset Ventures. ODBMS.org, August 24, 2017

Follow us on Twitter: @odbmsorg

##
