{"id":3127,"date":"2014-03-25T07:51:55","date_gmt":"2014-03-25T07:51:55","guid":{"rendered":"http:\/\/www.odbms.org\/blog\/?p=3127"},"modified":"2014-03-25T07:51:55","modified_gmt":"2014-03-25T07:51:55","slug":"data-centers-challenges-interview-david-gorbet","status":"publish","type":"post","link":"https:\/\/www.odbms.org\/blog\/2014\/03\/data-centers-challenges-interview-david-gorbet\/","title":{"rendered":"What are the challenges for modern Data Centers? Interview with David Gorbet."},"content":{"rendered":"<blockquote><p><strong>&#8220;The real problem here is the word \u201csilo.\u201d To answer today\u2019s data challenges requires a holistic approach. Your storage, network and compute need to work together.&#8221;&#8211;David Gorbet.<\/strong><\/p><\/blockquote>\n<p>What are the challenges for modern data centers? On this topic I have interviewed <strong>David Gorbet<\/strong>, Vice President of Engineering at <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.marklogic.com\/');\"  href=\"http:\/\/www.marklogic.com\/\" target=\"_blank\">MarkLogic<\/a>.<br \/>\nRVZ<\/p>\n<p><strong>Q1. Data centers are evolving to meet the demands and complexities imposed by increasing business requirements. What are the main challenges?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: The biggest business challenge is the rapid pace of change in both available data and business requirements. It\u2019s no longer acceptable to spend years designing a data application, or the infrastructure to run it. You have to be able to iterate on functionality quickly. This means that your applications need to be developed in a much more agile manner, but you also need to be able to reallocate your infrastructure dynamically to the most pressing needs. In the era of Big Data this problem is exacerbated. The increasing volume and complexity of data under management is stressing both existing technologies and IT budgets. 
It\u2019s not just a matter of scale, although traditional \u201cscale-up\u201d technologies do become very expensive as data volumes grow. It\u2019s also a matter of complexity of data. Today a lot of data has a mix of structured and unstructured components, and the traditional solution to this problem is to split the structured components into an RDBMS, and use a search technology for the unstructured components. This creates additional complexity in the infrastructure, as different technology stacks are required for what really should be components of the same data.<\/p>\n<p>Traditional technologies for data management are not agile. You have to spend an inordinate amount of time designing schemas and planning indexing strategies, both of which require pretty much full knowledge of the data and query patterns you need in order to provide the application value. This has to be done before you can even load data. On the infrastructure side, even if you\u2019ve embraced cloud technologies like virtualization, it\u2019s unlikely you\u2019re able to make good use of them at the data layer. Most database technologies are not architected to allow elastic expansion or contraction of capacity or compute power, which makes it hard to achieve many of the benefits (and cost savings) of cloud technologies.<\/p>\n<p>To solve these problems you need to start thinking differently about your data center strategy. You need to be thinking about a data-centered data center, versus today\u2019s more application-centered model.<\/p>\n<p><strong>Q2. You talked about a &#8220;data-centered&#8221; data center. What is it, and what is the difference with respect to a classical data warehouse?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: To understand what I mean by a \u201cdata-centered\u201d data center, you have to think about the alternative, which is more application-centered. 
Today, if you have data that\u2019s useful, you build an application or put it in a data warehouse to unlock its value. These are database applications, so you need to build out a database to power them. This database needs a schema, and that schema is optimized for the application. To build this schema, you need to understand both the data you\u2019ll be using, and the queries that the application requires.<br \/>\nSo you have to know in advance everything the application is going to do before you can build anything. What\u2019s more, you then have to ETL this data from wherever it lives into the application-specific database.<\/p>\n<p>Now, if you want another application, you have to do the same thing. Pretty soon, you have hundreds of data stores with data duplicated all over the place. Actually, it\u2019s not really duplicated; it\u2019s data derived from other data, because as you ETL the data you change its form, losing some of the context and combining what\u2019s left with bits of data from other sources. That\u2019s even worse than straight-up duplication because provenance is seldom retained through this process, so it\u2019s really hard to tell where data came from and trace it back to its source. Now imagine that you have to correct some data.<br \/>\nCan you be sure that the correction flowed through to every downstream system? Or what if you have to delete data due to a privacy issue, or change security permissions on data? Even with \u201csmall data\u201d this is complicated, but it\u2019s much harder and costlier with high volumes of data.<\/p>\n<p>A \u201cdata-centered\u201d data center is one that is focused on the data, its use, and its governance through its lifecycle as the primary consideration. It\u2019s architected to allow a single management and governance model, and to bring the applications to the data, rather than copying data to the applications. 
With the right technologies, you can build a data-centered data center that minimizes all the data duplication, gives you consistent data governance, enables flexibility both in application development over the data and in scaling up and down capacity to match demand, allowing you to manage your data securely and cost-effectively throughout its lifecycle.<\/p>\n<p><strong>Q3. Data center resources are typically stored in three silos: compute, storage and network: is this a problem?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: It depends on your technology choices. Some data management technologies require direct-attached storage (<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Direct-attached_storage');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Direct-attached_storage\" target=\"_blank\">DAS<\/a>), so obviously you can\u2019t manage storage separately with that kind of technology. Others can make use of either DAS or shared storage like <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Storage_area_network');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Storage_area_network\" target=\"_blank\">SAN<\/a> or <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Network-attached_storage');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Network-attached_storage\" target=\"_blank\">NAS<\/a>.<br \/>\nWith the right technology, it\u2019s not necessarily a problem to have storage managed independently from compute.<br \/>\nThe real problem here is the word \u201csilo.\u201d To answer today\u2019s data challenges requires a holistic approach. Your storage, network and compute need to work together.<\/p>\n<p>Your question could also apply to application architectures. Traditionally, applications are built in a three-tiered architecture, with a DBMS for data management, an application server for business logic, and a front-end client where the UI lives. 
There are very good reasons for this architecture, and I believe it\u2019s likely to be the predominant model for years to come. But even though business logic is supposed to reside in the app server, every enterprise DBMS supports stored procedures, and these are commonly used to leverage compute power near the data for cases where it would be too slow and inefficient to move data to the middle tier. Increasingly, enterprise DBMSes also have sophisticated built-in functions (and in many cases user-defined functions) to make it easy to do things that are most efficiently done right where the data lives. Analytic aggregate calculations are a good example of this. Compute doesn\u2019t just reside in the middle tier.<\/p>\n<p>This is nothing new, so why am I bringing it up? Because as data volumes grow larger, the problem of moving data out of the DBMS to do something with it is going to get a lot worse. Consider for example the problem faced by the <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.cancer.gov');\"  href=\"http:\/\/www.cancer.gov\" target=\"_blank\">National Cancer Institute<\/a>. The current model for institutions wanting to do research based on genomic data is to download a data set and analyze it. But by the end of 2014, the <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/ncip.nci.nih.gov\/blog\/computational-needs-for-large-scale-data-analysis-towards-a-cancer-knowledge-cloud\/');\"  href=\"http:\/\/ncip.nci.nih.gov\/blog\/computational-needs-for-large-scale-data-analysis-towards-a-cancer-knowledge-cloud\/\">Cancer Genome Atlas<\/a> is expected to have grown from less than 500 TB to 2.5 PB. Just downloading 2.5 PB, even over a 10-gigabit network, would take almost a month.<\/p>\n<p>The solution? Bring more compute to the data. The implication? Twofold: First, methods for narrowing down data sets prior to acting on them are critical. 
This is why search technology is fast becoming a key feature of a next-generation DBMS. Search is the query language for unstructured data, and if you have complex data with a mix of structured and unstructured components, you need to be able to mix search and query seamlessly. Second, DBMS technologies need to become much more powerful so that they can execute sophisticated programs and computations efficiently where the data lives, scoped in real-time to a search that can narrow the input set down significantly. That\u2019s the only way this stuff is going to get fast enough to happen in real-time. Another way of putting this is that the \u201cM\u201d in DBMS is going to increase in scope. It\u2019s not enough just to store and retrieve data. Modern DBMS technology needs to be able to do complex, useful computations on it as well.<\/p>\n<p><strong>Q4. How do you build such a &#8220;data-centered&#8221; data center?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: First you need to change your mindset. Think about the data as the center of everything. Think about managing your data in one place, and bringing the application to the data by exposing data services off your DBMS. The implications for how you architect your systems are significant. Think service-oriented architectures and continuous deployment models.<\/p>\n<p>Next, you need the right technology stack. One that can provide application functionality for transactions, search and discovery, analytics, and batch computation with a single governance and scale model. You need a storage system that gives great SLAs on high-value data and great TCO on lower-value data, without ETL. 
You need the ability to expand and contract compute power to serve the application needs in real time without downtime, and to run this infrastructure on premises or in the cloud.<\/p>\n<p>You need the ability to manage data throughout its lifecycle, to take it offline for cost savings while leaving it available for batch analytics, and to bring it back online for real-time search, discovery or analytics within minutes if necessary. To power applications, you need the ability to create powerful, performant and secure data services and expose them right from where the data lives, providing the data in the format needed by your application on the fly.<br \/>\nWe call this <em>\u201cschema on read.\u201d<\/em><\/p>\n<p>Of course all this has to be enterprise class, with high availability, disaster recovery, security, and all the enterprise functionality your data deserves, and it has to fit in your shrinking IT budget. Sounds impossible, but the technology exists today to make this happen.<\/p>\n<p><strong>Q5. For what kind of mission critical apps is such a &#8220;data-centered&#8221; data center useful?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: If you have a specific application that uses specific data, and you won\u2019t need to incorporate new data sources to that application or use that data for another application, then you don\u2019t need a data-centered data center. Unfortunately, I\u2019m having a hard time thinking of such an application. Even the dull line of business apps don\u2019t stand alone anymore. 
The data they create and maintain is sent to a data warehouse for analysis.<br \/>\nThe new mindset is that all data is potentially valuable, and that isn\u2019t just restricted to data created in-house.<br \/>\nMore and more data comes from outside the organization, whether in the form of reference data, social media, linked data, sensor data, log data\u2026 the list is endless.<\/p>\n<p>A data-centered data center strategy isn\u2019t about a specific application or application type. It\u2019s about the way you have to think about your data in this new era.<\/p>\n<p><strong>Q6. How does Hadoop fit into this &#8220;data-centered&#8221; data center?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/hadoop.apache.org');\"  href=\"http:\/\/hadoop.apache.org\" target=\"_blank\">Hadoop<\/a> is a key enabling technology for the data-centered data center. <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/wiki.apache.org\/hadoop\/HDFS');\"  href=\"http:\/\/wiki.apache.org\/hadoop\/HDFS\" target=\"_blank\">HDFS<\/a> is a great file system for storing loads of data cheaply.<br \/>\nI think of it as the new shared storage infrastructure for \u201cbig data.\u201d Now HDFS isn\u2019t fast, so if you need speed, you may need NAS, SAN, or even DAS or SSD. But if you have a lot of data, it\u2019s going to be much cheaper to store it in HDFS than in traditional data center storage technologies. Hadoop <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/hadoop.apache.org\/docs\/r1.2.1\/mapred_tutorial.html');\"  href=\"https:\/\/hadoop.apache.org\/docs\/r1.2.1\/mapred_tutorial.html\" target=\"_blank\">MapReduce<\/a> is a great technology for batch analytics. If you want to comb through a lot of data and do some pretty sophisticated stuff to it, this is a good way to do it. The downside to MapReduce is that it\u2019s for batch jobs. 
It\u2019s not real-time.<\/p>\n<p>So Hadoop is an enabling technology for a data-centered data center, but it needs to be complemented with high-performance storage technologies for data that needs this kind of SLA, and more powerful analytic technologies for real-time search, discovery and analysis. Hadoop is not a DBMS, so you also need a DBMS with Hadoop to manage transactions, security, real-time query, etc.<\/p>\n<p><strong>Q7. What are the main challenges when designing an ETL strategy?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Extract,_transform,_load');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Extract,_transform,_load\" target=\"_blank\">ETL<\/a> is hard to get right, but the biggest challenge is maintaining it. Every app has a v2, and usually this means new queries that require new data that needs a new schema and revised ETL. ETL also just fundamentally adds complexity to a solution.<br \/>\nIt adds latency since many ETL jobs are designed to run in batches. It\u2019s hard to track provenance of data through ETL, and it\u2019s hard to apply data security and lifecycle management rules through ETL. This isn\u2019t the fault of ETL or ETL tools.<br \/>\nIt\u2019s just that the model is fundamentally complex.<\/p>\n<p><strong>Q8. With Big Data analytics you don\u2019t know in advance what data you\u2019re going to need (or get in the future). 
What is the solution to this problem?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: This is a big problem for relational technologies, where you need to design a schema that can fit all your data up front.<br \/>\nThe best approach here is to use a technology that does not require a predefined schema, and that allows you to store different entities with different schemas (or no schema) together in the same database and analyze them together.<br \/>\nA <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Document-oriented_database');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Document-oriented_database\" target=\"_blank\">document database<\/a>, which is a type of NoSQL database, is great for this, but be careful which one you choose because some NoSQL databases don\u2019t do transactions and some don\u2019t have the indexing capability you need to search and query the data effectively.<br \/>\nAnother trend is to use <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Semantic_Web');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Semantic_Web\" target=\"_blank\">Semantic Web technology<\/a>. This involves modeling data as triples, which represent assertions with a subject, a predicate, and an object.<br \/>\nLike <em>\u201cThis derivative (subject) is based on (predicate) this underlying instrument (object).\u201d<\/em><br \/>\nIt turns out you can model pretty much any data that way, and you can invent new relationships (predicates) on the fly as you need them.<br \/>\nNo schema required. It\u2019s also easy to relate data entities together, since triples are ideal for modeling relationships. The challenge with this approach is that there\u2019s still quite a bit of thought required to figure out the best way to represent your data as triples. 
To really make it work, you need to define rules about what predicates you\u2019re going to allow and what they mean so that data is modeled consistently.<\/p>\n<p><strong>Q9. What is the cost to analyze a terabyte of data?<\/strong><\/p>\n<p><strong>David Gorbet<\/strong>: That depends on what technologies you\u2019re using, and what <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Service-level_agreement');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Service-level_agreement\" target=\"_blank\">SLAs<\/a> are required on that data.<br \/>\nIf you\u2019re ingesting new data as you analyze, and you need to feed some of the results of the analysis back to the data in real time, for example if you\u2019re analyzing risk on derivatives trades before confirming them, and executing business rules based on that, then you need fast disk, a fair amount of compute power, replicas of your data for <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/High-availability_cluster');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/High-availability_cluster\" target=\"_blank\">HA failover<\/a>, and additional <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Replication_(computing)');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Replication_(computing)\" target=\"_blank\">replicas<\/a> for DR. Including compute, this could cost you about $25,000\/TB.<br \/>\nIf your data is read-only and your analysis does not require high-availability, for example a compliance application to search those aforementioned derivatives transactions, you can probably use cheaper, more tightly packed storage and less powerful compute, and get by with about $4,000\/TB. 
If you\u2019re doing mostly batch analytics and can use HDFS as your storage, you can do this for as low as $1,500\/TB.<\/p>\n<p>This wide disparity in prices is exactly why you need a technology stack that can provide real-time capability for data that needs it, but can also provide great TCO for the data that doesn\u2019t. There aren\u2019t many technologies that can work across all these data tiers, which is why so many organizations have to ETL their data out of their transactional system to an analytic or archive system to get the cost savings they need. The best solution is to have a technology that can work across all these storage tiers and can manage migration of data through its lifecycle across these tiers seamlessly.<br \/>\nAgain, this is achievable today with the right technology choices.<\/p>\n<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br \/>\n<strong>David Gorbet<\/strong><em>, Vice President, Engineering, <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.marklogic.com');\"  href=\"http:\/\/www.marklogic.com\" target=\"_blank\">MarkLogic<\/a>.<\/em><br \/>\n<em>David brings two decades of experience bringing to market some of the highest-volume applications and enterprise software in the world. David has shipped dozens of releases of business and consumer applications, server products and services ranging from open source to large-scale online services for businesses, and twice has helped start and grow billion-dollar software products.<\/p>\n<p>Prior to MarkLogic, David helped pioneer Microsoft\u2019s business online services strategy by founding and leading the SharePoint Online team. 
In addition to SharePoint Online, David has held a number of positions at Microsoft and elsewhere, working on products including Microsoft Office, Extricity B2Bi server software, and numerous incubation products.<\/p>\n<p>David holds a Bachelor of Applied Science degree in Systems Design Engineering with an additional major in Psychology from the University of Waterloo, and an MBA from the University of Washington Foster School of Business.<\/em><\/p>\n<p><strong>Related Posts<\/strong><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/blog\/2013\/09\/on-linked-data-interview-with-john-goodwin\/');\"  href=\"http:\/\/www.odbms.org\/blog\/2013\/09\/on-linked-data-interview-with-john-goodwin\/\" target=\"_blank\">On Linked Data. Interview with John Goodwin. ODBMS Industry Watch September 1, 2013<\/a><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/blog\/2013\/08\/on-nosql-interview-with-rick-cattell\/');\"  href=\"http:\/\/www.odbms.org\/blog\/2013\/08\/on-nosql-interview-with-rick-cattell\/\" target=\"_blank\">On NoSQL. Interview with Rick Cattell. ODBMS Industry Watch August 19, 2013<\/a><\/p>\n<p><strong>Resources<\/strong><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/03\/loss-zovn-2\/');\"  href=\"http:\/\/www.odbms.org\/2014\/03\/loss-zovn-2\/\" target=\"_blank\">Got Loss? Get zOVN!<\/a><br \/>\n<em>Authors:<\/em> Daniel Crisan, Robert Birke, Gilles Cressier, Cyriel Minkenberg and Mitch Gusat. IBM Research \u2013 Zurich Research Laboratory.<br \/>\n<em>Abstract:<\/em> Datacenter networking is currently dominated by two major trends. 
One aims toward lossless, flat layer-2 fabrics based on Converged Enhanced Ethernet or InfiniBand, with benefits in efficiency and performance.<\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/03\/f1-distributed-sql-database-scales\/');\"  href=\"http:\/\/www.odbms.org\/2014\/03\/f1-distributed-sql-database-scales\/\" target=\"_blank\">F1: A Distributed SQL Database That Scales<\/a><br \/>\n<em>Authors:<\/em> Jeff Shute, Radek Vingralek, Eric Rollins, Stephan Ellner, Traian Stancescu, Bart Samwel, Mircea Oancea, John Cieslewicz, Himani Apte, Ben Handy, Kyle Littlefield, Ian Rae*. Google, Inc., *University of Wisconsin-Madison<br \/>\n<em>Abstract:<\/em> F1 is a distributed relational database system built at Google to support the AdWords business. <\/p>\n<p><strong>Events<\/strong><\/p>\n<p>David Gorbet will be speaking at <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/03\/marklogic-world-2014-san-francisco\/');\"  href=\"http:\/\/www.odbms.org\/2014\/03\/marklogic-world-2014-san-francisco\/\" target=\"_blank\">MarkLogic World in San Francisco<\/a> from April 7-10, 2014.<\/p>\n<p><strong>ODBMS.org on Twitter: <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/twitter.com\/odbmsorg');\"  href=\"https:\/\/twitter.com\/odbmsorg\" target=\"_blank\"><strong> @odbmsorg<\/strong><\/a><\/strong> <\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;The real problem here is the word \u201csilo.\u201d To answer today\u2019s data challenges requires a holistic approach. Your storage, network and compute need to work together.&#8221;&#8211;David Gorbet. What are the challenges for modern data centers? On this topic I have interviewed David Gorbet, Vice President of Engineering at MarkLogic. RVZ Q1. 
Data centers are evolving [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[35,66,97,102,124,145,239,355,358,412,413,499,527,631],"_links":{"self":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3127"}],"collection":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/comments?post=3127"}],"version-history":[{"count":11,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3127\/revisions"}],"predecessor-version":[{"id":3152,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3127\/revisions\/3152"}],"wp:attachment":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/media?parent=3127"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/categories?post=3127"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/tags?post=3127"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}