On Special Purpose Silicon. Q&A with Michael Hay

Michael Hay, Vice President & Chief Engineer at Hitachi Vantara

Q1. What is special purpose silicon?

It is the application of non-traditional CPU silicon technologies to problems related to data processing, network processing, artificial intelligence, etc. Here are a few examples of this trend: Amazon's EC2 F1 service provides access to FPGAs for user programming; Microsoft's Project Catapult uses FPGAs for internal network processing, including encryption acceleration; Google's Tensor Processing Units (TPUs) are leveraged to save on power consumption; Intel now has four distinct chip families (including Xeon), allowing consumers to use the right tool for the right job; Apple's recent A11 Bionic system-on-chip has special purpose elements to speed up machine learning; and so on. While a variety of chip types have been on the market for a long time, there is something unique here: these systems are becoming both more visible and easier to use for regular software developers. Perhaps Microsoft's statement about FPGAs symbolizes this point best: "With the Configurable Clouds design, reconfigurable logic becomes a first-class resource in the datacenter, and over time may even be running more computational work than the data center's CPUs." (Register Article)

Q2. What are the main differences in having intensive processing executed by general purpose compute cores vs special purpose pipelines?

There are many potential differences. Examples include dramatically increased performance (think 20x, 50x, and in some cases more than 100x), better functionality than is possible in pure software on general purpose CPUs, and a decreased footprint compared to a solution based purely on general purpose CPUs. But these are just technological differences that enable material business improvements. In the case of Microsoft, they've been able to move computing functions from their CPUs, which they sell as part of their Azure service, onto smart NICs.

There are two effects: 1.) More CPU cycles can be monetized, 2.) Fewer physical nodes are needed for the same effort.
In both cases, if you think about it, this can lead to more profitable revenue generation. While this intelligence has been gathered from conferences and informal face-to-face discussions, a better-documented example is JPMC's use of Maxeler's toolchains and systems to speed up processing for their business (see: JP Morgan…). In this case they went from an 8–12 hour processing time for Collateralized Debt Obligations down to 4 minutes, which is at least a 120x improvement. This enabled JPMC to run multiple scenarios per day and therefore conduct business with increased insight.
So, I suppose that the main difference is the ability to resolve business problems better than with general purpose CPUs alone.
And that's just it: it isn't about CPUs or Special Purpose Processing; it is really an "and" situation, combining both technologies.
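The speedup figure quoted above is easy to verify with a quick calculation, taking the conservative end of the reported 8–12 hour window:

```python
def speedup(old_minutes: float, new_minutes: float) -> float:
    """Return the speedup factor from the old to the new processing time."""
    return old_minutes / new_minutes

# JPMC's reported CDO run times: 8-12 hours before, 4 minutes after.
low = speedup(8 * 60, 4)    # conservative end of the range
high = speedup(12 * 60, 4)  # upper end of the range
print(low, high)            # 120.0 180.0
```

So "at least 120x" holds even for the shortest of the original runs; the longest runs improved by 180x.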

Q3. Is it possible to engineer the processor to do what the software really needs?

Yes, and this is getting easier every day with tools like MaxJ from Maxeler; MyHDL, a Python framework that eases FPGA programming; NVIDIA's CUDA for GPUs; and TensorFlow. Libraries are also wrapping accelerators, enabling programmers to execute logic on Special Purpose Processors without even knowing it. However, when considering the average software engineer as the target audience, the programming languages, development environments, and design tools for FPGAs, GPUs, DSPs, AI accelerators, and so on are decades behind the tooling for general purpose CPUs. This makes the investment in coding logic to execute on these non-traditional processors higher than for general purpose CPUs. As a result there is still hesitation to use them, and we're right back at the answer to the previous question: it requires a business problem that would benefit dramatically from the application of Special Purpose Processors.
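A minimal sketch of the "library wraps the accelerator" pattern described above: the caller just invokes `multiply()`, and the library decides whether an accelerator backend is available, falling back to the CPU if not. All names here (`AcceleratorUnavailable`, `_gpu_multiply`) are hypothetical illustrations, not a real vendor API.

```python
class AcceleratorUnavailable(Exception):
    """Raised when no special purpose processor can take the job."""

def _gpu_multiply(a, b):
    # Placeholder for a call into a real accelerator runtime
    # (e.g. a CUDA kernel launch); here we pretend none is present.
    raise AcceleratorUnavailable

def _cpu_multiply(a, b):
    # Plain general purpose CPU fallback.
    return [x * y for x, y in zip(a, b)]

def multiply(a, b):
    """Element-wise multiply; transparently offloads when possible."""
    try:
        return _gpu_multiply(a, b)
    except AcceleratorUnavailable:
        return _cpu_multiply(a, b)

print(multiply([1, 2, 3], [4, 5, 6]))  # falls back to the CPU path
```

The point is that the programmer's code is identical whether or not the accelerator is present, which is exactly why such wrappers lower the barrier to entry.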

Q4. What are the trends for accelerating software on special purpose silicon?

Very simple: Lowering the barrier to entry for software engineers! On a more macro level there are two major trends for Special Purpose Processing:

1.) The M&A moves by Intel to diversify their chip portfolio, which lags the needs of the Cloud Majors, signal an increased pace of simpler tools for developing on Special Purpose Processors. In a real sense it is the Cloud Majors who are now setting the trends in Enterprise Computing infrastructures, and they are clearly innovating in every part of the stack possible, including data center design, chip design, package design, software design, etc.

2.) On the consumer side it used to be that Wintel ruled the roost, but that is being challenged largely due to Apple's iPhone phenomenon. With Apple, people on the consumer side of tech are now free to innovate in every area, not just software.
For Apple, innovations happen in packaging, custom System on Chip design, batteries, displays, etc.

Given these two macro trends, a likely collision between them will result in a mountain of innovation. I believe that, coupled with better tooling, this will accelerate the adoption of acceleration technology, and perhaps we'll even stop talking about these elements as Special Purpose Processors. Instead we'll see computers as the synthesis of multiple types of processing elements, and it will be natural to target parts of programs to different elements so that the maximum benefit is realized.

Q5. Some database vendors have taken an innovative Software in Silicon (SWiS) approach. What is your take on that?

It is a good approach to getting more users of acceleration technology into the market. Hitachi has been working on this approach, first with our VSP Gx00 unified systems that implement an FPGA-accelerated NAS device, then with an FPGA-accelerated RDBMS prototype, and with work to make Hadoop much, much faster. Transparent communication about these kinds of stacks is needed to help regular IT buyers in companies become comfortable purchasing things that are either more than Intel or that leverage the whole of Intel's chip portfolio.

Q6. What software applications will likely benefit most from special purpose silicon?

There are many application categories; here are some examples: storage systems, security defenses, databases, data integration, stream processing, machine learning/AI, and so on. Again, I think that as the tools become much easier to use we'll be asking a different question: which applications use only one type of processing technology? Today, because the tools aren't as mature as those for standard CPUs, you have to expect an order-of-magnitude benefit of some kind and a close tie to a business problem that was perhaps unsolvable before.
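That order-of-magnitude rule of thumb can be sketched as a toy adoption heuristic. The 10x threshold and the input figures below are hypothetical illustrations, not numbers from the interview:

```python
def hours_saved(cpu_hours_per_run: float, runs: int, speedup: float) -> float:
    """CPU hours reclaimed per period if each run becomes `speedup` times faster."""
    return cpu_hours_per_run * runs * (1 - 1 / speedup)

def worth_porting(speedup: float, threshold: float = 10.0) -> bool:
    """Crude gate: only take on accelerator toolchain costs for ~10x or better."""
    return speedup >= threshold

print(worth_porting(120))                  # True: a JPMC-scale speedup
print(worth_porting(3))                    # False: not worth the tooling cost
print(round(hours_saved(10, 30, 120), 1))  # 297.5 hours per period
```

A real decision would also weigh development cost and hardware price, but the shape of the argument is the same: the less mature the tooling, the larger the benefit must be to justify the port.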
