On MemSQL 6. Q&A with Eric Hanson
Q1. What is wrong with precomputing summary aggregates to gain speed?
Precomputing summary aggregates has several problems associated with it, including:
- Someone has to decide which aggregates to maintain and implement the logic to maintain them
- You have to update the aggregates when you load or update data, which can take time and potentially result in out-of-date summaries
- If you change your query slightly, e.g. by adding a new group-by column, you may no longer be able to use the aggregate, and your query can slow down by orders of magnitude because it has to process the fine-grained fact data.
Using a fast, distributed execution engine with compilation, vectorization, and columnstore technology (as in MemSQL), you can calculate summary aggregates on the fly from raw data in every query, which eliminates the problems listed above. Precomputed aggregates can give great performance when you have exactly the right aggregate for your query, but they come with all of these drawbacks.
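The third problem above can be made concrete with a toy Python sketch (the table, column names, and values are invented for illustration): a rollup precomputed by one column cannot answer a query that groups by an additional column, so that query must go back to the raw rows.

```python
# Toy fact rows: (region, product, amount). Invented data for illustration.
rows = [
    ("east", "widget", 10),
    ("east", "gadget", 5),
    ("west", "widget", 7),
]

# Precomputed summary: totals by region only.
by_region = {}
for region, product, amt in rows:
    by_region[region] = by_region.get(region, 0) + amt

# A query grouping by (region, product) cannot be derived from by_region;
# it has to be recomputed from the fine-grained rows.
by_region_product = {}
for region, product, amt in rows:
    key = (region, product)
    by_region_product[key] = by_region_product.get(key, 0) + amt
```

Computing every aggregate on the fly from the raw rows, as described above, avoids maintaining `by_region` at all.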
Q2. Can you explain how MemSQL 6 executes single-table group-by/aggregate queries on columnstore data?
For group-by/aggregate queries on columnstore tables, when the number of groups is relatively small (less than a few thousand) per million-row segment, MemSQL uses the encoded value of the group-by column directly as an integer index into an array of aggregates. If no entry exists for that index, one is added; if an entry is found, its aggregate value is updated. The local aggregates for each segment are then passed on and “rolled up” at the partition, leaf, or aggregator level, as needed. The segment-level aggregation is where MemSQL operates directly on encoded data using vectorization and SIMD, leading to up to 80x faster processing than non-vectorized operations on decoded data.
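A minimal Python sketch of the segment-level idea (illustrative only, not MemSQL's implementation, and without the vectorization/SIMD that makes the real thing fast): because dictionary codes are small integers, each code can index an array slot directly, with no hashing or decoding per row.

```python
def aggregate_segment(codes, values, dict_size):
    """Sum `values` per group, where `codes` are the dictionary-encoded
    group-by column for one segment and `dict_size` is the dictionary size.
    The encoded value itself is the array index, so no hash lookup is needed."""
    sums = [None] * dict_size  # one slot per possible encoded value
    for code, v in zip(codes, values):
        if sums[code] is None:   # no entry yet: add one
            sums[code] = v
        else:                    # entry found: update its aggregate
            sums[code] += v
    return sums
```

Per-segment arrays like this can then be merged slot by slot to roll results up to the partition and aggregator levels.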
You can find more details here in our documentation: https://docs.memsql.com/concepts/v6.0/understanding-ops-on-encoded-data/.
Q3. Is MemSQL always performing query processing operations directly on encoded data in columnstore tables?
No. We only perform operations on encoded data at the segment level when processing a columnstore table for a single-table group-by/aggregate query, when there are fewer than a few thousand groups per segment. For a small number of distinct groups, this tends to work really well. For larger numbers of groups, we revert to operations on decoded data. At the partition and aggregator level, we operate on decoded data as well.
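The choice described above can be sketched as a simple dispatch (illustrative Python; the `GROUP_LIMIT` value and function names are invented, not MemSQL's actual cutoff or API): small dictionaries get the array-on-encoded-codes path, large ones fall back to a hash map over decoded values.

```python
GROUP_LIMIT = 4096  # invented threshold standing in for "a few thousand"

def aggregate(codes, values, decode, dict_size):
    """Sum `values` per group for one segment. `decode[c]` maps an
    encoded code back to its original value."""
    if dict_size <= GROUP_LIMIT:
        # Few groups: aggregate on encoded codes via direct array indexing.
        sums = [0] * dict_size
        seen = [False] * dict_size
        for c, v in zip(codes, values):
            sums[c] += v
            seen[c] = True
        return {decode[c]: sums[c] for c in range(dict_size) if seen[c]}
    # Many groups: revert to a hash map keyed by decoded values.
    out = {}
    for c, v in zip(codes, values):
        k = decode[c]
        out[k] = out.get(k, 0) + v
    return out
```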
Q4. How do you optimise multi-table queries?
To make multi-table queries run fast, we first choose a good query plan using rules, search, and a cost model. We get fast execution performance by using parallel distributed join algorithms and compilation. We support both broadcast and shuffle joins, as well as joins of a distributed table with a smaller replicated table (a “reference table”). Filters are pushed below joins. Bloom filters from hash builds are sometimes used to speed up scans of larger fact tables by eliminating non-joining rows early. Using code generation, we can spend fewer instructions per row than systems that interpret the handling of each row. In future releases, we will improve multi-table query performance by adding more operations on encoded data and enhancements to reduce shuffle cost.
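The Bloom-filter technique mentioned above can be sketched in Python (a toy filter with invented sizes and hashing, not MemSQL's implementation): the filter is built alongside the hash table on the smaller side of the join, and the fact-table scan probes it to discard non-joining rows before the more expensive hash lookup.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter; real engines use tuned sizes and faster hashes."""
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, key):
        for i in range(self.nhashes):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means "definitely absent"; True may be a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

def hash_join(dim_rows, fact_rows):
    """Join (key, payload) rows: build on the dimension side,
    probe with the fact side, pruning via the Bloom filter first."""
    build, bf = {}, BloomFilter()
    for key, payload in dim_rows:
        build[key] = payload
        bf.add(key)
    out = []
    for key, payload in fact_rows:
        # Bloom probe eliminates most non-joining fact rows early.
        if bf.might_contain(key) and key in build:
            out.append((key, build[key], payload))
    return out
```

In a real distributed engine the filter would be pushed down into the columnstore scan of the fact table; here the final `key in build` check guards against the filter's false positives.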
Eric Hanson is a principal product manager at MemSQL, responsible for query processing, extensibility, and geo-spatial feature areas. He’s a PhD graduate of UC Berkeley, was an Air Force officer, a professor of computer science at the University of Florida during the 1990s, and a principal program manager and developer in the SQL Server team at Microsoft from 2002-2016. He was named a Hive committer for contributions to Stinger. He’s a technology expert on data warehousing, column stores, and vectorized query execution.