An open source unified execution engine

- Meta is introducing Velox, an open source unified execution engine aimed at accelerating data management systems and streamlining their development.
- Velox is under active development. Experimental results from our paper published at the International Conference on Very Large Data Bases (VLDB) 2022 show how Velox improves efficiency and consistency in data management systems.
- Velox helps consolidate and unify data management systems in a manner we believe will benefit the industry. We’re hoping the larger open source community will join us in contributing to the project.
Meta’s infrastructure plays an important role in supporting our products and services. Our data infrastructure ecosystem is composed of dozens of specialized data computation engines, all focused on different workloads for a variety of use cases ranging from SQL analytics (batch and interactive) to transactional workloads, stream processing, data ingestion, and more. Recently, the rapid growth of artificial intelligence (AI) and machine learning (ML) use cases within Meta’s infrastructure has led to additional engines and libraries targeted at feature engineering, data preprocessing, and other workloads for ML training and serving pipelines.
However, despite their similarities, these engines have largely evolved independently. This fragmentation has made maintaining and enhancing them difficult, especially considering that as workloads evolve, the hardware that executes these workloads also changes. Ultimately, this fragmentation results in systems with different feature sets and inconsistent semantics, reducing the productivity of data users who need to interact with multiple engines to finish tasks.
In order to address these challenges and to create a stronger, more efficient data infrastructure for our own products and the world, Meta has created and open sourced Velox. It is a novel, state-of-the-art unified execution engine that aims to speed up data management systems as well as streamline their development. Velox unifies the common data-intensive components of data computation engines while still being extensible and adaptable to different computation engines. It democratizes optimizations that were previously implemented only in individual engines, providing a framework in which consistent semantics can be implemented. This reduces work duplication, promotes reusability, and improves overall efficiency and consistency.
Velox is under active development, but it is already in various stages of integration with more than a dozen data systems at Meta, including Presto, Spark, and PyTorch (the latter through a data preprocessing library called TorchArrow), as well as other internal stream processing platforms, transactional engines, data ingestion systems and infrastructure, ML systems for feature engineering, and others.
Since it was first uploaded to GitHub, the Velox open source project has attracted more than 150 code contributors, including key collaborators such as Ahana, Intel, and Voltron Data, as well as various academic institutions. By open sourcing and fostering a community for Velox, we believe we can accelerate the pace of innovation in the data management system development industry. We hope more individuals and companies will join us in this effort.
An overview of Velox
While data computation engines may seem distinct at first, they are all composed of a similar set of logical components: a language front end, an intermediate representation (IR), an optimizer, an execution runtime, and an execution engine. Velox provides the building blocks required to implement execution engines, consisting of all the data-intensive operations executed within a single host, such as expression evaluation, aggregation, sorting, joining, and more (also commonly referred to as the data plane). Velox therefore expects an optimized plan as input and efficiently executes it using the resources available on the local host.
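To make that contract concrete, here is a minimal, hedged sketch of handing Velox an already-optimized plan fragment and executing it locally. It assumes the PlanBuilder, AssertQueryBuilder, and vector-building helpers found in the open source Velox repository’s test utilities; exact headers, namespaces, and signatures may differ between versions.

```cpp
// Hedged sketch: an engine hands Velox an already-optimized plan fragment,
// and Velox executes it on the local host. PlanBuilder, AssertQueryBuilder,
// and the vector helpers are assumed from the Velox repository's test
// utilities; exact headers and signatures may differ between versions.
#include "velox/exec/tests/utils/AssertQueryBuilder.h"
#include "velox/exec/tests/utils/PlanBuilder.h"
#include "velox/functions/prestosql/registration/RegistrationFunctions.h"
#include "velox/vector/tests/utils/VectorTestBase.h"

using namespace facebook::velox;

struct PlanDemo : public test::VectorTestBase {
  void run() {
    // Scalar functions are pluggable; register the Presto-compatible
    // package shipped with Velox so the expressions below resolve.
    functions::prestosql::registerAllScalarFunctions();

    // An in-memory batch standing in for data produced by a TableScan.
    auto data = makeRowVector(
        {"user_id", "score"},
        {makeFlatVector<int64_t>({1, 2, 3, 4}),
         makeFlatVector<double>({0.1, 0.9, 0.4, 0.7})});

    // The already-optimized plan handed to Velox: filter, then project.
    auto plan = exec::test::PlanBuilder()
                    .values({data})
                    .filter("score > 0.5")
                    .project({"user_id", "score * 100.0 AS pct"})
                    .planNode();

    // Execute the fragment locally and copy the results out.
    auto results = exec::test::AssertQueryBuilder(plan).copyResults(pool());
  }
};
```

In a full integration, the plan fragment would come from the host engine’s own optimizer (for example, a Presto coordinator) rather than being built by hand as above.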

Velox leverages numerous runtime optimizations, such as filter and conjunct reordering, key normalization for array- and hash-based aggregations and joins, dynamic filter pushdown, and adaptive column prefetching. These optimizations provide optimal local efficiency given the available knowledge and statistics extracted from incoming batches of data. Velox is also designed from the ground up to efficiently support complex data types due to their ubiquity in modern workloads. Hence, it extensively relies on dictionary encoding for cardinality-increasing and cardinality-reducing operations such as joins and filtering, while still providing fast paths for primitive data types.
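As an illustration of the dictionary encoding mentioned above, the following hedged sketch expresses the output of a cardinality-reducing operation (a filter) as a dictionary vector that points at the surviving rows of its input instead of copying them. The `allocateIndices` and `BaseVector::wrapInDictionary` helpers are assumed from the open source Velox vector library; their exact signatures may vary.

```cpp
// Hedged sketch: representing a filter's output as a dictionary over its
// input, so surviving rows become indices into the original vector rather
// than copies. Assumes BaseVector APIs from the open source Velox library.
#include "velox/vector/BaseVector.h"

using namespace facebook::velox;

VectorPtr filterAsDictionary(
    const VectorPtr& input,
    const std::vector<vector_size_t>& passingRows,
    memory::MemoryPool* pool) {
  // One index slot per surviving row.
  BufferPtr indices =
      allocateIndices(static_cast<vector_size_t>(passingRows.size()), pool);
  auto* rawIndices = indices->asMutable<vector_size_t>();
  for (size_t i = 0; i < passingRows.size(); ++i) {
    rawIndices[i] = passingRows[i]; // point back at a row of `input`
  }

  // No extra nulls are introduced; the dictionary merely subsets the rows.
  return BaseVector::wrapInDictionary(
      /*nulls=*/nullptr,
      indices,
      static_cast<vector_size_t>(passingRows.size()),
      input);
}
```

The same mechanism works in the cardinality-increasing direction (for example, join probes that match a row multiple times), where the index buffer simply repeats entries.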
The main components provided by Velox are:
- Type: a generic type system that allows developers to represent scalar, complex, and nested data types, including structs, maps, arrays, functions (lambdas), decimals, tensors, and more.
- Vector: an Apache Arrow–compatible columnar memory layout module supporting multiple encodings, such as flat, dictionary, constant, sequence/RLE, and frame of reference, in addition to a lazy materialization pattern and support for out-of-order result buffer population.
- Expression Eval: a state-of-the-art vectorized expression evaluation engine built on top of vector-encoded data, leveraging techniques such as common subexpression elimination, constant folding, efficient null propagation, encoding-aware evaluation, dictionary peeling, and memoization.
- Functions: APIs that can be used by developers to build custom functions, providing a simple (row-by-row) and a vectorized (batch-by-batch) interface for scalar functions, and an API for aggregate functions; a sketch of the simple scalar-function API follows this list.
  - A function package compatible with the popular PrestoSQL dialect is also provided as part of the library.
- Operators: implementations of common SQL operators such as TableScan, Project, Filter, Aggregation, Exchange/Merge, OrderBy, TopN, HashJoin, MergeJoin, Unnest, and more.
- I/O: a set of APIs that allows Velox to be integrated in the context of other engines and runtimes, such as:
  - Connectors: enables developers to specialize data sources and sinks for TableScan and TableWrite operators.
  - DWIO: an extensible interface providing support for encoding/decoding popular file formats such as Parquet, ORC, and DWRF.
  - Storage adapters: a byte-based extensible interface that allows Velox to connect to storage systems such as Tectonic, S3, HDFS, and more.
  - Serializers: a serialization interface targeting network communication where different wire protocols can be implemented, supporting PrestoPage and Spark’s UnsafeRow formats.
- Resource management: a collection of primitives for handling computational resources, such as CPU and memory management, spilling, and memory and SSD caching.
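As referenced in the Functions item above, the sketch below shows how a developer might add a custom scalar function using Velox’s simple (row-by-row) function API and register it under a SQL-callable name. The macro, template pattern, and `registerFunction` call follow the open source Velox documentation; this is an illustrative sketch, and details may differ across versions.

```cpp
// Hedged sketch: a custom scalar function written against Velox's "simple"
// row-by-row function API and registered under a SQL-callable name. The
// macro and registration pattern follow the open source Velox docs.
#include <algorithm>

#include "velox/functions/Macros.h"
#include "velox/functions/Registerer.h"

namespace facebook::velox {

template <typename TExec>
struct ClampFunction {
  VELOX_DEFINE_FUNCTION_TYPES(TExec);

  // Returning true marks the result as non-null.
  bool call(
      out_type<double>& result,
      const arg_type<double>& value,
      const arg_type<double>& lo,
      const arg_type<double>& hi) {
    result = std::min(std::max(value, lo), hi);
    return true;
  }
};

// Registers clamp(x, lo, hi) -> double so it can be used in expressions
// evaluated by Velox.
void registerExampleFunctions() {
  registerFunction<ClampFunction, double, double, double, double>({"clamp"});
}

} // namespace facebook::velox
```

Because functions are registered with the execution library rather than with any particular front end, the same definition becomes available to every system that embeds Velox, which is how consistent function semantics across engines can be achieved.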
Velox’s main integrations and experimental results
Beyond efficiency gains, Velox provides value by unifying the execution engines across different data computation engines. The three most popular integrations are Presto, Spark, and TorchArrow/PyTorch.
Presto — Prestissimo
Velox is being integrated into Presto as part of the Prestissimo project, where Presto Java workers are replaced by a C++ process based on Velox. The project was originally created by Meta in 2020 and is under continued development in collaboration with Ahana, along with other open source contributors.
Prestissimo provides a C++ implementation of Presto’s HTTP REST interface, including the worker-to-worker exchange serialization protocol, coordinator-to-worker orchestration, and status reporting endpoints, thereby providing a drop-in C++ replacement for Presto workers. The main query workflow consists of receiving a Presto plan fragment from a Java coordinator, translating it into a Velox query plan, and handing it off to Velox for execution.
We conducted two different experiments to explore the speedup provided by Velox in Presto. Our first experiment used the TPC-H benchmark and measured close to an order of magnitude speedup in some CPU-bound queries. We saw a more modest speedup (averaging 3-6x) for shuffle-bound queries.
Although the TPC-H dataset is a standard benchmark, it is not representative of real workloads. To explore how Velox might perform in these scenarios, we created an experiment in which we executed production traffic generated by a variety of interactive analytical tools found at Meta. In this experiment, we saw an average of 6-7x speedups in data querying, with some results increasing speedups by over an order of magnitude. You can learn more about the details of the experiments and their results in our research paper.
Prestissimo’s codebase is available on GitHub.
Spark — Gluten
Velox is also being integrated into Spark as part of the Gluten project created by Intel. Gluten allows C++ execution engines (such as Velox) to be used within the Spark environment when executing Spark SQL queries. Gluten decouples the Spark JVM and the execution engine by creating a JNI API based on the Apache Arrow data format and Substrait query plans, thus allowing Velox to be used within Spark by simply integrating with Gluten’s JNI API.
Gluten’s codebase is available on GitHub.
TorchArrow
TorchArrow is a dataframe Python library for data preprocessing in deep learning, and part of the PyTorch project. TorchArrow internally translates the dataframe representation into a Velox plan and delegates it to Velox for execution. In addition to converging the otherwise fragmented space of ML data preprocessing libraries, this integration allows Meta to consolidate execution-engine code between analytic engines and ML infrastructure. It provides a more consistent experience for ML end users, who are commonly required to interact with different computation engines to complete a particular task, by exposing the same set of functions/UDFs and ensuring consistent behavior across engines.
TorchArrow was recently released in beta mode on GitHub.
The future of database system development
Velox demonstrates that it is possible to make data computation systems more adaptable by consolidating their execution engines into a single unified library. As we continue to integrate Velox into our own systems, we are committed to building a sustainable open source community to support the project, as well as to speed up library development and industry adoption. We are also interested in continuing to blur the boundaries between ML infrastructure and traditional data management systems by unifying function packages and semantics between these silos.
Looking to the future, we believe Velox’s unified and modular nature has the potential to be beneficial to industries that use, and especially those that develop, data management systems. It will allow us to partner with hardware vendors and proactively adapt our unified software stack as hardware advances. Reusing unified and highly efficient components will also allow us to innovate faster as data workloads evolve. We believe that modularity and reusability are the future of database system development, and we hope that data companies, academia, and individual database practitioners alike will join us in this effort.
In-depth documentation about Velox and these components can be found on our website and in our research paper, “Velox: Meta’s unified execution engine.”
Acknowledgements
We would like to thank all contributors to the Velox project. A special thank-you to Sridhar Anumandla, Philip Bell, Biswapesh Chattopadhyay, Naveen Cherukuri, Wei He, Jiju John, Jimmy Lu, Xiaoxuan Meng, Krishna Pai, Laith Sakka, Bikramjeet Vig, and Kevin Wilfong from the Meta team, and to numerous community contributors, including Frank Hu, Deepak Majeti, Aditi Pandit, and Ying Su.