How Facebook Handles More Than 300 Petabytes Of Data

Facebook offered some insight into how it handles the more than 300 petabytes of data it stores for its 1.19 billion monthly active users, providing some details on Presto, an interactive query system it created and is open-sourcing, in a note on the Facebook Engineering page.

Presto was designed to help Facebook process data queries with lower latency, meaning faster results from an end-user standpoint. A “small team” in the social network’s data-infrastructure group (Martin Traverso, Dain Sundstrom, David Phillips, Eric Hwang, Nileema Shingte, and Ravi Murthy) launched the project in the fall of 2012.

The interactive query system is now open-sourced, and developers can access the code and other information via the Presto site or GitHub.

Facebook offered more background on Presto in its note:

The Presto system is implemented in Java because it’s fast to develop, has a great ecosystem, and is easy to integrate with the rest of the data-infrastructure components at Facebook that are primarily built in Java. Presto dynamically compiles certain portions of the query plan down to byte code, which lets the JVM optimize and generate native machine code. Through careful use of memory and data structures, Presto avoids typical issues of Java code related to memory allocation and garbage collection. (In a later post, we will share some tips and tricks for writing high-performance Java system code and the lessons learned while building Presto.)
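
For readers who want a concrete sense of what compiling a query down to JVM-friendly code buys, here is a minimal Java sketch. It is not Presto's actual code-generation machinery (all names below are invented for illustration); it simply contrasts an interpreted expression tree with the same predicate collapsed into a single method that the JVM's just-in-time compiler can inline and optimize.

    // Hypothetical illustration only (not Presto's code): an "interpreted"
    // expression tree versus the same predicate written as one compact method.
    import java.util.function.LongPredicate;

    public class CompiledVsInterpreted {

        // Interpreted style: a tree of expression nodes evaluated per row,
        // paying a virtual call at every node for every row.
        interface Expr { long eval(long row); }

        static final Expr COLUMN = row -> row;       // stand-in for a column read
        static final Expr CONST_100 = row -> 100L;   // constant node
        static final Expr GREATER = row -> COLUMN.eval(row) > CONST_100.eval(row) ? 1 : 0;

        // Compiled style: the whole predicate as one method. Presto generates
        // bytecode like this at runtime; here it is simply written by hand.
        static final LongPredicate COMPILED = row -> row > 100L;

        public static void main(String[] args) {
            long interpreted = 0;
            long compiled = 0;
            for (long row = 0; row < 1_000_000; row++) {
                if (GREATER.eval(row) == 1) interpreted++;  // walks the node tree
                if (COMPILED.test(row)) compiled++;         // single comparison
            }
            System.out.println(interpreted + " and " + compiled + " rows matched");
        }
    }

Both loops produce the same answer; the difference is how much work the JVM can optimize away once the predicate is a single method rather than a tree of objects.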

Extensibility is another key design point for Presto. During the initial phase of the project, we realized that large data sets were being stored in many other systems in addition to HDFS. Some data stores are well-known systems such as HBase, but others are custom systems such as the Facebook News Feed back end. Presto was designed with a simple storage abstraction that makes it easy to provide SQL query capability against these disparate data sources. Storage plugins (called connectors) only need to provide interfaces for fetching metadata, getting data locations, and accessing the data itself. In addition to the primary Hive/HDFS back end, we have built Presto connectors to several other systems, including HBase, Scribe, and other custom systems.
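
The connector abstraction is easier to picture with a small example. The sketch below uses invented interface names, not Presto's actual storage-plugin interfaces, but it mirrors the three responsibilities Facebook describes: fetching metadata, getting data locations, and accessing the data itself.

    // Hypothetical sketch (invented names, not Presto's real SPI) of the three
    // responsibilities a storage connector needs to cover.
    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class ConnectorSketch {

        interface Connector {
            List<String> listTables();                              // metadata
            List<String> dataLocations(String table);               // where the data lives
            Iterator<Object[]> readRows(String table, String loc);  // the data itself
        }

        // Toy in-memory "back end" standing in for HDFS, HBase, or a custom store.
        static class InMemoryConnector implements Connector {
            public List<String> listTables() {
                return Arrays.asList("events");
            }
            public List<String> dataLocations(String table) {
                return Arrays.asList("shard-1", "shard-2");
            }
            public Iterator<Object[]> readRows(String table, String loc) {
                return Arrays.asList(
                    new Object[] {loc, "user_login"},
                    new Object[] {loc, "page_view"}
                ).iterator();
            }
        }

        public static void main(String[] args) {
            Connector connector = new InMemoryConnector();
            for (String table : connector.listTables()) {
                for (String loc : connector.dataLocations(table)) {
                    Iterator<Object[]> rows = connector.readRows(table, loc);
                    while (rows.hasNext()) {
                        System.out.println(Arrays.toString(rows.next()));
                    }
                }
            }
        }
    }

A production connector would implement those methods against HDFS, HBase, or a custom system such as the News Feed back end rather than an in-memory list, but the query engine only ever sees the interface.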

We are actively working on extending Presto’s functionality and improving performance. In the next few months, we will remove restrictions on join and aggregation sizes and introduce the ability to write output tables. We are also working on a query “accelerator” by designing a new data format that is optimized for query processing and avoids unnecessary transformations. This feature will allow hot subsets of data to be cached from the back-end data store, and the system will transparently use cached data to “accelerate” queries. We are also working on a high-performance HBase connector.
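
Facebook hasn't published details of that accelerator yet, but the caching pattern it hints at is a familiar one. Here is a generic read-through cache sketch in Java, purely for illustration and not Facebook's design: hot keys are served from a local copy, and everything else falls back to the back-end store and is remembered for the next query.

    // Generic read-through cache sketch: serve hot data from a fast local copy,
    // fall back to the backing store on a miss, and cache the result.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    public class ReadThroughCache {

        private final Map<String, String> hotData = new HashMap<>();
        private final Function<String, String> backingStore;

        public ReadThroughCache(Function<String, String> backingStore) {
            this.backingStore = backingStore;
        }

        public String read(String key) {
            // Use the cached copy when present; otherwise fetch and remember it.
            return hotData.computeIfAbsent(key, backingStore);
        }

        public static void main(String[] args) {
            ReadThroughCache cache = new ReadThroughCache(key -> {
                System.out.println("fetching " + key + " from back-end store");
                return "value-for-" + key;
            });
            System.out.println(cache.read("daily_active_users")); // slow path, then cached
            System.out.println(cache.read("daily_active_users")); // served from cache
        }
    }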

Readers: Can you even fathom the amount of data Facebook processes on a daily basis?

[Diagram: Presto's pluggable back ends]
