
14 posts tagged with "interview"


Lars Kamp

Apache Iceberg is a new table format that offers both the simplicity of SQL and separation of storage and compute. The Iceberg table format works with any compute engine, so users are not limited to working with a single engine. Popular engines (e.g., Spark, Trino, Flink, and Hive) and modern cloud warehouses (e.g., Snowflake, Redshift, and BigQuery) can work with Iceberg tables at the same time.

A table format is a layer that sits between the file format and the database. Iceberg is an abstraction layer above file formats like Parquet, Avro, and ORC, born out of necessity at Netflix. Like many other companies at the time, Netflix shifted from MPP data warehouses to the Hadoop ecosystem in the 2010s. MPP warehouses like Teradata were hitting scale limitations and becoming too expensive at Netflix's scale.

The Hadoop ecosystem abandoned the table abstraction layer in favor of scale. In Hadoop, we deal directly with file systems like HDFS. The conventional wisdom at the time was that bringing compute to storage was easier than moving the data to compute. Hadoop scales compute and disk together, which turned out to be incredibly hard to manage in the on-premise world.

Early on, Netflix shifted to the cloud and started storing data in Amazon S3 instead, which separated storage from compute. Snowflake, the cloud warehouse, also picked up on that principle, bringing back SQL semantics and tables from "old" data warehouses.

Netflix wanted both the separation of storage and compute and SQL table semantics: the ability to add, remove, and rename columns without reworking S3 paths. But rather than going with another proprietary vendor, Netflix wanted to stay with open source and open formats. And thus, Iceberg was developed and eventually donated to the Apache Software Foundation. Today, Iceberg is also in use at companies like Apple and LinkedIn.
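As a sketch of what those table semantics look like in practice, here is schema evolution on an Iceberg table through Spark SQL. The catalog, database, and table names are hypothetical, and a Spark session with an Iceberg catalog is presumed to be configured:

```python
# Schema evolution on an Iceberg table via Spark SQL -- a sketch. It assumes
# a Spark session already configured with an Iceberg catalog named "demo";
# the database and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()

# Create an Iceberg table: data files land in object storage as Parquet,
# while Iceberg tracks schema, partitions, and snapshots in metadata.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.events (
        id BIGINT,
        ts TIMESTAMP,
        user_name STRING
    ) USING iceberg
""")

# Schema changes are metadata-only operations -- no S3 paths to rework.
spark.sql("ALTER TABLE demo.db.events ADD COLUMNS (country STRING)")
spark.sql("ALTER TABLE demo.db.events RENAME COLUMN user_name TO username")
spark.sql("ALTER TABLE demo.db.events DROP COLUMN country")
```

Because any engine that implements the Iceberg spec reads the same metadata, Spark, Trino, and a cloud warehouse can all see these schema changes on the same table.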

Tabular commercializes Apache Iceberg. Working with open-source Iceberg tables still requires an understanding of object stores, distributed data processing engines, and how the various components interact. Tabular lowers the bar for adoption and removes that heavy lifting.

Jason Reid is a co-founder and heads Product at Tabular. In this episode, Jason walks us through the benefits of using an open table format like Iceberg and how it works with existing analytics infrastructure and tooling of the modern data stack like dbt.

Lars Kamp
Julia Schottenstein

dbt Labs' mission is to empower data practitioners to create and disseminate organizational knowledge with its open-source product dbt. dbt helps write and execute data transformation jobs by compiling code to SQL and running it against your cloud warehouse.

When raw data from production or SaaS apps arrives in a cloud warehouse for analysis, it's not in a usable state. Analytics engineers need to prepare, clean, join, and transform the data to match business needs. These needs could include visualizing data for a sales forecast, feeding data into a machine learning model, or preparing operational analytics with infrastructure data. The analytics engineering workflow covers all the steps from raw data extraction to data modeling and end uses like reporting or data science.

Today, over 16,000 companies use dbt. dbt has become a foundational technology for the analytics engineering workflow, which is very similar to the DevOps workflow. dbt applies software engineering principles to working with data. To "productionize" data, engineers develop, test, and integrate it—and then also provide observability and alerting once it's in production. All of this functionality is included in dbt Cloud, the commercial version of dbt.

Julia Schottenstein heads Product at dbt Labs. In this episode, Julia walks us through the evolution of dbt from a tool for data teams at start-ups to enterprise deployments where sometimes thousands of analytics engineers collaborate through dbt. We cover all aspects of the modern data stack—cloud warehouses, ETL, data pipelines, and orchestration—with an outlook on the wider use of data in the enterprise by both humans and applications:

  • dbt's semantic layer, which assigns a definition to each specific metric (e.g., revenue, customers, churn)

    The semantic layer in dbt contains the definitions for each metric, ensuring consistency and flexibility—users can slice and dice a metric along any dimension. Metrics are computed at the time of a query rather than pointing to an already materialized view.

  • Continuous integration and deployment (CI/CD) for data

    Building data pipelines is expensive, and data transformation can take a long time with large data sets and complex queries. dbt Cloud ships a purpose-built CI tool that builds the absolute minimum set of code and data to test changes.

  • How dbt works, with its directed acyclic graph (DAG)

    The DAG is a visual representation of data models and the connections between them. dbt started out with SQL for all transformations but is now also inviting other languages such as Python (see the sketch below).
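As a minimal sketch of that last point, here is what a dbt Python model can look like on a Spark-backed adapter. The upstream model names (`stg_orders`, `stg_customers`) and columns are hypothetical; `dbt.ref()` declaring the DAG edges is dbt's documented mechanism:

```python
# models/orders_enriched.py -- a sketch of a dbt Python model on a
# Spark-backed adapter. The upstream models and columns are hypothetical.
from pyspark.sql import functions as F


def model(dbt, session):
    # dbt.ref() declares a DAG edge to each upstream model and returns its
    # contents as a DataFrame, just like {{ ref() }} does in a SQL model.
    orders = dbt.ref("stg_orders")
    customers = dbt.ref("stg_customers")

    # Whatever DataFrame we return is what dbt materializes in the warehouse.
    return (
        orders.join(customers, "customer_id", "left")
              .withColumn("is_large_order", F.col("order_total") > 1000)
    )
```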

Lars Kamp
Michael Driscoll

Creating an analytics dashboard is a time-consuming process that involves stitching together many components: ELT pipelines, cloud warehouses, transformation and semantic layers, data catalogs, and a dashboard tool. The flexibility of the Modern Data Stack (MDS) also means a great deal of complexity and many design decisions.

Rill Data is on a mission to radically simplify how developers create operational dashboards. Rill offers blazing-fast dashboards that come bundled with a real-time analytical database and a modeling layer.

Michael Driscoll is the co-founder and CEO of Rill Data. In this episode, Mike demos the latest 0.16 release of Rill Developer.

There are three pieces of infrastructure that form a Rill dashboard application:

  • Sources: Rill ships with a CLI you can use to import data from an object store like AWS S3 or Google Cloud Storage. Rill treats the object store as the source of truth and imports data for the "last-mile ETL." As data in the object store changes, Rill orchestrates incremental updates.
  • Runtime: The runtime itself consists of a database (DuckDB), a web UI for rendering the dashboards (SvelteKit), and a middleware written in Go. Rill Enterprise replaces DuckDB with Apache Druid to process large data sets.
  • Models: Configuration code that parameterizes the dashboards, using YAML and SQL.

Bringing these pieces together in one application is an opinionated way to turn data into dashboards, one that Mike says covers "80%+ of the use cases that [they've] come across when building operational dashboards." Rill customers create dashboards to build analytics for their advertising, marketplace, and infrastructure operations.

Rill's stack is a departure from point-and-click interfaces, moving towards what Mike calls "BI-as-code." Source definitions and metrics are implemented in YAML, and models are SQL queries. The combination of SQL and YAML creates a BI layer that can be checked into a Git repository and managed automatically by CI workflows.
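A toy sketch of the BI-as-code idea, illustrative only and not Rill's actual file formats: metric definitions live in YAML, the model is a SQL query, and an embedded DuckDB runtime evaluates the combination.

```python
# A toy "BI-as-code" sketch -- illustrative only, not Rill's actual file
# formats: metrics defined in YAML, a model defined as SQL, evaluated by an
# embedded DuckDB runtime. Assumes a local orders.csv with region and amount.
import duckdb
import yaml

METRICS_YAML = """
metrics:
  - name: total_revenue
    expression: SUM(amount)
  - name: order_count
    expression: COUNT(*)
dimension: region
"""

MODEL_SQL = "SELECT * FROM read_csv_auto('orders.csv')"  # hypothetical source

config = yaml.safe_load(METRICS_YAML)
selects = ", ".join(f"{m['expression']} AS {m['name']}" for m in config["metrics"])
query = f"SELECT {config['dimension']}, {selects} FROM ({MODEL_SQL}) GROUP BY 1"

# One aggregated row per region -- the shape a dashboard tile would render.
print(duckdb.sql(query))
```

Because both files are plain text, a change to a metric definition is a normal Git diff that CI can validate before the dashboard updates.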

We also cover broader trends in our discussion, including the convergence of engineering and analytics cultures as engineers adopt practices from analytics to work with infrastructure data. Watch this episode to learn more about building data infrastructure for engineering teams using SQL and YAML in Rill.

Lars Kamp

Some studies estimate that nine out of ten copies of data are precomputed. Precomputation requires a lot of engineering and batch processing. Computing directly on raw data instead reduces the amount of data you need to manage, store, and secure by up to 90%. Yet some precomputation has still been required because of bottlenecks in I/O, storage, or compute.

FeatureBase is the first analytical database built entirely on bitmaps.

Bitmaps lay out data differently from both the row-oriented layout of transactional databases and the columnar layout of analytical databases: bitmaps store data at the level of individual values. Due to the nature of bitmaps, the data pertaining to each unique value within a row or column can be accessed independently, without scanning the row or column. The I/O for typical analytical workloads is therefore only a fraction of that of traditional analytical databases.
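A minimal sketch of the idea in plain Python, illustrative only and not FeatureBase's implementation: each distinct value keeps a bitmap of the rows containing it, so a filter or count touches only the bitmaps for the values it names.

```python
# A minimal bitmap-index sketch in plain Python -- not FeatureBase's
# implementation. Each distinct value keeps one bitmap (an int) whose n-th
# bit is set if and only if row n contains that value.
rows = ["red", "blue", "red", "green", "blue", "red"]

bitmaps: dict[str, int] = {}
for row_id, value in enumerate(rows):
    bitmaps[value] = bitmaps.get(value, 0) | (1 << row_id)

# "WHERE color = 'red'" reads exactly one bitmap -- no row or column scan.
red = bitmaps["red"]                       # 0b100101 (rows 0, 2, 5)

# "WHERE color IN ('red', 'blue')" is a single bitwise OR.
red_or_blue = red | bitmaps["blue"]        # 0b110111

# COUNT(*) is a population count on the bitmap (Python 3.10+).
print(bin(red), red.bit_count())           # 0b100101 3
print(bin(red_or_blue))                    # 0b110111
```

Production systems compress these bitmaps (e.g., roaring bitmaps), but the access pattern is the same: bitwise operations instead of scans.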

Bitmaps are more efficient when it comes to storing, transporting, and managing data: queries against them can be orders of magnitude faster than today's popular cloud warehouses, and they are an order of magnitude more efficient at storing data. That efficiency makes them ideal for real-time processing and artificial intelligence workloads.

In fact, that's what positions FeatureBase as the database between real-time streaming engines like Kafka on one end and cloud warehouses as long-term storage engines on the other. FeatureBase is the working memory in between the two.

Higinio "H.O." Maycotte is Founder and CEO at FeatureBase. In this session, we explore the mathematical pillars of databases and bitmaps. We cover:

The data footprint and scale of some of FeatureBase's customers is nothing short of breathtaking. One of their advertising customers processes 120 billion updates a day—that's 1.38 million updates per second. FeatureBase allowed them to reduce their server count from 1,000 servers to just 11, saving them millions of dollars per year.

The team at FeatureBase has invested over $30 million in R&D and nine years of their lives to advance the use of bitmaps in databases. Watch this fascinating session with H.O. to learn more about math, bitmaps, and modern real-time processing data architecture.

Lars Kamp
Patrick DeVivo

Software engineering is often more art than science, making it difficult to measure productivity. There are ways to use data to be more effective as an individual contributor or an engineering leader, but surprisingly, engineering organizations and teams typically are not data-driven.

MergeStat is on a mission to change this with open-source, operational analytics for software engineering organizations. MergeStat started as an experiment to bring together two technologies: SQL and Git repositories. MergeStat provides data integration for your Git repositories, facilitating the exploration of legacy code and the identification of code that hasn't been touched in a while and may deserve new attention.

From there, the use cases evolved. Today, MergeStat is used by organizations that have hundreds or even thousands of repositories. MergeStat is data infrastructure for Git repositories, where anyone can query the history and contents of their code bases.

Behind the scenes, MergeStat syncs data from the tools used to build and ship software into a PostgreSQL instance, since the APIs provided by these tools are not always easy to understand and extract data from. MergeStat takes care of the usual chores of good API data consumption, like pagination and respecting rate limits.

From there, a user can query their data directly in MergeStat, or use other business intelligence tools and dashboards that know how to speak to PostgreSQL. See this example Grafana dashboard for GitHub pull requests.

Patrick DeVivo is Founder and CEO at MergeStat. In this session, we start out with a general overview of MergeStat and how it's used today.

Patrick explains how MergeStat is a general-purpose engine that companies use to craft the queries that fit their organization. We go into a few MergeStat use cases that Patrick sees today:

  • In some cases, the data collection itself is the use case. For example, with audits, the deliverable is the list of pull requests that didn't follow best practices.
  • Understanding the different versions of a programming language in use. If you're a Go shop, a single query aggregates the Go versions used across all repositories.
  • Finding pull requests that have been open for a long time or were merged without review (see the sketch below).
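A sketch of that last use case, querying MergeStat's PostgreSQL instance for long-open pull requests. The DSN, table, and column names here are illustrative assumptions, not necessarily MergeStat's actual schema:

```python
# Querying MergeStat's PostgreSQL instance for long-open pull requests -- a
# sketch. The DSN, table, and column names are illustrative assumptions, not
# necessarily MergeStat's actual schema.
import psycopg2

conn = psycopg2.connect("postgresql://readonly@localhost:5432/mergestat")

QUERY = """
    SELECT repo, number, title, created_at
    FROM github_pull_requests            -- assumed table name
    WHERE state = 'open'
      AND created_at < now() - interval '30 days'
    ORDER BY created_at
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for repo, number, title, created_at in cur.fetchall():
        print(f"{repo}#{number} open since {created_at:%Y-%m-%d}: {title}")
```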

Patrick's advice is to use MergeStat in a positive, constructive way that drives action. Watch this episode to learn more about data integration for the software development lifecycle.

Lars Kamp

In the old world of software engineering, developer productivity was measured in lines of code. Time has shown, however, that code quantity is a poor measure of productivity. So why do engineering organizations continue to rely on this metric? Because they lack a "single-pane" view across all the different systems that hold data on the activities that actually correlate with productivity.

That's where Faros AI comes in. Faros AI connects the dots between engineering data sources—ticketing, source control, CI/CD, and more—providing visibility and insight into a company's engineering processes.

Vitaly Gordon is the founder and CEO of Faros AI. Vitaly came up with the concept for Faros AI when he was VP of Engineering in the Machine Learning Group at Salesforce. As an engineering leader, his job wasn't only code; he also had business responsibilities, which meant interacting with other functions of the business, like sales and marketing.

In those meetings, Vitaly realized that other functions used standardized metrics to measure the performance of their business, such as customer acquisition cost (CAC), lifetime value (LTV), and net dollar retention (NDR). These functions built data pipelines to acquire the necessary data and compute these metrics. Surprisingly, engineering did not have that same understanding of its processes.

An example of an engineering metrics framework is DORA. DORA is an industry-standard benchmark that correlates deployment frequency, lead time for changes, change failure rate, and time to restore service with actual business outcomes and employee satisfaction. For hyperscalers like Google and Meta, these metrics are so important that they employ thousands of people just to build and report them.

So, how do you calculate DORA metrics for your business? With data, of course. But, it turns out the data to calculate these metrics is locked inside the dozens of engineering tools used to build and deliver software. While those tools have APIs, they are optimized for workflows, not for exporting data. If you're not a hyperscaler with the budget to employ thousands of people, what do you do? You can turn to Faros AI, which does all the heavy lifting of acquiring data and calculating metrics for you.
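As a minimal sketch, here are two of the four DORA metrics computed from deployment records. The record shape is an assumption, not Faros AI's data model; in practice this data would be extracted from your CI/CD and incident tools:

```python
# Two of the four DORA metrics computed from deployment records -- a sketch;
# the record shape is an assumption, not Faros AI's data model.
from datetime import date

deployments = [  # (deploy_date, caused_failure)
    (date(2023, 3, 1), False),
    (date(2023, 3, 3), True),
    (date(2023, 3, 7), False),
    (date(2023, 3, 9), False),
]

days = (deployments[-1][0] - deployments[0][0]).days or 1
deployment_frequency = len(deployments) / days                  # per day
change_failure_rate = sum(failed for _, failed in deployments) / len(deployments)

print(f"deployment frequency: {deployment_frequency:.2f}/day")  # 0.50/day
print(f"change failure rate:  {change_failure_rate:.0%}")       # 25%
```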

The lessons learned from the modern data stack (MDS) come in when building data pipelines to connect data from disparate tools. In this episode, we explore the open-source Faros Community Edition and the data stack that powers it.

Lars Kamp
Waldemar Hummer

Waldemar Hummer is Co-Founder and CTO at LocalStack. LocalStack gives you a fully functional local cloud stack so you can develop and test your cloud and serverless apps offline. LocalStack is an open-source project that started at Atlassian, where its initial purpose was to keep developers productive on their daily commutes despite poor internet connectivity.

LocalStack emulates AWS cloud services on your laptop, increasing the number of phases in your infrastructure environment to four: local, test, staging, and production—with LocalStack efficiently covering the local and test phases (including CI builds). LocalStack also integrates with a large set of other cloud tools, such as Terraform, Pulumi, and CDK.
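A minimal sketch of what that emulation looks like from code: a standard AWS SDK client pointed at LocalStack's local edge endpoint instead of the real cloud. Port 4566 is LocalStack's default; the bucket name is arbitrary.

```python
# A standard AWS SDK client pointed at LocalStack instead of AWS -- a sketch.
# Port 4566 is LocalStack's default edge endpoint; credentials are dummies.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack, not the real cloud
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="local-dev-bucket")
s3.put_object(Bucket="local-dev-bucket", Key="hello.txt", Body=b"hi")
print(s3.list_objects_v2(Bucket="local-dev-bucket")["Contents"][0]["Key"])
```

The application code is unchanged; only the endpoint differs between the local and cloud phases.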

While the commute problem went mostly away with COVID, it became clear that a local development environment has speed, quality, and cost advantages. Local provisioning of resources is faster and can speed up dev feedback cycles by an order of magnitude. Developers can start their work without IAM enforcement, then later introduce security policies and migrate to the cloud. A local environment also reduces the cost of cloud sandbox accounts.

A key requirement for LocalStack to be valuable is parity with cloud provider services, which means replicating services and API responses. LocalStack is built in Python, and Waldemar walks us through LocalStack's process of building out the platform to have 99% parity with AWS.

In this episode, we also cover developer marketing, community building, and how LocalStack amassed over 44,000 stars on GitHub. Waldemar takes us through both a live LocalStack demo and a deep-dive into LocalStack's GitHub repository.

Lars Kamp
Jonathan Bernales

There is a new generation of companies that are building their applications 100% cloud-native, with a pure serverless paradigm. One such company is Ekonoo, a French FinTech startup that enables customers and organizations to efficiently invest in retirement funds.

Jonathan Bernales is a DevOps Engineer at Ekonoo. In this interview, Jonathan walks us through Ekonoo's approach of giving developers the autonomy to build and deploy code along with the responsibility for security and cost.

Holding developers responsible for security and cost is a rather new part of "shift-left." Cost awareness becomes part of the development culture. To keep cloud bills under control, Ekonoo developers are responsible for their individual test accounts and have access to the AWS Billing Console and AWS Cost Explorer.
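A sketch of the kind of visibility developers get from their test accounts, using the AWS Cost Explorer API; the dates and grouping are illustrative examples:

```python
# Querying a test account's spend through the AWS Cost Explorer API -- a
# sketch of the visibility developers get; dates and grouping are examples.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-03-01", "End": "2023-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# One line of spend per AWS service for the month.
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```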

At Ekonoo, there is no dedicated "production team." Rather, DevOps collaborates with developers to create guidelines and guardrails for architecture, automation, security, and cost. The entire Ekonoo stack runs on AWS using native AWS services such as CloudFormation, Lambda, and Step Functions.

Watch this episode to learn about Ekonoo's transition to a microservices architecture and the lessons learned along the way.

Lars Kamp
Andreas Grabner

Andreas Grabner is a DevOps Activist at Dynatrace, where he has fifteen years of experience helping developers, testers, operations, and XOps folks do their jobs more efficiently.

In this episode, Andreas and I discuss how the shift to cloud-native and more dynamic infrastructure is followed by a change in how developers, architects, and site reliability engineers (SREs) work together.

With the sheer quantity of resources running in cloud-native infrastructure and the monitoring signals produced by each resource, the only way to keep growing without "throwing people at the problem" is to turn to automation.

Andreas makes a noteworthy distinction between DevOps engineers and SREs:

  • DevOps engineers use automation to speed up delivery and get new changes into production.
  • SREs use automation to keep production healthy.

SREs are often former IT operations engineers and system administrators who were responsible for physical machines, virtual machines (VMs), and Kubernetes clusters. As SREs, they move up the stack, becoming responsible for everything from the infrastructure at the bottom all the way up to serverless functions and the service itself.

We dive into the differences between SLAs, SLOs, and Google's four golden signals of monitoring: latency, traffic, errors, and saturation. Andreas shares the example of a bank that started defining SLOs to measure the growth of its mobile app business rather than just defining engineering metrics.
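As a minimal worked example of the SLO idea (illustrative numbers, not from the episode): given an availability target and request counts, the error budget says how much failure the service can still absorb.

```python
# An availability SLO and its error budget -- a minimal sketch with
# illustrative numbers, not Dynatrace's implementation.
slo_target = 0.999          # "three nines" availability objective
total_requests = 2_000_000
failed_requests = 1_200

availability = 1 - failed_requests / total_requests
error_budget = (1 - slo_target) * total_requests        # allowed failures
budget_remaining = 1 - failed_requests / error_budget   # fraction left

print(f"availability:     {availability:.4%}")          # 99.9400%
print(f"error budget:     {error_budget:.0f} failed requests allowed")
print(f"budget remaining: {budget_remaining:.0%}")      # 40%
```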

This episode covers "engineering for game days," chaos engineering, and making the unplannable, plannable. Andreas shares his perspective on the general trend to "shift left" and include performance engineering in the development and architecture of cloud-native systems.

Lars Kamp

Dvir Mizrahi is Head of Financial Engineering at Wix, the leader in website creation, with 220 million users running e-commerce operations. And with over six thousand employees, Wix ships more than fifty thousand builds each day.

Dvir is also among the original authors of the AWS Cloud Financial Management certification.

In this episode, Dvir covers how Wix shifted from FinOps to Financial Engineering: an engineering-first approach of building tooling and processes that track financial key performance indicators (KPIs) for Wix's multi-cloud infrastructure. The new approach established a culture of financial responsibility that supports Wix's continued growth.

Wix started in 2006 and initially ran its infrastructure on-premise. Today, Wix runs a multi-cloud environment on Google Cloud Platform (GCP) and Amazon Web Services (AWS). As Wix shifted from on-premise to the cloud, the procurement process of resources changed with it.

In the old world, purchasing additional hardware was a closed and controlled process. But in the cloud, Dvir compares resource procurement to "a supermarket where people can go in, take whatever they want, and leave without passing the registers." A developer could spin up a hundred thousand instances with just the click of a button.

Wix realized the financial risk that comes with liberal permissions to spin up infrastructure and hired Dvir in 2017. FinOps approaches infrastructure governance from a billing perspective and handles workloads already provisioned in the cloud. But at Wix's scale, where there are thousands of engineers, the FinOps approach stops working. "By the time you have a financial incident, it's too late and you didn't govern anything."

Dvir shifted the strategy to preventing waste in the first place by incorporating financial KPIs into engineering goals. In addition, Dvir built an internal platform called "InfraGod," which collects infrastructure data, integrates with Terraform, and enforces rules at the time of resource provisioning. Taking action when resources are provisioned rather than after the fact is "the difference between Finance and Financial Engineering."
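A toy sketch of provisioning-time enforcement, illustrative only and not Wix's InfraGod: reject a Terraform plan when a to-be-created resource is missing mandatory tags. It assumes the JSON layout produced by `terraform show -json`:

```python
# A toy provisioning-time policy check -- illustrative only, not Wix's
# InfraGod. Rejects a Terraform plan if a to-be-created resource is missing
# mandatory tags. Assumes the JSON layout of `terraform show -json tfplan`.
import json
import sys

REQUIRED_TAGS = {"owner", "team", "cost-center"}


def check_plan(plan_path: str) -> int:
    with open(plan_path) as f:
        plan = json.load(f)

    violations = 0
    for rc in plan.get("resource_changes", []):
        change = rc.get("change", {})
        if "create" not in change.get("actions", []):
            continue  # only gate new resources
        tags = (change.get("after") or {}).get("tags") or {}
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            print(f"{rc['address']} missing tags: {sorted(missing)}")
            violations += 1
    return violations


if __name__ == "__main__":
    sys.exit(1 if check_plan(sys.argv[1]) else 0)
```

Run as a CI gate, a check like this blocks untagged resources before they are created, rather than flagging them on next month's bill.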

Listen to this episode for a deep dive into the tactics that Dvir uses to run Financial Engineering at Wix, such as data collection, engineering post-mortems, monthly reports, and mandatory resource tagging.