Data Engineering
- Gather as much domain knowledge as possible. Technical knowledge is not enough. Then prioritize:
- Fidelity: how reliably can data be transferred and stored without corruption or loss?
- Capacity: how much data can be moved and how quickly?
- Reliability: how well can systems recover from outages and incidents?
- Speed of execution: how quickly can you get a new data source up and running?
- If it can be solved with SQL, stick to SQL.
- SQL will be the abstraction layer in streaming too, so you won't have to care about incremental materialization or timely dataflows.
- A consistent pattern across your data pipelines helps devs communicate easily and understand code better.
- Data Engineering can learn from decentralized systems ideas such as Content-Addressed Data, Immutability, and Idempotence (see the sketch after this list).
- Schemas aren't eliminated by using a "schemaless" data store (like a NoSQL database). They're just pushed to the reading layer.
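A minimal sketch of how those three ideas combine, assuming a plain local directory as the store (`BLOB_DIR` and `store_blob` are made-up names, not a real library API):

```python
import hashlib
from pathlib import Path

BLOB_DIR = Path("blobs")  # hypothetical local content-addressed store

def store_blob(data: bytes) -> str:
    """Store bytes under the hash of their content.

    Content-addressed: the key is derived from the bytes themselves.
    Immutable: a given key is written once and never updated in place.
    Idempotent: storing the same data twice is a no-op.
    """
    key = hashlib.sha256(data).hexdigest()
    path = BLOB_DIR / key
    if not path.exists():  # re-runs don't rewrite or duplicate anything
        BLOB_DIR.mkdir(exist_ok=True)
        path.write_bytes(data)
    return key

k1 = store_blob(b"some raw extract")
k2 = store_blob(b"some raw extract")
assert k1 == k2  # same content, same address
```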
Data Pipelines
Data Pipelines are a set of actions that extract data, transform it, and then load the final data somewhere. As with any distributed system, they're tricky to work with. These are some great principles to keep in mind, since production data engineering is mostly computer science.
Systems tend towards production, and data pipelines are no exception: valuable data work and outputs end up being consumed in use cases that are increasingly important and production grade.
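To make that concrete, here is a minimal extract / transform / load sketch. It uses Python's built-in `csv` and `sqlite3` modules so the transform stays in plain SQL (per "stick to SQL" above); the file names and table schema are invented for illustration:

```python
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: read raw rows from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: keep the logic in plain SQL."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
    con.executemany(
        "INSERT INTO events VALUES (?, ?)",
        [(r["user_id"], float(r["amount"])) for r in rows],
    )
    return con.execute(
        "SELECT user_id, SUM(amount) FROM events GROUP BY user_id"
    ).fetchall()

def load(rows: list[tuple], path: str) -> None:
    """Load: write the final data somewhere (here, another CSV file)."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

if __name__ == "__main__":
    load(transform(extract("raw_events.csv")), "totals_by_user.csv")
```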
Basic Principles
- Simplicity: Each step is easy to understand and modify. Rely on immutable data: write only, no deletes, no updates. Avoid keeping too much state. Hosting static files on S3 involves much less friction and maintenance than a server somewhere serving an API.
- Reliability: Errors in the pipeline can be recovered from. Pipelines are monitored and tested. Data is saved at each step (storage is cheap) so it can be reused later if needed. For example, a new column can be added to a table by extracting it from the intermediate data without having to query the data source again. It is better to support one feature that works reliably and has a great UX than two that are unreliable or hard to use; one solid step is better than two finicky ones.
- Modularity: Steps are independent, declarative, and idempotent. This makes pipelines composable (see the idempotent step sketch after this list).
- Consistency: Same conventions and design patterns across pipelines. If a failure is actionable by the user, clearly let them know what they can do. Schema on write.
- Efficiency: Low event latency when needed. Easy to scale up and down. A user should not be able to configure something that will not work. Don't mix heterogeneous workloads under the same tooling (e.g., a big data warehouse spending 95% of its time on simple queries and running one big batch once a day).
- Flexibility: Steps adapt to conform to changing data points. Changes don't stop the pipeline or lose data. Fail fast and upstream.
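As a sketch of what an idempotent, modular step can look like, assume each run processes a single date partition; because the output path is a pure function of the partition key, re-running the step overwrites its previous output instead of duplicating it (the names and layout here are hypothetical):

```python
import json
from pathlib import Path

OUT_DIR = Path("clean")  # hypothetical output location

def clean_partition(raw: list[dict], day: str) -> Path:
    """An idempotent, declarative step: same inputs, same output.

    The output path depends only on the partition key, so re-running
    after a failure overwrites the previous attempt instead of
    appending duplicates downstream.
    """
    rows = [r for r in raw if r.get("user_id")]  # drop malformed rows
    out = OUT_DIR / f"day={day}" / "events.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(rows, sort_keys=True))
    return out
```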
Data Flow
- In each step of the pipeline there are producers and consumers of data. Consumers can also be producers, e.g., `B` is both a consumer of `A`'s data and a producer of `C`'s data.
- Decouple producers and consumers by adding a layer in between. That can be something as simple as a text file or as complex as a database.
- Schemas change. Most of the time you won't be there at the exact time of the change, so aim to save everything.
- Ideally, the schema will evolve in a backward compatible way:
- Data types don't change in the same column.
- Columns are either deleted or added but never renamed.
- Create a few extra columns like `processed_at` or `schema_version`.
- Generate stats to provide the operator with feedback.
- Data coming from pipelines should be easily reproducible: if you re-run a process, you should be sure it will always produce the same result. This can be achieved by enforcing the Functional Data Engineering Paradigm.
- Event Sourcing is a great pattern when implementing a new system since it decouples state from business logic.
- State is a projection of history. Keep the history and reconstruct the state (see the sketch after this list)!
- Embrace immutability:
- Avoid state and mutable data. Functions should always yield the same result!
- Objects will be more thread-safe inside a program.
- Easier to reason about the flow of a program.
- Easier to debug and troubleshoot problems.
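A minimal sketch of "state is a projection of history": the event log is append-only, and the current state is just a fold over it, so it can always be reconstructed (the event shapes are invented for illustration):

```python
from functools import reduce

# Append-only event log: the history is the source of truth.
events = [
    {"type": "deposit", "amount": 100},
    {"type": "withdraw", "amount": 30},
    {"type": "deposit", "amount": 5},
]

def apply(balance: int, event: dict) -> int:
    """Project one event onto the current state."""
    if event["type"] == "deposit":
        return balance + event["amount"]
    if event["type"] == "withdraw":
        return balance - event["amount"]
    return balance  # unknown events are ignored, never mutated away

# The state is reconstructed from history, not stored as the truth.
state = reduce(apply, events, 0)
assert state == 75
```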
Great Blog Posts
- The AI Hierarchy of Needs.
- Why is data hard?.
- Building a Data Pipeline from Scratch.
- A Beginner's Guide to Data Engineering Part I and Part II.
- The Rise of the Data Engineer.
- The Downfall of the Data Engineer.
- Functional Data Engineering — a modern paradigm for batch data processing.
- So You Want to be a Data Engineer?.
- Reshaping Data Engineering.