
Quickstart for dbt Core using DuckDB

dbt Core
Quickstart
Beginner

    Introduction

    In this quickstart guide, you'll learn how to use dbt Core with DuckDB, enabling you to get set up quickly and efficiently. DuckDB is an open-source database management system designed for analytical workloads. It provides fast, easy access to large datasets, making it well suited for data analytics tasks.

    This guide will demonstrate how to:

    • Create a virtual development environment using a template provided by dbt Labs.
      • This sets up a fully functional dbt environment with an operational and executable project. The codespace automatically connects to the DuckDB database and loads a year's worth of data from our fictional Jaffle Shop café, which sells food and beverages in several US cities.
      • For additional information, refer to the README for the Jaffle Shop template. It includes instructions on how to do this, along with animated GIFs.
    • Run any dbt command from the environment’s terminal.
    • Generate a larger dataset for the Jaffle Shop café (for example, five years of data instead of just one).

    You can learn more through high-quality dbt Learn courses and workshops.

    Prerequisites

    • When using DuckDB with dbt Core, you'll need to use the dbt command-line interface (CLI). Currently, DuckDB is not supported in dbt Cloud.
    • It's important that you know some basics of the terminal. In particular, you should understand cd, ls, and pwd so you can navigate your computer's directory structure easily.
    • You have a GitHub account.

    Set up DuckDB for dbt Core

    This section will provide a step-by-step guide for setting up DuckDB for use in local (Mac and Windows) environments and web browsers.

    In the repository, there's a requirements.txt file which is used to install dbt Core, DuckDB, and all other necessary dependencies. You can check this file to see what will be installed on your machine. It's typically located in the root directory of your project.

    The requirements.txt file is placed at the top level of your dbt project directory, alongside other key files like dbt_project.yml:


    /my_dbt_project/
    ├── dbt_project.yml
    ├── models/
    │   ├── my_model.sql
    ├── tests/
    │   ├── my_test.sql
    └── requirements.txt
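
    As an illustration only, a requirements.txt for a dbt-plus-DuckDB project often just pins dbt Core and the adapter (the versions below match the log output later in this guide; check the repository's actual file for the real pins):

```text
# Illustrative sketch -- see the repository's requirements.txt for the real contents
dbt-core==1.8.1
dbt-duckdb==1.8.1
```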

    For more information, refer to the DuckDB setup.
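
    For reference, a minimal profiles.yml for the dbt-duckdb adapter looks something like the following sketch. The profile name, file path, and thread count here are illustrative; the Jaffle Shop template ships its own profile:

```yaml
jaffle_shop:
  target: dev
  outputs:
    dev:
      type: duckdb                 # use the dbt-duckdb adapter
      path: 'jaffle_shop.duckdb'   # local database file, created if it doesn't exist
      threads: 4
```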

    1. First, clone the Jaffle Shop git repository by running the following command in your terminal:

      git clone https://github.com/dbt-labs/jaffle_shop_duckdb.git

    2. Change into the jaffle_shop_duckdb directory from the command line:


      cd jaffle_shop_duckdb

    3. Install dbt Core and DuckDB in a virtual environment.

       Example for Mac
       Example for Windows
       Example for Windows PowerShell
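
       On Mac or Linux, step 3 typically looks like the following sketch (assuming Python 3.9+ is on your PATH; on Windows, activate with venv\Scripts\activate instead):

```shell
python3 -m venv venv                        # create an isolated environment in ./venv
. venv/bin/activate                         # activate it for this shell session
python -m pip install -r requirements.txt   # install dbt Core, dbt-duckdb, and other pinned deps
```

       Run these from inside the jaffle_shop_duckdb directory so pip can find requirements.txt.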
    4. Ensure your profile is set up correctly from the command line by running the following dbt commands:

      • dbt compile — generates executable SQL from your project source files
      • dbt run — compiles and runs your project
      • dbt test — compiles and tests your project
      • dbt build — compiles, runs, and tests your project
      • dbt docs generate — generates your project's documentation.
      • dbt docs serve — starts a webserver on port 8080 to serve your documentation locally and opens the documentation site in your default browser.

    For complete details, refer to the dbt command reference.

    Here's what a successful output will look like:


    (venv) ➜ jaffle_shop_duckdb git:(duckdb) dbt build
    15:10:12 Running with dbt=1.8.1
    15:10:13 Registered adapter: duckdb=1.8.1
    15:10:13 Found 5 models, 3 seeds, 20 data tests, 416 macros
    15:10:13
    15:10:14 Concurrency: 24 threads (target='dev')
    15:10:14
    15:10:14 1 of 28 START seed file main.raw_customers ..................................... [RUN]
    15:10:14 2 of 28 START seed file main.raw_orders ........................................ [RUN]
    15:10:14 3 of 28 START seed file main.raw_payments ...................................... [RUN]
    ....

    15:10:15 27 of 28 PASS relationships_orders_customer_id__customer_id__ref_customers_ .... [PASS in 0.32s]
    15:10:15
    15:10:15 Finished running 3 seeds, 3 view models, 20 data tests, 2 table models in 0 hours 0 minutes and 1.52 seconds (1.52s).
    15:10:15
    15:10:15 Completed successfully
    15:10:15
    15:10:15 Done. PASS=28 WARN=0 ERROR=0 SKIP=0 TOTAL=28

    To query data, here are some useful commands you can run from the command line:

    • dbt show — run a query against the data warehouse and preview the results in the terminal.
    • dbt source — provides subcommands such as dbt source freshness that are useful when working with source data.
      • dbt source freshness — checks how up to date a specific source table is.
    Note

    The steps above will fail if you run this project against your own data warehouse (outside of this DuckDB demo); you'll need to reconfigure the project files for your warehouse first. This is especially important if you're using a community-contributed adapter.

    Troubleshoot

     Could not set lock on file error

    Generate a larger data set

    If you'd like to work with a larger selection of Jaffle Shop data, you can generate an arbitrary number of years of fictitious data from within your codespace.

    1. Install the Python package called jafgen. At the terminal's prompt, run:

      python -m pip install jafgen
    2. When installation is done, run:

      jafgen NUMBER_OF_YEARS # e.g. jafgen 6

      Replace NUMBER_OF_YEARS with the number of years you want to simulate. This command builds the CSV files and stores them in the jaffle-data folder; the data is then automatically sourced based on the sources.yml file and the dbt-duckdb adapter.

    As you increase the number of years, it takes exponentially more time to generate the data because the Jaffle Shop stores grow in size and number. For a good balance of data size and time to build, dbt Labs suggests a maximum of 6 years.

    Next steps

    Now that you have dbt Core, DuckDB, and the Jaffle Shop data up and running, you can explore dbt's capabilities. Refer to these materials to get a better understanding of dbt projects and commands:

    • The About projects page guides you through the structure of a dbt project and its components.
    • dbt command reference explains the various commands available and what they do.
    • dbt Labs courses offer a variety of beginner, intermediate, and advanced learning modules designed to help you become a dbt expert.
    • Once you see the potential of dbt and what it can do for your organization, sign up for a free trial of dbt Cloud. It's the fastest and easiest way to deploy dbt today!
    • Check out the other quickstart guides to begin integrating into your existing data warehouse.

    Additionally, with your new understanding of the basics of using DuckDB, consider optimizing your setup by documenting your project, committing your changes, and scheduling a job.

    Document your project

    To document your dbt projects with DuckDB, follow these steps:

    • Use the dbt docs generate command to compile information about your dbt project and warehouse into manifest.json and catalog.json files.
    • Run the dbt docs serve command to create a local website using the generated .json files. This allows you to view your project's documentation in a web browser.
    • Enhance your documentation by adding descriptions to models, columns, and sources using the description key in your YAML files.
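
    For example, descriptions live in your project's YAML files alongside the model definitions. The model and column names below come from the Jaffle Shop project, but the description text is illustrative:

```yaml
version: 2

models:
  - name: customers
    description: One row per customer, enriched with order summary fields.
    columns:
      - name: customer_id
        description: Primary key for this model.
```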

    Commit your changes

    Commit your changes to ensure the repository is up to date with the latest code.

    1. In the GitHub repository you created for your project, run the following commands in the terminal:

      git add .
      git commit -m "Your commit message"
      git push

    2. Go back to your GitHub repository to verify your new files have been added.

    Schedule a job

    1. Ensure dbt Core is installed and configured to connect to your DuckDB instance.
    2. Create a dbt project and define your models, seeds, and tests.
    3. Use a scheduler such as Prefect to schedule your dbt runs. You can create a DAG (Directed Acyclic Graph) that triggers dbt commands at specified intervals.
    4. Write a script that runs your dbt commands, such as dbt run and dbt test.
    5. Use your chosen scheduler to run the script at your desired frequency.
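
    As a sketch of steps 4 and 5, the script below wraps a sequence of dbt commands in a shell function that stops at the first failure. The script name and the DBT_BIN override are hypothetical conveniences (the override lets you dry-run with DBT_BIN=echo); this is not an official dbt pattern, just one way to write such a script:

```shell
#!/bin/sh
# run_dbt_pipeline.sh -- minimal sketch; assumes dbt is on PATH (e.g. inside your venv).
DBT_BIN="${DBT_BIN:-dbt}"   # hypothetical override hook; set DBT_BIN=echo for a dry run

run_dbt_pipeline() {
    # Run each single-word dbt command in order; stop and report the first failure.
    for dbt_cmd in "$@"; do
        if ! "$DBT_BIN" "$dbt_cmd"; then
            echo "dbt $dbt_cmd failed" >&2
            return 1
        fi
    done
    echo "pipeline finished"
}

run_dbt_pipeline "$@"   # e.g. ./run_dbt_pipeline.sh seed run test
```

    A scheduler then just invokes the script; for instance, a cron entry like `0 6 * * * cd /path/to/jaffle_shop_duckdb && ./run_dbt_pipeline.sh run test` would run it daily at 6 AM.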

    Congratulations on making it through the guide 🎉!
