
Cookiecutter Data Science

A logical, flexible, and reasonably standardized project structure for doing and sharing data science work.

CCDS V2 Announcement

Version 2 of Cookiecutter Data Science has recently launched. To learn what's different and what's in progress, see the announcement blog post.


Cookiecutter Data Science v2 requires Python 3.8+. Since this is a cross-project utility application, we recommend installing it with pipx. Installation options:

With pipx (recommended):

pipx install cookiecutter-data-science

With pip:

pip install cookiecutter-data-science

With conda:

conda install cookiecutter-data-science -c conda-forge

With any of the above, create a project by running the following from the parent directory where you want your project:

ccds

Alternatively, to use the v1 template with the original cookiecutter tool:

pip install cookiecutter

# From the parent directory where you want your project
cookiecutter https://github.com/drivendataorg/cookiecutter-data-science -c v1

Use the ccds command-line tool

Cookiecutter Data Science v2 now requires installing the new cookiecutter-data-science Python package, which extends the functionality of the cookiecutter templating utility. Use the provided ccds command-line program instead of cookiecutter.

Starting a new project

Starting a new project is as easy as running the ccds command at the command line. There's no need to create a directory first; the cookiecutter will do it for you.


The ccds command-line tool defaults to the Cookiecutter Data Science template, but you can pass your own template as the first argument if you want.
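For example, both of the following invocations work; the custom template URL below is a hypothetical placeholder, not a real repository:

```shell
# Use the default Cookiecutter Data Science template
ccds

# Or point ccds at your own cookiecutter template (hypothetical URL)
ccds https://github.com/your-org/your-cookiecutter-template
```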


ccds
project_name (project_name): My Analysis
repo_name (my_analysis): my_analysis
module_name (my_analysis):
author_name (Your name (or your organization/company/team)): Dat A. Scientist
description (A short description of the project.): This is my analysis of the data.
python_version_number (3.10): 3.12
Select dataset_storage
    1 - none
    2 - azure
    3 - s3
    4 - gcs
    Choose from [1/2/3/4] (1): 3
bucket (bucket-name): s3://my-aws-bucket
aws_profile (default):
Select environment_manager
    1 - virtualenv
    2 - conda
    3 - pipenv
    4 - none
    Choose from [1/2/3/4] (1): 2
Select dependency_file
    1 - requirements.txt
    2 - environment.yml
    3 - Pipfile
    Choose from [1/2/3] (1): 1
Select pydata_packages
    1 - none
    2 - basic
    Choose from [1/2] (1): 2
Select open_source_license
    1 - No license file
    2 - MIT
    3 - BSD-3-Clause
    Choose from [1/2/3] (1): 2
Select docs
    1 - mkdocs
    2 - none
    Choose from [1/2] (1): 1

Now that you've got your project, you're ready to go! You should do the following:

  • Check out the directory structure below so you know what's in the project and how to use it.
  • Read the opinions that are baked into the project so you understand best practices and the philosophy behind the project structure.
  • Read the using the template guide to understand how to get started on a project that uses the template.


Directory structure

The directory structure of your new project will look something like this (depending on the settings that you choose):

├── LICENSE            <- Open-source license if one is chosen
├── Makefile           <- Makefile with convenience commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
├── docs               <- A default mkdocs project; see www.mkdocs.org for details
├── models             <- Trained and serialized models, model predictions, or model summaries
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
├── pyproject.toml     <- Project configuration file with package metadata for 
│                         {{ cookiecutter.module_name }} and configuration for tools like black
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
├── setup.cfg          <- Configuration file for flake8
└── {{ cookiecutter.module_name }}   <- Source code for use in this project.
    ├── __init__.py    <- Makes {{ cookiecutter.module_name }} a Python module
    ├── config.py      <- Store useful variables and configuration
    ├── dataset.py     <- Scripts to download or generate data
    ├── features.py    <- Code to create features for modeling
    ├── modeling
    │   ├── __init__.py
    │   ├── predict.py <- Code to run model inference with trained models
    │   └── train.py   <- Code to train models
    └── plots.py       <- Code to create visualizations
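A common pattern with this layout is to centralize project paths in config.py so that notebooks and scripts import them instead of hard-coding relative paths. A minimal sketch of what such a file might contain, mirroring the directory tree above (the variable names are assumptions for illustration, not necessarily what the template generates):

```python
from pathlib import Path

# Resolve the project root relative to this file, so imports work
# regardless of the current working directory.
PROJ_ROOT = Path(__file__).resolve().parents[1]

# Data directories, mirroring the data/ layout in the tree above
DATA_DIR = PROJ_ROOT / "data"
RAW_DATA_DIR = DATA_DIR / "raw"
INTERIM_DATA_DIR = DATA_DIR / "interim"
PROCESSED_DATA_DIR = DATA_DIR / "processed"
EXTERNAL_DATA_DIR = DATA_DIR / "external"

# Output directories
MODELS_DIR = PROJ_ROOT / "models"
REPORTS_DIR = PROJ_ROOT / "reports"
FIGURES_DIR = REPORTS_DIR / "figures"
```

With paths defined once, a notebook or script can simply do `from my_analysis.config import RAW_DATA_DIR` and stay correct even if files move.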