DEMO
Build robust data engineering pipelines with Snowpark for Python in Snowflake
Data engineers focus on building and maintaining robust, reliable data pipelines. These pipelines are often mission critical: they move data from a source and transform it so that it can be used in data analysis. In this demo, we’ll show you how to build data engineering pipelines using Snowpark for Python. Here’s a brief look at some of the things you’ll do:
- Ingest Parquet data into Snowflake using schema inference
- Write a Python UDF for temperature conversions
- Build a data engineering pipeline with stored procedures
- Deploy the stored procedures via a CI/CD pipeline
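As a taste of the UDF step above, here is a minimal sketch of the temperature-conversion logic. The function name `fahrenheit_to_celsius` and the UDF name `f_to_c` are illustrative assumptions, not part of the demo itself, and actually registering the UDF requires the `snowflake-snowpark-python` package and an active Snowpark session.

```python
# Plain-Python sketch of the temperature-conversion logic that the demo
# wraps in a Snowpark Python UDF (names here are illustrative).
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (temp_f - 32) * 5.0 / 9.0

# With an active Snowpark session, the function could be registered as a
# UDF roughly like so (hypothetical names; requires a live connection):
#
#   from snowflake.snowpark.functions import udf
#   from snowflake.snowpark.types import FloatType
#   f_to_c = udf(fahrenheit_to_celsius, return_type=FloatType(),
#                input_types=[FloatType()], name="f_to_c")

print(fahrenheit_to_celsius(212.0))  # 100.0
```

Once registered, the UDF can be called from Snowpark DataFrame expressions or plain SQL, which is what lets the transformation step of the pipeline run entirely inside Snowflake.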