How do I design a data ingestion process in Snowflake that includes updates/inserts and maintains optimal performance?
I will be ingesting about 20 years of data, arriving in files with millions of rows and about 500 columns each. Reading through the Snowflake (SF) documentation, I saw that I should load the files in an order that allows SF to create micro-partitions (MP) with metadata optimized for pruning. However, I am concerned because I will…
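For context, this is the kind of staged load plus MERGE pattern I have been considering; it is only a sketch, and the stage, table, and column names (`LANDING_STAGE`, `STG_EVENTS`, `DIM_EVENTS`, `EVENT_ID`, `EVENT_DATE`, `PAYLOAD`) are placeholders, not my real schema:

```sql
-- 1. Bulk-load the raw files into a staging table first.
--    Files would be loaded in EVENT_DATE order so micro-partition
--    metadata stays well-clustered for pruning.
COPY INTO STG_EVENTS
  FROM @LANDING_STAGE/history/
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  ON_ERROR = 'ABORT_STATEMENT';

-- 2. Upsert from staging into the target table:
--    update rows that already exist, insert the rest.
MERGE INTO DIM_EVENTS t
USING STG_EVENTS s
  ON t.EVENT_ID = s.EVENT_ID
WHEN MATCHED THEN UPDATE SET
  t.EVENT_DATE = s.EVENT_DATE,
  t.PAYLOAD    = s.PAYLOAD
WHEN NOT MATCHED THEN INSERT (EVENT_ID, EVENT_DATE, PAYLOAD)
  VALUES (s.EVENT_ID, s.EVENT_DATE, s.PAYLOAD);
```

My worry is whether the MERGE step undoes the careful load ordering, since updates rewrite micro-partitions.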