Database Migration Glossary

Essential terminology for understanding database migrations to Snowflake and cloud data warehouses.

Snowflake

A cloud-native data warehouse platform that separates storage and compute, allowing independent scaling. Snowflake supports SQL and provides features like time travel, data sharing, and automatic optimization.

DDL (Data Definition Language)

SQL commands used to define database structure, including CREATE TABLE, ALTER TABLE, and DROP TABLE statements. During migration, DDL scripts are generated to recreate the source schema in the target database.
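A minimal sketch of generated target-side DDL, using a hypothetical `customers` table (column names and types are illustrative, not from a real migration):

```sql
-- Hypothetical Snowflake DDL generated from a MySQL source table
CREATE TABLE customers (
    id         NUMBER(10,0) NOT NULL,  -- from MySQL INT
    email      VARCHAR(255),
    created_at TIMESTAMP_NTZ,          -- from MySQL DATETIME
    profile    VARIANT                 -- from MySQL JSON
);
```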

Schema Analysis

The process of examining database structure including tables, columns, data types, indexes, and constraints. Schema analysis identifies compatibility issues before migration.


Data Type Mapping

Converting data types from a source database (e.g., MySQL) to a target database (e.g., Snowflake). For example, MySQL DATETIME maps to Snowflake TIMESTAMP_NTZ.
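A sketch of a few common MySQL-to-Snowflake mappings, shown on a hypothetical `orders` table (the source column definitions appear in comments):

```sql
-- MySQL source:  created_at DATETIME, price DECIMAL(10,2), is_paid TINYINT(1)
CREATE TABLE orders (
    created_at TIMESTAMP_NTZ,  -- DATETIME   -> TIMESTAMP_NTZ
    price      NUMBER(10,2),   -- DECIMAL    -> NUMBER (same precision/scale)
    is_paid    BOOLEAN         -- TINYINT(1) is often treated as BOOLEAN
);
```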

OLTP (Online Transaction Processing)

Database systems optimized for fast, short transactions like INSERT, UPDATE, DELETE. MySQL, PostgreSQL, and SQL Server are OLTP databases. They handle day-to-day operations.

OLAP (Online Analytical Processing)

Database systems optimized for complex analytical queries on large datasets. Snowflake is an OLAP system designed for reporting, BI, and data analytics workloads.

ETL (Extract, Transform, Load)

A data integration process that extracts data from source systems, transforms it to fit the target schema, and loads it into the destination. Common in data warehouse migrations.

ELT (Extract, Load, Transform)

Modern approach where raw data is loaded into the target first, then transformed using the target system's processing power. Snowflake's architecture is well-suited for ELT.
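A minimal ELT sketch: raw data lands untransformed, then SQL inside Snowflake reshapes it. The `raw_events` table and JSON field names are hypothetical:

```sql
-- 1. Load raw JSON as-is (see Bulk Load for the load step)
CREATE TABLE raw_events (payload VARIANT);

-- 2. Transform using Snowflake's own compute
CREATE TABLE events AS
SELECT payload:user_id::NUMBER        AS user_id,
       payload:ts::TIMESTAMP_NTZ      AS event_time
FROM raw_events;
```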

Incremental Sync

Migrating only new or changed data since the last sync, rather than full table reloads. Uses timestamps or change tracking to identify modified rows.
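A sketch of a timestamp-based incremental pull, assuming a hypothetical `orders` table with an `updated_at` column and a stored high-water mark supplied as a bind variable:

```sql
-- :last_sync_time is the high-water mark recorded after the previous run
SELECT *
FROM orders
WHERE updated_at > :last_sync_time;
```

After each run, the maximum `updated_at` seen becomes the next run's high-water mark.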


Bulk Load

Loading large volumes of data at once, typically from files (CSV, Parquet) staged in cloud storage. Snowflake's COPY INTO command is optimized for bulk loading.
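A minimal COPY INTO sketch, assuming a hypothetical stage named `my_stage` holding CSV files under a `customers/` path:

```sql
COPY INTO customers
FROM @my_stage/customers/                         -- hypothetical stage path
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```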

VARIANT

Snowflake data type for storing semi-structured data like JSON, Avro, or XML. MySQL JSON columns typically map to Snowflake VARIANT.
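A small sketch of storing and querying a VARIANT column; the table and JSON keys are hypothetical:

```sql
CREATE TABLE events (payload VARIANT);

-- PARSE_JSON converts a JSON string into a VARIANT value
INSERT INTO events SELECT PARSE_JSON('{"user": "ada", "score": 42}');

-- Path notation plus a cast extracts typed fields
SELECT payload:user::STRING AS user_name FROM events;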

TIMESTAMP_NTZ

Snowflake timestamp type without timezone information (NTZ = No Time Zone). MySQL DATETIME and TIMESTAMP typically map to this type.

Clustering Key

Snowflake feature that organizes table data to improve query performance. Similar concept to indexes in traditional databases, but managed differently.
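A one-line sketch of defining a clustering key on a hypothetical `orders` table, choosing columns that common queries filter on:

```sql
ALTER TABLE orders CLUSTER BY (order_date, region);
```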

Virtual Warehouse

Snowflake's compute resource that executes queries. Warehouses can be started, stopped, and scaled independently of data storage. You pay only when warehouses run.
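A sketch of creating a warehouse sized for a migration run; the name and settings are illustrative:

```sql
CREATE WAREHOUSE migration_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND   = 60      -- seconds idle before suspending (stops billing)
  AUTO_RESUME    = TRUE;   -- restart automatically when a query arrives
```

AUTO_SUSPEND and AUTO_RESUME keep compute costs tied to actual query activity.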

Time Travel

Snowflake feature allowing access to historical data at any point within a retention period (up to 90 days). Useful for recovering accidentally deleted data.
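Two short Time Travel sketches against a hypothetical `customers` table:

```sql
-- Query the table as it was one hour ago
SELECT * FROM customers AT (OFFSET => -3600);

-- Recover a table dropped within the retention period
UNDROP TABLE customers;
```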

Zero-Copy Cloning

Snowflake feature to instantly create copies of databases, schemas, or tables without duplicating data. Useful for testing migrations without additional storage costs.
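A one-line sketch of cloning a hypothetical `customers` table for migration testing; no data is copied until either table diverges:

```sql
CREATE TABLE customers_test CLONE customers;
```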

Stage

A Snowflake location for storing files before loading into tables. Can be internal (managed by Snowflake) or external (S3, Azure Blob, GCS).
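A sketch of creating an external stage over a hypothetical S3 bucket (the URL and credentials are placeholders):

```sql
CREATE STAGE migration_stage
  URL = 's3://my-bucket/exports/'   -- hypothetical external location
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');
```

An internal stage needs no URL or credentials; files are PUT directly into it.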

ENUM

MySQL data type that stores one value from a predefined list. Snowflake has no ENUM type; such columns are typically converted to VARCHAR, with valid values enforced through a lookup table or in the transformation layer, since Snowflake does not support CHECK constraints.
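A sketch of the conversion, using a hypothetical `users.status` column:

```sql
-- MySQL source: status ENUM('active','inactive','banned')
CREATE TABLE users (
    status VARCHAR(16)  -- allowed values must be enforced outside the DDL
);
```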

Primary Key

Column(s) that uniquely identify each row in a table. While Snowflake accepts PRIMARY KEY syntax, the constraint is not enforced; it is retained as metadata for documentation and query optimization.
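A sketch showing the unenforced constraint on a hypothetical table:

```sql
CREATE TABLE customers (
    id    NUMBER PRIMARY KEY,  -- recorded as metadata only, not enforced
    email VARCHAR(255)
);
-- Duplicate id values would still be accepted;
-- uniqueness must be checked during data validation
```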

Data Validation

Post-migration verification that data was transferred correctly. Includes row count comparison, checksum validation, and sample data verification.
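A sketch of two lightweight checks on the Snowflake side, for a hypothetical `customers` table (the same counts/aggregates would be computed on the source with its own functions and compared):

```sql
-- Row count: compare against the source's count
SELECT COUNT(*) FROM customers;

-- Aggregate hash over key columns as a cheap content checksum
SELECT SUM(HASH(id, email)) FROM customers;
```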

