Migrate your data across any source and destination with a single command!
GLoader is a powerful and flexible CLI tool for data migration between different databases. It provides a seamless way to migrate your data from any source database to any destination database. Whether you are upgrading your database or moving data between systems, GLoader makes the process efficient and reliable.
Database \ As | Source | Destination |
---|---|---|
MySQL | ✅ | ❌ (Soon) |
CockroachDB | ❌ (Soon) | ✅ |
PostgreSQL | ❌ (Soon) | ❌ (Soon) |
MongoDB | ❌ | ❌ |
SQLite | ❌ | ❌ |
SQL Server | ❌ | ❌ |
Oracle | ❌ | ❌ |
Redis | ❌ | ❌ |
Cassandra | ❌ | ❌ |
Elasticsearch | ❌ | ❌ |
Kafka | ❌ | ❌ |
RabbitMQ | ❌ | ❌ |
DynamoDB | ❌ | ❌ |
Note: Databases marked with ❌ will be supported soon. However, if you have the time, you can help us support them faster by contributing to the project.
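With the current support matrix, a typical migration reads from MySQL and writes to CockroachDB. The sketch below is illustrative only: the MySQL DSN follows the format used in the examples later in this README, while the CockroachDB DSN (scheme and default port 26257) is an assumption, so check the destination driver's documentation for the exact format.

gloader run "mysql://root:root@tcp(localhost:3306)/source" "cockroachdb://root@localhost:26257/destination"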
curl -fsSL https://repo.gloader.tech/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/gloader.gpg
echo "deb https://repo.gloader.tech/apt * *" > /etc/apt/sources.list.d/gloader.list
sudo apt update && sudo apt install gloader
echo '[gloader]
name=Gloader
baseurl=https://repo.gloader.tech/yum
enabled=1
gpgcheck=1
gpgkey=https://repo.gloader.tech/yum/gpg.key' | sudo tee /etc/yum.repos.d/gloader.repo
sudo yum install gloader
--- OR ---
sudo snap install gloader
brew tap mohammadv184/gloader
brew install gloader
go install github.com/mohammadv184/gloader@latest
You can download the binary builds from the releases page.
You can download the deb and rpm packages from the releases page.
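Whichever method you use, you can verify the installation by printing the built-in help for the run command (the --help flag is listed among the flags below):

gloader run --help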
gloader run <source-dsn> <destination-dsn> [flags]
flags:
--end-offset stringToInt64 end offset for each table (default [])
-e, --exclude strings exclude tables from migration
-f, --filter stringToStringSlice filter data to migrate
--filter-all strings filter data to migrate (all tables)
-h, --help help for run
-r, --rows-per-batch uint number of rows per batch (default 100)
-s, --sort stringToStringSlice sort data to migrate in ascending order
--sort-all strings sort data to migrate in ascending order (all tables)
-S, --sort-reverse stringToStringSlice sort data to migrate in descending order
--sort-reverse-all strings sort data to migrate in descending order (all tables)
--start-offset stringToInt64 start offset for each table (default [])
-t, --table strings migrate only these tables
-w, --workers uint number of workers (default 3)
- source-dsn: The Data Source Name (DSN) used to connect to the source database.
- destination-dsn: The DSN used to connect to the destination database.
- --start-offset: The initial row offset for each table. This sets the starting point for migrating rows from the source to the destination.
- --end-offset: The final row offset for each table, limiting the number of rows migrated.
- --exclude: Exclude specific tables from the migration process.
- --table: Selectively migrate specific tables.
- --filter: Apply data filters to rows being migrated. Use operators such as =, !=, >, >=, <, and <=.
- --filter-all: Apply a universal data filter for all tables.
- --sort: Sort data in ascending order before migration.
- --sort-all: Apply ascending sorting for all tables.
- --sort-reverse: Sort data in descending order before migration.
- --sort-reverse-all: Apply descending sorting for all tables.
- --rows-per-batch: Set the number of rows migrated per batch.
- --workers: Specify the number of parallel migration workers.
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --table users
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --exclude users
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --filter "id > 100"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --filter-all "id > 100"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --sort "id"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --sort-all "id"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --sort-reverse "id"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --sort-reverse-all "id"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --start-offset "users=100"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --end-offset "users=100"
gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --sort-all "id" --rows-per-batch 1000 --workers 10
- Data: A fundamental unit of information comprising a key, value, and data type. Data is the smallest entity in GLoader and represents the content being migrated.
- DataType: Denotes the type of data, such as strings, numbers, booleans, dates, JSON, etc., used in migration.
- DataSet: A collection of data items representing a single row in a relational database.
- DataBatch: A group of DataSets migrated together in a single operation.
- DataBuffer: A storage space containing DataBatches fetched from the source database.
- DataCollection: An assembly of DataSets representing a table in a relational database.
- DataMap: A map associating DataCollections with their respective attributes; it is the equivalent of a table schema in a relational database.
- Database: A collection of DataCollections representing a database in a relational context.
- Migration: The process of relocating data from a source database to a target destination.
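These concepts map onto the run flags above: a DataBatch holds the number of rows set by --rows-per-batch, the DataBuffer holds the batches fetched from the source, and --workers sets how many migration workers run in parallel. As a rough, illustrative tuning sketch (the values are placeholders, not recommendations):

gloader run "mysql://root:root@tcp(localhost:3306)/source" "mysql://root:root@tcp(localhost:3306)/destination" --rows-per-batch 500 --workers 8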
If you discover any security-related issues, please email [email protected] instead of using the issue tracker.
The MIT License (MIT). Please see License File for more information.