Jay Taylor's notes

back to listing index

Import MySQL dump to PostgreSQL database

Original source (stackoverflow.com)
Tags: database postgres postgresql mysql stackoverflow.com
Clipped on: 2016-12-05

I have tried several approaches and scripts found on the internet to import a MySQL dump ("xxx.sql") and an Oracle dump ("xxx.dump") into a PostgreSQL database. Nothing worked for me. Can anyone suggest how to do this?

I want to import xxx.dump from Oracle and xxx.sql from MySQL into a PostgreSQL database.

asked Mar 24 '11 at 9:57
Palani Kannan
How did you create your "xxxx.dump" file from Oracle. Did you use export or expdp or rdbms.get_ddl? – Benoit Mar 24 '11 at 10:16
Accepted answer (13 votes)

Don't expect that to work without editing. Maybe a lot of editing.

mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.

You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got instead of saying just "Nothing worked for me."

answered Mar 24 '11 at 10:16

This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.

This is by far the easiest way of doing it: install it, then run a simple command file (script.lisp) with the following lines:

/* content of script.lisp */
LOAD DATABASE
     FROM mysql://dbuser@localhost/dbname
     INTO postgresql://dbuser@localhost/dbname;

/* run this in the terminal */
pgloader script.lisp

And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.

On a side note: make sure you compile pgloader from source, since at the time of this post the installer has a bug (version 3.2.0).

answered Feb 25 '15 at 1:07
Thanks a lot. It helps – Atul Khanduri Sep 17 '15 at 7:42
I'm very happily surprised. Worked without a hitch. On Mac: brew install pgloader; then the command: pgloader mysql://root@localhost/mydb postgresql:///otherdb – pixelearth Jan 30 at 5:48
@pixelearth : Do you know how to use pgloader in linux ? – ihue Jun 15 at 18:19

The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.

The steps:

  1. Download Pentaho ETL from http://kettle.pentaho.org/ (community version)

  2. Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)

  3. Create a new job

  4. Create a database connection for the MySQL source (Tools -> Wizard -> Create database connection)

  5. Create a database connection for the PostgreSQL source (as above)

  6. Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)

  7. Run the job

answered Mar 24 '11 at 12:05

You could potentially export to CSV from MySQL and then import CSV into PostgreSQL.

answered Mar 24 '11 at 10:15

For those Googlers who are in 2015+:
I've wasted a whole day on this and would like to sum things up.

I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me:

lanyrd/mysql-postgresql-converter @ github.com (Python)

But this alone won't do. When you import your newly converted dump file:

# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql 

PostgreSQL will tell you about messed types from MySQL:

psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR:  type "mediumint" does not exist
LINE 2:     "group_id" mediumint(8)  NOT NULL DEFAULT '0',

So you'll have to fix those types manually as per this table.

In short it is:

tinyint(2) -> smallint  
mediumint(7) -> integer
# etc.

You can use regex and any cool editor to get it done.

MacVim with Substitute works well for this.
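As an illustrative sketch (the sample dump lines below are invented for this example), the same substitutions can also be done with sed:

```shell
# Sample dump lines with MySQL-only integer types (invented for illustration)
printf '"group_id" mediumint(8) NOT NULL DEFAULT '\''0'\'',\n"flag" tinyint(2) NOT NULL\n' > types.sql

# Map MySQL integer types to PostgreSQL equivalents; the display width in () is dropped
sed -E -e 's/mediumint\([0-9]+\)/integer/g' -e 's/tinyint\([0-9]+\)/smallint/g' types.sql > types_pg.sql
cat types_pg.sql
```

Note that tinyint(1) is often used as a boolean in MySQL, so blindly mapping every tinyint to smallint may not be what you want.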

answered Sep 3 '15 at 18:42

It is not possible to import an Oracle (binary) dump to PostgreSQL.

If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax valid for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the ENGINE definitions from the CREATE TABLE statements, adjust the data types, and a lot of other things).
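A minimal sed sketch of the first two of those edits (the sample line is invented; real dumps need far more work than this):

```shell
# A sample MySQL CREATE TABLE line (invented for illustration)
printf 'CREATE TABLE `users` (`id` int(11) NOT NULL) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n' > sample.sql

# Remove backtick quoting and the ENGINE/charset suffix
sed -e 's/`//g' -e 's/ ENGINE=[^;]*//' sample.sql > sample_pg.sql
cat sample_pg.sql
```

The data types (here int(11)) still need to be adjusted separately.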

answered Mar 24 '11 at 10:16

I have this bash script to migrate the data. It doesn't create the tables, because they are created by migration scripts, so I only need to convert the data. I use a list of tables to avoid importing data from the migrations and sessions tables. Here it is, just tested:


# MHOST, MUSER, MPASS, MDB, PDB and PUSER must be set before running
MTABLES="car dog cat"

mysqldump -h "$MHOST" -P 6033 -u "$MUSER" -p"$MPASS" --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables "$MDB" $MTABLES > outputfile.sql

sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
sed -i "1i SET standard_conforming_strings = 'off';" outputfile.sql
sed -i "1i SET backslash_quote = 'on';" outputfile.sql
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;" outputfile.sql
echo "update pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;" >> outputfile.sql

psql -h localhost -d "$PDB" -U "$PUSER" -f outputfile.sql

You will get a lot of warnings that you can safely ignore, such as:

psql:outputfile.sql:82: WARNING:  nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
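To see what the zero-date substitution in the script above does, here is a toy run on an invented sample line (MySQL's '0000-00-00 00:00:00' is not a valid PostgreSQL timestamp):

```shell
# Invented sample row containing a MySQL zero date
printf "INSERT INTO orders VALUES (1, '0000-00-00 00:00:00');\n" > toy.sql

# Same substitution as in the script above
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" toy.sql
cat toy.sql
```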
answered Jul 10 '15 at 23:43

Here is a simple program to create and load all tables from a MySQL database (honey) into PostgreSQL. The type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:

import MySQLdb
from magic import Connect  # private MySQL connect information
import psycopg2

dbx = Connect()                        # MySQL cursor from the private helper
DB = psycopg2.connect("dbname=honey")  # adjust connection parameters as needed
DC = DB.cursor()

mysql='''show tables from honey'''
dbx.execute(mysql); ts=dbx.fetchall(); tables=[]
for table in ts: tables.append(table[0])
for table in tables:
    mysql='''describe honey.%s'''%(table)
    dbx.execute(mysql); rows=dbx.fetchall()
    psql='drop table %s'%(table)
    try: DC.execute(psql); DB.commit()
    except: DB.rollback()  # table may not exist yet

    psql='create table %s ('%(table)
    for row in rows:
        name=row[0]; type=row[1]
        if 'int' in type: type='int8'
        if 'blob' in type: type='bytea'
        if 'datetime' in type: type='timestamptz'
        psql+='%s %s,'%(name,type)
    psql=psql.rstrip(',')+')'  # close the column list
    print psql
    try: DC.execute(psql); DB.commit()
    except: DB.rollback()

    msql='''select * from honey.%s'''%(table)
    dbx.execute(msql); rows=dbx.fetchall()
    n=len(rows); print n; t=n
    if n==0: continue #skip if no data

    cols=len(rows[0])
    for row in rows:
        ps=', '.join(['%s']*cols)
        psql='''insert into %s values(%s)'''%(table, ps)
        DC.execute(psql, row)  # parameterized insert
        n-=1
        if n%1000==1: DB.commit(); print n,t,t-n
    DB.commit()
answered Mar 28 '13 at 19:35

As with most database migrations, there isn't really a cut and dried solution.

These are some ideas to keep in mind when doing a migration:

  1. Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
  2. Primary key sequences will be generated differently in each database.
  3. Foreign keys will be pointing to your new sequences.
  4. Indexes will be different and will probably need tweaking.
  5. Any stored procedures will have to be rewritten.
  6. Schemas. MySQL doesn't use them (at least not since I have used it); PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
  7. Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then enabling them. Also, all of your new sequences will have to be reset to the highest id in each table. If not, the next record that is inserted will fail with a primary key violation.
  8. Rewriting your code to work with the new database. It should work, but probably won't.
  9. Don't forget the triggers. I use create and update date triggers on most of my tables. Each DB handles them a little differently.

Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!
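For point 7, one way to reset a sequence in PostgreSQL is setval combined with pg_get_serial_sequence. The snippet below only generates the statement for a hypothetical users table (the table and column names are assumptions for illustration); you would then run the generated SQL with psql:

```shell
# Build a sequence-reset statement for a hypothetical table/column
tbl=users
col=id
printf "SELECT setval(pg_get_serial_sequence('%s', '%s'), (SELECT COALESCE(MAX(%s), 1) FROM %s));\n" "$tbl" "$col" "$col" "$tbl" > reset_seq.sql
cat reset_seq.sql
```

Repeating this per table (e.g. in a loop over the table names) avoids the primary key violations described above.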

answered Jul 11 '15 at 1:26

Use your xxx.sql file to set up a MySQL database and then use FromMySqlToPostgreSql. Very easy to use: short configuration, and it works like a charm. It imports your database with the primary keys, foreign keys and indices set on the tables. You can even import data alone if you set the appropriate flag in the config file.

The FromMySqlToPostgreSql migration tool by Anatoly Khaytovich provides an accurate migration of table data, indices, PKs, FKs... It makes extensive use of the PostgreSQL COPY protocol.

See here too: PG Wiki Page

answered Aug 15 at 16:11

I was able to copy tables from MySQL to Postgres using the DBCopy Plugin for SQuirreL SQL Client. This was not from a dump, but between live databases.

answered Aug 2 at 11:20

Mac OS X

brew update
brew install homebrew/php/php55-pdo-pgsql
brew tap benesch/pgloader
brew install pgloader

pgloader mysql://user@host/db_name postgresql://user@host/db_name
answered Jun 15 at 17:28

I had to do this recently with a lot of large .sql files, approximately 7 GB in size. Even VIM had trouble editing those. Your best bet is to import the .sql into MySQL and then export it as a CSV, which can then be imported into Postgres.

But the MySQL export as CSV is horrendously slow, as it runs a select * from yourtable query. If you have a large database/table, I would suggest using some other method. One way is to write a script that reads the SQL inserts line by line and uses string manipulation to reformat them into "Postgres-compliant" insert statements, and then executes these statements in Postgres.

answered Jul 11 '15 at 0:32
