Ticket #175 (assigned enhancement)

Opened 5 years ago

Last modified 4 years ago

New DatabaseOperations commands backup/restore

Reported by: Robert <trebor74hr@…> Owned by: andrew
Priority: major Milestone: The Future
Component: commands Version: 0.6-pre
Keywords: Cc:


Ticket #173 deals with detecting and handling failed migrations. This ticket proposes one way to recover from "unrevertable" failed migrations with the system's help (and to improve the system along the way).

First, some general rules I have found useful when doing migrations:

  • migration development - develop and test on small data sets; do a full test on a test environment before deploying to production
  • migrating production - DO A BACKUP, review what is to be done, avoid live migrations and conflicts with end-user changes, control the process, reserve enough time for RESTORING in case of failure, and verify that the migration completed correctly

Given this, I think South should have at least two new commands: BACKUP and RESTORE (they can be part of the DatabaseOperations class).

Such commands can help resolve the conflicting state described as CONFL in http://south.aeracode.org/ticket/173#comment:2.

Full backup:

  • each database engine has its own backup/restore procedure
  • a full database backup/restore of a bigger database can take very long and produce very large files
  • a full database backup/restore can be paused or blocked by record locks when run on a live system, which can itself cause conflicting states

That's why I would suggest the following:

  • back up only the data that is about to be changed - a partial backup/restore (on IBM DB2, export/import/load are used for such purposes)
  • request a write lock on the data that is to be backed up/changed/restored (maybe a new DatabaseOperations command - lock table? a new ticket?)
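
A table lock could be requested with engine-specific raw SQL. A minimal sketch of such a hypothetical helper (the name lock_table and the PostgreSQL syntax are my assumptions, not part of South):

```python
def lock_table(cursor, table_name):
    # PostgreSQL syntax; other engines differ (e.g. MySQL uses
    # "LOCK TABLES ... WRITE", DB2 "LOCK TABLE ... IN EXCLUSIVE MODE").
    # The lock is held until the surrounding transaction commits or
    # rolls back, keeping the backed-up data consistent meanwhile.
    cursor.execute('LOCK TABLE "%s" IN EXCLUSIVE MODE' % table_name)
```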

There could be two ways to do backup:

  • native db engine backup/restore commands - these have advantages like speed, extra metadata, stability, etc.
  • db-engine-agnostic backup/restore commands - these have advantages like an independent data format, data that can be transferred between different db engines, etc.

I suggest the following:

  • backup - use a db-engine-agnostic backup - i.e. adapt the logic used in django-admin dumpdata for backup, and django-admin loaddata for restore
  • add DatabaseOperations.backup(backup_name, queryset), which saves <south_migrationhistory.id>_<migration_name>_<backup_name>.json in some predefined directory. The queryset parameter is explained later
  • add additional output to migrate --list, listing all backups found in the directory for each migration
  • add DatabaseOperations.restore(backup_name), which loads <south_migrationhistory.id>_<migration_name>_<backup_name>.json back into the database
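
A rough sketch of what these two methods could look like, reusing Django's serialization machinery (the function names, signatures, and the backups/ directory are my assumptions for illustration; the real API would live on DatabaseOperations):

```python
import os

def backup_filename(history_id, migration_name, backup_name, directory="backups"):
    # Naming scheme proposed above:
    # <south_migrationhistory.id>_<migration_name>_<backup_name>.json
    return os.path.join(directory,
                        "%s_%s_%s.json" % (history_id, migration_name, backup_name))

def backup(queryset, history_id, migration_name, backup_name):
    # Requires a configured Django project; serializes the queryset the
    # same way `django-admin dumpdata` does.
    from django.core import serializers
    path = backup_filename(history_id, migration_name, backup_name)
    with open(path, "w") as fh:
        serializers.serialize("json", queryset, stream=fh)
    return path

def restore(path):
    # Mirror of `django-admin loaddata`: deserialize the JSON fixture and
    # re-save each object, overwriting the current rows by primary key.
    from django.core import serializers
    with open(path) as fh:
        for obj in serializers.deserialize("json", fh):
            obj.save()
```

For example, backup_filename(12, "0005_rename", "pre") would produce backups/12_0005_rename_pre.json.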

The backup method has a queryset parameter to enable partial backups. queryset is a django.queryset object with an optional list of the columns to be backed up (i.e. exported). Note that the primary key must always be included. Example:

  • I want to migrate model Person: column name -> first_name, last_name. Therefore I want to back up the Person table before the actual migration
  • I can do the backup in two ways:
    • the whole Person table with the Person model definition - a full table backup
    • only Person.id (pk), Person.name, Person.first_name, Person.last_name - a column table backup
  • the restore can be done in two ways:
    • from a full table backup - the whole table is erased, recreated, and loaded again
    • from a column table backup - records are updated one by one by primary key, touching only the backed-up columns (name, first_name, last_name). This kind of restore does not handle ALTER TABLE commands.
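
The column-table restore described above boils down to an update-by-primary-key loop. A framework-free sketch (the function name is mine), using a dict keyed by pk as a stand-in for the real table:

```python
def restore_columns(table, backed_up_rows, pk="id"):
    # Update each existing record in place by primary key; only the
    # backed-up columns are overwritten, while other columns keep their
    # current values (which is why ALTER TABLE changes are not handled).
    for row in backed_up_rows:
        table[row[pk]].update(row)
```

For the Person example, restoring a column backup of (id, name, first_name, last_name) would overwrite just those three columns and leave any other column untouched.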

Column table restore also has an issue with new records added to the table after the backup. The restore procedure should return the database to the state it was in before the backup, which means that in this limited backup mode new records must be deleted during restore. How do we detect them? We have the list of ids to be updated; all records that are not updated (pk not in pk_backuped_list) should be deleted.
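
The detection step is a simple set difference; a sketch (the function name is mine):

```python
def records_to_delete(current_pks, backed_up_pks):
    # New records are exactly those whose pk is not in the backup;
    # a faithful column-table restore must delete them afterwards.
    return set(current_pks) - set(backed_up_pks)
```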

What about very large backups - how do we avoid transaction overload? When you want to update 1,000,000 records, this probably cannot be done in one transaction. That is why restore needs to commit from time to time (DB2's import has a commitcount option). This should be an optional parameter of the restore procedure.
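
Periodic commits can be implemented by splitting the rows into batches and committing after each one. A minimal framework-free sketch of the batching (the function name is mine; commit_count mirrors DB2's commitcount option):

```python
def batches(rows, commit_count):
    # Yield lists of at most commit_count rows; the caller restores one
    # batch, then commits, keeping each transaction bounded in size.
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == commit_count:
            yield batch
            batch = []
    if batch:
        yield batch
```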

I think that restore should be callable from the shell too (e.g. migrate --restore or just restore).

My first impression is that this issue can become very complex, so if it is to be implemented, I suggest the first release keep it as simple as possible, but stable.

Change History

comment:1 Changed 5 years ago by andrew

  • Status changed from new to assigned
  • Version set to subversion
  • Milestone set to The Future

This is one of those features that is useful, but not terribly necessary - while, yes, it will save people some time, it'll take ages to develop, and if you're migrating a production server you should have a backup anyway (so if it fails, you can revert quickly).

I'd much rather solve this issue with education about the merits of backups for now, and possibly include it in the future. To this end, I'm marking it as "The Future".

Note: See TracTickets for help on using tickets.