Backlog Enterprise server migration process

For Backlog Enterprise Professional Edition (self-hosted), you can follow the steps below to migrate your Backlog server. 

 

Step 1: Check the system requirements

  • The database installed on the migration destination must be the same version as on the migration source.
  • The Backlog version installed on the migration destination must be the same as on the migration source.


To install Backlog on the migration destination, please refer to the
Backlog Enterprise Installation Guide, in particular the “Step 4: Installation” section. 

 

Step 2: Overview of data that needs to be migrated

The following data needs to be migrated. 

  • Database
  • Directory where icon images are stored
  • Directory where Subversion data is stored
  • Directory where Git data is stored
  • Directory that stores search index data
  • Directory that stores shared file (WebDAV) data

Each of these directories exists under /opt/backlog/data, which will be referred to as the data area directory. 
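Before starting, it can help to confirm that every directory listed above is actually present. A minimal sketch of such a check (a temporary directory stands in for the data area so it can be tried anywhere; on a real server, set BACKLOG_DATA to /opt/backlog/data instead):

```shell
# The nine data directories to migrate, as listed above.
DIRS="image svn git lucene/index solr/issue/data solr/wiki/data \
solr/pull_request/data solr/shared_file/data share/dav"

# Stand-in data area so this sketch runs anywhere; on a real server,
# use BACKLOG_DATA=/opt/backlog/data instead.
BACKLOG_DATA=$(mktemp -d)
for d in $DIRS; do mkdir -p "$BACKLOG_DATA/$d"; done

# The check itself: report any directory that is missing.
for d in $DIRS; do
  [ -d "$BACKLOG_DATA/$d" ] && echo "found: $d" || echo "missing: $d"
done
```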

 

Step 3: Migration process

The commands below assume the following values: 

  • Database: backlog
  • Database user: backlog
  • Migration data storage location on the destination server: /mnt
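If you script the migration, these values can be captured once as shell variables. The variable names below are illustrative, not part of Backlog:

```shell
# Assumed migration parameters (variable names are ours, values from above)
DB_NAME=backlog      # database name on both servers
DB_USER=backlog      # database user
STAGING_DIR=/mnt     # where transferred files are stored on the destination
```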

 

Follow the steps below to complete the migration. 

Migration source

1. Stop Backlog

  • For Red Hat Enterprise Linux 6 server or CentOS 6
    service backlog-www stop
    service backlog-app stop
    service backlog-git stop
    service backlog-api stop

  • For Red Hat Enterprise Linux 7 or CentOS 7
    systemctl stop backlog.target

 

2. Create a database dump file

  • For PostgreSQL
    pg_dump -U backlog backlog > backlog-dump.sql

  • For MySQL
    mysqldump -u backlog -p --opt backlog > backlog-dump.sql

 

3. Compress the data area directories

cd /opt/backlog/data
tar cvf backlog-data-image.tar image
tar cvf backlog-data-svn.tar svn
tar cvf backlog-data-git.tar git

cd lucene
tar cvf backlog-data-lucene-index.tar index

cd ../solr/issue
tar cvf backlog-data-solr-issue-data.tar data
cd ../wiki
tar cvf backlog-data-solr-wiki-data.tar data
cd ../pull_request
tar cvf backlog-data-solr-pull_request-data.tar data
cd ../shared_file
tar cvf backlog-data-solr-shared_file-data.tar data

cd ../../share
tar cvf backlog-data-share-dav.tar dav
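Note that each archive above is created from inside its parent directory, so tar stores relative paths (image/..., not /opt/backlog/data/image/...); this is what makes extraction with -C work in step 9. A self-contained illustration of the pattern, using a throwaway temporary directory:

```shell
# Archive from the parent directory so paths inside the tar stay relative.
workdir=$(mktemp -d)
mkdir -p "$workdir/data/image"
echo "icon bytes" > "$workdir/data/image/icon.png"

cd "$workdir/data"
tar cf "$workdir/backlog-data-image.tar" image   # stores image/icon.png

# Extraction with -C then recreates image/ under the chosen directory.
mkdir "$workdir/restore"
tar xf "$workdir/backlog-data-image.tar" -C "$workdir/restore"
ls "$workdir/restore/image"   # icon.png
```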

 

Destination

4. Transfer the dump file and the compressed data area archives from the migration source

From the migration destination, fetch the files (for example with scp) and save them under /mnt. 

 

5. Stop Backlog

  • For Red Hat Enterprise Linux 6 server or CentOS 6

service backlog-www stop
service backlog-app stop
service backlog-git stop
service backlog-api stop

  • For Red Hat Enterprise Linux 7 or CentOS 7

systemctl stop backlog.target

 

6. Delete the database created during installation and create a new database

  • For PostgreSQL

dropdb -U backlog backlog
createdb -U backlog -E UTF8 backlog

  • For MySQL

mysqladmin -u backlog -p drop backlog
mysqladmin -u backlog -p create backlog

 

7. Empty the data area directory

rm -rf /opt/backlog/data/image
rm -rf /opt/backlog/data/svn
rm -rf /opt/backlog/data/git
rm -rf /opt/backlog/data/lucene/index
rm -rf /opt/backlog/data/solr/issue/data
rm -rf /opt/backlog/data/solr/wiki/data
rm -rf /opt/backlog/data/solr/pull_request/data
rm -rf /opt/backlog/data/solr/shared_file/data
rm -rf /opt/backlog/data/share/dav

 

8. Restore the database dump file

  • For PostgreSQL
    psql -U backlog backlog < /mnt/backlog-dump.sql
  • For MySQL
    mysql -u backlog -p backlog < /mnt/backlog-dump.sql

 

9. Decompress the data area archives and copy the contents into the destination data area

mkdir /mnt/backlog-data
mkdir /mnt/backlog-data/image
tar xvf /mnt/backlog-data-image.tar -C /mnt/backlog-data/image
cp -r /mnt/backlog-data/image/image /opt/backlog/data

mkdir /mnt/backlog-data/svn
tar xvf /mnt/backlog-data-svn.tar -C /mnt/backlog-data/svn
cp -r /mnt/backlog-data/svn/svn /opt/backlog/data

mkdir /mnt/backlog-data/git
tar xvf /mnt/backlog-data-git.tar -C /mnt/backlog-data/git
cp -r /mnt/backlog-data/git/git /opt/backlog/data

mkdir /mnt/backlog-data/lucene-index
tar xvf /mnt/backlog-data-lucene-index.tar -C /mnt/backlog-data/lucene-index
cp -r /mnt/backlog-data/lucene-index/index /opt/backlog/data/lucene

mkdir /mnt/backlog-data/solr-issue-data 
tar xvf /mnt/backlog-data-solr-issue-data.tar -C /mnt/backlog-data/solr-issue-data
cp -r /mnt/backlog-data/solr-issue-data/data /opt/backlog/data/solr/issue

mkdir /mnt/backlog-data/solr-wiki-data 
tar xvf /mnt/backlog-data-solr-wiki-data.tar -C /mnt/backlog-data/solr-wiki-data
cp -r /mnt/backlog-data/solr-wiki-data/data /opt/backlog/data/solr/wiki

mkdir /mnt/backlog-data/solr-pull_request-data
tar xvf /mnt/backlog-data-solr-pull_request-data.tar -C /mnt/backlog-data/solr-pull_request-data
cp -r /mnt/backlog-data/solr-pull_request-data/data /opt/backlog/data/solr/pull_request

mkdir /mnt/backlog-data/solr-shared_file-data
tar xvf /mnt/backlog-data-solr-shared_file-data.tar -C /mnt/backlog-data/solr-shared_file-data
cp -r /mnt/backlog-data/solr-shared_file-data/data /opt/backlog/data/solr/shared_file

mkdir /mnt/backlog-data/share-dav
tar xvf /mnt/backlog-data-share-dav.tar -C /mnt/backlog-data/share-dav
cp -r /mnt/backlog-data/share-dav/dav /opt/backlog/data/share
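The unpack-and-copy blocks above all follow the same pattern (scratch directory, extract, copy into the data area), so they can also be expressed as a loop. A sketch under stand-in paths: SRC and DST replace /mnt and /opt/backlog/data, and a small fixture (a subset of the directories) is built first so the sketch runs anywhere:

```shell
# Stand-ins for /mnt (SRC) and /opt/backlog/data (DST).
SRC=$(mktemp -d); DST=$(mktemp -d)

# Fixture: fake source archives, one per data directory (subset shown).
for name in image svn git; do
  mkdir -p "$SRC/fixture/$name"
  echo "$name" > "$SRC/fixture/$name/marker"
  tar cf "$SRC/backlog-data-$name.tar" -C "$SRC/fixture" "$name"
done

# The pattern from the steps above: scratch dir, extract, copy into place.
for name in image svn git; do
  mkdir -p "$SRC/backlog-data/$name"
  tar xf "$SRC/backlog-data-$name.tar" -C "$SRC/backlog-data/$name"
  cp -r "$SRC/backlog-data/$name/$name" "$DST"
done
```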

 

*Specify backlog as the owner and group of the data area directory.

chown -R backlog:backlog /opt/backlog/data

 

10. Launch Backlog

  • For Red Hat Enterprise Linux 6 server or CentOS 6

service backlog-app start
service backlog-git start
service backlog-www start
service backlog-api start

  • For Red Hat Enterprise Linux 7 or CentOS 7

systemctl start backlog.target

 

After completing the data migration, access your Backlog Enterprise and check, and if necessary update, the space URL under Space Settings > Edit Space.


