
SQLBackup README


  • This is the GitHub sqlbackup wiki - the documentation.
  • The skin itself lives in the weewx-sqlbackup repository.

The short version...

This is an optional skin (install it or not) that you configure yourself and that runs under weeWX's control. It will then perform backups of the main databases, at preset intervals, for a preset timeframe. There are plenty of options available to change those presets (and others) to suit your needs.

The long version...

This skin (sqlbackup) uses a Search List Extension (SLE) of the same name to call mysqldump and/or sqlite3 to dump data from the weeWX database. If it's MySQL (MariaDB) then it will dump a user-specified timeframe; if it's sqlite then it will dump all of it. The default option in both cases is to only dump the archive tables. It will do this at regular intervals as specified by the report_timing feature of weeWX.

If you dump the whole database, and it's large, you can interfere with weeWX's operation and odd things may start to happen. This will depend on your CPU, database size, weeWX setup, maybe even the weather! In most cases this probably won't matter too much and you'll just get a message about skipping reports. If we lock weeWX out of its database for too long though, the weird and wonderful may start to occur, so it's best we don't push it too far.

This skin was originally configured for MySQL (MariaDB) databases only, where mysqldump can be configured to do a partial dump; we can therefore limit ourselves to a small, very manageable portion of the database. It has since expanded to incorporate sqlite databases, where it captures the whole database, so it may be slower and more prone to interfering with weeWX. But compared to MySQL, sqlite is not as demanding, so it may still be fit for duty. Because we are getting a full backup of the sqlite database on each run, we can perhaps do them less frequently, and here the report_timing feature really comes into its own. Sqlite databases can also be backed up by simply copying them; if you want a similar 'skinned' approach that does just that, have a look at Using the RSYNC skin as a backup solution. Both these methods aim to create a backup during a quiet window of time (when there are no database writes) that's available within the weeWX cycle.

NB: The above notes still apply but the ability to do a full sqlite dump has been made optional. The skin is now configured to do partial sqlite dumps by default. This reduces the load on weewx, but does increase the effort required for a restore. Swings and Roundabouts, but you do have a choice.

With the variation in weeWX setups, the only way to know how it will work for you is to give it a try. Just start off gently and DON'T ask for too much at once; be conservative with what you're asking for.

Installation

This SLE's prerequisites are:

* sqlite3 (which I don't think is installed by default?)
* mysqldump (which is part of the default mysql install; if you don't have a mysql database you won't need it and it will be ignored)

Everything else should already be there, as the remaining tools are listed under Debian as [essential] packages - except 'free', but I believe that's also included in a default install.

Otherwise...

1. Run the installer:

wee_extension --install weewx-sqlbackup-master.zip

2. Edit the sqlbackup/skin.conf file to suit your installation:

  • Select a suitable report_timing stanza

  • Select a suitable archive period and name (sql_period and sql_label).

  • Check that the defaults are correct for your setup.

  • In particular, check the backup directory paths (mysql_bup_dir and sql_bup_dir). They will be created on the first run.

  • The default is to generate reports - sqlbackup/index.html.

  • The templates take their style from the newskin branch of weeWX - Seasons

3. Restart weewx:

sudo /etc/init.d/weewx stop

sudo /etc/init.d/weewx start
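
If your system uses systemd rather than the SysV init script shown above, the equivalent is likely (assuming a weewx service unit exists):

sudo systemctl restart weewx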

The html template makes heavy use of #include files to generate the report page. These files are located in /tmp/sqlbackup and will remain after each run (in case you need to inspect them or see what's happening). They are blanked, deleted and re-generated on each run of the skin, ie: they are all renewed and should reflect each run's output. There are no stale files; they are unique to each run - thus their placement in the /tmp directory.
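
For example, after the skin has run at least once you can list them to see what the last run produced:

ls -l /tmp/sqlbackup/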

Notes for MySQL (MariaDB) database

If your database is a MySQL one then, to use this backup method effectively, you need to do a full dump (backup) of your main weeWX database first. Of note: while dumping the database may take seconds, stretching to minutes, rebuilding it can take hours. The best advice I can offer is to rebuild it as a working copy ASAP. You will then bring that up-to-date with the partial dumps. Whatever you do, test your restore strategy well before you need it.

A strategy I've used is to have a working, preferably empty, sqlite database ready to go. If you need to rebuild your MySQL database you can simply reconfigure weewx to use archive_sqlite while the MySQL database is restored. Once the MySQL restore is finished, swap weewx back to archive_mysql so that it points to the restored database. Then dump that temporary sqlite database into an sql file, convert it, and import it into the restored and working MySQL database to fill the gaps nicely - well, those that you can. This method saves weewx from sitting idle while you do the frustratingly long MySQL restore. Visit the weeWX wiki and view Transfer from sqlite to MySQL for help on the transfer.

However you choose to do it, we need a starting point. To get that initial database, you can do this manually by invoking mysqldump from the command line, similar to the following.

Full dump: Method 1

Get the epoch date string to use in the filename (optional but helpful later).

date +%s
1497830683

Dump the data into a suitably named file:

mysqldump -uweewx -p -hlocalhost weatherpi archive --single-transaction --skip-opt |
        gzip > /{your_backup_directory}/wholebackup-1497830683.sql.gz

Adding the epoch date string to the filename helps in determining its age and the date to update from. You'll then use the partial backups created by this skin to restore from that date onwards.
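
To turn that epoch string back into a human-readable date later (assuming GNU date, as used elsewhere in these notes):

date -d @1497830683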

When configuring your sqlbackup, DO turn on sql_debug in the skin.conf file and view the output in /var/log/syslog (or your default log) or in the report page, sqlbackup/index.html

Full dump: Method 2

Or, if you want to live dangerously and see just how long it does take to generate a whole backup with this skin (and whether my warnings are overstated), then configure the skin.conf for that single run. Picking a timestamp that precedes your database starting point should do it. Unless you have over 2 years of data the following should work - adjust accordingly.

60*60*24*365*2
= 63072000

Plug that value in as your time period - sql_period = "63072000" and change the file label to something meaningful - sql_label = "allofit" so that you know what it is.
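
As a sketch, the relevant skin.conf lines for that single run might look like the following (the values are the example above, not the defaults):

[sqlbackup]
        sql_period = "63072000"  # roughly 2 years, in seconds
        sql_label = "allofit"    # used in the dump filename

The log extract below shows the sort of output to expect from such a run.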

Jun 22 16:13:47 masterofpis weewx[28785]: sqlbackup : DEBUG:  mysql database is weatherpi
[...]
Jun 22 16:14:35 masterofpis weewx[28785]: engine: Launch of report thread aborted: existing report thread still running
[...]
Jun 22 16:15:18 masterofpis weewx[28785]: sqlbackup : DEBUG: 90.79 secs to run /usr/bin/mysqldump -uXxXxX -pXxXxX -hlocalhost -q  weatherpi archive -w"dateTime>1435039127"  -single-transaction --skip-opt
[...]
Jun 22 16:15:51 masterofpis weewx[28785]: imagegenerator: Generated 22 images for StandardReport in 6.35 seconds

-rw-r--r-- 1 root 58011783 Jun 22 16:15 weatherpi-host.masterofpis-201706221613-allofit.gz

The above shows one hiccup for my single run - it took 90 seconds, which exceeds the 60 second archive interval here, so weeWX skipped the reports for that time interval. After that it's back to normal, which is OK by me.

And FWIW, opening the resulting file and finding the first dateTime entry shows I overestimated at 2 years.

date -d @1451720100
Sat  2 Jan 18:35:00 AEDT 2016

Looks like that's all I've got in there. Yep! It seems I'll have to find and restore a few earlier databases.

Another aside: it took 7 hours on a quad-core AMD A8-5600K to reinstate that database (1-minute weewx archive_interval). There's something to be said for doing this on a (semi-)regular basis, and with small dump files.

Configuring: skin.conf

The skin.conf file as installed is a simplified version and should run as is, ie: it should pick up the default user, password and database settings as configured in your weewx.conf file. It will then do one 24-hour snapshot (+ 900 seconds for overlap) at 2 minutes past midnight, every day. It will fit most cases with some timing and backup location tweaks.

There are other options that don't appear in the installed file but can be set; these are included in the listing below.

###############################################################################
# Copyright (c) 2017 Glenn McKechnie glenn.mckechnie   gmail.com              #
# With credit to Tom Keffer tkeffer   gmail.com                               #
#                                                                             #
# SQLBACKUP CONFIGURATION FILE                                                #
#  This 'report' generates gzipped sql backup files from a running            #
#  weewx database.                                                            #
#                                                                             #
###############################################################################
#
# Report timing:
# see http://www.weewx.com/docs/customizing.htm#customizing_gen_time
#
# You are strongly encouraged to use the report timing feature to limit this
# skin to the occasional, suitably sized (see sql_period) time span.
# The default is 24 hours of the main database, taken daily.
# Tweak to suit. Test the result.
#
#   2 min after midnight
report_timing = '2 0 * * *'

# First time? Need a refresher? There's a html README file in the skins directory
# sqlbackup/sqlbackupREADME.html which can be found on your weewx web server.
# Not all options are listed below, see the html README if needed.

[sqlbackup] # This section heading is all lower case to enable skin duplication.

        #sql_user = "your_user_if_different_to_weewx.conf"
        #sql_pass = "your_password_if_different_to_weewx.conf"

        # default is preset as localhost
        #sql_host = "localhost"

        # The default database is read from weewx.conf. It can be overridden here.
        # Use a space separated list for multiple databases.
        #mysql_database = "weatherp mesoraw"
        #sql_database = "pmon.sdb weewx.sdb"

        # Default is preset as 'archive' Change to '' to dump the dailies as well.
        # The daily tables get rebuilt when weewx restarts so we ignore them.
        # If doing sqlite partial dumps, this option won't work. It's only valid
        # for a full dump of sqlite (mysql works for both full and partial dumps).
        #sql_table = ""

        # The backup location default is preset as /var/backups
        mysql_bup_dir = "/opt/backups/mysql-backups"
        sql_bup_dir = "/opt/backups/sql-backups"

        # A dated_dir structure. Defaults to 'True'. Turn it off with 'False'

        #sql_dated_dir = 'False'

        # Generate a summary report for the last run. Useful for obvious errors,
        # not useful for serious testing - test your backups to ensure they
        # actually do what you want!
        # Default is preset to "True". To disable, uncomment the following line.
        #sql_gen_report = 'False'

        # Time span for backups; it's encouraged to limit the backup to a small
        # rolling window.
        # sql_period, sql_label. The sql_label should match the sql_period, for
        # your benefit, ie:  86400 seconds = 24 hours , 604800 seconds = 7 days
        # This value will always be increased by 900 seconds (86400 + 900) to
        # ensure backups overlap.
        #sql_period = '604800' #time in seconds ('86400' is the default setting)
        #sql_label = '7days' # ('daily' is the default) this should be a
        # meaningful string for the filename, use text to match your sql_period
        # ##sql_period = '1262264400' # time in seconds (2 years)
        # ##sql_label = '2years' # meaningful string for the filename

        # Originally the sqlite databases were dumped in their entirety. This
        # could place an excessive load on weewx. The default now is to always
        # do partial dumps - True. Set to False for initial, full backups only?
        # Use with discretion.
        #part_sqlite = 'False'

        # Default is preset to "0" so commenting it out will disable DEBUG logging
        # from this skin (will also log when weewx.conf debug is set to "2")
        # Set sql_debug to "2" for extra DEBUG info in the logs.
        # Set sql_debug to "4" for extra DEBUG info in the report page
        # sqlbackup/index.html
        sql_debug = "4"

        # The following should rarely, if ever need to be altered.

        # hide_password: hide the password in logs. Default is True, set to False
        # if you really, really want to bypass this small bit of obscurity.

        # inc_dir: location of .inc files used in html generation; defaults
        # to /tmp/sqlbackup. These are temporary files only, but are needed for
        # the cheetah templates (see Seasons skin in newskin branch or latest
        # release) They aren't persistent so /tmp is as good a spot as any.
###############################################################################
                                                                                                                                                                                                                    
[CheetahGenerator]
    # The CheetahGenerator creates files from templates.  This section
    # specifies which files will be generated from which template.

    # Possible encodings are 'html_entities', 'utf8', or 'strict_ascii'
    encoding = html_entities

    search_list_extensions = user.sqlbackup.SqlBackup

    [[ToDate]]
        # Reports that show statistics "to date", such as day-to-date,
        # week-to-date, month-to-date, etc.
        [[[index]]]
            template = index.html.tmpl

###############################################################################

[CopyGenerator]
    # The CopyGenerator copies files from one location to another.

    # List of files to be copied only the first time the generator runs
    copy_once = sqlbackup.css, sqlbackupREADME.html

    # List of files to be copied each time the generator runs
    # copy_always =

###############################################################################

[Generators]
    generator_list = weewx.cheetahgenerator.CheetahGenerator, weewx.reportengine.CopyGenerator
###############################################################################
# Copyright (c) 2010 Tom Keffer <tkeffer@gmail.com>                           #
# STANDARD SKIN CONFIGURATION FILE                                            #
###############################################################################

# reminders of date conversion from shell prompt.
#
# date -d "11-june-2017 21:00:00" +'%s'
# 1497178800
#

Working with the MySQL dump files

When configuring sqlbackup, DO turn on sql_debug in the skin.conf file. Set it to at least 2 for system logging (/var/log/syslog). If you're generating the html report then set it to 4 and you'll find the debug output at the bottom of the sqlbackup/index.html page. Pay particular attention to the times returned in the DEBUG lines. Make sure they are sane. (Remember the warnings above?)
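
A quick way to pull just those lines out of the log (assuming the default /var/log/syslog location) is something like:

grep 'sqlbackup' /var/log/syslog | tail -n 20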

The partial dumps created by this skin have a header (which includes the CREATE TABLE statement), a lot of INSERT statements, and a footer.

-- MySQL dump 10.13  Distrib 5.5.55, for debian-linux-gnu (i686)
--
-- Host: localhost    Database: weatherpi
-- ------------------------------------------------------
-- Server version       5.5.55-0+deb8u1
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */; 
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */; 
 
 -- 
 -- Table structure for table `archive` 
 -- 

/*!40101 SET @saved_cs_client = @@character_set_client */; 
/*!40101 SET character_set_client = utf8 */; 
CREATE TABLE `archive` ( 
`dateTime` int(11) NOT NULL, 
`usUnits` int(11) NOT NULL,
`interval` int(11) NOT NULL, 
`barometer` double DEFAULT NULL, 
`pressure` double DEFAULT NULL,
`altimeter` double DEFAULT NULL, 
[...]
`windBatteryStatus` double DEFAULT NULL, 
`rainBatteryStatus` double DEFAULT NULL, 
`outTempBatteryStatus` double DEFAULT NULL,
`inTempBatteryStatus` double DEFAULT NULL, 
`lightning` double DEFAULT NULL, 
`rainCount1` double DEFAULT NULL,
`rainCount2` double DEFAULT NULL,
`rainCount3` double DEFAULT NULL,
`rainCount4` double DEFAULT NULL,
PRIMARY KEY (`dateTime`),
UNIQUE KEY `dateTime` (`dateTime`) 
); 
/*!40101 SET character_set_client = @saved_cs_client */; 
 
--
-- Dumping data for table `archive`
--
-- WHERE:  dateTime>1497622847

Everything above is the header (we probably need that only once, see below).

Below are the actual INSERT statements (as many as cover the time we need to restore; we will gather those together in a single clump... keep reading).

INSERT INTO archive VALUES (1497622860,17,1,1025.077,970.031046132267,1023.07394604233,2.86341264282514,21.351575,4.703125,36.0077,100,0.937546883483462,78.16020796314,1.05551635888867,78.5858585858586,0,0,4.703125,4.703125,4.703125,0.000408665001435199,1,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,0,NULL,NULL,NULL,NULL,4.95,4.95,4.95,4.93,NULL,20227104,NULL,404,556);
INSERT INTO archive VALUES (1497622920,17,1,1025.077,970.056190971907,1023.10020511329,3.01429914403898,21.320325,4.734375,36.0083,100,0.775430162444809,77.9557099825767,0.931337963725294,77.7777777777778,0,0,4.734375,4.734375,4.734375,0.000409593627050564,17.220975,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,0,NULL,NULL,NULL,NULL,4.9525,4.95,4.95,4.93,NULL,20227104,NULL,404,556);
 [...]
INSERT INTO archive VALUES (1497622980,17,1,1025.077,970.021751502796,1023.06423954295,3.02441428710914,21.3047,4.625,36.005275,100,0.574028000130207,78.0280725907076,0.711203535935679,78.5858585858586,0,0,4.625,4.625,4.625,0.000410737550744648,1,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,0,NULL,NULL,NULL,NULL,4.95,4.95,4.95,4.93,NULL,20227104,NULL,404,556);

The INSERT statements finish here.

And we continue below with the footer (which is also probably only needed once, see below.)

/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; 

-- Dump completed on 2017-06-18  0:20:47

Any one of these files will re-populate a database by itself. Attempting the same process immediately with another file will create an error, as each file has a CREATE TABLE statement. So these files are one-shot only - unless we modify them.

To add data from another of these files, you need to do some editing.

To restore using more than one file...

We need:-

one only header

As many INSERT INTO statements as are required for the database update.

one only footer

For example, if we were to do the following, we'd have success.

mkdir /tmp/dump-work
cd /tmp/dump-work
cp our_partial_backups to_here
gunzip *

That's the groundwork; we should now be ready to start. Create some duplicate files, then edit the header and footer copies so that they contain only those sections (we don't want any INSERT INTO statements): header = everything above the first INSERT INTO statement; footer = everything below the last INSERT INTO statement.

cp weatherpi-host.masterofpis-201706150020-24hours header
cp weatherpi-host.masterofpis-201706150020-24hours footer
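
If you'd rather not trim the copies by hand, a sketch along these lines (assuming GNU sed and tac, and that every data line starts with INSERT INTO) does the same job:

# everything above the first INSERT INTO line becomes the header
sed '/^INSERT INTO/,$d' weatherpi-host.masterofpis-201706150020-24hours > header
# everything below the last INSERT INTO line becomes the footer
tac weatherpi-host.masterofpis-201706150020-24hours | sed '/^INSERT INTO/,$d' | tac > footer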

We now need our data. Duplicate the required files and edit the results. With these we only want the INSERT INTO statements - that's all the middle data, the bulk of the file. We delete the headers and footers within these files.

cp weatherpi-host.masterofpis-201706150020-24hours 1
cp weatherpi-host.masterofpis-201706151033-24hours 1a
cp weatherpi-host.masterofpis-201706170020-24hours-middle 2
cp weatherpi-host.masterofpis-201706190021-daily 3
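
Alternatively, rather than deleting the headers and footers by hand, you could keep only the INSERT lines with grep (the same trick the pseudo script further down uses), for example:

grep -e '^INSERT INTO' weatherpi-host.masterofpis-201706150020-24hours > 1
grep -e '^INSERT INTO' weatherpi-host.masterofpis-201706151033-24hours > 1a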

Using these modified files we use sort to remove duplicate lines and create one compact file containing all the INSERT statements that we want.

sort -u 1 1a 2 3 > tempins
# the order really does not matter, sort will do what it says and the -u flag will get rid of the dupes.

We now create a new file with a header, all the INSERTs we require, plus the footer. We're back to the start, but bigger and better.

cat header  > new_file.sql
cat tempins >> new_file.sql
cat footer >> new_file.sql

Create the database

mysql -u root -p
create database dumptestnew;
quit;
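
Or non-interactively, as a one-liner:

mysql -u root -p -e 'create database dumptestnew;'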

Redirect our new file into mysql and we will have a database consisting of that data.

mysql -u root -p dumptestnew < new_file.sql

And if you want to, dump that and compare it to what we restored from the new_file. They should be the same, where it matters!

mysqldump -u root -p -q --single-transaction --skip-opt dumptestnew > dumptestnew-compare
vim -d new_file.sql dumptestnew-compare

That's an outline of the process. Names have obviously been changed to suit my setup; tweak them to suit yours.

A pseudo script

The following is a pseudo bash script - it will work as is (after changing names!) but you'll need to be familiar with vim and the process as outlined above, re: headers and footers

#!/bin/sh

mkdir restore
cd restore

cp  ../weatherpi-host.masterofpis-*.gz .

for i in * ; do gunzip $i; done
for i in * ; do grep $i -e INSERT >> all.txt ; done
# or, if you skip the gunzip step above, this works directly on the gzipped files
#for i in * ; do zcat $i | grep -e INSERT >> all.txt ; done

sort -u all.txt > allsorted

#cp  weatherpi-host.masterofpis-201706130020-24hours  head.txt
#cp  weatherpi-host.masterofpis-201706130020-24hours  tail.txt
# and then manually edit these resulting files
# or we can just use vim to do several steps at once..
vim weatherpi-host.masterofpis-201706130020-24hours # pick one of the dump files
# edit file to keep header section and save that buffer
#:w head.txt
# undo and re-edit to keep tail section and save that buffer
#:wq! tail.txt

cp head.txt newdump.sql
cat allsorted  >> newdump.sql
cat tail.txt  >> newdump.sql

mysql -u root -p
# mysql > drop database newdump;
# mysql > create database newdump;

date && mysql -u root -p newdump < newdump.sql && date

Notes for sqlite (.sdb) databases

There are now 2 methods to back up the sqlite databases: the original, which dumps the lot, and a newer but less tested method that duplicates the mysql partial dumps. That's not to say the partial method doesn't work. I've stepped through the process and it reassembles just fine; I just haven't used it when I really needed to - that's the acid test. :-)

Full dump: Method 1

If the skin.conf setting part_sqlite = 'False' is set / uncommented, sqlite3 will back up the whole database (less the daily tables, if you leave sql_table at the default 'archive').

The process to dump an sqlite database goes a lot faster than the mysqldump process. This doesn't mean that you can ignore the warnings outlined above. It will still take time, and you won't know how long that will be until you've tried it out. If it does fail badly then try another method, such as the one using Rsync outlined on the weeWX wiki (see the link given at the start).

When you are configuring sqlbackup, the same advice as above applies: DO turn on sql_debug in the skin.conf file. Set it to at least 2 for system logging (/var/log/syslog). If you're generating the html report then set it to 4 and you'll find the debug output at the bottom of the sqlbackup/index.html page. Pay particular attention to the times returned in the DEBUG lines. Make sure they are sane. (Remember the warnings above?)

To restore it...

gunzip pmon-host.masterofpis-201706210841-daily.gz
sqlite3 pmon.sdb < pmon-host.masterofpis-201706210841-daily
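
If you'd rather not leave an uncompressed copy lying around, piping the gzipped dump straight in works just as well:

zcat pmon-host.masterofpis-201706210841-daily.gz | sqlite3 pmon.sdb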

check it using sqlite3 and pragma integrity_check...

09:21 AM $ sqlite3 pmon.sdb
SQLite version 3.8.7.1 2014-10-29 13:59:56
Enter ".help" for usage hints.
sqlite> pragma integrity_check;
ok
sqlite> .quit

or

09:22 AM $ echo 'pragma integrity_check;' | sqlite3 pmon.sdb
ok

Partial dump: Method 2

In the interests of removing some of the burden that the full sqlite dump process imposes on weeWX, there is now a process that mimics the partial dumps. Simply put: it uses the sql_period value in the skin.conf file to set the size of the partial dump, and uses the sql_label to help name the gzipped sql file accordingly (previously these were used only by the mysqldump process). Those values are configurable if you want to change the defaults.

This method is a little simpler than that described for the mysql.sql files above. It generates a schema file if it doesn't find one in the backup directory (similar to the headers we extracted for the mysql process above). The schema will be extracted for each sqlite database, and if you are using the dated directory option then that will happen each day (new directory, no schema file, new one generated). This schema file sets up the (new) blank database and from there it's just a matter of rebuilding that (now) primed database with the values required. We can use one of the backup files as is, or if you need more than one then sort -u will come to your aid again.

As for the mysql method, it's best to have a saved database as a starting point. You don't really want to build up a year or more of old data from weekly backup files, especially when you're under the pump. Best to keep up-to-date, at least a little bit. You can make that single large backup from the command line, by setting a suitably large sql_period (as for mysql above), or by switching the skin.conf to do a full dump for one run - that's where part_sqlite = 'False' comes in. You could then use wee_reports to trigger that single run.
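
As a rough sketch, that one-off command-line dump might look like the following (the database path is an example - use the one from your weewx.conf):

echo '.dump archive' | sqlite3 /var/lib/weewx/weewx.sdb |
        gzip > /{your_backup_directory}/wholebackup-sqlite-$(date +%s).sql.gz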

Restoration is simple enough if using one file...

sqlite3 pmon.sdb < pmon.sdb-host.masterofpis-schema.sql
sqlite3 pmon.sdb < pmon.sdb-host.masterofpis-201707101912-daily

If more than one file is required to cover the timespan that needs restoring, then do as for the mysql outline above...

sort -u pmon.sdb-host.masterofpis-201707101859-daily pmon.sdb-host.masterofpis-201707101912-daily > all.sql
sqlite3 pmon.sdb < pmon.sdb-host.masterofpis-schema.sql
sqlite3 pmon.sdb < all.sql

As above, check the new database for integrity and it should be right to go.

09:21 AM $ sqlite3 pmon.sdb
SQLite version 3.8.7.1 2014-10-29 13:59:56
Enter ".help" for usage hints.
sqlite> pragma integrity_check;
ok
sqlite> .quit

or

09:22 AM $ echo 'pragma integrity_check;' | sqlite3 pmon.sdb
ok

Setting up the report pages

The report page is optional but recommended. At the very least, generate and use it while setting up the skin. It's intended to give a quick overview of what happened during each run: an extract of the .sql capture is displayed, along with the command syntax used and any errors that occurred with that system command. The sql extract shows the start and finish of the file, so you have an idea of what the file contains and therefore what the command generated. If the dump failed late in the capture, it will often show something to that effect at the end of the file. The mysql extract shows the first 100 lines and the last 20; the sqlite extract shows the first 20 and the last 20.

The report was written with the Seasons skin in mind (ie: the newskin branch at github/weewx) and uses the seasons.css file. It can be adapted to suit any other skin by simply including the #includes as noted within sqlbackup/index.html.tmpl.

The sqlbackup/sqlbackupREADME.html is an html version of the github README.md and is included to give some background and an example / outline of what to do with the resulting files. It's not meant to be a one-stop HowTo. You'll need to do some further reading and attempt a few practice "restores" well before the time comes that you'll actually need to.

The file sqlbackup.inc is an #include that I use with the seasons/index.html.tmpl file. It's not the cleanest implementation as /tmp/sqlbackup/head.inc doesn't exist until sqlbackup has run at least once. This results in the seasons skin aborting until that is generated. (You'll get a python exception logged, regarding /tmp/sqlbackup/head.inc missing.) A plain link is probably better if that bothers you, or no link if your memory and typing skills are up to it.

Multiple databases, multiple skins

The default setup of this skin is to back up the main weewx database. It should do that 'out of the box'. If you modify the database stanzas it will ignore the default behaviour and do what you ask. A space-separated list will allow you to do each of those databases. If you specify both mysql_database and sql_database it will bow down and do those too - whatever you say goes. If it doesn't get up again, you may want to rethink that strategy! Maybe you just want the timing of those backups to be different? If so, consider using multiple skins.

This skin can be duplicated if you want to split the backups. You may want to do this if you have multiple databases and don't want the performance hit of doing them all at once. It may also be because you want to treat them differently: your main weewx database gets a 48-hour snapshot taken daily, then a monthly snapshot taken... monthly? - if only because you hate the idea of stitching all the smaller ones together again. Your mesoraw database weekly, at some other time, for some other reason, etc.

You'll need to do this skin duplication process manually.

cd skins
cp -r sqlbackup sqlbackupweek

Open the weewx/weewx.conf file, duplicate the sqlbackup entry there, and change it as required.

   [StdReport]
   [...]
       #original entry
       [[sqlbackup]]
            HTML_ROOT = /var/www/html/weewx/sqlbackup
            skin = sqlbackup
       # 2nd skin
       [[sqlbackupweek]]
            HTML_ROOT = /var/www/html/weewx/sqlbackupweek
            skin = sqlbackupweek

Open the new directory, skins/sqlbackupweek, and edit the skin.conf within. Change the report_timing stanza and the section heading:

# Report timing: http://www.weewx.com/docs/customizing.htm#customizing_gen_time
#
# 2 minutes past midnight, on Monday. 2 minutes past the hour may prevent a possible clash with @daily or @weekly.
# Monday ensures it.
report_timing = '2 0 * * 1'

#[...]

#[sqlbackup] # This section heading is all lower case to enable skin duplication.
[sqlbackupweek] # This section heading is all lower case to enable skin duplication.

#sql_user = "your_user_if_different_to_weewx.conf"

That's it. You should be good to go. Tread lightly.