Being cavalier with slony nodes and pg_dump/pg_restore

It’s generally a bad idea to do logical dump/restores of slony nodes, and for this reason slony provides the CLONE PREPARE/CLONE FINISH action commands to clone subscriber nodes.
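
For reference, the supported route is roughly the following in slonik; this is only a sketch, with made-up cluster name, node IDs and conninfo strings, and you'd still need to start a slon for the new node afterwards:

# Sketch of the supported approach: clone a new node 3 from subscriber node 2.
# Cluster name, node IDs and conninfo strings here are all hypothetical.
cluster name = test_replication;
node 1 admin conninfo = 'host=node1 dbname=TEST';
node 2 admin conninfo = 'host=node2 dbname=TEST';
node 3 admin conninfo = 'host=node3 dbname=TEST';

clone prepare (id = 3, provider = 2, comment = 'clone of node 2');
wait for event (origin = 2, confirmed = all, wait on = 2);

# dump the node 2 database and restore it into the node 3 database at this point

clone finish (id = 3, provider = 2);
# then store paths to/from node 3 and start a slon for it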

In this instance though, I've a test environment where I'd stopped the slons, dumped out and dropped a subscriber database, and then gone on to do some other testing with that postgres cluster. Some time later I want to do some more testing with the slony cluster; I never dropped this node from the slony config, but in the meantime I've changed the postgres version from 9.0 to 9.4 and recreated the postgres cluster with initdb. Schema-wise nothing has changed with the old nodes.

What follows is some fudge to make the node resume replication; it's neither recommended nor guaranteed to work. Copy this process at your peril.

After recompiling the same version of slony I had before (2.2.4) and installing the binaries, I’ve just restored the old subscriber database dump into the new postgres 9.4 cluster and updated my extensions.

So what's holding me back from just restarting the slons now? Firstly, slony stores the OID of each table participating in replication, and these will surely have changed now. The following query produces a long list of tables where this is true:

[postgres]
TEST=# SELECT s.tab_set, s.tab_reloid, s.tab_nspname, s.tab_relname,
       c.oid, n.nspname, c.relname
FROM _test_replication.sl_table s
JOIN pg_catalog.pg_namespace n ON n.nspname = s.tab_nspname
JOIN pg_catalog.pg_class c ON c.relnamespace = n.oid AND c.relname = s.tab_relname
WHERE s.tab_reloid <> c.oid;
[/postgres]

The fix here is pretty simple: I could run REPAIR CONFIG against the node, but instead I just hit the updatereloid(<set id>, <node id>) function directly:

[postgres]
TEST=# SELECT s.set_id, s.set_comment,
CASE _test_replication.updatereloid(s.set_id, 1) WHEN 1 THEN 'OK' ELSE 'FAILED' END
FROM _test_replication.sl_set s;
[/postgres]

For clarity, my resurrected node is ID 1, and was a subscriber at the time it was destroyed.
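
For reference, the REPAIR CONFIG route mentioned above would be something along these lines in slonik, run once per replication set; again just a sketch with made-up conninfo strings, and I'm going from memory on the options:

# Sketch only: reset the stored table OID mappings for set 1, with the
# event originating on the resurrected node 1.
cluster name = test_replication;
node 1 admin conninfo = 'host=node1 dbname=TEST';
node 2 admin conninfo = 'host=node2 dbname=TEST';

repair config (set id = 1, event node = 1);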

Secondly, slony stores a transaction ID snapshot for each replication event against the node, and if my resurrected node has a lower current xid then new events are going to look like they're in the past compared to events that have already been generated against the node. I can see the current xid of my new postgresql cluster with the following query:

[postgres]
TEST=# SELECT txid_current();
 txid_current
--------------
        25765
[/postgres]

And I can compare this to slony events generated against the node in its old incarnation as follows:

[postgres]
TEST=# SELECT max(txid_snapshot_xmax(ev_snapshot)), txid_current(), txid_current()-max(txid_snapshot_xmax(ev_snapshot)) AS delta
FROM _test_replication.sl_event where ev_origin = 1;

  max  | txid_current | delta
-------+--------------+--------
 89004 |        25767 | -63237
[/postgres]

So transaction-ID-wise my node is in the past; to bump that up I need about 63237 transactions to happen. If the delta were a lot higher I'd have to think of another way (perhaps by stopping all the slons and updating all values of ev_snapshot for node 1 on all nodes), but I can easily generate the ~63k transactions needed by hitting txid_current() in a loop:

$ while [ 1 ]; do psql -U glyn -d TEST -tAc "SELECT txid_current();"; done
25768
25769
25770

<snip>

89002
89003
89004
89005
89006
^C
$ 
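
For the record, had the delta been much bigger, the ev_snapshot route mentioned above would have been something along these lines, run with all the slons stopped on every node holding events for node 1. This is completely untested; it rebuilds each snapshot from just its xmin/xmax (discarding any in-progress xids), and 63237 is simply my delta from above:

[postgres]
TEST=# UPDATE _test_replication.sl_event
       SET ev_snapshot = ((txid_snapshot_xmin(ev_snapshot) - 63237) || ':' ||
                          (txid_snapshot_xmax(ev_snapshot) - 63237) || ':')::txid_snapshot
       WHERE ev_origin = 1;
[/postgres]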

I can now restart my slons and replication should continue; so far all appears well.

Experimenting with Slony 2.2

I’ve been spending my spare bits of time over the last couple of weeks looking at the latest release of Slony-I. At a quick glance the main change between 2.1 and 2.2 appears to be to the sl_log table format, but although seemingly minor, the changes to the way clusters are failed over and reshaped actually go much deeper too.

For example, in previous versions it was possible for a subscriber to pull multiple sets from different providers and later change the provider for any set at will using the “SUBSCRIBE SET” command. However, as of 2.2, although it's still possible to initially subscribe a node with different providers for each set, any changes must use the “RESUBSCRIBE NODE” command, which only allows resubscribing all sets from a particular origin to a single provider.
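
To illustrate the difference (node and set IDs here are made up, and the usual cluster name/admin conninfo preamble is omitted):

# Pre-2.2: repoint a single set on receiver 3 to pull from provider 2.
subscribe set (id = 2, provider = 2, receiver = 3, forward = yes);

# 2.2 onwards: all sets originating on node 1 that node 3 receives
# must now come via a single provider, here node 2.
resubscribe node (origin = 1, provider = 2, receiver = 3);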

There are also changes to the “FAILOVER” command to improve reliability in situations where multiple nodes have failed; you can now pass in multiple failed nodes and Slony should do the right thing. So far my tests with 2.2.2 show there may be some issues when passing in multiple failed nodes where one is a downstream provider to a cascaded subscriber; however, that's a corner case and hopefully we'll see a fix soonish. (Edit 16/05/2014: There's now a patch against 2.2.2 for this)
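
For example, with nodes 1 and 3 both dead and node 2 the surviving candidate, the multiple-node form looks something like this (again made-up IDs, preamble omitted):

# Fail both dead nodes over to the surviving node 2 in a single command.
failover (node = (id = 1, backup node = 2), node = (id = 3, backup node = 2));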

The changes to the sl_log table mean that data is now replicated in a slightly more logical way; changes are logged as arrays of values rather than as chunks of SQL to execute on the subscriber, and the data is sent over a pipe using COPY rather than fetched in chunks via a cursor. Also, DDL has been moved out of sl_event and into a new sl_log_script table. Upgrading will most likely require some brief downtime, as running update functions requires a lock on all sets and waiting out the cleanup interval for a cleanupevent/logswitch to clear out the tables.
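
The slonik side of the schema upgrade itself is short; with the slons stopped and the 2.2 binaries installed it's just UPDATE FUNCTIONS run against each node (sketch below with made-up node IDs, preamble omitted). It's the set locking and the wait for the log tables to empty around this that eats the downtime:

# Upgrade the slony schema/functions on each node after installing the 2.2 binaries.
update functions (id = 1);
update functions (id = 2);
update functions (id = 3);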

On a separate note, this evening (not the best use of a bank holiday weekend), whilst looking at how these changes would affect my experimental failover script, I had a quick bash at adding “autofailover” functionality; the idea being that the script keeps polling all the nodes and, upon detecting any unavailable nodes, runs the failover command. It's functionality I've never personally wanted, as it's possible to get into all sorts of trouble blindly failing over onto an asynchronous replica; in fact, in a busy environment it's pretty much guaranteed (e.g. missing a single update to a product price and then taking millions of sales at the wrong price!). However, perhaps it could be quite useful in a mostly read-only environment where updates are low volume, such as a wiki; more thought needed I think.

(The script, by the way, is at slony-i/slony_failover, and also mirrored on github at glynastill/slony_failover)
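
Just to illustrate the general polling idea described above (this is not the script linked above; merely a crude sketch with made-up conninfo strings, node IDs and thresholds, assuming node 1 is the origin and node 2 a sensible failover target):

#!/bin/bash
# Crude sketch of the autofailover idea: poll the origin and fail over to a
# backup node after repeated failed probes. None of the sanity checks a real
# failover tool needs are here.
ORIGIN_CONNINFO="host=node1 dbname=TEST user=slony"
BACKUP_CONNINFO="host=node2 dbname=TEST user=slony"

fails=0
while sleep 5; do
    if psql "$ORIGIN_CONNINFO" -tAc "SELECT 1;" >/dev/null 2>&1; then
        fails=0
    else
        fails=$((fails+1))
    fi

    if [ "$fails" -ge 3 ]; then
        slonik <<EOF
cluster name = test_replication;
node 1 admin conninfo = '$ORIGIN_CONNINFO';
node 2 admin conninfo = '$BACKUP_CONNINFO';
failover (node = (id = 1, backup node = 2));
EOF
        break
    fi
done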