Updating Royal Mail PAF data and ignorance about PostgreSQL deferred foreign keys

A few months ago I bashed together a pl/pgsql function for importing Royal Mail PAF data into PostgreSQL, and at the same time I also put together another function to perform an update from their “changes” files.

Today I came to actually try and use that function, and it’s not much of a surprise that it needed a little more work; at the time I was working off a fresh import and a small set of changes, whilst looking (with squinty eyes) at the fairly long-winded docs. The changes files from RM are supplied as a journal of changes, which I can only assume were intended to be read and applied sequentially; this means we have to rely on some of the smarts SQL provides us with to apply them in bulk. The docs even try to dissuade the use of the changes files by saying “Applying Changes files regularly is ‘high maintenance’, because applying thousands of changes is a very long & involved process compared to buying full refreshes”, however I don’t want to do full refreshes as that’s likely to cause more interruption, and besides it just seems a bit of a cop-out to me.

So after another quick skim over the docs, I made a couple of corrections to the update function where I’d complicated things by replacing blank key columns with nulls, and added an update procedure for the Alias and Postzon files (which are only supplied as a full refresh), this time using PostGIS to generate the latitude and longitude, saving me from running my horrible perl script.
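
The lat/long generation with PostGIS boils down to transforming the OS grid references supplied in the Postzon file into WGS84 coordinates. A minimal sketch of the idea is below; the staging table and column names are assumptions for illustration, not the ones used by the actual import function:

[postgres]
-- Sketch only: Postzon supplies OS National Grid eastings/northings, so we
-- transform from OSGB36 (SRID 27700) to WGS84 (SRID 4326) and extract the
-- coordinates. Table and column names here are illustrative.
UPDATE postzon_staging
SET longitude = ST_X(ST_Transform(ST_SetSRID(ST_MakePoint(easting, northing), 27700), 4326)),
    latitude  = ST_Y(ST_Transform(ST_SetSRID(ST_MakePoint(easting, northing), 27700), 4326));
[/postgres]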

Then I had a go at running the update:

[postgres]
TEST=# BEGIN;
TEST=# SELECT update_pc_paf('Y14M04', '/tmp');
NOTICE: 2014-05-22 19:40:00.161582+01: Import starting for edition Y14M04 with data root /tmp/Y14M04_CHANGES/
NOTICE: 2014-05-22 19:40:00.165233+01: Begin staging Changes1 file
NOTICE: 2014-05-22 19:40:00.439189+01: Done staging Changes1 file
NOTICE: 2014-05-22 19:40:00.439379+01: Preparing to update localities
NOTICE: 2014-05-22 19:40:00.487656+01: Prepared 15 records for update on localities
NOTICE: 2014-05-22 19:40:00.489481+01: Removed 0 records from localities
NOTICE: 2014-05-22 19:40:00.504522+01: Created 15 records in localities
NOTICE: 2014-05-22 19:40:00.504651+01: Preparing to update thoroughfares
NOTICE: 2014-05-22 19:40:00.54741+01: Prepared 377 records for update on thoroughfares
NOTICE: 2014-05-22 19:40:00.548924+01: Removed 0 records from thoroughfares
NOTICE: 2014-05-22 19:40:00.589353+01: Created 377 records in thoroughfares
NOTICE: 2014-05-22 19:40:00.589477+01: Preparing to update thoroughfare_descriptor
NOTICE: 2014-05-22 19:40:00.631469+01: Prepared 0 records for update on thoroughfare_descriptor
NOTICE: 2014-05-22 19:40:00.632967+01: Removed 0 records from thoroughfare_descriptor
NOTICE: 2014-05-22 19:40:00.633275+01: Created 0 records in thoroughfare_descriptor
NOTICE: 2014-05-22 19:40:00.633444+01: Preparing to update building_names
NOTICE: 2014-05-22 19:40:00.700856+01: Prepared 7947 records for update on building_names
NOTICE: 2014-05-22 19:40:00.704261+01: Removed 0 records from building_names
NOTICE: 2014-05-22 19:40:01.155068+01: Created 7947 records in building_names
NOTICE: 2014-05-22 19:40:01.155178+01: Preparing to update sub_building_names
NOTICE: 2014-05-22 19:40:01.223108+01: Prepared 7798 records for update on sub_building_names
NOTICE: 2014-05-22 19:40:01.226335+01: Removed 0 records from sub_building_names
NOTICE: 2014-05-22 19:40:01.669071+01: Created 7798 records in sub_building_names
NOTICE: 2014-05-22 19:40:01.675826+01: Begin staging Changes2 file
NOTICE: 2014-05-22 19:40:02.59217+01: Done staging Changes2 file
NOTICE: 2014-05-22 19:40:02.592303+01: Preparing to update mainfile
NOTICE: 2014-05-22 19:40:05.07166+01: Prepared 216438 records for update on mainfile
NOTICE: 2014-05-22 19:41:41.439761+01: Removed 87291 records from mainfile
NOTICE: 2014-05-22 19:42:54.976222+01: Created 129147 records in mainfile
NOTICE: 2014-05-22 19:42:54.97639+01: Preparing to update organisations
NOTICE: 2014-05-22 19:42:55.609607+01: Prepared 65158 records for update on organisations
NOTICE: 2014-05-22 19:43:07.778516+01: Removed 32840 records from organisations
NOTICE: 2014-05-22 19:43:10.141383+01: Created 32318 records in organisations
NOTICE: 2014-05-22 19:43:10.18821+01: Begin staging WChanges file
NOTICE: 2014-05-22 19:43:10.218245+01: Done staging WChanges file
NOTICE: 2014-05-22 19:43:10.218346+01: Preparing to update welsh mainfile
NOTICE: 2014-05-22 19:43:10.311915+01: Prepared 8395 records for update on welsh mainfile
NOTICE: 2014-05-22 19:43:20.631738+01: Removed 3098 records from welsh mainfile
NOTICE: 2014-05-22 19:43:23.616801+01: Created 5149 records in welsh mainfile
NOTICE: 2014-05-22 19:43:23.61696+01: Preparing to update welsh organisations
NOTICE: 2014-05-22 19:43:23.624131+01: Prepared 0 records for update on welsh organisations
NOTICE: 2014-05-22 19:43:23.624836+01: Removed 0 records from welsh organisations
NOTICE: 2014-05-22 19:43:23.625512+01: Created 0 records in welsh organisations
NOTICE: 2014-05-22 19:43:23.634001+01: Begin staging postzon
NOTICE: 2014-05-22 19:43:29.097317+01: Done staging postzon file, now staging with lat long data
NOTICE: 2014-05-22 19:44:41.996743+01: Done staging postzon with lat long data, updating
NOTICE: 2014-05-22 19:44:57.195802+01: Inserted 3594 new records for update on postzon_100m
NOTICE: 2014-05-22 19:46:13.242174+01: Updated 212398 records for update on postzon_100m
NOTICE: 2014-05-22 19:46:24.038271+01: Deleted 3991 records for update on postzon_100m
NOTICE: 2014-05-22 19:46:24.808325+01: Begin staging alias file
NOTICE: 2014-05-22 19:46:32.810242+01: Done staging alias file
NOTICE: 2014-05-22 19:46:34.021948+01: Done staging counties , updating
NOTICE: 2014-05-22 19:46:34.023405+01: Inserted 0 new records for update on counties
NOTICE: 2014-05-22 19:46:34.024632+01: Updated 0 records for update on counties
NOTICE: 2014-05-22 19:46:34.025273+01: Deleted 0 records for update on counties
NOTICE: 2014-05-22 19:46:38.25488+01: Done staging county_alias , updating
NOTICE: 2014-05-22 19:46:41.711214+01: Inserted 3594 new records for update on county_alias
NOTICE: 2014-05-22 19:46:46.227681+01: Updated 12 records for update on county_alias
NOTICE: 2014-05-22 19:46:53.812323+01: Deleted 3991 records for update on county_alias
NOTICE: 2014-05-22 19:46:53.812483+01: Completed
 update_pc_paf
---------------
 t
(1 row)
[/postgres]

My schema for the PAF data has deferrable foreign keys, and in order to apply the updates in chunks I’ve set them as deferred in the above function; I only find out if I’ve violated them when they’re checked on commit.
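
For the unfamiliar, the deferrable constraint pattern looks roughly like this; the table, column and constraint names below are purely illustrative rather than the real PAF schema:

[postgres]
-- Constraints must be created DEFERRABLE before they can be deferred
ALTER TABLE mainfile ADD CONSTRAINT mainfile_locality_fkey
    FOREIGN KEY (locality_key) REFERENCES localities (locality_key)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- checks are postponed until commit
-- ... apply the changes in whatever order is convenient ...
COMMIT;                        -- all deferred constraints are checked here
[/postgres]

So let’s commit and find out: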

[postgres]
TEST=# COMMIT;
[/postgres]

…. And it just sits there forever! I go and do something else, but when I come back it’s still sitting there; what did I forget?

Well, we can make a pretty good guess that we’re waiting whilst the deferred constraints are checked, but the constraint checks don’t take that long when they’re not deferred, so why is it taking so long?

Well, it turns out I’ve got indexes on the satellite tables that are referenced by the foreign keys on my main address table, but no indexes on those foreign key columns on the main address table itself. It’s apparent that when the deferred constraint is checked it needs to look up the row first; something that wouldn’t have been required if the constraint were immediate. So I add some indexes.
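
The missing indexes are essentially on the referencing foreign key columns of the main address table; something along the lines of the sketch below, although the column names here are illustrative rather than the real schema:

[postgres]
-- Index the foreign key columns on the referencing table so the deferred
-- constraint checks don't have to scan the whole table at commit time
CREATE INDEX ON mainfile (locality_key);
CREATE INDEX ON mainfile (thoroughfare_key);
CREATE INDEX ON mainfile (building_name_key);
[/postgres]

With those in place, I try the commit again: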

[postgres]
TEST=# COMMIT;
Time: 281979.437 ms
[/postgres]

It worked, and we didn’t violate any constraints!

The new script (named paf_postgresql_import_postgis.plpgsql) is in the git repository at postgresql/geographic_data (also mirrored on github at glynastill/geographic_data).

Experimenting with Slony 2.2

I’ve been spending my spare bits of time over the last couple of weeks looking at the latest release of Slony-I. At a quick glance the main change between 2.1 and 2.2 appears to be to the sl_log table format but, although that seems minor, the changes to the way clusters are failed over and reshaped actually go much deeper.

For example, in previous versions it was possible for a subscriber to pull multiple sets from different providers and later change the provider for any set at will using the “SUBSCRIBE SET” command. However, as of 2.2, although it’s still possible to initially subscribe a node with different providers for each set, any changes must use the “RESUBSCRIBE NODE” command, which only allows resubscribing all sets from a particular origin to a single provider.

There are also changes to the “FAILOVER” command to improve reliability in situations where multiple nodes have failed; you can now pass in multiple failed nodes and Slony should do the right thing. So far my tests with 2.2.2 show there may be some issues when passing in multiple failed nodes where one is a downstream provider to a cascaded subscriber, but that’s a corner case and hopefully we’ll see a fix soonish. (Edit 16/05/2014: there’s now a patch against 2.2.2 for this.)

The changes to the sl_log table mean that data is now replicated in a slightly more logical way; changes are logged as arrays of values rather than chunks of SQL to execute on the subscriber, and the data is sent over a pipe using COPY rather than fetched in chunks via a cursor. Also, DDL has been moved out of sl_event and into a new sl_log_script table. Upgrading will most likely require some brief downtime, as running the update functions requires a lock on all sets, and you then have to wait out the cleanup interval for a cleanup event/log switch to clear out the old log tables.

On a separate note, this evening (not the best use of a bank holiday weekend), whilst looking at how these changes would affect my experimental failover script, I had a quick bash at adding in “autofailover” functionality; the idea being that the script keeps polling all the nodes, and upon detecting any unavailable nodes runs the failover command. It’s functionality I’ve never personally wanted, as it’s possible to get into all sorts of trouble blindly failing over onto an asynchronous replica; in fact in a busy environment it’s pretty much guaranteed (e.g. missing a single update to a product price and then taking millions of sales at the wrong price!). However, perhaps it could be quite useful in a mostly read-only environment where updates are low volume, such as a wiki; more thought needed I think.

(The script, by the way, is at slony-i/slony_failover and also mirrored on github at glynastill/slony_failover.)

Spatial capabilities in PostgreSQL with PostGIS

When dealing with geographic data in PostgreSQL, at some point we’re going to want to lookup rows based purely on their location relative to one another.

With smaller volumes of data, we can get away quite easily with just latitude / longitude and some maths, but once we get past a certain point we’re going to want to be able to do index lookups based on distance from a specific location; this is where the PostGIS extension is priceless.

Installing PostGIS on Debian Wheezy

Installing under Debian when PostgreSQL has been installed via the package manager is dead simple. I’m using the 2.1 version packaged for a 9.3 server from the apt.postgresql.org repository, but you should select the package that matches your server version. It’s also possible to install version 1.5 directly from Wheezy main if you don’t want to add other repositories to apt.

$ sudo apt-get install postgresql-9.3-postgis-2.1

This will install the PostGIS contrib modules into the /usr/share/postgresql/9.3/contrib directory rather than the “extension” directory used by most other PostgreSQL packages in Debian.

Some of the machines I look after have PostgreSQL installed from source for various reasons (mainly because, historically, packages from Debian weren’t very timely), and these machines require a slightly more lengthy setup process. To compile PostGIS from source we need to install some dependencies, one of which is the development files for the geospatial data abstraction library (GDAL); these are in the “libgdal-dev” package, however that depends on the “libpq-dev” package, which will most likely interfere with our source install. There are two options here: either install the package without its dependencies (and make a note that you’ve done so to avoid future dependency problems), or roll gdal from source:

First we install the required dependencies; these are listed in the PostGIS docs, and for my test machine, which already has everything needed to compile PostgreSQL, they are:

$ sudo apt-get install libgeos-dev libproj-dev libjson-c-dev libjson0-dev

Then install the gdal dev libraries in whichever manner suits:

$ sudo apt-get download libgdal-dev
$ sudo dpkg --force-all -i libgdal-dev_1.9.0-3.1_amd64.deb

OR

$ cd /tmp
$ wget http://download.osgeo.org/gdal/1.10.1/gdal-1.10.1.tar.gz
$ tar xvfz gdal-1.10.1.tar.gz && cd gdal-1.10.1
$ ./configure
$ make
$ sudo make install

Once this is done we can compile and install PostGIS:

$ cd /tmp
$ wget http://download.osgeo.org/postgis/source/postgis-2.1.1.tar.gz
$ tar xvfz postgis-2.1.1.tar.gz && cd postgis-2.1.1
$ ./configure
$ make				
$ sudo make install

Getting started with PostGIS

Now we’ve got PostGIS installed we just need to create the extension in the database:

$ psql -U glyn -d test -c 'CREATE EXTENSION postgis; CREATE EXTENSION postgis_topology;'

Or, on 9.0 and lower, run the SQL scripts in the contrib/postgis-2.1 directory:

$ psql -U glyn -d test -f postgis.sql
$ psql -U glyn -d test -f postgis_comments.sql
$ psql -U glyn -d test -f spatial_ref_sys.sql
$ psql -U glyn -d test -f rtpostgis.sql
$ psql -U glyn -d test -f raster_comments.sql
$ psql -U glyn -d test -f topology.sql
$ psql -U glyn -d test -f topology_comments.sql

So let’s generate some data for testing; we create a table called “friends” with 90k rows, storing locations as latitude and longitude values. Admittedly the distribution in this table won’t be that realistic, but it should suffice for testing.
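
The table definition itself isn’t shown here, so as an assumption purely to make the example self-contained, something like the following would fit the column order used below:

[postgres]
-- Assumed definition for the test table; only the longitude and latitude
-- columns matter for the distance queries that follow
CREATE TABLE friends (
    name      text,
    location  text,
    longitude double precision,
    latitude  double precision
);
[/postgres]

Then we fill it with pseudo-random data: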

[postgres]
INSERT INTO friends
SELECT b.unnest || ' v.' || generate_series,
CASE WHEN random() > 0.5 THEN 'Somewhere Else' ELSE 'Somewhere' END,
1.0838637+random()*(CASE WHEN random() > 0.5 THEN -1 ELSE 1 END),
52.7389201+random()*(CASE WHEN random() > 0.5 THEN -1 ELSE 1 END)
FROM generate_series(1,10000)
CROSS JOIN (SELECT unnest(ARRAY['White Wonder','Colonel K','El Loco','Count Duckula','Leatherhead','Barron Greenback','Ernest Penfold','Professor Heinrich Von Squawkencluck','Flying Officer Buggles Pigeon'])) b;
[/postgres]

In the scenario where we don’t have PostGIS, we can create an SQL function to calculate the earth distance between two points, but the distance to any given location has to be computed for every row at query time, so it can’t be indexed:

[postgres]
CREATE OR REPLACE FUNCTION earth_distance_miles(lat1 float, lat2 float, long1 float, long2 float)
RETURNS double precision
AS
'SELECT 3963.0*acos(sin($1/57.2958)*sin($2/57.2958)+cos($1/57.2958)*cos($2/57.2958)*cos(($4/57.2958)-($3/57.2958)));'
LANGUAGE SQL IMMUTABLE;
[/postgres]

This makes listing out “friends” within a mile pretty easy:

[postgres]
EXPLAIN ANALYZE SELECT f.*,
earth_distance_miles(52.7389201, f.latitude, 1.0838637, f.longitude) AS dist_miles
FROM friends f WHERE earth_distance_miles(52.7389201, f.latitude, 1.0838637, f.longitude) <= 1
ORDER BY earth_distance_miles(52.7389201, f.latitude, 1.0838637, f.longitude);
[/postgres]

                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=10988.40..11063.40 rows=30000 width=108) (actual time=160.006..160.013 rows=29 loops=1)
   Sort Key: ((3963::double precision * acos(((0.795884736186082::double precision * sin((latitude / 57.2958::double precision))) 
   	+ ((0.605448170123596::double precision * cos((latitude / 57.2958::double precision))) * cos(((longitude / 57.2958::double precision) 
   	- 0.0189169834438126::double precision)))))))
   Sort Method:  quicksort  Memory: 30kB
   ->  Seq Scan on friends f  (cost=0.00..7510.00 rows=30000 width=108) (actual time=19.993..159.930 rows=29 loops=1)
         Filter: ((3963::double precision * acos(((0.795884736186082::double precision * sin((latitude / 57.2958::double precision))) 
         	+ ((0.605448170123596::double precision * cos((latitude / 57.2958::double precision))) 
         	* cos(((longitude / 57.2958::double precision) - 0.0189169834438126::double precision)))))) <= 1::double precision)
 Total runtime: 160.069 ms

Now let’s put PostGIS to work and add an indexed geography column:

[postgres]
ALTER TABLE friends ADD COLUMN geog geography(Point,4326); -- SRID 4326 for WGS84
UPDATE friends SET geog = ST_MakePoint(longitude, latitude);
CREATE INDEX ON friends USING GIST (geog);
[/postgres]

Now let’s try to list out our “friends” within a mile again, this time making use of the PostGIS ST_Distance and ST_DWithin functions:

[postgres]
EXPLAIN ANALYZE SELECT f.*,
ST_Distance(f.geog, ST_MakePoint(1.0838637, 52.7389201))/1609 AS dist_miles
FROM friends f WHERE ST_DWithin(f.geog, ST_MakePoint(1.0838637, 52.7389201), 1609)
ORDER BY ST_Distance(f.geog, ST_MakePoint(1.0838637, 52.7389201));
[/postgres]

                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=37.90..37.90 rows=1 width=108) (actual time=1.756..1.760 rows=29 loops=1)
   Sort Key: (_st_distance(geog, '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography, 0::double precision, true))
   Sort Method:  quicksort  Memory: 32kB
   ->  Bitmap Heap Scan on friends f  (cost=2.39..37.89 rows=1 width=108) (actual time=0.908..1.684 rows=29 loops=1)
         Recheck Cond: (geog && '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography)
         Filter: (('0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography && _st_expand(geog, 1609::double precision)) 
         	AND _st_dwithin(geog, '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography, 1609::double precision, true))
         ->  Bitmap Index Scan on friends_geog_idx  (cost=0.00..2.39 rows=16 width=0) (actual time=0.351..0.351 rows=45 loops=1)
               Index Cond: (geog && '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography)
 Total runtime: 1.821 ms

This shows a marked improvement, from a query time of 160.069 ms down to 1.821 ms. Obviously our mileage will vary depending on the quantity of data in the table, its distribution, and just how many rows we want to retrieve.