Category Archives: PostgreSQL

Copying Pavel Stehule’s simple history table but with the jsonb type

On 15/01/2015 Pavel Stehule wrote about implementing a dead simple history table using the hstore type.
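
The core of the hstore approach is an update trigger that stores just the changed pairs by subtracting one hstore from the other. Roughly something like this sketch (not Pavel's exact code, so see his post for the original; it assumes the hstore extension is installed and hstore history columns):

[postgres]
-- Sketch of an hstore-style update trigger: hstore(record) converts a row
-- to hstore, and the hstore "-" operator deletes matching pairs from the
-- left operand, leaving only the pairs that actually changed.
CREATE OR REPLACE FUNCTION history_update()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO history(event_time, executed_by, origin_value, new_value)
    VALUES(CURRENT_TIMESTAMP, SESSION_USER, hstore(OLD) - hstore(NEW), hstore(NEW) - hstore(OLD));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
[/postgres]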

On Friday evening I wanted to copy this almost line for line, switching the hstore type for jsonb, but I couldn't really see how to replicate the update part so simply without creating a delete operator (the operator definition is in the next post down). Once that operator has been created it appears to work:

[postgres highlight="6,7,14,23,31,32,35"]
CREATE TABLE test(a int, b int, c int);

CREATE TABLE history(
    event_time timestamp(2),
    executed_by text,
    origin_value jsonb,
    new_value jsonb
);

CREATE OR REPLACE FUNCTION history_insert()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO history(event_time, executed_by, new_value)
    VALUES(CURRENT_TIMESTAMP, SESSION_USER, row_to_json(NEW)::jsonb);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION history_delete()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO history(event_time, executed_by, origin_value)
    VALUES(CURRENT_TIMESTAMP, SESSION_USER, row_to_json(OLD)::jsonb);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION history_update()
RETURNS TRIGGER AS $$
DECLARE
    js_new jsonb := row_to_json(NEW)::jsonb;
    js_old jsonb := row_to_json(OLD)::jsonb;
BEGIN
    INSERT INTO history(event_time, executed_by, origin_value, new_value)
    VALUES(CURRENT_TIMESTAMP, SESSION_USER, js_old - js_new, js_new - js_old);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_history_insert AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE history_insert();

CREATE TRIGGER test_history_delete AFTER DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE history_delete();

CREATE TRIGGER test_history_update AFTER UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE history_update();
[/postgres]

Result:

[postgres]
INSERT INTO test VALUES(1000, 1001, 1002);
UPDATE test SET a = 10, c = 20;
DELETE FROM test;

postgres=# table history;
       event_time       | executed_by |         origin_value          |             new_value
------------------------+-------------+-------------------------------+-----------------------------------
 2015-01-19 13:12:54.26 | glyn        |                               | {"a": 1000, "b": 1001, "c": 1002}
 2015-01-19 13:12:54.31 | glyn        | {"a": 1000, "c": 1002}        | {"a": 10, "c": 20}
 2015-01-19 13:12:54.94 | glyn        | {"a": 10, "b": 1001, "c": 20} |
(3 rows)
[/postgres]

Tested on PostgreSQL 9.4.

Wanting an hstore-style delete operator in jsonb

PostgreSQL 9.4 introduced the jsonb type, but it'd be nice to be able to delete keys and key/value pairs using the "-" operator, just like you can with the hstore type.

Fortunately postgres makes creating an operator really easy for us, so let's have a go at creating a delete operator for jsonb.

First let's try to create an operator just to delete one key passed as text. We need to start by creating a function for our operator, and the only way I can think to do this from looking at the docs is to unwrap the json with jsonb_each, filter out the matches, and roll it all back up:

[postgres]
TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b text)
RETURNS jsonb AS
$BODY$
SELECT COALESCE(
(
    SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
    FROM jsonb_each(a)
    WHERE key <> b
)
, '{}')::jsonb;
$BODY$
LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, text) IS 'delete key in second argument from first argument';
COMMENT
[/postgres]

Once we’ve created our function, we just need to create the operator to use it:

[postgres]
TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = text);
CREATE OPERATOR
TEST=# COMMENT ON OPERATOR - (jsonb, text) IS 'delete key from left operand';
COMMENT
[/postgres]

And we’re ready to go:

[postgres]
TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - 'b'::text;
     ?column?
------------------
 {"a": 1, "c": 3}
(1 row)
[/postgres]

Seems to work. Now let's try to create one that will let us delete a set of keys passed as an array:

[postgres]
TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b text[])
RETURNS jsonb AS
$BODY$
SELECT COALESCE(
(
    SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
    FROM jsonb_each(a)
    WHERE key <> ALL(b)
)
, '{}')::jsonb;
$BODY$
LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, text[]) IS 'delete keys in second argument from first argument';
COMMENT

TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = text[]);
CREATE OPERATOR

TEST=# COMMENT ON OPERATOR - (jsonb, text[]) IS 'delete keys from left operand';
COMMENT

TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - ARRAY['a','b'];
 ?column?
----------
 {"c": 3}
(1 row)
[/postgres]

OK, so now let's create one to delete matching key/value pairs:

[postgres]
TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b jsonb)
RETURNS jsonb AS
$BODY$
SELECT COALESCE(
(
    SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
    FROM jsonb_each(a)
    WHERE NOT ('{' || to_json(key) || ':' || value || '}')::jsonb <@ b
)
, '{}')::jsonb;
$BODY$
LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, jsonb) IS 'delete matching pairs in second argument from first argument';
COMMENT

TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = jsonb);
CREATE OPERATOR

TEST=# COMMENT ON OPERATOR - (jsonb, jsonb) IS 'delete matching pairs from left operand';
COMMENT

TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - '{"a": 4, "b": 2}'::jsonb;
     ?column?
------------------
 {"a": 1, "c": 3}
(1 row)
[/postgres]

Seems to work fine to me; let's try an index:

[postgres]
TEST=# \timing on
Timing is on.
TEST=# CREATE TABLE jsonb_test (a jsonb, b jsonb);
CREATE TABLE
Time: 207.038 ms

TEST=# INSERT INTO jsonb_test VALUES ('{"a": 1, "b": 2, "c": 3}', '{"a": 4, "b": 2}');
INSERT 0 1
Time: 39.979 ms

TEST=# SELECT * FROM jsonb_test WHERE a-b = '{"a": 1, "c": 3}'::jsonb;
            a             |        b
--------------------------+------------------
 {"a": 1, "b": 2, "c": 3} | {"a": 4, "b": 2}
(1 row)

Time: 47.197 ms

TEST=# INSERT INTO jsonb_test
SELECT ('{"a" : ' || i+1 || ',"b" : ' || i+2 || ',"c": ' || i+3 || '}')::jsonb,
       ('{"a" : ' || i+2 || ',"b" : ' || i || ',"c": ' || i+5 || '}')::jsonb
FROM generate_series(1,1000) i;
INSERT 0 1000
Time: 84.765 ms

TEST=# CREATE INDEX ON jsonb_test USING gin((a-b));
CREATE INDEX
Time: 229.050 ms
TEST=# EXPLAIN SELECT * FROM jsonb_test WHERE a-b @> '{"a": 1, "c": 3}';
                                    QUERY PLAN
------------------------------------------------------------------------------------
 Bitmap Heap Scan on jsonb_test  (cost=20.26..24.52 rows=1 width=113)
   Recheck Cond: ((a - b) @> '{"a": 1, "c": 3}'::jsonb)
   ->  Bitmap Index Scan on jsonb_test_expr_idx  (cost=0.00..20.26 rows=1 width=0)
         Index Cond: ((a - b) @> '{"a": 1, "c": 3}'::jsonb)
(4 rows)

Time: 13.277 ms
[/postgres]

All seems to work as expected. I guess the one thing I'm not so certain about here is whether any of this behaves correctly once we start getting nested json, but at first glance it doesn't look too wonky to me:

[postgres]
TEST=# SELECT '{"a": 1, "b": 2, "c": 3, "d": {"a": 4}}'::jsonb - '{"d": {"a": 4}, "b": 2}'::jsonb;
     ?column?
------------------
 {"a": 1, "c": 3}
(1 row)

TEST=# SELECT '{"a": 4, "b": 2, "c": 3, "d": {"a": 4}}'::jsonb - '{"a": 4, "b": 2}'::jsonb;
        ?column?
-------------------------
 {"c": 3, "d": {"a": 4}}
(1 row)
[/postgres]

Of course, being written in SQL these probably aren't anywhere near as fast as the hstore equivalents, which are written in C, so it'd be nice to see something in core postgres to do this.

PostgreSQL 9.4 released

It looks like PostgreSQL 9.4 was released last Thursday. I’ve been keeping an eye on 9.4 and watching some of the chat about new features, although I’ve just been too buried in work to pay too much attention. Today however is my first day off for Christmas, so finally I’ve got some time to look into it.

The most interesting features to me are jsonb and Logical Decoding, so that’s what I’m going to look at, but there’s more and you can read about it here.

jsonb

The new jsonb data type stores JSON data internally in a binary form, which makes it possible to index the keys and values within it. Previous versions had a json data type, but all that does is enforce valid JSON; the data is still stored as text. Whilst it is possible to do lookups on key-value data in previous versions using the hstore type (provided by the hstore module), with JSON seemingly being ubiquitous in applications these days, jsonb means we can just let devs store their data straight into the database and still be able to do fast lookups and searches.

At work we get quite a lot of variable callback data from web APIs, or serialized data from application objects, that tends to end up being stored as text. The ability to look that data up via a GIN index will be invaluable. I assume even XML storage should become easier, as there are plenty of pre-cooked ways to convert XML to JSON.

Let’s create a quick test table:

[postgres]
CREATE TABLE jsonb_test(
    id integer PRIMARY KEY,
    data jsonb
);
CREATE INDEX jsonb_test_data ON jsonb_test USING gin(data);

-- Obviously this data is ridiculous, but we need enough rows for postgres to prefer an index over a seq scan.
INSERT INTO jsonb_test
SELECT i, ('{"name": "Person' || i || '","age" : ' || i || ',"address": {"add1": "'
    || i || ' Clarence Street","city": "Lancaster","postcode": "LA13BG"}}')::jsonb
FROM generate_series(1,100000) i;
[/postgres]

Now if we query on the data column we should see the jsonb_test_data index being used:

[postgres]
TEST=# SELECT * FROM jsonb_test
WHERE data @> '{"address": {"add1": "2300 Clarence Street"}}';
  id  |                                                             data
------+--------------------------------------------------------------------------------------------------------------------------
 2300 | {"age": 2300, "name": "Person2300", "address": {"add1": "2300 Clarence Street", "city": "Lancaster", "postcode": "LA13BG"}}
(1 row)

Time: 10.811 ms

TEST=# EXPLAIN SELECT * FROM jsonb_test
WHERE data @> '{"address": {"add1": "2300 Clarence Street"}}';
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Bitmap Heap Scan on jsonb_test  (cost=1040.83..1395.09 rows=107 width=147)
   Recheck Cond: (data @> '{"address": {"add1": "2300 Clarence Street"}}'::jsonb)
   ->  Bitmap Index Scan on jsonb_test_data  (cost=0.00..1040.80 rows=107 width=0)
         Index Cond: (data @> '{"address": {"add1": "2300 Clarence Street"}}'::jsonb)
(4 rows)
[/postgres]
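
The default GIN operator class for jsonb supports the key-existence operators (?, ?| and ?&) as well as containment, so the same index can also serve queries like this quick sketch against the table above:

[postgres]
-- Which rows have a top-level "age" key? The jsonb_test_data index
-- created above supports the ? existence operator too.
TEST=# SELECT count(*) FROM jsonb_test WHERE data ? 'age';
[/postgres]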

Logical Decoding

Whilst Logical Decoding isn't really in a state to be put into active duty right away, it is pretty special, and allows postgres to supply a stream of changes (or partial changes) in a user-defined format. This is similar to what we've been doing for ages with trigger-based replication like Slony and Londiste, but dissimilar in that instead of all the overhead and clunkiness of log triggers, the changes are read directly from WAL in a similar way to streaming binary replication. The uses don't end at master-slave replication either; multimaster and selective replication with per-table granularity, auditing, online upgrades and cache invalidation are just some of the possible uses.

Logical Decoding uses the concept of "replication slots", which represent a stream of changes logged for a particular consumer, and we can have as many replication slots as we like. The great thing about replication slots is that once they're created, all WAL files required by the slot are retained. They aren't just for Logical Decoding either; Streaming Replication can make use of them too, so we don't have to balance wal_keep_segments or rely on archive_command any more (see the sketch below). Replication slots aren't a magic bullet though; if a replication slot isn't being consumed, it will cause postgres to eat disk space as it retains WAL files for the slot/consumer.
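
For example, pinning a standby to a physical replication slot looks something like this (the slot name is just for illustration):

[postgres]
-- On the primary: create a physical replication slot for the standby.
TEST=# SELECT * FROM pg_create_physical_replication_slot('standby1_slot');

-- Then in the standby's recovery.conf, point streaming replication at the
-- slot so the primary retains any WAL the standby still needs:
--   primary_slot_name = 'standby1_slot'
[/postgres]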

I mentioned earlier that Logical Decoding allows changes to be supplied in a "user defined format"; this is provided by an output plugin in the form of a shared library, which needs to be custom written as required, and it's in this output plugin that the format, and any restrictions on what data we want, are controlled. The one exception to this is the data used for identifying old rows from updates or deletes, which is defined before it is written to the WAL, and has to be set on a per-table basis with ALTER TABLE ... REPLICA IDENTITY.
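
For instance, to have updates and deletes log the entire old row for the jsonb_test table from earlier, rather than just the primary key columns (the default), we'd do something like:

[postgres]
-- Log the whole old row in WAL for UPDATE/DELETE on this table.
TEST=# ALTER TABLE jsonb_test REPLICA IDENTITY FULL;
[/postgres]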

There’s a “test_decoding” plugin supplied as a contrib module that we can use for testing, and that’s what I’m going to have a quick look at now.

The first thing we have to do is set wal_level to logical and make sure max_replication_slots is greater than zero. Once we've done that and restarted PostgreSQL we're ready to start playing.
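
The relevant postgresql.conf settings look something like this (the values here are just an example):

[postgres]
wal_level = logical          # adds the information logical decoding needs to WAL
max_replication_slots = 4    # anything greater than zero will do for testing
max_wal_senders = 4          # only needed to stream changes, e.g. with pg_recvlogical below
[/postgres]

Now we can create our first replication slot: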

[postgres]
TEST=# SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'test_decoding');
       slot_name        | xlog_position
------------------------+---------------
 test_replication_slot  | 0/56436390
(1 row)
[/postgres]

We should now be able to see our replication slot in the pg_replication_slots view:

[postgres]
TEST=# SELECT * FROM pg_replication_slots;
       slot_name        |    plugin     | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
------------------------+---------------+-----------+--------+----------+--------+------+--------------+-------------
 test_replication_slot  | test_decoding | logical   |  16422 | TEST     | f      |      |      1135904 | 0/56436358
(1 row)
[/postgres]

To look and see if there are any changes, we can use the pg_logical_slot_peek_changes function:

[postgres]
TEST=# \x
Expanded display is on.
TEST=# SELECT * FROM pg_logical_slot_peek_changes('test_replication_slot', NULL, NULL);

-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------
location | 0/56436450
xid      | 1135906
data     | BEGIN 1135906
-[ RECORD 2 ]--------------------------------------------------------------------------------------------------------
location | 0/56436450
xid      | 1135906
data     | table _test_replication.sl_components: UPDATE: co_actor:'local_sync' co_pid[integer]:20814 co_node[integer]:0 co_connection_pid[integer]:20831 co_activity:'thread main loop' co_starttime[timestamp with time zone]:'2014-12-22 16:00:48+00' co_event[bigint]:null co_eventtype:'n/a'
-[ RECORD 3 ]--------------------------------------------------------------------------------------------------------
location | 0/56436518
xid      | 1135906
data     | COMMIT 1135906

<snip>
[/postgres]

… and I'll snip my output there at 3 rows; I use this machine for Slony testing, so we're already seeing all of the Slony chatter here, but you should be able to see the capture of an update to the "_test_replication.sl_components" table (this could be any table; I just happened to call my slony cluster "test_replication" too). If you create some activity on your database, you should start to see some output. Notice that the output is the actual changes to the table, not a capture of the SQL statement that caused the changes; we can use this change information to build SQL if we want, or some other form of DML for another system.

To actually consume the queue we can call pg_logical_slot_get_changes:

[postgres]
TEST=# SELECT * FROM pg_logical_slot_get_changes('test_replication_slot', NULL, NULL);
[/postgres]

This outputs the same as the above, but once we've called it the changes are classed as consumed, regardless of whether the caller actually applied them, and will not be output again (nor the WAL retained). One thing that would be useful here would be the ability to pull the changes, apply them, then confirm them as applied before they're marked as consumed; I guess this could be achieved by first calling pg_logical_slot_peek_changes, applying the changes, and then calling pg_logical_slot_get_changes passing the latest lsn seen from the peek.
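
Something along those lines, using the upto_lsn parameter of pg_logical_slot_get_changes (the lsn value here is just illustrative):

[postgres]
-- Peek at the pending changes without consuming them, apply them elsewhere,
-- noting the highest location we successfully applied...
TEST=# SELECT location, xid, data FROM pg_logical_slot_peek_changes('test_replication_slot', NULL, NULL);

-- ...then consume the queue only up to that location, e.g. 0/56436518.
TEST=# SELECT * FROM pg_logical_slot_get_changes('test_replication_slot', '0/56436518', NULL);
[/postgres]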

In addition to the SQL functions, the pg_recvlogical binary is provided to pull the data over the streaming replication protocol with something like:

# pg_recvlogical -U postgres -d TEST --slot test_replication_slot --start -f -

For this, as with streaming replication, we need max_wal_senders set greater than zero (as in the example settings earlier).

Once we’re finished with our test, we should drop the replication slot:

[postgres]
TEST=# SELECT pg_drop_replication_slot('test_replication_slot');
[/postgres]

Apparently the one thing Logical Decoding can't do is output DDL, and I'm guessing this is down to complexities that still need to be overcome rather than it being by design. All exciting!