Tag Archives: json

Wanting hstore style operators in jsonb – revisited

A couple of weeks ago I wrote about wanting a jsonb delete operator in 9.4, and yesterday evening I decided to have a go at writing some functions in C.

In the end all I actually did yesterday was make a mess and read a lot of existing code, but persisting this evening I’ve managed to put together some functions that appear to work. I’m not confident enough to say they’re efficient (or even correct; they certainly shouldn’t be put on production systems), but I thought it’d be useful to benchmark them.

I’ve also added in a concatenate operator after reading Matthew Schinckel’s post.

First install the C shared library:

# make install 

Then install the functions and operators; these functions are named jsonb_delete and jsonb_concat:

test=# \i jsonb_opx.sql
CREATE FUNCTION
COMMENT
CREATE OPERATOR
COMMENT
CREATE FUNCTION
COMMENT
CREATE OPERATOR
COMMENT
CREATE FUNCTION
COMMENT
CREATE OPERATOR
COMMENT
CREATE FUNCTION
COMMENT
CREATE OPERATOR
COMMENT

Then install the SQL versions for comparison; these functions are named jsonb_delete_left and jsonb_concat_left:

test=# \i jsonb_opx_sql_comparison.sql
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION

Test 1 – deleting a key

This is actually an SQL wrapper around the C function for (jsonb, text[]); breaking it out into a separate single-key version would be more efficient, but that should be a trivial task.
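
As an illustration of what the wrapper is doing, here’s a minimal Python sketch, with plain dicts standing in for jsonb objects (the function names here are mine, not from the extension): the single-key form just delegates to the multi-key form.

```python
def jsonb_delete_keys(doc: dict, keys: list) -> dict:
    """Drop any top-level pairs whose key appears in `keys`
    (models the (jsonb, text[]) C function)."""
    return {k: v for k, v in doc.items() if k not in keys}

def jsonb_delete_key(doc: dict, key: str) -> dict:
    """The (jsonb, text) form is just a wrapper that passes
    a one-element array through to the multi-key version."""
    return jsonb_delete_keys(doc, [key])
```

So `jsonb_delete_key({"a": 1, "b": 2, "c": 3}, "b")` gives `{"a": 1, "c": 3}`, matching the operator’s output below.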

test=# \timing on
Timing is on.
test=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - 'b'::text;
     ?column?     
------------------
 {"a": 1, "c": 3}
(1 row)

Time: 7.099 ms

The above is hitting the C function; from this point onwards I’ll just hit the functions directly:

test=# SELECT jsonb_delete('{"a": 1, "b": 2, "c": 3}'::jsonb, 'b');
   jsonb_delete   
------------------
 {"a": 1, "c": 3}
(1 row)
Time: 6.220 ms

Now the original SQL version:

test=# SELECT jsonb_delete_left('{"a": 1, "b": 2, "c": 3}'::jsonb, 'b');
 jsonb_delete_left 
-------------------
 {"a": 1, "c": 3}
(1 row)

Time: 14.570 ms

Now to benchmark for a large quantity of rows:

test=# EXPLAIN ANALYZE SELECT jsonb_delete(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb, 'b')
FROM generate_series(1,10000) x;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..300.00 rows=1000 width=4) (actual time=10.407..263.489 rows=10000 loops=1)
 Planning time: 0.335 ms
 Execution time: 290.192 ms
(3 rows)

Time: 293.254 ms
test=# EXPLAIN ANALYZE SELECT jsonb_delete_left(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb, 'b')
FROM generate_series(1,10000) x;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..300.00 rows=1000 width=4) (actual time=15.165..767.706 rows=10000 loops=1)
 Planning time: 0.785 ms
 Execution time: 803.258 ms
(3 rows)

Time: 809.088 ms

Whilst processing 1 row really doesn’t show any improvement (the timings for both varied in the 2~10ms range), with 10,000 rows the C version is about 2.8 times as quick.

If these times stick out as particularly dire, it’s probably just because the machine I’m testing on is very old.

Test 2 – deleting multiple keys

test=# SELECT jsonb_delete('{"a": 1, "b": 2, "c": 3}'::jsonb, ARRAY['a','b']);
 jsonb_delete 
--------------
 {"c": 3}
(1 row)

Time: 3.482 ms

test=# SELECT jsonb_delete_left('{"a": 1, "b": 2, "c": 3}'::jsonb, ARRAY['a','b']);
 jsonb_delete_left 
-------------------
 {"c": 3}
(1 row)

Time: 3.613 ms
test=# EXPLAIN ANALYZE SELECT jsonb_delete(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb, ARRAY['a','b'])
FROM generate_series(1,10000) x;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..52.50 rows=1000 width=4) (actual time=5.805..177.507 rows=10000 loops=1)
 Planning time: 1.646 ms
 Execution time: 209.137 ms
(3 rows)

Time: 213.507 ms

test=# EXPLAIN ANALYZE SELECT jsonb_delete_left(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb, ARRAY['a','b'])
FROM generate_series(1,10000) x;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..300.00 rows=1000 width=4) (actual time=15.805..757.500 rows=10000 loops=1)
 Planning time: 0.595 ms
 Execution time: 789.272 ms
(3 rows)

Time: 793.229 ms

Results are similar; we’re essentially hitting the same C function.

Test 3 – Deleting matching jsonb key/value pairs

The C version of this function essentially loops round the left jsonb value looking up keys in the right jsonb value. If it finds a matching key it does a string-based comparison on the values (treating nested jsonb as a string too), and if the values match as well then the key/value pair is removed.
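
That matching logic can be sketched in Python (dicts in place of jsonb, with json.dumps standing in for the string comparison; this is my approximation of the behaviour, not the actual C code):

```python
import json

def jsonb_delete_matching(left: dict, right: dict) -> dict:
    """Remove a pair from `left` only when `right` has the same key
    AND the serialised values compare equal (nested values are
    compared as strings, as described above)."""
    def as_string(v):
        return json.dumps(v, sort_keys=True)
    return {
        k: v for k, v in left.items()
        if not (k in right and as_string(v) == as_string(right[k]))
    }
```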

test=# SELECT jsonb_delete('{"a": 1, "b": 2, "c": 3}'::jsonb, '{"a": 4, "b": 2}'::jsonb);
   jsonb_delete   
------------------
 {"a": 1, "c": 3}
(1 row)

Time: 3.114 ms
test=# SELECT jsonb_delete_left('{"a": 1, "b": 2, "c": 3}'::jsonb, '{"a": 4, "b": 2}'::jsonb);
 jsonb_delete_left 
-------------------
 {"a": 1, "c": 3}
(1 row)

Time: 6.899 ms
test=# EXPLAIN ANALYZE SELECT jsonb_delete(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb,
('{"a":' || x || ', "d":' || x*2 || ', "c":' || x*10 || '}')::jsonb)
FROM generate_series(1,10000) x;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..92.50 rows=1000 width=4) (actual time=8.452..238.210 rows=10000 loops=1)
 Planning time: 0.428 ms
 Execution time: 266.358 ms
(3 rows)

Time: 270.161 ms

test=# EXPLAIN ANALYZE SELECT jsonb_delete_left(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb,
('{"a":' || x || ', "d":' || x*2 || ', "c":' || x*10 || '}')::jsonb)
FROM generate_series(1,10000) x;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..340.00 rows=1000 width=4) (actual time=11.833..1206.990 rows=10000 loops=1)
 Planning time: 0.759 ms
 Execution time: 1248.481 ms
(3 rows)

Time: 1253.392 ms

There’s a bigger improvement here; it’s about 4.7 times quicker.

Test 4 – concatenation

The C function for this is essentially a cut-and-shut job on both jsonb values, blindly pushing all the values onto the return value and leaving the lower-level jsonb API to do the deduplication:

test=# SELECT jsonb_concat('{"a": 1, "b": 2, "c": 3}'::jsonb, '{"a": 4, "d": 4, "z": 26}'::jsonb);
               jsonb_concat                
-------------------------------------------
 {"a": 4, "b": 2, "c": 3, "d": 4, "z": 26}
(1 row)

Time: 3.028 ms

test=# SELECT jsonb_concat_left('{"a": 1, "b": 2, "c": 3}'::jsonb, '{"a": 4, "d": 4, "z": 26}'::jsonb);
             jsonb_concat_left             
-------------------------------------------
 {"a": 4, "b": 2, "c": 3, "d": 4, "z": 26}
(1 row)

Time: 4.731 ms
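
Functionally, the dedup behaviour (right-hand values winning on duplicate keys, as with "a": 4 above) can be modelled in a couple of lines of Python; a sketch of the semantics, not the implementation:

```python
def jsonb_concat(left: dict, right: dict) -> dict:
    # Push everything from both operands onto the result;
    # on a duplicate key the right-hand value survives,
    # mirroring the deduplication done by the jsonb API.
    out = dict(left)
    out.update(right)
    return out
```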

Again nothing to get excited about. Testing on a larger quantity of rows shows a similar improvement to the jsonb - jsonb delete operator/function above:

test=# EXPLAIN ANALYZE SELECT jsonb_concat(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb,
('{"a":' || x || ', "d":' || x*2 || ', "c":' || x*10 || '}')::jsonb)
FROM generate_series(1,10000) x;
                                                        QUERY PLAN                                                         
---------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..92.50 rows=1000 width=4) (actual time=12.481..255.910 rows=10000 loops=1)
 Planning time: 0.599 ms
 Execution time: 285.357 ms
(3 rows)

Time: 288.615 ms

test=# EXPLAIN ANALYZE SELECT jsonb_concat_left(('{"a":' || x || ', "b":' || x*2 || ', "c":' || x*x || '}')::jsonb,
('{"a":' || x || ', "d":' || x*2 || ', "c":' || x*10 || '}')::jsonb)
FROM generate_series(1,10000) x;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Function Scan on generate_series x  (cost=0.00..340.00 rows=1000 width=4) (actual time=13.931..1051.100 rows=10000 loops=1)
 Planning time: 5.160 ms
 Execution time: 1091.596 ms
(3 rows)

Time: 1103.165 ms

So in conclusion the results are nothing earth-shattering, but there is a small improvement. Essentially all these functions are doing is iterating around the jsonb and building new return values; it’d be nice to see what someone more familiar with the lower-level jsonb internals could come up with.

Wanting for a hstore style delete operator in jsonb

PostgreSQL 9.4 introduced the jsonb type, but it’d be nice to be able to delete keys and pairs using the “-” operator, just like you can with the hstore type.

Fortunately postgres makes creating an operator really easy for us, so let’s have a go at creating a delete operator for jsonb.

First let’s try to create an operator just to delete one key passed as text. We need to start by creating a function for our operator, and the only way I can think to do this from looking at the docs is to unwrap the json with jsonb_each, filter out the matches, and roll it all back up:

TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b text)
       RETURNS jsonb AS
       $BODY$
       SELECT COALESCE(
              (
              SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
              FROM jsonb_each(a)
              WHERE key <> b
              )
       , '{}')::jsonb;
       $BODY$
       LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, text) IS 'delete key in second argument from first argument';
COMMENT

Once we’ve created our function, we just need to create the operator to use it:

TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = text);
CREATE OPERATOR
TEST=# COMMENT ON OPERATOR - (jsonb, text) IS 'delete key from left operand';
COMMENT

And we’re ready to go:

TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - 'b'::text;
     ?column?     
------------------
 {"a": 1, "c": 3}
(1 row)

Seems to work; let’s now try to create one that will let us delete a set of keys passed as an array:

TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b text[])
       RETURNS jsonb AS
       $BODY$
       SELECT COALESCE(
              (
              SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
              FROM jsonb_each(a)
              WHERE key <> ALL(b)
              )
       , '{}')::jsonb;
       $BODY$
       LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, text[]) IS 'delete keys in second argument from first argument';
COMMENT

TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = text[]);
CREATE OPERATOR

TEST=# COMMENT ON OPERATOR - (jsonb, text[]) IS 'delete keys from left operand';
COMMENT

TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - ARRAY['a','b'];
 ?column? 
----------
 {"c": 3}
(1 row)

OK, so now let’s create one to delete matching key/value pairs:

TEST=# CREATE OR REPLACE FUNCTION jsonb_delete_left(a jsonb, b jsonb)
       RETURNS jsonb AS
       $BODY$
       SELECT COALESCE(
              (
              SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
              FROM jsonb_each(a)
              WHERE NOT ('{' || to_json(key) || ':' || value || '}')::jsonb <@ b
              )
       , '{}')::jsonb;
       $BODY$
       LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION

TEST=# COMMENT ON FUNCTION jsonb_delete_left(jsonb, jsonb) IS 'delete matching pairs in second argument from first argument';
COMMENT

TEST=# CREATE OPERATOR - ( PROCEDURE = jsonb_delete_left, LEFTARG = jsonb, RIGHTARG = jsonb);
CREATE OPERATOR

TEST=# COMMENT ON OPERATOR - (jsonb, jsonb) IS 'delete matching pairs from left operand';
COMMENT

TEST=# SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - '{"a": 4, "b": 2}'::jsonb;
     ?column?     
------------------
 {"a": 1, "c": 3}
(1 row)

Seems to work fine to me; let’s try an index:

TEST=# \timing on
Timing is on.
TEST=# CREATE TABLE jsonb_test (a jsonb, b jsonb);
CREATE TABLE
Time: 207.038 ms

TEST=# INSERT INTO jsonb_test VALUES ('{"a": 1, "b": 2, "c": 3}', '{"a": 4, "b": 2}');
INSERT 0 1
Time: 39.979 ms

TEST=# SELECT * FROM jsonb_test WHERE a-b = '{"a": 1, "c": 3}'::jsonb;
            a             |        b         
--------------------------+------------------
 {"a": 1, "b": 2, "c": 3} | {"a": 4, "b": 2}
(1 row)

Time: 47.197 ms

TEST=# INSERT INTO jsonb_test
       SELECT ('{"a" : ' || i+1 || ',"b" : ' || i+2 || ',"c": ' || i+3 || '}')::jsonb,
       ('{"a" : ' || i+2 || ',"b" : ' || i || ',"c": ' || i+5 || '}')::jsonb
       FROM generate_series(1,1000) i;
INSERT 0 1000
Time: 84.765 ms

TEST=# CREATE INDEX ON jsonb_test USING gin((a-b));
CREATE INDEX
Time: 229.050 ms
TEST=# EXPLAIN SELECT * FROM jsonb_test WHERE a-b @> '{"a": 1, "c": 3}';
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Bitmap Heap Scan on jsonb_test  (cost=20.26..24.52 rows=1 width=113)
   Recheck Cond: ((a - b) @> '{"a": 1, "c": 3}'::jsonb)
   ->  Bitmap Index Scan on jsonb_test_expr_idx  (cost=0.00..20.26 rows=1 width=0)
         Index Cond: ((a - b) @> '{"a": 1, "c": 3}'::jsonb)
(4 rows)

Time: 13.277 ms

All seems to work as expected. I guess the one thing I’m not so certain about here is if any of this behaves correctly once we start getting nested json, but at first glance it doesn’t look too wonky to me:

TEST=# SELECT '{"a": 1, "b": 2, "c": 3, "d": {"a": 4}}'::jsonb - '{"d": {"a": 4}, "b": 2}'::jsonb;
     ?column?     
------------------
 {"a": 1, "c": 3}
(1 row)

TEST=# SELECT '{"a": 4, "b": 2, "c": 3, "d": {"a": 4}}'::jsonb - '{"a": 4, "b": 2}'::jsonb;
        ?column?         
-------------------------
 {"c": 3, "d": {"a": 4}}
(1 row)

Of course, being written in SQL these probably aren’t anywhere near as fast as the hstore equivalents, which are written in C, so it’d be nice to see something in core postgres to do this.

PostgreSQL 9.4 released

It looks like PostgreSQL 9.4 was released last Thursday. I’ve been keeping an eye on 9.4 and watching some of the chat about new features, although I’ve just been too buried in work to pay too much attention. Today however is my first day off for Christmas, so finally I’ve got some time to look into it.

The most interesting features to me are jsonb and Logical Decoding, so that’s what I’m going to look at, but there’s more and you can read about it here.

jsonb

The new jsonb data type stores JSON data internally in a binary form, which makes it possible to index the keys and values within. In previous versions we had a json data type, but all that does is enforce valid JSON; the data is still stored as text. Whilst it is possible to do lookups on key-value data in previous versions using the hstore type (provided by the hstore module), with JSON seemingly being ubiquitous in applications these days, jsonb means we can just let devs store their data straight into the database and still be able to do fast lookups and searches.

At work we get quite a lot of variable callback data from web APIs, or serialized data from application objects, that tends to end up being stored as text. The ability to look up that data via a GIN index will be invaluable. I assume even XML storage should become easier, as there are plenty of pre-cooked ways to convert XML to JSON.

Let’s create a quick test table:

CREATE TABLE jsonb_test(
    id integer PRIMARY KEY,
    data jsonb
);
CREATE INDEX jsonb_test_data ON jsonb_test USING gin(data);

-- Obviously this data is ridiculous, but we need enough rows for postgres to prefer an index over a seq scan.
INSERT INTO jsonb_test
SELECT i, ('{"name": "Person' || i || '","age" : ' || i || ',"address": {"add1": "' 
    || i || ' Clarence Street","city": "Lancaster","postcode": "LA13BG"}}')::jsonb
FROM generate_series(1,100000) i;

Now if we query on the data column we should see the jsonb_test_data index being used:

TEST=# SELECT * FROM jsonb_test 
WHERE data @> '{"address": {"add1": "2300 Clarence Street"}}';
  id  |                                                            data                                                             
------+-----------------------------------------------------------------------------------------------------------------------------
 2300 | {"age": 2300, "name": "Person2300", "address": {"add1": "2300 Clarence Street", "city": "Lancaster", "postcode": "LA13BG"}}
(1 row)

Time: 10.811 ms

TEST=# EXPLAIN SELECT * FROM jsonb_test 
WHERE data @> '{"address": {"add1": "2300 Clarence Street"}}';
                                      QUERY PLAN                                      
--------------------------------------------------------------------------------------
 Bitmap Heap Scan on jsonb_test  (cost=1040.83..1395.09 rows=107 width=147)
   Recheck Cond: (data @> '{"address": {"add1": "2300 Clarence Street"}}'::jsonb)
   ->  Bitmap Index Scan on jsonb_test_data  (cost=0.00..1040.80 rows=107 width=0)
         Index Cond: (data @> '{"address": {"add1": "2300 Clarence Street"}}'::jsonb)
(4 rows)

Logical Decoding

Whilst Logical Decoding isn’t really in a state to be put into active duty right away, it is pretty special, and allows postgres to supply a stream of changes (or partial changes) in a user-defined format. This is similar to what we’ve been doing for ages with trigger-based replication like Slony and Londiste, but dissimilar because instead of all the overhead and clunkiness of log triggers, the changes are read directly from the WAL in a similar way to streaming binary replication. The uses don’t end at master-slave replication either; multimaster and selective replication with per-table granularity, auditing, online upgrades and cache invalidation are just some of the possible uses.

Logical Decoding uses the concept of “replication slots”, which represent a stream of changes logged for a particular consumer, and we can have as many replication slots as we like. The great thing about replication slots is that once they’re created, all WAL files required by the slot are retained, and they aren’t just for Logical Decoding; Streaming Replication can make use of them too, so we don’t have to balance wal_keep_segments or rely on archive_command any more. Replication slots aren’t a magic bullet though; if a replication slot isn’t being consumed it will cause PostgreSQL to consume disk space as it retains WAL files for the slot/consumer.

I mentioned earlier that Logical Decoding allows changes to be supplied in a “user defined format”; this is provided by an output plugin in the form of a shared library that needs to be custom written as required, and it’s in this output plugin where the format and any restrictions on what data we want would be controlled. The one exception to this is data used for identifying old rows from updates or deletes, which is defined before it is written to the WAL, and has to be set on a per table basis with ALTER TABLE REPLICA IDENTITY.

There’s a “test_decoding” plugin supplied as a contrib module that we can use for testing, and that’s what I’m going to have a quick look at now.

The first thing we have to do is set wal_level to logical and make sure max_replication_slots is greater than zero. Once we’ve done that and restarted PostgreSQL we’re ready to start playing, and we can create our first replication slot:

TEST=# SELECT * FROM pg_create_logical_replication_slot('test_replication_slot', 'test_decoding');
       slot_name       | xlog_position 
-----------------------+---------------
 test_replication_slot | 0/56436390
(1 row)

We should now be able to see our replication slot in the pg_replication_slots view:

TEST=# SELECT * FROM pg_replication_slots;
       slot_name       |    plugin     | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn 
-----------------------+---------------+-----------+--------+----------+--------+------+--------------+-------------
 test_replication_slot | test_decoding | logical   |  16422 | TEST     | f      |      |      1135904 | 0/56436358
(1 row)

To look and see if there are any changes, we can use the pg_logical_slot_peek_changes function:

TEST=# \x
Expanded display is on.
TEST=# SELECT * FROM pg_logical_slot_peek_changes('test_replication_slot',NULL, NULL);

-[ RECORD 1 ]-----------------------------------------------------------------
location | 0/56436450
xid      | 1135906
data     | BEGIN 1135906
-[ RECORD 2 ]-----------------------------------------------------------------
location | 0/56436450
xid      | 1135906
data     | table _test_replication.sl_components: UPDATE: co_actor:'local_sync' co_pid[integer]:20814 co_node[integer]:0 co_connection_pid[integer]:20831 co_activity:'thread main loop' co_starttime[timestamp with time zone]:'2014-12-22 16:00:48+00' co_event[bigint]:null co_eventtype:'n/a'
-[ RECORD 3 ]-----------------------------------------------------------------
location | 0/56436518
xid      | 1135906
data     | COMMIT 1135906

< snip >

… and I’ll snip my output there at 3 rows; I use this machine for Slony testing, so we’re already seeing all of the Slony chatter here, but you should be able to see the capture of an update to the “_test_replication.sl_components” table (this could be any table – I just happened to call my Slony cluster “test_replication” too). If you create some activity on your database, you should start to see some output. Notice that the output is the actual changes on the table, not a capture of the SQL statement that caused the changes; we can use this change information to build SQL if we want, or some other form of DML for another system.

To actually consume the queue we can call pg_logical_slot_get_changes:

TEST=# SELECT * FROM pg_logical_slot_get_changes('test_replication_slot', NULL, NULL);

This outputs the same as the above, but once we’ve called it the changes are classed as consumed, regardless of whether the caller actually applied them, and will not be output again (nor the WAL retained). One thing that would be useful here would be the ability to pull the changes, apply them, then confirm them as applied before they’re marked as consumed; I guess this could be achieved by first calling pg_logical_slot_peek_changes, applying the changes, and then calling pg_logical_slot_get_changes passing the latest LSN seen from the peek.
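
To make that peek-then-confirm idea concrete, here’s a toy Python model of a slot’s queue. This is entirely hypothetical scaffolding: the real functions are pg_logical_slot_peek_changes and pg_logical_slot_get_changes, and their upto_lsn handling works in terms of whole transactions, not individual rows.

```python
class ToySlot:
    """Toy model of peek vs get on a logical replication slot."""
    def __init__(self):
        self.changes = []                 # ordered (lsn, data) pairs

    def append(self, lsn, data):
        self.changes.append((lsn, data))

    def peek(self):
        # Like pg_logical_slot_peek_changes: look, but consume nothing.
        return list(self.changes)

    def get(self, upto_lsn=None):
        # Like pg_logical_slot_get_changes: everything returned is
        # marked consumed; upto_lsn bounds how far we consume.
        if upto_lsn is None:
            consumed, self.changes = self.changes, []
        else:
            consumed = [c for c in self.changes if c[0] <= upto_lsn]
            self.changes = [c for c in self.changes if c[0] > upto_lsn]
        return consumed
```

A consumer following the pattern above would peek, apply what it saw, then call get with the last LSN it successfully applied.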

In addition to the sql functions, the pg_recvlogical binary is provided to pull data over the streaming replication protocol with something like:

# pg_recvlogical -U postgres -d TEST --slot test_replication_slot --start -f -

For this, as with streaming replication, we need to set max_wal_senders greater than zero.

Once we’re finished with our test, we should drop the replication slot:

TEST=# SELECT pg_drop_replication_slot('test_replication_slot');

Apparently the one thing Logical Decoding can’t do is output DDL, and I’m guessing this is due to other complexities that need to be overcome rather than being by design. All exciting!