I had forgotten this file existed, but it needs tqdm and pytest-benchmark, so add those dev
requirements.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always start a transaction before processing, but there are cases where
we don't need to. Switch to doing it on-demand.
This doesn't make a big difference for sqlite3, but it can for Postgres because
of the latency: 12% or so. Every bit helps!
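A minimal sketch of the idea, with illustrative names rather than the actual lightningd db interface: open the transaction only when something actually touches the database, so messages which never hit the db skip BEGIN/COMMIT entirely.
```
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins only: the real code goes through lightningd's db layer. */
struct db { bool in_transaction; };

static void db_exec(struct db *db, const char *sql)
{
	printf("%s\n", sql);	/* stands in for issuing a real statement */
}

/* Called by anything about to read or write the db. */
static void db_begin_if_needed(struct db *db)
{
	if (db->in_transaction)
		return;
	db->in_transaction = true;
	db_exec(db, "BEGIN;");
}

/* Called once per message: a no-op if nothing touched the db. */
static void db_commit_if_open(struct db *db)
{
	if (!db->in_transaction)
		return;
	db_exec(db, "COMMIT;");
	db->in_transaction = false;
}
```
For sqlite3 the extra BEGIN/COMMIT is cheap, but each Postgres statement means a round trip to the server, which is presumably where the ~12% comes from.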
30 runs, min-max(mean+/-stddev):
Postgres before: 8.842773-9.769030(9.19531+/-0.21)
Postgres after: 8.007967-8.321856(8.14172+/-0.066)
sqlite3 before: 7.486042-8.371831(8.15544+/-0.19)
sqlite3 after: 7.973411-8.576135(8.3025+/-0.12)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is slow, but will make sure we find out if we add latency spikes in future.
tests/test_coinmoves.py::test_generate_coinmoves (5,000,000, sqlite3):
Time (from start to end of l2 node): 223 seconds
Latency min/median/max: 0.0023 / 0.0033 / 0.113 seconds
tests/test_coinmoves.py::test_generate_coinmoves (5,000,000, Postgres):
Time (from start to end of l2 node): 470 seconds
Latency min/median/max: 0.0024 / 0.0098 / 0.124 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: lightningd: multiple significant speedups for large nodes, especially preventing "freezes" under exceptionally high load.
This avoids latency spikes when we ask lightningd to give us 2M entries.
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 88 seconds (was 95)
Worst latency: 0.028 seconds **WAS 4.5**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now we've found all the other issues, the latency spike (4 seconds on my laptop)
for querying 2M elements remains.
Restore the limited sampling which we reverted, but make it 10,000 now.
This doesn't help our worst-case latency, because sql still asks for all 2M entries on
first access. We address that next.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a reasonable number of commands now, and we *already* keep a
strmap for the usage strings. So simply keep the usage and the command
in the map, and skip the array.
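Roughly what that looks like, as a hedged sketch (the value struct and helpers are hypothetical; it assumes the CCAN strmap module we already use for the usage strings):
```
#include <ccan/strmap/strmap.h>

struct json_command;	/* the real command type lives elsewhere */

/* Hypothetical value type: usage string plus the command itself. */
struct command_entry {
	const char *usage;
	struct json_command *cmd;
};

struct command_map { STRMAP_MEMBERS(struct command_entry *); };

/* Assumes the map was strmap_init()ed; note strmap does not copy the
 * key, so name must outlive the map. */
static void register_command(struct command_map *map, const char *name,
			     struct command_entry *e)
{
	strmap_add(map, name, e);
}

static struct command_entry *find_command(struct command_map *map,
					  const char *name)
{
	return strmap_get(map, name);
}
```
Lookup by name replaces the array scan, and there's no second structure to keep in sync.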
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 95 seconds (was 102)
Worst latency: 4.5 seconds
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, Postgres):
Time (from start to end of l2 node): 231 seconds
Worst latency: 4.8 seconds
For comparison, the values from 25.09.2 were:
sqlite3:
Time (from start to end of l2 node): 403 seconds
Postgres:
Time (from start to end of l2 node): 671 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 102 seconds **WAS 126**
Worst latency: 4.5 seconds **WAS 5.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that ccan/io rotates through callbacks, we can call io_always() to
yield.
We're now fast enough that this doesn't have any effect on this test,
but it's still good to have.
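The shape of the pattern, as a hedged fragment rather than the real code (the writer struct and callbacks are invented): after each chunk of work we return io_always(), so the now-fair ccan/io loop rotates through every other ready connection before calling us back.
```
#include <ccan/io/io.h>
#include <stddef.h>

/* Illustrative state for a long-running writer. */
struct writer {
	const char *data;
	size_t len;
};

static struct io_plan *do_next_chunk(struct io_conn *conn, struct writer *w);

/* io_always() just schedules do_next_chunk for the next loop pass. */
static struct io_plan *yield_then_continue(struct io_conn *conn, struct writer *w)
{
	return io_always(conn, do_next_chunk, w);
}

static struct io_plan *do_next_chunk(struct io_conn *conn, struct writer *w)
{
	size_t chunk = w->len < 65536 ? w->len : 65536;
	const char *p = w->data;

	if (w->len == 0)
		return io_close(conn);
	w->data += chunk;
	w->len -= chunk;
	/* Write one chunk, then yield before producing the next one. */
	return io_write(conn, p, chunk, yield_then_continue, w);
}
```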
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that ccan/io rotates through callbacks, we can call io_always() to yield.
Though it doesn't fire on our benchmark, it's a good thing to do.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This rotates through fds explicitly, to avoid unfairness.
This doesn't really make a difference until we start using it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We would keep parsing if we were out of tokens, even if we had actually
finished one object!
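A self-contained illustration of the missing check (not the parser in common/): once a complete top-level object is sitting in the buffer we should stop and hand it on, even if more bytes are queued behind it.
```
#include <stddef.h>

/* Return the length of the first complete top-level JSON object/array in
 * buf, or 0 if it is still incomplete.  Once this is non-zero, stop parsing. */
static size_t complete_object_len(const char *buf, size_t len)
{
	size_t depth = 0;
	int in_string = 0, escaped = 0;

	for (size_t i = 0; i < len; i++) {
		char c = buf[i];
		if (in_string) {
			if (escaped)
				escaped = 0;
			else if (c == '\\')
				escaped = 1;
			else if (c == '"')
				in_string = 0;
			continue;
		}
		if (c == '"')
			in_string = 1;
		else if (c == '{' || c == '[')
			depth++;
		else if (c == '}' || c == ']') {
			if (depth > 0 && --depth == 0)
				return i + 1;
		}
	}
	return 0;
}
```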
These are comparisons against the "xpay: use filtering on rpc_command
so we only get called on "pay"." commit, not the disastrous previous one!
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 126 seconds (was 135)
Worst latency: 5.1 seconds **WAS 12.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A client can do this by sending a large request, so this allows us to see what
happens if they do that, even though 1MB (2MB buffer) is more than we need.
This drives our performance through the floor: see the next patch, which gets
us back on track.
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 271 seconds **WAS 135**
Worst latency: 105 seconds **WAS 12.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: "filters" can be specified on the `custommsg` hook to limit what message types the hook will be called for.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: pyln-client: optional filters can be given when hooks are registered (for supported hooks)
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 135 seconds **WAS 227**
Worst latency: 12.1 seconds **WAS 62.4**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
xpay is relying on the destructor to send another request. This means
that it doesn't actually submit the request until *next time* we wake.
This has been in xpay from the start, but it is not noticeable until
xpay stops subscribing to every command on the rpc_command hook.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: the `rpc_command` hook can now specify a "filter" on what commands it is interested in.
We're going to use this on the "rpc_command" hook, to allow xpay to specify that it
only wants to be called on "pay" commands.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This potentially saves us some reads (not measurably, though), at the cost
of less fairness. It's important to measure, because a single
large request will increase the buffer size for successive requests, so we
can see this pattern in real usage.
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 227 seconds (was 239)
Worst latency: 62.4 seconds (was 56.9)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Profiling shows us spending all our time in tal_arr_remove when dealing
with a giant number of output streams. This applies both to RPC output
and plugin output.
Use a linked list instead.
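To make the cost concrete, a plain-C illustration (not the lightningd code): removing an element from a flat array, as tal_arr_remove does, shifts the tail and is O(n) per removal, so draining n finished streams is O(n^2); unlinking a node from an intrusive doubly-linked list is O(1).
```
#include <stddef.h>
#include <string.h>

/* O(n): shift the tail of the array down over the removed slot. */
static void array_remove(void **arr, size_t *n, size_t i)
{
	memmove(&arr[i], &arr[i + 1], (*n - i - 1) * sizeof(arr[0]));
	(*n)--;
}

/* O(1): unlink the node in place. */
struct stream {
	struct stream *prev, *next;
	/* ...stream state... */
};

static void list_remove(struct stream **head, struct stream *s)
{
	if (s->prev)
		s->prev->next = s->next;
	else
		*head = s->next;
	if (s->next)
		s->next->prev = s->prev;
}
```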
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 239 seconds **WAS 518**
Worst latency: 56.9 seconds **WAS 353**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've rid ourselves of the worst offenders, we can make this a real
stress test. We remove plugin io saving and low-level logging, to avoid
benchmarking artifacts of the test harness.
Here are the results:
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 518 seconds
Worst latency: 353 seconds
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, Postgres):
Time (from start to end of l2 node): 417 seconds
Worst latency: 96.6 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we only have 8 or fewer spans at once (as is the normal case), don't
do allocation, which might interfere with tracing.
This doesn't change our test_generate_coinmoves() benchmark.
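A rough sketch of the technique, with made-up names (treat the details as illustrative, not the tracer's actual layout): a fixed slab covers the usual case of at most 8 live spans, and malloc is only the fallback.
```
#include <stdlib.h>

#define SMALL_SPANS 8

struct span { unsigned long id; /* ... */ };

struct span_store {
	struct span small[SMALL_SPANS];
	int used[SMALL_SPANS];		/* 1 if slot is occupied */
};

static struct span *span_new(struct span_store *s, unsigned long id)
{
	for (size_t i = 0; i < SMALL_SPANS; i++) {
		if (!s->used[i]) {
			s->used[i] = 1;
			s->small[i].id = id;
			return &s->small[i];
		}
	}
	/* Rare: more than 8 concurrent spans, so allocate. */
	struct span *sp = malloc(sizeof(*sp));
	if (sp)
		sp->id = id;
	return sp;
}

static void span_free(struct span_store *s, struct span *sp)
{
	if (sp >= s->small && sp < s->small + SMALL_SPANS)
		s->used[sp - s->small] = 0;	/* slab slot: just mark free */
	else
		free(sp);
}
```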
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have USDT compiled in, scanning the array of spans becomes
prohibitively expensive when we have really large numbers of requests. In the
bookkeeper code, when catching up with 1.6M channel events, this
became clear in profiling.
Use a hash table instead.
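A minimal, self-contained sketch of the idea (names and sizing are illustrative, not the actual implementation): keyed lookup replaces the linear scan, so finding a span stays O(1) however many requests are in flight.
```
#include <stddef.h>

#define SPAN_BUCKETS 1024

struct span {
	unsigned long key;
	struct span *next;	/* hash-bucket chain */
};

static struct span *buckets[SPAN_BUCKETS];

static size_t span_hash(unsigned long key)
{
	return (key * 2654435761UL) % SPAN_BUCKETS;
}

static void span_add(struct span *s)
{
	size_t b = span_hash(s->key);
	s->next = buckets[b];
	buckets[b] = s;
}

static struct span *span_find(unsigned long key)
{
	for (struct span *s = buckets[span_hash(key)]; s; s = s->next)
		if (s->key == key)
			return s;
	return NULL;
}
```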
Before:
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 269 seconds (vs 14 with HAVE_USDT=0)
Worst latency: 4.0 seconds
After:
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 14 seconds
Worst latency: 4.3 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we have many commands, this is where we spend all our time, and it's
just for an old assertion.
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 13 seconds **WAS 34**
Worst latency: 4.0 seconds **WAS 24**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we add a new hook, not at the end, while hooks are getting called,
then iteration could be messed up (e.g. calling a plugin twice, or
skipping one).
The simplest thing is to defer updates until nobody is calling the
hook. In theory this could livelock, in practice it won't.
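A sketch of the deferral, with invented names and preallocated arrays for brevity (the real code works on the plugin_hook's own structures): a counter tracks how many invocations are iterating, and additions that arrive while it's non-zero are applied only when it drops back to zero.
```
#include <stddef.h>

struct plugin;	/* opaque here */

struct hook {
	struct plugin **plugins;	/* current call order */
	size_t n_plugins;
	size_t calls_in_flight;		/* invocations currently iterating */
	struct plugin **pending_add;	/* deferred additions */
	size_t n_pending;
};

static void apply_pending(struct hook *h)
{
	for (size_t i = 0; i < h->n_pending; i++)
		h->plugins[h->n_plugins++] = h->pending_add[i];
	h->n_pending = 0;
}

static void hook_call_start(struct hook *h)
{
	h->calls_in_flight++;
}

static void hook_call_done(struct hook *h)
{
	if (--h->calls_in_flight == 0)
		apply_pending(h);
}

static void hook_add_plugin(struct hook *h, struct plugin *p)
{
	if (h->calls_in_flight) {
		/* Someone is mid-iteration: defer, so we never call a
		 * plugin twice or skip one within a single invocation. */
		h->pending_add[h->n_pending++] = p;
		return;
	}
	h->plugins[h->n_plugins++] = p;
}
```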
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We make a copy, then attach a destructor to the hook in case that plugin exits, so we
can NULL it out in the local copy. When we have 300,000 requests pending, this means
we have 300,000 destructors, which doesn't scale (it's a singly-linked list).
Simply NULL out (rather than shrink) the array in the `plugin_hook`.
Then we can keep using that.
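Roughly the new shape, as an illustrative sketch rather than the actual plugin_hook code: one shared array whose dead entries become NULL, so every pending call keeps walking the same array and needs neither a private copy nor a destructor.
```
#include <stddef.h>

struct plugin;

struct plugin_hook {
	struct plugin **hooks;	/* entries may be NULL once a plugin exits */
	size_t n_hooks;
};

/* Called (once) when a plugin dies: don't shrink, callers are iterating. */
static void hook_plugin_destroyed(struct plugin_hook *h, struct plugin *p)
{
	for (size_t i = 0; i < h->n_hooks; i++)
		if (h->hooks[i] == p)
			h->hooks[i] = NULL;
}

/* Pending calls just skip the NULLed-out slots. */
static struct plugin *next_hook_plugin(const struct plugin_hook *h, size_t *i)
{
	while (*i < h->n_hooks) {
		struct plugin *p = h->hooks[(*i)++];
		if (p)
			return p;
	}
	return NULL;
}
```
The array never shrinks, but a few stale NULL slots are nothing next to 300,000 destructor nodes.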
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 34 seconds **WAS 85**
Worst latency: 24 seconds **WAS 75**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We start with 100,000 entries. We will scale this to 2M as we fix the
O(N^2) bottlenecks.
I measure the node time after we modify the db, like so:
```
while guilt push && rm -rf /tmp/ltests* && uv run make -s RUST=0; do RUST=0 VALGRIND=0 TIMEOUT=100 TEST_DEBUG=1 eatmydata uv run pytest -vvv -p no:logging tests/test_coinmoves.py::test_generate_coinmoves > /tmp/`guilt top`-sql 2>&1; done
```
Then I analyzed the results with:
```
FILE=/tmp/synthetic-data.patch-sql; START=$(grep 'lightningd-2 .* Server started with public key' $FILE | tail -n1 | cut -d\ -f2 | cut -d. -f1); END=$(grep 'lightningd-2 .* JSON-RPC shutdown' $FILE | tail -n1 | cut -d\ -f2 | cut -d. -f1); echo $(( $(date +%s -d $END) - $(date +%s -d $START) )); grep 'E assert' $FILE;
```
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 85 seconds
Worst latency: 75 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This reverts commit 1dda0c0753 so we can test
what it's like to be flooded with logs again.
This benefits from other improvements we've made this release to plugin
input handling (i.e. converting to use common/jsonrpc_io), so it doesn't
make much difference.
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 211 seconds
Worst latency: 108 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This reverts the `bookkeeper: only read listchannelmoves 1000 entries at a time.` commit,
so we can properly fix the scalability issue in the coming patches.
tests/test_coinmoves.py::test_generate_coinmoves (100,000):
Time (from start to end of l2 node): 207 seconds
Worst latency: 106 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Nobody has hit this yet, but we're about to with our tests.
The size of the db is going to be whatever the total size of the tables is; bigger nodes,
bigger db.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
bc4bb2b0ef "libplugin: use jsonrpc_io logic for sync requests too."
changed this message, and the test was not updated.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I noticed this in the logs:
```
listinvoices: description/bolt11/bolt12 not found (
{"jsonrpc":"2)
```
And we make the same formatting mistake in several places.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
65dccea5bd "pytest: fix flake in test_reconnect_signed" accidentally
introduced a bug, where the connect command may not return.
If we call "connect" while a connection is still being processed
through the peer_connected hooks, we would call peer_channels_cleanup(),
which (if the peer has no channels) would free the peer.
Then when the peer_connected hook returned, it would look up the peer,
see it was gone, and silently return. The connect_succeeded() function
was never called, and the connect command was never woken.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-None: bug introduced this release.
We had a flake of the form:
```
2025-11-18T04:42:23.489Z **BROKEN** 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: wake delay for WIRE_CHANNEL_REESTABLISH: 6789msec
```
This happened as we were shutting down. Some investigation revealed
the cause: `dev-memleak` can be extremely slow. Fair enough.
So we change `dev-memleak` to call connectd first, and connectd uses
that as a trigger to stop complaining about delays.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Delay can cause bogus complaints:
```
2025-11-13T23:50:03.6643632Z lightningd-3 2025-11-13T23:37:29.947Z **BROKEN** 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: wake delay for WIRE_CHANNEL_REESTABLISH: 5708msec
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we see a close tx at the same time we see the channel reach announce depth (here, 104x1x0 confirms
and we process blocks 109 and 110), we see it go from:
CGOSSIP_WAITING_FOR_ANNOUNCE_DEPTH->CGOSSIP_CHANNEL_UNANNOUNCED_DYING (Channel closing before 6 confirms)
Then:
CGOSSIP_CHANNEL_UNANNOUNCED_DYING->CGOSSIP_CHANNEL_ANNOUNCED_DEAD
```
INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: State changed from CHANNELD_NORMAL to CHANNELD_SHUTTING_DOWN
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: gossip state: CGOSSIP_WAITING_FOR_ANNOUNCE_DEPTH->CGOSSIP_CHANNEL_UNANNOUNCED_DYING (Channel closing before 6 confirms)
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: billboard: Channel ready for use. They've sent shutdown, waiting for ours
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: Trying commit
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: Can't send commit: nothing to send, feechange not wanted ({ RCVD_ADD_ACK_REVOCATION:3755 }) blockheight not wanted ({ RCVD_ADD_ACK_REVOCATION:109 })
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: peer_out WIRE_SHUTDOWN
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: billboard: Channel ready for use. Shutdown messages exchanged.
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: pid 35692, msgfd 86
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#2: Status closed, but not exited. Killing
INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: State changed from CHANNELD_SHUTTING_DOWN to CLOSINGD_SIGEXCHANGE
DEBUG hsmd: new_client: 2
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: Expected closing weight = 772, fee 2895sat (min 1447sat, max 1006268sat)
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: out = 506268sat/500000sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: dustlimit = 546sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: fee = 2895sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: fee negotiation step = 50%
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: billboard perm: Negotiating closing fee between 1447sat and 1006268sat satoshi (ideal 2895sat) using step 50%
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: billboard: Waiting for their initial closing fee offer
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: peer_in WIRE_CLOSING_SIGNED
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: Making close tx at = 506268sat/500000sat fee 2895sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: Received fee offer 2895sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: ...offer is reasonable
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Their actual closing tx fee is 2895sat vs previous 4220sat: weight is 772
INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: performing quickclose in range 1447sat-8492sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: Making close tx at = 506268sat/500000sat fee 2895sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-hsmd: Got WIRE_HSMD_SIGN_MUTUAL_CLOSE_TX
DEBUG hsmd: Client: Received message 21 from client
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: sending fee offer 2895sat
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: peer_out WIRE_CLOSING_SIGNED
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-closingd-chan#2: billboard perm: We agreed on a closing fee of 2895 satoshi for tx:7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce
INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: State changed from CLOSINGD_SIGEXCHANGE to CLOSINGD_COMPLETE
DEBUG wallet: Owning output 1 506268sat (p2tr) txid 7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: We have 1 anchor points to use
DEBUG lightningd: Broadcasting txid 7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce
DEBUG lightningd: sendrawtransaction: 020000000001016f50e5481abf2482ed719dde6014cf9da9b3076fe12af7c8bbccfb705150ffbe0000000000ffffffff02d195070000000000225120eed745804da9784cc203f563efa99ffa54fdf01b137bc964e63c3124070ffbe69cb90700000000002251208a73bb281433f2b5db5461c6778aa2afd28e011de6bd04799a5991662c61d7a80400473044022015af4ffc277e44ab701507e017194b1e3ab8598397efab02c59dbe72340790b50220762065e91ce0a2c73f36fdf3a4d54732ef91d075c0887cfb28580220e081e81a01473044022050730b9d051a3fdecb368750ecbd558d625a1acaa42e59586c7187407319dda402202f2f6e099b2e092698fe528260e2b56be8de325bc985a17945c724514fb65b9c014752210259b3cb48220dc2016f4d320bb8105cc2c92bd8c79fc25c2ad96f37b496e491c62103bbee60c395056b8a1201e06ed79e2914c11a61d7b1aa781846468d02489dba6952ae00000000
DEBUG lightningd: Adding block 108: 4e6ba3059c6890797a29acd306e82d6f4e85ad62a6aa6d0ee5bc6ead3e6329cd
DEBUG hsmd: Client: Received message 5 from client
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_downgrade
DEBUG plugin-bcli: sendrawtx exit 0 (bitcoin-cli -regtest -datadir=/tmp/ltests-laf51vlg/test_buy_liquidity_ad_check_bookkeeping_1/lightning-2/ -rpcclienttimeout=60 -rpcport=41847 -rpcuser=... -stdinrpcpass sendrawtransaction 020000000001016f50e5481abf2482ed719dde6014cf9da9b3076fe12af7c8bbccfb705150ffbe0000000000ffffffff02d195070000000000225120eed745804da9784cc203f563efa99ffa54fdf01b137bc964e63c3124070ffbe69cb90700000000002251208a73bb281433f2b5db5461c6778aa2afd28e011de6bd04799a5991662c61d7a80400473044022015af4ffc277e44ab701507e017194b1e3ab8598397efab02c59dbe72340790b50220762065e91ce0a2c73f36fdf3a4d54732ef91d075c0887cfb28580220e081e81a01473044022050730b9d051a3fdecb368750ecbd558d625a1acaa42e59586c7187407319dda402202f2f6e099b2e092698fe528260e2b56be8de325bc985a17945c724514fb65b9c014752210259b3cb48220dc2016f4d320bb8105cc2c92bd8c79fc25c2ad96f37b496e491c62103bbee60c395056b8a1201e06ed79e2914c11a61d7b1aa781846468d02489dba6952ae00000000)
DEBUG lightningd: Adding block 109: 540a47d6b316aec8d1b56efca22ffb384403659e3b29cddd3d68f87df72065a6
DEBUG lightningd: Adding block 110: 1931e9ea62324d0fb50188df525e84ee7021e1910cf3534b343db05cc4639851
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Got UTXO spend for beff505170fbccbbc8f72ae16f07b3a99dcf1460de9d71ed8224bf1a48e5506f:0: 7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce
UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Peer permanent failure in CLOSINGD_COMPLETE: Funding transaction spent: onchain txid 7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce (reason=unknown)
UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Not dropping our unilateral close onchain since we already saw 7685256e6e4632bd601f42bb85b6f22b7a0db8d55c468dfbb1703c3ab2e4d9ce confirm.
INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: State changed from CLOSINGD_COMPLETE to FUNDING_SPEND_SEEN
**BROKEN** 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Illegal gossip state transition: CGOSSIP_CHANNEL_UNANNOUNCED_DYING->CGOSSIP_CHANNEL_ANNOUNCED_DEAD
DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-onchaind-chan#2: pid 35721, msgfd 86
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>