17611 Commits

Author SHA1 Message Date
Rusty Russell
d45bc2d56e connectd: don't toggle nagle on and off, leave it always off.
We're doing our own buffering now.

We leave the is_urgent() function in place for two more commits, though.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
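A hedged Python sketch of the idea (the real code is C in connectd): once the application does its own write batching, Nagle can be disabled once at socket creation instead of being toggled per message.

```python
import socket

def make_stream_socket():
    """Illustrative only, not CLN's actual code: leave Nagle off permanently,
    since we batch writes ourselves rather than relying on the kernel."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```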
Rusty Russell
c23b7a492d connectd: switch to using io_write_partial instead of io_write.
This gives us finer control over write sizes: for now we just cap
the write size.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
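The capped-write behaviour can be sketched like this (names invented, not the actual io_write_partial API): only hand the OS at most `cap` bytes per call, resuming from wherever a short write left off.

```python
def write_capped(write_fn, data, cap=4096):
    """write_fn(bytes) -> number of bytes accepted (may be short).
    Keep issuing writes of at most `cap` bytes until all data is sent."""
    sent = 0
    while sent < len(data):
        chunk = data[sent:sent + cap]
        sent += write_fn(chunk)
    return sent
```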
Rusty Russell
df1ae1d680 connectd: refactor to break up "encrypt_and_send".
Do all the special treatment of the message type first.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
7577e59f6c connectd: refactor outgoing loop.
Give us a single "next message" function to call.  This will be useful
when we want to write more than one at a time.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
42bdb2d638 CI: run tests in the wireshark group so we can test packet sizes
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
369338347d pytest: add fixture for checking packet sizes.
This requires access to dumpcap.  On Ubuntu, at least, this means you
need to be in the "wireshark" group.

We may also need:
	sudo ethtool -K lo gro off gso off tso off

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
cd7afb506a pytest: remove now-invalid test.
Commit 888745be16 (dev_disconnect: remove @ marker.) in v0.11 (April
2022) removed the '@' marker from our dev_disconnect code, but one
test still uses it.

Refactoring this code made it crash on invalid input.  The test
triggered a db issue which has been long fixed, so I'm simply removing
it.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
4d030d83ce pytest: fix flake in test_fetchinvoice_autoconnect.
l3 doesn't just need to know about l2 (which it can get from the
channel_announcement), but needs to see the node_announcement.

Otherwise:

```
        l1, l2 = node_factory.line_graph(2, wait_for_announce=True,
                                         # No onion_message support in l1
                                         opts=[{'dev-force-features': -39},
                                               {'dev-allow-localhost': None}])
    
        l3 = node_factory.get_node()
        l3.rpc.connect(l1.info['id'], 'localhost', l1.port)
        wait_for(lambda: l3.rpc.listnodes(l2.info['id'])['nodes'] != [])
    
        offer = l2.rpc.call('offer', {'amount': '2msat',
                                      'description': 'simple test'})
>       l3.rpc.call('fetchinvoice', {'offer': offer['bolt12']})

tests/test_pay.py:4804: 
...	
>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: fetchinvoice, payload: {'offer': 'lno1qgsqvgnwgcg35z6ee2h3yczraddm72xrfua9uve2rlrm9deu7xyfzrcgqypq5zmnd9khqmr9yp6x2um5zcssxwz9sqkjtd8qwnx06lxckvu6g8w8t0ue0zsrfqqygj636s4sw7v6'}, error: {'code': 1003, 'message': 'Failed: could not route or connect directly to 033845802d25b4e074ccfd7cd8b339a41dc75bf9978a034800444b51d42b07799a: {"code":400,"message":"Unable to connect, no address known for peer"}'}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
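A minimal sketch of the `wait_for` polling pattern the test relies on (simplified; not the actual pyln test-harness implementation): poll a predicate until it holds, which is how the fix waits for the node_announcement to propagate rather than just the channel.

```python
import time

def wait_for(pred, timeout=5.0, interval=0.01):
    """Poll pred() until it returns truthy, or raise after timeout."""
    deadline = time.time() + timeout
    while not pred():
        if time.time() > deadline:
            raise TimeoutError("predicate never held")
        time.sleep(interval)
```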
Rusty Russell
29e0a1ddfe bkpr: limp along if we lost our db.
We can't really do decent bookkeeping any more, but don't crash!

```
bookkeeper: plugins/bkpr/recorder.c:178: find_txo_chain: Assertion `acct->open_event_db_id' failed.
bookkeeper: FATAL SIGNAL 6 (version v25.12)
0xaaaab7d51a7f send_backtrace
	common/daemon.c:38
0xaaaab7d51b2b crashdump
	common/daemon.c:83
0xffff8c0b07cf ???
	???:0
0xffff8bdf7608 __pthread_kill_implementation
	./nptl/pthread_kill.c:44
0xffff8bdacb3b __GI_raise
	../sysdeps/posix/raise.c:26
0xffff8bd97dff __GI_abort
	./stdlib/abort.c:79
0xffff8bda5cbf __assert_fail_base
	./assert/assert.c:96
0xffff8bda5d2f __assert_fail
	./assert/assert.c:105
0xaaaab7d41fd7 find_txo_chain
	plugins/bkpr/recorder.c:178
0xaaaab7d421fb account_onchain_closeheight
	plugins/bkpr/recorder.c:291
0xaaaab7d37687 do_account_close_checks
	plugins/bkpr/bookkeeper.c:884
0xaaaab7d38203 parse_and_log_chain_move
	plugins/bkpr/bookkeeper.c:1261
0xaaaab7d3871f listchainmoves_done
	plugins/bkpr/bookkeeper.c:171
0xaaaab7d4811f handle_rpc_reply
	plugins/libplugin.c:1073
0xaaaab7d4827b rpc_conn_read_response
	plugins/libplugin.c:1377
0xaaaab7d889a7 next_plan
	ccan/ccan/io/io.c:60
0xaaaab7d88f7b do_plan
	ccan/ccan/io/io.c:422
0xaaaab7d89053 io_ready
	ccan/ccan/io/io.c:439
```

Fixes: https://github.com/ElementsProject/lightning/issues/8854
Changelog-Fixed: Plugins: `bkpr_listbalances` no longer crashes if we lost our db, then did emergencyrecover and closed a channel.
Reported-by: https://github.com/enaples
2026-02-17 12:10:26 +10:30
Rusty Russell
2e8261ef9e pytest: test for bkpr_listbalances after emergencyrecover.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-17 12:10:26 +10:30
Rusty Russell
b150309854 pytest: test for crash when we have dying channels and compact the gossip_store.
Before I fixed the handling of dying channels:

```
lightning_gossipd: gossip_store: can't read hdr offset 2362/2110: Success (version v25.12-279-gb38abe6-modded)
0x6537c19ecf3a send_backtrace
        common/daemon.c:38
0x6537c19f1a1d status_failed
        common/status.c:207
0x6537c19e557a gossip_store_get_with_hdr
        gossipd/gossip_store.c:527
0x6537c19e5613 check_msg_type
        gossipd/gossip_store.c:559
0x6537c19e5a36 gossip_store_set_flag
        gossipd/gossip_store.c:577
0x6537c19e5c82 gossip_store_del
        gossipd/gossip_store.c:629
0x6537c19e8ddd gossmap_manage_new_block
        gossipd/gossmap_manage.c:1362
0x6537c19e390e new_blockheight
        gossipd/gossipd.c:430
0x6537c19e3c37 recv_req
        gossipd/gossipd.c:532
0x6537c19ed22a handle_read
        common/daemon_conn.c:35
0x6537c19fbe71 next_plan
        ccan/ccan/io/io.c:60
0x6537c19fc174 do_plan
        ccan/ccan/io/io.c:422
0x6537c19fc231 io_ready
        ccan/ccan/io/io.c:439
0x6537c19fd647 io_loop
        ccan/ccan/io/poll.c:470
0x6537c19e463d main
        gossipd/gossipd.c:609
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
acb8a8cc15 gossipd: dev-compact-gossip-store to manually invoke compaction.
And tests!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
88f3f97b7c gossipd: reset dying_channels array after compact.
Reported-by: @daywalker90
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
912b40aeff gossipd: compact when gossip store is 80% deleted records.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: `gossipd` now uses a `lightning_gossip_compactd` helper to compact the gossip_store on demand, keeping it under about 210MB.
2026-02-16 17:23:33 +10:30
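The trigger condition can be sketched as a simple ratio check (a sketch, assuming the 80% figure from the commit subject; the helper name is invented):

```python
def should_compact(live, deleted, threshold=0.8):
    """Compact once deleted records make up >= threshold of the store."""
    total = live + deleted
    if total == 0:
        return False
    return deleted / total >= threshold
```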
Rusty Russell
15696d97bd gossipd: code to invoke compactd and reopen store.
This isn't called anywhere yet.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
f56f8adcdf gossipd: lightningd/lightning_gossip_compactd
A new subprocess run by gossipd to create a compacted gossip store.

It's pretty simple: a linear compaction of the file.  Once it has
compacted the amount it was told to, gossipd waits for it to complete the last bit.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
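A toy model of linear compaction (record format invented for illustration): walk the old store in order, drop deleted records, and remember where each surviving record lands so old offsets can be translated.

```python
def compact(records):
    """records: list of (old_offset, deleted, payload) tuples.
    Returns (surviving payloads, map of old offset -> new offset)."""
    new, off_map, new_off = [], {}, 0
    for off, deleted, payload in records:
        if deleted:
            continue
        off_map[off] = new_off
        new.append(payload)
        new_off += len(payload)
    return new, off_map
```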
Rusty Russell
a966dd71ad common: expose gossip_store "header and type" single-read struct.
gossip_store.c uses this to avoid two reads, and we want to use it
elsewhere too.

Also fix old comment on gossip_store_readhdr().

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
1fb4da075f gossipd: put the last_writes array inside struct gossip_store.
This is the file responsible for all the writing, so it should be
responsible for the rewriting if necessary (rather than
gossmap_manage).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
facf24b6ee devtools/gossmap-compress: create latest gossip_store version
This saves gossipd from converting it:

```
lightningd-1 2026-02-02T00:50:49.505Z DEBUG   gossipd: Time to convert version 14 store: 890 msec
```

Reducing node startup time from 1.4 seconds to 0.5 seconds.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
445bcd040a gossipd: don't compact on startup.
We now only need to walk it if we're doing an upgrade.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: `gossipd` no longer compacts gossip_store on startup (improving start times significantly).
2026-02-16 17:23:33 +10:30
Rusty Russell
dfc4ce21de gossipd: don't gather dying channels during compaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
900fd08455 gossipd: use gossmap to load the dying entries.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
7d70e8baf2 gossmap: keep stats on live/deleted records.
This way gossmap_manage can decide when to compact.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
1ad8ca9603 gossmap: add callback for gossipd to see dying messages.
gossmap doesn't care, so gossipd currently has to iterate through the
store to find them at startup.  Create a callback for gossipd to use
instead.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
e8fd235d4e common: move gossip_store_wire.csv into common/ from gossipd/
It's used by common/gossip_store.c, which is used by many things other than
gossipd.  This file belongs in common.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
5dcf39867c gossipd: write uuid record on startup.
This is the first record, and ignored by everything else.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
25131d2ea2 common/gossmap: use the UUID record on reopen.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
b1055aa0ac gossip_store: add UUID entry at front of the store.
We also put this in the store_ended message, too: so you can
tell if the equivalent_offset there really refers to this new
entry (or if two or more rewrites have happened).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
d4c62f8c4c tools: delete gossip_store if needed for downgrade even if db hasn't changed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
5e1bbb08c7 gossmap: reduce load times by 20%
It's actually quite quick to load a cache-hot 308,874,377 byte
gossip_store (normal -Og build), but perf does show time spent
in siphash(), which is a bit overkill here, so drop that:

Before:
	Time to load: 66718983-78037766(7.00553e+07+/-2.8e+06)nsec
	
After: 
	Time to load: 54510433-57991725(5.61457e+07+/-1e+06)nsec

We could save maybe 10% more by disabling checksums, but having
that assurance is nice.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
120c9d8ce4 devtools/gossmap-compress: generate better scids.
Our poor scid generation clashes badly with simplified hashing (the
next patch), pushing l1's startup time with a generated map from 4
seconds to 14 seconds.  Under CI it actually timed out several tests.

Fixing our fake scids to be more "random" reduces it to 1.5 seconds.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
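The "more random" fake scids could look something like this (a hypothetical sketch, not the devtool's actual generator): pack pseudo-random block height, tx index and output number into the usual BxTxO short_channel_id form so they spread evenly across hash buckets.

```python
import random

def fake_scid(rng):
    """Generate a plausible-looking, well-spread short_channel_id."""
    block = rng.randrange(1, 800_000)   # block height
    txidx = rng.randrange(0, 10_000)    # tx index within block
    outnum = rng.randrange(0, 2)        # output number
    return f"{block}x{txidx}x{outnum}"
```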
Rusty Russell
ae957161e6 pytest: fix bogus test_gossip_store_compact_noappend test.
It didn't do anything, since the dev_compact_gossip_store command was
removed.  When we make it do something, it crashes since old_len is 0:

```
gossipd: gossip_store_compact: bad version
gossipd: FATAL SIGNAL 6 (version v25.12rc3-1-g9e6c715-modded)
...
gossipd: backtrace: ./stdlib/abort.c:79 (__GI_abort) 0x7119bd8288fe
gossipd: backtrace: ./assert/assert.c:96 (__assert_fail_base) 0x7119bd82881a
gossipd: backtrace: ./assert/assert.c:105 (__assert_fail) 0x7119bd83b516
gossipd: backtrace: gossipd/gossip_store.c:52 (append_msg) 0x56294de240eb
gossipd: backtrace: gossipd/gossip_store.c:358 (gossip_store_compact) 0x56294
gossipd: backtrace: gossipd/gossip_store.c:395 (gossip_store_new) 0x56294de24
gossipd: backtrace: gossipd/gossmap_manage.c:455 (setup_gossmap) 0x56294de255
gossipd: backtrace: gossipd/gossmap_manage.c:488 (gossmap_manage_new) 0x56294
gossipd: backtrace: gossipd/gossipd.c:400 (gossip_init) 0x56294de22de9
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
e5e5998cd8 devtools: enhance dump-gossipstore to show some details of messages.
Not a complete decode, just the highlights (what channel was announced
or updated, what node was announced).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
09781bd381 lightningd: don't assume peer exists in peer_connected_serialize.
It's always true for the first hook invocation, but if there is more
than one plugin, the peer could vanish between calls!  In the default configuration, this can't happen.

This bug has been around since v23.02.

Note: we always tell all the plugins about the peer, even if it's
already gone.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: lightningd: possible crash when peers disconnected if there was more than one plugin servicing the `peer_connected` hook.
Reported-by: https://github.com/santyr
Fixes: https://github.com/ElementsProject/lightning/issues/8858
2026-02-12 09:08:10 +10:30
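A hedged Python sketch of the defensive pattern the fix implies (all names invented; the real code is C in lightningd/peer_control.c): look the peer up afresh for each hook invocation rather than assuming the first lookup is still valid, and still tell the plugin about the peer even if it has gone.

```python
def serialize_for_hook(peers, peer_id):
    """Build the hook payload, tolerating a peer that vanished
    between plugin invocations."""
    peer = peers.get(peer_id)       # may have disconnected already
    if peer is None:
        return {"id": peer_id}      # still notify the plugin
    return {"id": peer_id, "addr": peer["addr"]}
```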
Rusty Russell
eaf6fabf04 pytest: reproduce crash when node disconnects between hooks:
```
lightningd-2 2026-02-09T00:41:35.196Z TRACE   lightningd: Plugin peer_connected_logger_a.py returned from peer_connected hook call
lightningd-2 2026-02-09T00:41:35.196Z TRACE   lightningd: Calling peer_connected hook of plugin peer_connected_logger_b.py
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: FATAL SIGNAL 11 (version v25.12-257-g2a5fbd1-modded)
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: common/daemon.c:46 (send_backtrace) 0x5b2abd7f29bd
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: common/daemon.c:83 (crashdump) 0x5b2abd7f2a0c
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ./signal/../sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c:0 ((null)) 0x75950d84532f
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/peer_control.c:1333 (peer_connected_serialize) 0x5b2abd79c964
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin_hook.c:359 (plugin_hook_call_next) 0x5b2abd7ae14a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin_hook.c:299 (plugin_hook_callback) 0x5b2abd7ae38f
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin.c:701 (plugin_response_handle) 0x5b2abd7a7e28
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin.c:790 (plugin_read_json) 0x5b2abd7ace9c
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:60 (next_plan) 0x5b2abd81dada
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:422 (do_plan) 0x5b2abd81def6
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:439 (io_ready) 0x5b2abd81dfb3
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/poll.c:470 (io_loop) 0x5b2abd81f0db
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/io_loop_with_timers.c:22 (io_loop_with_timers) 0x5b2abd77c13b
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/lightningd.c:1495 (main) 0x5b2abd781c6a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ../sysdeps/nptl/libc_start_call_main.h:58 (__libc_start_call_main) 0x75950d82a1c9
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ../csu/libc-start.c:360 (__libc_start_main_impl) 0x75950d82a28a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: (null):0 ((null)) 0x5b2abd752964
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: (null):0 ((null)) 0xffffffffffffffff
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-12 09:08:10 +10:30
Rusty Russell
1b1274df7e CI: reduce parallelism for pytest.
In November 2022 we seemed to increase parallelism from 2 and 3 to 10!
That is a huge load for these CI boxes, and does explain some of our
flakes.

We only run in parallel because some tests sleep, but it's diminishing
returns (GH runners have 4 VCPUs, 16GB RAM).

This reduces it so:
- Normal runs are -n 4
- Valgrind runs are -n 2
- Sanitizer runs are -n 3

If I use my beefy build box (64GB RAM) but reduce it to 4 CPUs:

Time for pytest -n 5:
Time for pytest -n 4:
Time for pytest -n 3:

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-09 19:46:22 +10:30
Rusty Russell
939aec3b61 pytest: make hold_timeout.py test plugin release on a prompt, not timeout.
Instead of guessing what the timeout should be, use a file trigger.  This
is more reliable, and should reduce a flake in test_sql under valgrind.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-09 14:48:15 +10:30
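The file-trigger idea could be sketched like this (helper name and polling scheme invented, not the actual hold_timeout.py plugin): hold until a trigger file appears, instead of sleeping for a guessed timeout.

```python
import os
import time

def held_until_trigger(trigger_path, poll=0.01, max_wait=5.0):
    """Block until trigger_path exists; False if max_wait elapses first."""
    deadline = time.time() + max_wait
    while not os.path.exists(trigger_path):
        if time.time() > deadline:
            return False
        time.sleep(poll)
    return True
```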
Rusty Russell
1ad72fdafd pytest: fix test_xpay flake.
```
>       assert len(layers['layers']) == 1
E       AssertionError: assert 2 == 1
E        +  where 2 = len([{'layer': 'xpay', 'persistent': True, 'disabled_nodes': [], 'created_channels': [], 'channel_updates': [], 'constraints': [{'short_channel_id_dir': '45210x2134x44171/0', 'timestamp': 1770341134, 'minimum_msat': 289153519}, {'short_channel_id_dir': '1895x7x1895/1', 'timestamp': 1770341134, 'minimum_msat': 289007015}, {'short_channel_id_dir': '1906x1039x1906/1', 'timestamp': 1770341134, 'minimum_msat': 289008304}, {'short_channel_id_dir': '10070x60x10063/1', 'timestamp': 1770341134, 'minimum_msat': 289005726}, {'short_channel_id_dir': '18772x60x18743/0', 'timestamp': 1770341134, 'minimum_msat': 289005726}, {'short_channel_id_dir': '18623x208x18594/0', 'timestamp': 1770341134, 'minimum_msat': 289004859}, {'short_channel_id_dir': '33935x826x33727/1', 'timestamp': 1770341134, 'maximum_msat': 491501488}], 'biases': [], 'node_biases': []}, {'layer': 'xpay-94', 'persistent': False, 'disabled_nodes': [], 'created_channels': [], 'channel_updates': [], 'constraints': [], 'biases': [], 'node_biases': []}])
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-09 14:48:15 +10:30
Rusty Russell
2a5fbd1730 pytest: speed up test_sql significantly.
It uses the hold_invoice plugin to ensure that an HTLC is in flight, but
it tells it to hold the HTLC for "TIMEOUT * 2" which is a big number under CI.

Reduce it to sqrt(TIMEOUT + 1) * 2, which works for local testing (I run
with TIMEOUT=10) and still should be enough for CI (TIMEOUT=180).

Christian reported that the test took 763.00s (!!) under CI.

On my build machine (TIMEOUT=90):

Before:
	383.00s

After:
	64.38s

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-07 07:29:28 +10:30
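As arithmetic, the change above works out like this: the old hold was TIMEOUT * 2, the new one sqrt(TIMEOUT + 1) * 2, so under CI (TIMEOUT=180) the hold drops from 360s to about 27s, and locally (TIMEOUT=10) from 20s to about 6.6s.

```python
import math

def hold_seconds(timeout):
    """New hold duration: sqrt(TIMEOUT + 1) * 2 (was TIMEOUT * 2)."""
    return math.sqrt(timeout + 1) * 2
```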
Rusty Russell
96adac48ab pytest: update "real gossip map" tests to a recent snapshot.
We delete the test_xpay_maxfee test which required the specific
topology.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-05 08:36:28 +10:30
Rusty Russell
95b63e4738 Makefile: add "canned-gossmap" target
Takes /tmp/gossip_store and creates a canned gossmap for testing.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-05 08:36:28 +10:30
Rusty Russell
c90c21301f pytest: speed up channeld_fakenet tests.
Reduce the random delay from 0.1-1 seconds to 0.01-0.1 seconds.

Running tests/test_xpay.py::test_xpay_fake_channeld[False]

Before:
	348.41s
After:
	76.76s

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-05 08:36:28 +10:30
Rusty Russell
23202b7e85 pytest: fix channeld_fakenet divide by zero bug.
1. If max was 0, we crashed with SIGFPE due to % 0.
2. If min was non-zero, logic was incorrect (but all callers had min == 0).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-05 08:36:28 +10:30
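The corrected range logic might look like this (a Python sketch of the fix; the original C-style `% max` SIGFPEs when max is 0 and mishandles a non-zero min):

```python
import random

def rand_between(rng, minv, maxv):
    """Uniform integer in [minv, maxv], safe when minv == maxv."""
    assert minv <= maxv
    if minv == maxv:
        return minv            # avoids the modulo-by-zero crash
    return minv + rng.randrange(maxv - minv + 1)
```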
Rusty Russell
13545124eb pytest: fix flake in test_channel_lease_unilat_closes
```
2026-01-30T05:55:13.6654636Z         # Note that l3 has the whole lease delay (minus blocks already mined)
2026-01-30T05:55:13.6655396Z         _, _, l3blocks = l3.wait_for_onchaind_tx('OUR_DELAYED_RETURN_TO_WALLET',
2026-01-30T05:55:13.6656086Z                                                  'OUR_UNILATERAL/DELAYED_OUTPUT_TO_US')
2026-01-30T05:55:13.6656618Z >       assert l3blocks == 4032 - 6 - 2 - 1
2026-01-30T05:55:13.6657033Z E       assert 4025 == (((4032 - 6) - 2) - 1)
```

Turns out that 4342043382 (tests: de-flake test that was failing on
cltv expiry) added a line to mine two more blocks, but the hardcoded
110 was not changed to 112, so we weren't actually waiting correctly.

Remove hardcoded numbers in favor of calculation, and do the same in
test_channel_lease_post_expiry (which was correct, for now).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
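The calculation replacing the hardcoded numbers amounts to simple block accounting (a sketch; helper name invented, figures taken from the assertion in the traceback):

```python
def remaining_lease_blocks(lease_blocks, already_mined):
    """Blocks left on the lease after subtracting blocks already mined."""
    return lease_blocks - sum(already_mined)
```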
Rusty Russell
63497b3180 pytest: fix flake in test_even_sendcustommsg
We need to make sure the message is fully processed before
removing the plugin.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
Rusty Russell
db2a560b9c lightningd: fix spurious memleak in peer_connected_serialize.
Of course stream will be freed soon, too, but if run at the exact
right time, memleak will get upset: use tmpctx.

```
lightningd: MEMLEAK: 0x5616f720bc48
lightningd:   label=common/wireaddr.c:255:char[]
lightningd:   alloc:
lightningd:     /home/runner/work/lightning/lightning/ccan/ccan/tal/tal.c:488 (tal_alloc_)
lightningd:     /home/runner/work/lightning/lightning/ccan/ccan/tal/tal.c:517 (tal_alloc_arr_)
lightningd:     /home/runner/work/lightning/lightning/ccan/ccan/tal/str/str.c:81 (tal_vfmt_)
lightningd:     /home/runner/work/lightning/lightning/ccan/ccan/tal/str/str.c:37 (tal_fmt_)
lightningd:     /home/runner/work/lightning/lightning/common/wireaddr.c:255 (fmt_wireaddr_without_port)
lightningd:     /home/runner/work/lightning/lightning/common/wireaddr.c:276 (fmt_wireaddr)
lightningd:     /home/runner/work/lightning/lightning/common/wireaddr.c:232 (fmt_wireaddr_internal)
lightningd:     /home/runner/work/lightning/lightning/lightningd/peer_control.c:1327 (peer_connected_serialize)
lightningd:     /home/runner/work/lightning/lightning/lightningd/plugin_hook.c:359 (plugin_hook_call_next)
lightningd:     /home/runner/work/lightning/lightning/lightningd/plugin_hook.c:395 (plugin_hook_call_)
lightningd:     /home/runner/work/lightning/lightning/lightningd/peer_control.c:1753 (plugin_hook_call_peer_connected)
lightningd:     /home/runner/work/lightning/lightning/lightningd/peer_control.c:1885 (handle_peer_connected)
lightningd:     /home/runner/work/lightning/lightning/lightningd/connect_control.c:563 (connectd_msg)
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
Rusty Russell
9e8e85de99 pytest: allow pushed after onchain_fee in test_bookkeeping_missed_chans_pushed
It can happen, and it's perfectly reasonable.  If this happens in other places, we might need to allow
arbitrary reordering?

```
2026-01-29T05:55:58.5474967Z         exp_events = [{'tag': 'channel_open', 'credit_msat': open_amt * 1000, 'debit_msat': 0},
2026-01-29T05:55:58.5475765Z                       {'tag': 'pushed', 'credit_msat': 0, 'debit_msat': push_amt},
2026-01-29T05:55:58.5476454Z                       {'tag': 'onchain_fee', 'credit_msat': 4927000, 'debit_msat': 0},
2026-01-29T05:55:58.5477168Z                       {'tag': 'invoice', 'credit_msat': 0, 'debit_msat': invoice_msat}]
2026-01-29T05:55:58.5477797Z >       check_events(l1, channel_id, exp_events)
2026-01-29T05:55:58.5478120Z 
2026-01-29T05:55:58.5478282Z tests/test_bookkeeper.py:402: 
2026-01-29T05:55:58.5478777Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
2026-01-29T05:55:58.5479162Z 
2026-01-29T05:55:58.5479396Z node = <fixtures.LightningNode object at 0x7fa3660a6140>
2026-01-29T05:55:58.5480158Z channel_id = 'a4e913b2d143efc3d90cfa66a56aeed3eb9e1533b350c8e84124bdec37bcf74a'
2026-01-29T05:55:58.5481929Z exp_events = [{'credit_msat': 10000000000, 'debit_msat': 0, 'tag': 'channel_open'}, {'credit_msat': 0, 'debit_msat': 1000000000, 'tag': 'pushed'}, {'credit_msat': 4927000, 'debit_msat': 0, 'tag': 'onchain_fee'}, {'credit_msat': 0, 'debit_msat': 11000000, 'tag': 'invoice'}]
2026-01-29T05:55:58.5483442Z 
2026-01-29T05:55:58.5483671Z     def check_events(node, channel_id, exp_events):
2026-01-29T05:55:58.5484551Z         chan_events = [ev for ev in node.rpc.bkpr_listaccountevents()['events'] if ev['account'] == channel_id]
2026-01-29T05:55:58.5485684Z         stripped = [{k: d[k] for k in ('tag', 'credit_msat', 'debit_msat') if k in d} for d in chan_events]
2026-01-29T05:55:58.5486455Z >       assert stripped == exp_events
2026-01-29T05:55:58.5489277Z E       AssertionError: assert [{'tag': 'channel_open', 'credit_msat': 10000000000, 'debit_msat': 0}, {'tag': 'onchain_fee', 'credit_msat': 4927000, 'debit_msat': 0}, {'tag': 'pushed', 'credit_msat': 0, 'debit_msat': 1000000000}, {'tag': 'invoice', 'credit_msat': 0, 'debit_msat': 11000000}] == [{'tag': 'channel_open', 'credit_msat': 10000000000, 'debit_msat': 0}, {'tag': 'pushed', 'credit_msat': 0, 'debit_msat': 1000000000}, {'tag': 'onchain_fee', 'credit_msat': 4927000, 'debit_msat': 0}, {'tag': 'invoice', 'credit_msat': 0, 'debit_msat': 11000000}]
2026-01-29T05:55:58.5492021Z E         
2026-01-29T05:55:58.5492767Z E         At index 1 diff: {'tag': 'onchain_fee', 'credit_msat': 4927000, 'debit_msat': 0} != {'tag': 'pushed', 'credit_msat': 0, 'debit_msat': 1000000000}
2026-01-29T05:55:58.5493812Z E         
2026-01-29T05:55:58.5494078Z E         Full diff:
2026-01-29T05:55:58.5494373Z E           [
2026-01-29T05:55:58.5494863Z E               {
2026-01-29T05:55:58.5495166Z E                   'credit_msat': 10000000000,
2026-01-29T05:55:58.5495565Z E                   'debit_msat': 0,
2026-01-29T05:55:58.5495946Z E                   'tag': 'channel_open',
2026-01-29T05:55:58.5496330Z E         -     },
2026-01-29T05:55:58.5496613Z E         -     {
2026-01-29T05:55:58.5496906Z E         -         'credit_msat': 0,
2026-01-29T05:55:58.5497285Z E         -         'debit_msat': 1000000000,
2026-01-29T05:55:58.5497900Z E         -         'tag': 'pushed',
2026-01-29T05:55:58.5498264Z E               },
2026-01-29T05:55:58.5498531Z E               {
2026-01-29T05:55:58.5498818Z E                   'credit_msat': 4927000,
2026-01-29T05:55:58.5499200Z E                   'debit_msat': 0,
2026-01-29T05:55:58.5499563Z E                   'tag': 'onchain_fee',
2026-01-29T05:55:58.5499925Z E               },
2026-01-29T05:55:58.5500190Z E               {
2026-01-29T05:55:58.5500477Z E                   'credit_msat': 0,
2026-01-29T05:55:58.5500863Z E         +         'debit_msat': 1000000000,
2026-01-29T05:55:58.5501255Z E         +         'tag': 'pushed',
2026-01-29T05:55:58.5501592Z E         +     },
2026-01-29T05:55:58.5501853Z E         +     {
2026-01-29T05:55:58.5502141Z E         +         'credit_msat': 0,
2026-01-29T05:55:58.5502511Z E                   'debit_msat': 11000000,
2026-01-29T05:55:58.5502889Z E                   'tag': 'invoice',
2026-01-29T05:55:58.5503424Z E               },
2026-01-29T05:55:58.5503698Z E           ]
2026-01-29T05:55:58.5503861Z 
2026-01-29T05:55:58.5504027Z tests/test_bookkeeper.py:29: AssertionError
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
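The "arbitrary reordering" the commit muses about could be sketched as a multiset comparison (a hypothetical helper, not the test's actual check_events): compare the bookkeeper events ignoring order, since 'pushed' may legitimately land after 'onchain_fee'.

```python
def events_match_any_order(got, expected):
    """True if both event lists contain the same dicts, in any order."""
    key = lambda e: sorted(e.items())
    return sorted(map(key, got)) == sorted(map(key, expected))
```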
Rusty Russell
18a416be0d connectd: unify IO logging calls.
Normally, connectd forwards messages and then the subds do logging,
but it logs manually for msgs which are handled internally.

Clarify this logic in one place for all callers.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
Rusty Russell
e36564f6dc pytest: make test_connect_ratelimit more robust
We were sending SIGSTOP to the lightningds, but it seems that doesn't
always mean connectd actually stops:

```
lightningd-1 2026-01-27T04:49:16.979Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: Initializing important peer with 1 addresses
lightningd-1 2026-01-27T04:49:16.979Z DEBUG   connectd: Got 10 bad cupdates, ignoring them (expected on mainnet)
lightningd-1 2026-01-27T04:49:16.979Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: Connected out, starting crypto
lightningd-1 2026-01-27T04:49:16.980Z DEBUG   038194b5f32bdf0aa59812c86c4ef7ad2f294104fa027d1ace9b469bb6f88cf37b-hsmd: Got WIRE_HSMD_ECDH_REQ
lightningd-1 2026-01-27T04:49:16.981Z DEBUG   hsmd: Client: Received message 1 from client
lightningd-1 2026-01-27T04:49:16.985Z TRACE   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-gossipd: handle_recv_gossip: WIRE_CHANNEL_ANNOUNCEMENT
lightningd-1 2026-01-27T04:49:16.985Z TRACE   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-gossipd: handle_recv_gossip: WIRE_CHANNEL_UPDATE
lightningd-1 2026-01-27T04:49:16.985Z TRACE   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-gossipd: handle_recv_gossip: WIRE_CHANNEL_UPDATE
lightningd-1 2026-01-27T04:49:16.985Z TRACE   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-gossipd: handle_recv_gossip: WIRE_NODE_ANNOUNCEMENT
lightningd-1 2026-01-27T04:49:16.985Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: Connect OUT
lightningd-1 2026-01-27T04:49:16.986Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: peer_out WIRE_INIT
lightningd-1 2026-01-27T04:49:16.986Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: peer_in WIRE_INIT
lightningd-1 2026-01-27T04:49:16.986Z TRACE   lightningd: Calling peer_connected hook of plugin chanbackup
lightningd-1 2026-01-27T04:49:16.986Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: Handed peer, entering loop
lightningd-1 2026-01-27T04:49:16.986Z DEBUG   03cecbfdc68544cc596223b68ce0710c9e5d2c9cb317ee07822d95079acc703d31-connectd: Initializing important peer with 1 addresses
lightningd-1 2026-01-27T04:49:16.986Z DEBUG   033845802d25b4e074ccfd7cd8b339a41dc75bf9978a034800444b51d42b07799a-connectd: Initializing important peer with 1 addresses
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   033845802d25b4e074ccfd7cd8b339a41dc75bf9978a034800444b51d42b07799a-connectd: Too many connections, waiting...
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   02186115cb7e93e2cb4d9d9fe7a9cf5ff7a5784bfdda4f164ff041655e4bcd4fd0-connectd: Initializing important peer with 1 addresses
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   02186115cb7e93e2cb4d9d9fe7a9cf5ff7a5784bfdda4f164ff041655e4bcd4fd0-connectd: Too many connections, waiting...
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   02287bfac8b99b35477ebe9334eede1e32b189e24644eb701c079614712331cec0-connectd: Initializing important peer with 1 addresses
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   02287bfac8b99b35477ebe9334eede1e32b189e24644eb701c079614712331cec0-connectd: Too many connections, waiting...
lightningd-1 2026-01-27T04:49:16.987Z DEBUG   03cecbfdc68544cc596223b68ce0710c9e5d2c9cb317ee07822d95079acc703d31-connectd: Connected out, starting crypto
lightningd-1 2026-01-27T04:49:16.989Z DEBUG   038194b5f32bdf0aa59812c86c4ef7ad2f294104fa027d1ace9b469bb6f88cf37b-hsmd: Got WIRE_HSMD_ECDH_REQ
lightningd-1 2026-01-27T04:49:16.989Z DEBUG   hsmd: Client: Received message 1 from client
lightningd-1 2026-01-27T04:49:16.990Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: peer_in WIRE_GOSSIP_TIMESTAMP_FILTER
lightningd-1 2026-01-27T04:49:16.991Z DEBUG   03cecbfdc68544cc596223b68ce0710c9e5d2c9cb317ee07822d95079acc703d31-connectd: Connect OUT
lightningd-1 2026-01-27T04:49:16.991Z DEBUG   03cecbfdc68544cc596223b68ce0710c9e5d2c9cb317ee07822d95079acc703d31-connectd: peer_out WIRE_INIT
lightningd-1 2026-01-27T04:49:16.991Z DEBUG   0258f3ff3e0853ccc09f6fe89823056d7c0c55c95fab97674df5e1ad97a72f6265-connectd: peer_out WIRE_PEER_STORAGE
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30
Rusty Russell
cf3583ff83 pytest: mark renepay's self-pay test flaky
Not sure this is worth getting to the bottom of, given we're moving to xpay?

```
2026-01-20T05:25:55.4375717Z lightningd-1 2026-01-20T05:22:41.062Z DEBUG   plugin-cln-renepay: sendpay_failure notification: {"sendpay_failure":{"code":202,"message":"Malformed error reply","data":{"created_index":2,"id":2,"payment_hash":"8447592dc3786c3181746bd7bf17fe9e84d3de3dfff847f9f2b8c1a65c2d3ff6","groupid":1,"partid":1,"destination":"038194b5f32bdf0aa59812c86c4ef7ad2f294104fa027d1ace9b469bb6f88cf37b","amount_msat":10000,"amount_sent_msat":10000,"created_at":1768886560,"status":"pending","bolt11":"lnbcrt100n1p5k7yfqsp5g072rrl6jn7wu8nlxk279zpkacz34hr22n089j4elguccfxmg0gqpp5s3r4jtwr0pkrrqt5d0tm79l7n6zd8h3alluy070jhrq6vhpd8lmqdqgw3jhxapjxqyjw5qcqp99qxpqysgq6dd74sj0vpyjy9he8pep73tt8pljhv6mc74y28rr8995yjyahgh9kc2jqwqz62h6u6jpqz0u9x6gcahdw3pe2x8r5eyppp53mlpru8sp4p4nf6","onionreply":"c5a02ce5372635959d87818f723767efe3137bbc40568fb04e1f2eedee0a6149000e400f00000000000027100000006c00f20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}}}
2026-01-20T05:25:55.4386158Z lightningd-1 2026-01-20T05:22:41.067Z **BROKEN** plugin-cln-renepay: Unable to parse sendpay_failure: {"code":202,"message":"Malformed error reply","data":{"created_index":2,"id":2,"payment_hash":"8447592dc3786c3181746bd7bf17fe9e84d3de3dfff847f9f2b8c1a65c2d3ff6","groupid":1,"partid":1,"destination":"038194b5f32bdf0aa59812c86c4ef7ad2f294104fa027d1ace9b469bb6f88cf37b","amount_msat":10000,"amount_sent_msat":10000,"created_at":1768886560,"status":"pending","bolt11":"lnbcrt100n1p5k7yfqsp5g072rrl6jn7wu8nlxk279zpkacz34hr22n089j4elguccfxmg0gqpp5s3r4jtwr0pkrrqt5d0tm79l7n6zd8h3alluy070jhrq6vhpd8lmqdqgw3jhxapjxqyjw5qcqp99qxpqysgq6dd74sj0vpyjy9he8pep73tt8pljhv6mc74y28rr8995yjyahgh9kc2jqwqz62h6u6jpqz0u9x6gcahdw3pe2x8r5eyppp53mlpru8sp4p4nf6","onionreply":"c5a02ce5372635959d87818f723767efe3137bbc40568fb04e1f2eedee0a6149000e400f00000000000027100000006c00f20000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}}
```

```
2026-01-20T05:25:55.4465729Z         with pytest.raises(RpcError, match=r"Unknown invoice") as excinfo:
2026-01-20T05:25:55.4466111Z >           l1.rpc.call("renepay", {"invstring": inv2})
2026-01-20T05:25:55.4466310Z 
2026-01-20T05:25:55.4466402Z tests/test_renepay.py:423: 
...
2026-01-20T05:25:55.4489732Z         elif "error" in resp:
2026-01-20T05:25:55.4490005Z >           raise RpcError(method, payload, resp['error'])
2026-01-20T05:25:55.4492135Z E           pyln.client.lightning.RpcError: RPC call failed: method: renepay, payload: {'invstring': 'lnbcrt100n1p5k7yfqsp5g072rrl6jn7wu8nlxk279zpkacz34hr22n089j4elguccfxmg0gqpp5s3r4jtwr0pkrrqt5d0tm79l7n6zd8h3alluy070jhrq6vhpd8lmqdqgw3jhxapjxqyjw5qcqp99qxpqysgq6dd74sj0vpyjy9he8pep73tt8pljhv6mc74y28rr8995yjyahgh9kc2jqwqz62h6u6jpqz0u9x6gcahdw3pe2x8r5eyppp53mlpru8sp4p4nf6'}, error: {'code': -4, 'message': 'Plugin terminated before replying to RPC call.'}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-03 16:12:04 +10:30